You are implementing a big data solution that runs on two Azure Virtual Machines (VMs). A VM named model1 is used to train a deep learning algorithm that uses GPU processing. A VM named database1 runs a NoSQL database that requires high disk throughput and IO.
You need to implement the most appropriate VM sizes for these VMs.
Choose all that apply.
Implement a high performance compute VM for model1 and a Dsv3 size VM for database1: This solution does not meet the goal. You can use high performance compute (HPC) VMs for workloads that might use high-throughput network interfaces like remote direct memory access (RDMA), such as genomics, computational chemistry, and financial risk modeling. You can use a Dsv3 size VM for general-purpose workloads with a good CPU-to-memory ratio, like small or medium databases and web servers.
Implement a GPU optimized VM for model1 and an Lsv2 size VM for database1: This solution meets the goal. You can use a GPU optimized VM for model1, which provides access to GPU hardware to train the deep learning algorithm. You can use an Lsv2 size VM for database1, which is a storage optimized VM with high disk throughput and IO. This is ideal for big data solutions, NoSQL databases, data warehousing, and large transactional databases.
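As a rough sketch, the correct solution could be provisioned with the Azure CLI as follows. The resource group name, image, and the specific sizes (Standard_NC6s_v3 from the GPU optimized NCsv3 series, Standard_L8s_v2 from the storage optimized Lsv2 series) are illustrative assumptions; any size from the matching family would satisfy the scenario.

```shell
# Assumed resource group and image; sizes chosen to illustrate the VM families.

# model1: GPU optimized (NCsv3 series) for training the deep learning algorithm
az vm create \
  --resource-group bigdata-rg \
  --name model1 \
  --image Ubuntu2204 \
  --size Standard_NC6s_v3

# database1: storage optimized (Lsv2 series) for high disk throughput and IO
az vm create \
  --resource-group bigdata-rg \
  --name database1 \
  --image Ubuntu2204 \
  --size Standard_L8s_v2
```

Running `az vm list-sizes --location <region>` shows which sizes are available in a given region, since GPU and storage optimized series are not offered everywhere.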
Implement a memory optimized VM for model1 and an Fsv2 size VM for database1: This solution does not meet the goal. You can use memory optimized VMs for workloads that require a high memory-to-CPU ratio, such as medium to large caching solutions like Redis and in-memory analytics. You can use an Fsv2 size VM for compute optimized workloads with a high CPU-to-memory ratio, such as network appliances, batch processes, and application servers.