A customer is migrating their on-premises data analytics solution to Google Cloud. The current solution reads and writes a large amount of data to disk, and disk performance has occasionally been a bottleneck at the customer's scale of operations. The application is fault tolerant and can withstand machines going down frequently. In moving to Google Cloud, they are asking for your advice on ways to improve performance. What do you recommend?
A. Use BigQuery, which has very fast data access and analysis
B. Use Cloud Storage, which can act as central, scalable storage
C. Use Local SSDs with the VMs
D. Use Persistent Disk with the VMs
Suggested Answer: C
Explanation:
Local SSDs are physically attached to the VM host and offer very high throughput and low latency. However, when the VM stops or is deleted, the data on the Local SSD is lost. Since the workload here is fault tolerant, that is not an issue.
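As a minimal sketch of how this looks in practice, the commands below create a VM with a Local SSD attached and then format and mount it from inside the instance. The instance name, zone, and machine type are placeholder assumptions; the NVMe device path (`/dev/nvme0n1`) is typical for an `interface=NVME` Local SSD but should be verified on the actual VM.

```shell
# Create a VM with one 375 GB Local SSD attached over NVMe.
# Instance name, zone, and machine type are example values.
gcloud compute instances create analytics-worker-1 \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --local-ssd=interface=NVME

# On the VM itself: format the Local SSD and mount it.
# The device path may differ; check with `lsblk` first.
sudo mkfs.ext4 -F /dev/nvme0n1
sudo mkdir -p /mnt/disks/local-ssd
sudo mount /dev/nvme0n1 /mnt/disks/local-ssd
```

Note that because Local SSD data does not survive the VM stopping or being deleted, this setup only makes sense for scratch or intermediate data that the fault-tolerant application can recreate.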
Posted : 30/10/2022 12:40 pm