You are a database manager for a new product that will require millions of reads and writes against the database, with zero downtime and key-value (NoSQL) features. No manual steps should be required to ensure consistency, repair data, or synchronize writes and deletes.
Which of the following databases would you choose?
A. Cloud SQL
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud Firestore
Suggested Answer: B
Explanation:
Cloud Bigtable
Key features
High throughput at low latency
Bigtable is ideal for storing very large amounts of data in a key-value store and supports high read and write throughput at low latency for fast access to large amounts of data. Throughput scales linearly: you can increase QPS (queries per second) by adding Bigtable nodes. Bigtable is built on proven infrastructure that powers Google products used by billions, such as Search and Maps.
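To make the key-value access pattern concrete: Bigtable keeps rows sorted lexicographically by row key, so point reads and prefix scans are fast, and related rows can be grouped by structuring the key (e.g. "entity#qualifier"). The sketch below is a toy in-memory stand-in for that model, not the real `google-cloud-bigtable` client; all names are illustrative.

```python
from bisect import insort, bisect_left

class TinyKeyValueTable:
    """Toy stand-in for a Bigtable-style table: row keys are kept
    sorted lexicographically, enabling point reads and prefix scans."""

    def __init__(self):
        self._keys = []   # sorted row keys
        self._rows = {}   # row key -> row value

    def write(self, row_key: str, value: dict) -> None:
        if row_key not in self._rows:
            insort(self._keys, row_key)   # keep keys sorted on insert
        self._rows[row_key] = value

    def read(self, row_key: str):
        return self._rows.get(row_key)

    def scan_prefix(self, prefix: str):
        """Yield (key, value) for all rows whose key starts with prefix."""
        i = bisect_left(self._keys, prefix)
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            yield self._keys[i], self._rows[self._keys[i]]
            i += 1

# Structuring keys as "entity#timestamp" makes all rows for one
# entity contiguous, so a prefix scan retrieves them efficiently:
t = TinyKeyValueTable()
t.write("device42#2022-11-16T01:33", {"temp": 21})
t.write("device42#2022-11-16T01:34", {"temp": 22})
t.write("device99#2022-11-16T01:33", {"temp": 19})
print([k for k, _ in t.scan_prefix("device42#")])
# → ['device42#2022-11-16T01:33', 'device42#2022-11-16T01:34']
```

The same row-key design principle (put the most significant grouping component first) is what lets real Bigtable serve range scans at low latency.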
Cluster resizing without downtime
Scale seamlessly from thousands to millions of reads/writes per second. Bigtable throughput can be adjusted dynamically by adding or removing cluster nodes without restarting, meaning you can increase the size of a Bigtable cluster for a few hours to handle a large load, then reduce the cluster's size again, all without any downtime.
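In practice, resizing can be done with the `gcloud bigtable clusters update` command. The instance and cluster names below are placeholders; this is a sketch of the scale-up/scale-down cycle described above, not a command to run verbatim.

```shell
# Scale the cluster up ahead of a traffic spike (no downtime):
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --num-nodes=10

# ...after the load subsides, shrink the cluster again:
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --num-nodes=3
```

Because node changes take effect while the cluster keeps serving, the application never needs a maintenance window.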
Flexible, automated replication to optimize any workload
Write data once and it is automatically replicated where needed with eventual consistency, giving you control for high availability and isolation of read and write workloads. No manual steps are needed to ensure consistency, repair data, or synchronize writes and deletes. Benefit from a high-availability SLA of 99.999% for instances with multi-cluster routing across three or more regions (99.9% for single-cluster instances).
Posted: 16/11/2022 1:33 am