Can I store large data volumes in CrateDB?
CrateDB thrives in use cases in the range of hundreds of terabytes and can handle petabytes in some configurations.
How do sharding and distribution work in CrateDB?
Every table in CrateDB is sharded, which means that its data is divided and distributed across the nodes of a cluster. Each row is assigned a unique internal identifier, which is used to route requests to the right shard. The shard assignment never changes, even if a shard is moved from one node to another.
To ensure consistency, a configurable number of replica shards can be defined per table, with every replica holding a fully synchronized copy of its primary shard. To make the cluster fault-tolerant, CrateDB automatically distributes replica shards across the cluster so that a complete copy of the data remains accessible even if a certain number of nodes fail.
Tables can also be partitioned. Partitions can be of different sizes, allowing a second dimension for scaling and moving data through the cluster.
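As a sketch of how this looks in practice (the table and column names here are hypothetical), shard count, replica count, and a partition column are all declared at table creation time:

```sql
-- Hypothetical time-series table: 6 shards per partition,
-- 1 replica per shard, and one partition per day.
CREATE TABLE sensor_readings (
    sensor_id TEXT,
    ts TIMESTAMP WITH TIME ZONE,
    day TIMESTAMP WITH TIME ZONE GENERATED ALWAYS AS date_trunc('day', ts),
    value DOUBLE PRECISION
) CLUSTERED BY (sensor_id) INTO 6 SHARDS
  PARTITIONED BY (day)
  WITH (number_of_replicas = 1);
```

`CLUSTERED BY (sensor_id)` routes all rows of one sensor to the same shard, while the generated `day` column gives each day its own partition that can be moved or dropped independently.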
What is the average query response time of CrateDB?
This depends on the use case, but CrateDB aims for responses in the sub-second time frame.
Primary-key lookups are very fast in CrateDB. In complex cases with a high number of nodes, the right data optimization strategy can reduce delays considerably. And to increase performance with high-cardinality data, we implemented approximate functions like HyperLogLog.
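For instance, `hyperloglog_distinct` approximates a distinct count over high-cardinality data far more cheaply than an exact `count(DISTINCT ...)` (the table and column names below are hypothetical):

```sql
-- Approximate number of distinct sensors, trading a small error
-- margin for much lower memory use and faster execution.
SELECT hyperloglog_distinct(sensor_id) AS approx_sensors
FROM sensor_readings;
```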
How does querying work in CrateDB?
Real-time databases often require all data to fit in the main memory, limiting the amount of data that can be managed. But to achieve real-time performance without data volume limitations, CrateDB implements memory-resident columnar field caches for each shard.
The caches tell the query engine whether there are rows on that shard that meet the query criteria and where the rows are located, all performed at in-memory speed.
In addition, CrateDB implements a query planner that decides which nodes are best suited to finalize the processing of aggregations and joins.
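The plan the query planner chooses for a given statement can be inspected with `EXPLAIN` (the query below is a hypothetical example):

```sql
-- Shows how the aggregation is distributed across shards and nodes.
EXPLAIN SELECT sensor_id, avg(value)
FROM sensor_readings
GROUP BY sensor_id;
```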
Is CrateDB a good choice for use cases with frequent parallel queries?
Yes, CrateDB delivers excellent performance with multiple concurrent users.
Does CrateDB offer ACID transactions?
No, CrateDB does not focus on ACID. In order to prioritize data availability, CrateDB implements an eventual consistency model, allowing it to operate with high performance even in complex multi-node operations with parallel queries.
How does eventual consistency work?
CrateDB is built for applications that must handle "fire-hoses" of data, i.e. high volumes of data that need to be ingested and queried intensively. In many of these applications, strong consistency is not a key requirement: efficiency and performance are.
This is the reason why CrateDB was built with an eventually consistent, non-blocking data insertion model. This eliminates the OLTP locking overhead traditionally found in other databases, leading to a significant improvement in performance.
To eventually ensure consistency, CrateDB includes record versioning, optimistic concurrency control, and a table-level refresh frequency setting that forces the cluster to become consistent after a certain number of milliseconds. This compromise maximizes data availability, resulting in CrateDB's strong performance under challenging workloads.
Queries that retrieve a specific row by its primary key are an exception: they are fully consistent and always return the most recent version of the row (read-after-write consistency). All other queries in CrateDB return eventually consistent data.
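A sketch of the knobs involved (the table name is hypothetical): the refresh frequency is a per-table setting, `REFRESH TABLE` forces immediate visibility, and optimistic concurrency control uses the internal `_seq_no` and `_primary_term` columns:

```sql
-- Make writes visible to queries at most 2000 ms after they happen.
ALTER TABLE sensor_readings SET (refresh_interval = 2000);

-- Force an immediate refresh instead of waiting for the interval.
REFRESH TABLE sensor_readings;

-- Optimistic concurrency control: the UPDATE only applies if the row
-- has not been modified since it was read.
UPDATE sensor_readings
SET value = 42.0
WHERE sensor_id = 'abc'
  AND _seq_no = 5 AND _primary_term = 1;
```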
If CrateDB is eventually consistent, how does it ensure atomicity and durability?
CrateDB implements WAL (write-ahead logging) to ensure transactional atomicity, data durability, and to reduce disk overhead.
WAL guarantees a few things:
- Operations on rows (internally stored in CrateDB as JSON documents by the Lucene shards) are atomic. This means that a write operation on a row either succeeds as a whole, or has no effect at all.
- Operations on rows are persisted to disk without having to issue a Lucene-Commit for every write operation. When the translog gets flushed, all data is written to the persistent index storage of Lucene, and the translog gets cleared.
- In the case of an unclean shutdown of a shard, the transactions in the translog are replayed upon startup, to ensure that all executed operations are permanent.
- The translog is also directly transferred when a newly allocated replica initializes itself from the primary shard.
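Translog behavior is configurable per table; for example (the table name is hypothetical, the setting names come from CrateDB's table settings):

```sql
-- 'REQUEST' fsyncs the translog after every write request (the default);
-- 'ASYNC' fsyncs it in the background every sync_interval instead,
-- trading a small durability window for higher write throughput.
ALTER TABLE sensor_readings
SET ("translog.durability" = 'ASYNC',
     "translog.sync_interval" = 5000);
```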
What about schemas?
Instead of implementing traditional relational schemas, which can be very painful to change, CrateDB's schemas are very flexible:
CrateDB can handle INSERT statements that include a column that wasn't originally defined on the table. It can be configured to a) enforce the original schema by throwing an error, b) automatically update the schema by adding the new column, or c) simply store the plain JSON value.
Internally, every relational record in CrateDB is stored as a JSON document. These documents can be extended seamlessly, which gives CrateDB the flexibility to handle evolving data structures. JSON objects can be stored in "object" columns, which can have arbitrary numbers of attributes, nesting levels, and even arrays of objects.
As new attributes are added to objects, CrateDB can automatically add them to the schema, and they can be accessed by "object.attribute" in SQL queries.
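These behaviors map to the table's column policy and the `OBJECT` column type; a minimal sketch (table and column names are hypothetical):

```sql
-- column_policy = 'dynamic' adds unknown top-level columns automatically;
-- 'strict' would reject them with an error instead.
CREATE TABLE events (
    id TEXT PRIMARY KEY,
    payload OBJECT(DYNAMIC)   -- new attributes extend the schema
) WITH (column_policy = 'dynamic');

INSERT INTO events (id, payload) VALUES ('1', {"temperature" = 21.5});

-- New object attributes become queryable with subscript notation.
SELECT payload['temperature'] FROM events;
```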
This schema flexibility, combined with SQL, JSON & full-text search, is one of the main reasons why CrateDB customers choose to replace multiple databases (e.g., MySQL + Elasticsearch) with CrateDB.
Does distribution work the same for reads and for writes?
For read operations, there is no difference between executing the operation on the primary shard or on any of the replicas.
However, write operations are handled differently than reads.
Write operations are synchronous over all the active replicas, with the following flow:
- For the given operation, the affected primary shard and its active replicas are looked up in the cluster state. For this step to succeed, the primary shard and a quorum of the configured replicas need to be available.
- The operation is then routed to the primary shard for execution.
- If the operation succeeds on the primary, the operation gets executed on all replicas in parallel.
- Once all replica operations finish, the result gets returned to the caller.
- If a replica shard fails to write the data, or the operation on it times out, that replica is immediately considered unavailable.
All these operations are managed internally by CrateDB. Data rebalancing, replication, and distribution are automatic.
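How many shard copies must be active before a write is accepted can be tuned per table (the table name below is hypothetical):

```sql
-- 'all' requires the primary and every replica to be active;
-- the default (1) only requires the primary, favoring availability.
ALTER TABLE sensor_readings
SET ("write.wait_for_active_shards" = 'all');
```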
Does CrateDB support full-text search?
Yes, CrateDB integrates Lucene-powered full-text search. Check out our docs for more info.
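A minimal sketch (table, index, and analyzer choices are illustrative): a full-text index is declared on a column, then queried with `MATCH`, ranking results by the Lucene relevance score `_score`:

```sql
CREATE TABLE docs (
    id TEXT PRIMARY KEY,
    body TEXT,
    INDEX body_ft USING FULLTEXT (body) WITH (analyzer = 'english')
);

-- Full-text query, best matches first.
SELECT id, _score
FROM docs
WHERE MATCH(body_ft, 'sharding replication')
ORDER BY _score DESC;
```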
Is CrateDB compatible with Telegraf/Prometheus?
Yes. CrateDB supports the PostgreSQL wire protocol, which makes it compatible with all the common data interfaces for PostgreSQL.
How can I access CrateDB?
We offer the following client interfaces:
- PostgreSQL wire protocol
- Web administrator user interface (Admin UI)
- Crash, an interactive command line interface (CLI)
- JDBC
- DBAL
- PDO
- Python client
- Plugin for Npgsql
There's nothing better than trying things for yourself! Download CrateDB or sign up for a CrateDB Cloud free trial. Experiment, and tell us what you think!
Apart from Dev.to, you can reach the Crate.io team on:
- GitHub (we'd love a star!)
- Our community page
See you around!