
New NoSQL benchmark: Cassandra, MongoDB, HBase, Couchbase

Today we are pleased to announce the results of a new NoSQL benchmark we did to compare scale-out performance of Apache Cassandra, MongoDB, Apache HBase, and Couchbase. This represents work done over 8 months by Josh Williams, and was commissioned by DataStax as an update to a similar 3-way NoSQL benchmark we did two years ago.

The database versions we used were Cassandra 2.1.0, Couchbase 3.0, MongoDB 3.0 (with the WiredTiger storage engine), and HBase 0.98. We used YCSB (the Yahoo! Cloud Serving Benchmark) to generate the client traffic and measure throughput and latency as we scaled each database server cluster from 1 to 32 nodes. We ran a variety of benchmark tests covering load, insert-heavy, read-intensive, analytic, and other typical transactional workloads.
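For readers not familiar with YCSB, each test is driven by a small "workload" properties file that sets the record and operation counts and the read/write mix. Purely as an illustration (the counts below are placeholders, not the values from our report), a balanced read/write mix along the lines of YCSB's stock workloada looks roughly like this:

    # Hypothetical YCSB CoreWorkload sketch; not the exact parameters used in the report.
    workload=com.yahoo.ycsb.workloads.CoreWorkload

    # Placeholder sizes; the real runs were sized so the dataset did not fit in RAM.
    recordcount=100000000
    operationcount=10000000

    # Balanced read/write mix: half reads, half updates, Zipfian key popularity.
    readallfields=true
    readproportion=0.5
    updateproportion=0.5
    scanproportion=0
    insertproportion=0
    requestdistribution=zipfian

YCSB is then pointed at each cluster in two phases: a load phase that inserts the records, and a run phase that executes the mixed operations while recording throughput and latency.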

We avoided using small datasets that fit in RAM, and included single-node deployments only for the sake of comparison, since those scenarios do not exercise the scalability features expected of NoSQL databases. We performed the benchmark on Amazon Web Services (AWS) EC2 instances, running each test three separate times on three different days to avoid unreproducible anomalies. We used new EC2 instances for each test run to further reduce the impact of any “lame instance” or “noisy neighbor” effect on any one test.
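The report spells out the exact orchestration we used, but the fresh-instances-per-run idea is simple to sketch. The snippet below is only an illustration: the use of boto3, the AMI ID, region, and node count are assumptions for the sketch, not necessarily the tooling or values behind the report.

    # Sketch only: provision a fresh cluster of i2.xlarge instances for one
    # benchmark run, then tear it down so the next run starts from scratch.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    def run_once(node_count):
        # Launch brand-new instances so no run inherits a "lame instance" or
        # "noisy neighbor" from a previous run.
        instances = ec2.create_instances(
            ImageId="ami-00000000",      # placeholder AMI
            InstanceType="i2.xlarge",
            MinCount=node_count,
            MaxCount=node_count,
        )
        for instance in instances:
            instance.wait_until_running()
        try:
            # Install the database, load the data, and drive YCSB against the
            # cluster here.
            pass
        finally:
            for instance in instances:
                instance.terminate()

    for run in range(3):   # three separate runs (in the report, on different days)
        run_once(node_count=4)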

Which database won? It was pretty overwhelmingly Cassandra. One graph serves well as an example. This is the throughput comparison in the Balanced Read/Write Mix:

Our full report, Benchmarking Top NoSQL Databases, contains full details about the configurations, and provides this and other graphs of performance at various node counts. It also provides everything needed for others to perform the same tests and verify in their own environments. But beware: Your AWS bill will grow pretty quickly when testing large numbers of server nodes using EC2 i2.xlarge instances as we did!

Earlier this morning we also sent out a press release to announce our results and the availability of the report.

Update: See our note about updated test runs and revised report as of June 4, 2015.

14 comments:

Fabio Mariotti said...

It might be odd... but I do not see a conclusions chapter.

While we can read the numbers, it would be nice to have a comment from the guys running these tests.

I see a section on difficulties, but a conclusive summary section would be nice.

Fabio Mariotti said...

Technically, I would not mind knowing why a factor of 10 or even bigger can apply to this comparison. In other words: why are the others so poor?

Unknown said...

Fabio: the numbers speak for themselves so clearly that any conclusive summary would just be further humiliation for the other databases. What conclusion do you expect?

douglas said...

I read through the detailed write-up.

Can you clarify the ZooKeeper setup for the HBase system?

It appears that regardless of cluster size, there was only one ZooKeeper node used.

Is that correct? If so, what might the impact on HBase performance be?

Another point of interest to me is whether you ran any benchmarking on the performance of the datanodes themselves in order to conclude that the HDFS setup was as good as it could be.

Thanks again for a great report.

Kalyan said...

Wondering what write consistency level was maintained in producing these benchmarks? Did you do any specific configuration for things like log files, memstores, etc., and the use of bloom filters, if any, and so forth? And the report points to DataStax. Can I trust these numbers? Apologies for the strong wording here, but I would like to hear this without any bias involved.

Josh Williams said...

Hello Kalyan,

The write consistency levels were set to durable writes for all databases. There wasn't much specific tuning, in order to keep each one on as level a playing field as possible, apart from what was mentioned in the paper.

The numbers should all be trustworthy; no magic was done to make any database system look especially good or bad. In other words, if you were to run the same tests, the results should be the same for you.

Roko said...

It seems some basic tuning should have been done across the systems, as this is more of a comparison of how the systems work with default settings. Also, with HBase you used gzip compression, which is one of the slowest compression algorithms you could use. For a fair comparison you should not have used compression on any of the databases, or should have chosen something like Snappy, which has much lower overhead than gzip.

Jeff de Anda said...

This is clearly biased, as DataStax has a vested interest in Cassandra. Most likely Cassandra was tuned well where the other systems had default settings. There are other benchmarks from independent parties that refute these claims.

Jon Jensen said...

Jeff, of course DataStax has a vested interest in Cassandra. But no, it was not "tuned well where the other systems had default settings". We published the configuration details. You're welcome to question specifics if you want. I hear only vague accusations from you.

Is there a reason you didn't post links to these "benchmarks from independents that refute these claims"?

Colm Smyth said...

Responding to the comments from douglas about the Hadoop configuration used here...

Judging by the name of the mount point, it looks like the single ZooKeeper instance was persisting to HDFS, presumably mounted using FUSE. I can't imagine that there are frequent writes to ZooKeeper during the test run, but it does seem to be a point of contention.

I would be interested to see results for HBase where ZooKeeper was run as a cluster, with each node writing to a local log file.

manikanth said...

I don't know how you could even attempt to compare these.
The master-slave architecture of HBase and MongoDB leans toward data consistency, while the ring topology of Cassandra leans toward availability. They serve different purposes.
And are all MongoDB reads directed to the master only, which severely hits concurrency?
And what about the integrity of data during writes (updates) under the same load for Cassandra?
And with similar settings, United Software Associates showed a different view of benchmark comparisons, where MongoDB was leading. So obviously they were lying, right?
http://info-mongodb-com.s3.amazonaws.com/High%2BPerformance%2BBenchmark%2BWhite%2BPaper_final.pdf

Torsten Bronger said...

I think "United Software Associates" benchmarked with immediate consistency, which obviously slows down Cassandra compared to the others, but is fairer at the same time. Clearly, with those devastating performance figures of MongoDB in the DataStax benchmark, nobody could use MongoDB sensefully in production. But many do.

sam said...

These numbers are clearly biased toward Cassandra, as my own tests (and others) have shown Couchbase to outperform Cassandra every time (with Mongo behind both). I'm pretty sure the authors have a vested interest in pushing Cassandra and deliberately crippled the other tested systems.

Can you please be clear about the size of the Couchbase cluster and exactly how it was configured? Did the data fit completely in RAM? Etc. If you cannot provide fully repeatable tests, your claims are bogus.

David Christensen said...

Hi Sam,

You can see full details of the configuration, etc., in the paper referenced in the follow-up blog article:

http://blog.endpoint.com/2015/06/updated-nosql-benchmark-cassandra.html

Please let us know if this does not answer your questions. Obviously benchmarks are all artificial workloads at some level, so they really only measure the scenarios they were developed to measure. That said, we certainly did not configure the testing systems to intentionally cripple any specific product's performance.

Best,

David