

The custom sampler instantiates the Couchbase client and then uses it to execute independent PUT, GET and QUERY operations. The dataset is the Last.fm training dataset, which contains about one million JSON-formatted records of songs. The nodes were added sequentially to the pool in increments of 2, and the bucket that held the data was always deleted and recreated before running the tests. All the tests were executed using 1000 concurrent client threads that instantiate a separate client instance on each loader machine, which totals 2000 concurrent threads. Also, the auto-compaction feature was disabled, and all the hosts had the fs.file-max ulimit increased to 55k. The Couchbase clients discover the nodes participating in the cluster and connect to the individual nodes directly, thus yielding more than 2000 concurrent connections to the cluster. The data was aggregated from all loaders and saved in CSVs, and the time series was then analysed using Octave. To test the query performance we used the following map/reduce view (mapper function). The view was also set up before running any other test. This test is designed to verify cluster behavior under normal operating circumstances, so the cluster was monitored permanently. No hardware limitation was hit during the tests.
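The aggregation step can be illustrated with a short sketch. This is not the authors' actual Octave analysis; it is a hypothetical Python equivalent, and the per-loader sample layout (timestamp, operation, latency in ms) is an assumption:

```python
# Illustrative sketch (not the article's Octave scripts): merge latency
# samples collected by each loader and reduce them to per-operation stats.
# The (timestamp, operation, latency_ms) row layout is a hypothetical one.
import statistics

# Hypothetical samples as two loaders might have exported them to CSV.
loader1 = [(0, "GET", 0.42), (0, "PUT", 0.55), (1, "QUERY", 3.10)]
loader2 = [(0, "GET", 0.38), (1, "PUT", 0.61), (1, "QUERY", 2.95)]

def summarize(*loaders):
    """Aggregate samples from all loaders into (count, mean, max) per op."""
    merged = {}
    for rows in loaders:
        for _ts, op, latency_ms in rows:
            merged.setdefault(op, []).append(latency_ms)
    return {op: (len(vals), round(statistics.mean(vals), 3), max(vals))
            for op, vals in merged.items()}

summary = summarize(loader1, loader2)
print(summary)  # count, mean and max latency per operation type
```

In a real run the per-loader rows would be read from the exported CSV files (e.g. with the `csv` module) instead of being defined inline.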
# NoSQL benchmark tests software
Our goal is to understand the various scaling profiles of distributed database technologies, as well as to identify environments that provide optimum performance/price. Many of our findings can be applied to on-premise infrastructure as well, and even to some cloud scenarios. This performance benchmark on Couchbase shows sub-millisecond response times, but also a difference between GET/PUT operations and QUERY operations when multiple instances are added to the cluster. We have also tested the memory-access-time sensitivity of Couchbase. We used the following infrastructure for the tests:

- 10 FMCI 16.192 instances with 2 x Intel Xeon E5-2690 CPUs (8 physical cores at 2.9 GHz each) and 192 GB of RAM for the Couchbase nodes.
- 2 FMCI 20.128 instances with 2 x Intel Xeon E5-2690v2 CPUs (10 physical cores at 3 GHz each) and 128 GB of RAM for the JMeter loader nodes.

These nodes were connected with two independent 10 Gbps networks: one for the actual loading and inter-node communication, and the other for backend inter-loader communication. The nodes were also connected to our Solid Store iSCSI Block Storage via a third independent 10 Gbps link per node. The loader software was Apache JMeter 2.11 with custom samplers written by us, which are available in our GitHub repository.
# NoSQL benchmark tests series
Update: See our note about updated test runs and the revised report as of June 4, 2015.

This is the first of a series of performance benchmarks on NoSQL databases that we plan to share with you. The report also provides everything needed for others to perform the same tests and verify the results in their own environments. But beware: your AWS bill will grow pretty quickly when testing large numbers of server nodes using EC2 i2.xlarge instances as we did! Earlier this morning we also sent out a press release to announce our results and the availability of the report.
# NoSQL benchmark tests full
We avoided using small datasets that fit in RAM, and included single-node deployments only for the sake of comparison, since those scenarios do not exercise the scalability features expected from NoSQL databases. We performed the benchmark on Amazon Web Services (AWS) EC2 instances, with each test being performed three separate times on three different days to avoid unreproducible anomalies. We used new EC2 instances for each test run to further reduce the impact of any “lame instance” or “noisy neighbor” effect on any one test. Which database won? It was pretty overwhelmingly Cassandra. This is the throughput comparison in the Balanced Read/Write Mix. Our full report, Benchmarking Top NoSQL Databases, contains full details about the configurations, and provides this and other graphs of performance at various node counts.

We ran a variety of benchmark tests that included load, insert-heavy, read-intensive, analytic, and other typical transactional workloads. We used YCSB (the Yahoo! Cloud Serving Benchmark) to generate the client traffic and to measure throughput and latency as we scaled each database server cluster from 1 to 32 nodes. The database versions we used were Cassandra 2.1.0, Couchbase 3.0, MongoDB 3.0 (with the WiredTiger storage engine), and HBase 0.98.
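YCSB expresses workload mixes as small property files consumed by its CoreWorkload class. As an illustrative sketch only (the parameter names are YCSB's standard ones, but the values are placeholders, not the report's actual configuration), a balanced read/write mix could look like:

```properties
# Hypothetical YCSB workload file for a balanced (50/50) read/update mix.
# Record and operation counts are placeholders, not the report's values.
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=500000000
operationcount=90000000
readproportion=0.5
updateproportion=0.5
scanproportion=0
insertproportion=0
requestdistribution=zipfian
```

A run against a given database then uses the generic form `bin/ycsb run <binding> -P <workload-file> -threads <n>`, where the binding selects the database client driver.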

# NoSQL benchmark tests update
Today we are pleased to announce the results of a new NoSQL benchmark we did to compare scale-out performance of Apache Cassandra, MongoDB, Apache HBase, and Couchbase. This represents work done over 8 months by Josh Williams, and was commissioned by DataStax as an update to a similar 3-way NoSQL benchmark we did two years ago.
