
Documenting web services with Perl POD and AJAX

Perl POD is a handy, convenient, but low-tech approach to embedded documentation. Consider a web service in Dancer:

get '/time' => sub {
  return scalar(localtime());
};
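Your routes won't all be this trivial, of course. As a purely hypothetical illustration, a variant of the same service returning JSON might look like this (using Dancer's exported to_json helper; the route path is made up):

use Dancer;

# Hypothetical JSON-returning variant of the same service
get '/time.json' => sub {
    content_type 'application/json';
    return to_json({ time => scalar(localtime()) });
};

dance;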

(Disclaimer: my actual use-case of this technique was even more legacy: I was documenting Interchange Actionmaps that returned images, JSON, etc.)

Your application might have several, or even dozens of these, with various parameters, returning data in JSON or TXT or CSV or who-knows-what. I chose to document these in Perl POD (Plain Old Documentation) format, e.g.,


=pod

=head1 time

Retrieves the current time

=over 3

=item Parameters

None.

=item Example

=begin html

<script src="js/example-time.js"></script>

=end html

=back

=cut

This block gets inserted right in-line with the web service code, so it's immediately obvious to anyone maintaining it (and thus has the best chance of being maintained if and when the code changes!). Now I can generate an HTML page directly from my Perl code:

$ pod2html MyPackage.pm
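pod2html writes its generated HTML to standard output, so in practice you'll likely redirect it to a file (or use its --outfile option):

$ pod2html MyPackage.pm > MyPackage.html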

Your output looks something like this (excerpted for clarity):

time

Retrieves the current time

Parameters

None.

Example

Where the magic comes in is the JavaScript code that provides an in-line example, live and accurate, within the documentation page. You'll actually get something more like this:

time

Retrieves the current time

Parameters

None.

Example
(results appear here)

Note that the code below is deliberately not factored; I could move a lot of it out to a common routine, but for clarity I'm leaving it all in-line. I am breaking the script into a few chunks for discussion, but you can and should assemble it all into one file (in my example, "js/example-time.js").

/* example-time.js */
$(document).ready(
  function(){
    $('script[src$="/example-time.js"]').after(
      /* Markup reconstructed from the selectors used below;
         attribute values such as the button labels are illustrative. */
      '<form action="/time" method="GET">' +
      /* Note 1 */
      '<input type="submit" value="Try it!"/> ' +
      '<input type="button" name="hide" value="Hide" style="display: none"/>' +
      '</form>' +
      '<div id="time-result"></div>'
    );

Note 1: This being a painfully simple example of a web service, there are no additional inputs. If you have some, you would add them to the HTML being assembled into the <form> tag, and then, using jQuery, add them below to the 'url' parameter or into the 'data' structure, as required by your particular web service.
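For instance, if the form gained a hypothetical "tz" input, you could gather the extra inputs into a map and hand that map to the 'data' option of the $.ajax() call shown below:

/* Hypothetical sketch: collect extra form inputs (e.g. a "tz" field)
   into a map suitable for the 'data' option of $.ajax(). */
var params = {};
$form.find(':input[name]').not('[type="submit"],[type="button"]').each(function(){
  params[this.name] = $(this).val();
});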

This step just inserts a simple <form> into the document. I chose to embed the form in the JavaScript code, rather than in the POD, because it reduces the clutter and separates the example from the web service.

    var $form = $('form[action="/time"]');
    $form.submit(function(){
      $.ajax(
        {
          'url': $form.attr('action') /* Note 1 also */,
          'data': {},
          'dataType': 'text',
          'async': false,
          'success':
             function(data){
                 $('#time-result').html($('<pre/>').html(data))
                     .addClass('json');
             },

Here we have a submit handler that performs a very simple AJAX submit using the form's information, and upon success, inserts the results into a result <div> as a pre-formatted block. I added a "json" class which just tweaks the font and other typographic presentation a bit; you can provide your own if you wish.

I'm aware that there are various jQuery plug-ins that will handle AJAX-ifying a form, but I couldn't get the exact behavior I wanted on my first tries, so I bailed out and just constructed this approach.

          'error':
             function(){
                 $('#time-result').html('Error retrieving data!')
                     .removeClass('json');
             },
/* */

(That stray-looking comment above is just a work-around for the syntax highlighter.)

Error handling goes here. If you have something more comprehensive, such as examining the result for error codes or messages, this is where you'd put it.

          'complete':
             function(){
                 $form.find('input[name="hide"]').show();
             }
         }
      );
      return false;
    }).find('input[type="button"]').click(function(){
      $('#time-result').html('');
    });
  }
);

And just a bit of UI kindness: we have a "hide" button to make the example go away. Some of my actual examples ran to dozens of lines of JSON output, so I wanted a way to clean up after the example.

IPython Tips and Tricks

Recently I have been working on Python automation scripts, and I very often use IPython to develop and debug the code.
IPython is an advanced interactive Python shell with many powerful features. Here I would like to share some of its cooler tricks.

Getting help

Typing object_name? will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
In [1]: import datetime
In [2]: datetime.datetime?
Docstring:
datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])

The year, month and day arguments are required. tzinfo may be None, or an
instance of a tzinfo subclass. The remaining arguments may be ints or longs.
File:      /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/datetime.so
Type:      type

Magic commands

Edit

This will bring up an editor to type multiline code and execute the resulting code.
In [3]: %edit
IPython will make a temporary file named: /var/folders/xh/2m0ydjs51qxd_3y2k7x50hjc0000gn/T/ipython_edit_jnVJ51/ipython_edit_NdnenL.py
In [3]: %edit -p
This will bring up the editor with the same data as the previous time it was used or saved (within the current session).

Run a script

This will execute the script and print the results.
In [12]: %run date_sample.py
Current date and time:  2015-06-18 16:10:34.444674
Or like this:  15-06-18-16-10
Week number of the year:  24
Weekday of the week:  4

Debug

Activate the interactive debugger.
In [15]: %run date.py
Current date and time:  2015-06-18 16:12:32.417691
Or like this: ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/Users/kannan/playground/date.py in <module>()
      3 
      4 print "Current date and time: " , datetime.datetime.now()
----> 5 print "Or like this: " ,datetime.datetime.strftime("%y-%m-%d-%H-%M")
      6 print "Week number of the year: ", datetime.date.today().strftime("%W")
      7 print "Weekday of the week: ", datetime.date.today().strftime("%w")

TypeError: descriptor 'strftime' requires a 'datetime.date' object but received a 'str'

In [16]: %debug
> /Users/kannan/playground/date.py(5)<module>()
      4 print "Current date and time: " , datetime.datetime.now()
----> 5 print "Or like this: " ,datetime.datetime.strftime("%y-%m-%d-%H-%M")
      6 print "Week number of the year: ", datetime.date.today().strftime("%W")

ipdb>
I made an error on line 5: strftime was called on the datetime class instead of on a datetime instance. The %debug command took me into the Python debugger at the point of failure, where I could inspect the problem. The corrected line looks like this:
print "Or like this: " ,datetime.datetime.now().strftime("%y-%m-%d-%H-%M")

Save

This will save the specified input lines to the given file. You can pass any number of line numbers or ranges separated by spaces.
In [21]: %save hello.py 1-2 2-3
The following commands were written to file `hello.py`:
import datetime
datetime.datetime?
%edit
%edit -p

Recall

Repeat a command, or place a previous command on the input line for editing.
In [28]: %recall 21

In [29]: import datetime

Timeit

Time the execution of a Python statement or expression.
It works for one-line or multiline statements; in a one-liner, multiple statements can be chained with semicolons.
In [33]: %timeit range(100)
1000000 loops, best of 3: 752 ns per loop
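For multiline statements, there is also the cell-magic form, %%timeit. A minimal sketch (timing output omitted, since it varies by machine):
In [34]: %%timeit
   ....: d = datetime.datetime.now()
   ....: d.strftime("%y-%m-%d-%H-%M")
   ....: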

Shell Commands

Basic UNIX shell integration (you can run simple shell commands such as cp, ls, rm, etc. directly from the IPython command line).

To execute any other shell command, just prefix it with '!'. The result of a system command can also be assigned to a Python variable for further use.
In [38]: list_of_files = !ls

In [39]: list_of_files
Out[39]: 
['lg-live-build',
 'lg-live-image',
 'lg-peruse-a-rue',
 'lg_chef',
 'lg_cms',
 'playground']
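Python variables can also be interpolated back into shell commands with the $name or {expression} syntax, for example reusing the list captured above:
In [40]: !ls -ld {list_of_files[0]}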

History

Print input history, with most recent last.
In [41]: %history 20-22
ls
import datetime
datetime.datetime.now()
%history ~1/4 #Line 4, from last session
The ~1/4 syntax recalls line 4 from the previous session's history.

Pastebin

This will upload the specified input lines to GitHub's Gist pastebin as an anonymous user, and display the resulting URL.
In [43]: %pastebin [-d "Date Example"] 20-23
Out[43]: u'https://gist.github.com/a660948b8323280a0d27'

For more info on this topic:
http://ipython.org/ipython-doc/dev/interactive/tutorial.html
http://ipython.org/ipython-doc/dev/interactive/magics.html

Heroku: dumping production database to staging

If you need to dump the production database locally, Heroku has a nice set of tools to make this as smooth as humanly possible. In short, remember these two magic words: pg:pull and pg:push. This article details the process: https://devcenter.heroku.com/articles/heroku-postgresql#pg-push-and-pg-pull

However, when I first tried it I had to resolve a few issues.

My first problem was:

pg:pull not found

To fix this:

1. Uninstall the 'heroku' gem with

gem uninstall heroku (Select 'All Versions')

2. Find your Ruby 'bin' path by running

gem env
(it's under 'EXECUTABLE DIRECTORY:')

3. Cd to the 'bin' folder.

4. Remove the Heroku executable with

rm heroku

5. Restart your shell (close Terminal tab and re-open)

6. Type

heroku version
you should now see something like:
heroku-toolbelt/2.33.1 (x86_64-darwin10.8.0) ruby/1.9.3


Now you can proceed with the transfer:

1. Type

heroku config --app production-app

Note the DATABASE_URL values. For example, let's imagine that the production database is HEROKU_POSTGRESQL_KANYE and the staging database is HEROKU_POSTGRESQL_NORTH.

2. Run

heroku pg:pull HEROKU_POSTGRESQL_KANYE rtwtransferdb --app production-app
heroku config --app staging-app
heroku pg:push rtwtransferdb HEROKU_POSTGRESQL_NORTH --app staging-app


This is when I hit the second problem:

database is not empty

I fixed it by doing:

heroku pg:reset HEROKU_POSTGRESQL_NORTH --app staging-app

Happy database dumping!

Google Maps JavaScript API LatLng Property Name Changes

Debugging Broken Maps

A few weeks ago I had to troubleshoot some Google Maps related code that had suddenly stopped working. Some debugging revealed the issue: the code adding markers to the page was attempting to access properties that did not exist. This seemed odd because the latitude and longitude values were the result of a geocoding request which was completing successfully. The other thing which stood out to me were the property names themselves:

var myLoc = new google.maps.LatLng(results[0].geometry.location.k, results[0].geometry.location.D);

It looked like the original author had inspected the geocoded response, found the 'k' and 'D' properties which held latitude and longitude values and used them in their maps code. This had all been working fine until Google released a new version of their JavaScript API. Sites that did not specify a particular version of the API were upgraded to the new version automatically. If you have Google Maps code which stopped working recently this might be the reason why.
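If you need time to migrate, you can pin a page to a specific API release instead of accepting automatic upgrades by passing the v parameter when loading the API. A minimal sketch (your API key and the version you pin will differ):

<!-- Pin the Maps API to the 3.20 release rather than auto-upgrading -->
<script src="https://maps.googleapis.com/maps/api/js?v=3.20&key=YOUR_API_KEY"></script>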

The Solution: Use the built-in methods in the LatLng class


I recalled there being some helper methods for LatLng objects and confirmed this with a visit to the docs for the LatLng class which had recently been updated and given the Material design treatment — thanks Google! The lat() and lng() methods were what I needed and updating the code with them fixed the issue. The fixed code was similar to this:

var myLoc = new google.maps.LatLng(results[0].geometry.location.lat(), results[0].geometry.location.lng());

Digging Deeper

I was curious about this so I mapped out the differences between the three latest versions of the API:

API Version             Latitude Property   Longitude Property   Constructor Name
3.21.x (experimental)   A                   F                    rf
3.20.x (release)        A                   F                    pf
3.19.x (frozen)         k                   D                    pf

It seems to me that the property name changes are a result of running the Google Maps API code through the Closure Compiler. Make sure to use the built-in lat() and lng() methods, as these property names are very likely to change again in the future!

The Portal project - Jenkins Continuous Integration summary

This post describes some of our experiences at End Point in designing and working on comprehensive QA/CI facilities for a new system which is closely related to the Liquid Galaxy.

Due to the design of the system, the full deployment cycle can be rather lengthy and presents us with extra reasons for investing heavily in unit test development. Because of the very active ongoing development on the system we benefit greatly from running the tests in an automated fashion on the Jenkins CI (Continuous Integration) server.

Our Project's CI Anatomy

Our Jenkins CI service defines 10+ job types (a.k.a. Jenkins projects) that cover our system. These job types differ in the source code branches they build, as well as in the combinations of target environments the builds are executed on.

The skeleton of a Jenkins project is what one finds under the Configure section on the Jenkins service webpage. The source code repository and branch are defined here. Each of our Jenkins projects also fetches a few more source code repositories during the build pre-execution phase. The environment variables are defined in a flat text file.

Another configuration file, in JSON format, defines variables for the test suite itself. Furthermore, we have a preparation-phase bash script and a second bash script which eventually executes the test suite. Factoring all degrees of freedom out into two pairs of concise files managed externally (by Chef) allows for a pure and simple Jenkins job build definition.
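For illustration, a hypothetical pair of such files might look like the following (all names and values here are made up, not our actual configuration). The flat environment file:

# environment variables, managed by Chef
PORTAL_BRANCH=master
SELENIUM_MODE=remote
DISPLAY=:99

And the JSON file with test suite variables:

{
    "base_url": "http://portal.example.com",
    "browser": "chrome",
    "timeout_seconds": 60
}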

It's entirely possible to have all the variables and the content of the bash scripts laid out directly in the corresponding text fields of the Jenkins configuration. We used to have that. It's actually a terrible practice, and the above desire for purity comes from the tedious and clumsy experience that changing a variable (e.g. a URL) in 10+ job types involves an unbearable amount of mouse clicking through the Jenkins service webpage. Doing any amount of debugging of the CI environment (like when setting up the ROS stack the project depends on), one is in for repetitive strain injury.

In essence, keeping knowledge about job types on the Jenkins server itself at a minimum and having it managed externally serves us well and is efficient. Another step forward would be managing everything (the entire job type definition) by Chef. We have yet to experiment with the already existing Chef community cookbooks for Jenkins.

The tests themselves are implemented in Python using the pytest unit testing framework. The test cases depend on Selenium, the web automation framework: Python drives the browser through Selenium according to testing scenarios, some of them rather complex. The Selenium framework provides handles by which the browser is controlled, including entering user data, clicking buttons, and so on.

We use Selenium in two modes:

  • local mode: Selenium drives a browser running locally on the Jenkins CI machine itself, inside an Xvfb environment. In this case everything runs on the Jenkins master machine.
  • remote mode: the remote driver connects to a browser running on a remote machine (node A or B) and drives the browser there. The test cases run on a Jenkins slave machine located on a private network. The only difference between browsers A and B is that they load different Chrome extensions.

The usual unit testing assertions are made on the state or values of HTML elements in the web page.
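As a sketch of how the two modes and these assertions fit together, a hypothetical pytest test might look like this (the host names, URL, element ID, and the SELENIUM_MODE switch are illustrative assumptions, not our actual suite):

# test_portal.py - hypothetical sketch of the two Selenium modes
import os
import pytest
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

@pytest.fixture
def browser():
    if os.environ.get("SELENIUM_MODE") == "remote":
        # remote mode: connect to a browser running on node A or B
        driver = webdriver.Remote(
            command_executor="http://node-a.example.com:4444/wd/hub",
            desired_capabilities=DesiredCapabilities.CHROME)
    else:
        # local mode: drive a browser on the Jenkins machine itself (under Xvfb)
        driver = webdriver.Chrome()
    yield driver
    driver.quit()

def test_portal_title(browser):
    # assert on the state or values of HTML elements in the page
    browser.get("http://portal.example.com")
    assert browser.find_element_by_id("title").text == "Portal"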

Custom dashboard

Our Jenkins server runs builds of 10+ job types. Builds of each type are executed periodically, and builds are also triggered by git pushes and git pull requests. As a result, we get a significant number of builds on a daily basis.

While Jenkins CI is extensible with the very many plugins available out there, enabling and configuring a plugin gets cumbersome as the number of job types to configure rises. That is the main reason for my personal aversion to experimenting with plugins on Jenkins for our project.

The Jenkins service webpage itself does not offer an aggregated view across a number of job types: a simple, concise, single-page overview. Natively, there is just the per-job-type trend page at $JOB_URL/buildTimeTrend.

A view which immediately tells you whether there is an infrastructure problem (such as loss of connectivity), or conveys straight away that everything passes on Jenkins, seems to be missing. Such a view is even more important in an environment suffering from occasional transient issues. Basically, we wanted a combination of JENKINS/Dashboard+View and JENKINS/Project+Statistics+Plugin, yet a lot simpler.
So yes, we coded up our own wheel, circular just according to our liking, and developed the jenkins-watcher application.

jenkins-watcher

The application is freely available from this repository, deploys on the Google App Engine platform, and utilizes platform features like Datastore, Cron jobs, TaskQueue, and Access Control. A single configuration file contains mainly the Jenkins CI server access credentials and the names of the job types we are interested in; the repository merely provides a template of this (secret) config file. AngularJS is used on the frontend, and the smashing Jenkins API Python library is used to communicate from Python with the Jenkins CI server through its REST API. The result view it provides is described below; the screenshot was cropped to show only 5 job types and their builds within the last 24 hours.

Colour coding in green (passed), red (failed), and grey (aborted) shows the build status, and is in fact just the standard Jenkins colour coding. Each table row corresponds to one build and shows the build ID, the build timestamp (start of the build), the build duration, and the number of test cases which passed (P), failed (F), were skipped (S), or suffered from errors (E). The last item in the row is a direct link to the build console output, very handy for immediate inspection. In my experience, this is enough for a Jenkins babysitter's swift daily checks. It is nothing fancy: no cool stats, graphs, or plots, just a brief, useful overview.

The application also performs periodic checks and aborts builds which take too long (yes, a Jenkins plugin with this functionality exists as well).

For example, at a glance it is obvious when failed builds suffer from some kind of transient infrastructure problem: no tests were run, nothing failed, and the builds were marked as failures only because some command in their prep or build scripts failed.

Or let's take a look at another situation proving how simple visualisation can sometimes be very useful and immediately provide hints. We observed a test case, interestingly on just one particular job type, which sometimes ended up with a "Connection refused" error between the Selenium driver and the web browser (in the remote mode).

Only after seeing the failures visualized did the pattern strike us: we immediately got the idea that something was rotten in the state of Denmark shortly after midnight. From that point on, the previously mysterious issue boiled down to an erroneous cronjob command. The killall command was killing everything, not just what it was supposed to (bug filed here):

killall --older-than 2h -r chromedriver

Once we fixed the cronjob with a more complex but functional solution, this time without the killall command, so that running builds no longer had the chromedriver blanket pulled from under them, the mysterious error disappeared.

Summary, conclusion

Jenkins CI has proven very useful for our Portal project in general. Keeping its configuration minimal and handling it externally has worked most efficiently for us. The custom jenkins-watcher application provides a useful, aggregated, dashboard-like view. It is very easily configurable and not in any way dependent on the base project: take it for free, configure it a bit, and push it as your own Google App Engine project. The visualisation can sometimes be a useful debugging tool.

MediaWiki complete test wiki via cloning

Being able to create a quick copy of your MediaWiki site is an important skill that has many benefits. Any time you are testing an upgrade, whether major or minor, it is great to be able to perform the upgrade on a test site first. Tracking down bugs becomes a lot easier when you can add all the debugging statements you need and not worry about affecting any of the users of your wiki. Creating and modifying extensions also goes a lot smoother when you can work with an identical copy of your production wiki. I will outline the steps I use to create such a copy, also known as a "test wiki".

Before creating a copy, there are two things that should be done to an existing MediaWiki installation: use git, and move the images directory. By "use git", I mean to put your existing mediawiki directory (e.g. where your LocalSettings.php file lives) into version control. Because the MediaWiki software is not that large, it is simplest to just add nearly everything into git, with the exception of the images and the cache information. Here is a recipe to do just that:


$ cd /var/www/mediawiki
$ git init .
Initialized empty Git repository in /var/www/mediawiki/.git/
$ echo /cache/ >> .gitignore
$ echo /images/ >> .gitignore
$ git add --force .
$ git commit -a -m "Initial MediaWiki commit, version 1.24"
[master (root-commit) bd7db2b] Initial MediaWiki commit, version 1.24
 10024 files changed, 1910576 insertions(+)
 create mode 100644 .gitignore
...

Replace that commit message with your specific version, of course, or whatever you like, although I highly recommend your git commits always mention the version on major changes.

The second thing that should be done is to move the images directory and use a symlink to the new location. The "images" directory in MediaWiki is special in many ways. It is the only directory (except 'cache') directly written by MediaWiki (all other changes are stored in the database, not on disk). It is the only directory that comes pre-populated in the MediaWiki tarballs and is always a pain to upgrade. Finally, it invariably contains a large collection of static files that are not well suited for version control, and are usually better backed up and stored in ways different than the rest of MediaWiki. For all these reasons, I recommend making images into a symlink. The simplest recipe is to just move the images directory "up a level". This will also help us below when cloning the wiki.

$ cd /var/www/mediawiki
$ mv images ..
$ ln -s ../images .

Now that those two important prerequisites are out of the way, let's get a quick overview of the steps to create a clone of your wiki:

  • Make a backup of your existing wiki (files and database)
  • Make a copy of your database
  • Create a new directory, and copy the mediawiki files into it
  • Create a new git branch
  • Adjust the LocalSettings.php file
  • Mark it clearly as a test wiki
  • Do a git commit
  • Adjust your web server

The first step is to make a backup of your existing wiki. You can never have too many backups, and right before you go copying a lot of files is a great time to create one. Before backing up, make sure everything is up to date in git with "git status". Make a backup of the mediawiki directory, for example with tar, making sure the resulting backup file is well labeled:

$ tar cfz /backups/mediawiki.backup.20150601.tar.gz --exclude=mediawiki/cache --anchored mediawiki/

If your images directory is somewhere else, make sure you back that up as well. Backing up your database is dead simple if you are using Postgres:

$ pg_dump wikidb | gzip --fast > /backups/mediawiki.database.backup.wikidb.20150601.pg.gz

The next step is to create a new copy of the database for your cloned wiki to access:

$ dropdb test_wikidb
$ createdb -T wikidb test_wikidb
$ psql test_wikidb -c 'alter database test_wikidb owner to wikiuser'

Now we want to create a new directory for the cloned wiki, and populate it with files from the production wiki. For this example, the existing production wiki lives in /var/www/mediawiki, and the new cloned test wiki will live in /var/www/test_mediawiki.

$ cd /var/www
$ mkdir test_mediawiki
$ rsync -a -W --exclude=/images/ mediawiki/ test_mediawiki/
## rsync will copy symlinks as well - such as the images directory!

I like to create a new git branch right away, to avoid any confusion with the "actual" git repository in the production mediawiki directory. If you do end up making any changes in the test directory, it's easy enough to roll them into the other git repo. Branch names should be short and clearly indicate why you have created this copy of the wiki. Doing this means the name shows up as the first line whenever you do a "git status", which is nice.

$ cd /var/www/test_mediawiki
$ git checkout -b testing_version_1.25.2
Switched to a new branch 'testing_version_1.25.2'

The next step is critical: editing the LocalSettings.php file! As this was copied from the production wiki, we need to make sure it points back to itself via paths, and that it connects to our newly created database. Add all these to the bottom of your test_mediawiki/LocalSettings.php file:

## Change important paths:
$wgArticlePath          = '/testwiki/$1';
$wgScriptPath           = '/test_mediawiki';
## Point to the correct database:
$wgDBname               = 'test_wikidb';
## The logo may be hardcoded, so:
$wgLogo                 = '/test_mediawiki/End_Point_logo.png';
## Disable all email notifications:
$wgUsersNotifiedOnAllChanges = array();

It's also a good idea to make this wiki read-only until needed. It is likewise important, if you symlinked the images directory, to disallow any uploads. If you need to enable uploads, and thus writes to the images directory, make sure you remove the symlink and create a new images directory! You can either copy all the files from the production wiki, or simply leave it empty and expect to see a lot of "missing file" errors, which can safely be ignored.

$wgReadOnly       = 'Test wiki: upgrading to MediaWiki 1.25.2';
$wgEnableUploads  = false;

The $wgReadOnly message will appear when people try to edit a page, but we want to make it very visible to all users, the moment they see the wiki, that "here be danger" (and edits will be lost). To that end, there are four additional steps you can take. First, you can set a sitewide message. This will appear near the top of every page; you can include HTML in it, and it is set in your LocalSettings.php file as $wgSiteNotice. You can also change the $wgSitename parameter, which will appear in the title of every page.

$wgSiteNotice  = '<strong>TEST WIKI ONLY!</strong>';
$wgSitename    = 'TEST WIKI';

The third additional step is to change the CSS, which I use to slightly change the background color of every page. This requires that the $wgUseSiteCss setting is enabled. It is on by default, but there is no harm in setting it to true explicitly. Getting it to work on all pages, including the login page, requires enabling $wgAllowSiteCSSOnRestrictedPages as well.

$wgUseSiteCss                     = true;
$wgAllowSiteCSSOnRestrictedPages  = true;

Once the above is done, navigate to the MediaWiki:Common.css page and add the text below. Note that you may need to wait until the "making the wiki active" step below, and temporarily comment out the $wgReadOnly variable, since a read-only wiki cannot edit its own pages.

* { background-color: #ddeeff !important }

The last method to mark the wiki as test only is to change the wiki logo. You can replace it with a custom image, or you can modify the existing logo. I like the latter approach. Annotating text is easy from the command line by using ImageMagick. Use the "polaroid" feature to give it a nice effect (use "-polaroid 0" to avoid the neat little rotation). The command and the result:

$ convert End_Point.logo.png -caption "TEST WIKI ONLY\!" -gravity center -polaroid 20 End_Point.tilted.testonly.png
[Images: the original logo and the captioned "polaroid" version]

At this point, all of the changes to the test wiki are complete, so you should commit all your changes:

$ git commit -a -m "Changes for the test wiki"

The final step is to make your test wiki active by adjusting your web server. Generally this is easy and basically means copying the existing wiki parameters. For Apache, it can be as simple as adding a new Alias directive to your http.conf file:

Alias /testwiki /var/www/test_mediawiki/index.php

Reload the web server, and Bob's your uncle. You now have a fully functional, safely sandboxed, magnificently marked-up copy of your production wiki. The above may seem like a lot of work, but this was an overly detailed post; the actual work only takes around 10 minutes (or much less if you script it!).

Updated NoSQL benchmark: Cassandra, MongoDB, HBase, Couchbase

Back in April, we published a benchmark report on a number of NoSQL databases, including Cassandra, MongoDB, HBase, and Couchbase. We endeavored to keep things fair and configured as identically as possible between the database engines. But a short while later, DataStax caught two incorrect configuration items, in Cassandra and HBase, and contacted us to verify the problem. Even with the effort we put into keeping everything even, a couple of erroneous parameters slipped through the cracks! I'll save the interesting technical details for another post coming soon, but once the problem was confirmed we jumped back in and started work on getting corrected results.

With the configuration fixed, we re-ran a full suite of tests for both Cassandra and HBase. From the updated results we have published a revised report, which you can download in PDF format from the DataStax website (or see the overview link).

The revised results still show Cassandra leading MongoDB, HBase, and Couchbase in the various YCSB tests.

For clarity, the paper also includes a few additional configuration details that weren't in the original report. We regret any confusion caused by the prior report, and we worked as quickly as possible to correct the data. Feel free to get in contact if you have any questions.