Odd pg_basebackup Connectivity Failures Over SSL

A client recently came to me with an ongoing mystery: a remote Postgres replica needed to be replaced, but pg_basebackup repeatedly failed to complete. It would stop partway through every time, reporting something along the lines of:

pg_basebackup: could not read COPY data: SSL error: decryption failed or bad record mac
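For context, the replica was being seeded with an invocation roughly like this (the host, user, and flags here are my assumptions; the original command isn't shown):

$ pg_basebackup -h master.example.com -U replication -D 9.2/data.test2 -X stream -P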

Our first hunch was to turn off SSL renegotiation, as it isn't supported in some OpenSSL versions. By default Postgres renegotiates keys after 512MB of traffic; setting ssl_renegotiation_limit to 0 in postgresql.conf disables it. That helped pg_basebackup get much further along, but the process still bailed out before completion.
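For reference, the renegotiation change is a one-liner on the sending side (the data directory path below is just a placeholder):

## In postgresql.conf on the master:
ssl_renegotiation_limit = 0
## Then reload the configuration (a full restart also works):
$ pg_ctl reload -D /var/lib/pgsql/9.2/data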

The client's Chef configuration has a strange habit of removing my ssh key from the database master, so while that was being fixed I connected to the replica and took a look. Two pg_basebackup runs later, a pattern started to emerge:
$ du -s 9.2/data.test*
67097452        9.2/data.test
67097428        9.2/data.test2
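For scale, du reports sizes in KiB by default, and 64 GiB works out to:

$ echo $((64 * 1024 * 1024))
67108864
## both runs died roughly 11 MiB short of that mark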
Besides being nearly identical, those sizes are suspiciously close to 64GB. I like round numbers: when a problem happens close to one, that's often a pretty good tell of some boundary or limit being hit. On a hunch that it wasn't a coincidence, I checked around for similar reports and found a recent openssl package bug report:


RHEL 6, check. SSL connection, check. Failure at 64 GiB, check. And lastly, a connection with psql confirmed AES-GCM:
SSL connection (cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256)

Once the Postgres service could be restarted to load in the updated OpenSSL library, the base backup process completed without issue.
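Checking for and applying the fix was the usual RHEL drill; the exact Postgres service name depends on how it was installed, so treat this as a sketch:

$ sudo yum update openssl
$ rpm -q openssl
$ sudo service postgresql-9.2 restart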

Remember, keep those packages updated!

Broken wikis due to PHP and MediaWiki "namespace" conflicts

I was recently tasked with resurrecting an ancient wiki: one last updated in 2005, running MediaWiki version 1.5.2, which needed to be transformed into something more modern (in this case, version 1.25.3). The old settings and extensions were not important, but we did want to preserve any content that had been created.

The items available to me were a tarball of the mediawiki directory (including the LocalSettings.php file), and a MySQL dump of the wiki database. To import the items to the new wiki (which already had been created and was gathering content), an XML dump needed to be generated. MediaWiki has two simple command-line scripts to export and import your wiki, named dumpBackup.php and importDump.php. So it was simply a matter of getting the wiki up and running enough to run dumpBackup.php.

My first thought was to simply bring the wiki up as it was - all the files were in place, after all, and specifically designed to read the old version of the schema. (Because the database schema changes over time, newer MediaWikis cannot run against older database dumps.) So I unpacked the MediaWiki directory, and prepared to resurrect the database.

Rather than MySQL, the distro I was using defaulted to using the freer and arguably better MariaDB, which installed painlessly.

## Create a quick dummy database:
$ echo 'create database footest' | sudo mysql

## Install the 1.5.2 MediaWiki database into it:
$ cat mysql-acme-wiki.sql | sudo mysql footest

## Sanity test as the output of the above commands is very minimal:
$ echo 'select count(*) from revision' | sudo mysql footest

Success! The MariaDB instance was easily able to parse and load the old MySQL file. The next step was to unpack the old 1.5.2 mediawiki directory into Apache's docroot, adjust the LocalSettings.php file to point to the newly created database, and try to access the wiki. Once all that was done, however, both the browser and the command-line scripts spat out the same error:

Parse error: syntax error, unexpected 'Namespace' (T_NAMESPACE), 
  expecting identifier (T_STRING) in 
  /var/www/html/wiki/includes/Namespace.php on line 52

What is this about? It turns out that some years ago, someone added a class to MediaWiki with the terrible name of "Namespace". Years later, PHP finally caved to user demands and added some non-optimal support for namespaces, which means that (surprise) "namespace" is now a reserved word. In short, older versions of MediaWiki cannot run with modern (5.3.0 or greater) versions of PHP. Amusingly, a web search for this error on DuckDuckGo revealed not only many people asking about the error and offering solutions, but also many actual wikis that are currently broken in exactly this way! Their wiki was working fine one moment, then PHP was (probably automatically) upgraded, and now the wiki is dead. But DuckDuckGo is happy to show you the wiki and its now-single page of output, the error above. :)

There are three groups to blame for this sad situation, as well as three obvious solutions to the problem. The first group, and the most culpable, is the MediaWiki developers who chose the word "Namespace" as a class name. As PHP has always had poor-to-nonexistent support for packages, namespaces, and scoping, it is vital that all your PHP variables, class names, etc. are as unique as possible. To that end, the class was eventually renamed to "MWNamespace" - but the damage had been done. The second group is the PHP developers, both for lacking namespace support for so long, and for making "namespace" a reserved word knowing full well that one of the poster children for "mature" PHP apps, MediaWiki, was using it. Still, we cannot blame them too much for picking such an obvious word. The third group is the owners of all those wikis out there still suffering that syntax error; they ought to be repairing their wikis. The fixes are pretty simple, which leads us to the three solutions to the problem.

MediaWiki's cool install image

The quickest (and arguably worst) solution is to downgrade PHP to something older than 5.3. At that point, the wiki will probably work again. But unless it's a museum (static) wiki and you never intend to upgrade anything on the server again, this solution will not work long term. The second solution is to upgrade your MediaWiki! The upgrade process is actually very robust and works well even for very old versions of MediaWiki (as we shall see below). The third solution is to make some quick edits to the code and replace all uses of "Namespace" with "MWNamespace", as sketched below. Not a permanent solution, but handy when you just need to get the wiki up and running - and thus it's the solution I tried for the original problem.
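For the rename route, something like the following gets most of the way there. This is only a sketch: the exact files that reference the class vary by MediaWiki version, so expect to iterate.

## Find the files that mention the offending class:
$ grep -rl 'Namespace::' includes/
## Rename the class and its static calls, leaving the file name alone:
$ sed -i 's/class Namespace/class MWNamespace/; s/Namespace::/MWNamespace::/g' includes/*.php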

However, once I solved the Namespace problem by renaming to MWNamespace, some other problems popped up. I will not run through them here - although they were small and quickly solved, it began to feel like a neverending whack-a-mole game, and I decided to cut the Gordian knot with a completely different approach.

As mentioned, MediaWiki has an upgrade process, which means that you can install the new software and it will, in theory, transform your database schema and data to the new version. However, version 1.5 of MediaWiki was released in October 2005, almost exactly 10 years before the current release (1.25.3 as of this writing). Ten years is a really, really long time on the Internet. Could MediaWiki really convert something that old? (Spoiler: yes!) Only one way to find out. First, I prepared the old database for the upgrade. Note that all of this was done on a private local machine where security was not an issue.

## As before, install mariadb and import into the 'footest' database
$ echo 'create database footest' | sudo mysql test
$ cat mysql-acme-wiki.sql | sudo mysql footest
$ echo "set password for 'root'@'localhost' = password('foobar')" | sudo mysql test

Next, I grabbed the latest version of MediaWiki, verified it, put it in place, and started up the webserver:

$ wget http://releases.wikimedia.org/mediawiki/1.25/mediawiki-1.25.3.tar.gz
$ wget http://releases.wikimedia.org/mediawiki/1.25/mediawiki-1.25.3.tar.gz.sig

$ gpg --verify mediawiki-1.25.3.tar.gz.sig 
gpg: assuming signed data in `mediawiki-1.25.3.tar.gz'
gpg: Signature made Fri 16 Oct 2015 01:09:35 PM EDT using RSA key ID 23107F8A
gpg: Good signature from "Chad Horohoe "
gpg:                 aka "keybase.io/demon "
gpg:                 aka "Chad Horohoe (Personal e-mail) "
gpg:                 aka "Chad Horohoe (Alias for existing email) "
## Chad's cool. Ignore the below.
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 41B2 ABE8 17AD D3E5 2BDA  946F 72BC 1C5D 2310 7F8A

$ tar xvfz mediawiki-1.25.3.tar.gz
$ mv mediawiki-1.25.3 /var/www/html/
$ cd /var/www/html/mediawiki-1.25.3
## Because "composer" is a really terrible idea:
$ git clone https://gerrit.wikimedia.org/r/p/mediawiki/vendor.git 
$ sudo service httpd start

Now, we can call up the web page to install MediaWiki.

  • Visit http://localhost/mediawiki-1.25.3, see the familiar yellow flower
  • Click "set up the wiki"
  • Click next until you find "Database name", and set to "footest"
  • Set the "Database password:" to "foobar"
  • Aha! Look what shows up: "Upgrade existing installation" and "There are MediaWiki tables in this database. To upgrade them to MediaWiki 1.25.3, click Continue"

It worked! The next messages are: "Upgrade complete. You can now start using your wiki. If you want to regenerate your LocalSettings.php file, click the button below. This is not recommended unless you are having problems with your wiki." That message is a little misleading. You almost certainly *do* want to generate a new LocalSettings.php file when doing an upgrade like this. So say yes, leave the database choices as they are, and name your wiki something easily greppable like "ABCD". Create an admin account, save the generated LocalSettings.php file, and move it to your mediawiki directory.

Voila! At this point, we can do what we came here for: generate an XML dump of the wiki content in the database, so we can import it somewhere else. We only wanted the actual content and did not want to worry about the history of the pages, so the command was:

$ php maintenance/dumpBackup.php --current > acme.wiki.2005.xml

It ran without a hitch. However, close examination showed that it had an amazing amount of unwanted stuff from the "MediaWiki:" namespace. While there are probably some clever solutions that could be devised to cut them out of the XML file (either on export, import, or in between), sometimes quick beats clever, and I simply opened the file in an editor and removed all the "page" sections with a title beginning with "MediaWiki:". Finally, the file was shipped to the production wiki running 1.25.3, and the old content was added in a snap:

$ php maintenance/importDump.php acme.wiki.2005.xml

The script will recommend rebuilding the "Recent changes" page by running rebuildrecentchanges.php (can we get consistentCaps please, MW devs?). However, this data is at least 10 years old, and by default "Recent changes" only goes back 90 days in version 1.25.3 (and even less in previous versions). So, one final step:

## 20 years should be sufficient
$ echo '$wgRCMaxAge = 20 * 365 * 24 * 3600;' >> LocalSettings.php
$ php maintenance/rebuildrecentchanges.php

Voila! All of the data from this ancient wiki is now in place on a modern wiki!

Liquid Galaxy at UNESCO in Paris

The National Congress of Industrial Heritage of Japan (NCoIH) recently deployed a Liquid Galaxy at UNESCO Headquarters in Paris, France. The display showed several locations throughout southern Japan that were key to her rapid industrialization in the late 19th and early 20th centuries. Over the span of 30 years, Japan went from an agrarian society dominated by samurai still wearing swords in public to an industrial powerhouse, forging steel and building ships that would eventually form a world-class navy and an industrial base that still leads many global industries.

End Point assisted by supplying the servers, frame, and display hardware for this temporary installation. The NCoIH supplied panoramic photos, historical records, and location information. Together, using our Roscoe Content Management Application, we built out presentations that guided the viewer through several storylines for each location: viewers could see the early periods of Trial & Error and then later industrial mastery, or could view the locations by technology: coal mining, shipbuilding, and steel making. The touchscreen interface was custom-designed to allow self-guided exploration among these storylines, and also showed thumbnail images of each scene in the presentations; touching a thumbnail brought the viewer directly to that location and displayed a short explanatory text and historical photos, then transitioned directly into Google Street View to show the preserved site.

From a technical point of view, End Point debuted several new features with this deployment:

  • New scene control and editing functionalities in the Roscoe Content Management System
  • A new touchscreen interface that shows presentations and scenes within a presentation in a compact, clean layout
  • A new Street View interface that allows the "pinch and zoom" map navigation that we all expect from our smart phones and tablets
  • Debut of the new ROS-based operating system, including new ROS-nodes that can control Google Earth, Street View, panoramic content viewers, browser windows, and other interfaces
  • Deployment of some very nice NEC professional-grade displays

Overall, the exhibit was a great success. Several diplomats from European, African, Asian, and American countries came to the display, explored the sites, and expressed their wonderment at the platform's ability to bring a given location and its history into such vivid detail. Japan recently won recognition for these sites from the overall UNESCO governing body, and this exhibit was a chance to show those locations back to the UNESCO delegates.

From here, the Liquid Galaxy will be shipped to Japan where it will be installed permanently at a regional museum, hopefully to be joined by a whole chain of Liquid Galaxy platforms throughout Japan showing her rich history and heritage to museum visitors.

Taking control of your IMAP mail with IMAPFilter

Organizing and dealing with incoming email can be tedious, but with IMAPFilter's simple configuration syntax you can automate any action that you might want to perform on an email and focus your attention on the messages that are most important to you.

Most desktop and mobile email clients include support for rules or filters to deal with incoming mail messages but I was interested in finding a client-agnostic solution that could run in the background, processing incoming messages before they ever reached my phone, tablet or laptop. Configuring a set of rules in a desktop email client isn't as useful when you might also be checking your mail from a web interface or mobile client; either you need to leave your desktop client running 24/7 or end up with an unfiltered mailbox on your other devices.

I've configured IMAPFilter to run on my home Linux server and it's doing a great job of processing my incoming mail, automatically sorting things like newsletters and automated Git commit messages into separate mailboxes and reserving my inbox for higher priority incoming mail.

IMAPFilter is available in most package managers and easily configured with a single ~/.imapfilter/config.lua file. A helpful example config.lua is available in IMAPFilter's GitHub repository and is what I used as the basis for my personal configuration.
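Once the config file is in place, running it is a one-liner; the -c flag points at an explicit config path, though with the default location a plain imapfilter invocation works too:

$ imapfilter -c ~/.imapfilter/config.lua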

A few of my favorite IMAPFilter rules (where 'endpoint' is configured as my work IMAP account):

-- Mark daily timesheet reports as read and move them into a Timesheets archive mailbox
timesheets = endpoint['INBOX']:contain_from('timesheet@example.com')
timesheets:mark_seen()
timesheets:move_messages(endpoint['Timesheets'])

-- Sort newsletters into newsletter-specific mailboxes
jsweekly = endpoint['INBOX']:contain_from('jsw@peterc.org')
jsweekly:move_messages(endpoint['Newsletters/JavaScript Weekly'])

hn = endpoint['INBOX']:contain_from('kale@hackernewsletter.com')
hn:move_messages(endpoint['Newsletters/Hacker Newsletter'])

Note that IMAPFilter will create missing mailboxes when running 'move_messages', so you don't need to set those up ahead of time. These are basic examples but the sample config.lua is a good source of other filter ideas, including combining messages matching multiple criteria into a single result set.

In addition to these basic rules, IMAPFilter also supports more advanced configurations including the ability to perform actions on messages based on the results of passing their content through an external command. This opens up possibilities like performing your own local spam filtering by sending each message through SpamAssassin and moving messages into spam mailboxes based on the exit codes returned by spamc. As of this writing I'm still in the process of training SpamAssassin to reliably recognize spam vs. ham but hope to integrate its spam detection into my own IMAPFilter configuration soon.
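As a rough sketch of the exit-code half of that idea: spamc's -c flag checks a message without modifying it and sets its exit status accordingly (the file name here is just a placeholder):

$ spamc -c < message.eml > /dev/null
$ echo $?
## 0 means SpamAssassin considers it ham, 1 means spam; a filter can branch on that status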

Biennale Arte 2015 Liquid Galaxy Installation

If there is anyone who doesn’t know about the incredible collections of art that the Google Cultural Institute has put together, I would urge them to visit google.com/culturalinstitute and be overwhelmed by their indoor and outdoor Street View tours of some of the world’s greatest museums. Along these same lines, the Cultural Institute recently finished doing a Street View capture of the interior of 70 pavilions representing 80 countries of the Biennale Arte 2015, in Venice, Italy. We, at End Point, were lucky enough to be asked to come along for the ride: Google decided that not only would this Street View version of the Biennale be added to the Cultural Institute’s collection, but that they would install a Liquid Galaxy at the Biennale headquarters, at Ca’ Giustinian on the Grand Canal, where visitors can actually use the Liquid Galaxy to navigate through the installations. Since the pavilions close in November 2015, and the Galaxy is slated to remain open until the end of January 2016, this will permit art lovers who missed the Biennale to experience it in a way that is astoundingly firsthand.

End Point faced two main challenges during the Liquid Galaxy installation for the Cultural Institute. The first was to develop a custom touchscreen that would allow users to easily navigate and choose among the many pavilions. Additionally, wanting to mirror the way the Google Cultural Institute presents content, both online and on the wall at their Paris office, we decided to add a swipeable thumbnail runway to the touchscreen map, which would appear once a given pavilion was chosen.

As we took on this project, it became evident to our R&D team that ordinary Street View wasn't really the ideal platform for indoor pavilion navigation because of the sheer size and scope of the pavilions. For this reason, our team decided that a ROS-based spherical Street View would provide a much smoother navigating experience. The new Street View viewer draws Street View tiles inside a WebGL sphere. This is a dramatic performance and visual enhancement over the old Maps API based viewer, and can now support spherical projection, hardware acceleration, and seamless panning. For a user in the multi-screen Liquid Galaxy setting, this means, for the first time, being able to roll the view vertically as well as horizontally, and zoom in and out, with dramatically improved frame rates. The result was such a success that we will be rolling out this new Street View to our entire fleet.

The event itself consisted of two parts: at noon, Luisella Mazza, Google’s Head of Country Operations at the Cultural Institute, gave a presentation to the international press; as a result, we have already seen coverage emerge in ANSA, Arte.it, L'Arena, and more. This was followed by a 6PM closed door presentation to the Aspen Institute.

Using the Liquid Galaxy and other supports from the exhibition, Luisella spoke at length about the role of culture in what Google refers to as the “digital transformation”.

The Aspen Institute is very engaged with these questions of “whitherto”, and Luisella’s presentation was followed by a long, and lively, round table discussion on the subject.

We were challenged to do something cool here and we came through in a big way: our touchscreen design and functionality are the stuff of real creative agency work, and meeting the technical challenge of making Street View perform in a new and enhanced way not only made for one very happy client, but is the kind of technical breakthrough that we all dream of. And how great that we got to do it all in Venice and be at the center of the action!

Top 15 Best Unix Command Line Tools

Here are some of the Unix command line tools that we feel make our hands faster and our lives easier. Let's go through them in this post, and make sure to leave a comment with your favourite!

1. Find the command that you are unaware of

In many situations we need to perform a command line operation but might not know the right utility to run. The apropos command searches for the given keyword in the short descriptions of the Unix manual pages and returns a list of commands we might use to accomplish our task.

If apropos cannot find the right utility, then Google is our friend :)

$ apropos "list dir"
$ man -k "find files"

2. Fix typos in our commands

It's normal to make typographical errors when we type fast. Consider a situation where we run a command with a long list of arguments, it returns "command not found", and we notice a typo in the command we executed.
Now, we really do not want to retype the long list of arguments; instead, use the following to correct the typo and re-execute the command:
$ ^typo_cmd^correct_cmd
$ dc /tmp
$ ^dc^cd
The above corrects "dc" to "cd" and navigates to the /tmp directory.

3. Bang and its Magic

The bang (!) is quite useful when we want to play with bash history. It lets us easily re-run or reuse commands from the history when we need them:
  • !! --> Execute the last command in the bash history
  • !* --> Expands to all the arguments passed to the previous command
  • !^ --> Expands to the first argument of the last command in the bash history
  • !$ --> Expands to the last argument of the last command in the bash history
  • !N --> Execute the command at position N in the bash history
  • !?keyword? --> Execute the most recent command in the bash history that matches the specified keyword
  • !-N --> Execute the command that was N positions from the end of the bash history
$ ~/bin/lg-backup
$ sudo !!
In the above example we realized only after running it that lg-backup had to be run with "sudo". Instead of typing the whole command again, we can just use "sudo !!", which re-runs the last command in the bash history under sudo and saves us a lot of time.

4. Working with Incron

Incron configuration is almost like a crontab setup, but the main difference is that incron monitors a directory for specific filesystem events and triggers the specified action when they occur.
Syntax: $directory $file_change_mask $command_or_action

/var/www/html/contents/ IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /home/ram/contents/ user@another_host:/home/ram/contents/
/tmp IN_ALL_EVENTS logger "/tmp action for $#"
The above example triggers an "rsync" whenever there is a change in the "/var/www/html/contents" directory; in cases where an immediate backup is needed this is really helpful. The second entry simply logs every event under /tmp ($# expands to the name of the file involved). Find more about incron here.
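Getting started looks much like cron; the package name below is the Debian/Ubuntu one, so adjust for your distro:

$ sudo apt-get install incron
$ incrontab -e    ## add entries using the syntax shown above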

5. Double dash

There are situations where we end up creating or deleting directories whose names start with a symbol. These directories cannot be removed just by using "rm -rf" or "rmdir"; we need the "double dash" (--) to delete them:
$ rm -rf -- $symbol_dir
There are also situations where you may want to create a directory whose name starts with a symbol. You can create such directories with the double dash (--) as well:
$ mkdir -- $symbol_dir

6. Comma and Braces Operators

We can do a lot with the comma and brace operators to make our lives easier when performing certain operations; let's look at a few usages:
  • Rename and backup operations with comma & braces operator
  • Pattern matching with comma & braces operator
  • Rename and backup (prefixing name) operations on long file names
To back up httpd.conf to httpd.conf.bak:
$ cp httpd.conf{,.bak}
To revert the file from httpd.conf.bak to httpd.conf:
$ mv httpd.conf{.bak,}
To copy a file, prefixing the new name with 'old-':
$ cp exampleFile old-!#^

7. Read only vim

As we all know, vim is a powerful command line editor. If you want to stick with vim, you can also use it to view files in read-only mode:
$ vim -R filename
We can also use the "view" command, which is simply read-only vim:
$ view filename 

8. Push and Pop Directories

Sometimes when we are working across various directories, looking at logs and executing scripts, we find that a lot of our time is spent navigating the directory structure. If your directory navigation resembles a stack, the pushd and popd utilities will save you lots of time:
  • Push a directory onto the stack using pushd
  • List the stacked directories using the command "dirs"
  • Pop a directory off the stack using popd
  • This is mainly used for navigating between directories; a short example follows
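A minimal walkthrough (the paths here are just examples):

$ pushd /var/log      ## cd to /var/log, remembering where we came from
$ pushd /etc          ## push again; we are now two directories deep in the stack
$ dirs -v             ## show the numbered directory stack
$ popd                ## back to /var/log
$ popd                ## back to the original directory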

9. Copy text from Linux terminal(stdin) to the system clipboard

Install xclip and create the aliases below:
$ alias pbcopy='xclip -selection clipboard'
$ alias pbpaste='xclip -selection clipboard -o'
The X Window System needs to be running for this to work. On Mac OS X, the pbcopy and pbpaste commands are available out of the box.
To Copy:
$ ls | pbcopy
To Paste:
$ pbpaste > lstxt.txt 

10. Time Machine-like Incremental Backups in Linux using rsync --link-dest

Using rsync with --link-dest means it will not recopy all of the files every single time a backup is performed. Instead, only files that have been newly created or modified since the last backup are copied; unchanged files are hard linked from the previous backup into the destination directory.
$ rsync -a --link-dest=prevbackup src dst
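In practice the backups usually go into dated directories, along these lines (paths and dates are placeholders):

## Day 1: a full copy
$ rsync -a /home/user/ /backups/2015-11-01/
## Day 2: only changed files are copied; unchanged files become hard links into day 1
$ rsync -a --link-dest=/backups/2015-11-01 /home/user/ /backups/2015-11-02/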

11. To display the ASCII art of the Process tree

Showing your processes in a tree structure is very useful for confirming the relationships between the processes running on your system. Here is an option that is available by default on most Linux systems.
$ ps aux --forest
--forest is an argument to the ps command which displays the process tree as ASCII art.

There are other commands, like 'pstree' and 'htop', that achieve the same thing.

12. Tree view of git commits

If you want to see the commits in a repo as a tree view to understand the commit history better, the option below is super helpful. It ships with git itself, and you do not need any additional packages.
$ git log --graph --oneline

13. Tee

The tee command is used to store and view the output of another command at the same time - i.e., it writes to STDOUT and to a file simultaneously. It helps when you want to view a command's output while also writing it to a file, or copying it with pbcopy.
$ crontab -l | tee crontab.backup.txt
The tee command is named after plumbing terminology for a T-shaped pipe splitter. This Unix command splits the output of a command, sending it to a file and to the terminal output. Thanks Jon for sharing this.

14. ncurses disk usage analyzer

Analysing disk usage with an ncurses interface is fast and simple with ncdu.
$ sudo apt-get install ncdu
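Once installed, just point it at a directory to browse its usage interactively (any path works; /var is only an example):

$ ncdu /var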

15. hollywood

We have all seen the hacking scenes in Hollywood movies. Yes, there is a package that will recreate that look in your terminal.
$ sudo apt-add-repository ppa:hollywood/ppa 
$ sudo apt-get update
$ sudo apt-get install hollywood
$ hollywood

End Pointers’ Favorite Liquid Galaxy Tours

The Liquid Galaxy is an open source project founded by Google and further developed by End Point along with contributions from others. It allows for “viewsyncing” multiple instances of Google Earth, Google Maps (including Street View), and other applications, configured with geometric offsets so that multiple screens can surround users of the system. It has evolved to become an ideal data visualization tool for operations, marketing, and research. It immerses users in an environment with rich satellite imagery, elevation data, oceanic data, and panoramic images.

End Point has had the opportunity to make incredible custom presentations for dozens of clients. I had a chance to connect with members of the End Point Liquid Galaxy team and learn which presentations they enjoyed making the most.

Rick Peltzman, CEO

One of the most exciting presentations we made was for my son’s 4th grade history class. They were learning about the American Revolution. So, I came up with the storyboard, and TJ in our NYC office created the presentation. He gathered documents, maps of the time, content (that the kids each took turns reading), drawings and paintings, and put them in an historical context and overlaid them on current topographical presentations. Then the “tour” went from forts to battlefields to historical locations to important cities. The teachers were able to discuss issues and gather the kids’ excited responses to the platform and what it was presenting to them that day. The experience was a big hit! It proved representative of the tremendous educational opportunities that Liquid Galaxy can provide.

Ben Witten, Project Specialist

My favorite presentation was one that I created, for fun, in preparation for the 2015 Major League Baseball Postseason. This was the very first presentation I made on the Liquid Galaxy. I appreciated the opportunity to combine creating a presentation revolving around my favorite sport, while at the same time teaching myself how to make exciting presentations in the process. I was able to combine images and overlays of the teams and players with videos of the matchup, all while creating orbits around the different postseason stadiums using the Liquid Galaxy’s Google Earth capabilities.

Ben Goldstein, President

My favorite experience on the Liquid Galaxy (or at least the one I think is most important) is seeing the XL Catlin Seaview Survey, which is creating a complete panoramic survey of the ocean’s coral reefs. It’s an amazing scientific endeavor and it’s a wonder of the world that they are documenting for humanity’s appreciation and for scientific purposes. Unfortunately, as the survey is documenting, we’re witnessing the destruction of the coral reefs of the world. What XL Catlin is doing is providing an invaluable visual data set for scientific analysis. The panoramic image data sets that the XL Catlin Seaview survey has collected, and that Google presents in Street View, show how breathtakingly beautiful the ocean’s coral reefs are when they are in good health. It is now also documenting the destruction of the coral over time because the panoramic images of the coral reefs are geospatially tagged and timestamped so the change to the coral is apparent and quantifiable.

Kiel Christofferson, Liquid Galaxy Lead Architect

The tour of all of the End Point employees still stands out in my mind, just because it’s data that represents End Point. It was created for our company’s 20th anniversary, to celebrate our staff that works all across the globe. That presentation kind of hit close to home, because it was something we made for ourselves.

Dave Jenkins, VP Business Development

The complex presentations that mix video, GIS data, and unique flight paths are really something to see. We created a sort of ‘treasure hunt’ at SXSW last year for the XPrize, where viewers entered a code on the touchscreen based on other exhibits that they had viewed. If they got the code right, the Liquid Galaxy shot them into space, but if they entered the wrong code—just a splash into the ocean!