Welcome to End Point’s blog

Ongoing observations by End Point people

Odd pg_basebackup Connectivity Failures Over SSL

A client recently came to me with an ongoing mystery: A remote Postgres replica needed to be replaced, but pg_basebackup repeatedly failed. It would stop partway through every time, reporting something along the lines of:

pg_basebackup: could not read COPY data: SSL error: decryption failed or bad record mac

The first hunch we had was to turn off SSL renegotiation, as it isn't supported properly in some OpenSSL versions. By default Postgres renegotiates the session keys after 512MB of traffic, and setting ssl_renegotiation_limit to 0 in postgresql.conf disables renegotiation entirely. That helped pg_basebackup get much further along, but they were still seeing the process bail out before completion.
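For reference, the change is a single setting in postgresql.conf followed by a reload. A minimal sketch (the data directory path is an example; note the parameter was removed entirely in PostgreSQL 9.5, so this only applies to older versions):

```
## postgresql.conf
ssl_renegotiation_limit = 0
```

followed by `pg_ctl reload -D /path/to/data` (or a service reload) so the running server picks it up.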

The client's Chef has a strange habit of removing my ssh key from the database master, so while that was being fixed I connected in and took a look at the replica. Two pg_basebackup runs later, a pattern started to emerge:
$ du -s 9.2/data.test*
67097452        9.2/data.test
67097428        9.2/data.test2
Besides being nearly identical, those sizes are also suspiciously close to 64 GiB. I like round numbers; when a problem happens close to one, that's often a pretty good tell of some boundary or limit. On a hunch that it wasn't a coincidence I checked around for any similar references and found a recent openssl package bug report:
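A quick check of that hunch: du -s here reports 1 KiB blocks, and 64 GiB expressed in that unit is:

```shell
# 64 GiB in 1 KiB blocks, the unit du -s reported above
echo $((64 * 1024 * 1024))   # 67108864; both runs died just short of this
```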

RHEL 6, check. SSL connection, check. Failure at 64 GiB, check. And lastly, a connection with psql confirmed AES-GCM:
SSL connection (cipher: DHE-RSA-AES256-GCM-SHA384, bits: 256)

Once the Postgres service could be restarted to load in the updated OpenSSL library, the base backup process completed without issue.

Remember, keep those packages updated!

Broken wikis due to PHP and MediaWiki "namespace" conflicts

I was recently tasked with resurrecting an ancient wiki. In this case, a wiki last updated in 2005, running MediaWiki version 1.5.2, which needed to be transformed into something more modern (in this case, version 1.25.3). The old settings and extensions were not important, but we did want to preserve the content that had been created.

The items available to me were a tarball of the mediawiki directory (including the LocalSettings.php file), and a MySQL dump of the wiki database. To import the items to the new wiki (which already had been created and was gathering content), an XML dump needed to be generated. MediaWiki has two simple command-line scripts to export and import your wiki, named dumpBackup.php and importDump.php. So it was simply a matter of getting the wiki up and running enough to run dumpBackup.php.

My first thought was to simply bring the wiki up as it was - all the files were in place, after all, and specifically designed to read the old version of the schema. (Because the database schema changes over time, newer MediaWiki versions cannot run against older database dumps.) So I unpacked the MediaWiki directory, and prepared to resurrect the database.

Rather than MySQL, the distro I was using defaulted to using the freer and arguably better MariaDB, which installed painlessly.

## Create a quick dummy database:
$ echo 'create database footest' | sudo mysql

## Install the 1.5.2 MediaWiki database into it:
$ cat mysql-acme-wiki.sql | sudo mysql footest

## Sanity test as the output of the above commands is very minimal:
$ echo 'select count(*) from revision' | sudo mysql footest

Success! The MariaDB instance was easily able to parse and load the old MySQL file. The next step was to unpack the old 1.5.2 mediawiki directory into Apache's docroot, adjust the LocalSettings.php file to point to the newly created database, and try and access the wiki. Once all that was done, however, both the browser and the command-line scripts spat out the same error:

Parse error: syntax error, unexpected 'Namespace' (T_NAMESPACE), 
  expecting identifier (T_STRING) in 
  /var/www/html/wiki/includes/Namespace.php on line 52

What is this about? Turns out that some years ago, someone added a class to MediaWiki with the terrible name of "Namespace". Years later, PHP finally caved to user demands and added some non-optimal support for namespaces, which means that (surprise), "namespace" is now a reserved word. In short, older versions of MediaWiki cannot run with modern (5.3.0 or greater) versions of PHP. Amusingly, a web search for this error on DuckDuckGo revealed not only many people asking about this error and/or offering solutions, but many results were actual wikis that are currently not working! Thus, their wiki was working fine one moment, and then PHP was (probably automatically) upgraded, and now the wiki is dead. But DuckDuckGo is happy to show you the wiki and its now-single page of output, the error above. :)

There are three groups to blame for this sad situation, as well as three obvious solutions to the problem. The first group to share the blame, and the most culpable, is the MediaWiki developers who chose the word "Namespace" as a class name. As PHP has always had poor (or nonexistent) support for packages, namespaces, and scoping, it is vital that all your PHP variables, class names, etc. are as unique as possible. To that end, the name of the class was changed at some point to "MWNamespace" - but the damage had been done. The second group to share the blame is the PHP developers, both for not having namespace support for so long, and for making it into a reserved word knowing full well that one of the poster children for "mature" PHP apps, MediaWiki, was using "Namespace". Still, we cannot blame them too much for picking what is a pretty obvious word choice. The third group to blame is the owners of all those wikis out there that are still suffering that syntax error. They ought to be repairing their wikis. The fixes are pretty simple, which leads us to the three solutions to the problem.

MediaWiki's cool install image

The quickest (and arguably worst) solution is to downgrade PHP to something older than 5.3. At that point, the wiki will probably work again. Unless it's a museum (static) wiki, and you do not intend to upgrade anything on the server ever again, this solution will not work long term. The second solution is to upgrade your MediaWiki! The upgrade process is actually very robust and works well even for very old versions of MediaWiki (as we shall see below). The third solution is to make some quick edits to the code to replace all uses of "Namespace" with "MWNamespace". Not a good solution, but ideal when you just need to get the wiki up and running. Thus, it's the solution I tried for the original problem.
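That rename is mostly mechanical. A rough sketch, demonstrated here on a scratch file rather than a real wiki tree (GNU sed assumed; a real install has many files under includes/, so back up first):

```shell
# Demo of the rename on a throwaway tree; on a real wiki, run the same
# grep/sed pipeline from the MediaWiki root directory after a backup.
mkdir -p /tmp/oldwiki/includes
echo 'class Namespace { }' > /tmp/oldwiki/includes/Namespace.php
# \b word boundaries keep an existing MWNamespace from being mangled
grep -rl 'Namespace' /tmp/oldwiki | xargs sed -i 's/\bNamespace\b/MWNamespace/g'
cat /tmp/oldwiki/includes/Namespace.php   # class MWNamespace { }
```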

However, once I solved the Namespace problem by renaming to MWNamespace, some other problems popped up. I will not run through them here - although they were small and quickly solved, it began to feel like a neverending whack-a-mole game, and I decided to cut the Gordian knot with a completely different approach.

As mentioned, MediaWiki has an upgrade process, which means that you can install the software and it will, in theory, transform your database schema and data to the new version. However, version 1.5 of MediaWiki was released in October 2005, almost exactly 10 years ago from the current release (1.25.3 as of this writing). Ten years is a really, really long time on the Internet. Could MediaWiki really convert something that old? (spoilers: yes!). Only one way to find out. First, I prepared the old database for the upgrade. Note that all of this was done on a private local machine where security was not an issue.

## As before, install MariaDB and import into the 'footest' database:
$ echo 'create database footest' | sudo mysql
$ cat mysql-acme-wiki.sql | sudo mysql footest
$ echo "set password for 'root'@'localhost' = password('foobar')" | sudo mysql

Next, I grabbed the latest version of MediaWiki, verified it, put it in place, and started up the webserver:

$ wget
$ wget

$ gpg --verify mediawiki-1.25.3.tar.gz.sig 
gpg: assuming signed data in `mediawiki-1.25.3.tar.gz'
gpg: Signature made Fri 16 Oct 2015 01:09:35 PM EDT using RSA key ID 23107F8A
gpg: Good signature from "Chad Horohoe "
gpg:                 aka " "
gpg:                 aka "Chad Horohoe (Personal e-mail) "
gpg:                 aka "Chad Horohoe (Alias for existing email) "
## Chad's cool. Ignore the below.
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 41B2 ABE8 17AD D3E5 2BDA  946F 72BC 1C5D 2310 7F8A

$ tar xvfz mediawiki-1.25.3.tar.gz
$ mv mediawiki-1.25.3 /var/www/html/
$ cd /var/www/html/mediawiki-1.25.3
## Because "composer" is a really terrible idea:
$ git clone 
$ sudo service httpd start

Now, we can call up the web page to install MediaWiki.

  • Visit http://localhost/mediawiki-1.25.3, see the familiar yellow flower
  • Click "set up the wiki"
  • Click next until you find "Database name", and set to "footest"
  • Set the "Database password:" to "foobar"
  • Aha! Look what shows up: "Upgrade existing installation" and "There are MediaWiki tables in this database. To upgrade them to MediaWiki 1.25.3, click Continue"

It worked! Next messages are: "Upgrade complete. You can now start using your wiki. If you want to regenerate your LocalSettings.php file, click the button below. This is not recommended unless you are having problems with your wiki." That message is a little misleading. You almost certainly *do* want to generate a new LocalSettings.php file when doing an upgrade like this. So say yes, leave the database choices as they are, and name your wiki something easily greppable like "ABCD". Create an admin account, save the generated LocalSettings.php file, and move it to your mediawiki directory.

At this point, we can do what we came here for: generate an XML dump of the wiki content in the database, so we can import it somewhere else. We only wanted the actual content, and did not want to worry about the history of the pages, so the command was:

$ php maintenance/dumpBackup.php --current >

It ran without a hitch. However, close examination showed that it had an amazing amount of unwanted stuff from the "MediaWiki:" namespace. While there are probably some clever solutions that could be devised to cut them out of the XML file (either on export, import, or in between), sometimes quick beats clever, and I simply opened the file in an editor and removed all the "page" sections with a title beginning with "MediaWiki:". Finally, the file was shipped to the production wiki running 1.25.3, and the old content was added in a snap:

$ php maintenance/importDump.php

The script will recommend rebuilding the "Recent changes" page by running rebuildrecentchanges.php (can we get consistentCaps please MW devs?). However, this data is at least 10 years old, and Recent changes only goes back 90 days by default in version 1.25.3 (and even shorter in previous versions). So, one final step:

## 20 years should be sufficient
$ echo '$wgRCMaxAge = 20 * 365 * 24 * 3600;' >> LocalSettings.php
$ php maintenance/rebuildrecentchanges.php

Voila! All of the data from this ancient wiki is now in place on a modern wiki!

Liquid Galaxy at UNESCO in Paris

The National Congress of Industrial Heritage of Japan (NCoIH) recently deployed a Liquid Galaxy at UNESCO Headquarters in Paris, France. The display showed several locations throughout southern Japan that were key to her rapid industrialization in the late 19th and early 20th century. Over the span of 30 years, Japan went from an agrarian society dominated by Samurai still wearing swords in public to an industrial powerhouse, forging steel and building ships that would eventually form a world-class navy and an industrial base that still dominates many leading global industries.

End Point assisted by supplying the servers, frame, and display hardware for this temporary installation. The NCoIH supplied panoramic photos, historical records, and location information. Together using our Roscoe Content Management Application, we built out presentations that guided the viewer through several storylines for each location: viewers could see the early periods of Trial & Error and then later industrial mastery, or could view the locations by technology: coal mining, shipbuilding, and steel making. The touchscreen interface was custom-designed to allow a self-exploration among these storylines, and also showed thumbnail images of each scene in the presentations that, when touched, brought the viewer directly to that location and showed a short explanatory text, historical photos, as well as transitioning directly into Google Street View to show the preserved site.

From a technical point of view, End Point debuted several new features with this deployment:

  • New scene control and editing functionalities in the Roscoe Content Management System
  • A new touchscreen interface that shows presentations and scenes within a presentation in a compact, clean layout
  • A new Street View interface that allows the "pinch and zoom" map navigation that we all expect from our smart phones and tablets
  • Debut of the new ROS-based operating system, including new ROS-nodes that can control Google Earth, Street View, panoramic content viewers, browser windows, and other interfaces
  • Deployment of some very nice NEC professional-grade displays

Overall, the exhibit was a great success. Several diplomats from European, African, Asian, and American countries came to the display, explored the sites, and expressed their wonderment at the platform's ability to bring a given location and history into such vivid detail. Japan recently won recognition for these sites from the overall UNESCO governing body, and this exhibit was a chance to show those locations back to the UNESCO delegates.

From here, the Liquid Galaxy will be shipped to Japan where it will be installed permanently at a regional museum, hopefully to be joined by a whole chain of Liquid Galaxy platforms throughout Japan showing her rich history and heritage to museum visitors.

Taking control of your IMAP mail with IMAPFilter

Organizing and dealing with incoming email can be tedious, but with IMAPFilter's simple configuration syntax you can automate any action that you might want to perform on an email and focus your attention on the messages that are most important to you.

Most desktop and mobile email clients include support for rules or filters to deal with incoming mail messages but I was interested in finding a client-agnostic solution that could run in the background, processing incoming messages before they ever reached my phone, tablet or laptop. Configuring a set of rules in a desktop email client isn't as useful when you might also be checking your mail from a web interface or mobile client; either you need to leave your desktop client running 24/7 or end up with an unfiltered mailbox on your other devices.

I've configured IMAPFilter to run on my home Linux server and it's doing a great job of processing my incoming mail, automatically sorting things like newsletters and automated Git commit messages into separate mailboxes and reserving my inbox for higher priority incoming mail.

IMAPFilter is available in most package managers and easily configured with a single ~/.imapfilter/config.lua file. A helpful example config.lua is available in IMAPFilter's GitHub repository and is what I used as the basis for my personal configuration.
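One simple way to keep it processing mail unattended is a cron entry; something along these lines works, assuming the default config location and a typical binary path (adjust both for your system):

```
## Run IMAPFilter every ten minutes; it reads ~/.imapfilter/config.lua by default
*/10 * * * * /usr/bin/imapfilter
```

IMAPFilter can also run continuously and react to new mail via IMAP IDLE, but cron is the simplest place to start.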

A few of my favorite IMAPFilter rules (where 'endpoint' is configured as my work IMAP account):

-- Mark daily timesheet reports as read, move them into a Timesheets archive mailbox
-- (the archive mailbox name here is illustrative)
timesheets = endpoint['INBOX']:contain_from('')
timesheets:mark_seen()
timesheets:move_messages(endpoint['Timesheets'])

-- Sort newsletters into newsletter-specific mailboxes
jsweekly = endpoint['INBOX']:contain_from('')
jsweekly:move_messages(endpoint['Newsletters/JavaScript Weekly'])

hn = endpoint['INBOX']:contain_from('')
hn:move_messages(endpoint['Newsletters/Hacker Newsletter'])

Note that IMAPFilter will create missing mailboxes when running 'move_messages', so you don't need to set those up ahead of time. These are basic examples but the sample config.lua is a good source of other filter ideas, including combining messages matching multiple criteria into a single result set.

In addition to these basic rules, IMAPFilter also supports more advanced configurations including the ability to perform actions on messages based on the results of passing their content through an external command. This opens up possibilities like performing your own local spam filtering by sending each message through SpamAssassin and moving messages into spam mailboxes based on the exit codes returned by spamc. As of this writing I'm still in the process of training SpamAssassin to reliably recognize spam vs. ham but hope to integrate its spam detection into my own IMAPFilter configuration soon.

Biennale Arte 2015 Liquid Galaxy Installation

If there is anyone who doesn’t know about the incredible collections of art that the Google Cultural Institute has put together, I would urge them to visit and be overwhelmed by their indoor and outdoor Street View tours of some of the world’s greatest museums. Along these same lines, the Cultural Institute recently finished doing a Street View capture of the interior of 70 pavilions representing 80 countries of the Biennale Arte 2015, in Venice, Italy. We, at End Point, were lucky enough to be asked to come along for the ride: Google decided that not only would this Street View version of the Biennale be added to the Cultural Institute’s collection, but that they would install a Liquid Galaxy at the Biennale headquarters, at Ca’ Giustinian on the Grand Canal, where visitors can actually use the Liquid Galaxy to navigate through the installations. Since the pavilions close in November 2015, and the Galaxy is slated to remain open until the end of January 2016, this will permit art lovers who missed the Biennale to experience it in a way that is astoundingly firsthand.

End Point basically faced two challenges during the Liquid Galaxy Installations for the Cultural Institute. The first challenge was to develop a custom touch screen that would allow users to easily navigate/choose among the many pavilions. Additionally, wanting to mirror the way the Google Cultural Institute presents content, both online, as well as on the wall at their Paris office, we decided to add a swipe-able thumbnail runway to the touch screen map which would appear once a given pavilion was chosen.

As we took on this project, it became evident to our R&D team that ordinary Street View wasn't really the ideal platform for indoor pavilion navigation because of the sheer size and scope of the pavilions. For this reason, our team decided that a ROS-based spherical Street View would provide a much smoother navigating experience. The new Street View viewer draws Street View tiles inside a WebGL sphere. This is a dramatic performance and visual enhancement over the old Maps API based viewer, and can now support spherical projection, hardware acceleration, and seamless panning. For a user in the multi-screen Liquid Galaxy setting, this means, for the first time, being able to roll the view vertically as well as horizontally, and zoom in and out, with dramatically improved frame rates. The result was such a success that we will be rolling out this new Street View to our entire fleet.

The event itself consisted of two parts: at noon, Luisella Mazza, Google’s Head of Country Operations at the Cultural Institute, gave a presentation to the international press; as a result, we have already seen coverage emerge in ANSA, L'Arena, and more. This was followed by a 6 PM closed-door presentation to the Aspen Institute.

Using the Liquid Galaxy and other supports from the exhibition, Luisella spoke at length about the role of culture in what Google refers to as the “digital transformation”.

The Aspen Institute is very engaged with these questions of “whitherto”, and Luisella’s presentation was followed by a long, and lively, round table discussion on the subject.

We were challenged to do something cool here and we came through in a big way: our touchscreen design and functionality are the stuff of real creative agency work, and meeting the technical challenge of making Street View perform in a new and enhanced way not only made for one very happy client, but is the kind of technical breakthrough that we all dream of. And how great that we got to do it all in Venice and be at the center of the action!

Top 15 Best Unix Command Line Tools

Here are some of the Unix command line tools which we feel make our hands faster and our lives easier. Let’s go through them in this post, and make sure to leave a comment with your favourite!

1. Find the command that you are unaware of

In many situations we need to perform a command line operation, but we might not know the right utility to run. The apropos command searches the given keyword against the short descriptions in the Unix manual pages and returns a list of commands that may accomplish our task.

If you cannot find the right utility, then Google is our friend :)

$ apropos "list dir"
$ man -k "find files"

2. Fix typos in our commands

It's normal to make typographical errors when we type fast. Consider a situation where we run a command with a long list of arguments, it returns "command not found", and we notice a typo in the command. We really do not want to retype the long list of arguments; instead, we can use caret substitution to correct the typo and re-execute:
$ ^typo_cmd^correct_cmd
$ dc /tmp
$ ^dc^cd
The above will navigate to the /tmp directory.

3. Bang and its Magic

The bang (!) is quite useful when we want to replay commands from the bash history. Bang helps by letting you execute commands in history easily when you need them:
  • !! --> Execute the last executed command in the bash history
  • !* --> Execute the command with all the arguments passed to the previous command
  • !^ --> Get the first argument of the last executed command in the bash history
  • !$ --> Get the last argument of the last executed command in the bash history
  • !N --> Execute the command at position N in the bash history
  • !?keyword? --> Execute the most recent command in the bash history matching the specified keyword
  • !-N --> Execute the command that was N positions from the last in the bash history
$ ~/bin/lg-backup
$ sudo !!
In the above example we didn't realize that the lg-backup command had to be run with "sudo". Instead of typing the whole command again, we can just use "sudo !!", which re-runs the last executed command in the bash history as sudo and saves us a lot of time.

4. Working with Incron

Incron configuration is almost like a crontab setup, but the main difference is that incron monitors a directory for specific changes and triggers the specified action when they occur.
Syntax: $directory $file_change_mask $command_or_action

/var/www/html/contents/ IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /home/ram/contents/ user@another_host:/home/ram/contents/
/tmp IN_ALL_EVENTS logger "/tmp action for $#"
The above example triggers an "rsync" whenever there is a change in the "/var/www/html/contents" directory. For immediate backup implementations this can be really helpful. Find more about incron here.

5. Double dash

There are situations where we end up creating or deleting directories whose names start with a symbol such as a dash. Such directories can not be removed by just using "rm -rf" or "rmdir", since the name gets parsed as an option. The "double dash" (--) marks the end of options, so we can use it to delete such directories:
$ rm -rf -- $symbol_dir
The same trick lets you create a directory whose name starts with a symbol:
$ mkdir -- $symbol_dir

6. Comma and Braces Operators

We can do a lot with the comma and braces operators to make our lives easier when performing some operations. Let's see a few usages:
  • Rename and backup operations with comma & braces operator
  • Pattern matching with comma & braces operator
  • Rename and backup (prefixing name) operations on long file names
To backup the httpd.conf to httpd.conf.bak
$ cp httpd.conf{,.bak}
To revert the file from httpd.conf.bak to httpd.conf
$ mv httpd.conf{.bak,}
To rename the file with prefixing 'old'
$ cp exampleFile old-!#^

7. Read only vim

As we all know, vim is a powerful command line editor. We can also use vim to view files in read only mode if you want to stick to vim
$ vim -R filename
We can also use the "view" tool which is nothing but read only vim
$ view filename 

8. Push and Pop Directories

Sometimes when we are working across various directories, looking at logs and executing scripts, we find a lot of our time is spent navigating the directory structure. If your directory navigation resembles a stack structure, the pushd and popd utilities will save you lots of time:
  • Push a directory onto the stack with pushd
  • List the stacked directories with the command "dirs"
  • Pop a directory off the stack with popd
  • This is mainly used for navigating between directories
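A quick sketch of the stack in action (the paths are just examples):

```shell
mkdir -p /tmp/demo/logs /tmp/demo/scripts
cd /tmp/demo/logs
pushd /tmp/demo/scripts > /dev/null   # jump to scripts/, remembering logs/
echo "$PWD"                           # /tmp/demo/scripts
dirs                                  # show the stack: scripts/ on top, logs/ below
popd > /dev/null                      # pop back to where we started
echo "$PWD"                           # /tmp/demo/logs
```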

9. Copy text from Linux terminal(stdin) to the system clipboard

Install xclip and create the aliases below:
$ alias pbcopy='xclip -selection clipboard'
$ alias pbpaste='xclip -selection clipboard -o'
The X Window System needs to be running for these to work. On Mac OS X, the pbcopy and pbpaste commands are readily available to you.
To Copy:
$ ls | pbcopy
To Paste:
$ pbpaste > lstxt.txt 

10. TimeMachine like Incremental Backups in Linux using rsync --link-dest

With --link-dest, rsync will not recopy all of the files every single time a backup is performed. Instead, only the files that have been newly created or modified since the last backup are copied, while unchanged files are hard linked from the previous backup into the destination directory.
$ rsync -a --link-dest=prevbackup src dst

11. To display the ASCII art of the Process tree

Showing your processes in a tree structure is very useful for confirming the relationship between every process running on your system. Here is an option which is available by default on most of the Linux systems.
$ ps aux --forest
--forest is an argument to the ps command which displays the process tree as ASCII art.

There are many commands available like 'pstree', 'htop' to achieve the same thing.

12. Tree view of git commits

If you want to see git commits in a repo as tree view to understand the commit history better, the below option will be super helpful. This is available with the git installation and you do not need any additional packages.
$ git log --graph --oneline

13. Tee

The tee command is used to store and view (at the same time) the output of any other command: it writes to STDOUT and to a file simultaneously. It helps when you want to view a command's output and at the same time save it to a file, or pipe it on to something like pbcopy.
$ crontab -l | tee crontab.backup.txt
The tee command is named after plumbing terminology for a T-shaped pipe splitter. This Unix command splits the output of a command, sending it to a file and to the terminal output. Thanks Jon for sharing this.

14. ncurses disk usage analyzer

ncdu analyses disk usage with an ncurses interface, and is fast and simple to use.
$ sudo apt-get install ncdu

15. hollywood

We have all seen the hacking scenes in Hollywood movies. Yes, there is a package which will create that spectacle in your terminal:
$ sudo apt-add-repository ppa:hollywood/ppa 
$ sudo apt-get update
$ sudo apt-get install hollywood
$ hollywood

End Pointers’ Favorite Liquid Galaxy Tours

The Liquid Galaxy is an open source project founded by Google and further developed by End Point along with contributions from others. It allows for “viewsyncing” multiple instances of Google Earth and Google Maps (including Street View) and other applications that are configured with geometric offsets that allow multiple screens to be set up surrounding users of the system. It has evolved to become an ideal data visualization tool for operations, marketing, and research. It immerses users in an environment with rich satellite imagery, elevation data, oceanic data, and panoramic images.

End Point has had the opportunity to make incredible custom presentations for dozens of clients. I had a chance to connect with members of the End Point Liquid Galaxy team and learn about which presentations they enjoyed making the most.

Rick Peltzman, CEO

One of the most exciting presentations we made was for my son’s 4th grade history class. They were learning about the American Revolution. So, I came up with the storyboard, and TJ in our NYC office created the presentation. He gathered documents, maps of the time, content (that the kids each took turns reading), drawings and paintings, and put them in an historical context and overlaid them on current topographical presentations. Then the “tour” went from forts to battlefields to historical locations to important cities. The teachers were able to discuss issues and gather the kids’ excited responses to the platform and what it was presenting to them that day. The experience was a big hit! It proved representative of the tremendous educational opportunities that Liquid Galaxy can provide.

Ben Witten, Project Specialist

My favorite presentation was one that I created, for fun, in preparation for the 2015 Major League Baseball Postseason. This was the very first presentation I made on the Liquid Galaxy. I appreciated the opportunity to combine creating a presentation revolving around my favorite sport, while at the same time teaching myself how to make exciting presentations in the process. I was able to combine images and overlays of the teams and players with videos of the matchup, all while creating orbits around the different postseason stadiums using the Liquid Galaxy’s Google Earth capabilities.

Ben Goldstein, President

My favorite experience on the Liquid Galaxy (or at least the one I think is most important) is seeing the XL Catlin Seaview Survey, which is creating a complete panoramic survey of the ocean’s coral reefs. It’s an amazing scientific endeavor and it’s a wonder of the world that they are documenting for humanity’s appreciation and for scientific purposes. Unfortunately, as the survey is documenting, we’re witnessing the destruction of the coral reefs of the world. What XL Catlin is doing is providing an invaluable visual data set for scientific analysis. The panoramic image data sets that the XL Catlin Seaview survey has collected, and that Google presents in Street View, show how breathtakingly beautiful the ocean’s coral reefs are when they are in good health. It is now also documenting the destruction of the coral over time because the panoramic images of the coral reefs are geospatially tagged and timestamped so the change to the coral is apparent and quantifiable.

Kiel Christofferson, Liquid Galaxy Lead Architect

The tour of all of the End Point employees still stands out in my mind, just because it’s data that represents End Point. It was created for our company’s 20th anniversary, to celebrate our staff that works all across the globe. That presentation kind of hit close to home, because it was something we made for ourselves.

Dave Jenkins, VP Business Development

The complex presentations that mix video, GIS data, and unique flight paths are really something to see. We created a sort of ‘treasure hunt’ at SXSW last year for the XPrize, where viewers entered a code on the touchscreen based on other exhibits that they had viewed. If they got the code right, the Liquid Galaxy shot them into space, but if they entered the wrong code—just a splash into the ocean!

Top 7 Funniest Perl Modules

And now for something completely different ...

Programmers in general, and Perl programmers in particular, seem to have excellent, if warped, senses of humor. As a result, the CPAN library is replete with modules that have oddball names, or strange and wonderful purposes, or in some delightful cases -- both!

Let's take a look.

  1. Bone::Easy
    I'm going to take the coward's way out on this one right away. Go see for yourself, or don't.
  2. Acme::EyeDrops
    Really, anything in the Acme::* (meaning "perfect") namespace is just programmer-comedy gold, depending on what you find amusing and what is just plain forehead-smacking stupid to you. This one allows you to transform your Perl programs (small ones work better) from this:
    print "hello world\n";
    to this:
    Oh, that's not just a picture of a camel. That's actual Perl code; you can run that, and it executes in the exact same way as the original one-liner. So much more stylish. Plus, you can impress your boss/cow-orker/heroic scientist boyfriend.
  3. common::sense
    This one makes the list because (a) it is just so satisfying to see
      use common::sense;
    atop a Perl program, and (b) a citation of this on our company IRC chat is what planted the seed for this article.

    Another is sanity, as in "use sanity;". Seems like a good approach.
  4. Silly::Werder
    Not a terribly interesting name, but it produces some head-scratching output. For instance,
    Broringers isess ailerwreakers paciouspiris dests bursonsinvading buggers companislandet despa ascen?
    I suppose you might use this to generate some Lorem ipsum-type text, or maybe temporary passwords? Dialog for your science fiction novel?
  5. Any module with the word "Moose" in it. "Moose" is a funny word.
  6. D::oh
    The humor here is a bit obscure: you have to have been around for Perl 4-style namespace addressing, when the single quote ' was the package separator instead of ::, and you would have loaded this via:
    use D'oh;
  7. your
    As in:
    use your qw($wits %head @tools);
    Here the name is the funny bit; the module itself is all business.

Well, that seems like enough to get you started. If you find others, post them here in the comments!

Updating rbenv, ruby-build on Ubuntu: ruby version not found

Hi! Steph here, former long-time End Point employee now blogging from afar as a software developer for Pinhole Press. While I’m no longer an employee of End Point, I’m happy to blog and share here.

A while back, I was in the middle of upgrading Piggybak, an open source Ruby on Rails platform developed and supported by End Point, and I came across a quick error that I thought I'd share.

I develop locally and I use rbenv on Ubuntu. I need to jump from Ruby 1.9.3 to Ruby 2.1.1 in this upgrade. When I attempt to run rbenv install 2.1.1, I see errors reporting ruby-build: definition not found: 2.1.1, meaning that rbenv and ruby-build (a plugin used with rbenv to ease installation) do not include version 2.1.1 in the available versions. My version of rbenv is out of date, so this isn't surprising. But how do I fix it?

I found many directions for updating rbenv and ruby-build with Homebrew via Google, but that doesn't apply here. Most of the instructions point to running a git pull on rbenv (probably located in ~/.rbenv), but give no references to upgrading ruby-build.

cd ~/.rbenv
git pull

I did a bit of experimenting and simply tried pulling to update the ruby-build plugin (also a git repo):

cd ~/.rbenv/plugins/ruby-build/
git pull

And tada - that was all that was needed. rbenv install -l now includes ruby 2.1.1, and I can install it with rbenv install 2.1.1.

End Point’s 20th anniversary meeting, part 2

Friday, October 2nd, was the second and final day of our company meeting. (See the earlier report on day 1 of our meeting if you missed it.) It was another busy day of talks, kicked off by Ben Goldstein, who gave us a more detailed rundown of End Point's roots.

The History of End Point

Ben and Rick met in the second or third grade (a point of friendly dispute), and from the early days of their friendship were both heavily influenced by each other's parents. Their first business enterprise together was painting houses in the summer to earn money for college.

After attending college, Ben worked with Unix and dabbled with the World Wide Web when it was brand new. Rick worked on Wall Street for a while, then decided he had had enough of that and worked briefly in real estate, then left to pursue more creative interests.

Ben showed Rick some simple websites he had been working on and Rick said that is what they should do: they should start a business building websites together. Soon they made the big decision and End Point was officially incorporated on August 8, 1995. Their earliest clients were all found by word of mouth, with the first website being made for one of Ben's cousins.

At first they made only static websites. But Ben had worked with Oracle databases and knew some scripting languages, so the possibility of making dynamic data-driven web applications on the server seemed within reach. They met someone who had been scanning wine labels and putting the data into a Mini SQL (mSQL) database. Ben wrote some Perl scripts and soon had created End Point's first dynamic website.

Rick met an employee of Michael C. Fina, a company that did wedding registries and wanted to move to the web. Ben got started working on that in 1998. Around the same time, he found the open source MiniVend web application framework, exactly what he needed for a project like that, which would be much more than a few CGI scripts.

Once End Point's early dynamic websites went into production, Ben wanted to grow more solid hosting and support services. After working with a few independent consultants who were a little too fly-by-night, he went to Akopia for help. Akopia had just acquired MiniVend and renamed it to Interchange. They brought Mike Heins, the creator of MiniVend, on board, and were building out a support and hosting business around Interchange.

Before long, Akopia was acquired by Red Hat, and Ben met Jon Jensen there while getting his help with Interchange and Linux questions. Later, when Red Hat was phasing out its Interchange consulting group, Ben offered Jon a job, and Jon introduced Ben and Rick to his co-worker Mark Johnson, who was an expert in all things database, Perl, and Interchange. Rick and Ben hired both Jon and Mark in 2002, and End Point continued to grow with new clients and soon more employees as well.

The story continues with End Point moving into PostgreSQL support, Ruby on Rails development, AFS support, the creation of Spree Commerce, programming with Python & Django, Java, PHP, Node.js, AngularJS, Puppet and Chef and Ansible, and a major move into the Liquid Galaxy world. By then things were documented a little better thanks to wikis and blogs, so Ben was able to keep to the highlights.

A lot happens in a business in 20 years!

Using Trello

Next Josh Ausborne talked about how we make our lives easier by tracking tasks with Trello, a popular software-as-a-service offering. At End Point we use Trello as one way to keep track of what we're working on in a project, along with other systems used for certain projects or preferred by our various clients.

Most work tracking systems store data about progress and status, but Trello's strength is that it provides a nice way to look at things as a whole and to streamline collaboration. Trello is simple and easy to use, comes with just enough features to be helpful but not to overwhelm, has great apps for Android and iOS, and costs nothing to use for almost all functionality.

Using Trello is simple. It's made up of "boards", each of which contain lists of "cards". Each card can be used to represent a task or small project. People can be assigned to a card, watch it for notifications, comment, create checklists, upload images, share links, and more.

Cards are organized into lists, where they can be grouped by status, priority, person, or any other way you choose. A popular arrangement is a "Kanban"-style board with one list each for "Ideas", "To do", "Blocked", "Doing", and "Done/Review". Nearly everything can be organized or moved with simple drag-and-drop gestures.

Automated Hosting

Lele Calò and Richard Templet talked about automated versus manual infrastructure management. At the beginning of the web era, system administrators did everything by hand. They soon moved on to a “shell for-loop” style of system administration, but many things were still done by hand, often in ways incompatible between systems. That’s where automation comes in. With tools like Puppet, Chef, Salt, and Ansible, it becomes easy to automate much of the configuration across many servers, even across different operating system distributions and versions.

So what should automation be used for? Mainly repetitive tasks that don’t require human touch. A lot of things in server setup and update deployment are easily done once, but become tedious very quickly.

What does End Point use automation for? We use it in our web hosting environment for initial operating system setup on new servers, managing changes to SSH public key lists and iptables firewall rules, and deploying monitoring configurations. For certain applications, we automate building, deploying, and updating entire systems with consistent configuration across many hundreds of nodes. We use Puppet, Chef, and Ansible for various internal and customer projects.
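The idempotent style these tools encourage is easy to sketch in plain shell. This hypothetical fragment (the key and file path are placeholders, not anything from our actual configuration) appends an SSH public key only if it is missing, so running it repeatedly changes nothing:

```shell
# Hypothetical sketch of an idempotent configuration step, like the ones
# a Puppet/Chef/Ansible run performs for SSH key management.
# The key and path below are placeholders.
KEY="ssh-ed25519 AAAAC3-placeholder admin@example.com"
AUTH=/tmp/demo_authorized_keys

touch "$AUTH"
# Append the key only if an exact-match line is not already present.
grep -qxF "$KEY" "$AUTH" || echo "$KEY" >> "$AUTH"
wc -l < "$AUTH"
```

Running the fragment a second time still reports a single line; that repeat-safety is what lets automation tools converge hundreds of servers to the same state.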

For those who are looking to get started, Lele and Richard recommended starting with automation on new servers. It's very simple and safe to experiment there, as there isn't yet anything to lose. Later, once you're confident in what you're doing, you can start to carefully spread your automation to existing servers.

Command Line Tools

Kannan Ponnusamy and Ram Kuppuchamy showed us some of their favorite Unix command-line tools. Here are some of the cool things I liked.

You can use ^ (caret) to correct typos in the previous command, like so:

user@host $ cd Donloads
cd: no such file or directory: Donloads
user@host $ ^on^own
cd Downloads
user@host:~/Downloads $

Use ! ("bang") commands to access commands and arguments in the history:

  • !! - entire previous command
  • !* - all arguments of previous command
  • !^ - first argument of previous command
  • !$ - last argument of previous command
  • !N - command at position N in history
  • !?keyword? - most recent command with pattern match of keyword
  • !-N - command at Nth position from last in history

Using Ctrl-R will do a reverse search of your command history, letting you see and edit old commands. If you then press Ctrl-O on a command from the history, it executes that command and puts the following one from the history into the prompt. Additional presses of Ctrl-O continue down the history.

The ps --forest option creates a visual ASCII art tree of the process hierarchy. Likewise, Git has git log --graph, which shows a visual representation of the repository history. Try using git log --oneline in addition to --graph to make it a little more concise.

tee $filename lets you send output to STDOUT and a file at the same time. For example, crontab -l | tee crontab_backup.txt will print the crontab and also save it to a text file.

ls -d */ will list all directories in the current directory.
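A quick way to try a couple of these together (a scratch-directory sketch with placeholder paths and a made-up crontab line, not output from a real system):

```shell
# Scratch-directory demo of tee and ls -d */ (paths are placeholders).
mkdir -p /tmp/clt_demo/subdir
cd /tmp/clt_demo

# tee prints to STDOUT and writes the same data to a file.
echo "0 5 * * * /usr/bin/backup" | tee crontab_backup.txt

# ls -d */ matches only directories, so files like crontab_backup.txt
# are excluded from the listing.
ls -d */
```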

These are just a few of the neat things they showed us. See their blog post about these and other Unix commands.

ROS in the Liquid Galaxy

Wojciech Ziniewicz and Matt Vollrath gave us a preview of their talk “ROS-driven user applications in idempotent environments”, to be presented at ROSCon 2015 in Hamburg, Germany, a few days later. The Liquid Galaxy project recently transitioned away from ad-hoc services and protocols to ROS (Robot Operating System), and their presentation slides give a good idea of how much was involved in that process.

State of the Company

Next, Rick gave a talk on the current state of the company, which he summarized with one word: Transition. End Point is a company that has been changing since its inception in 1995, and now is no exception. A major transition over the last year or so has been growing to a head-count of 50 people. While we are in many ways similar to when we were, say, 30 people, a larger team requires different approaches to management and coordination.

A larger End Point presents us with both opportunities and challenges. However, the core values of our company have remained the same and are part of what make us what we are.

Personal Tech Security

Marco Matarazzo and Lele Calò next spoke to us on personal tech security. Why should you secure your personal or work devices? One obvious reason is to prevent disclosure of sensitive data. But just as important is not losing important data or becoming a conduit for attacks on other systems and networks.

So how should you approach security? It's important to weigh usability against security. A door with 100 locks may be more secure than one with two, but getting in and out of it, even with the proper keys, would be far too difficult. So security should be adapted to the scenario: securing a personal laptop with pictures, music, and games should be approached differently from a work device with passwords and SSH or GnuPG keys.

For members of our hosting team and employees who work with clients that require it, we have certain more stringent security policies they must follow. Some things are considered common sense, such as shredding or burning business-related papers and being careful with access to work environments.

In public places, make sure shared networks have proper encryption. Do not use untrusted computers, such as public computers at libraries or internet cafes, for work or any personal sites you need to log into. Be careful to not leave any work data behind, whether on an old backup disk or computer you get rid of, or on scraps of paper or notepads.

Keep all of your devices safe, both physically and in software! Apply operating system and other software updates promptly, and reboot at least a few times a week so everything gets fully updated. That includes laptops, desktops, phones, tablets, etc. And don't forget external drives! Use automatic password-protected screen locks on your devices, and encrypt your data and swap partitions, as well as your phones and removable devices.

Back up your data to a safe place, and remember to share your passwords with someone trusted who may need them in an emergency.

Make sure your private SSH keys are password-protected, and ensure you're asked for confirmation when using them. Avoid common and unsafe passwords, like '12345' and 'password', although 'pizza1' is perfectly fine :). Use PGP to encrypt private messages and confidential data at rest.
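As a sketch of the SSH-key advice (the path and passphrase below are placeholders; use your own, and note that ssh-add needs a running agent):

```shell
# Sketch: create a passphrase-protected SSH key. The path and passphrase
# here are placeholders for the demo, not real values.
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N 'choose-a-strong-passphrase' -f /tmp/demo_ed25519 -q

# Loading it with -c makes the agent ask for confirmation on every use.
# Commented out here because it requires a running ssh-agent:
# ssh-add -c /tmp/demo_ed25519

ls -l /tmp/demo_ed25519 /tmp/demo_ed25519.pub
```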

“Brain bowl” challenge

We finished our meetings with a little friendly competition led by Jon Jensen. We were divided into ad-hoc teams by Ron Phipps, and were presented with trivia questions to see which team could answer correctly first. Some of the questions included:

  • Who created the World Wide Web? In what year?
  • What is now wrong with the term “SSL certificate”?
  • What do HIPAA and PCI-DSS stand for?
  • The Agile Manifesto says its authors have come to value what things over what other things?
  • Where does the word “pixel” come from?
  • Where did the Unix command “tee” that Kannan mentioned get its name?
  • What does the name UTF-8 stand for?
  • How many bytes are in a terabyte? In a tebibyte?
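The byte-counting question trips people up because of decimal versus binary prefixes; the arithmetic is quick to check in the shell:

```shell
# A terabyte (TB) uses decimal SI prefixes; a tebibyte (TiB) uses binary
# prefixes, so the two differ by almost 10%.
echo $(( 1000 * 1000 * 1000 * 1000 ))   # bytes in a terabyte: 1000000000000
echo $(( 1024 * 1024 * 1024 * 1024 ))   # bytes in a tebibyte: 1099511627776
```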

Then we had some questions about programming languages we work with, such as which of Python's built-in types are immutable, or what values are boolean false in Ruby, Perl, and JavaScript.

We ended with a programming problem that required HTML parsing and number-crunching. The task was the same for all teams, but each team used a different toolset: Node.js, Ruby, Perl, Python, or bash + classic Unix text tools sed, awk, sort, cut, etc. The Perl, Python, and bash/Unix teams came up with working and impressive solutions at about the same time.

Company party

We ended the day with a party nearby at Spin, where we played ping-pong, had dinner, socialized, and met significant others who were also visiting New York City.

It was great to get everyone together in person!