DevOpsDays India - 2015

DevOpsIndia 2015 was held at The Royal Orchid in Bengaluru on Sep 12-13, 2015. After saying hello to a few familiar faces whom I often see at conferences, I collected some goodies and entered the hall, where everything was set up for the talks. Niranjan Paranjape, one of the organizers, was giving the introduction and overview of the conference.

Justin Arbuckle from Chef gave a wonderful keynote about the “Hedgehog Concept” and the importance of consistency, scale, and velocity in software development.

In addition, he quoted "A small team with generalists who have a specialization, deliver far more than a large team of single skilled people."

Rajat Venkatesh from Qubole gave a talk on “DevOps of Big Data infrastructure at scale.” He explained the architecture of Qubole Data Service (QDS), which autoscales Hadoop clusters. In short, scale-up happens based on data from the Hadoop Job Tracker about the number of running jobs and the time needed to complete them. Scale-down is done by decommissioning nodes, choosing the node that is closest to completing a full hour, since most cloud providers charge for a whole hour whether the usage is 1 minute or 59 minutes.

Vishal Uderani, a DevOps engineer from WebEngage, presented “Automating AWS infrastructure and code deployment using Ansible.” He shared issues they had faced, such as long-running Ansible tasks failing because of SSH timeouts, which they solved by triggering the task, disconnecting immediately, and then monitoring it separately. He also noted that integrating Rundeck with Ansible is an alternative to the enterprise Ansible Tower, and gave the following reasons for using Ansible:

  • Good learning curve
  • No agents will be running on the client side, which avoids having to monitor the agents at client nodes
  • Great deployment tool

Vipul Sharma from CodeIgnition stressed the importance of resilience testing: an application should be tested periodically to make sure it is tough enough to handle any kind of load. He said the Simian Army can be used to create problems in an environment and then resolve them, improving the application with tools such as Security Monkey, Chaos Monkey, and Janitor Monkey. “Friday Failure” exercises are another good way to identify problems and improve the application.

Avi Cavale from Shippable gave an awesome talk on "Modern DevOps with Docker". His talk started with “What is the first question that arises during an outage? ... What changed?” After fixing the issue, the next question is “Who made the change?” Both questions are bad for the business. Change is the root cause of every outage, but the business requires change; in his own words, DevOps is a culture of embracing change. He also explained zero-downtime ways to deploy changes using containers.

He said DevOps is a culture; make it FACT (F: Frictionless, A: Agile, C: Continuous, and T: Transparency).

Rahul Mahale from SecureDB gave a demo of Terraform, a tool for building and orchestrating cloud infrastructure. It embodies “Infrastructure as Code” and also provides an option to generate diagrams and graphs of the current infrastructure.
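The demo itself is not reproduced here, but for reference a typical Terraform workflow looks roughly like this (a sketch only; the graph step assumes Graphviz is installed):

terraform plan      ## preview the changes that would be made to the infrastructure
terraform apply     ## create or update the resources described in the *.tf files
terraform graph | dot -Tpng > infrastructure.png    ## render a diagram of the current infrastructure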

Shobhit and Sankalp from CodeIgnition shared their experience solving network-related issues. Instead of manually whitelisting a user's location every time to grant access to systems, they created a VPN so that access is granted to users, not locations. They resolved two more kinds of issues as well. One was handled by adding a router to bind two networks together using a floating IP (FIP). The other was that their containers needed to be whitelisted to reach third-party services, but whitelisting every container was impractical, so they created and whitelisted VMs, and the containers accessed the third-party services through those VMs.

Ankur Trivedi from Xebia Labs spoke about the “Open Container Initiative” project. He explained the evolution of containers (Docker in 2013 and Rocket in 2014) and compared the various container distributions based on their packaging, identity, distribution, and runtime capabilities. The Open Container Initiative is supported by the community and by companies doing extensive work on containers, with the goal of standardizing them.

Vamsee Kanala, a DevOps consultant, presented a talk on “Docker Networking - A Primer”. He covered bridge networking, host networking, mapped container networking, and none (self-managed) networking with Docker. Communication between containers can happen through (rough examples follow the list):

  • Port mapping
  • Link
  • Docker Compose (programmatically)
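
He did not walk through the commands in detail, but rough examples of those options look like this (the image and container names are made up):

## Port mapping: publish container port 80 on the host
docker run -d -p 8080:80 --name web nginx
## Link: let a second container reach "web" by name (the legacy --link flag)
docker run -d --link web:web --name app myapp
## Docker Compose: declare the containers and their links in docker-compose.yml,
## then bring them all up together
docker-compose up -d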

In addition, he discussed the tools that provide clustering of containers, each with its own approach and advantages:

  • Kubernetes
  • Mesos
  • Docker Swarm

Aditya Patawari from BrowserStack gave a demo on “Using Kubernetes to Build Fault Tolerant Container Clusters”. Kubernetes has a feature called “Replication Controllers,” which ensures that a specified number of pods is running at any time. “Kubernetes Services” define a policy for accessing a set of pods, exposing them as microservices.
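
His demo is not reproduced here, but a minimal replication controller looks roughly like this (the names and the nginx image are only examples):

## Keep three copies of a simple web pod running at all times:
kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
EOF
## Kubernetes recreates any pod that dies, and the count can be changed on the fly:
kubectl get pods
kubectl scale rc web --replicas=5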

Arjun Shenoy from LinkedIn introduced a tool called “SIMOORG.” Developed at LinkedIn, it induces failures in a cluster to test the stability of the code. It is a component-based open source framework, and a few components can be replaced with external ones.

Dharmesh Kakadia, a researcher from Microsoft, gave a wonderful talk, “Mesos is the Linux”. He started with a nice explanation of microservices by relating them to Linux commands: each command is like a microservice, simple, and independently updatable, runnable, and deployable. Mesos is a “data center kernel” that takes care of scalability, fault tolerance, load balancing, and so on across the data center.

At the end, I got a chance to do some hands-on work with Docker and played with some of its features. It was a wonderful conference for learning more about configuration management and the container world.

PgBouncer user and database pool_mode with Scaleway

The recent release of PgBouncer 1.6, a connection pooler for Postgres, brought a number of new features. The two I want to demonstrate today are the per-database and per-user pool_modes. To get this effect previously, one had to run separate instances of PgBouncer. As we shall see, a single instance can now run different pool_modes seamlessly.

There are three pool modes available in PgBouncer, representing how aggressive the pooling becomes: session mode, transaction mode, and statement mode.

Session pool mode is the default, and simply allows you to avoid the startup costs for new connections. PgBouncer connects to Postgres, keeps the connection open, and hands it off to clients connecting to PgBouncer. This handoff is faster than connecting to Postgres itself, as no new backends need to be spawned. However, it offers no other benefits, and many clients/applications already do their own connection pooling, making this the least useful pool mode.

Transaction pool mode is probably the most useful one. It works by keeping a client attached to the same Postgres backend for the duration of a transaction only. In this way, many clients can share the same backend connection, making it possible for Postgres to handle a large number of clients with a small max_connections setting (each of which consumes resources).

Statement pool mode is the most interesting one, as it makes no promises at all to the client about maintaining the same Postgres backend. In other words, every time a client runs a SQL statement, PgBouncer releases that connection back to the pool. This can make for some great performance gains, although it has drawbacks, the primary one being no multi-statement transactions.
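
For reference, pool_mode can also be set globally in the [pgbouncer] section of pgbouncer.ini; the per-database and per-user settings we will use below override that default. A minimal sketch, with illustrative values only:

## Global defaults in pgbouncer.ini (overridden by per-database and per-user settings):
[pgbouncer]
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20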

To demonstrate the new pool_mode features, I decided to try out a new service mentioned by a coworker, called Scaleway. Like Amazon Web Services (AWS), it offers quick-to-create cloud servers, ideal for testing and demonstrating. The unique thing about Scaleway is that the servers are all ARM-based, with SSD storage. Mini-review of Scaleway: I liked it a lot. The interface was smooth and uncluttered (looking askance at you, AWS), the setup was very fast, and I had no problems with it being ARM.

To start a new server (once I entered my billing information and pasted in my public SSH key), I simply clicked the "create server" button, chose "Debian Jessie (8.1)", and then clicked "create server" again. Sixty seconds later, I had an IP address to log in to as root. The first order of business, as always, is to make sure things are up to date and install some important tools:

root@scw-56578065:~# apt-get update
root@scw-56578065:~# apt-get upgrade
## Only five packages were upgraded, which means things are already very up to date

## Because just plain 'apt-get' only gets you so far:
root@scw-56578065:~# apt-get install aptitude

## To find out the exact names of some critical packages:
root@scw-56578065:~# aptitude search emacs git

## Because a server without emacs is like a jungle without trees:
root@scw-56578065:~# apt-get install emacs-nox git-core

## To figure out what version of Postgres is available:
root@scw-56578065:~# aptitude show postgresql
Package: postgresql                      
State: not installed
Version: 9.4+165

## Since 9.4 is the latest, we will happily use it for this demo:
root@scw-56578065:~# apt-get install postgresql postgresql-contrib

## Nice to not have to worry about initdb anymore:
root@scw-56578065:~# service postgresql start

Postgres 9.4 is now installed, and started up. Time to figure out where the configuration files are and make a few small changes. We will turn on some heavy logging via the postgresql.conf file, and allow anyone locally to log in to Postgres, no questions asked, by changing the pg_hba.conf file. Then we restart Postgres, and verify it is working:

root@scw-56578065:~# updatedb
root@scw-56578065:~# locate postgresql.conf pg_hba.conf

root@scw-56578065:~# echo "logging_collector = on
log_filename = 'postgres-%Y-%m-%d.log'
log_rotation_size = 0" >> /etc/postgresql/9.4/main/postgresql.conf

## But it already has a nice log_line_prefix (bully for you, Debian)

## Take a second to squirrel away the old version before overwriting:
root@scw-56578065:~# cp /etc/postgresql/9.4/main/pg_hba.conf ~
root@scw-56578065:~# echo "local all all trust" > /etc/postgresql/9.4/main/pg_hba.conf 
root@scw-56578065:~# service postgresql restart
root@scw-56578065:~# psql -U postgres -l
                             List of databases
   Name    |  Owner   | Encoding  | Collate | Ctype |   Access privileges   
 postgres  | postgres | SQL_ASCII | C       | C     | 
 template0 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
 template1 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
(3 rows)

SQL_ASCII? Yuck, how did that get in there?! That's an absolutely terrible encoding to be using in 2015, so we need to change it right away. Even though it won't affect this demonstration, it's the principle of the thing. We will create a new database with a sane encoding, then create some test databases based on it.

root@scw-56578065:~# su - postgres
postgres@scw-56578065:~$ createdb -T template0 -E UTF8 -l en_US.utf8 foo
postgres@scw-56578065:~$ for i in {1..5}; do createdb -T foo test$i; done
postgres@scw-56578065:~$ psql -l
                             List of databases
   Name    |  Owner   | Encoding  |  Collate   |   Ctype    |   Access privileges   
 foo       | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
 postgres  | postgres | SQL_ASCII | C          | C          |
 template0 | postgres | SQL_ASCII | C          | C          | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
 template1 | postgres | SQL_ASCII | C          | C          | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
 test1     | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
 test2     | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
 test3     | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
 test4     | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
 test5     | postgres | UTF8      | en_US.utf8 | en_US.utf8 |
(9 rows)

## Create some test users as well:
postgres@scw-56578065:~$ for u in {'alice','bob','eve','mallory'}; do createuser $u; done
## First time I tried this I outfoxed myself - so make sure the users can connect!
postgres@scw-56578065:~$ for d in {1..5}; do psql test$d -qc 'grant all on all tables in schema public to public'; done

## Make sure we can connect as one of our new users:
postgres@scw-56578065:~$ psql -U alice test1 -tc 'show effective_cache_size'

Now that Postgres is up and running, let's install PgBouncer. Since we are showing off some 1.6 features, it is unlikely to be available via packaging, but we will check anyway.

postgres@scw-56578065:~$ aptitude versions pgbouncer
Package pgbouncer:                        
p   1.5.4-6+deb8u1            stable            500

## Not good enough! Let's grab 1.6.1 from git:
postgres@scw-56578065:~$ git clone https://github.com/pgbouncer/pgbouncer
postgres@scw-56578065:~$ cd pgbouncer
## This submodule business for such a small self-contained project really irks me :)
postgres@scw-56578065:~/pgbouncer$ git submodule update --init
postgres@scw-56578065:~/pgbouncer$ git checkout pgbouncer_1_6_1
postgres@scw-56578065:~/pgbouncer$ ./autogen.sh

The autogen.sh script fails rather quickly with an error about libtool - which is to be expected, as PgBouncer comes with a small list of required packages in order to build it. Because monkeying around with all those prerequisites can get tiresome, apt-get provides an option called "build-dep" that (in theory!) allows you to download everything needed to build a specific package. Before doing that, let's drop back to root and give the postgres user full sudo permission, so we don't have to keep jumping back and forth between accounts:

postgres@scw-56578065:~/pgbouncer$ exit
## This is a disposable test box - do not try this at home!
## Debian's /etc/sudoers has #includedir /etc/sudoers.d, so we can do this:
root@scw-56578065:~# echo "postgres ALL= NOPASSWD:ALL" > /etc/sudoers.d/postgres
root@scw-56578065:~# su - postgres
postgres@scw-56578065:~$ cd pgbouncer
postgres@scw-56578065:~/pgbouncer$ sudo apt-get build-dep pgbouncer
The following NEW packages will be installed:
  asciidoc autotools-dev binutils build-essential cdbs cpp cpp-4.9 debhelper docbook-xml docbook-xsl dpkg-dev g++ g++-4.9 gcc gcc-4.9 gettext
  gettext-base intltool-debian libasan1 libasprintf0c2 libatomic1 libc-dev-bin libc6-dev libcloog-isl4 libcroco3 libdpkg-perl libevent-core-2.0-5
  libevent-dev libevent-extra-2.0-5 libevent-openssl-2.0-5 libevent-pthreads-2.0-5 libgcc-4.9-dev libgomp1 libisl10 libmpc3 libmpfr4 libstdc++-4.9-dev
  libubsan0 libunistring0 libxml2-utils linux-libc-dev po-debconf sgml-data xmlto xsltproc
postgres@scw-56578065:~/pgbouncer$ ./autogen.sh

Whoops, another build failure. Well, build-dep isn't perfect; it turns out we still need a few packages. Let's get this built, create some needed directories, tweak permissions, find the location of the installed PgBouncer ini file, and make a few changes to it:

postgres@scw-56578065:~/pgbouncer$ sudo apt-get install libtool automake pkg-config
postgres@scw-56578065:~/pgbouncer$ ./autogen.sh
postgres@scw-56578065:~/pgbouncer$ ./configure
postgres@scw-56578065:~/pgbouncer$ make
postgres@scw-56578065:~/pgbouncer$ sudo make install
postgres@scw-56578065:~/pgbouncer$ sudo updatedb
postgres@scw-56578065:~/pgbouncer$ locate pgbouncer.ini
postgres@scw-56578065:~/pgbouncer$ sudo mkdir /var/log/pgbouncer /var/run/pgbouncer
postgres@scw-56578065:~/pgbouncer$ sudo chown postgres.postgres /var/log/pgbouncer \
 /var/run/pgbouncer /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini
postgres@scw-56578065:~/pgbouncer$ emacs /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini
## Add directly under the [databases] section:
test1 = dbname=test1 host=/var/run/postgresql
test2 = dbname=test2 host=/var/run/postgresql pool_mode=transaction 
test3 = dbname=test3 host=/var/run/postgresql pool_mode=statement
test4  = dbname=test3 host=/var/run/postgresql pool_mode=statement auth_user=postgres
## Change listen_port to 5432
## Comment out listen_addr
## Make sure unix_socket_dir is /tmp

How are we able to use 5432 when Postgres is using it too? For local connections in the Unix world, the "port" is really just part of the name of a socket file on the file system, so two programs can use the same port number as long as their socket files live in different directories. While Postgres has a default unix_socket_directories value of '/tmp', Debian has changed that to '/var/run/postgresql', meaning PgBouncer itself is free to use '/tmp'! The bottom line is that we can use port 5432 for both Postgres and PgBouncer, and control which one is used by setting the host parameter when connecting (which, when it starts with a slash, is actually the directory holding the socket file). However, note that only one of them can own the port when connecting via TCP/IP. Enough of all that, let's make sure PgBouncer at least starts up!

postgres@scw-56578065:~/pgbouncer$ pgbouncer /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini -d
2000-08-04 02:06:08.371 5555 LOG File descriptor limit: 65536 (H:65536),
 max_client_conn: 100, max fds possible: 230

As expected, the pgbouncer program gave us a single line of information before going into background daemon mode, per the -d argument. Since both Postgres and PgBouncer are running on port 5432, let's make our psql prompt a little more informative, by having it list the hostname via %M. If the hostname matches the unix_socket_directory value that psql was compiled with, then it will simply show '[local]'. Thus, seeing '/tmp' indicates we are connected to PgBouncer, and seeing '[local]' indicates we are connected to Postgres (via /var/run/postgresql).

## Visit the psql docs for explanation of this prompt
postgres@scw-56578065:~/pgbouncer$ echo "\set PROMPT1 '%n@%/:%>%R%x%#%M '" > ~/.psqlrc
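
Since both programs now listen on Unix sockets for port 5432, you can also see the two socket files sitting side by side, one per directory:

## One socket file for PgBouncer (/tmp) and one for Postgres (/var/run/postgresql):
postgres@scw-56578065:~/pgbouncer$ ls -l /tmp/.s.PGSQL.5432 /var/run/postgresql/.s.PGSQL.5432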

Let's confirm that each PgBouncer connection is in the expected mode. Database test1 should be using the default pool_mode, 'session'. Database test2 should be using a 'transaction' pool_mode, while 'statement' mode should be used by both test3 and test4. See my previous blog post on ways to detect the various pool_modes of pgbouncer. First, let's connect to normal Postgres and verify we are not connected to PgBouncer by trying to change to a non-existent database. FATAL means PgBouncer, and ERROR means Postgres:

postgres@scw-56578065:~/pgbouncer$ psql test1
psql (9.4.3)
Type "help" for help.

postgres@test1:5432=#[local] \c crowdiegocrow
FATAL:  database "crowdiegocrow" does not exist

postgres@scw-56578065:~/pgbouncer$ psql -h /tmp test1
psql (9.4.3)
Type "help" for help.

postgres@test1:5432=#[local:/tmp] \c sewdiegosew
ERROR:  No such database: sewdiegosew

Now let's confirm that we have database-specific pool modes working. If you recall from above, test2 is set to transaction mode, and test3 is set to statement mode. We determine the mode by running three tests. First, we do a "BEGIN; ROLLBACK;" - if this fails, it means we are in statement mode. Next, we try to PREPARE and EXECUTE a statement. If this fails, it means we are in transaction mode. Finally, we try to switch to a non-existent database. If it returns an ERROR, it means we are in session mode. If it returns a FATAL, it means we are not connected to PgBouncer at all.

## Mnemonic for this common set of psql options: "axe cutie"
postgres@scw-56578065:~/pgbouncer$ psql -Ax -qt -h /tmp test1
postgres@test1:5432=#[local:/tmp] BEGIN; ROLLBACK;
postgres@test1:5432=#[local:/tmp] PREPARE abc(int) AS SELECT $1::text;
postgres@test1:5432=#[local:/tmp] EXECUTE abc(123);
postgres@test1:5432=#[local:/tmp] \c rowdiegorow
ERROR:  No such database: rowdiegorow
## test1 is thus running in session pool_mode

postgres@scw-56578065:~/pgbouncer$ psql -Ax -qt -h /tmp test2
postgres@test2:5432=#[local:/tmp] BEGIN; ROLLBACK;
postgres@test2:5432=#[local:/tmp] PREPARE abc(int) AS SELECT $1::text;
postgres@test2:5432=#[local:/tmp] EXECUTE abc(123);
ERROR:  prepared statement "abc" does not exist
## test2 is thus running in transaction pool_mode

postgres@scw-56578065:~/pgbouncer$ psql -Ax -qt -h /tmp test3
postgres@test3:5432=#[local:/tmp] BEGIN; ROLLBACK;
ERROR:  Long transactions not allowed
## test3 is thus running in statement pool_mode

So the database-level pool modes are working as expected. PgBouncer now supports user-level pool modes as well, and these always trump the database-level pool modes. Recall that our setting for test4 in the pgbouncer.ini file was:

test4 = dbname=test3 host=/var/run/postgresql pool_mode=statement auth_user=postgres

The addition of the auth_user parameter allows us to specify other users to connect as, without having to worry about adding them to the PgBouncer auth_file. We added four sample regular users above: Alice, Bob, Eve, and Mallory. We should only be able to connect with them via auth_user, so only test4 should work:

postgres@scw-56578065:~/pgbouncer$ psql -h /tmp test1 -U alice
psql: ERROR:  No such user: alice
postgres@scw-56578065:~/pgbouncer$ psql -h /tmp test2 -U alice
psql: ERROR:  No such user: alice
postgres@scw-56578065:~/pgbouncer$ psql -h /tmp test3 -U alice
psql: ERROR:  No such user: alice
postgres@scw-56578065:~/pgbouncer$ psql -h /tmp test4 -U alice
psql (9.4.3)
Type "help" for help.

alice@test4:5432=>[local:/tmp] begin;
ERROR:  Long transactions not allowed

Let's see if we can change the pool_mode for Alice to transaction, even if we are connected to test4 (which is set to statement mode). All it takes is a quick entry to the pgbouncer.ini file, in a section we must create called [users]:

echo "[users]
alice = pool_mode=transaction
$(cat /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini)" \
> /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini

## Attempt to reload pgbouncer:
postgres@scw-56578065:~/pgbouncer$ kill -HUP `head -1 /run/pgbouncer/pgbouncer.pid`

## Failed, due to a PgBouncer bug:
2001-03-05 02:27:44.376 5555 FATAL @src/objects.c:299 in function 
  put_in_order(): put_in_order: found existing elem

## Restart it:
postgres@scw-56578065:~/pgbouncer$ pgbouncer /var/lib/postgresql/pgbouncer/etc/pgbouncer.ini -d

postgres@scw-56578065:~/pgbouncer$ psql -Ax -qt -h /tmp test4 -U alice
alice@test4:5432=>[local:/tmp] BEGIN; ROLLBACK;
alice@test4:5432=>[local:/tmp] PREPARE abc(int) AS SELECT $1::text;
alice@test4:5432=>[local:/tmp] EXECUTE abc(123);
ERROR:  prepared statement "abc" does not exist
## test4 is thus running in transaction pool_mode due to the [users] setting

There you have it - database-specific and user-specific PgBouncer pool_modes. Note that you cannot yet do user *and* database specific pool_modes, such as if you want Alice to use transaction mode for database test4 and statement mode for test5.

YAPC::NA 2015 Conference Report

In June, I attended the Yet Another Perl Conference (North America), held in Salt Lake City, Utah. I was able to take in a training day on Moose, as well as the full 3-day conference.

The Moose Master Class (slides and exercises here) was taught by Dave Rolsky (a Moose core developer), and was a full day of hands-on training and exercises in the Moose object-oriented system for Perl 5. I've been experimenting a bit this year with the related project Moo (essentially the best two-thirds of Moose, with quicker startup), and most of the concepts carry over, with just slight differences.

Moose and Moo allow the modern Perl developer to quickly write OO Perl code, saving quite a bit of work from the older "classic" methods of writing OO Perl. Some of the highlights of the Moose class include:

  • Sub-classing is discouraged; this is better done using Roles
  • Moose eliminates a lot of typing; more typing can often equal more bugs
  • Using namespace::autoclean at the top is a best practice, as it cleans up after Moose
  • Roles are what a class does, not what it is. Roles add functionality.
  • Use types with MooseX::Types or Type::Tiny (for Moo)
  • Attributes can be objects (see slide 231)
Additional helpful resources for OO Perl and Moo.

At the YAPC::NA conference days, I attended all joint sessions, and breakout sessions that piqued my interest. Here are some of the things I noted:

  • The author of Form::Diva gave a lightning talk (approx. 5 minutes) about this module, which allows easier HTML form creation. I was able to chat with the author during a conference mixer, and the next time I need a long HTML form, I will be giving this a try.
  • One lightning talk presenter suggested making comments stand out, by altering your editor's code highlight colors. Comments are often muted, but making them more noticeable helps developers, as comments are often important guides to the code.
  • plenv (which allows one to install multiple versions of Perl) can remember which Perl version you want for a certain directory (plenv local); a rough sketch follows this list
  • pinto is useful for managing modules for a project
  • Sawyer did a talk on web scraping in which he demonstrated the use of Web::Query, which provides jQuery-like syntax for finding elements in the page you wish to scrape. There are many tools for web scraping, but this one seems easy to use, if you know jQuery.
  • DBIC's "deploy" will create new tables in a database, based on your schema. DBIx::Class::Fixtures can grab certain data into files for tests to use, so you can keep data around to ensure a bug is still fixed.
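
For example, a rough sketch of the plenv workflow mentioned above (the Perl version and directory are only examples):

plenv install 5.20.2      ## build and install a specific Perl
cd ~/projects/myapp
plenv local 5.20.2        ## remember this Perl version for this directory
plenv versions            ## list the Perls plenv knows about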
The presenter of “What is this "testing" people keep talking about?” did a great job researching a topic he knew nothing about before his talk was accepted. If there is ever a good way to learn something, it's teaching it! Slides are here.

The talk on Docker (slides) was interesting. Highlights I noted: use busybox, then install Perl on top of busybox (you can run Plack from this Perl); Gentoo is easy to dockerize, as it is about half the size of Ubuntu; there are existing Perl Dockerfiles; build Perl on a local system, then copy it into the Docker image in the Dockerfile.

I attended some talks on the long-awaited Perl 6, which is apparently to be released by the end of this year. While I'm not sure how practical Perl 6 will be for a while, one of the most interesting topics was that Perl 6 knows how to do math, such as: solve for "x": x = 7 / 2. Perl 6 gets this "right", as far as humans are concerned. It was interesting that many in attendance did not feel the answer should be "3.5", due to what I suspect is prolonged exposure to how computers do math.

One talk not related to Perl was Scrum for One (video), which discussed how to use the principles of Scrum in one's daily life. Helpful hints included thinking of your tasks in the User Story format: "as a $Person, I would like $Thing, so that $Accomplishment"; leave murky stories on the backlog, as you must know what "done" looks like; the current tasks should include things doable in the next week — this prevents you from worrying about all tasks in your list. Personally, I've started using Trello boards to implement this, such as: Done, Doing, ToDo, Later.

Finally, while a great technical conference, YAPC's biggest strength is bringing together the Perl community. I found this evident myself, as I had the opportunity to meet another attendee from my city. We were introduced at the conference, not knowing each other previously. When you have two Perl developers in the same city, it is time to resurrect your local Perl Mongers group, which is what we did!

Install Tested Packages on Production Server

One of our customers has us do scheduled monthly OS updates following a specific rollout process. In the first week of the month, we update the test server and wait for a week to confirm that everything looks as expected in the application; the next week, we apply the very same updates to the production servers.

Until not long ago, we used aptitude to perform system updates. While doing the update on the test server, we also ran aptitude on the production servers to "freeze" the same packages and versions to be updated the following week. That helped ensure that only tested packages would be updated on the production servers afterward.

Since using aptitude in that way wasn't particularly efficient, we decided to use apt-get directly, to stick with our standard server update process. We still wanted to keep the test and production updates in sync, because software updates released between the test server update and the production server update are untested in the customer's specific environment. Thus we needed a method to filter out the unneeded packages from the production server update.

To do so, we developed a shell script that automates the process and keeps the package versions in test and production in sync during OS updates. I'll explain the process used on both the test and the production servers.

Test Server Update Process:

  • Update the repository index
  • Get the list of installed packages and version before update
  • Complete the dist-upgrade
  • Get the list of installed packages and version after update
  • Compare the packages before and after the update and generate a diff output with information about the packages installed, upgraded, and removed (a rough sketch of these steps is shown below)
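
A rough sketch of how those steps could look with standard tools (this is not the actual script, and the file names are only examples):

dpkg-query -W -f='${Package}=${Version}\n' | sort > before_update.txt
apt-get update && apt-get -y dist-upgrade
dpkg-query -W -f='${Package}=${Version}\n' | sort > after_update.txt
## Side-by-side diff: ">" = installed, "<" = removed, "|" = upgraded
diff -y --suppress-common-lines before_update.txt after_update.txt > diff_server_update.txt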


root@selva_test:~# ./auto-tested-server-update.sh -t test

    ############ Auto Tested Server Update Script ############

Packages index list and server update will be completed here...
Files were generated in default directory. after_* and before_* contains the installed packages list before and after server update. diff_* contains the changes of the packages in the system.

root@selva_test:~# ls tmp/
after_update_selva_test.2015-09-02.txt  before_update_selva_test.2015-09-02.txt  diff_server_update_selva_test.2015-09-02.txt

# head -5 tmp/before_update_selva_test.2015-09-02.txt 

# head -5 tmp/after_update_selva_test.2015-09-02.txt 

# cat tmp/diff_server_update_selva_test.2015-09-02.txt 
             > git=1:
             > git-man=1:
metacity=1:2.34.1-1ubuntu11          <
mysql-client=5.43-0ubuntu0.12.04.1         | mysql-client=5.5.43-0ubuntu0.12.04.1
ubuntu-desktop=1.267.1           <
unity-2d=5.14.0-0ubuntu1          <
             > wget=1.13.4-2ubuntu1

Production Server Update Process:

  • Get the diff output file (which has information about packages installed, updated and removed packages) from the test server
  • Collect the packages which will be changed on server update
  • Process the packages to categorize under install, upgrade and remove
  • Filter the packages from test server installed, upgraded and removed packages
  • Perform the install/upgrade/remove actions based on the filtered tested packages


root@selva_prod:~# ./auto-tested-server-update.sh -t prod -S selva_test -U root

    ############ Auto Tested Server Update Script ############

*** Production Server Update Process ***

      remote_user: root
      remote_server: selva_prod
      remote_path: /root/tmp
Enter a number to choose file: 1
INFO: You have selected the file: diff_server_update_selva_test.2015-09-02.txt
INFO: scp root@selva_test:/root/tmp/diff_server_update_selva_test.2015-09-02.txt /root/tmp
diff_server_update_selva_test.2015-09-02.txt                       100%  763     0.8KB/s   00:00  
INFO: Fetching the list of packages need to be changed...!

*********************** Installation *************************
INFO: No packages to install

************************* Upgrade ****************************
apt-get install mysql-client=5.5.43-0ubuntu0.12.04.1
Reading package lists... Done
Building dependency tree

************************** Removal ***************************
apt-get remove zenity=3.4.0-0ubuntu4 zenity-common=3.4.0-0ubuntu4
Reading package lists... Done
Building dependency tree

As shown in the sample output above, the script runs through different phases in which it installs, upgrades, and removes all the needed packages on the target production system.

In this way we ensure that only packages tested in the test environment are replayed on the production server, minimizing the chance of introducing bugs and maximizing application stability during production system updates.

The Git repository below contains the script and some additional information on using it.


Memcache Full HTML in Ruby on Rails with Nginx

Hi! Steph here, former long-time End Point employee now blogging from afar as a software developer for Pinhole Press. While I’m no longer an employee of End Point, I’m happy to blog and share here.

I recently went down the rabbit hole of figuring out how to cache full HTML pages in memcached and serve those pages via nginx in a Ruby on Rails application, skipping Rails entirely. While troubleshooting this, I could not find much on Google or StackOverflow, except for related articles applying this technique to WordPress.

Here are the steps I went through to get this working:

Replace Page Caching

First, I cloned the page caching gem repository that was taken out of the Rails core on the move from Rails 3 to 4. It writes fully cached pages out to the file system. It can easily be added as a gem to any project, but the decision was made to remove it from the core.

Because of the complexities of cache invalidation across file systems on multiple instances (behind load balancing), the desire to avoid a shared/mounted file server, and the fact that the Rails application already relies on memcached for standard Rails fragment and view caching throughout the site, the approach was to use memcached for these full HTML pages as well. A small portion of the page (my account and cart info) is modified via JavaScript with information it retrieves from another server.

The following [simplified] changes were made to the actionpack-page_caching gem, to modify where the full page content was stored:

# Cache clearing update:
+ Rails.cache.delete(path)
- File.delete(path) if File.exist?(path)
- File.delete(path + '.gz') if File.exist?(path + '.gz')
# Cache write update:
+ Rails.cache.write(path, content, raw: true)
- FileUtils.makedirs(File.dirname(path))
- File.open(path, 'wb+') { |f| f.write(content) }
- if gzip
-   Zlib::GzipWriter.open(path + '.gz', gzip) { |f| f.write(content) }
- end
# Cache path change:
def page_cache_path(path, extension = nil)
+ path
- page_cache_directory.to_s + page_cache_file(path, extension)

See, it's not that much! The rest of the gem was not modified much, and the interaction from the Rails app to this gem was maintained (via a controller class method :caches_page). The one thing to note above is the raw option passed in the call to write the cache, which forces the content to be served as a raw string.

Step 2: Set up nginx to look for memcached files

Next, I had to set up nginx to serve the HTML from memcached. This was the tricky part (for me). After much experimentation and logging, I finally settled on the following simplified config:

location / {
    set $memcached_key $uri;
    set $memcached_request 1;
    default_type "text/html";

    if ($uri ~ "admin") {
      set $memcached_request 0;
    }

    if ($uri ~ "nocache") {
      set $memcached_request 0;
    }

    if ($memcached_request = 1) {
      memcached_pass localhost:11211;
      error_page 404 = /nocache/$uri;
    }
}
The desired logic here is to look up every request in memcached. If there is no cached page, nginx should fall back to serving the standard Rails page with a modified URL ("/nocache/" prepended). Without this URL modification, nginx would get stuck in an infinite loop, looking up the same URLs in memcached over and over.
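
While debugging this, it is handy to ask memcached directly whether a page is stored under its URI key; for instance, using the memcached text protocol via netcat (the key shown is only an example):

# Fetch a cached page straight from memcached, using the request URI as the key:
printf 'get /products/1\r\nquit\r\n' | nc localhost 11211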

Step 3: Set up a Rack::Rewrite rule to look up /nocache/ pages.

Because of this fallback, Rails was receiving all uncached requests with /nocache/ prepended to the URL. A simple solution was to add a Rack::Rewrite rule to internally rewrite the URL and strip the /nocache/ fragment, shown below. The nice thing about this change is that if caching is disabled (e.g. on the development server), the rewrite rule won't affect any requests.

config.middleware.insert_before(Rack::Runtime, Rack::Rewrite) do
  rewrite %r{/nocache/(.*)}, '/$1'
end

Step 4: Cache invalidation: the really hard part, right?

Finally, I had to add cache invalidation throughout the application where the memcache pages needed to expire. There are a few options for this, for example:

  • Inside a Rails controller after a successful update/commit.
  • Inside a Rails model, via ActiveRecord callbacks.
  • Inside a module that decorates an ActiveRecord model via callbacks.

Choose your poison wisely here. A controller makes sense, but in a case where RailsAdmin is used for all admin CRUD methods, it's not much different (IMHO) to extend the controllers than it is to extend the models. I'm a fan of ActiveRecord callbacks, so I went with option 2.

Final Thoughts

Logs were invaluable here. Most importantly, the memcached log confirmed the infinite loop bug. The Rails development log was also helpful once I had solved the nginx issue and moved on to the Rack::Rewrite rule.

One important note here is that this type of caching does not take advantage of expiration via model timestamp cache keys introduced in Rails 4. But, it wouldn't be able to, because nginx needs a quick lookup on the URL in memcache to serve the file, and we ideally don't want to hit the database to try to figure out what that key should be.

Liquid Galaxy and the Coral Reefs of London

Exploring coral reefs and studying the diverse species that live in them usually requires some travel. Most of the coral in the world lives in the Indo-Pacific region that includes the Red Sea, Indian Ocean, Southeast Asia and the Pacific. The next largest concentration of reefs is in the Caribbean and Atlantic. Oh, and then there is London.

London, you say?

Yes, the one in England. No, not the coral attaching itself to oil and gas platforms in the North Sea, nor the deep water coral there (admittedly far away from London, but perhaps in the general vicinity on the globe). We’re talking about the heart of London, specifically Cromwell Road. Divers will need to navigate their way there, but scuba gear and a boat won’t be required once they arrive. No worries! Divers can float right up to the Liquid Galaxy in the exhibit hall at the Natural History Museum. The museum opened CORAL REEFS: SECRET CITIES OF THE SEA on March 27th, and the exhibit runs through mid-September this year.


Actually, there are 3 Liquid Galaxies at the Natural History Museum in London to allow a maximum amount of exploration and a minimum of waiting in queue.


Last year, the Natural History Museum engaged End Point to provide an experience for their guests that would take them to coral reefs around the globe. Working with panoramic images captured and provided to the exhibit by the XLCatlin SeaView Survey, and at the direction of the exhibit's curators, End Point prepared three Liquid Galaxy server stacks with the selected SeaView Survey reef dives. Each of the Liquid Galaxies has three high definition screens.

The screens are angled in toward the viewer to compensate for the distortion that occurs when viewing panoramic images on a flat screen; it is a bit like having a bay window on the nose of a submarine. This immersive experience is a unique feature of Liquid Galaxy. The 360 degree spherical field of view of a panoramic photosphere will distort on a flat screen and present a fisheye view of the image. Liquid Galaxy is designed to compensate for the geometry of a panoramic image and, with the angled screens, place the user’s point of view in the center of the image.

The mission for CORAL REEFS: SECRET CITIES OF THE SEA required a simple and reliable display solution. The solution for this exhibit at the Natural History Museum uses locally stored panoramic images rather than downloaded content. The Liquid Galaxy hardware handles each XLCatlin SeaView Survey panosphere as a single image across the three screens, keeping the technical needs of the exhibit simple, cost effective, and robust. With the imagery stored locally, expensive bandwidth was not needed; a smaller, inexpensive network connection is used so that the health and operation of the Liquid Galaxy systems can be monitored remotely, a feature End Point has developed for all of our Liquid Galaxy clients. It is worth noting that since opening in March the exhibit has required almost no system intervention from End Point engineers. We are still monitoring the systems, and should technical intervention be needed to keep the exhibit operating, we are ready.

The unique ability to surround the viewer with imagery is what sets a Liquid Galaxy apart from a flat screen of any size. The Liquid Galaxy display provides context for the natural habitat of the 250 specimens and the live coral and fish that are also on display at the Museum. The Natural History Museum hopes to get guests so immersed in the imagery that they're able to feel the importance of the coral reefs to the planet. We are glad that the museum recognized that End Point’s Liquid Galaxy products and services are a great choice when immersion is important to the mission.

Old Dog & New Tricks - Giving Interchange a New Look with Bootstrap


So your Interchange based website looks like it walked out of a disco... but you spent a fortune getting the back end working with all those custom applications.... what to do?

Interchange is currently used in numerous very complex implementations. Adapting another platform to replace Interchange is a formidable task, and in many cases the "problem" that users are trying to "solve" by replacing Interchange can be remedied by a relatively simple face lift to the front end. One of the main attractions of Interchange in the past was its ability to scale from a small mom & pop eCommerce application to a mid-level support system for a larger company and its related back end systems. Once the connection to those back end systems has been created, and for as long as you use those related systems, Interchange will continue to be the most economical choice for the job. But that leaves the front end, the part your customers see and use (on their phones and tablets), as the most immediate target for "modernization".

Granted, there are new and alternate ways of accessing and presenting data and views to users, but many of those alternatives are also accessible and interoperable with Interchange. So while there are several topics that could be investigated at length, we will focus first on a popular front end framework that can help Interchange present a modern, responsive theme to end users. That framework is Bootstrap. Bootstrap is a good basic start for breathing new life into your Interchange application. It takes a reasonably generic approach to HTML, CSS, and JavaScript frameworking, which blends nicely with the Interchange application approach: providing a basic, repeatable, broad based and well supported foundation that can then be crafted into whatever the developer and their client may need.

My intent in this post is to give you links to the tools available for implementing Bootstrap in your Interchange application, and in subsequent posts to explain our development process, which may help you decide how to use the tools. As with any development, there are many ways to accomplish a given task, and knowing why certain things were done can help.

Alternatively, you may simply want to download the "strap" catalog and get started. At the time of this writing, Josh Lavin has been reviewing and refining the package, and the most recent version can be found at his GitHub account: Bootstrap template for Interchange. It will soon become the standard catalog template in Interchange. So, if you are not really interested in why and how this template was developed, feel free to go to the Git repository in the link, clone a copy, and get busy! The README is very informative, and if you are an Interchange developer you probably won't have much trouble implementing this.

If you are interested in some of the major changes that we made to the old "standard" templating approach, and how you can apply those changes to an existing catalog without having to start from scratch, stay tuned for my next post.