Welcome to End Point’s blog

Ongoing observations by End Point people

Three Things: frame box, Kiss Metrics, DUMP_VHOSTS

Here's my latest installment of sharing content that doesn't necessarily merit entire blog posts, but I still want to write it down somewhere so I'll remember!

1. Kiss Metrics on Design and Conversion

First up is an article sent over by Jon, from The Kiss Metrics Blog. Several of us at End Point have been part of redesigning the End Point website, and this article discusses how design decisions affect conversion, and why it's important to justify design decisions with metrics and testing.


2. DUMP_VHOSTS

Next up is a quick system admin command line tip that I came across while troubleshooting something with Richard:

apache2ctl -t -D DUMP_VHOSTS

We have a team of hosting experts here at End Point, and I am not often involved in that aspect of web application deployment. But I was recently trying to figure out why the default Apache site was being served after a domain had been updated to point to my server's IP address. I worked with Richard in a screen session, and he pointed out how the above command is helpful for examining how Apache handles all of its virtual hosts. We identified that the default host was being served for an incoming request that didn't match any virtual host definition.

3. frame box

I was recently reunited with a great tool that I lost track of: frame box. It’s a free online quick mockup tool, great for building a quick mockup to visually communicate an idea to a client, or for suggesting that clients put together a quick mockup to visually communicate an idea to a developer. Check it out!

frame box is a great tool for putting together quick mockups.

CanCan and RailsAdmin in Ecommerce

I've written about Rails Admin a lot lately. One thing I haven't written about is how it can leverage CanCan for user authorization. Once a user is authenticated via Devise, CanCan adds an additional layer for you to control how users can access and interact with data from the admin.

A Simple Example

CanCan requires that you create an Ability class where all user permissions are defined. A very basic example in RailsAdmin ecommerce context might look like this:

class Ability
  include CanCan::Ability

  def initialize(user)
    if user && user.is_admin?
      can :access, :rails_admin
      can :dashboard
      can :manage, [User, Product, Order]
    end
  end
end

Note that in the above code, a user who is an admin (where is_admin? returns a truthy response) has access to RailsAdmin, and the admin user can manage (create, read, update, destroy) all users, products, and orders.

Multi-Merchant Solution

Let's go a little deeper. Multi-merchant solutions are a frequent request in ecommerce. Let's say we have the following over-simplified data model, where users own and manage products and products are displayed by category:

The Ability class might look like:

class Ability
  include CanCan::Ability

  def initialize(user)
    if user && user.is_store_owner?
      can :access, :rails_admin
      can :dashboard
      can :manage, Product, :store_owner_id => user.id
      can :read, Category, :visible => true
    end
    if user && user.is_admin?
      can :access, :rails_admin
      can :dashboard
      can :manage, [User, Product, Category]
    end
  end
end

With the above Ability definition, nothing changes for an admin user. A store owner can create products and manage (read, update, destroy) those same products. A store owner can also read categories where the category visible attribute is true. As you can see, conditions can be passed in for ability definitions. Directly from the CanCan documentation: "Anything that you can pass to a hash of conditions in Active Record will work here. The only exception is working with model ids. You can't pass in the model objects directly, you must pass in the ids."
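The hash-of-conditions behavior quoted above is easy to see with a dependency-free sketch (this is not CanCan's internals, just an illustration of the matching rule): every key in the conditions hash must equal the corresponding attribute on the record, which is why ids are passed rather than model objects.

```ruby
# Minimal stand-in for a model; not an ActiveRecord class.
Product = Struct.new(:store_owner_id, :visible)

# A hash of conditions matches a record when every key/value pair
# equals the corresponding attribute on the record.
def matches_conditions?(record, conditions)
  conditions.all? { |attr, value| record.send(attr) == value }
end

mine   = Product.new(42, true)
theirs = Product.new(7, true)

puts matches_conditions?(mine,   :store_owner_id => 42)   # true
puts matches_conditions?(theirs, :store_owner_id => 42)   # false
```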

Custom Abilities

In addition to the CRUD methods (create, read, update, destroy), CanCan gives you the ability to define additional custom abilities. RailsAdmin has special abilities (:history, :show_in_app, :dashboard), but you can create custom actions with RailsAdmin where CanCan abilities can be managed. For example, Marina wrote about how you might create a custom task to approve reviews here. And I described how to create a custom action to import data here, where CanCan is used to define which models can be imported and which users can do the import.


CanCan is decoupled from authentication and the notion of roles, which yields quite a bit of flexibility in its use. Combine RailsAdmin, CanCan, Devise, and our mountable ecommerce Rails Engine, and you've got a powerful set of tools for development of a custom Ruby on Rails ecommerce application with a powerful admin interface, flexible user authorization and authentication, and an extensible ecommerce solution.

Debugging Celery Tasks in Django Projects

I recently had the opportunity to work on a Django project that was using Celery with RabbitMQ to handle long-running server-side processing tasks. Some of the tasks took several hours to complete. The tasks had originally been executed with the at command and others had been managed with cron jobs. The client had started to migrate several of the tasks to use Celery when I joined the project.

As I started to debug the Celery tasks, I quickly discovered there were many moving parts involved and it was not immediately clear which piece of the system was the cause of the problem. I was working in a local development environment that closely matched the production system. The dev environment consisted of a virtualenv created using the same method Szymon wrote about in his article Django and virtualenvwrapper. Django, Celery and their related dependencies were installed with pip and I installed RabbitMQ with Homebrew. After activating my virtualenv and starting everything up (or so I thought), I jumped into an IPython shell and began to debug the tasks interactively. Some tasks completed successfully but they finished almost instantaneously, which didn't seem right. The client had experienced the same issue when they executed tasks on their development server.

Because I was joining an existing project in progress, the system administration and configuration had already been taken care of by other members of the team. However, in the process of configuring my local development server to mimic the production systems, I learned a few things along the way, described below.


RabbitMQ is a message broker; at its most basic it sends and receives messages between sender (publisher) and receiver (consumer) applications. It's written in Erlang which helps to make it highly parallel and reliable. The RabbitMQ web site is a great place to learn more about the project. For my purposes I needed to create a user and virtual host (vhost) and set up permissions for Celery to communicate with the RabbitMQ server. This was done with the rabbitmqctl command. I issued the following command to start up the server and let the process run in the background.

rabbitmq-server -detached

I also enabled the management plugin which provides both a web-based UI and a command line interface for managing and monitoring RabbitMQ. This is what the web-based UI looks like:


Celery works very well with Django thanks in large part to the django-celery module. The django-celery module includes the djcelery app which can be plugged in to the Django admin site for your project. Connecting Django to Celery and RabbitMQ requires a few simple steps:

  1. Add djcelery to the list of INSTALLED_APPS in the settings.py file for the project.
  2. Add the following lines to settings.py:
    import djcelery
    djcelery.setup_loader()
  3. Create the celery database tables using the syncdb management command.
  4. Configure the broker settings in settings.py:
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "celery_user"
    BROKER_PASSWORD = "celery_password"
    BROKER_VHOST = "celery"


With the RabbitMQ server up and running and Django configured to connect to Celery the last few steps involved starting up the Celery worker and its related monitoring apps. The Celery daemon (celeryd) has lots of options that you can check out by running the following command:

python manage.py celeryd --help

For my purposes, I wanted Celery to broadcast events which the various monitoring applications could then subscribe to. It would also be good to print some helpful debugging info to the logs. I started up the Celery worker daemon with the following command:

python manage.py celeryd -E --loglevel=DEBUG

Because I specified the -E flag, the celeryev application could be used to monitor and manage the Celery worker inside a terminal which was very helpful:

For Django to capture and save Celery task information to the database, the celerycam application needs to be running. This command line app takes a snapshot of Celery every few seconds or at an interval you specify on the command line:

python manage.py celerycam

With celerycam running, the Django admin interface is updated as Celery tasks are executed:

You can also view the detail for a particular task including any error messages from the task code:

With RabbitMQ, celeryd and celerycam ready to go, the Django development server could be started to begin testing and debugging Celery task code. To demonstrate this workflow in action, I wrote a simple Celery task that could be used to simulate how Django, Celery and RabbitMQ all work together.

from celery.decorators import task
import time
import random

# A simple task to demonstrate Celery & djcelery:
# sleep for a random interval to simulate a long-running job.
@task
def add(x, y):
    delay = random.randint(1, 60)
    time.sleep(delay)
    return x + y

Tying it all Together

With everything configured, I was ready to get to work debugging some Celery tasks. I set up a dashboard of sorts in tmux to keep an eye on everything as I worked on the code for a particular task:

Clockwise from the bottom left you'll see an IPython shell to debug the code interactively, the Django development server log, celeryev, Celery daemon (with debugging info) and the task code in Vim.

When I started developing task-related code I wasn't sure why my changes were not showing up in the Celery or Djcelery logs. Although I had made some changes, the same errors persisted. When I looked into this further I found that Celery caches the code used for a particular task and re-uses it the next time said task is executed. In order for my new changes to take effect I needed to restart the Celery daemon. As of Celery version 2.5 there is an option to have Celery autoreload tasks. However, the version of Celery used in this client project did not yet support this feature. If you find yourself working with Django, Celery and RabbitMQ, I hope you'll find this helpful.

Dealing with Rails Application Complexity - A Report from MWRC

One of the major themes coming out of the 2012 Mountain West Ruby Conference (MWRC) was the rising complexity of Ruby applications, in particular with Rails. The focus of many of the talks was directed at the pain many of us are feeling with our bloated Rails models. When I first started developing with Rails back in 2007, much of the focus was on moving application logic from the view to the controller. Then, a few years ago, the "thin controller, fat model" mantra had us all moving our code from the controller to the model. While each of these steps was an improvement, Rails developers are now dealing with what is affectionately referred to as a "stinking pile of poo" in our models.

Having seen my share of fat models of late, this topic really grabbed my interest. I started thinking about this problem a couple of months ago while working on a rather large Rails application. Thanks to some good folks in the #urug (Utah Ruby Users Group) channel on Freenode I was pointed to the article Rails is Not Your Application which got me thinking about better ways to handle unmaintainable and unapproachable models in Rails. I was happy to see that I'm not the only one thinking about this. In fact, many very smart developers in the Ruby community are also dealing with this and coming up with very interesting ways of cracking the nut.

What follows is a summary of 5 talks given at MWRC that touched on this subject and some of the solutions they presented. Plenty of links have been provided to give you a chance to read up more on each approach so I won’t go into all the nitty-gritty details here.

Mike Gehard ("Better Ruby Through Design Patterns", Lead Software Engineer at LivingSocial)

Mike talked about using classic object oriented design patterns as a way to simplify your back-end business logic. This approach comes from some great work done by Avdi Grimm in his book Objects on Rails. Rather than putting all your logic in Rails models, Mike talked about using classes that aren't directly tied to the database and are more based on behaviour than a table schema. When this is done, we get code that can handle change and can be tested in isolation. Much can be said about this approach and indeed much has! Take a look at the book, which can be read online for free.

Mitchell Hashimoto ("Middleware, a General Abstraction", Creator of Vagrant and Operations Engineer at Kiip)

Vagrant is the new darling of many developers and for good reason. Mitchell's work on Vagrant is nothing short of game-changing for development environments. While developing Vagrant, Mitchell also ran up against some complexity in handling the provisioning of Virtual Box virtual machines. Taking some inspiration from Rack (see "What is Rack middleware?"), he implemented a middleware pattern in his code. Middleware embraces the concept of passing a state bag to a series of classes that all implement a similar API. Each class can then process the state bag (which is a Ruby Hash object) according to its needs and then pass it on to the next middleware. Those familiar with Rack will be right at home with this pattern. I could easily see this pattern as a replacement or used in conjunction with the state machine gem that is widely used for similar processing. At the end of Mitchell's talk, many of us in the audience wondered if the middleware builder could be extracted from Vagrant and put in its own gem. At the MWRC Hackfest, Mitchell did just that and made it available on GitHub. Middleware solves complexity by explicitly showing the order that code has to run, makes it easy to see code dependencies, simplifies tests, and provides extensibility (via subclass).
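The middleware idea is simple enough to sketch in a few lines of plain Ruby (the class names and hash keys here are illustrative, not Vagrant's actual API): each middleware wraps the next one, reads or mutates the shared state bag (a plain Hash), and passes it along the chain.

```ruby
# First middleware: normalize the input in the state bag.
class Normalize
  def initialize(app); @app = app; end
  def call(env)
    env[:name] = env[:name].strip.downcase
    @app.call(env)
  end
end

# Second middleware: act on the normalized state.
class Greet
  def initialize(app); @app = app; end
  def call(env)
    env[:greeting] = "hello, #{env[:name]}"
    @app.call(env)
  end
end

# The innermost "app" simply returns the state bag.
terminator = lambda { |env| env }

# Build the chain outside-in, just as Rack does.
stack = Normalize.new(Greet.new(terminator))

result = stack.call(:name => "  Mitchell  ")
puts result[:greeting]  # => hello, mitchell
```

Because every class implements the same `call(env)` API, the run order is explicit and any step can be tested in isolation by handing it a hash.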

Stephen Hageman ("Migrating from a Single Rails App to a Suite of Rails Engines", Software Engineer at Pivotal Labs)

In his lightning talk, Stephen explained how complex Rails applications can be broken down into Rails Engines. His talk was a summary of what he wrote in a post from his Pivotal Labs blog. Anyone who's built a Rails application using Spree will be familiar with this method. Breaking an application into functional engines, which can now be mounted with routes in Rails 3, provides the ability to simplify your test suite and even isolate certain functionality to individual web servers.

Jack Danger Canty ("Strong Code", Developer at Square)

Rails application complexity comes in many forms, and Jack Danger Canty presented the problem of not only having large models, but many models that interact with each other. Jack measures complexity by the number of possible connections each class has. His slides contain some great graph diagrams that illustrate the problem: each node in the graph represents a class in your Rails app, and the relationships between classes are drawn as edges between the nodes. The maximum number of edges works out to v = n(n-1)/2, where n represents the number of nodes and v represents the number of edges in your graph. So for 150 nodes, there's a possibility of over 11,000 edges! That complexity is often what makes big Rails apps difficult to adapt and change.
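The quadratic growth is easy to verify with a couple of lines of Ruby, using the standard complete-graph pair count:

```ruby
# Maximum number of distinct connections (edges) among n fully
# interconnected classes: one edge per unordered pair of nodes.
def max_edges(n)
  n * (n - 1) / 2
end

puts max_edges(15)    # => 105
puts max_edges(150)   # => 11175
```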

The solution that Jack presents is to reduce complexity in three ways: delete code, use a library or gem, and spin off a service. Deleting code can start with any functionality that doesn't bring any value to the users of your app. As developers, we often try to plan for the unknown by introducing extra code that's there "just in case". Deleting code requires a YAGNI attitude. If you don't need it, get rid of it. Many times code that is reused throughout the application can separated and isolated into a library. The advantage of doing this is that libraries can be maintained outside the application and if properly done, can contain their own test suite. Libraries can also be packaged in Ruby gems so that they are easier to keep isolated and avoid the temptation of constant tweaking. The last suggestion to spin off a service was also talked about by the next speaker so I'll cover the details there.

BJ Clark (“Service Oriented Architecture #realtalk”, Engineer at Goldstar)

Let's be honest. Throwing the phrase "Service Oriented Architecture" at a bunch of Ruby and Rails developers is courageous. SOA is a title that brings with it a ton of baggage that rubyists just don't like to deal with (SOAP, WSDL, ESB). In fact, this tweet pretty much sums up the community's thoughts on the subject. Indeed, BJ Clark spent a good portion of his talk convincing us that we really don't want to do this (#realtalk). Once we were all fairly warned, BJ talked about how they use SOA at Goldstar with success and gave some very insightful tips for those willing to brave this kind of architecture.

To be clear, BJ isn't advocating an enterprise-class SOA with an Enterprise Service Bus and Object Brokering, or even using SOAP for that matter. What he's talking about is breaking your large Rails application into smaller applications that each have a specific purpose. These applications then communicate with each other via HTTP behind your firewall. To keep things simple, the message format can be (should be) JSON. The thinking behind this is that smaller applications run faster than big ones, and in many cases splitting things out simply makes sense. For example, your main application should not be concerned with sending email to your customers, so you could spin off a service that receives a simple JSON message from the main application and does the email processing for you.
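As a concrete illustration of the email example, here is a hypothetical sketch of the kind of JSON message the main app might post to a mail service over HTTP (the field names are made up for illustration; they are not from BJ's talk):

```ruby
require 'json'

# The main app serializes a small, purpose-specific message...
message = {
  :to       => "customer@example.com",
  :template => "order_confirmation",
  :order_id => 1234
}.to_json

# ...and the receiving service decodes it and does the slow work
# out-of-band, keeping the main request cycle fast.
decoded = JSON.parse(message)
puts decoded["template"]  # => order_confirmation
```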

To be successful doing SOA, BJ states that you need to have a small team where everyone knows about and works on all the services. This avoids potentially constraining ideas on what a service should and shouldn’t do. He also recommends that you read "Service-Oriented Design with Ruby and Rails" by Paul Dix to get you started but to adapt the concepts for your needs. For example, Dix states one advantage to having services is that they can automatically serve as an external API, but BJ believes that it's better to have an app serve as your API that talks to your services.


It's evident that the next big mountain to climb in Rails application development is going to be solving the problem of increased complexity. While Rails 3 introduced greater modularity to limit the framework's footprint, application developers still need to put some thought into how they are building their applications on that framework. While any of the ideas above can help with that effort, it is probably going to be a combination of several of them that develop into practices that we can implement to help cut down on complex and unmanageable code.

Three Things: Startups, Rails news, jQuery index

I recently had a conversation with Jon about End Point blogging, microblogging, and tweeting. Many of us End Pointers have tips and tools that we encounter regularly that aren't worthy of an entire blog post, but are worthy of sharing. Here's my first attempt at sharing smaller bits of info – stay tuned to see how it works out.

1. Paul Graham’s Ambitious Startup

Here is an interesting recent article by Paul Graham entitled Frighteningly Ambitious Startup Ideas. It's great. Read it.

2. Rails Vulnerability Hack

If you aren’t up to speed on things going on in the Rails world, check out this recent commit. A GitHub user was able to exploit Rails' mass-assignment vulnerability to commit to the Rails core. Check out a few more related links at A Fresh Cup’s post on the incident.
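The underlying problem is easy to demonstrate with a framework-free sketch (this is an illustration of the general pattern, not Rails' actual code): if a model blindly assigns every submitted attribute, a crafted request can set fields the form never exposed, such as an admin flag.

```ruby
# A toy model that mass-assigns whatever it is given -- no whitelist!
class User
  attr_accessor :name, :admin

  def assign_attributes(params)
    params.each { |key, value| send("#{key}=", value) }
  end
end

user = User.new
# An attacker adds an unexpected "admin" key to the request parameters:
user.assign_attributes("name" => "mallory", "admin" => true)
puts user.admin  # => true
```

Whitelisting assignable attributes (e.g. with attr_accessible in Rails of that era) is what closes this hole.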

3. jQuery's index method

I recently came across the index method in jQuery, and wanted to share an example of its use.

I’m using jQuery UI’s accordion on four categories (Period, Genre, Theme, Nationality) that have a set of options. A user can click any of the options to filter products, e.g. clicking on Folk Songs in the screenshot to the right would bring up products that have a Genre of Folk Songs. On the subsequent page load, the accordion region that includes the filtered option must be visible. Here’s what I came up with using the index method:

$(function() {
    var active = 0;
    if($('#accordion a.current').length) {
        active = $('#accordion div').index($('#accordion a.current').parent());
    }
    $('#accordion').accordion({ active: active, autoHeight: false });
});

And here's how it breaks down:

  • First, the default active region is set to 0.
  • Next, if an accordion link has a class of current (i.e. a filtered option is selected), the index method is used to determine the position of that link’s parent div among all the accordion regions.
  • The accordion UI is created, set with the active option, which contains the selected link or defaults to the first accordion region.

jQuery's index method was used to set the active accordion region.

MWRC Highlights Part 1 of 2

I attended the 2012 Mountain West Ruby Conference in Salt Lake City last week. There were a lot of cool topics presented at this conference, but I've been suffering serious analysis paralysis trying to pick one topic to focus on here. So, I'm taking the easy way out. I'm going to run through some of the projects and tools mentioned during the conference that I flagged in my notes to check out further.

Sidekiq, a multi-threaded drop-in replacement for Resque

  • Sidekiq requires *much* less RAM to do the same amount of work as the single-threaded Resque, while providing a very similar API to Resque. It’s been designed to behave like Resque as much as possible and as such, would be a very easy transition for anyone that’s used Resque before.
  • By Mike Perham
  • Get it at:
  • Recommended by: Tony Arcieri

Book: Growing Object-Oriented Software Guided by Tests (aka GOOS)

  • The "GOOS book" was recommended by a number of speakers at the conference. One downside (for me) is that the Auction Sniper app developed in the book is done in Java. However, there are now ports of that code available in Ruby, Scala, C# and Groovy. Check out the book's website. The table of contents is very detailed, so you know what you're getting into.
  • By Steve Freeman and Nat Pryce
  • Get it at
  • Recommended by: Multiple presenters

PRY, a very powerful irb replacement and Rails debugging tool

"Ruby is a tool that gets out of your way. Build that for your clients."

  • More an ethos than a project or tool, this is a relatively direct quote from Jamis Buck in his talk entitled, "It's the Little Things." Part of what he talked about were the features or patterns of Ruby that tickle part of our brain and just feel right, natural or easy-flowing. The tools or apps that we build for our clients should have that same sense of flow. I think achieving that requires a perspective shift that's not easy for many developers, but it's absolutely something worth aspiring to. Our clients deserve that.
  • As said by Jamis Buck from

Attending this conference was honestly pretty inspiring. It was my first Ruby conference, and it was exciting to hear some very knowledgeable people speak passionately about what they work on and how they work. I definitely have a lot more reading and investigation to do. Part 2 of this article will cover the remaining conference topics that I felt deserved a shout out. I'll probably do that in the next day or two.

Puppet custom facts and Ruby plugins to set a homedir

Puppet is an indispensable tool for system admins, but it can be tricky at times to make it work the way you need it to. One such problem a client of ours had recently was the need to track a file inside a user's home directory via Puppet (a common task). However, for various reasons the user's home directory was not the same on all the servers! As Puppet uses hard-coded paths to track files, this required the use of a custom Puppet "fact" and a helper Ruby script plugin to solve.

Normally, you can use Puppet to track a file by simply adding a file resource section to a puppet manifest. For example, we might control such a file inside a manifest named "foobar" by writing the file puppet/modules/foobar/init.pp as so:

class foobar {

  user { "postgres":
      ensure     => present,
      managehome => true;
  }

  file { "/home/postgres/.psqlrc":
      ensure      => present,
      owner       => postgres,
      group       => postgres,
      mode        => 644,
      source      => [
          "puppet:///foobar/$pg_environment/psqlrc",
          "puppet:///foobar/default/psqlrc"
      ],
      require     => User["postgres"];
  }
}

This is a bare-bones example (the actual username and file were different), but gets the idea across. While we want to ensure that the postgres user has the correct .psqlrc file, we also need to make sure that the postgres user itself exists. Hence the user section at the top of the script. This will ask puppet to create that user if it does not already exist. We also added the "managehome" parameter, to ensure that the new user also has a home directory. If this parameter is false (or missing), puppet runs a plain useradd command (or its equivalent); if the parameter is true, it adds the -m or --create-home argument, which is what we need, as we need to monitor a file in that directory.

As there is no point in trying to manage the .psqlrc file before the user is created, we make the User creation check a pre-requisite via the "require" parameter; basically, this helps puppet determine the order in which it runs things (using a directed acyclic graph, a feature that should be familiar to git fans).
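That ordering idea can be illustrated with Ruby's standard tsort library (a toy sketch, not Puppet's internals): the "require" relationships form a DAG, and a topological sort yields a run order where every resource comes after its dependencies.

```ruby
require 'tsort'

# A tiny dependency graph: each key maps to the resources it requires.
class ResourceGraph
  include TSort

  def initialize(deps)
    @deps = deps
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(node, &block)
    (@deps[node] || []).each(&block)
  end
end

# The .psqlrc file requires the postgres user, so the user sorts first.
deps = { "File[.psqlrc]" => ["User[postgres]"], "User[postgres]" => [] }
puts ResourceGraph.new(deps).tsort.inspect
# => ["User[postgres]", "File[.psqlrc]"]
```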

The file resource in this manifest, named "/home/postgres/.psqlrc", ensures that the file exists and matches the version stored in puppet. Most of the parameters are straightforward, but the source is not quite as intuitive. Here, rather than giving a simple string as the value for the parameter source, we give it an array of strings. Puppet will walk through the list until it finds the first one that exists, and use that for the actual file to use as /home/postgres/.psqlrc on each box using this manifest. This allows us to have different versions of the .psqlrc file for different arbitrary classes of boxes, but without having to write a separate manifest for each one. Instead, they all use the same manifest and simply change the $pg_environment variable, usually at the puppet "role" level.

The syntax puppet:///foobar/ is a way of telling puppet that the file is underneath the main puppet directory, inside the "foobar" directory, and in a subdirectory called "files". The level above "files" is where one would create different subdirectories based on $pg_environment, so your module might look like this:

    foobar/
       ├─manifests/
       │     └─init.pp
       └─files/
            ├─production/
            │     └─psqlrc
            ├─development/
            │     └─psqlrc
            └─default/
                  └─psqlrc

In the above, we have three versions of the psqlrc file stored: one for boxes with a $pg_environment of "production", one for a $pg_environment of "development", and a default one for boxes that do not have $pg_environment set (or whose $pg_environment string does not have a matching subdirectory).

All well and good, but the problem comes when the user in question already exists on more than one server, and has a different home directory depending on the server! We can no longer say "/home/postgres/.psqlrc", because on some boxes, what we really want is "/var/lib/pgsql/.psqlrc". There are a couple of wrinkles that prevent us from simply saying something like "$HOME/.psqlrc".

The first wrinkle is that Puppet runs as root, and what we need here is the $HOME of the postgres user, not root. The second wrinkle is that, even if we were to come up with a clever way to figure it out (say, by parsing /etc/passwd with an exec resource), we cannot add run-time code to our manifest and have it get stored into a variable. The reason is that the first thing Puppet does on a client is compile all the manifests into a static catalog, which is then applied. Which introduces another wrinkle: even if we were to somehow know this information beforehand, what about the case where the user does not exist? We can ask puppet to create the user, but it is way, way too late in the game at that point to apply the location of the new home directory to our manifest.

We stated that the first thing puppet does is compile a catalog, but that's not strictly true: it actually walks through the manifests and does a few other things as well, including executing any plugins. We can use this fact to create a custom fact for each client server - this new fact will contain the location of the home directory for the postgres user on that server.

There are a few steps to get it all working. The first thing to know is that puppet provides a number of "facts" that get stored as simple key/value pairs, and these are available as variables you can use inside your manifests. For example, you can put PostGIS on any of your hosts that contain the string "gis" somewhere in their hostname by saying:

  if $::hostname =~ /gis/ {
    package { "postgis":
        ensure => latest;
    }
  }

This list of facts can be expanded by the use of "custom facts", which basically means we add our own variables that we can access in our manifests. In this particular case, we are going to create a variable named "$postgres_homedir", which we can then utilize in our manifest.

A custom fact is created by a Ruby function: this function should be in its own file, located in the "lib/facter" directory of the relevant module. So in our case, we will create a small ruby file named "postgres_homedir.rb" and stick it here:

    foobar/
       ├─lib/
       │  └─facter/
       │       └─postgres_homedir.rb
       ├─manifests/
       │     └─init.pp
       └─files/
            ├─production/
            │     └─psqlrc
            ├─development/
            │     └─psqlrc
            └─default/
                  └─psqlrc

The function itself follows a fairly standard format: The only unique parts are the actual system calls and the name of the variable:

# postgres_homedir.rb

Facter.add("postgres_homedir") do
  setcode do
    system('useradd -m postgres 2>/dev/null')
    Facter::Util::Resolution.exec('/bin/grep "^postgres:" /etc/passwd | cut -d: -f6').chomp
  end
end

Since we've already shown how having Puppet create the user happens way too late in the game, and because we know that the foobar module always needs that user to exist, we've moved the user creation to a simple system call in this Ruby script. The -m makes sure that a home directory is created, and then the next line extracts the home directory and stores it in the global puppet variable $postgres_homedir. The 'useradd' line feels the least clean of all of this, and alternatives are welcome, but having the system do a 'useradd' and returning a (silenced) error each time the puppet client is run seems a fairly small price to pay for having this all work (and shorter than checking for existence, doing a conditional, etc).
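One such alternative (a sketch only, not tested against the client's setup) is to look the user up with Ruby's standard Etc module and only shell out to useradd when the lookup fails, which avoids the silenced error on every run:

```ruby
require 'etc'

# Create the user only if the passwd lookup fails, then return the
# passwd entry so the home directory can be read from it directly.
def ensure_user(name)
  Etc.getpwnam(name)          # raises ArgumentError if the user is absent
rescue ArgumentError
  system("useradd -m #{name}")
  Etc.getpwnam(name)
end

# The home directory comes straight from the passwd entry:
puts ensure_user('root').dir
```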

Now that we have a way of knowing what the home directory of the postgres user will be *before* the manifest is compiled into a catalog, we can rewrite puppet/foobar/manifests/init.pp like so:

class foobar {

  file { "postgres_psqlrc":
      path        => "${::postgres_homedir}/.psqlrc",
      ensure      => present,
      owner       => postgres,
      group       => postgres,
      mode        => 644,
      source      => [
          "puppet:///foobar/$pg_environment/psqlrc",
          "puppet:///foobar/default/psqlrc"
      ];
  }
}

Voila! We no longer have to worry about the user existing, because we have already done that in the Ruby script. We also no longer have to worry about what the home directory is set to, for we have a handy top-level variable we can use. Note the use of the :: to indicate this is in the root namespace; Puppet variables have a scope and a name, such as $alpha::bravo.

Rather than leave the title of this resource as the "path" (most puppet resources have sane defaults like that), we have explicitly set the path, as having a variable in the resource title is ugly and can make referring to it elsewhere very tricky. We also changed the on-disk copy of .psqlrc to psqlrc: while normally the files are the same, there is no reason to keep it as a "hidden" file inside the puppet repo.

Let's take a look at this module in action. We'll manually run puppetd on one of our clients, using the handy --test argument, which expands to --onetime --verbose --ignorecache --no-daemonize --no-usecacheonfailure. Notice how our plugins are retrieved and loaded before the catalog is built, and that our postgres user now has the file in question:

$ puppetd --test
info: Retrieving plugin
notice: /File[/var/lib/puppet/lib/facter/postgres_homedir.rb]/ensure: 
  content changed '{md5}0642408678c90dced5c3e34dc40c3415'
    to '{md5}0642408678c90dced5c3e34dc40c3415'
info: Loading downloaded plugin /var/lib/puppet/lib/facter/postgres_homedir.rb
info: Caching catalog for
info: Applying configuration version '1332065118'
notice: //foobar/File[postgres_psqlrc]/ensure: content changed 
    to '{md5}08731a768885aa295d3f0856748f31d5'
Changes:
            Total: 1
Resources:
          Applied: 1
      Out of sync: 1
        Scheduled: 195
            Total: 179
Time:
 Config retrieval: 1.63
             Exec: 0.00
             File: 6.28
       Filebucket: 0.00
            Group: 0.00
        Mailalias: 0.00
          Package: 0.13
         Schedule: 0.00
          Service: 0.51
             User: 0.01
            Total: 8.56
notice: Finished catalog run in 13.26 seconds

So we were able to solve our original problem via the use of custom facts, a Ruby plugin, and some minor changes to our manifest. While you don't have to go through all of this effort often, it's nice that Puppet is flexible enough to allow you to do so!

Liquid Galaxy in GSoC 2012!

Once again The Liquid Galaxy Project has been accepted for the Google Summer of Code! Google Summer of Code is a tremendous program that provides an excellent opportunity for talented undergraduate and graduate students to work on Open Source software guided by a mentor. Students receive $5000 stipends for successfully completing their summer projects. Last year The Liquid Galaxy Project had three GSoC slots for students, and this year we are hoping for at least that many slots again. I know it is not the point, but did I mention that participation in the program is a very nice credit for students to have on their resumes? ;-)

Right now we are in the "would-be student participants discuss application ideas with mentoring organizations" phase of the program. Interested students should contact the project's mentors and admins by emailing or by jumping into the #liquid-galaxy Freenode IRC channel. Applicants are well advised to take advantage of the opportunity to consult with project mentors in developing their applications. Student applications are being accepted until April 6 and should be emailed to

The Liquid Galaxy GSoC 2012 Ideas Page is at We are interested in project proposals based on all the topics listed there and in other ideas from students for projects that will advance the capabilities of Liquid Galaxy.

The overall timeline for this year's Google Summer of Code program is at

If you are a student with programming chops who likes Open Source software I highly recommend that you look at the Google Summer of Code program and apply to one or more of the great projects in the program! If you know of students who you think would be a good fit for the program you'll be doing a good deed by encouraging them to check it out!

Firebug in Action: CSS Changes

When I work with clients, I encourage them to use tools to improve efficiency for web development. Sometimes my clients want styling (CSS) adjustments like font-size, padding, or orientation changes, but they aren't sure what they want before they've seen it applied to real content. I recommend embracing a tool such as Firebug for examining temporary on-page CSS edits. Here's a quick video that demonstrates Firebug in action as I try out a few adjustments to End Point's home page.

Some of the changes I test out in the video include:

  • Font-color changes
  • Deleting DOM elements
  • Padding, margin adjustments
  • Background color changes

Firebug offers a lot more functionality but the video covers interactive CSS changes only. Read more about Firebug's features here. Chrome has similar functionality with the Developer Tools included in the core software. There are similar tools in the other browsers, but I develop in Chrome or Firefox and I'm not familiar with them.

Check JSON responses with Nagios

As the developer's love affair with JSON continues to grow, the need to monitor successful JSON output does as well. I wanted a Nagios plugin which would do a few things:

  • Confirm the content-type of the response header was "application/json"
  • Decode the response to verify it is parsable JSON
  • Optionally, verify the JSON response against a data file

Verify content of JSON response

For the most part, Perl's LWP::UserAgent class makes short work of the first requirement. Using $response->header("content-type") the plugin is able to check the content-type easily. Next up, we use the JSON module's decode function to see if we can successfully decode $response->content.

Optionally, we can give the plugin an absolute path to a file containing a Perl hash, which is iterated through in an attempt to find corresponding key/value pairs in the decoded JSON response. For each key/value in the hash that isn't found in the JSON response, the plugin appends the expected and actual results to the output string and exits with a critical status. Currently there's no way to check that a key/value does not appear in the response, but feel free to make a pull request on check_json on my GitHub page.
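For illustration, the three checks can be sketched in a few lines of Ruby (the real plugin is Perl; the function name and argument handling here are simplified stand-ins, with the content type and body passed in rather than fetched over HTTP):

```ruby
require 'json'

# Nagios exit codes (standard plugin convention)
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

# Sketch of the plugin's three checks; content_type and body would come
# from the HTTP response in the real plugin.
def check_json(content_type, body, expected = {})
  unless content_type.start_with?('application/json')
    return [CRITICAL, "bad content-type: #{content_type}"]
  end
  begin
    data = JSON.parse(body)
  rescue JSON::ParserError => e
    return [CRITICAL, "unparsable JSON: #{e.message}"]
  end
  mismatched = expected.reject { |key, value| data[key] == value }
  unless mismatched.empty?
    return [CRITICAL, "expected #{mismatched.inspect} in #{data.inspect}"]
  end
  [OK, 'JSON OK']
end

check_json('application/json', '{"status":"up"}', 'status' => 'up')
# => [0, "JSON OK"]
```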

A Cache Expiration Strategy in RailsAdmin

I've been blogging about RailsAdmin a lot lately. You might think that I think it's the best thing since sliced bread. It's a great configurable administrative interface compatible with Ruby on Rails 3. It provides a configurable architecture for CRUD (create, read, update, delete) management of resources, with many additional user-friendly features like search, pagination, and flexible navigation. It integrates nicely with CanCan, an authorization library. RailsAdmin also allows you to introduce custom actions such as importing or approving items.

Whenever you are working with a gem that introduces admin functionality (RailsAdmin, ActiveAdmin, etc.), the controllers that provide resource management do not live in your code base. In Rails, typically you will see cache expirations in the controller that provides the CRUD functionality. For example, in the code below, a PagesController will specify caching and sweeping of the page which expires when a page is updated or destroyed:

class PagesController < AdminController
  caches_action :index, :show
  cache_sweeper :page_sweeper, :only => [ :update, :destroy ]
end
While working with RailsAdmin, I've come up with a different solution for expiring caches without extending the RailsAdmin functionality. Here are a couple of examples:

Page Caching

On the front-end, I have standard full page caching on static pages. In this case, the config/routes.rb maps wildcard paths to the pages controller and show action.

match '*path' => 'pages#show'

The controller calls the standard caches_page method:

class PagesController < ApplicationController
  caches_page :show

  def show
    @page = Page.find_by_slug(params[:path])
  end
end

A simple ActiveRecord callback is added to clear the page cache:

class Page < ActiveRecord::Base

  after_update :clear_cache

  def clear_cache
    # expire the cached page for this slug
    ActionController::Base.expire_page("/#{slug}")
  end
end

Fragment Caching

When a page can't be fully cached, I might cache a view shared across the application. In the example below, the shared view is included in the layout – it's generated dynamically but the data does not change often, which makes it suitable for fragment caching.

<% cache "navigation" do -%>
  <% Category.each do |category| -%>
    <%= link_to, category_url(category) %>
  <% end -%>
<% end -%>

Inside the model, I add the following to clear the fragment cache when a category is created, updated, or destroyed:

class Category < ActiveRecord::Base
  after_create :clear_cache
  after_update :clear_cache
  before_destroy :clear_cache

  def clear_cache
    ActionController::Base.new.expire_fragment("navigation")
  end
end


One thing that's noteworthy is that expire_page is a class method on ActionController::Base, while expire_fragment is an instance method (see here versus here). Action cache expiration via ActiveRecord callbacks should work similarly, with expire_action available as a class method (reference).

An alternative approach here would be to extend the generic RailsAdmin admin controller to introduce a generic sweeper. However, the sweeper would have to determine what model was modified and what to expire. This could be implemented and abstracted elegantly, but in my application I preferred simple ActiveRecord callbacks because the caching was limited to a small number of models.

RailsAdmin: A Custom Action Case Study

RailsAdmin is an awesome tool that can be used efficiently right out of the box. It provides a handy admin interface, automatically scanning through all the models in the project and enhancing them with List, Create, Edit, and Delete actions. However, sometimes we need to create a custom action for a more specific feature.

Creating The Custom Action

Here we will create an "Approve Review" action that the admin will use to moderate user reviews. First, we need to create an action class in the RailsAdmin::Config::Actions namespace, in a file named rails_admin_approve_review.rb placed in the "#{Rails.root}/lib" folder. Here is the template for it:

require 'rails_admin/config/actions'
require 'rails_admin/config/actions/base'

module RailsAdminApproveReview
end

module RailsAdmin
  module Config
    module Actions
      class ApproveReview < RailsAdmin::Config::Actions::Base
        # ... instance options go here (see below)
      end
    end
  end
end
By default, all actions are present for all models. We will only show the "Approve" action for models that actually support it and are not yet approved, i.e. models that have an approved attribute defined and set to false:

register_instance_option :visible? do
  authorized? && !bindings[:object].approved
end
RailsAdmin has a lot of configuration options. We will use one of them to specify that the action acts on the object (member) scope:

register_instance_option :member? do
  true
end
We will also specify a CSS class for the action (from a grid of icons), so the link will display a little checkmark icon:

register_instance_option :link_icon do
  'icon-check'
end

Now, this is what I call "customized"!

The last step is, perhaps, the most important, because it actually processes the action. In this case, the action sets the approved attribute to true for the object. The code needs to be placed into the controller context. To do so we wrap it in the following block:

register_instance_option :controller do
  proc do
    @object.update_attribute(:approved, true)
    flash[:notice] = "You have approved the review titled: #{@object.title}."

    redirect_to back_or_index
  end
end

Integrating the Custom Action Into RailsAdmin

The action is ready; now it is time to plug it into RailsAdmin. This involves two steps.

First, it should be registered with RailsAdmin::Config::Actions like this:

module RailsAdmin
  module Config
    module Actions
      class ApproveReview < RailsAdmin::Config::Actions::Base
        RailsAdmin::Config::Actions.register(self)
      end
    end
  end
end

This code was placed into config/initializers/rails_admin.rb to avoid a load-order issue: the RailsAdmin config was loaded first, before the custom action class was present. Next, the custom action needs to be listed in the actions config in config/initializers/rails_admin.rb:

RailsAdmin.config do |config|
  config.actions do
    # ... the default actions, plus:
    approve_review
  end
end


If your application is using CanCan with RailsAdmin, you also need to authorize the approve_review action:

class Ability
  include CanCan::Ability
  def initialize(user)
    if user && user.is_admin?
      cannot :approve_review, :all
      can :approve_review, [UserReview]
    end
  end
end

The full custom action can be viewed here.

Additional Notes

RailsAdmin has a nice script that can be used for generating custom actions as external gems (engines). In the case of this blog article, the approve_review was integrated directly into the Rails application. RailsAdmin action configuration options can be found here.

End Point has been using RailsAdmin for an ecommerce project that uses Piggybak. Here are a few related articles:

Check HTTP redirects with Nagios

Often there are critical page redirects on a site that you may want to monitor. It can be as simple as making sure your checkout page redirects from HTTP to HTTPS, or perhaps you have valuable old URLs which Google has indexed and you want to make sure those redirects remain in place to preserve your PageRank. Whatever your reason for checking HTTP redirects with Nagios, you'll find there are a few scripts available, but none (that I found) able to follow more than one redirect. For example, suppose we have a redirect chain several hops long.

Following multiple redirects

In my travels, I found check_http_redirect on Nagios Exchange. It was a well designed plugin, written by Eugene Kovalenja in 2009 and licensed under GPLv2. After experimenting with the plugin, I found it was unable to traverse multiple redirects. Fortunately, Perl's LWP::UserAgent class provides a nifty little option called max_redirect. By revising Eugene's work, I've exposed additional command arguments that help control how many redirects to follow. Here's a summary of usage:

-U          URL to retrieve (http or https)
-R          URL that must be equal to Header Location Redirect URL
-t          Timeout in seconds to wait for the URL to load. If the page fails to load,
            check_http_redirect will exit with UNKNOWN state (default 60)
-c          Depth of redirects to follow (default 10)
-v          Print redirect chain

If check_http_redirect is unable to find any redirects to follow, or any of the redirects results in a 4xx or 5xx status code, the plugin will report a critical state and the nature of the problem. Additionally, if the number of redirects exceeds the depth specified in the command arguments, it will notify you of this and exit with an unknown state. An OK status is returned only if the redirects result in a successful response from a URL that is a regex match against the -R argument.
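The core redirect-walking logic can be sketched like this (a simplified stand-in: the responses hash takes the place of real HTTP requests, which the actual plugin makes via Perl's LWP::UserAgent, and the function name is invented):

```ruby
# Sketch of the redirect-following logic: walk Location headers until a
# non-3xx response, giving up once the chain exceeds max_redirect hops.
def follow_redirects(url, responses, max_redirect = 10)
  chain = [url]
  while (resp = responses.fetch(url))[:status].between?(300, 399)
    return [:unknown, chain] if chain.size > max_redirect
    url = resp[:location]
    chain << url
  end
  return [:critical, chain] if resp[:status] >= 400
  [:ok, chain]
end

responses = {
  'http://a.example/'  => { status: 301, location: 'http://b.example/' },
  'http://b.example/'  => { status: 302, location: 'https://c.example/' },
  'https://c.example/' => { status: 200 },
}
follow_redirects('http://a.example/', responses)
# => [:ok, ["http://a.example/", "http://b.example/", "https://c.example/"]]
```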

The updated check_http_redirect plugin is available on my GitHub page along with several other Nagios plugins I'll write about in the coming weeks. Pull requests welcome, and thank you to Eugene for his original work on this plugin.

A Little Less of the Middle

I've been meaning to exercise a bit more. You know, just to keep the midsection nice and trim. But getting into that habit doesn't seem to be so easy. Trimming middleware from an app, on the other hand, is something that can catch my attention.

Something that caught my eye recently is a couple of commits to Postgres 9.2 that add a JSON data type. More specifically, the second commit adds a couple of handy output functions: array_to_json() and row_to_json(). If you want to try them out on 9.1, they have been made available as a backported extension.

Lately I've been doing a bit of work with jQuery, using it for AJAX-y stuff but passing JSON around instead. (AJAJ?) And traditionally that involves something in between the database and the client rewriting rows from one format to another. Not that it's all that difficult; for example, in Python it's a simple module call:

jsonresult = json.dumps(cursor.fetchall())

... assuming I don't have any columns needing processing:

TypeError: datetime.datetime(2012, 3, 09, 18, 34, 20, 730250,, name=None)) is not JSON serializable

Similarly in PHP I can stitch together a JSON array to pass back to the front end:

while ($row = pg_fetch_assoc($rs))
    $rows[] = $row;
$jsonresult = json_encode($rows);
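The equivalent middleware step in Ruby looks much the same (the rows array here stands in for what a database driver would hand back, with every value stringified, as with pg_fetch_assoc above):

```ruby
require 'json'

# Middleware-side JSON encoding of "rows" as a driver would return them:
# an array of hashes with string values.
rows = [{ 'page_id' => '105', 'page_title' => 'A Little Less of the Middle' }]
jsonresult = JSON.generate(rows)
# => '[{"page_id":"105","page_title":"A Little Less of the Middle"}]'
```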

Now I can trim that out, and embed the encoding right into the database query:

SELECT row_to_json(pages) FROM pages WHERE page_id = 5;
-- or, to return an array of rows
SELECT array_to_json(array_agg(pages)) FROM pages WHERE page_title LIKE 'A Little Less%';

Notice the use of the row-type reference to the table itself after the SELECT, rather than just a single column. This outputs:

[{"page_id":105,"today":"π day","page_title":"A Little Less of the Middle","contents":"I've been meaning to exercise a bit more.  You...","published_on":"2012-03-15 03:30:00+00"}]

Compare that to the output from json_encode() above, where the database driver treated everything as a string, even the page_id integer. The other difference is the Postgres code doesn't do any quoting on Unicode characters:

[{"page_id":"105","today":"\u03c0 day","page_title":"A Little Less of the Middle","contents":"I've been meaning to exercise a bit more.  You...","published_on":"2012-03-15 03:30:00+00"}]

I'm a bit on the fence about whether it's a real replacement for doing it in middleware, especially in some web use cases where you typically want to do things like anti-XSS type processing on some fields before sending them off to a browser somewhere. Besides, at the moment at least, there's no built-in way to break JSON back apart in the database. But I'm sure there's some places getting direct JSON is helpful, and it's certainly an interesting start.

The Mystery of The Zombie Postgres Row

Being a PostgreSQL DBA is always full of new challenges and mysteries. Tracking them down is one of the best parts of the job. Presented below is an error message we received one day via tail_n_mail from one of our client's production servers. See if you can figure out what was going on as I walk through it. This is from a "read only" database that acts as a Bucardo target (aka slave), and as such, the only write activity should be from Bucardo.

 05:46:11 [85]: ERROR: duplicate key value violates unique constraint "foobar_id"
 05:46:11 [85]: CONTEXT: COPY foobar, line 1: "12345#011...

Okay, so there was a unique violation during a COPY. Seems harmless enough. However, this should never happen, as Bucardo always deletes the rows it is about to add in with the COPY command. Sure enough, going to the logs showed the delete right above it:

 05:45:51 [85]: LOG: statement: DELETE FROM public.foobar WHERE id IN (12345)
 05:46:11 [85]: ERROR: duplicate key value violates unique constraint "foobar_id"
 05:46:11 [85]: CONTEXT: COPY foobar, line 1: "12345#011...

How weird. Although we killed the row, it seems to have resurrected and shambled like a zombie into our b-tree index, preventing a new row from being added. At this point, I double-checked that the correct schema was being used (it was), that there were no rules or triggers, no quoting problems, no index corruption, and that "id" was indeed the first column in the table. I also confirmed that there were plenty of occurrences of the exact same DELETE/COPY pattern - with the same id! - that had run without any error at all, both before and after this error.

If you are familiar with Postgres' default MVCC mode, you might guess at what is going on. Inside the postgresql.conf file there is a setting named 'default_transaction_isolation', which is almost always set to read committed. Further discussion of what this mode does can be found in the online documentation, but the short version is that while in this mode, another transaction could have added row 12345 and committed after we did the DELETE, but before we ran the COPY. A great theory that fits the facts - except that Bucardo always sets the isolation level manually to avoid just such problems. Scanning back for the previous command for that PID revealed:

 05:45:51 [85]: LOG: statement: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
 05:45:51 [85]: LOG: statement: DELETE FROM public.foobar WHERE id IN (12345)
 05:46:11 [85]: ERROR: duplicate key value violates unique constraint "foobar_id"
 05:46:11 [85]: CONTEXT: COPY foobar, line 1: "12345#011...

So that rules out any effects of read committed isolation mode. We have Postgres set to the strictest interpretation of MVCC it knows, SERIALIZABLE. (As this was on Postgres 8.3, it was not a "true" serializable mode, but that does not matter here.) What else could be going on? If you look at the timestamps, you will note that there is actually quite a large gap between the DELETE and the COPY error, despite it simply deleting and adding a single row (I have changed the table and data names, but it was actually a single row). So something else must be happening to that table.

Anyone guess what the problem is yet? After all, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth". In this case, the truth must be that Postgres' MVCC was not working, and the database was not as ACID as advertised. Postgres does use MVCC, but has two (that I know of) exceptions: the system tables, and the TRUNCATE command. I knew in this case nothing was directly manipulating the system tables, so that only left truncate. Sure enough, grepping through the logs found that something had truncated the table right around the same time, and then added a bunch of rows back in. As truncate is *not* MVCC-safe, this explains our mystery completely. It's a bit of a race condition, to be sure, but it can and does happen. Here are some more logs showing the complete sequence of events for the two separate processes, which I have labeled A and B:

A 05:45:47 [44]: LOG: statement: TRUNCATE TABLE public.foobar
A 05:45:47 [44]: LOG: statement: COPY public.foobar FROM STDIN
B 05:45:51 [85]: LOG: statement: DELETE FROM public.foobar WHERE id IN (12345)
A 05:46:11 [44]: LOG: duration: 24039.243 ms
A 05:46:11 [44]: LOG: statement: commit
B 05:46:11 [85]: LOG: duration: 19884.284 ms
B 05:46:11 [85]: LOG: statement: COPY public.foobar FROM STDIN
B 05:46:11 [85]: ERROR: duplicate key value violates unique constraint "foobar_id"
B 05:46:11 [85]: CONTEXT: COPY foobar, line 1: "12345#011...

So despite transaction B doing the correct thing, it still got tripped up by transaction A, which did a truncate, added some rows back in (including row 12345), and committed. If process A had done a DELETE instead of a TRUNCATE, the COPY still would have failed, but with a better error message:

ERROR: could not serialize access due to concurrent update

Why does this truncate problem happen? Truncate, while extraordinarily handy, can be really tricky to implement properly under MVCC without some severe tradeoffs. A DELETE in Postgres actually leaves the row on disk, but changes its visibility information. Only after all other transactions that may need to access the old row have ended can the row truly be removed on disk (usually via the autovacuum daemon). Truncate, however, does not walk through all the rows and update visibility information: as the name implies, it truncates the table by removing all rows, period.

So when we did the truncate, process A was able to add row 12345 back in: it had no idea that the row was "in use" by transaction B. Similarly, B had no idea that something had added the row back in. No idea, that is, until it tried to add the row and the unique index prevented it! There appears to be some work on making truncate more MVCC friendly in future versions.

Here is a sample script demonstrating the problem:


#!/usr/bin/env perl

use strict;
use warnings;
use DBI;
use Time::HiRes qw( sleep ); ## so we can reliably sleep less than one second

## Connect and create a test table, populate it:
my $dbh = DBI->connect('dbi:Pg', 'postgres', '', {AutoCommit=>0, RaiseError=>1});
$dbh->do('DROP TABLE IF EXISTS foobar');
$dbh->do('CREATE TABLE foobar(a INT UNIQUE)');
$dbh->do('INSERT INTO foobar VALUES (42)');
$dbh->commit();
$dbh->disconnect();

## Fork, then have one process truncate, and the other delete+insert
if (fork) {
  my $dbhA = DBI->connect('dbi:Pg', 'postgres', '', {AutoCommit=>0, RaiseError=>1});
  $dbhA->do('TRUNCATE TABLE foobar');          ## 1
  sleep 0.3;                                   ## Wait for B to delete
  $dbhA->do('INSERT INTO foobar VALUES (42)'); ## 2
  $dbhA->commit();                             ## 2
}
else {
  my $dbhB = DBI->connect('dbi:Pg', 'postgres', '', {AutoCommit=>0, RaiseError=>1});
  $dbhB->do('SET TRANSACTION ISOLATION LEVEL SERIALIZABLE');
  sleep 0.3;                                   ## Wait for A to truncate
  $dbhB->do('DELETE FROM foobar');             ## 3
  $dbhB->do('INSERT INTO foobar VALUES (42)'); ## 3
  $dbhB->commit();
}

Running the above gives us:

 ERROR:  duplicate key value violates unique constraint "foobar_a_key"
 DETAIL:  Key (a)=(42) already exists

This should not happen, of course, as process B did a delete of the entire table before trying an INSERT, and was in SERIALIZABLE mode. If we switch out the TRUNCATE with a DELETE, we get a completely different (and arguably better) error message:

 ERROR:  could not serialize access due to concurrent update 

However, if we try it with a DELETE on PostgreSQL version 9.1 or later, which features a brand new true serializable mode, we see yet another error message:

 ERROR:  could not serialize access due to read/write dependencies among transactions
 DETAIL:  Reason code: Canceled on identification as a pivot, during write.
 HINT:  The transaction might succeed if retried

This doesn't really give us a whole lot more information, and the "detail" line is fairly arcane, but it does give a pretty nice "hint", because in this particular case, the transaction *would* succeed if it were tried again. More specifically, B would DELETE the new row added by process A, and then safely add the row back in without running into any unique violations.

So the morals of the mystery are to be very careful when using truncate, and to realize that everything has exceptions, even the supposed sacred visibility walls of MVCC in Postgres.

PHP Vulnerabilities and Logging

I've recently been working on a Ruby on Rails site on my personal Linode machine. The Rails application was running in development mode with virtually no caching or optimization, so page loads were very slow. While I was not actively developing the site, I received a Linode alert that the disk I/O rate had exceeded the notification threshold for the last 2 hours.

Since I was not working on the site and I did not expect to see search traffic to the site, I was not sure what caused the alert. I logged on to the server and checked the Rails development log to see the following:

Started GET "/muieblackcat" for at 2012-02-15 10:01:18 -0500
Started GET "/admin/index.php" for at 2012-02-15 10:01:21 -0500
Started GET "/admin/pma/index.php" for at 2012-02-15 10:01:22 -0500
Started GET "/admin/phpmyadmin/index.php" for at 2012-02-15 10:01:24 -0500
Started GET "/db/index.php" for at 2012-02-15 10:01:25 -0500
Started GET "/dbadmin/index.php" for at 2012-02-15 10:01:27 -0500
Started GET "/myadmin/index.php" for at 2012-02-15 10:01:28 -0500
Started GET "/mysql/index.php" for at 2012-02-15 10:01:30 -0500
Started GET "/mysqladmin/index.php" for at 2012-02-15 10:01:32 -0500
Started GET "/typo3/phpmyadmin/index.php" for at 2012-02-15 10:01:33 -0500
Started GET "/phpadmin/index.php" for at 2012-02-15 10:01:35 -0500
Started GET "/phpMyAdmin/index.php" for at 2012-02-15 10:01:36 -0500
Started GET "/phpmyadmin/index.php" for at 2012-02-15 10:01:38 -0500
Started GET "/phpmyadmin1/index.php" for at 2012-02-15 10:01:39 -0500
Started GET "/phpmyadmin2/index.php" for at 2012-02-15 10:01:41 -0500
Started GET "/pma/index.php" for at 2012-02-15 10:01:42 -0500
Started GET "/web/phpMyAdmin/index.php" for at 2012-02-15 10:01:44 -0500
Started GET "/xampp/phpmyadmin/index.php" for at 2012-02-15 10:01:46 -0500
Started GET "/web/index.php" for at 2012-02-15 10:01:48 -0500
Started GET "/php-my-admin/index.php" for at 2012-02-15 10:01:50 -0500
Started GET "/websql/index.php" for at 2012-02-15 10:01:52 -0500
Started GET "/phpmyadmin/index.php" for at 2012-02-15 10:01:53 -0500
Started GET "/phpMyAdmin/index.php" for at 2012-02-15 10:01:55 -0500
Started GET "/phpMyAdmin-2/index.php" for at 2012-02-15 10:01:57 -0500
Started GET "/php-my-admin/index.php" for at 2012-02-15 10:01:59 -0500
Started GET "/phpMyAdmin-2.2.3/index.php" for at 2012-02-15 10:02:00 -0500
Started GET "/phpMyAdmin-2.2.6/index.php" for at 2012-02-15 10:02:02 -0500
Started GET "/phpMyAdmin-2.5.1/index.php" for at 2012-02-15 10:02:04 -0500
Started GET "/phpMyAdmin-2.5.4/index.php" for at 2012-02-15 10:02:07 -0500
Started GET "/phpMyAdmin-2.5.5-rc1/index.php" for at 2012-02-15 10:02:09 -0500
Started GET "/phpMyAdmin-2.5.5-rc2/index.php" for at 2012-02-15 10:02:10 -0500
Started GET "/phpMyAdmin-2.5.5/index.php" for at 2012-02-15 10:02:12 -0500
Started GET "/phpMyAdmin-2.5.5-pl1/index.php" for at 2012-02-15 10:02:14 -0500
Started GET "/phpMyAdmin-2.5.6-rc1/index.php" for at 2012-02-15 10:02:16 -0500
Started GET "/phpMyAdmin-2.5.6-rc2/index.php" for at 2012-02-15 10:02:17 -0500
Started GET "/phpMyAdmin-2.5.6/index.php" for at 2012-02-15 10:02:19 -0500
Started GET "/phpMyAdmin-2.5.7/index.php" for at 2012-02-15 10:02:21 -0500
Started GET "/phpMyAdmin-2.5.7-pl1/index.php" for at 2012-02-15 10:02:23 -0500
Started GET "/phpMyAdmin-2.5.5-pl1/index.php" for at 2012-02-15 14:09:10 -0500

As it turns out, the domain somehow got picked up by crawlers that were looking for PHP vulnerabilities. It's interesting to see the various versions of phpMyAdmin the crawler attempts to exploit. Judging from the crawled pages, there may also be a few other applications (e.g. TYPO3) that the crawler was trying to exploit. I'm not up to date on the various security exploits in PHP applications, but I was surprised not to see anything directly related to WordPress in the log, since I often hear of WordPress security issues.

Luckily, this particular application and all other applications on this server have virtually no private data, since most applications running on the server are CMS-type applications where all content is displayed on the front-end.

Handling outside events with jQuery and Backbone.js

I recently worked on a user interface involving a persistent shopping cart on an ecommerce site. The client asked to have the persistent cart close whenever a user clicked outside or "off" of the cart while it was visible. The cart was built with Backbone.js, and jQuery so the solution would need to play nicely with those tools.

The first order of business was to develop a way to identify the "outside" click events. I discussed the scenario with a colleague and YUI specialist and he suggested the YUI Outside Events module. Since the cart was built with jQuery and I enjoyed using that library, I looked for a comparable jQuery plugin and found Ben Alman's Outside Events plugin. Both projects seemed suitable, and a review of their source code revealed a similar approach: they listened to events on the document or the <html> element and examined the target of each event. Checking whether the target element was a descendant of the containing node revealed whether the click was "inside" or "outside".

With this in mind, I configured the plugin to listen to clicks outside of the persistent cart like so:

    $('#modal-cart').bind('clickoutside', function(event) {
      // close the cart here
    });

The plugin worked just like it said on the tin, however further testing revealed a challenge. The site included ads and several of them were <iframe> elements. Clicks on these ads were not captured by the clickoutside event listener. This was a problem because the outside event listening code could be ignored depending on where the user clicked on the page.

To mitigate this issue, a second approach was taken. A "mask" element was added below the persistent cart. CSS was used to position the mask below the persistent cart using the z-index property. The mask was invisible to the user because the background was transparent. Instead of listening to clicks outside of the persistent cart, clicks on the mask element could be captured. Thanks to the magic of CSS, the mask covered the entire page (including those pesky <iframes>).

Now that I was able to handle the "outside" clicks properly, the event handling code needed to be configured inside the Backbone.js cart view. Backbone uses jQuery to handle events behind the scenes but the syntax is a little bit different.

Where with jQuery you might set up an event handler like this:

  $('#mask').bind('click', maskClickHandler);

This would be the Backbone.js equivalent, declared in the view's events hash:

  "click #mask": "maskClickHandler"

Here is how it all shaped up inside the Backbone view. First, the event handler on the mask element was configured inside the events object of the view:

window.cartView = Backbone.View.extend({
  template: '#cart-template',

  initialize: function() {
    _.bindAll(this, 'render');
    this.initializeTemplate();
  },

  initializeTemplate: function() {
    this.template = _.template($(this.template).html());
  },

  // set up event listener on the mask element
  events: {
    "click #modal-cart-mask": "closeCart"
  },

The openCart function was augmented to show the mask element each time the persistent cart was shown:

  openCart: function () {
    // show the mask element when the cart is opened
    $('#modal-cart-mask').show();
    this.isClosed = false;
  },

Lastly, the closeCart function was modified to hide the mask element each time the persistent cart was closed:

  closeCart: function () {
    this.isClosed = true;
    // hide the mask element when the cart is closed
    $('#modal-cart-mask').hide();
  },

  render: function() {
    return this;
  }
});

With this in place, the outside events were properly captured and handled by the same Backbone view that managed the persistent cart. How's that for playing nice?

jQuery Async AJAX: Interrupts IE, not Firefox, Chrome, Safari

I recently worked on a job for a client who uses ThickBox. ThickBox is no longer maintained, but the client has used it for a while, and "if it ain't broke, don't fix it" seems to apply. Anyway, the client needed to perform address verification checks through Ajax calls to a web service when a form is submitted. Since the service sometimes takes a little while to respond, the client wanted to display a ThickBox warning the user of the ongoing checks, then, depending on the result, either continue to the next page or allow the user to change their address.

Since the user has submitted the form and is now waiting for the next page, I chose to have jQuery call the web service with the async=false option of the ajax() function (not the best choice, looking back). Everything worked well: Firefox, Safari, and Chrome all behaved as expected. Then we tested in IE. Internet Explorer would not pop up the initial ThickBox ('pleaseWait' below) until the Ajax queries had completed, unless I put an alert in place between them; then the ThickBox would appear as intended.

function myFunc(theForm) {
    // show the "please wait" ThickBox before making the Ajax call
    tb_show('Please wait.', '#TB_inline?inlineId=pleaseWait', null);
    $.ajax({
        type: "POST",
        url: "call to webservice",
        data: "address to post to the webservice",
        async: false,
        success: function (msg) {
            // depending on the result, continue to the next page
            // or let the user correct their address
        }
    });
}
To make a long story a bit shorter, we finally determined that the culprit was the async setting in the jQuery ajax call. Making the Ajax call in synchronous mode locked the browser and delayed the display of the ThickBox, even though the tb_show call preceded the ajax call! Setting async to true solved the locking problem and worked fine in all browsers.

The jQuery documentation states: "Note that synchronous requests may temporarily lock the browser, disabling any actions while the request is active," which turns out to be true (at least for IE). What surprised me was that, in this case, the ThickBox call was made well before the jQuery Ajax call. This makes me think that maybe tb_show renders asynchronously in IE. In any case, its behavior here differs from the other mainstream browsers, and it gets locked up by the synchronous Ajax call.

I worked around the problem by setting the Ajax calls to be asynchronous, then keeping track to make sure both the shipping address and the billing address checks have returned before continuing.
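One way to do that bookkeeping is a small shared counter that the success callbacks increment. This is a sketch; the function and variable names are mine, not the client's code:

```javascript
// Returns a function to call from each Ajax success handler; it invokes
// onAllDone only after being called `total` times.
function makeCompletionTracker(total, onAllDone) {
  var completed = 0;
  return function () {
    completed += 1;
    if (completed === total) {
      onAllDone();
    }
  };
}

// Usage: one tracker shared by the shipping and billing address checks.
// Each $.ajax({ async: true, success: checkDone, ... }) calls checkDone(),
// and only the second call triggers the continuation.
var checkDone = makeCompletionTracker(2, function () {
  // both checks have returned; close the ThickBox and continue to the next page
});
```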

The lessons here are: (1) Be careful using synchronous mode with jQuery Ajax calls; it's easier to leave them asynchronous (the default) and ensure all your calls have returned before you take the next step. (2) As always, don't assume your solution works as intended until you see it work in IE.

IPv6 Tunnels with Debian/Ubuntu behind NAT

As part of End Point's preparation for World IPv6 Launch Day, I was asked to get my IPv6 certification from Hurricane Electric. It's a fun little game-based learning program which had me set up an IPv6 tunnel. IPv6 tunnels provide IPv6 connectivity for folks whose ISP or hosting provider doesn't currently support IPv6 by "tunneling" it over IPv4. The process for creating a tunnel is straightforward enough, but there were a few configuration steps I felt could be better explained.

After creating a tunnel, Hurricane Electric kindly provides a summary of your configuration and offers example configurations for several different operating systems and routers. Below is my configuration summary and the example generated by Hurricane Electric.
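For a Linux host, the generated example looks roughly like the following. The remote (198.51.100.1) and local (203.0.113.10) IPv4 addresses here are documentation placeholders; substitute the server and client addresses from your own tunnel summary:

```shell
# load the IPv6 module and create a 6in4 (sit) tunnel interface
modprobe ipv6
ip tunnel add he-ipv6 mode sit remote 198.51.100.1 local 203.0.113.10 ttl 255

# bring the interface up and assign your tunnel's client IPv6 address
ip link set he-ipv6 up
ip addr add 2001:470:4:9ae::2/64 dev he-ipv6

# route all IPv6 traffic through the tunnel
ip route add ::/0 dev he-ipv6
ip -f inet6 addr
```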

However, changes made by entering these commands won't survive a restart. For Debian/Ubuntu users, an update to /etc/network/interfaces does the trick.

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
  address 2001:470:4:9ae::2
  netmask 64
  # endpoint is the tunnel server's IPv4 address from your tunnel summary
  endpoint 198.51.100.1
  # local is your machine's public IPv4 address
  local 203.0.113.10
  ttl 255
  gateway 2001:470:4:9ae::1

Firewall Configuration

If you're running UFW, the update to /etc/default/ufw is very straightforward: simply change the IPV6 directive to yes. Restart the firewall and your network interfaces and you should be able to ping6 IPv6 hosts. I also recommend running an online IPv6 test site for a detailed configuration check.
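Here's one way to apply those steps from the shell. This is a sketch: the sed edit assumes the stock IPV6=no line is present, and ipv6.google.com is just a convenient test host:

```shell
# enable IPv6 handling in UFW
sudo sed -i 's/^IPV6=no/IPV6=yes/' /etc/default/ufw

# restart the firewall and networking so the change takes effect
sudo ufw disable && sudo ufw enable
sudo /etc/init.d/networking restart

# verify IPv6 connectivity over the tunnel
ping6 -c 3 ipv6.google.com
```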

Behind NAT

If you're behind a NAT, the configuration needs to be tweaked a bit. First, you'll want to set up a static IP address behind your router. If your router supports forwarding more than just TCP/UDP, you'll want to forward protocol 41 (the IPv6-in-IPv4 encapsulation protocol, NOT port 41), which carries the tunneled IPv6 traffic, to your static address. If you've got a consumer-grade router that doesn't support this, you'll just have to put your machine in the DMZ, thus putting your computer "in front" of your router's firewall. Please make sure you are running a local software firewall if you choose this option.

After handling the routing of protocol 41, there is one small configuration change to /etc/network/interfaces. You must change your tunnel's local address from your public IP address, to your private NATed address. Here is an example configuration including both the static IP configuration and the updated tunnel configuration.

auto eth0
iface eth0 inet static
  # example private static address; adjust for your network
  address 192.168.1.42
  netmask 255.255.255.0
  gateway 192.168.1.1

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
  address 2001:470:4:9ae::2
  netmask 64
  # endpoint stays the tunnel server's IPv4 address from your tunnel summary
  endpoint 198.51.100.1
  # local is now your private NATed address, not your public IP
  local 192.168.1.42
  ttl 255
  gateway 2001:470:4:9ae::1

Don't forget to restart your networking interfaces after these changes. I found a good ol' restart was helpful as well, but of course, we don't have this luxury in production, so be careful!

Checking IPv6

If you're reading this article, you're probably responsible for several hosts. For a gentle reminder of which of your sites don't yet have IPv6 set up, I recommend checking out IPvFoo for Chrome or 4or6 for Firefox. These tools make it easy to see which of your sites are ready for World IPv6 Launch Day!

Getting Help

Hurricane Electric provides really great support for their IPv6 tunnel service (which is completely free). Simply email their support team and provide them with some useful information such as:

cat /etc/network/interfaces
netstat -nrA inet6  (these are your IPv6 routing tables)
cat /etc/default/ufw
relevant router configurations
I was very impressed to get a response from a competent person in 15 minutes! Sadly, there is one downside to using this tunnel: IRC is not allowed by default.
Due to an increase in IRC abuse, new non-BGP tunnels now have IRC blocked by default. If you are a Sage, you can re-enable IRC by visiting the tunnel details page for that specific tunnel and selecting the 'Unblock IRC' option. Existing tunnels have not been filtered.
I guess ya gotta earn it to use IRC over your tunnel. Good luck!