Welcome to End Point's blog

Ongoing observations by End Point people.

Upgrading Spree with the help of Git

Lately, I've upgraded a few Spree projects to the most recent Spree release. Spree is a Ruby on Rails ecommerce platform that End Point previously sponsored and continues to support. In all cases, my Spree project was running from the Spree gem (version 0.10.2) and I was upgrading to Spree 0.11.0. Here's a brief explanation of how I went about upgrading my projects.


First, I made sure my application was running and committed all recent changes to have a clean branch. I follow the development principles outlined here that describe methodology for developing custom functionality on top of the Spree framework core. All of my custom functionality lives in the RAILS_ROOT/vendor/extensions/site/ directory, so that directory probably won't be touched during the upgrade.

steph@The-Laptop:/var/www/ep/myproject$ git status
# On branch master
nothing to commit (working directory clean)

I hadn't upgraded Spree in a while, but I vaguely remembered there being an upgrade task, so I tried rake spree:upgrade with the following results:

steph@The-Laptop:/var/www/ep/myproject$ rake spree:upgrade
(in /var/www/ep/myproject)
[find_by_param error] database not available?
This task has been deprecated.  Run 'spree --update' command using the newest gem instead.

OK. The upgrade task has been deprecated. So, I ran spree --update:

Updating to Spree 0.11.0 ...

That was easy! I ran git status and saw that there were several modified config/ files and a few new config/ files:

# On branch master
# Changed but not updated:
#   (use "git add ..." to update what will be committed)
#   (use "git checkout -- ..." to discard changes in working directory)
#       modified:   config/boot.rb
#       modified:   config/environment.rb
#       modified:   config/environments/production.rb
#       modified:   config/environments/staging.rb
#       modified:   config/environments/test.rb
#       modified:   config/initializers/locales.rb
#       modified:   config/initializers/new_rails_defaults.rb
#       modified:   config/initializers/spree.rb
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#       config/boot.rb~
#       config/environment.rb~
#       config/environments/cucumber.rb
#       config/environments/production.rb~
#       config/environments/staging.rb~
#       config/environments/test.rb~
#       config/initializers/cookie_verification_secret.rb
#       config/initializers/locales.rb~
#       config/initializers/new_rails_defaults.rb~
#       config/initializers/spree.rb~
#       config/initializers/touch.rb
#       config/initializers/workarounds_for_ruby19.rb

Because I had a clean master branch before the upgrade, I can easily examine the code changes for this upgrade from Spree 0.10.2 to Spree 0.11.0:


-      load_rails("2.3.5")  # note: spree requires this specific version of rails (change at your own risk)
+      load_rails("2.3.8")  # note: spree requires this specific version of rails (change at your own risk)


-  config.gem 'authlogic', :version => '>=2.1.2'
+  config.gem 'authlogic', :version => '2.1.3'
-  config.gem 'will_paginate', :lib => 'will_paginate', :version => '2.3.11'
+  config.gem 'will_paginate', :lib => 'will_paginate', :version => '2.3.14'
-  config.i18n.default_locale = :'en-US'
+  config.i18n.default_locale = :'en'


-config.gem 'test-unit', :lib => 'test/unit', :version => '~>2.0.5' if RUBY_VERSION.to_f >= 1.9
+config.gem 'test-unit', :lib => 'test/unit', :version => '~>2.0.9' if RUBY_VERSION.to_f >= 1.9

Most of the changes were not surprising. The locale changes are significant, though, because they may require extension locales to be updated. After I reviewed these changes, I installed the newer gem dependencies, bootstrapped the data (all of my application data is stored in sample data files), and restarted the server to test the upgrade. Then I added the config/ and public/ changes in a single git commit and removed the old temporary configuration files (the *.rb~ backups) left behind by the upgrade.

For this particular upgrade, my git log shows changes in the files below. The config/ files were modified when I ran the update, and the public/ files were modified when I restarted the server, since gem public/ files are copied over during a restart.

commit 96a68e86064aa29f51c5052631f896845c11c266
Author: Steph Powell 
Date:   Mon Jun 28 13:44:50 2010 -0600

    Spree upgrade.

diff --git a/config/boot.rb b/config/boot.rb
diff --git a/config/environment.rb b/config/environment.rb
diff --git a/config/environments/cucumber.rb b/config/environments/cucumber.rb
diff --git a/config/environments/production.rb b/config/environments/production.rb
diff --git a/config/environments/staging.rb b/config/environments/staging.rb
diff --git a/config/environments/test.rb b/config/environments/test.rb
diff --git a/config/initializers/cookie_verification_secret.rb b/config/initializers/cookie_verification_secret.rb
diff --git a/config/initializers/locales.rb b/config/initializers/locales.rb
diff --git a/config/initializers/new_rails_defaults.rb b/config/initializers/new_rails_defaults.rb
diff --git a/config/initializers/spree.rb b/config/initializers/spree.rb
diff --git a/config/initializers/touch.rb b/config/initializers/touch.rb
diff --git a/config/initializers/workarounds_for_ruby19.rb b/config/initializers/workarounds_for_ruby19.rb
diff --git a/public/images/admin/bg/spree_50.png b/public/images/admin/bg/spree_50.png
Binary files a/public/images/admin/bg/spree_50.png and b/public/images/admin/bg/spree_50.png differ
diff --git a/public/images/tile-header.png b/public/images/tile-header.png
Binary files /dev/null and b/public/images/tile-header.png differ
diff --git a/public/images/tile-slider.png b/public/images/tile-slider.png
Binary files /dev/null and b/public/images/tile-slider.png differ
diff --git a/public/javascripts/admin/checkouts/edit.js b/public/javascripts/admin/checkouts/edit.js
diff --git a/public/javascripts/taxonomy.js b/public/javascripts/taxonomy.js
diff --git a/public/stylesheets/admin/admin-tables.css b/public/stylesheets/admin/admin-tables.css
diff --git a/public/stylesheets/admin/admin.css b/public/stylesheets/admin/admin.css
diff --git a/public/stylesheets/screen.css b/public/stylesheets/screen.css

From my experience, the config/ and public/ files are typically modified with a small release. If your project has custom JavaScript, or overrides the Spree core JavaScript, you may have to review the upgrade changes more carefully. Version control goes a long way in highlighting changes and problem areas. Additionally, having custom code abstracted from the Spree core should allow for easier maintenance of your project.

Learn more about End Point's Ecommerce Development or Ruby on Rails Ecommerce Services.

Getting Started with Unit Testing

So, you're not writing tests.

And it's not like you don't want to, or think they're a bad idea. It just seems so hard to get started. The hurdles to clear feel like such an impediment. It Just Couldn't Possibly Be Productive To Start Testing Right Now, Not When My Really Important Project Needs to Get Finished in a Timely Manner.

Maybe you're working on a legacy project, on an application built on an old framework that isn't particularly friendly towards unit testing. To get testing, you'll need to wrestle with so many things, it just doesn't make sense to even try, right?

After a few years of using test-driven development (TDD) pretty consistently, I'm convinced that unit testing can and should be a more widespread practice. More importantly, after learning a lot of lessons over those few years, I think it's well within any dedicated individual's grasp. Care to hear about it? (Don't answer that.)

Digression the First: Why You Should Write Tests, In Case You Require Convincing (and if you're not writing them, then you clearly require convincing)

The code you write is for something, and that something is for somebody. Somebody cares that the stuff you wrote does what it's supposed to.

It follows that when you implement stuff, you tell the relevant Somebody about it. You tell them "hey, this is ready." Or "hey, this is finished; check it out."

Are you a liar?

Well, is it really ready, or isn't it?

How do you know it's ready? How can you claim that it works, that it's worth Somebody's time to check out or pay for or otherwise revel in?

Oh... you tested your work. I see. How did you test it?

Wait, you what? You went into a REPL client for your language of choice and called a bunch of functions? And you visually verified their results?

That sounds like a little bit of programming followed by a little manual inspection. As if you knew what results to look for given a particular set of inputs. Right?

So, wouldn't it be nearly as simple to have put that little check into a script, called it a test, and then been able to re-run that test at any time in the future?

Oh... I see. This is only part of what you needed to do. The other stuff is really high-level, you have to submit some stuff in a form and be sure that the data you get back looks right. So you tested through a browser. So, you feel like you can't write a test for that because you don't want to go set up Selenium or whatever and deal with the unpleasant prospect of testing actual HTTP/HTML applications?

Well, wait a sec... even if you have a form submission and some data to inspect, you still know how the data you should get back should be structured for a given set of inputs, right? Couldn't you write a test script for that piece, and at least have the foundational data layer stuff covered?

Oh..., so this is all in one big long hairy unmaintainable hacky script and the logic isn't split out that way. I see.

Wait... aren't you changing said script to do this work? Aren't you already in there making changes? You are? Well, aren't you -- as a sentient, sane human being capable of rational thought -- fully empowered to refactor pieces of this code, since you're already changing it?

Right, you could refactor things instead of maintaining the status quo. So, couldn't you, say, move the data processing piece into a function and test the inputs and outputs of that function? Or better yet, move the data stuff into a separate module specific to data abstraction for the relevant portion of your system, and test that module?

I understand that it isn't perfect, that it's not capturing the full stack of concerns, but it's capturing the core behaviors of what you're doing, right?

Isn't that possible and in fact not particularly hard?

Doesn't that really not impose all that much additional time on what you're already doing?

Wouldn't it be better if you started doing that right now and slowed the relentless accumulation of technical debt?

So, wait... why aren't you writing tests?

All Software Engineers are Liars, But Tests Make You Less of One

You lie!

Every software engineer will claim that something works and find that it doesn't work for the Relevant Somebody. Thus, every engineer will lie in making the claim that something is "done".

At least if you're writing tests as you go, addressing some subset of the core of your work, you can speak with greater confidence about what works and what doesn't. You will still be wrong. You will write imperfect tests. You won't be able to cover everything. You will miss certain corner cases.

You will still lie. But you'll lie less often, with less profundity.

Getting Started: It is the Biggest Challenge

The single biggest obstacle to getting going with unit tests is... getting going with unit tests.

Seriously. I don't think I've encountered any other area of software engineering that suffers as much as testing from inertia and excuse-mongering and whatever else. Something about it invites people to let any inconvenience or potential hurdle balloon into a solution-killing problem.

While certain aspects of unit testing undoubtedly pose significant challenges and, occasionally, do not present any particularly clean way out, for any given programming problem there is likely at least some aspect that can be productively tested with a minimum of fuss.

So, at a minimum, start there. In a festering code-pile with no obvious place to gain a testing foothold, there will still be such places: simple pieces of functionality in which the outputs are purely dependent upon the inputs and the logic in between, with no side effects, no external dependencies.

Some principles to start your testing life with:

  1. The right thing to start on is whatever you're working on right now. Do it. Now.
  2. To get moving, find some aspect of your code that is simple, self-contained, with clear expectations for inputs and outputs. Test that stuff. It's the easiest to test.
  3. Adapt your design and coding style to maximize the places that are simple, and self-contained, with clear expectations for inputs and outputs. This will maximize the testability of your code. It will also very possibly improve the organization of your code, as well.
  4. Don't worry about perfection; worry instead about getting some meaningful tests. If you're just starting out, the first tests you write are probably going to be pretty weak anyway. Just saying. That's how it is. Learning takes time. That's what change looks like.
  5. The language you're using almost certainly has some basic, common, standard testing framework. In Perl you've got Test::More. In Ruby you've got Test::Unit. In Python you've got the unittest module. Etc. They're not hard to learn, and you don't need to learn the whole thing to get started. Just write a script that pulls this stuff in and uses the basic assertions. Don't write a custom test script that doesn't build on something standard, because you'll be reducing productivity and missing out.
  6. Test the interface, not the implementation. This is something that can take some acclimation, depending on your mindset. The purpose of the test is to show that the subject adheres to its contract: given input A, you get output A'. Under foo conditions, your widget tastes green. And so on. The purpose is almost certainly not to show that the subject implements something in a particular manner.
  7. Start small. Start with something that has very clear expectations and a modest interface (one or two arguments, for instance, as opposed to arbitrarily complex arguments of nested data structures).
  8. Things that depend on shared/global state are messier to test. So are things that depend on side effects (a database call, for instance, where you're less interested in the return value for a given set of inputs and more interested in what happens in an external service as a side effect of your call). Unless your framework gives you ways to deal with them in a testing situation (like Rails does with the database, for instance), you may not want your first set of tests to deal with such stuff.
  9. Adapt your design and coding style to avoid reliance on shared/global state or side-effects. If such things are key to what you're doing, then design so the global state or side-effects are accessed through an interface you control. We'll get more into this subject in a subsequent posting.
  10. Remember that it isn't that hard: if you can clearly, definitively express the expected behavior in prose or speech, then you can express it in code.
  11. Remember that if it's hard, you're probably designing it poorly: if you find that you cannot express the expected behavior clearly, then you need to step back and reconsider the design.
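To make principle 5 concrete, here is about the smallest useful Test::Unit script I can imagine. The order_total function is a hypothetical example, not from any project mentioned above; the point is only the shape of a test script that builds on the standard framework:

```ruby
require "test/unit"

# A hypothetical pure function: output depends only on inputs, with no
# side effects -- the easiest possible thing to test.
def order_total(quantity, unit_price)
  quantity * unit_price
end

class OrderTotalTest < Test::Unit::TestCase
  def test_single_item
    assert_equal 500, order_total(1, 500)
  end

  def test_multiple_items
    assert_equal 1500, order_total(3, 500)
  end
end
```

Save it as a plain script and run it with ruby; the same checks re-run identically every time, which is the whole point.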

This article will be the first in a series over the next few days. Next time, I'll look at some concrete examples.

SouthEast LinuxFest 2010

This past weekend I took a day to visit lovely, uh, Spartanburg, SC for the 2nd Annual SouthEast LinuxFest...

I've yet to live in an area of the country that embraces Open Source Software (OSS) to a significant degree (Portland, for instance), is significantly populated (New York, San Francisco), or has significant university representation (Ann Arbor, Cambridge), any of which would allow me to get well connected in person or have large events to attend about the platforms I use regularly. Such is life, but it makes it difficult to feel engaged in the community aspects of many of the projects whose products I use on a daily basis. It also makes it difficult to quickly learn even the most basic elements of a new technology or practice that might be making its rounds at any given moment. Much to my surprise, even here in "The South" there is a group of volunteers putting on a very good conference: not exceptional, not huge, but good. And for the second year it is growing and, from the conversations I had with past participants, improving!

The nature of the conference, i.e. a LinuxFest, makes the topic range incredibly varied; how can you have a targeted conference about every topic in an ecosystem as large as Linux is these days? Still, there were a couple of central themes: virtualization, clouds, and scalability seemed to be the common threads. (And how everyone is sick of hearing the word "cloud", but that was more unofficial.) One interesting (to me) thing I noticed was that git was talked about as if it were ubiquitous, yet there was a distinct lack of talks on it. Have git and version control in general really come this far this fast?

The two keynotes revolved around community building efforts and a couple of the talks I attended had a similar bent. There were six sessions per time slot on Saturday and Sunday which helped to keep most of them small but provide something of interest for everyone throughout the day. Obviously I was only able to attend a small fraction of topics and most I chose because of something in particular that affects my daily work.

I was surprised to find a lack of "NoSQL" talks given the hype surrounding such data stores, but fortunately the first talk I attended, "Which Database Should You Choose?" ended up being a talk about the suggestion of an alternative name for "NoSQL" databases by the author of SQLite. He suggested the name be changed to "Postmodern Databases" and included a number of very funny but seemingly accurate comparisons to the postmodern philosophical and artistic movements. In the end the talk provided a nice overview of just what the NoSQL hype is all about, and despite the hype, just how important traditional RDBMS are and will continue to be. See "Why consistency is important when dealing with people's money"...or "how an expressive and full featured language makes interfacing complex data structures simpler" (even if it is one that people love to hate).

After the DB talk I moved on to a personal history of OSS from Jon "Maddog" Hall that was enjoyably informative, though not necessarily overly practical. It did make me reflect on the fact that I've been running Linux for 13 years, and that that is a frighteningly long time in this particular technology.

After which I attended a talk about Postfix by its author that was generally useful to someone that has "run" mail servers in the past, and dealt with SMTP from the development side, but knows that mail is more complex than any non-system admin could possibly handle. A nice overview, but it reconfirmed that I should leave this stuff to someone smarter than myself.

From there I moved to a talk about the Xen virtualization platform that was targeted at building the community around Xen.org, but my primary takeaway was this thing called the Hypervisor and a better understanding of just how Xen (and the other common platforms) implement virtualization on a hardware system. As a developer on a day-to-day basis I have familiarity with running virtual machines, and I can certainly see the benefits, but I'd had very little exposure to even the basics of how they are implemented under the hood and just the short part of this talk on the Hypervisor provided a means for doing my own research after the conference (a good goal for any conference talk). Speaking of which, reading the Hypervisor page on Wikipedia is a nice place to start.

Following that I attended a talk on Puppet by the product manager at PuppetLabs. And how could I go a whole day without a talk on a CMS web platform, so I jumped into a Drupal talk.

In general, a well-rounded and informative day. The usual spate of informal conversations between talks and at lunch provided nice filler and a chance to interact with other OSS users of varying backgrounds and experience levels. The conference, despite a few hiccups here and there, was well run and kept to the schedule. As far as I could tell, all sessions were videotaped, so they should start to appear online soon, and most presenters indicated their slides would be available shortly.

NoSQL at RailsConf 2010: An Ecommerce Example

Even more so than Rails 3, NoSQL was a popular technical topic at RailsConf this year. I haven't had much exposure to NoSQL except for reading a few articles written by Ethan (Quick Thoughts on NoSQL Live Boston Conference, NoSQL Live: The Dynamo Derivatives (Cassandra, Voldemort, Riak), and Cassandra, Thrift, and Fibers in EventMachine), so I attended a few sessions to learn more.

First, it was reinforced several times that if you can read JSON, you should have no problem comprehending NoSQL. So, it shouldn't be too hard to jump into code examples! Next, I found it helpful when one of the speakers presented high-level categorization of NoSQL, whether or not the categories meant much to me at the time:

  • Key-Value Stores: Advantages include that this is the simplest possible data model. Disadvantages include that range queries are not straightforward and modeling can get complicated. Examples include Redis, Riak, Voldemort, Tokyo Cabinet, MemcacheDB.
  • Document stores: Advantages include that the value associated with a key is a document that exposes a structure that allows some database operations to be performed on it. Examples include CouchDB, MongoDB, Riak, FleetDB.
  • Column-based stores: Examples include Cassandra, HBase.
  • Graph stores: Advantages include that this allows for deep relationships. Examples include Neo4j, HypergraphDB, InfoGrid.
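Since reading JSON is supposedly enough to comprehend NoSQL, the first two categories can be sketched in a few lines of plain Ruby. This is only an analogy, with in-memory objects standing in for real stores, and the product record is hypothetical:

```ruby
require "json"

# Key-value store: the value is opaque to the store, so the application
# serializes/deserializes it and can't query inside the value.
kv_store = {}
kv_store["product:1"] = { "name" => "T-Shirt", "price" => 20 }.to_json
JSON.parse(kv_store["product:1"])["name"]  # => "T-Shirt"

# Document store: the value exposes structure, so the store itself can
# operate on fields (simulated here with an array of hashes).
documents = [{ "name" => "T-Shirt", "price" => 20 },
             { "name" => "Mug",     "price" => 8 }]
documents.select { |doc| doc["price"] > 10 }.map { |doc| doc["name"] }  # => ["T-Shirt"]
```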

In one NoSQL talk, Flip Sasser presented an example demonstrating how an ecommerce application might be migrated to use NoSQL, which was the most efficient (and very familiar) way for me to gain an understanding of NoSQL use in a Rails application. Flip introduced the models and relationships below.

In the transition to NoSQL, the transaction model stays as is. As a purchase is created, the Notification.create method is called.

class Purchase < ActiveRecord::Base
  after_create :create_notification

  # model relationships
  # model validations

  def total
    quantity * product.price
  end

  def create_notification
    Notification.create(
      :action => "purchased #{quantity == 1 ? 'a' : quantity} #{quantity == 1 ? product.name : product.name.pluralize}",
      :description => "Spent a total of #{total}",
      :item => self,
      :user => user
    )
  end
end

Flip moves the Product class to a document store because it needs a lot of flexibility to handle the diverse product metadata. The structure of the product data is defined in the Product class and nowhere else.


# Before: ActiveRecord model with a serialized info hash
class Product < ActiveRecord::Base
  serialize :info, Hash
end

# After: MongoMapper document
class Product
  include MongoMapper::Document

  key :name, String
  key :image_path, String

  key :info, Hash
end

The Notification class is moved to a Key-Value store. After a user completes a purchase, the create method is called to store a notification against the user that is to receive the notification.


# Before: ActiveRecord model
class Notification < ActiveRecord::Base
  # model relationships
  # model validations
end

# After: plain OpenStruct backed by a key-value store
require 'ostruct'

class Notification < OpenStruct
  class << self
    def create(attributes)
      message = "#{attributes[:user].name} #{attributes[:action]}"
      attributes[:user].follower_ids.each do |follower_id|
        Red.lpush("user:#{follower_id}:notifications", {:message => message, :description => attributes[:description], :timestamp => Time.now}.to_json)
      end
    end
  end
end
The user model remains an ActiveRecord model and uses the devise gem for user authentication, but is modified to retrieve the notifications, now an OpenStruct. The result is that whenever a user's friend makes a purchase, the user is notified of the purchase. In this simple example, a purchase contains one product only.


# Before: notifications retrieved via a SQL join
class User < ActiveRecord::Base
  # user authentication here
  # model relationships

  def notifications
    Notification.where("friend_relationships.friend_id = notifications.user_id OR notifications.user_id = #{id}").
      joins("LEFT JOIN friend_relationships ON friend_relationships.user_id = #{id}")
  end
end

# After: notifications read from the key-value store
class User < ActiveRecord::Base
  # user authentication here
  # model relationships

  def followers
    User.where('users.id IN (friend_relationships.user_id)').
      joins("JOIN friend_relationships ON friend_relationships.friend_id = #{id}")
  end

  def follower_ids
    followers.map(&:id)  # body not shown in the talk; this is the natural implementation
  end

  def notifications
    (Red.lrange("user:#{id}:notifications", 0, -1) || []).map{|notification| Notification.new(ActiveSupport::JSON.decode(notification))}
  end
end

The disadvantages of the NoSQL and RDBMS hybrid are that data portability is limited and ActiveRecord plugins can no longer be used. But the general idea is that performance justifies the move to NoSQL for some data. In several sessions I attended, the speakers reiterated that you will likely never be in a situation where you'll only use NoSQL, but that it's another tool available to suit performance-related business needs. I later spoke with a few Spree developers and we concluded that the NoSQL approach may work well in some applications for product and variant data for improved performance with flexibility, but we didn't come to an agreement on where else this approach may be applied.
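As a rough, standalone sketch of the key-value notification pattern above, here is the same idea with a plain Hash standing in for Redis (lpush/lrange become unshift/map; the method names and record shape are mine, not from Flip's talk):

```ruby
require "json"

# A Hash with a default block stands in for the Redis server.
STORE = Hash.new { |hash, key| hash[key] = [] }

# Push a JSON-encoded notification onto a follower's list, newest first.
def push_notification(follower_id, message)
  STORE["user:#{follower_id}:notifications"].unshift(
    { :message => message, :timestamp => Time.now.to_i }.to_json)
end

# Read a user's notifications back out, decoding the JSON.
def notifications_for(user_id)
  STORE["user:#{user_id}:notifications"].map { |raw| JSON.parse(raw) }
end

push_notification(42, "Steph purchased a widget")
notifications_for(42).first["message"]  # => "Steph purchased a widget"
```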

Learn more about End Point's Ruby on Rails Development or Ruby on Rails Ecommerce Services.

Rails 3 at RailsConf 2010: Code Goodness

At RailsConf 2010, popular technical topics this year are Rails 3 and NoSQL technologies. My first two articles on RailsConf 2010 so far (here and here) have been less technical, so I wanted to cover some technical aspects of Rails 3 and some tasty code goodness in standard ecommerce examples.


Bundler, a gem management tool that comes with Rails 3, is a hot topic at the conference. I went to a talk on Bundler, and it was mentioned in several other talks. A quick run-through of its use:

gem install bundler
gem update --system  # update Rubygems to 1.3.6+

Specify your gem requirements in a Gemfile in the application root.

# excerpt from Spree Gemfile in the works
gem 'searchlogic',            '2.3.5'
gem 'will_paginate',          '2.3.11'
gem 'faker',                  '0.3.1'
gem 'paperclip',              '>='
bundle install  # installs all required gems
git add Gemfile  # add Gemfile to repository

In Spree, the long-term plan is to break apart ecommerce functional components into gems and implement Bundler to aggregate the necessary ecommerce gems. The short-term plan is to use Bundler for management of all the Spree gem dependencies.


ActiveRecord has some changes that affect the query interface. Here are some ecommerce examples of the new querying techniques, built around the idea of chaining finder methods:

recent_high_value_orders = Order
  .where("total > 1000")
  .where(["created_at >= :start_date", { :start_date => params[:start_date] }])
  .order("created_at DESC")

An example with the use of scope:

class Order < ActiveRecord::Base
  scope :high_value_orders, where("total > 1000").
    where(["created_at >= :start_date", { :start_date => Time.now - 5.days }]).
    order("created_at DESC")
end

class SomeController < YourApplication::AdminController
  def index
    orders = Order.high_value_orders.limit(50)
  end

  def snapshot
    orders = Order.high_value_orders.limit(10)
  end
end

The changes to ActiveRecord provide a more sensible and elegant way to build queries and move away from the so-called drunkenness on hashes in Rails. ActiveRecord finder methods in Rails 3 include where, having, select, group, order, limit, offset, joins, includes, lock, readonly, and from. Because relations are lazily loaded, you can chain query conditions with no performance cost, since the query hasn't been executed yet, and fragment caching is more effective because the query executes only when the view needs the results. Query execution can be forced by calling first, last, or all.
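The lazy-loading behavior can be illustrated without a database by a toy relation class. This is emphatically not ActiveRecord (where takes a block here for simplicity); it only shows the mechanism: chaining accumulates conditions, and nothing "executes" until the results are demanded:

```ruby
# Toy stand-in for an ActiveRecord relation, for illustration only.
class ToyRelation
  def initialize(records, conditions = [])
    @records, @conditions = records, conditions
  end

  def where(&condition)
    # Chaining returns a new relation; nothing is evaluated yet.
    ToyRelation.new(@records, @conditions + [condition])
  end

  def all
    # Only here, when results are demanded, do the conditions run.
    @conditions.inject(@records) { |records, cond| records.select(&cond) }
  end
end

orders = [{ :total => 1500 }, { :total => 200 }]
high = ToyRelation.new(orders).where { |o| o[:total] > 1000 }  # no work yet
high.all  # => [{:total=>1500}]
```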

Router Changes

Some new changes are introduced with Rails 3 in routing that move away from hash-itis, clarify flow ownership, and improve conceptual conciseness. A new route in a standard ecommerce site may be:

resources :users do
  member do
    get :index, :show
  end
  resources :addresses
  resources :reviews do
    post :create, :on => :member
  end
end

Another routing change on named routes allows:

get 'login' => 'sessions#new'   # sessions is the controller, new is the action


Some significant changes were made to the ActionMailer class after a reexamination of assumptions and the decision to model mailers after a Rails controller instead of a model/controller hybrid. An example of ActionMailer use now:

class OrderCompleteNotifier < ActionMailer::Base
  default :from => "customerservice@myecommercesite.com"

  def order_complete_notification(recipient)
    @recipient = recipient
    mail(:to => recipient.email_address_with_name,
         :subject => "Order information here")
  end
end

And some changes in sending messages allow the following:

OrderCompleteNotifier.order_complete_notification(recipient1).deliver  # builds and sends the email
message = OrderCompleteNotifier.order_complete_notification(recipient2)  # builds the email without sending it
message.deliver  # sends it later


A few talks about Rails 3 mentioned Railties, which serves as the interface between the Rails framework and the rest of its components. It accepts configuration from application.rb, sets up initializers, and tells Rails about generators and rake tasks in extensions, gems, and plugins.

Rails 3.1

DHH briefly spoke about some Rails 3.1 things he's excited about, including reorganization of the public directory assets and implementing sprite functionality, which I am a big fan of.

Rails 3 Resources

A few recommended Rails 3 learning resources were mentioned throughout the conference.

There are tons of resources out there on these topics and more that I found as I was putting this article together. Go look and write code!

Learn more about End Point's Ruby on Rails Development or Ruby on Rails Ecommerce Services.

pgcrypto pg_cipher_exists errors on upgrade from PostgreSQL 8.1

While migrating a client from an 8.1 Postgres database to an 8.4 Postgres database, I came across a very annoying pgcrypto problem. (pgcrypto is a very powerful and useful contrib module that contains many functions for encryption and hashing.) Specifically, the following functions were removed from pgcrypto as of Postgres version 8.2:

  • pg_cipher_exists
  • pg_digest_exists
  • pg_hmac_exists

While the functions listed above were deprecated, and had been marked as such for a while, their complete removal in 8.2 presents problems when upgrading via a simple pg_dump. Even though the client was not using those functions, they were still part of the dump. Here's what the error messages looked like:

$ pg_dump mydb --create | psql -X -p 5433 -f - >pg.stdout 2>pg.stderr
psql::2654: ERROR:  could not find function "pg_cipher_exists"
  in file "/var/lib/postgresql/8.4/lib/pgcrypto.so"
psql::2657: ERROR:  function public.cipher_exists(text) does not exist

While it doesn't stop the rest of the dump from importing, I like to remove any errors I can. In this case, it really was a SMOP (a simple matter of programming). Inside the Postgres 8.4 source tree, in the contrib/pgcrypto directory, I added the following declarations to pgcrypto.h:

Datum       pg_cipher_exists(PG_FUNCTION_ARGS);
Datum       pg_digest_exists(PG_FUNCTION_ARGS);
Datum       pg_hmac_exists(PG_FUNCTION_ARGS);

Then I added three simple functions to the bottom of pgcrypto.c that throw an error when invoked, letting the user know that the functions are deprecated. This is a much friendlier approach than simply removing the functions, IMHO.

/* SQL function: pg_cipher_exists(text) returns boolean */
PG_FUNCTION_INFO_V1(pg_cipher_exists);
Datum
pg_cipher_exists(PG_FUNCTION_ARGS)
{
	ereport(ERROR,
			(errmsg("pg_cipher_exists is a deprecated function")));
	PG_RETURN_NULL();
}

/* SQL function: pg_digest_exists(text) returns boolean */
PG_FUNCTION_INFO_V1(pg_digest_exists);
Datum
pg_digest_exists(PG_FUNCTION_ARGS)
{
	ereport(ERROR,
			(errmsg("pg_digest_exists is a deprecated function")));
	PG_RETURN_NULL();
}

/* SQL function: pg_hmac_exists(text) returns boolean */
PG_FUNCTION_INFO_V1(pg_hmac_exists);
Datum
pg_hmac_exists(PG_FUNCTION_ARGS)
{
	ereport(ERROR,
			(errmsg("pg_hmac_exists is a deprecated function")));
	PG_RETURN_NULL();
}

After running make install from the pgcrypto directory, the dump proceeded without any further pgcrypto errors. From this point forward, if anyone attempts to use one of the functions, it will be quite obvious that the function is deprecated, rather than leaving the user wondering if they typed the function name incorrectly or if pgcrypto is perhaps not installed.

Why not just add some dummy SQL functions to the pgcrypto.sql file instead of hacking the C code? Because pg_dump by default will create the database as a copy of template0. While there are other ways around the problem (such as putting the SQL functions into template1 and forcing the load to use that instead of template0, or by creating the database, adding the SQL functions, and then loading the data), this was the simplest approach.
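For reference, the template1 alternative mentioned above might look something like this. This is just a sketch of that workaround, not what I actually did, and the port and database names simply follow the earlier example:

```shell
# Add harmless SQL stand-ins to template1 on the new 8.4 cluster
psql -X -p 5433 template1 <<'SQL'
CREATE FUNCTION pg_cipher_exists(text) RETURNS boolean
    AS 'SELECT false' LANGUAGE sql;
CREATE FUNCTION pg_digest_exists(text) RETURNS boolean
    AS 'SELECT false' LANGUAGE sql;
CREATE FUNCTION pg_hmac_exists(text) RETURNS boolean
    AS 'SELECT false' LANGUAGE sql;
SQL

# Create the target database from template1 by hand, then load the
# dump without --create so pg_dump never forces TEMPLATE = template0
createdb -p 5433 -T template1 mydb
pg_dump mydb | psql -X -p 5433 mydb
```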

Photo of Enigma machine by Marcin Wichary

Learn more about End Point's Postgres Support, Development, and Consulting.

RailsConf 2010: Spree and The Ecommerce Smackdown, Or Not

Spree has made a good showing at RailsConf 2010 so far this year, with a new site and logo released this week:

It started off in yesterday's Ecommerce Panel with Sean Schofield, a former End Point employee and technical lead on Spree, Cody Fauser, the CTO of Shopify and technical lead of ActiveMerchant, Nathaniel Talbott, co-founder of Spreedly, and Michael Bryzek, the CTO and founder of Gilt Groupe.

The panel gave a nice overview on a few standard ecommerce questions:

  • My client needs a store - what technology should I use? Why shouldn't I just reinvent the wheel? The SaaS reps evangelized their technologies well, explaining that a hosted solution is good as a relatively immediate solution that has minimum cost and risk to a business. A client [of SaaS] need not be concerned with infrastructure or transaction management initially, and a hosted solution comes with freebies like the use of a CDN that improve performance. A hosted solution is a good option to get the product out and make money upfront. Also, both SaaS options offer elegant APIs.
  • How do you address client concerns with security? Again, the SaaS guys stressed that there are a few mistakes guaranteed to kill your business, and one of them is dropping the ball on security, specifically credit card security. The hosted solutions worry about credit card security so the client doesn't have to. One approach to PCI compliance is to securely post credit card information to a 3rd party secure payment request API, since the external request minimizes the risk to a company. Michael (Gilt) discussed Gilt Groupe's intricate process for security and managing encryption keys. Nathaniel (Spreedly) summarized that rather than focus on PCI compliance specifically, it's more important to have the right mindset about financial data security.
  • What types of hosting issues should I be concerned about? Next, the SaaS guys led again on this topic by explaining that they worry about the hosting - that a monthly hosted solution cost (Shopify.com starts at $24/month) is less than the cost of paying a developer who knows your technology in an emergency situation when your site goes down on subpar hosting. Michael (Gilt) made a good point by noting that everything is guaranteed to fail at some point - how do you (the client) feel about taking that risk? Do you just trust that the gateway is always up? One interesting thing mentioned by the SaaS guys is that technically you should not be able to host any solution in the cloud if you touch credit card data, although you may well be able to "get away with it" - I'm not sure if this was a scare tactic, but it's certainly something to consider. One disadvantage to hosting in the cloud is that you can't do forensic investigation after a problem if the machine has disappeared.

The remaining panel time was spent on audience questions that focused on payment and transaction details. There were a few bits of valuable transaction management detail covered. Cody (Shopify) has no plans to develop or expand alternative payment systems because there isn't a good ROI. The concept of having something like OIDs for credit cards to shop online would likely not receive support from credit card companies, but PayPal [kinda] serves this role currently. Nathaniel (Spreedly) covered interesting details on how user transaction information is tracked: from day one, everything stacked on a user's account is a transaction model object that mutates the user's transaction state over time. The consensus was that a transaction log is the way to track user transaction information - you should never lose track of any dollarz. On the topic of data, Shopify and Spreedly collect and store all data - the first client step is to sell stuff; later the client can come back to analyze data for business intelligence such as ROI per customer demographic, the average lifespan of a customer, or the computed value of a customer.

Now I take a break for an image, because it's important to have images in blog articles. Here is my view from the plane as I traveled to Baltimore.

After the panel, Spree had a Birds of a Feather session in the evening, which focused more on Spree. Some topics covered:

  • What is the current Spree road map for Rails 3? Extension development in Rails 3? As Rails 3 stabilizes over time, Spree will begin to transition, but no one's actively doing Rails 3 work at this point. I spoke with Brian Quinn, a member of the Spree core team, who mentioned that he's recently spent time investigating resource_controller versus inherited_resources or something else. The consensus was that people don't like resource_controller (one attendee mentioned they used the Spree data model but ripped out all of the controllers), and that a sensible alternative will need to be implemented at some point. Searchlogic, the gem behind Spree's search functionality, has no plans to upgrade to Rails 3, so Spree will also have to make a sensible decision for search. The Rails 3 generators have a reputation for being good, so this may trickle down to have positive effects on Spree extension development. Rails 3 also encourages more modular development, so the idea is that Spree will gradually be broken into more modular pieces, or gems, with Bundler tying the Spree base components together.
  • How's test coverage in Spree? Bad. Contributors appreciated.
  • I got scared after I went to the Ecommerce Panel talk - what's PCI compliance and security like in Spree? Spree is PCI compliant under the assumption that the client doesn't store credit cards - there is (was?) actually a Spree preference setting that defaults to not storing credit cards in the database, but it will result in unencrypted credit cards being stored if set to true. The Spree core team recently mentioned that this preference might be removed from the core. Offsite credit card storage, such as the Authorize.Net CIM implementation, is included in the Spree core.

The Spree Birds of a Feather session was good: the result was likely a better comprehension of Spree's short and long term road map as it transitions to Rails 3. This blog post was going to end here, but luckily I sat next to a Shopify employee this morning and learned more about Shopify. My personal opinion of the ecommerce panel was that the advantages of the hosted solutions were appropriately represented, but there wasn't much focus on Spree and the disadvantages of SaaS weren't covered much. I learned that one major disadvantage of Shopify is that they don't have user accounts; however, user accounts are in development. Shopify is also [obviously] not a good choice for sites that need deep customization, though there is a large community of applications. One example of a familiar customization is building a site with a one-deal-at-a-time business model (SteepAndCheap.com, JackThreads.com's former business model) - this would be difficult with Shopify. Some highlights of Shopify include its template engine, based on Liquid; a great API, where you can do most things except place an order; and that it scales really well.

Obviously, I drink the Spree kool-aid often, so I learned more from the panel, BOF, and hallway talk on the subject of SaaS, or hosted Rails ecommerce solutions. The BOF session covered details on the Spree to Rails 3 (hot topic!) transition nicely.

Learn more about End Point's Ruby on Rails Development or Ruby on Rails Ecommerce Services.

RailsConf 2010 Rate a Rails Application: Day One, Session One

My first session at RailsConf 2010 was one that I found valuable: 12 Hours to Rate a Rails Application, presented by Elise Huard. Elise discussed the process of picking up an existing Rails application and analyzing it, rather than developing one from scratch. This is particularly valuable because I've had to dive into and comprehend Rails projects more in the last few months, and I'd imagine this will become more common as Rails matures and legacy code piles up. Elise mentioned that her motivation for application review comes from either acquisition or project maintenance. Below is the 12-hour timeline covered by Elise:

0:00: Team Overview

First, Elise suggests that speaking to a team will reveal much about the application you are about to jump into. In our case at End Point, we often make up part of or all of the development team. She briefly mentioned some essential personality traits to look for:

  • control freak: someone who worries about details and makes other people pay attention to details
  • innovator: someone who looks for the next exciting thing and doesn't hesitate to jump in
  • automator: someone who cares about process, more on the sysadmin side of things
  • visionaries
  • methodologizers (OK, I made this word up): someone with long term planning ability and road mapping insight
  • humility: important for being open to understanding code flaws and improving

Of course, there's overlap among personality traits, but the point is that these traits are reflected in the code base in some way or another. Elise briefly mentioned that having an issue tracker or version control is viewed positively in application review (of course).

2:00: Systemic Review

The next step in a Rails application evaluation is running the app: making sure it works, and examining maintainability, the Rails version, and the license. She also discussed avoiding NIH (not invented here) syndrome during a review, which I interpreted as reviewing the code and taking the functionality as it is rather than immediately deciding that you would rewrite everything (not sure I interpreted her intentions correctly). Additional systemic indications of a good application are the use of open source gems or plugins that are maintained and used by others, and passing tests.

3:00: Start Digging Around

The next step in a 12-hour Rails application review should be an initial poke around the code. Elise likes to look at config/routes.rb because it's the application's interface to the user, and a good config/routes.rb file will be a representative inventory of the application. Another step in the review is to examine a model diagram, using a tool such as the railroad gem or RubyMine. Another good overview is to examine how parts of the application are named, as the names should be understandable to someone in the business.
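If you're following along on a real application, the routing inventory described here is one command away (this assumes a standard Rails project of this era; the output format varies by Rails version):

```shell
# From the application root: print every route the application defines.
# A readable, representative table here is a good first sign.
rake routes
```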

3:30: Metrics, Tools

Elise's next step in application review is using several metrics to examine the complexity and elegance of the code. She covered several tools I hadn't heard of, beyond the common and popular (already mentioned a few times at RailsConf) WTF metric.

An overview of the tools:

  • "rake stats": lines of code, methods, etc. per controller, helper, model
  • ParseTree / ruby_parser: code transformed into an abstract syntax tree
  • flog: analysis of code complexity based on the ABC (Assignment Branch Condition) metric
  • flay: analysis of similarity between classes
  • saikuro: analysis of cyclomatic complexity
  • roodi: detection of antipatterns using the visitor pattern
  • reek: detection of OO code smells
  • rails_best_practices: detection of Rails antipatterns
  • churn: metrics on code change, run against version control history
  • rcov: analysis of code coverage
  • metric_fu: an aggregator that includes many of the above tools
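As a quick taste, a few of these can be run directly against an application tree (assuming the gems are installed; output formats vary by tool version):

```shell
# Install two of the complexity/duplication tools mentioned above
gem install flog flay

# Built into Rails: lines of code and method counts per controller,
# helper, and model
rake stats

# ABC-metric complexity scores, worst offenders first
flog app/models

# Structural duplication between classes
flay app
```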

Elise noted that although metrics are valuable, they don't identify bugs, don't analyze code performance, and don't assess the human readability of the code.

5:30: Check out the good stuff

Next up in an application review is looking at the good code. She likes to look at the database files: is everything in the migrations? Is the database optimized sensibly? Occasionally, Rails developers can be a bit ignorant (sorry, it's true) of data modeling, so it's important to note the current state of the database. She also looks at the views to analyze style, look for divitis, and identify too much JavaScript or logic.

7:30: Test Code Review

The next step in Elise's review of a Rails application is checking out the test code. As implementation or requirements change, tests should change. Tests should express code responsibility and hide implementation detail. Tests should be expressive and don't necessarily need to follow the DRY standard that development code does.

9:30: Deployment Methodology Review

Another point of review is understanding the deployment methodology. Automated deployment, such as with Capistrano or Chef, is viewed positively. As with software tests, deployment failures should be expressive. Deployment is also viewed positively if performance tests and/or bottleneck identification are built into deployment practices.

11:00: Brownie Points

In the final hour of application review, Elise looks for brownie-point-worthy coverage such as:

  • continuous integration
  • documentation and freshness of documentation
  • monitoring (Nagios, etc.), exception notification, log analyzers
  • JavaScript testing

I found this talk informative on how one might approach understanding an existing Rails application. As consultants, we frequently have to pick up a project and just go, Rails or not, so I found the tools and approach presented by Elise insightful, even if I might rearrange some of the tasks when I'm the one who will be writing code.

The talk might also be helpful in teaching someone where to look in an application for information. For example, a couple of End Point developers are starting to get into Rails, and from this talk I think it's a great recommendation to send someone to config/routes.rb as a starting point to learn and understand the application's routing.

Learn more about End Point's Ruby on Rails Development or Ruby on Rails Ecommerce Services.

Spree vs Magento: A Feature List Comparison

Note: This article was written in June of 2010. Since then, there have been several updates to Spree. Check out the current Official Spree Extensions or review a list of all the Spree Extensions.

This week, a client asked me for a list of Spree features, both in the core and in available extensions. I decided that this might be a good time to look through Spree and provide a comprehensive look at the features included in Spree's core and extensions, using Magento as a basis for comparison. I've divided these features into meaningful broader groups that will hopefully ease the pain of comprehending an extremely long list :) Note that the Magento feature list is based on their documentation, and that the Spree features listed here are based on recent 0.10.* releases of Spree.

Features on a Single Product or Group of Products

Feature | Spree | Magento
Product reviews and/or ratings | Y, extension | Y
Product QnA | N | N
Product SEO (url, title, meta data control) | N | Y
Advanced/flexible taxonomy | Y, core | Y
SEO for taxonomy pages | N | Y
Configurable product search | Y, core | Y
Bundled products for discount | Y, extension | Y
Recently viewed products | Y, extension | Y
Soft product support/downloads | Y, extension | Y, I think so
Product comparison | Y, extension | Y
Cross sell | N | Y
Related items | Y, extension | Y
RSS feed of products | N | Y
Multiple images per product | Y, core | Y
Product option selection (variants) | Y, core | Y
Wishlist | Y, extension | Y
Send product email to friend | Y, extension | Y
Product tagging / search by tagging | N | Y
Breadcrumbs | Y, core | Y

CMS Features

Feature | Spree | Magento
Blogging functionality | Y, extension | Y, extension
Static page management | Y, extension | Y
Media management | N | Y
Contact us form | Y, extension | Y
Polls | Y, extension | Y

Checkout Support

Feature | Spree | Magento
One page checkout | N | Y
Guest checkout | Y, core | Y
SSL support | Y, core | Y
Discounts | Y, core | Y
Gift certificates | N | Y
Saved shopping cart | N | Y
Saved addresses | Y, extension | Y

Shipping Support

Feature | Spree | Magento
Real time rate lookup (UPS, USPS, FedEx) | Y, extension | Y
Order tracking | N | Y
Multiple shipments per order | Y, core | Y
Complex rate lookup | Y, extension | Y
Free shipping | Y, extension | Y

Payment Support

Feature | Spree | Magento
Multiple payment gateways | Y, core | Y
Authorize.Net | Y, core | Y
Authorize and capture versus authorize only | Y, core | Y
Google Checkout | Y, extension | Y
PayPal Express | Y, extension | Y

Admin Features

Feature | Spree | Magento
Sales reporting | Y, core | Y
Sales management tools | N | Y
Inventory management | Y, core | Y
Purchase order management | N | Y
Multi-tier pricing for quantity discounts | N | Y
Landing page tool | Y, extension | Y
Batch import and export of products | Y, extension | Y
Multiple sales reports | Y, core | Y
Order fulfillment | Y, core | Y
Tax rate management | Y, core | Y

User Account Features

Feature | Spree | Magento
User addresses | Y, extension | Y
Feature rich user preferences | N | Y
Order tracking history | Y, core | Y

System Wide Features

Feature | Spree | Magento
Extensibility | Y, core | Y
Appearance theming | Y, core | Y
Ability to customize appearance at category or browsing level | N | Y
Localization | Y, core | Y
Multi-store, single admin support | Y, extension | Y
Support for multiple currencies | N | Y
Web service API | Y, core | Y
SEO system wide: sitemap, Google Base, etc. | Y, extension | Y
Google Analytics | Y, core | Y
Active community | Y, N/A | Y

The configurability and complexity of each feature listed above varies. Just because a feature is provided within a platform does not guarantee that it will meet the desired business needs. Magento serves as a more comprehensive ecommerce platform out of the box, but the disadvantage may be that adding custom functionality may require more resources (read: more expensive). Spree serves as a simpler base that may encourage quicker (read: cheaper) customization development simply because it's in Rails and because the dynamic nature of Ruby allows for elegant extensibility in Spree, but a disadvantage to Spree could be that a site with a large amount of customization may not be able to take advantage of community-available extensions because they may not all play nice together.

Rather than platform features, the success of development depends on the developer and his/her skill set. Most developers will say that any of the features listed above are doable in Magento, Spree, or Interchange (a Perl-based ecommerce platform that End Point supports) with an unlimited budget. But a developer needs an understanding of the platform to design a solution that is easily understood and well organized (to encourage readability and understandability by other developers), to develop with standard principles like DRY and MVC-style separation of concerns, and to abstract elegantly from the ecommerce core to encourage maintainability. And of course, to understand the business needs and priorities that guide a project to success within the given budget. Inevitably, another developer will come along and need to understand the code, and the business will often use an ecommerce platform longer than planned, so maintainability is important.

Please feel free to comment on any errors in the feature list. I'll be happy to correct any mistakes. Now, off to rest before RailsConf!

Learn more about End Point's Ecommerce Development or Ruby on Rails Ecommerce Services.

Tracking Down Database Corruption With psql

I love broken Postgres. Really. Well, not nearly as much as I love the usual working Postgres, but it's still a fantastic learning opportunity. A crash can expose a slice of the inner workings you wouldn't normally see in any typical case. And, assuming you have the resources to poke at it, that can provide some valuable insight without lots and lots of studying internals (still on my TODO list).

As a member of the PostgreSQL support team at End Point, I see a number of diverse situations cross my desk. So imagine my excitement when I got an email containing a bit of log output that would normally make a DBA tremble in fear:

LOG:  server process (PID 10023) was terminated by signal 11
LOG:  terminating any other active server processes
FATAL:  the database system is in recovery mode
LOG:  all server processes terminated; reinitializing

Oops, signal 11 is SIGSEGV, Segmentation Fault. Really not supposed to happen, especially in day-to-day activities. That'll cause Postgres to drop all of its current sessions and restart itself, as the log lines indicate. The crash was in response to a specific query their application was running, which essentially runs a process on a column across an entire table. Upon running pg_dump, they received a different error:

ERROR:  invalid memory alloc request size 2667865904
STATEMENT:  COPY public.different_table (etc, etc) TO stdout

Different, but still very annoying and in the way of their data. So we have (at least) two areas of corruption. But therein lies the bigger problem: neither of these messages gives us any clue about where, in these potentially very large tables, it's encountering a problem.

Yes, my hope is that the corruption is not widespread. I know this database tends to not see a whole lot of churn, relatively speaking, and that they look at most if not all the data rather frequently. So the expectation is that it was caught not long after the disk controller or some memory or something went bad, and that whatever's wrong is isolated to a handful of pages.

Our good and trusty psql command line client to the rescue! One of the options available in psql is FETCH_COUNT, which, if set, will wrap a SELECT query in a cursor and then automatically and repeatedly fetch the specified number of rows from it. This option exists primarily to allow psql to show the results of large queries without having to dedicate so much memory up front. But in this case it lets us see the output of a table scan as it happens:

testdb=# \set FETCH_COUNT 1
testdb=# \pset pager off
Pager usage is off.
testdb=# SELECT ctid, * FROM gs;
 ctid  | generate_series 
 (0,1) |               0
 (0,2) |               1
(scroll, scroll, scroll...)

(You did start that in a screen session, right? No need to have it send all the data over to your terminal, especially if you're working remotely. Set screen to watch for the output to go idle (Ctrl-A _ by default) and switch to a different window. Oh, and this of course isn't the client's database, but one where I've intentionally introduced some corruption.)

We select the system column ctid to tell us the page where the problem occurs. Or more specifically, the page and positions leading up to the problem:

 (439,226) |           99878
 (439,227) |           99879
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Yup, there it is. Some point after item pointer 227 on page 439, which probably actually means page 440. At this point we can reconnect, and possibly through a bit of trial and error narrow down the affected area a little more. But for now let's run with page 440 being suspect and take a closer look. And here it should be noted that if you're going to try anything, shut down Postgres and take a file-level backup of the data directory. Anyway, first we need to find the underlying file for our table...

testdb=# select oid from pg_database where datname = 'testdb';
  oid  
-------
 16393
(1 row)

testdb=#* select relfilenode from pg_class where relname = 'gs';
 relfilenode 
-------------
       16394
(1 row)

testdb=#* \q
demo:~/p82$ dd if=data/base/16393/16394 bs=8192 skip=440 count=1 | hexdump -C | less
000001f0  00 91 40 00 e0 90 40 00  00 00 00 00 00 00 00 00  |..@...@.........|
00000200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00001000  1f 8b 08 08 00 00 00 00  02 03 70 6f 73 74 67 72  |..........postgr|
00001010  65 73 71 6c 2d 39 2e 30  62 65 74 61 31 2e 74 61  |esql-9.0beta1.ta|
00001020  72 00 ec 7d 69 63 1b b7  d1 f0 f3 55 fb 2b 50 8a  |r..}ic.....U.+P.|
00001030  2d 25 96 87 24 5f 89 14  a6 a5 25 5a 56 4b 1d 8f  |-%..$_....%ZVK..|
00001040  28 27 4e 2d 87 5a 91 2b  6a 6b 72 97 d9 25 75 c4  |('N-.Z.+jkr..%u.|
00001050  f6 fb db df 39 00 2c b0  bb a4 28 5b 71 d2 3e 76  |....9.,...([q.>v|
00001060  1b 11 8b 63 30 b8 06 83  c1 60 66 1c c6 93 41 e4  |...c0....`f...A.|

Huh, so through perhaps either a kernel bug, a disk controller problem, or bizarre action on the part of a sysadmin, the last bit of our table has been overwritten by the 9.0beta1 tarball distribution. Incidentally this is not one of the recommended ways of upgrading your database.
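As a side note (my own arithmetic here, not part of the original diagnosis): with 8192-byte heap pages, dd's bs=8192 skip=440 starts reading at byte 440 * 8192, and the same math locates any suspect page for a hex editor:

```shell
# Byte offset at which heap page 440 begins (8192 bytes per page)
offset=$((440 * 8192))
echo "$offset"
```

That prints 3604480, the position you would seek to if you preferred a hex editor over dd.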

With a corrupt page identified, if it's fairly clear the invalid data covers most or all of the page, it's probably not too likely we'll be able to recover any rows from it. Our best bet is to "zero out" the page so that Postgres will skip over it and let us pull the rest of the data from the table. We can use dd to seek to the corrupt block in the table and write out an 8k block of zero bytes in its place. Shut down Postgres first (just to make sure it doesn't re-overwrite your work later), and note the conv=notrunc, which keeps dd from truncating the rest of the table.

demo:~/p82$ dd if=/dev/zero of=data/base/16393/16394 bs=8192 seek=440 count=1 conv=notrunc
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000141498 s, 57.9 MB/s
demo:~/p82$ dd if=data/base/16393/16394 bs=8192 skip=440 count=1 | hexdump -C
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000147993 s, 55.4 MB/s
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Cool, it's now an empty, uninitialized page that Postgres should be fine skipping right over. Let's test it, start Postgres back up and run psql again...

testdb=# select count(*) from gs;
(1 row)

No crash, hurray! We've clearly lost some rows from the table, but we can now rescue the surrounding data. As always, it's worth dumping out all the data you can, running initdb, and loading it back in; you never know what else might have been affected in the original database. This is of course no substitute for a real backup, but if you're in a pinch at least there is some hope. For now, PostgreSQL is happy again!
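That dump-initdb-reload cycle might be sketched like this (paths and ports are illustrative only; adjust for your installation):

```shell
# Rescue everything that still reads cleanly from the damaged cluster
pg_dumpall -p 5432 > rescue.sql

# Initialize a brand new cluster and load the rescue dump into it
initdb -D /var/lib/postgresql/8.4/newdata
pg_ctl -D /var/lib/postgresql/8.4/newdata -o "-p 5434" -w start
psql -X -p 5434 -f rescue.sql postgres
```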

Learn more about End Point's Postgres Support, Development, and Consulting.