Welcome to End Point’s blog

Ongoing observations by End Point people

ASTC Day 1

I recently attended the ASTC (Association of Science-Technology Centers) conference in Albuquerque, New Mexico (also known as Breaking Bad territory). I was amazed by the interactive and unique exhibits I encountered, as well as by the cool museums and science centers in attendance.

On Day 1 we had a full day of sessions geared towards empowering museum directors and exhibit designers to create meaningful exhibits in order to engage their visitors. As many of you know, End Point has installed our Liquid Galaxy display platform at several museums throughout the country and world, at places like the Ann Arbor Hands-On Museum, the San Jose Tech Museum and the Smithsonian Institution. Our experience working with museums has been absolutely positive thus far, so we wanted to meet others within the industry to not only promote the Liquid Galaxy further, but also learn how to expand our offerings for our museum clients.

One of the best sessions I attended was called Gaming in Museums to Engage Audiences. Experts from the Science Museum of Minnesota, the Carnegie Science Center, and the American Museum of Natural History shared how they have used gaming experiences to enhance the visitor experience at their institutions. I especially enjoyed Keith Braafladt’s talk about how he used the MIT Scratch software at the Science Museum of Minnesota. For those of you not familiar with it, Scratch is an open-source educational programming language developed by the MIT Media Lab. Keith illustrated how they used this language to create “Scratch Day” at their museum, which comprises a series of workshops and activities to help their visitors experience interactive Scratch creations.

The session definitely inspired me to come back with some ideas for incorporating game design and concepts within our own Liquid Galaxy content program. With the ability to explore Google Earth, Ocean, Moon and Mars on the Liquid Galaxy, we have limitless possibilities to help our exhibit visitors, both young and old, experience the beauty of our galaxy.

If you are a museum that would like to incorporate games or other types of content on your own Liquid Galaxy, contact us!

Downstream Implications of Data Page Checksums

Now that Postgres 9.3 is all the rage, page checksums are starting to see use in production. The feature is not enabled by default during initdb, so you may want to double-check the options used when you upgraded.

What? You have already upgraded to 9.3, right? No? Oh well, when you do get around to updating, keep an eye out for initdb's --data-checksums option, or just -k. To give the feature a try on my development desktop, after the initdb I created a table and loaded in some text data. Small text strings are cast from integers so we can more easily spot them in the on-disk structure. You'll see why in a moment. The table was loaded with a good amount of data, at least more than my shared_buffers setting:

postgres=# CREATE TABLE filler (txt TEXT PRIMARY KEY);
postgres=# INSERT INTO filler SELECT generate_series::text FROM generate_series(-10000000,10000000);
INSERT 0 20000001
postgres=# \dt+
                   List of relations
 Schema |  Name  | Type  |  Owner   |  Size  | Description
--------+--------+-------+----------+--------+-------------
 public | filler | table | postgres | 761 MB |
(1 row)

There. Maybe a little more than I needed, but it works. My storage (on this desktop) is so much slower than the processor that, of course, I certainly didn't notice any difference in performance with checksums on. But on your nice and speedy server you might see the performance hit. Anyway, now to find the file on disk...

postgres=# SELECT relfilenode FROM pg_class WHERE relname = 'filler';
 relfilenode
-------------
       16390
(1 row)

postgres@endpoint:~/9.3$ dd bs=8192 count=1 skip=10 if=main/base/12066/16390 of=block
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.0161733 s, 507 kB/s

That relfilenode (plus the "postgres" database OID of 12066) corresponds to base/12066/16390 (the pg_relation_filepath() function will compute that path for you), so I've taken a copy of block 10 of that file. And then introduced some "silent" corruption, of the sort that might be seen with a buggy storage driver, or if a cosmic ray hit the disk platter and flipped a bit:

postgres@endpoint:~/9.3$ sed -iorig 's/9998000/9999000/' block

postgres@endpoint:~/9.3$ diff -u <(hexdump -C block) <(hexdump -C blockorig)
--- /dev/fd/63    2013-10-18 16:35:22.963860942 -0400
+++ /dev/fd/62    2013-10-18 16:35:22.963860942 -0400
@@ -134,7 +134,7 @@
 00000850  98 00 01 00 02 09 18 00  13 2d 39 39 39 37 39 39  |.........-999799|
 00000860  39 00 00 00 00 00 00 00  02 00 00 00 00 00 00 00  |9...............|
 00000870  00 00 00 00 00 00 0a 00  97 00 01 00 02 09 18 00  |................|
-00000880  13 2d 39 39 39 39 30 30  30 00 00 00 00 00 00 00  |.-9999000.......|
+00000880  13 2d 39 39 39 38 30 30  30 00 00 00 00 00 00 00  |.-9998000.......|
 00000890  02 00 00 00 00 00 00 00  00 00 00 00 00 00 0a 00  |................|
 000008a0  96 00 01 00 02 09 18 00  13 2d 39 39 39 38 30 30  |.........-999800|
 000008b0  31 00 00 00 00 00 00 00  02 00 00 00 00 00 00 00  |1...............|

Yep, definitely right in the middle of a column value. Normal Postgres wouldn't have noticed at all, and that incorrect value could creep into queries that are expecting something different. Inject that corrupt page back into the heap table...

postgres@endpoint:~/9.3$ dd bs=8192 count=1 seek=10 conv=notrunc of=main/base/12066/16390 if=block
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000384083 s, 21.3 MB/s

postgres=# SELECT count(*) FROM filler;
WARNING:  page verification failed, calculated checksum 14493 but expected 26981
ERROR:  invalid page in block 10 of relation base/12066/16390

... And our checksum-checking Postgres catches it, just as it's supposed to. And, obviously, we can't modify anything on that page either, as Postgres would need to read it into the shared buffer before any tuples there could be modified.

postgres=# UPDATE filler SET txt ='Postgres Rules!' WHERE txt = '-9997999';
WARNING:  page verification failed, calculated checksum 14493 but expected 26981
ERROR:  invalid page in block 10 of relation base/12066/16390
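The dd gymnastics above (pull out block 10, then write a block back at the same offset) can be sketched in Python. This is a stand-in demonstration against a scratch file, assuming the default 8 kB page size; it is not a tool to point at a live cluster:

```python
import os
import tempfile

BLOCK_SIZE = 8192  # default Postgres page size

def read_block(path, blockno, block_size=BLOCK_SIZE):
    """Equivalent of: dd bs=8192 count=1 skip=<blockno> if=<path>"""
    with open(path, "rb") as f:
        f.seek(blockno * block_size)
        return f.read(block_size)

def write_block(path, blockno, page, block_size=BLOCK_SIZE):
    """Equivalent of: dd bs=8192 count=1 seek=<blockno> conv=notrunc of=<path>.
    Opening with "r+b" patches the page in place without truncating the file."""
    with open(path, "r+b") as f:
        f.seek(blockno * block_size)
        f.write(page)

# Demonstration against a scratch stand-in "relation file" of 16 fake pages
# (page n is filled with byte value n), NOT a real relation file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for n in range(16):
        f.write(bytes([n]) * BLOCK_SIZE)

page = read_block(path, 10)                     # pull out block 10
corrupted = page[:100] + b"\xff" + page[101:]   # flip a byte mid-page
write_block(path, 10, corrupted)                # inject it back

patched = read_block(path, 10)
assert os.path.getsize(path) == 16 * BLOCK_SIZE  # nothing got truncated
print(patched[100])  # 255
os.unlink(path)
```

Note the conv=notrunc analogue: GNU dd without that flag truncates the output file after the block it writes, which would be disastrous against a real relation file, so "r+b" is the safe way to patch in place.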

The inability to even try to modify the corrupted data is what got me thinking about replicas. Assuming we're protecting against silent disk corruption (rather than Postgres bugs), nothing corrupted has made it into the WAL stream. So, naturally, the replica is fine.

postgres@endpoint:~/9.3$ psql -p 5439
psql (9.3.1)
Type "help" for help.
postgres=# SELECT ctid, * FROM filler WHERE txt IN ('-9997998', '-9997999', '-9998000', '-9998001');
   ctid   |   txt
----------+----------
 (10,153) | -9997998
 (10,152) | -9997999
 (10,151) | -9998000
 (10,150) | -9998001
(4 rows)

You'd probably be tempted to fail over to the replica at this point, which would be the Right thing to do. You are, after all, starting to see odd and (presumably) unexplained corruption in the on-disk state. You would be wise to switch off that hardware as soon as you can and investigate.

But Halloween is right around the corner, so let's give in to some Mad Scientist tendencies! And remember, only try this at home.

postgres@endpoint:~/9.3$ dd bs=8192 count=1 skip=10 if=replica/base/12066/16390 seek=10 conv=notrunc of=main/base/12066/16390
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000295787 s, 27.7 MB/s

With one assumption -- that the replica is caught up to the point where the primary first saw the page as corrupted -- the replica should be guaranteed to have an up-to-date copy of the page, even if other things on the master are changing and the replica's lagging behind.

Above, we did a direct copy from the replica's version of that page back to the master...

postgres@endpoint:~/9.3$ psql -p 5435
psql (9.3.1)
Type "help" for help.

postgres=# SELECT ctid, * FROM filler WHERE txt IN ('-9997998', '-9997999', '-9998000', '-9998001');
   ctid   |   txt
----------+----------
 (10,150) | -9998001
 (10,151) | -9998000
 (10,152) | -9997999
 (10,153) | -9997998
(4 rows)

postgres=# UPDATE filler SET txt ='Postgres Rules!' WHERE txt = '-9997999';
UPDATE 1

Voila! Corruption fixed without even having to take the database down.

SSL Certificate SANs and Multi-level Wildcards

Some bits of advice for those who run their own Certificate Authorities or use self-signed certificates, related to multiple matches and wildcard domains.

In some circumstances it's desirable to match multiple levels of wildcards in an SSL certificate. One example of this is in our Camp development system, whose domain names include a numeric camp identifier n as a subdomain component; there we need a certificate which matches both a camp's own hostname and the hostnames beneath it.

The most obvious way to do this is to create a certificate whose commonName contains a nested wildcard; unfortunately this is not a working solution, as current-day browsers do not support it. The alternative is to create a subjectAltName (an alias within the certificate for the subject of the certificate, abbreviated SAN) for each subdomain which we want to wildcard. For Camps this works well because the subdomains are in an extremely regular format, so we can create a SAN for each camp number in [0..99]. One caveat is that if SANs are in use, they must also contain the commonName (CN) as an alternate name, since the browser will ignore the CN in that case.

SANs are added to the certificate when the certificate signing request is created. Assuming that the key for the certificate has already been generated, the first step is to make a copy of the system's openssl.cnf file. Next, this file is edited to add the SANs.

First uncomment the line which says:

req_extensions = v3_req

Next in the [v3_req] section, add the SANs. There are two ways to do this, either inline or as a separate section. Inline is in the format:

subjectAltName=DNS:*,DNS:* [...]

Sectional is in the format:

subjectAltName = @alt_names
And then a section [alt_names] is created later in the file, listing the names (the domains shown here are placeholders):

[alt_names]
DNS.1 = *.camp1.example.com
DNS.2 = *.camp2.example.com
Note that in the DNS.n lines, the n enumerates the entry number starting at 1.

The sectional format seems more convenient for programmatic generation of the cnf file, since a simple template could be made that is merely appended to.
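As a sketch of that programmatic generation (the domains below are placeholders, not real camp names, and the helper function is hypothetical):

```python
def alt_names_section(domains):
    """Render an openssl.cnf [alt_names] section; DNS.n entries number from 1."""
    lines = ["[alt_names]"]
    lines += ["DNS.%d = %s" % (n, d) for n, d in enumerate(domains, start=1)]
    return "\n".join(lines) + "\n"

# The bare domain and its wildcard (to cover the CN), then one wildcard
# per camp subdomain, camp0 through camp99.
domains = ["example.com", "*.example.com"]
domains += ["*.camp%d.example.com" % i for i in range(100)]

cnf_fragment = alt_names_section(domains)
print(cnf_fragment.splitlines()[1])  # DNS.1 = example.com
```

The generated fragment would then be appended to the copied cnf file, with subjectAltName = @alt_names in the [v3_req] section pointing at it.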

Once the cnf file is created, the CSR is generated in the normal way, with the new cnf file specified:

openssl req -new -out -key -config

This creates the CSR file, which can then be signed by the CA or the key in the normal way, with one important point: the cnf file used by the CA must have the following line uncommented:

copy_extensions = copy

This line ensures that the v3 extension sections for subjectAltNames are copied from the CSR into the newly minted certificate. The reason this is commented out by default is that it introduces a security risk if used against CSRs that aren't strictly controlled, since some extensions could yield certificates that are themselves able to sign further certificates.

The resultant certificate will contain the subjectAltNames specified, and can be verified by looking at the contents of the certificate with:

openssl x509 -in -text -noout

There will be a lot of output, but will contain something like:

X509v3 Subject Alternative Name:
     DNS:*, DNS:*, DNS:* [...]

When you see this, you'll know everything worked. I hope this summary will help anyone who has this need, and here's to the day when browsers support multiple-level wildcard certificates and this advice is moot.

Changing Postgres pg_dump warnings into errors with sed

Turning off warnings when loading a PostgreSQL data dump can be a trickier task than it may first appear. One of the goals of a program I was working on was to automatically load the schema of a production database into a development system. The program needed to stay as quiet as possible - the users running it did not need to be distracted by annoying messages generated by Postgres as the schema was loaded.

The problem occurred when a text-based dump of the database (created via pg_dump with the --schema-only flag) was loaded into an existing database with the help of the psql program. To load the file "test.sql" into the database wilber, one usually does:

$ psql wilber -U postgres -f test.sql

There are two general classes of "noise" that are generated by such an action. I'll show how to make both kinds quiet.

The first problem is that psql by default is very verbose and every single command gives an echoing confirmation of what just occurred. Thus, a typical schema load outputs a lot of things like this:

$ psql wilber -U postgres -f test.sql
SET
SET
CREATE TABLE
ALTER TABLE
CREATE INDEX
...
These types of messages are easy to turn off. Simply add the --quiet flag when calling psql, usually abbreviated to just -q, like so:

$ psql wilber -q -U postgres -f test.sql

The other type of unwanted output that can appear is the various messages from Postgres itself. These have different levels of severity: which ones are shown to you is determined by the client_min_messages setting. From least important to most important, the levels are:

  • DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1
  • LOG
  • NOTICE
  • WARNING
  • ERROR

The output I was seeing looked like this:

psql:test.sql:22716: WARNING:  => is deprecated as an operator name
DETAIL:  This name may be disallowed altogether in future versions of PostgreSQL.

Not something I wanted users to see every time they ran the script! The solution was to set client_min_messages to something higher than WARNING. In this case, ERROR is a good choice. All warning-level notices will be suppressed. The SQL would look like this:

SET client_min_messages = ERROR;

However, because we are loading a file, there is no easy way to enter SQL commands before the file is loaded. Luckily, there is a very-poorly-documented trick that can apply settings before a program such as psql runs: the PGOPTIONS environment variable. It passes command-line options through to the backend server process for any libpq-based client. For the case at hand, you would do this:

$ PGOPTIONS='--client-min-messages=error' psql wilber -q -U postgres -f test.sql

This will ensure that client_min_messages is set to ERROR before psql starts to load the file. If you are using a Perl script (as I was), you can set the environment variable before making a system call:

$ENV{PGOPTIONS} = '--client-min-messages=error';
$COM = 'psql wilber -q -U postgres -f test.sql';
system $COM;

However, if your file is from a pg_dump, you will find that this trick does not work! Why? When pg_dump creates a file, it adds some SET commands to the top of it. One of those is client_min_messages, which will clobber whatever you have set it to. D'oh! Here's what pg_dump currently sticks at the top of its output:

$ pg_dump --schema-only | head -20 | grep -n '^SET'
5:SET statement_timeout = 0;
6:SET client_encoding = 'UTF8';
7:SET standard_conforming_strings = on;
8:SET check_function_bodies = false;
9:SET client_min_messages = warning;

What to do? The schema file was very large, so editing it by hand was not an option - and certainly not for this automated task. The answer was to modify the large schema text file before loading it. While perl is my go-to tool for most things, this looked like the perfect job for the sed program. So, in my script, I simply added this right before psql is invoked:

  $COM = "sed --in-place '1,/client_min_messages/ {s/warning/ERROR/}' test.sql";
  system $COM;

The short explanation is that it replaces the 'warning' on line 9 of the file with the word 'error'. This prevented any of the annoying warnings from showing up, and made the script nice and quietly happy again.

The long explanation about how it works is the rest of this post. :)

The sed program is a "stream editor" designed to quickly transform text. The man page is not very useful: I recommend learning more about sed via an online tutorial. This one is my favorite, not least because of the "modem noise" joke (kids, look that one up).

First we instruct sed, via the --in-place flag to make the changes directly to the file itself, rather than sending output to stdout. This is usually abbreviated to simply -i, but I consider the longer form more polite, as it's a little less obtuse to someone not familiar with the flags. Consider it a cheap way to comment your code.

After that, in single quotes, is the actual instructions we pass to sed. Finally, we give it the file to perform the work on. Again, the instructions to sed were:

'1,/client_min_messages/ {s/warning/ERROR/}'

This has two sections: what to do, and where to do it. By default, sed operates on every line of the file. As we only want sed to change the first occurrence of client_min_messages, we tell it to limit its actions:

'1,/client_min_messages/'
You can instruct sed to operate on a range of lines by using the X,Y format, aka start,stop. A lone number represents a line number in the file; in this case we want to start at the first line of the file. You can also provide a regular expression. In this case, we want to stop processing when we find a line that matches the regular expression /client_min_messages/. Note that these limits are inclusive, so that we still process the matched line.

The next section tells sed what to do:

' {s/warning/ERROR/}'

Note that both the extra space and the braces are not needed, but are there simply to make things easier to read. You can also change the delimiter, which can arguably help as well; here is an equivalent version:

'1,/client_min_messages/ {s:warning:ERROR:}'

We are telling sed to make a simple substitution of the word warning with the word ERROR. I used uppercase here to make it more obvious to anyone viewing the file that a change had been made (Postgres does not care what the case is).

Normally, this might be considered a fairly sloppy regex, as it doesn't match very strictly - any lines up to and including the client_min_messages will have the word warning replaced. However, because we can be confident that the pg_dump output is going to remain very standard, we can let it slide. A more proper solution might be:

'1,/client_min_messages/ {s:client_min_messages = warning:client_min_messages = ERROR:}'

Why didn't I simply replace the regex and be done, like this?:

's/client_min_messages = warning/client_min_messages = ERROR/'

Two reasons. First, by setting a range, we ensure that we only make the change at the top of the file, and not anywhere else where it may be set (not by pg_dump itself, but perhaps as the text inside a function. You never know). This is also more efficient, as sed doesn't need to scan the lines for a potential replacement after it hits the first occurrence at line 9.

The second reason is that we can now safely run this against the file more than once, and not worry about a bad replacement or about losing efficiency. If we search for the 'warning' version of the regex, then the second time through we may end up scanning the entire file (and any substitutions we do make will be in the wrong place).
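To make that range behavior concrete, here is a rough Python model of what sed does with the address (an illustration of the logic, not a reimplementation of sed):

```python
import re

def range_limited_sub(lines, stop_pattern, old, new):
    """Mimic sed '1,/stop_pattern/ {s/old/new/}': substitute from the first
    line through the first line matching stop_pattern, inclusive."""
    out = []
    active = True
    for line in lines:
        if active:
            out.append(line.replace(old, new, 1))  # s/// replaces first occurrence
            if re.search(stop_pattern, line):
                active = False  # the matching line was still processed
        else:
            out.append(line)  # past the range: left untouched
    return out

dump_header = [
    "SET statement_timeout = 0;",
    "SET client_min_messages = warning;",
    "-- a later mention of warning is left alone",
]
result = range_limited_sub(dump_header, r"client_min_messages", "warning", "ERROR")
print(result[1])  # SET client_min_messages = ERROR;
```

Running it a second time over its own output is harmless: the range still ends at the client_min_messages line, and 'ERROR' contains no 'warning' to replace.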

To recap, the winning lines in Perl were:

$COM = "sed --in-place '1,/client_min_messages/ {s/warning/ERROR/}' test.sql";
system $COM;

Once we have this, we no longer need the PGOPTIONS setting, which was a little dangerous as a global %ENV setting inside the Perl script anyway, since it persists even after the first system call! So sed has saved the day, or at least made the users of this program happier!

Full Page Caching in Interchange 5

I recently attended the eCommerce Innovation Conference 2013 with fellow End Pointer Richard Templet and presented on the work End Point has done to develop full-page caching in Interchange 5. The work is in final review currently for inclusion into core Interchange and should provide a roadmap for cache management in Nitesi/Interchange 6.

Parameters of a Caching Solution

In order to identify the full scope of what one means when one says an application uses caching, there are (at least) 3 aspects to define:

  • Where in the application stack the cache is applied.
  • How long until the cache expires.
  • What level of user state it must support.

The issue of cache duration is not addressed here as any such formulation will be specific to business requirements and tolerance of stale data. I will state, however, that even very brief cache durations can have an enormous impact on performance, particularly for sites that have a relatively small number of resources that absorb the bulk of the traffic. Cache durations of several minutes to several hours can drop the traffic that makes it to Interchange to a small fraction of the overall requests.

Caching and the Application Stack

Let's examine an ideal target architecture for a simple implementation of an Interchange store, or stores. Note that we could also introduce load balancing layers to substantially boost capacity through horizontal scaling at multiple points in this stack, and if we did so we'd need to add those to our list to identify the impact of selecting a target cache point.

  • Browser
  • Reverse proxy (e.g., Varnish, Pound, nginx)
  • Web server (e.g., Apache httpd, nginx)
  • Interchange, with session storage
    • Database (e.g., MySQL)
    • Other applications (e.g., email server)
    • Web services (e.g., payment gateway)

The higher up the stack we can cache, the more scalable our solution becomes. Interchange comes with [timed-build], which can be used to great effect to cache database results in particular, but also potentially other applications that could produce bottlenecks in performance. Moreover, because Interchange assembles the document, this is the last point in the stack where we can (directly, if at all) partially cache a resource. However, having all requests reach Interchange is itself a scalability issue that would have to be resolved either with horizontal scaling or by pushing our cache farther up the stack.

It's also possible to produce a static build of assets in the web server's doc space, keeping requests from reaching Interchange at all. And while responses from a web server will have considerably less overhead and better response times than Interchange, both building and maintaining a static repository of Interchange assets is going to take some effort and, ultimately, will require horizontal scaling to relieve overload.

Our target point for the cache described herein is at the reverse proxy. We want to control our cache using standard cache headers and an nginx configuration that uses the full URL for its cache keys. The reverse proxy is chosen because:

  • It is very fast and efficient at delivering documents.
  • It is low maintenance, as the proxy engine keeps its data storage fresh according to document headers.
  • Assets can now be massively scaled through 3rd-party CDNs, requiring none of the infrastructure investment and maintenance that horizontal scaling solutions lower in the stack would.
  • Full URL cache keys and cache headers allow us to extend our cache all the way to the browser, for those user agents that respect the cache headers.
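A minimal nginx sketch of this arrangement (the hostnames, paths, and zone names here are assumptions for illustration, not part of the Interchange work itself):

```nginx
# Cache zone on disk; keys_zone names an in-memory index for lookups.
proxy_cache_path /var/cache/nginx/ic levels=1:2
                 keys_zone=ic_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name shop.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;         # web server / Interchange behind us
        proxy_cache ic_cache;
        proxy_cache_key $scheme$host$request_uri;  # full-URL cache key
        # nginx honors Cache-Control/Expires from the backend by default,
        # and refuses to cache responses that carry Set-Cookie headers.
    }
}
```

With this in place, cache duration is controlled entirely by the Cache-Control headers Interchange emits, so no separate proxy-side expiry management is needed.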

Core Interchange Support for Full-Page Caching

To provide general support for full-page caching in Interchange, we found it necessary to introduce some core features into Interchange 5. These features are in final review internally and will be committed to core soon.


SuppressCachedCookies

New boolean catalog configuration parameter that, when true, instructs Interchange not to write any cookies in a cacheable response. Cookies are inherently specific to the individual client being served, and nginx will refuse to cache a resource containing Set-Cookie headers.


Volatile

The value of $::Instance->{Volatile} indicates to critical core code what the request's cache potential is. It has three possible values:

  • undef - unknown; could be cached, but hasn't been explicitly identified.
  • true - cannot be cached. Indicates the requested resource is user-dependent and may produce different results for the same URL for different users.
  • false (other than undef) - explicitly treat as a "can be cached" resource. This value can be used to force caching, overriding settings that would otherwise mark the resource volatile.

The ternary nature of Volatile allows a developer to explicitly control the caching behavior of any given resource if circumstances require an adjustment to the default behavior.


New container tag whose body will only be interpolated if the value of $::Instance->{Volatile} at the time of interpolation is false. The tag is particularly useful for placing settings for cache headers in shared resources (include files, components, etc.) where the final document may or may not be cacheable.


OutputCookieHook

Catalog configuration parameter that takes the name of a catalog or global sub to execute just prior to Interchange writing its Set-Cookie headers. The setting was inspired by the need to maintain portions of the session on the client via cookies, allowing some of the more stubborn session-dependent resources to be cacheable.

Obstacles to Full-Page Caching in Interchange

Standard coding practices in a typical Interchange catalog interfere with full-page caching in a number of ways, primarily:

  • Liberal coupling of resources with user sessions
  • Searches common to all site users (e.g., category lists) generate saved search objects that force session coupling
  • Heavy reliance on non-RESTful URLs, primarily those generated by the process actionmap

In most circumstances, these practices can be altered to produce fully cacheable resources, and particularly if the most heavily used components of the site are addressed first, such as the home page and category lists.

Catalog Changes to Mitigate Caching Obstacles

Precisely what changes are required depends on the specific coding practices used for the resources in question. However, there are a number of typical usage patterns that will to some degree affect almost all Interchange catalogs.

Use RESTful URLs

Avoid unnecessary dependence on the process actionmap, which is often used liberally precisely because it gives lots of hooks into cool and useful features. Avoid any other use of common URLs that produce varying resources based on session dependence.

Take advantage of writing custom actionmaps, which allow the developer extreme flexibility in URL construction. Because actionmaps make it easy and straightforward to produce unique URLs, they are ideal both for creating cacheable resources and fine-tuning SEO.

Permanent More for Category Lists

By default, search objects, which Interchange uses for more lists, are restricted to access from the generating user's session. This is a safeguard, as search results often include personal data intended only for the requestor. However, for features such as category lists, whose results are identical across all users, this creates a difficult burden for the developer who wishes to cache those popular resources.

We can overcome this difficulty by making the search definitions for category lists, or other canned searches, include the permanent more indicator. Permanent more causes all identical searches to share a common search object, accessible by the same URLs, freeing the search from the usual coupling with the session of its originator.

Address Common Session Variables

There are certain session variables that are often found in page code and can cause a number of difficulties when trying to make a resource cacheable. Start by tracking through the use of each of the following and come up with a strategy to remove the dependencies so that the interpolated code is free of them:

  • [value]
  • [data session ___]
  • [set]/[seti]
  • Any [if] conditionals that use those respective bases (e.g., [if value], [if session], and [if scratch])

Cacheable Redirects

Any code that issues a redirect must do so consistently with respect to its URL. A redirect caches the HTTP status code and, if issued conditionally, will force all users accessing the cached resource to redirect as well. This practice is often seen in Interchange catalogs, particularly on pages that are restricted to logged-in users. In summary, it's OK to cache redirects; just make sure that a given URL is either always, or never, redirected.

Profile and "Click" Code

It is common practice to define profile and click code in scratch variables. This is particularly true for click code defined with the [button] tag, which, while convenient, causes the click action to be defined under the hood in scratch space. In order for these event-driven features to work, the resource must compile that code and seed it in the session in anticipation of the user's next actions. If those resources are cached, those important definitions are never added to the session as the result of a page load, and so none of the actions will work.

All use of [button] or [set] to produce click or profile code should be moved into the profile files (typically found in etc/profile.*). There they are added to the Interchange global configuration at compile time and are thus available to all users regardless of the state of their sessions. This is good practice generally, since it is often easy (particularly with [button]) to have multiple actions map to the same scratch key. When that happens, a user coming back via the browser's back button can get invalid results on actions taken, because the click or profile definitions have changed out from under the page being viewed.

Set SuppressCachedCookies to Yes

As described earlier, this will tell the Interchange core not to write any cookies if the resource is to be cached.

Define Cache-Control Headers

At any point in the development of the response body, Interchange can be issued a pragma that tells it to treat the response as cacheable. This will interact with the core features described above to ensure that there is no impact on the user session as a result of this request, and put the correct headers in the output stream (as well as keep the cookie headers out).

Invocation looks something like [tag pragma cache_control]max-age=NNN[/tag], where NNN is the number of seconds the cache should persist.

Impact on Session Management

Any resource considered cacheable should a priori have neither impact nor dependence on a session. This must be true if we consider that, once cached, a user will interact with the page--and expect correct behavior--without ever touching Interchange. This introduces some new conditions associated with the session:

  • The initial user request against a cacheable resource will not generate a session. Why is this so? Apart from the already-noted discrepancy of accessing the resource for a cached vs. live hit, generating a session would necessitate producing the session cookie. Returning that cookie would invalidate the resource as cacheable. Further, one of the significant advantages of a reverse-proxy caching strategy is to provide protection in a DoS attack, and the user agents in such an attack are very unlikely to maintain a session. Thus, if we were failing to produce a cache on initial hits to allow setting a session, all those DoS hits would reach Interchange, and on top of that be churning out session entries on the server.
  • Session writing is suppressed on any response with a cacheable resource. Interchange must treat the response without any permanence, because accesses of the resource from the cache will never reach Interchange. If the request that produced the cache also wrote that user's session, it would produce a deviation in behavior between cached vs. live requests.

Overrides on a Cacheable Response

Any action resulting in a POST is considered to imply the necessity of the user initiating the request reaching the session (or the database, or some other permanent storage controlled by the Interchange app). Thus POSTs by default force the Volatile setting to true. However, note this can be overridden by the developer if necessary (e.g., if a DoS hits the home page with a POST rather than the expected GET).

Similarly, any requests passing through either the "process" or "order" actionmap are assumed to require access to the session. "process" will most often be issued as a POST as well, although using "order" with a GET is common.

User State on Cacheable Resources

A big mistake a developer may make when considering full-page caching is to assume an all-or-nothing approach. Trying to compartmentalize an entire catalog into fully cacheable resources would be a daunting task, requiring essentially the construction of a fully client-side application and session management. This is neither realistic nor desirable.

A catalog can gain considerable benefit simply from evaluating those resources which do not require session entanglement at all and starting with them. Without considering users that are logged in or have items in their cart, under most circumstances the home and category list pages should be free from entanglement. With a bit of URL management, the resources can skip the cache when a user is logged in or has items in the cart.
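That bit of URL management amounts to a per-request predicate the proxy layer can evaluate. A minimal sketch of the decision, assuming hypothetical `session_id` and `cart_items` cookie names (not Interchange's actual cookie layout):

```python
# Sketch of the skip-the-cache decision: serve from cache only for requests
# with no session and an empty cart. Cookie names here are illustrative
# placeholders, not a statement about Interchange's actual cookies.

def is_cache_eligible(cookies):
    if "session_id" in cookies:                  # stateful (e.g. logged-in) user
        return False
    if cookies.get("cart_items", "0") != "0":    # items in the cart
        return False
    return True

print(is_cache_eligible({}))                        # anonymous, empty cart
print(is_cache_eligible({"session_id": "abc123"}))  # has a session: go live
```

The proxy (or a small edge script) would route ineligible requests straight to Interchange and let everything else hit the cache.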

"Read Only" User State

If caching is desirable on resources that cannot be decoupled from session influence, we can expose the necessary parts of the session to the client in the form of cookies and can refactor our document to contain client-side code to manage the session use. Typical examples of this would be personalization for logged in users, or the display of a small cart on all pages. The session data stored in the cookie is controlled exclusively by Interchange and is read-only on the client. Each time the session is accessed and updated, the cookie is re-written to the client.

Management of such a process is relatively easy with modern Javascript frameworks such as jQuery. As a typical example, one might need to replace the following session-dependent code

  [if session logged_in]
    Hi, [value fname]!
  [/if]

with client-side management:

  <span id="fname_display"></span>
and in the ready() event elsewhere with our session cookie data stored in valuesData:
  if (valuesData.fname) {
    $('#fname_display').replaceWith('Hi, ' + valuesData.fname + '!');
  }

The OutputCookieHook was developed as a convenient mechanism for constructing the proposed valuesData cookie above, allowing for a subroutine to build the cookie just prior to the core code that constructs the document headers but after any standard actions that would alter the session and would need to be captured in the cookie data.

"Read Write" User State

If state needs are more complex on a particularly popular resource, it may be necessary to allow our state cookie to also be updated from the client. With the tools described here, the developer can either amend the existing cookie, or construct a new one, that captures data input by the client through subsequent requests to cached resources. Once the user issues the next non-cached request, an Autoload subroutine could be constructed to identify that changes have occurred and then sync those changes back to the user's session. While implementing read-write user state may be challenging, it is possible and has been done at End Point for clients where that need exists.
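The sync step described above boils down to a small merge routine: parse the client-written cookie, copy any changed keys into the server-side session, and note whether anything changed. A hedged sketch — the JSON cookie format and function name are illustrative, not Interchange's actual Autoload API:

```python
import json

def sync_cookie_to_session(session, cookie_value):
    """Merge client-side state changes (carried in a JSON cookie) back into
    the server-side session dict. Returns True if anything changed."""
    try:
        client_state = json.loads(cookie_value)
    except (TypeError, ValueError):
        return False                 # missing or malformed cookie: no-op
    changed = False
    for key, value in client_state.items():
        if session.get(key) != value:
            session[key] = value     # client wins for keys it is allowed to write
            changed = True
    return changed

session = {"recently_viewed": ["sku-1"]}
cookie = json.dumps({"recently_viewed": ["sku-1", "sku-2"]})
sync_cookie_to_session(session, cookie)   # session now includes sku-2
```

In a real deployment you would whitelist which keys the client may write, since anything in a cookie is attacker-controllable.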


Hopefully this provides a good idea of how to get started when approaching full-page caching, which is possible in most web app frameworks, and soon will be much easier in Interchange 5 with the new core tools introduced here.

How to DRY out your Active Record queries with Squeel

Active Record alone isn’t enough to keep data-access code DRY and clean in real-world Rails applications. You need a tool like the squeel gem and some good practices in place.

Using JOIN with Active Record is cumbersome

Having JOINs in composable data-access Rails code quickly makes that code ugly and hard to read.

Consider this snippet of code, taken directly from the Rails guides, which gives the developer full control over the joining part of the query:

Client.joins('LEFT OUTER JOIN addresses ON addresses.client_id = clients.id')

That’s 77 characters for something as simple as a left outer join.

Now - when using Rails, we almost always have properly defined associations in our models, which allows us to put in code something like this:

Client.joins(:addresses)
That’s much better - there’s not much code and you immediately know what it does. It isn’t “left outer join” but is useful nevertheless. You could do more with that feature. For example you could join with multiple associations at once:

Post.joins(:category, :comments)

Also - join with nested associations:

Post.joins(comments: :guest)

Which produces:

SELECT posts.* FROM posts
INNER JOIN comments ON comments.post_id = posts.id
INNER JOIN guests ON guests.comment_id = comments.id

Another example from the Rails guides shows that you can join multiple nested associations as well:

Category.joins(posts: [{comments: :guest}, :tags])


SELECT categories.* FROM categories
INNER JOIN posts ON posts.category_id = categories.id
INNER JOIN comments ON comments.post_id = posts.id
INNER JOIN guests ON guests.comment_id = comments.id
INNER JOIN tags ON tags.post_id = posts.id

Specifying conditions on nested associations works nicely according to the guides too:

time_range = (Time.now.midnight - 1.day)..Time.now.midnight
Client.joins(:orders).where(orders: {created_at: time_range})

And indeed this looks pretty sweet.

The ugly side of the Active Record JOIN mechanism

If the real world consisted only of the use cases found in the docs, we wouldn’t need to look for any better solutions.

Take a look at the following example that might occur:

@tags = params[:tags]
Post.joins(comments: [{guest: :tags}]).
     where(comments: {guest: {tags: {code: @tags}}})

Now try to look at this code and answer the question:
“what does this code do?”.

Squeel - The missing query building DSL

The last example could be easily stated using squeel as:

@tags = params[:tags]
Post.joins{comments.guest.tags}.
     where{comments.guest.tags.code << my{@tags}}

Isn’t that simpler and easier to read and comprehend?

What about queries you wouldn’t do even in Active Record?

If you were to produce a query with e.g. a left outer join, you’d have to resort to writing the joining part as a string. With squeel you can just say:
Person.joins{articles.outer}

Can I do the same with subqueries?

Surely you can, take a look here:

awesome_people = Person.where{awesome == true}
Article.where{author_id.in(awesome_people.select{id})}

Which produces the following SQL query:

SELECT "articles".* FROM "articles"
WHERE "articles"."author_id" IN 
  (SELECT "people"."id" FROM "people"  WHERE "people"."awesome" = 't')

But if I were to use SQL functions - I’d have to write queries in strings, no?

No! Take a look here:

Person.joins{articles}.
       group{articles.title}.
       having{{articles => {max(id) => id}}}


SELECT "people".* FROM "people"
INNER JOIN "articles" ON "articles"."person_id" = "people"."id"
GROUP BY "articles"."title"
HAVING max("articles"."id") = "articles"."id"

Why use a DSL to generate SQL? - Composability

At this point many hard-core database experts sit with a wry face, frowning at the idea of having an ORM and DSL for accessing the database.

I won’t go into reasons as to why their concerns are valid or not. I’ll just say that I completely understand and feel most of them.

Here though, is an argument for using those DSLs: you simply can’t compose SQL strings easily, mixing and matching many of them in order to have a DRY and maintainable code base.

You can’t compose:

SELECT "people".* FROM "people"
WHERE "people"."active" IS TRUE

with:
SELECT "people".* FROM "people"
WHERE "people"."sex" = 'female'

But with a DSL like squeel you can easily define:

def self.active
  where{active == true}
end

def self.females
  where{sex == 'female'}
end

And then in your controller say:

def index
  @people = Person.active.females
end

And nothing would stop you from reusing those methods to build nice looking and easy to comprehend queries.
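The point generalizes beyond Ruby: raw SQL strings are opaque, while a DSL holds query fragments as data that can be merged. A language-neutral sketch in Python of the same idea (all names here are illustrative, not any real ORM's API):

```python
# Sketch: SQL strings don't compose, but query fragments held as data do.
class Query:
    def __init__(self, table, clauses=()):
        self.table, self.clauses = table, tuple(clauses)

    def where(self, clause):
        # Each call returns a NEW query, so scopes chain and combine freely.
        return Query(self.table, self.clauses + (clause,))

    def to_sql(self):
        sql = f'SELECT * FROM "{self.table}"'
        if self.clauses:
            sql += " WHERE " + " AND ".join(self.clauses)
        return sql

# Reusable "scopes", analogous to self.active / self.females above:
def active(q):  return q.where('"active" IS TRUE')
def females(q): return q.where("\"sex\" = 'female'")

print(females(active(Query("people"))).to_sql())
# SELECT * FROM "people" WHERE "active" IS TRUE AND "sex" = 'female'
```

Because each scope returns a new query value instead of a string, the two conditions combine without any string surgery — which is exactly what squeel's block-based scopes buy you.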

More to read

I encourage you to visit the squeel GitHub pages. The documentation is well written and very easy to grasp.

Controlling interactive programs with pexpect-u

A client I am working with requires that various machines have Ubuntu 10.04.4 installed along with certain software dependencies prior to installation of their own software.

In order to have our client avoid the tedious task of spinning up a new machine for each new client of theirs, I decided to attempt to automate the process (minus the OS installation) in Python.

A couple of the software installations require the user to interact with a console application. For example, this is the Matlab Runtime Environment installer:

and the Passenger installer:

Here I used the Python package pexpect-u which allows you to spawn child applications and control them automatically.

To spawn the Matlab installer I run:

import pexpect
child = pexpect.spawn("sudo ./MCRInstaller.bin -console")

Now we tell pexpect what to expect:

child.expect("Press 1 for Next, 3 to Cancel or 5 to Redisplay \[1\]")

And we send a command with:

child.sendline("1")
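Under the hood, the expect/send pattern is simply "read the child's output until it matches a pattern, then write a reply". A simplified sketch using only the standard library — pexpect does this robustly over a pseudo-terminal, while this toy version only works because our scripted child flushes its prompt:

```python
# Minimal expect/send loop with subprocess pipes, for illustration only.
import subprocess
import sys

child_code = (
    "print('Press 1 for Next, 3 to Cancel or 5 to Redisplay [1]', flush=True); "
    "choice = input(); "
    "print('You chose', choice, flush=True)"
)
child = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

prompt = child.stdout.readline()   # "expect": block until the prompt appears
assert "Press 1 for Next" in prompt
child.stdin.write("1\n")           # "send": answer the prompt
child.stdin.close()
response = child.stdout.readline().strip()
child.wait()
print(response)                    # → You chose 1
```

Real installers often write prompts without a trailing newline or talk directly to the terminal, which is why pexpect's pty-based `expect()`/`sendline()` is the right tool rather than plain pipes.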
The package can be found here and the source code includes many more examples, one of which might be of use for this very client:

This client has various installations of a Rails application, each sending requests to the same location on a server. Because this location changes periodically, each of the Rails application installations needs a configuration variable updated. I think the following should do the trick:

python user1:pass1@host1 user2:pass2@host2 ... userN:passN@hostN
grep -rl 'old_location_path_string' /path/to/config/file | xargs sed -i 's/old_location_path_string/new_location_path_string/'
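The grep/sed step could equally be done from Python on each host. A stdlib sketch of the same in-place replacement (the paths and strings are illustrative placeholders, as above):

```python
# Rewrite every occurrence of the old location string in a config file,
# in place -- a Python equivalent of `grep -rl ... | xargs sed -i ...`.
from pathlib import Path

def update_location(config_path, old, new):
    path = Path(config_path)
    text = path.read_text()
    if old in text:                              # the "grep" check
        path.write_text(text.replace(old, new))  # the "sed -i" rewrite
        return True
    return False

# Usage (illustrative):
# update_location("/path/to/config/file",
#                 "old_location_path_string", "new_location_path_string")
```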

Let's see it in action. I've set up two Ubuntu virtual machines and will be changing a string in a file that exists on both machines in the same location. First I login:

Let's look at the contents of the files:

Now I'll replace the string "Hello" with "Goodbye":

Because the user of each machine has permissions over the test.conf file, I was able to modify it without sudo. If you'll be needing to use sudo to make any changes, take a look at the :sync example here, however, note that if your sudo passwords differ on at least one of the remote machines this will not work.

Ecommerce Innovation 2013

Mark Johnson and I went to the Ecommerce Innovation 2013 conference in beautiful Hancock, NY. The event was hosted by Sam Batschelet of West Branch Resort. The conference was spread out over three days and was very well planned, with plenty of time between talks to mingle. All of the talks were insightful and informative, and I found the mixture of technology and marketing talks beneficial. I have already discussed some of the things I learned with my clients.

A brief overview of the talks

  • Jure Kodzoman of Informa had two different subjects.
    • His first talk was about Template::Flute, a Perl-based template system that is the default for Interchange 6. It uses HTML classes to determine where to insert the data returned from your Perl code. Overall it seems pretty straightforward to use.
    • His second talk was about the current state of the new Interchange 6 demo store.

  • Ana Kozole of Informa gave a really informative talk named "Remarketing with Banners". The gist is the ability to show specific banners to visitors on different websites. She discussed different remarketing techniques, including creating specific lists based on criteria like all visitors, people who reached the checkout page but didn't check out, or people who used a coupon code. You can also use remarketing lists for search ads.

  • Luka Klemenc of Informa gave two talks.
    • He discussed a CRM system they had developed in-house, along with some of its pros and cons.
    • Luka also gave us a short talk about ways to know whether or not your newsletter is effective.

  • Josh Lavin of Perusion talked about a new template for Interchange 5 called Strap, which is based on Bootstrap version 2.3.2. With this new template they have made a number of page and URL changes to make stock Interchange much more SEO friendly. It also makes a few underlying changes, like using your email address as the login username, and creates a multiple-page checkout.

  • Mike Heins of Perusion gave us a brief history of Interchange and compared it to modern day frameworks like Dancer. He also gave us a brief overview of PCI compliance and how Interchange holds up. He introduced us to the features of the Perusion Payment Server which is a remote credit card processing system that helps with PCI compliance.

  • Stefan Hornburg of LinuXia Systems discussed two different topics with us.
    • Stefan gave us an overview of where the Interchange 6 project currently is and where it's going, with some code samples showing how to do simple things like add an item to the cart, fetch the cart subtotal, and talk to the database.
    • Stefan walked us through an integration he did for OpenERP with Interchange 5 using RPC::XML.

  • Mark Johnson gave us a review of the modifications to Interchange 5 that allow web servers like nginx or Apache to cache entire pages. He discussed how we modified Interchange 5 for a customer to help with a DDoS attack. He laid out all of the new usertags and directives you will need to set to get pages to be cacheable, including some "gotchas" like not sending cookie information if you want a page to be cached. We hope that this feature will be included in the Interchange 5.8.1 release.

  • Sam Batschelet gave a talk about DevCamps and the reasons why it is so great. He discussed things like using Perlbrew and Carton in camps to help get around the fact that most Linux operating systems ship with a pretty old version of Perl. He also expanded on a few features that we hope to get released soon.

The last big item on the schedule was a 2 hour round table discussion about the database schema for Interchange 6. It was a very good discussion with many different opinions for adjustments. Most of us based our suggestions on past experience with clients. I do not think we are finished making adjustments to it but we are on the right path to a very flexible setup.

Overall I thought the conference was a great success. It was great to meet in person some people I had only seen on a mailing list before and pass around ideas for the future. I cannot wait to see what cool new things we will have to discuss next year!

PostgreSQL Debian apt repository GnuPG key update

The excellent PGDG (PostgreSQL Global Development Group) apt repositories provide current point releases of supported PostgreSQL versions for Debian and Ubuntu LTS Linux. If you'd like to use a newer version of PostgreSQL than ships with your Linux distribution, or need to use an older Postgres release, you should take a look at

A minor housekeeping matter arose just a few days ago: the GnuPG key used to sign the PostgreSQL packages expired on October 13. A run of apt-get update && apt-get upgrade then leads to errors such as those seen here:

Get:1 wheezy/updates Release.gpg [836 B]
Get:2 wheezy/updates Release [102 kB]                                    
Get:3 wheezy-pgdg Release.gpg [836 B]               
Get:4 wheezy Release.gpg [1,672 B]                          
Get:5 wheezy-pgdg Release [29.2 kB]
Get:6 wheezy-updates Release.gpg [836 B] 
Get:7 wheezy Release [168 kB]     
Err wheezy-pgdg Release                               
Get:8 wheezy/updates/main Sources [61.2 kB]          
Get:9 wheezy/updates/main amd64 Packages [113 kB]  
Get:10 wheezy/updates/main Translation-en [66.5 kB]          
Get:11 wheezy-updates Release [124 kB]                             
Get:12 wheezy/main Sources [5,959 kB]
Get:13 wheezy/main amd64 Packages [5,848 kB]
Get:14 wheezy/main Translation-en [3,851 kB]
Get:15 wheezy-updates/main Sources [1,995 B]
Hit wheezy-updates/main amd64 Packages/DiffIndex
Hit wheezy-updates/main Translation-en/DiffIndex
Fetched 16.3 MB in 5s (3,094 kB/s)               
Reading package lists... Done
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: wheezy-pgdg Release: The following signatures were invalid: KEYEXPIRED 1381654177

W: Failed to fetch  

W: Some index files failed to download. They have been ignored, or old ones used instead.

And, fair enough, the PostgreSQL apt wiki page mentions this in its news section:

2013-10-10: New pgdg-keyring version extending the key expiration date. The old expiration date is 2013-10-13.

Updating the key to quell the error may not be quite as simple as expected, though. If you follow the instructions on the page, you would do this:

wget --quiet -O - | sudo apt-key add -

Indeed that imports the new key. However, you may still see errors from apt. Why? It's possible to have apt trusted keys installed in more than one place:

# apt-key list
pub   4096R/ACCC4CF8 2011-10-13 [expires: 2016-02-24]
uid                  PostgreSQL Debian Repository

pub   4096R/ACCC4CF8 2011-10-13 [expired: 2013-10-13]
uid                  PostgreSQL Debian Repository

A script is referenced from the apt setup page, but it doesn't do exactly what the instructions show. Instead, it installs the GPG key in its own file under /etc/apt/trusted.gpg.d/, and unless we update or remove that file, apt will continue to see the expired key and complain about it.

In our example above, we have already imported the new key into the main /etc/apt/trusted.gpg keystore, so let's just remove the original key that was in its own file in /etc/apt/trusted.gpg.d/ like this:

rm -f /etc/apt/trusted.gpg.d/

Afterward, apt happily proceeds with its work.

First Dance: a Gentle Introduction to Dancer for Web Services

I've been dabbling in Dancer (Version One, not so much Version Two). Our first opportunity to create a production-worthy Dancer application just rolled out with encouraging results, so I thought I would outline some things we tried, and things we learned. Don't worry if you don't know a thing about Dancer; I'll educate you a little as we go along.

First, some background. This application was a re-host and rewrite of an existing application written in Javascript and plain old CGI Perl scripts, returning either HTML or fairly simple JSON objects. Even the database it connected to was ancient DBM files. Those files were created by extracting data from the main website's database, so that part was easy to replace – we just had to use the extraction queries (mostly) to connect directly.

We chose Dancer as the platform for this effort because of its purely Perl footprint, and its relatively easy deployment.

Since I didn't really want to rewrite the front-end Javascript code, I left that mostly as-is and concentrated on replacing the CGI scripts. The first step was to move all their code into a non-Dancer Perl module, which communicated with the outside world entirely in the form of hashrefs:

sub do_something_great {
    my $opt = shift;
    my $output = {};
    # do something great!
    return $output;
}

Next, as a stop-gap measure, I replaced the innards of all the CGI scripts to turn CGI arguments into a hashref and call my new processing stuff. Note that so far, no actual Dancer code has been involved. This was intentional; since my Dancer expertise was still a bit thin, I wanted to have as few app-caused problems as possible, when I started Dancing.

But now we should stop for a moment and learn just enough about Dancer to be less dangerous.

Dancer (details at the first link in this article) is a web application framework: in essence, it's a way to convert URLs into data. You feed in a URL, you get back a web page, a structure, maybe a side effect or three. In order to connect your application to a browser, you need a series of routes:

package MyApp::Routes;
use Dancer;

get '/index.html' => sub {
    return 'hello, world';
};

Dancer sets up some easy-to-grasp syntax that lets you associate a clump of Perl code (an anonymous sub, in the example above) with a URL and a method. Here, a GET of "/index.html" runs the code, and sends the output back to the browser.

post '/myapp/do_this.html' => sub {
    return q{It's done.};
};

Likewise, we can specify more complex URLs (such as "/myapp/do_this.html?foo=bar") with POST methods, and access the CGI parameters via the params() keyword, which just returns a hashref we can address as usual.

Dancer provides quite a bit of support infrastructure: config files, database connections, plug-ins for templating HTML or other output, etc. Since this article is a Gentle Introduction, I refer you to the link cited above if you want to delve deeper.

Now it was pretty simple to make gradual progress: I'd eliminate one CGI script at a time, replacing it with a Dancer route (stored separately from my main processing module, so that the processing could continue to be ignorant of Dancer). Most of the routes were very simple, since the paths they had to recognize were simple and straightforward, with simple CGI parameter lists. E.g.,

get '/notes.html' => sub {
    my $context = MyApp::get_notes({
        dbh    => database(),
        order  => params->{order},
        raw    => 1,
    });
    template 'notes', $context, { layout => 'plain' };
};

This example is one of the routes that produces HTML. The "template" built-in is just a layer wrapped around your preferred templating system (such as Template::Toolkit); it looks for a template file called "notes", using a set of values specified by "$context", and wraps the result in a "layout" file called "plain".

(At the risk of grossly oversimplifying: your "template" can be thought of as all of your output HTML document within the <body> tag, while the "layout" is everything else, with a spot in the middle for your template's output.)

By contrast, here is one simple route designed to return JSON to the Javascript side:

get '/orders.html' => sub {
    my $orders = MyApp::get_orders({
        dbh    => database(),
        date   => params->{date},
    });
    return to_json($orders || [], { pretty => params->{pretty} // 0 });
};

We had to make some adjustments to our Apache configuration to make this all work.

RewriteRule    ^/D/myapp/(.*) http://$SOME_ADDR:5001/$1 [P]

As you can see here, we decided to set up the Dancer app as a service on a particular port, and we settled on a particular prefix for our webservice requests to distinguish them from other traffic. All the requests in the Javascript were adjusted to this new style.

Our main Dancer application has the usual one-liner construction*:

dance;
*Okay, totally not true. For this application, there was a ton of file-parsing and environment-loading to do first, but it was all just a big work-around for some very specific things needed in the app, and not really anything to do with Dancer per se.

Now, we added one more layer to this: Starman. This provided us with a way to simply wrap our Dancer application with the necessary extra bits to turn it into a "service". That way it starts and stops with a command line interface, logs to a particular path, doesn't just quit if a user session is ended, etc.

I don't have enough background to describe it more than that; hopefully, we'll have a follow-on post here about Starman from one of my colleagues with more "chops", soon. One thing we learned at the cost of some sanity was: Don't put an "exit" statement in your Dancer script if you plan to use Starman. We never learned why this messed it up, but I assume it's got something to do with Starman absorbing your code into itself and turning it into a Perl subroutine.

Newest European Liquid Galaxy installation: Google Hamburg shop-in-shop

In September 2013, Google opened a store-in-store within the large electronics retailer Saturn in Hamburg. Dominating the visuals of the store is a large Liquid Galaxy on the back wall.

We partnered with system integrator Pixel Projects this time, and instead of our usual 7-screen display, worked the Liquid Galaxy into a unique 3x3 video wall array. The Liquid Galaxy shows Google Earth and will be showing Google Street View starting later this month, further highlighting the rich set of services offered by Google. It is also integrated with traditional video wall software configured by Pixel Projects, such that it can be switched to display Google services such as YouTube, Google Drive, Hangouts, and more. As always, the Liquid Galaxy is a huge customer draw. Customers who come through the store can explore any point on the globe while they browse the Google products.


End Point worked under the direction of the retail experience agency, Make Retail, to design a fully integrated experience. Once the unique configuration of the Liquid Galaxy was set, we sent along one of our top engineers, Matt Vollrath, to oversee the final hardware and software integration and installations, together with our local deployment partners Pixel.

As a new step for Google in Europe, the Liquid Galaxy creates a fantastic experience for consumers and enhances the impact of the retail store.