Welcome to End Point’s blog

Ongoing observations by End Point people

Postgres connection service file

Postgres has a wonderfully helpful (but often overlooked) feature called the connection service file (its documentation is quite sparse). In a nutshell, it defines connection aliases you can use from any client. These connections are given simple names, which then map behind the scenes to specific connection parameters, such as host name, Postgres port, username, database name, and many others. This can be an extraordinarily useful feature to have.

The connection service file is named pg_service.conf and is set up in a known location. The entries inside are in the common "INI file" format: a named section, followed by its related entries below it, one per line. To access a named section, just use the service=name string in your application.

## Find the file to access by doing:
$ echo `pg_config --sysconfdir`/pg_service.conf
## Edit the file and add sections that look like this:
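As an illustration (the host, user, and database names here are hypothetical), a section defining a service named foobar might look like:

```ini
# A hypothetical service named "foobar"
[foobar]
host=db1.example.com
port=5432
user=greg
dbname=sales
```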

## Now you can access this database via psql:
$ psql service=foobar

## Or in your Perl code:
my $dbh = DBI->connect('dbi:Pg:service=foobar');

## Other libpq based clients are the same. JDBC, you are out of luck!

So what makes this feature awesome? First, it can save you from extra typing. No more trying to remember long hostnames (or copy and paste them). Second, it is better than a local shell alias, as the service file can be made globally available to all users. It also works similar to DNS, in that it insulates you from the details of your connections. Your hostname has changed because of a failover? No problem, just edit the one file, and no clients need to change a thing.

As seen above, the format of the file is simple: a named section, followed by connection parameters in a name=value format. Among the connection parameters one may use, the most common and useful are host, port, user, and dbname. Although you can set a password, I recommend against it, as that belongs in the more secure, per-user .pgpass file.
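For reference, each line of the .pgpass file uses the colon-separated format hostname:port:database:username:password, so a single entry looks like this (all values hypothetical):

```
db1.example.com:5432:sales:greg:sekrit
```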

The complete list of what may be set can be found in the middle of the database connection documentation page. Most of them will seldom, if ever, be used in a connection service file.

The connection service file is not just limited to basic connections. You can have sections that only differ by user, for example, or in their SSL requirements, making it easy to switch things around by a simple change in the service name. It's also handy for pgbouncer connections, which typically run on non-standard ports. Be creative in your service names, and keep them distinct from each other to avoid fat fingering the wrong one. Comments are allowed and highly encouraged. Here is a slightly edited service file that was recently created while helping a client use Bucardo to migrate a Postgres database from Heroku to RDS:

## Bucardo source: Heroku

## Bucardo target: RDS

## Test database on RDS

## Hot standby used for schema population
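The entries themselves were specific to the client, but as a hypothetical sketch (section name, host, and user all invented), each of those comments sits above a block shaped like this:

```ini
[bucardo_source]
host=ec2-1-2-3-4.compute-1.amazonaws.com
port=5432
user=bucardo
dbname=postgres
connect_timeout=10
```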

You may notice above that "connect_timeout" is repeated in each section. Currently, there is no way to set a parameter that applies to all sections, but it's a very minor problem. I also usually set the environment variable PGCONNECT_TIMEOUT to 10 in my .bashrc, but putting it in the pg_service.conf file ensures it is always set regardless of which user I am.

One of the trickier parts of using a service file can be figuring out where the file should be located! Postgres will check for a per-user service file (named ~/.pg_service.conf) and then for a global file. I prefer to always use the global file, as it allows you to switch users with ease and maintain the same aliases. By default, the location of the global Postgres service file is /usr/local/etc/postgresql/pg_service.conf, but in most cases this is not where you will find it, as many distributions specify a different location. Although you can override the location of the file with the PGSERVICEFILE environment variable, and the directory holding the pg_service.conf file with the PGSYSCONFDIR environment variable, I do not like relying on those. One less thing to worry about by simply using the global file.

The location of the global pg_service.conf file can be found by using the pg_config program and looking for the SYSCONFDIR entry. Annoyingly, pg_config is not installed by default on many systems, as it is considered part of the "development" packages (which may be named postgresql-devel, libpq-devel, or libpq-dev). While using pg_config is the best solution, there are times it cannot be installed (e.g. when working on an important production box, or when you simply do not have root). While you can probably discover the right location through some simple investigation and trial-and-error, where is the fun in that? Here are two other methods to determine the location using nothing but psql and some standard Unix tools.

When you invoke psql with a request for a service file entry, it has to look for the service files. We can use this information to quickly find the expected location of the global pg_service.conf file. If you have the strace program installed, just run psql through strace, grep for "service.conf", and you should see two stat() calls pop up: one for the per-user service file, and one for the global service file we are looking for:

$ strace psql service=foobar 2>&1 | grep service.conf
stat("/home/greg/.pg_service.conf", 0x3526366F6637) = -1 ENOENT (No such file or directory)
stat("/var/opt/etc/postgres/pg_service.conf", 0x676F746F3131) = -1 ENOENT (No such file or directory)

What if strace is not installed? Well, perhaps gdb (the GNU debugger) can help us out:

$ gdb -q --args psql service=foobar
Reading symbols from psql...(no debugging symbols found)...done.
(gdb) start
Temporary breakpoint 1 at 0x435356
Starting program: /usr/local/bin/psql service=foobar
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".

Temporary breakpoint 1, 0x4452474E4B4C5253 in main ()
(gdb) catch syscall stat
Catchpoint 2 (syscall 'stat' [4])
(gdb) c

Catchpoint 2 (call to syscall stat), 0x216c6f65736a6f72 in __GI___xstat (vers=, name=0x616d7061756c "/usr/local/bin/psql", buf=0x617274687572)
    at ../sysdeps/unix/sysv/linux/wordsize-64/xstat.c:35
35      return INLINE_SYSCALL (stat, 2, name, buf);
(gdb) c 4
Will ignore next 3 crossings of breakpoint 2.  Continuing.

Catchpoint 2 (call to syscall stat), 0x37302B4C49454245 in __GI___xstat (vers=, name=0x53544F442B4C "/var/opt/etc/postgres/pg_service.conf", buf=0x494543485445)
    at ../sysdeps/unix/sysv/linux/wordsize-64/xstat.c:35
35      return INLINE_SYSCALL (stat, 2, name, buf);
(gdb) quit

The use of a connection service file can be a nice addition to your tool chest, especially if you find yourself connecting from many different accounts, or if you just want to abstract away all those long, boring host names!

Use Java along with Perl

While working with one of our clients, I was tasked with integrating a Java project with a Perl project. The Perl project is a web application which has a specific URL for the Java application to use. To ensure that the URL is called only from the Java application, I wanted to send a special hash value calculated using the request parameters, a timestamp, and a secret value.

The Perl code calculating the hash value looks like this:

use strict;
use warnings;

use LWP::UserAgent;
use Digest::HMAC_SHA1;
use Data::Dumper;

my $uri = 'param1/param2/params3';

my $ua = LWP::UserAgent->new;
my $hmac = Digest::HMAC_SHA1->new('secret_something');

my $ts = time;

$hmac->add($_) for (split (m{/}, $uri));
$hmac->add($ts);   # the timestamp is hashed as a string, like the other values

my $calculated_hash = $hmac->hexdigest;

My first try for calculating the same hash in the Java code looked something like this (without class/package overhead):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;
import java.io.UnsupportedEncodingException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

public String calculateHash(String[] values) throws NoSuchAlgorithmException, UnsupportedEncodingException, InvalidKeyException {

    java.util.Date date = new java.util.Date();
    Integer timestamp = (int) (date.getTime() / 1000);

    Mac mac = Mac.getInstance("HmacSHA1");
    SecretKeySpec signingKey = new SecretKeySpec("secret_something".getBytes(), "HmacSHA1");
    mac.init(signingKey);
    for (String value : values) {
        mac.update(value.getBytes());
    }
    // The timestamp is fed in as a number here, not as a string
    mac.update(timestamp.byteValue());

    byte[] rawHmac = mac.doFinal();
    byte[] hexBytes = new Hex().encode(rawHmac);
    return new String(hexBytes, "UTF-8");
}
The code looks good and successfully calculated a hash. However, using the same parameters for the Perl and Java code, they were returning different results. After some debugging, I found that the only parameter causing problems was the timestamp. My first guess was that the problem was caused by the use of Integer as the timestamp type instead of some other numeric type. I tried a few things to get around that, but none of them worked.

Another idea was to check why it works for the String params, but not for Integer. I found that Perl treats the timestamp as a string and passes a string to the hash calculating method, so I tried emulating this by converting the timestamp into a String before using the getBytes() method:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;
import java.io.UnsupportedEncodingException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

public String calculateHash(String[] values) throws NoSuchAlgorithmException, UnsupportedEncodingException, InvalidKeyException {

    java.util.Date date = new java.util.Date();
    Integer timestamp = (int) (date.getTime() / 1000);

    Mac mac = Mac.getInstance("HmacSHA1");
    SecretKeySpec signingKey = new SecretKeySpec("secret_something".getBytes(), "HmacSHA1");
    mac.init(signingKey);
    for (String value : values) {
        mac.update(value.getBytes());
    }
    // Convert the timestamp to a String first, which is how Perl sees it
    mac.update(timestamp.toString().getBytes());

    byte[] rawHmac = mac.doFinal();
    byte[] hexBytes = new Hex().encode(rawHmac);
    return new String(hexBytes, "UTF-8");
}

This worked perfectly, and there were no other problems with calculating the hash in Perl and Java.
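To see why the string conversion matters, here is a small standalone sketch in Python (the key and parameter values are invented): hashing the timestamp's decimal-string bytes produces a different HMAC than hashing its raw integer bytes.

```python
import hashlib
import hmac
import struct

def hash_with_string_ts(values, ts, key=b"secret_something"):
    """HMAC-SHA1 over the parameters plus the timestamp as a decimal string,
    which is how Perl's Digest::HMAC_SHA1 sees a numeric $ts."""
    mac = hmac.new(key, digestmod=hashlib.sha1)
    for v in values:
        mac.update(v.encode())
    mac.update(str(ts).encode())
    return mac.hexdigest()

def hash_with_int_ts(values, ts, key=b"secret_something"):
    """Same HMAC, but feeding in the timestamp's raw 4-byte representation,
    analogous to hashing the Integer's bytes on the Java side."""
    mac = hmac.new(key, digestmod=hashlib.sha1)
    for v in values:
        mac.update(v.encode())
    mac.update(struct.pack("<i", ts))
    return mac.hexdigest()

params = ["param1", "param2", "params3"]
ts = 1480000000
print(hash_with_string_ts(params, ts) == hash_with_int_ts(params, ts))  # False
```

The two digests differ even though the logical inputs are "the same", which is exactly the mismatch the Java fix above resolves.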

Building Containers with Habitat

Many Containers, Many Build Systems

When working with modern container systems like Docker, Kubernetes, and Mesosphere, each provides its own method for building your application into a container image. However, each build process is specific to that container system, and running similar applications across tiers of container environments would require maintaining each container's build environment. To address this problem across multiple container environments, Chef Software created a tool that unifies these build systems and creates container-agnostic builds which can be exported into any of the containers. This tool, called Habitat, also provides some pre-built images to get applications started quickly.

I recently attended a Habitat Hack event locally in Portland (Oregon) which helped me get more familiar with the system and its capabilities. We worked together in teams to take a deeper dive into various aspects of how Habitat works; you can read about our adventures over on the Chef blog.

To examine how the various parts of the build environment work, I picked an example Node.js application from the Habitat Documentation to build and customize.

Node.js Application into a Docker Container

For the most basic Habitat build, you must define a plan file (plan.sh) which contains all the build process logic as well as all the configuration values that define the application. Within my Node.js example, this file contains this content:

pkg_maintainer="Kirk Harr <>"

do_build() {
  npm install
}

do_install() {
  cp package.json ${pkg_prefix}
  cp server.js ${pkg_prefix}

  mkdir -p ${pkg_prefix}/node_modules/
  cp -vr node_modules/* ${pkg_prefix}/node_modules/
}

Within this plan are defined all the application details, like the name of the author, the version of the application being packaged, and the package name. Each package can also declare a license for the code in use, any code dependencies, like the Node.js application server (core/node), and the repository URL for locating these files. There are also two callback functions: do_build(), which builds the package dependencies, and do_install(), which performs final setup during the eventual package installation.

Additionally, to define this application, we must provide the logic for how to start the Node application server, along with configuration for which port to listen on and the message to be displayed once it has started. To do so we must create a stub Node.js config.json which provides the port and message values:

    "message": "Hello, World",
    "port": "8080"

We also need two hooks, which will be executed at package install time and at runtime, respectively. These are named init and run in our case: init sets up the symbolic links to the various Node.js components from the core/node package which will be included in the build, and run provides the entry point for the application's flow, effectively starting the npm application server. Just like with a Dockerfile, any additional logic needed during the process would be included in these two hooks, depending on whether the logic is specific to install time or run time.

Injected Configuration Values

In this example, both the message to be displayed to the user and the port that the Node.js application server will listen on are hard-coded into our build, so all the images that result from it would be identical. In order to allow some customizing of the resulting image, you can replace the hard-coded values in the Node.js config.json with variables, which are substituted during the build process:

    "message": "{{cfg.message}}",
    "port": "{{cfg.port}}"

To complete the replacement, we provide a "Tom's Obvious, Minimal Language" (.toml) file that has a key-value pair for each configuration variable we want to set. This .toml file is interpreted during each build to populate these values, creating an opportunity to customize our builds by injecting specific values into the variables defined in the application configuration. Here is an example of the syntax from this example:

# Message of the Day
message = "Hello, World of Habitat"

# The port number that is listening for requests.
port = 8080


Habitat seeks to fill in the gaps between the various container formats for Docker, Kubernetes and others, by allowing common build infrastructure and dependency libraries to be unified in distribution. By utilizing the same build infrastructure, it becomes more feasible to have a hybrid environment with various container formats in use, without creating duplicate build infrastructure which basically performs the same task slightly differently right at the end to package the application into the proper container format. Habitat helps to decouple the actual build process and all that plumbing, from the process of exporting the build image into the proper format for whatever container is in use. In that way as new container formats are developed, all that is required to accommodate them is expanding the export function for that new format, without any changes to the overall build process or customization of your code.

Postgres schema differences and views

Comparing the schemas of two or more different Postgres databases is a common task, but can be tricky when those databases are running different versions of Postgres. The quick and canonical way to compare schemas is by using the exact same pg_dump program to query each database via the --schema-only option. This works great, but there are some gotchas, especially when dumping database views.


First some background as to how this issue was discovered. We have a client that is in the process of upgrading from Postgres 9.2 to Postgres 9.6 (the latest version as of this writing). Using the pg_upgrade program was not an option, because not only are data checksums going to be enabled, but the encoding is being moved to UTF-8. A number of factors, especially the UTF-8 change, meant that the typical upgrade process of pg_dump old_database | psql new_database was not possible. Thus, we have a very custom program that carefully migrates pieces over, performing some transformations along the way.


As a final sanity check, we wanted to make sure the final schema for the upgraded 9.6 database was as identical as possible to the current production 9.2 database schema. When comparing the pg_dump outputs, we quickly encountered a problem with the way that views were represented. Version 9.2 uses a very bare-bones, single-line output, while 9.6 uses a multi-line pretty printed version. Needless to say, this meant that none of the views matched when trying to diff the pg_dump outputs.

The problem stems from the system function pg_get_viewdef(), which is used by pg_dump to give a human-readable and Postgres-parseable version of the view. To demonstrate the problem and the solution, let's create a simple view on a 9.2 and a 9.6 database, then compare the differences via pg_dump:

$ psql -p 5920 vtest -c \
'create view gregtest as select count(*) from pg_class where reltuples = 0'
$ psql -p 5960 vtest -c \
'create view gregtest as select count(*) from pg_class where reltuples = 0'
$ diff -u <(pg_dump vtest -x -p 5920 --schema-only) <(pg_dump vtest -x -p 5960 --schema-only)

--- /dev/fd/70          2016-09-29 12:34:56.019700912 -0400
+++ /dev/fd/72          2016-09-29 12:34:56.019720902 -0400
@@ -2,7 +2,7 @@
 -- PostgreSQL database dump
--- Dumped from database version 9.2.18
+-- Dumped from database version 9.6.0
 -- Dumped by pg_dump version 9.6.0
 SET statement_timeout = 0;
@@ -35,22 +35,14 @@
 CREATE VIEW gregtest AS
-SELECT count(*) AS count FROM pg_class WHERE (pg_class.reltuples = (0)::double precision);
+ SELECT count(*) AS count
+   FROM pg_class
+  WHERE (pg_class.reltuples = (0)::double precision);

The only difference other than the server version is the view, which does not match at all as far as the diff utility is concerned. (For purposes of this article, the minor ways in which schema grants are done have been removed from the output).

As mentioned before, the culprit is the pg_get_viewdef() function. Its job is to present the inner guts of a view in a sane, readable fashion. There are basically two adjustments it can make to this output: adding more parens, and adding indentation via whitespace. In recent versions, and despite what the docs allude to, the indentation (aka pretty printing) can NOT be disabled, and thus there is no simple way to get a 9.6 server to output a viewdef in a single line the way 9.2 does by default. To further muddy the waters, there are five versions of the pg_get_viewdef function, each taking different arguments:

  1. by view name
  2. by view name and a boolean argument
  3. by OID
  4. by OID and a boolean argument
  5. by OID with integer argument

In Postgres 9.2, the pg_get_viewdef(text,boolean) version will toggle indentation on and off, and we can see the default is no indentation:

$ psql vtest -p 5920 -Atc "select pg_get_viewdef('gregtest')"
 SELECT count(*) AS count FROM pg_class WHERE (pg_class.reltuples = (0)::double precision);

$ psql vtest -p 5920 -Atc "select pg_get_viewdef('gregtest',false)"
 SELECT count(*) AS count FROM pg_class WHERE (pg_class.reltuples = (0)::double precision);

$ psql vtest -p 5920 -Atc "select pg_get_viewdef('gregtest',true)"
 SELECT count(*) AS count                        +
   FROM pg_class                                 +
  WHERE pg_class.reltuples = 0::double precision;

In Postgres 9.6 however, you are always stuck with the pretty indentation, regardless of which of the five function variations you choose and what arguments you give them! Here are the same function calls as above in version 9.6:

$ psql vtest -p 5960 -Atc "select pg_get_viewdef('gregtest')"
  SELECT count(*) AS count
   FROM pg_class
  WHERE (pg_class.reltuples = (0)::double precision);

$ psql vtest -p 5960 -Atc "select pg_get_viewdef('gregtest',false)"
  SELECT count(*) AS count
   FROM pg_class
  WHERE (pg_class.reltuples = (0)::double precision);

$ psql vtest -p 5960 -Atc "select pg_get_viewdef('gregtest',true)"
  SELECT count(*) AS count
   FROM pg_class
  WHERE pg_class.reltuples = 0::double precision;


When I first ran into this problem, the three solutions that popped into my mind were:

  1. Write a script to transform and normalize the schema output
  2. Modify the Postgres source code such that pg_get_viewdef changes its behavior
  3. Have pg_dump call pg_get_viewdef in a way that creates identical output

My original instinct was that a quick Perl script would be the overall easiest route. And while I eventually did get one working, it was a real pain to "un-pretty" the output, especially the whitespace and use of parens. A brute-force approach of simply removing all parens, brackets, and extra whitespace from the rule and view definitions almost did the trick, but the resulting output was quite ugly and hard to read, and there were still some lingering whitespace problems.
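As a rough sketch of that brute-force approach (not the actual script used), a normalizer might strip all parentheses and collapse runs of whitespace so that the 9.2 single-line form and the 9.6 pretty-printed form compare equal:

```python
import re

def normalize_viewdef(viewdef):
    """Crudely canonicalize pg_get_viewdef() output: drop all parentheses
    and collapse whitespace, so differently pretty-printed versions of the
    same view definition become identical strings."""
    text = viewdef.replace("(", " ").replace(")", " ")
    return re.sub(r"\s+", " ", text).strip()

v92 = "SELECT count(*) AS count FROM pg_class WHERE (pg_class.reltuples = (0)::double precision);"
v96 = """ SELECT count(*) AS count
   FROM pg_class
  WHERE (pg_class.reltuples = (0)::double precision);"""
print(normalize_viewdef(v92) == normalize_viewdef(v96))  # True
```

This works for simple views like the example above, but as noted, the readability of the result suffers badly.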

Approach two, hacking the Postgres source code, is actually fairly easy. At some point, the Postgres source code was changed such that all indenting is forced "on". A single character change to the file src/backend/utils/adt/ruleutils.c did the trick:

- #define PRETTYFLAG_INDENT    2
+ #define PRETTYFLAG_INDENT    0

Although this solution will clear up the indentation and whitespace, the parentheses are still different, and not as easily solved. Overall, not a great solution.

The third solution was to modify the pg_dump source code. In particular, it uses the pg_get_viewdef(oid) form of the function. By switching that to the pg_get_viewdef(oid,integer) form of the function, and giving it an argument of 0, both 9.2 and 9.6 output the same thing:

$ psql vtest -p 5920 -tc "select pg_get_viewdef('gregtest'::regclass, 0)"
  SELECT count(*) AS count                        +
    FROM pg_class                                 +
   WHERE pg_class.reltuples = 0::double precision;

$ psql vtest -p 5960 -tc "select pg_get_viewdef('gregtest'::regclass, 0)"
  SELECT count(*) AS count                        +
    FROM pg_class                                 +
   WHERE pg_class.reltuples = 0::double precision;

This modified version will produce the same schema in our test database:

$ diff -u <(pg_dump vtest -x -p 5920 --schema-only) <(pg_dump vtest -x -p 5960 --schema-only)

--- /dev/fd/80               2016-09-29 12:34:56.019801980 -0400
+++ /dev/fd/88               2016-09-29 12:34:56.019881988 -0400
@@ -2,7 +2,7 @@
 -- PostgreSQL database dump
--- Dumped from database version 9.2.18
+-- Dumped from database version 9.6.0
 -- Dumped by pg_dump version 9.6.0
 SET statement_timeout = 0;

The best solution, as pointed out by my colleague David Christensen, is to simply make Postgres do all the heavy lifting with some import/export magic. At the end of the day, the output of pg_dump is not only human-readable, but designed to be parseable by Postgres. Thus, we can feed the old 9.2 schema into a 9.6 temporary database, then turn around and dump it. That way, we have the same pg_get_viewdef() calls for both of the schemas. Here it is on our example databases:

$ createdb -p 5960 vtest92

$ pg_dump vtest -p 5920 | psql -q -p 5960 vtest92

$ diff -s -u <(pg_dump vtest92 -x -p 5960 --schema-only) <(pg_dump vtest -x -p 5960 --schema-only)
Files /dev/fd/63 and /dev/fd/62 are identical


Trying to compare schemas across versions can be difficult, so it's best not to try. Dumping and recreating schemas is a cheap operation, so simply dump them both into the same backend, then do the comparison.

Liquid Galaxy developer job opening

This position has been filled! Thanks to everyone who expressed interest.

We are looking for a full-time, salaried engineer to help us further develop our software, infrastructure, and hardware integration for the impressive Liquid Galaxy. The Liquid Galaxy was created by Google to provide an immersive experience of Google Earth and other applications.

This position is located at either our satellite office in Bluff City, Tennessee, or in Eugene, Oregon.

What you will be doing:

  • Develop new software involving panoramic video, Google Earth, a custom CMS, and ROS (Robot Operating System)
  • Improve the system with automation, monitoring, and customizing configurations to customers’ needs
  • Provide remote and occasional on-site troubleshooting and support at customer locations
  • Build tours and supporting tools for emerging markets
  • Integrate and test new hardware to work with the system

What’s in it for you?

  • Flexible full-time work hours
  • Benefits including health insurance and 401(k) retirement savings plan
  • Annual bonus opportunity

What you will need:

  • Sharp troubleshooting ability
  • Experience with “devops” automation tools such as Chef
  • Strong programming experience: Python, JavaScript, C/C++, Ruby, etc.
  • Linux system administration skills
  • A customer-centered focus
  • Strong verbal and written communication skills
  • Experience directing your own work, and working remotely as part of a team
  • Enthusiasm for learning new technologies

Bonus points for experience:

  • Contributing to open source projects
  • Working with geospatial systems, Cesium, Google Maps API, SketchUp, Google Building Maker, Blender, 3D modeling
  • Packaging software (e.g. dpkg/apt or RPM/Yum), building custom OS images, PXE booting
  • Working with Linux device drivers and networking
  • Doing image and video capture and processing, 360° video, or virtual reality
  • Using PostgreSQL or other databases
  • Writing server-side web applications with Django or Flask
  • Working with SAGE2
  • Working with web client technology such as HTML, CSS, DOM, browser extensions

About us

End Point is a technology consulting company founded in 1995 and based in New York City, with 50 full-time employees working from our offices in New York City, the tri-cities area in eastern Tennessee, and from home offices. We serve over 200 clients ranging from small family businesses to large corporations, using a variety of open source technologies. Our team works together using collaboration tools including SSH, tmux/Screen, IRC, Google Hangouts, Skype, wiki, Trello, and GitHub.

How to apply

Please email us an introduction to apply. Include a resume and any URLs that would help us get to know you. We look forward to hearing from you!

CSSConf US 2016 — Day One

Boston Common

I attended CSSConf US 2016 last week in Boston, MA. Having been to the 2013 and 2014 (part one, part two) events in Florida then missing it in NYC last year, I was looking forward to seeing what it would be like in a new location. It was fun to see some familiar faces including Eiríkur (congrats on organizing a very successful JSConf Iceland conference earlier this year!) and to meet many new folks in the CSS community. Claudina Sarahe was our MC for two days of talks and she did a great job. After each talk she took a few minutes to interview the speaker — asking them questions from the audience and some of her own. The chats were candid and interesting on their own and gave the next speaker time to set up.

Below are my notes and some observations on the talks from Day One. Be sure to check out the videos, slides and transcripts at the CSSConf site. Many of the videos are there now and they are adding more as they become available.

Sarah Drasner — Creativity in Programming for Fun and Profit

Sarah started off the day with an inspiring keynote about creativity. She explained how writing code is a creative endeavor by showing us several drastically different solutions to the classic FizzBuzz problem in computer science. She shared some stats and survey results which documented excellent, tangible results companies saw when they invested in R&D and fostering creativity. Sarah demoed some really cool creative projects she had built — many of them just for fun. She encouraged us to work on creative projects on the side to enter into flow while designing, coding and building things. Time devoted to purely creative and fun projects can makes less interesting day-to-day tasks more bearable. It was amazing to see some of the animation work she's done and her passion for art and programming was palpable throughout the talk.

Brian Jordan — No Bugs in Sight: Continuous Visual Testing at Code.org

Brian works for Code.org, which is the organization behind the Hour of Code and other initiatives promoting computer science education in schools. His work on apps that teach kids to code is used on a vast variety of hardware (tablets, old underpowered school computers, etc.). Testing became a challenge for them early on, so Brian began to incorporate testing with Selenium into their development workflow. They use Cucumber to write their tests, Sauce Labs to run them, and CircleCI to integrate it all with their source code control system. When code is committed, the tests are run, which ensures broken code does not get merged in. They also use Applitools Eyes for visual diffing, which looked very interesting. They keep a wiki of the larger and more interesting issues that it catches to help everyone remember them (and sometimes recall why they still need to keep a particular test). Brian showed us several screenshots of visual problems caught by their system, including one that was a single pixel showing up in a strange (unwanted) location on the screen.

Jessica Lord — Nativize is the new Normalize

Jessica works at GitHub on the Electron project. Her talk was about Electron and how you can use it to build cross-platform (Windows/Linux/Mac) desktop apps using web technologies. Electron is used to power Atom, Slack, and many other interesting apps. It uses V8 as its JavaScript runtime, which allows you to use Node.js in your apps, while Chromium is used for all the DOM and rendering bits. She showed off the Electron example app and talked about some tips and tricks for building apps with Electron. Jessica stressed that you have to think about apps differently than web sites. You want your app to feel like an app, not a website. She pointed out several good resources for learning more about it. Electron can also be used to build menubar apps for OS X.

Jen Kramer — CSS4 Grid: True Layout Finally Arrives

Jen is a professor at Harvard teaching web development. She talked about CSS grid and how it will make laying out sites much better than the hacky techniques we have been using up to this point (tables, floats, flexbox). She explained how flexbox was designed to work in only a single dimension (grids need to work in two, for both rows and columns) and showed some example layouts built with CSS grid. Jen's examples showed how many CSS design challenges (e.g. equal-height columns) are easily solved with grid. The only thing holding it back is browser support. Today it's available in Chrome and Firefox behind a configuration flag. Jen was really funny and engaging and drew everyone in with her presentation skills and passion for web development in general (grid too!).

Pete Hunt — Component-Based Style Reuse

Pete talked about React and how it can be used to create components (e.g. a piece of UI that can be re-used in different contexts). He shared how OOCSS was a foundational concept and inspiration to him, and walked through the process of building a React component that implemented an OOCSS "media block" component. Having previously worked at Facebook, Pete discussed the issues that arise when developing many UI components at scale — especially CSS class conflicts and the resulting style pollution. Pete used jsxstyle (developed by his current team at Smyte) in his React component to keep the styles scoped to the component and avoid using descendent selectors. It was a very interesting talk and I recommend watching the video of Pete's talk to get the full effect.

Keith J. Grant — Stop Thinking in Pixels

Keith works for the company behind the NYSE website. He thoroughly explained how to effectively use em and rem units for font-size and other properties, and how he recommends avoiding pixels for font sizes altogether. Keith showed many examples using rem to set the font size for a container element and then using em's for all of its children. This had the benefit of flexibility and made it possible to adjust the sizes of all of the child elements by simply changing the size on the parent/container. He covered several formulas which can be used to determine the em values you will need for approximate pixel sizes, and shared a web app that can help with these calculations. Once you understand the formulas, you can add mixins in Sass to perform the calculations for you. Near the end of his talk Keith showed us an example using rem's for the parent, em's for the child elements, and then combining calc() with viewport units (vh, vw) to smoothly scale the size of a component as the viewport changes. This was really neat to see and avoided the clunky switch between sizes that happens when the viewport changes between media queries. With the technique Keith demonstrated, the scaling was completely fluid — very impressive. Keith wrapped up by explaining that these techniques require a little work up front, and often educating other developers on the team, but the benefits are immediate.
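The core formula here is the classic target ÷ context: divide the pixel size you want by the parent's computed pixel size to get the em value. A quick sketch (the specific sizes are just illustrative):

```python
def px_to_em(target_px, context_px=16):
    """target / context: the em value that renders as target_px
    inside a parent whose computed font-size is context_px
    (16px is the usual browser default)."""
    return target_px / context_px

# A 24px heading inside a 16px parent:
print(px_to_em(24))      # 1.5
# The same heading when the parent is bumped to 20px:
print(px_to_em(24, 20))  # 1.2
```

This is the same arithmetic a Sass mixin would perform for you.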

Will Boyd — Silky Smooth Animation with CSS

Will talked about the deep dive he took into how browsers handle rendering and animation. He demoed how we can look into the process for our own performance debugging with tools like Chrome DevTools. To illustrate some of the performance wins Will had discovered in his own projects, he loaded up an animation he'd built for the talk and compared the performance before and after he used CSS properties that could be handled by the GPU. The performance improvement using transform, filter and opacity (all hardware-accelerated and rendered by the GPU) was drastic and impressive.
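To give a flavor of the kind of change Will compared (a hypothetical example, not his actual demo code): animating transform and opacity lets the browser hand the work to the compositor and GPU, while animating a layout property like left forces layout and paint on every frame.

```css
/* Likely to jank: `left` triggers layout + paint on every frame */
.box-slow {
  position: relative;
  animation: slide-left 1s ease-in-out infinite alternate;
}
@keyframes slide-left {
  to { left: 300px; }
}

/* Smooth: `transform` and `opacity` can be composited on the GPU */
.box-fast {
  animation: slide-transform 1s ease-in-out infinite alternate;
}
@keyframes slide-transform {
  to { transform: translateX(300px); opacity: 0.5; }
}
```

Chrome DevTools' Performance panel makes the difference visible: the first version shows layout and paint work on every frame, the second mostly compositing.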

Lea Verou — CSS Variables var(--subtitle);

Lea's talk was about CSS variables (aka custom properties). I admit to being surprised when I looked it up recently and found that they were already quite widely supported. The syntax looks a little odd at first, but it made sense when Lea explained it was intentional that they look different from preprocessor variables. I found it interesting that CSS variables are expected to be used alongside preprocessor variables rather than replacing them. Lea explained how they work, how they can be accessed by JavaScript, and demonstrated several things CSS variables can do that preprocessor variables cannot! Make sure to check out the video of her talk when you get the chance.
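For reference, a minimal sketch of the syntax (the property and selector names are made up for illustration):

```css
/* Declare custom properties, conventionally on :root */
:root {
  --accent: #0074d9;
}

.button {
  background: var(--accent);
  /* var() takes an optional fallback for undefined properties */
  color: var(--button-text, #fff);
}

/* Unlike preprocessor variables, custom properties live in the cascade,
   so they can be overridden per scope or inside a media query */
@media (prefers-color-scheme: dark) {
  :root { --accent: #7fdbff; }
}
```

From JavaScript they can be read with `getComputedStyle(el).getPropertyValue('--accent')` and changed with `el.style.setProperty('--accent', value)` — something preprocessor variables, which compile away before the browser ever sees them, simply cannot do.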

SSH Key Access Recovery on EC2 Instances

Can't access your EC2 instance? Throw it away and spin up a new one! Everyone subscribes to the cattle server pattern, yes?

Not quite, of course. Until you reach a certain scale, that pattern is not as easy to maintain as a smaller collection of pet servers. While you can certainly run with the cattle pattern on Amazon, EC2 instances aren't quite as friendly about overcoming access issues as some other providers' offerings are. That is to say, even if you have access to the AWS account, there's no interface for forcing a root password change or the like.

But sometimes you need that, as an End Point client did recently. It was a legacy platform, and the party that set up the environment wasn't available. However, an issue popped up that needed to be solved, so we needed a way to get in. The process involves some EBS surgery and a little bit of downtime, but it is fairly straightforward. The client's system was effectively down already, so taking it all the way offline had little impact.

Also, do make sure that lost key access is the actual problem, and not that the connection is blocked by a security group configuration or some such.

  1. For safekeeping, it's recommended to create a snapshot of the original root volume. If something goes wrong, you can then roll back to that point.
  2. Create a temporary working instance in the same availability zone as the instance to fix. A micro instance is fine. Don't forget to set a key that you do have access to.
  3. Find the instance you need to fix and note its root volume ID. Double check that it has an Elastic IP assignment: if it is not associated with a static Elastic IP, the instance's public IP address will change when it is stopped. Similarly, ephemeral storage will be cleared (but hopefully you're not relying on having anything permanently there anyway).
  4. Stop, do not terminate, the instance to be fixed.
  5. Find the root volume for the instance to be fixed using the ID noted earlier (or just click it within the instance details pane) and detach it from the instance. Attach it to your working instance.
  6. Connect to your working instance, and mount that volume as /mnt (or anywhere, really; /mnt is just the example used here).
  7. Copy any needed ssh keys into the .ssh/authorized_keys under /mnt/home/ubuntu/ or /mnt/home/ec2-user/, depending on the base distro used for the image, or even just into /mnt/root/. And/or make any other fixes needed.
  8. When all is good, unmount the volume and detach it from the working instance. Attach it back to the original instance as /dev/sda1 (even though it doesn't say that's an option).
  9. Boot the original instance. If all goes well you should now be able to connect using the ssh key you added. Ensure that everything comes up on boot. Terminate the temporary working instance (and do make sure you don't terminate the wrong one).
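The same volume shuffle can be driven from the AWS CLI. The sketch below assumes a configured CLI with sufficient EC2 permissions; every ID, key file, and device name is a placeholder, and device naming in particular varies by AMI and instance type, so check what actually appears on the working instance before mounting.

```shell
BROKEN=i-0123456789abcdef0   # instance you're locked out of (placeholder)
WORKER=i-0fedcba9876543210   # temporary working instance (placeholder)
VOL=vol-0aabbccddeeff0011    # root volume of the broken instance (placeholder)

# Snapshot for safekeeping, then stop (not terminate!) the broken instance
aws ec2 create-snapshot --volume-id "$VOL" --description "pre-recovery backup"
aws ec2 stop-instances --instance-ids "$BROKEN"
aws ec2 wait instance-stopped --instance-ids "$BROKEN"

# Move the root volume over to the working instance
aws ec2 detach-volume --volume-id "$VOL"
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$WORKER" --device /dev/sdf

# ...then, on the working instance itself:
#   sudo mount /dev/xvdf1 /mnt              # device name may differ
#   cat mykey.pub >> /mnt/home/ubuntu/.ssh/authorized_keys
#   sudo umount /mnt

# Reattach as the root device and start the original instance back up
aws ec2 detach-volume --volume-id "$VOL"
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$BROKEN" --device /dev/sda1
aws ec2 start-instances --instance-ids "$BROKEN"
```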

That's not the only approach, of course. If you have a partially functional system, for example, you may be better off immediately creating a volume from the snapshot taken in step 1, mounting that in another instance for the modifications, and then performing the stop, root volume swap, and start in quick succession. That will minimize the actual downtime, at the potential expense of losing any data changes made between the snapshot and the reboot.

Either way just remember there are options, and all is not lost!