
AngularJS & Dancer for Modern Web Development

At the Perl Dancer Conference 2015, I gave a talk on AngularJS & Dancer for Modern Web Development. This is a write-up of the talk in blog post form.

Legacy Apps

It's a fact of life as a software developer that a lot of us have to work with legacy software. There are many older platforms out there, still being actively used today, and still supporting valid businesses. Thus, legacy apps are unavoidable for many developers. Eventually, older apps are migrated to new platforms. Or they die a slow death. Or else the last developer maintaining the app dies.

Oh, to migrate

It would be wonderful if I could migrate every legacy app I work on to something like Perl Dancer. This isn't always practical, but a developer can dream, right?

Of course, every circumstance is different. At the very least, it is helpful to consider ways that old apps can be migrated. Using new technologies can speed development, give you new features, and breathe new life into a project, often attracting new developers.

As I considered how to prepare my app for migration, here are a few things I came up with:

  • Break out of the Legacy App Paradigm
    • Consider that there are better ways to do things than the way they've always been done
  • Use Modern Perl
  • Organize business logic
    • Try to avoid placing logic in front-end code

You are in a legacy codebase

I explored how to start testing, but I soon realized that testing requires methods or subroutines. This led to the sad realization that until now, my life as a Perl programmer had been spent scripting. My code wasn't testable, and it looked like a relic, with business logic strewn about.


I set out to change my ways. I started exploring object-oriented Perl using Moo, since Dancer2 uses Moo. I started trying to write unit tests, and started to use classes and methods in my code.

Essentially, I began breaking down problems into smaller problems. This, after all, is how the best methods are written: short and simple, that do just one thing. I found that writing code this way was fun.
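As a sketch of that style (the class and numbers here are made up, and I'm using plain Perl OO so it runs anywhere; with Moo the attribute would be a `has` declaration), a short single-purpose method plus a unit test might look like:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical example class. With Moo, "rate" would be a has attribute.
package MyApp::Discount;

sub new {
    my ( $class, %args ) = @_;
    return bless { rate => $args{rate} // 0.10 }, $class;
}

# Short and simple, doing just one thing: apply the discount rate.
sub apply {
    my ( $self, $amount ) = @_;
    return $amount * ( 1 - $self->{rate} );
}

package main;

my $d = MyApp::Discount->new( rate => 0.25 );
is( $d->apply(100), 75, 'discount applied' );
done_testing();
```

Once code is shaped like this, every small method is a natural target for a test.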


I quickly realized that I wasn't able to run tests in my Legacy App, as it couldn't be called from the command line (at least not out of the box, and not without weird hacks). Thus, if my modules depended on Legacy App code, I wouldn't be able to call them from tests, because I couldn't run these tests from the shell.

This led me to a further refinement: abstract away all Legacy App-specific code from my modules. Or, at least all the modules I could (I would still need a few modules to rely on the Legacy App, or else I wouldn't be using it at all). This was a good idea, it turned out, as it follows the principle of Separation of Concerns, and the idea of Web App + App, which was mentioned frequently at the conference.

Now I was able to run tests on my modules!

Move already

This whole process of "getting ready to migrate" soon began to look like yak shaving. I realized that I should have moved to Dancer earlier, instead of trying to do weird hacks to get the Legacy App doing things as Dancer would do them.

However, it was a start, a step in the right direction. Lesson for me, tip for you.

And, the result was that my back-end code was all the more ready for working with Dancer. I would just need to change a few things, and presto! (More on this below.)


With the back-end looking tidier, I now turned to focus on the front-end. There was a lot of business logic in my front-end code that needed to be cleaned up.

Here is an example of my Legacy App front-end code:

@_TOP_@
<h1>[scratch page_title]</h1>
[perl]
    my $has_course;
    for (grep {$_->{mv_ib} eq 'course'} @$Items) {
        $has_course = 1;
    }
    return $has_course ? '<p>You have a course!</p>' : '';
[/perl]
<button>Buy [if cgi items]more[else]now[/else][/if]</button>
@_BOTTOM_@

As you can see, the Legacy App allowed the embedding of all sorts of code into the HTML page. I had Legacy App tags (in the brackets), plus something called "embedded perl", plus regular HTML. Add all this together and you get Tag Soup.

This kind of structure won't look nice if you attempt to view it on your own machine in a web browser, without the Legacy App interpreting it. But let's face it: this code doesn't look nice anywhere.

Separation of Concerns

I thought about how to apply the principle of Separation of Concerns to my front-end code. One thing I landed on, which isn't a new idea by any means, is the use of "HTML + placeholders," whereby I would use some placeholders in my HTML, to be later replaced and filled in with data. Here is my first attempt at that:

    page_title="[scratch page_title]"
    has_course="[perl] ... [/perl]"
    buy_phrase="Buy [if cgi items]more[else]now[/else][/if]"

    <h1>{PAGE_TITLE}</h1>
    {HAS_COURSE?}<p>You have a course!</p>{/HAS_COURSE?}
    <button>{BUY_PHRASE}</button>


What I have here uses the Legacy App's built-in placeholder system. It attempts to set up all the code in the initial "my-tag-attr-list", then the HTML uses placeholders (in braces) which get replaced upon the page being rendered. (The question-mark in the one placeholder is a conditional.)

This worked OK. However, the logic was still baked into the HTML page. I wondered how I could be more ready for Dancer. (Again, I should have just gone ahead and migrated.) I considered using Template::Toolkit, since it is used in Dancer, but it would be hard to add to my Legacy App.

Enter AngularJS (or your favorite JavaScript framework)

AngularJS is a JavaScript framework for front-end code. It displays data on your page, which it receives from your back-end via JSON feeds. This effectively allows you to separate your front-end from your back-end. It's almost as if your front-end is consuming an API. (Novel idea!)

After implementing AngularJS, my Legacy App page looked like this (not showing JavaScript):

<h1 ng-bind="page.title"></h1>
<p ng-if="items.course">You have a course!</p>
<button ng-show="items">Buy more</button>
<button ng-hide="items">Buy now</button>

Now all my Legacy App is doing for the front-end is basically "includes" to get the header/footer (the TOP and BOTTOM tags). The rest is HTML code with ng- attributes. These are what AngularJS uses to "do" things.

This is much cleaner than before. I am still using the Legacy App back-end, but all it has to do is "routing" to call the right module and deliver JSON (and do authentication).
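For instance, the JSON feed behind the example page might look something like this (the field names are illustrative, not the actual feed):

```json
{
    "page":  { "title": "Courses" },
    "items": [
        { "sku": "C-101", "name": "Intro to Perl", "course": true }
    ]
}
```

The ng- attributes above bind directly to this structure: page.title fills the heading, and items drives the conditional markup.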

Here's a quick example of how the JavaScript might look:

<html ng-app="MyApp">
<script src="angular.min.js"></script>
<script>
  // angular.module / factory / controller ...
  $scope.items = ...;
</script>

This is very simplified, but via its modules/factories/controllers, the AngularJS code handles how the JSON feeds are displayed in the page. It pulls in the JSON and can massage it for use by the ng- attributes, etc.

I don't have to use AngularJS to do this — I could use a Template::Toolkit template delivered by Dancer, or any number of other templating systems. However, I like this method, because it doesn't require a Perl developer to use. Rather, any competent JavaScript developer can take this and run with it.


Now the migration of my entire app to Dancer is much easier. I gave it a whirl with a handful of routes and modules, to test the waters. It went great.

For my modules that were the "App" (as opposed to the "Web App" modules that depended on the Legacy App), very few changes were necessary. Here is an example of my original module:

package MyApp::Feedback;
use MyApp;

my $app = MyApp->new( ... );

sub list {
    my $self = shift;
    my $code = shift
        or return $app->die('Need code');
    my $rows = $app->dbh($feedback_table)->...;
    return $rows;
}

You'll see that I am using a class called MyApp. I did this to get a custom die and a database handle. This isn't really the proper way to do this (I'm learning), but it worked at the time.

Now, after converting that module for use with Dancer:

package MyApp::Feedback;
use Moo;
with 'MyApp::HasDatabase';

sub list {
    my $self = shift;
    my $code = shift
        or die 'Need code';
    my $rows = $self->dbh->...;
    return $rows;
}

My custom die has been replaced with a Perl die. Also, I am now using a Moo::Role for my database handle. And that's all I changed!
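The role consumed there might look something like this (a sketch; the real role's connection details are elided, and the environment variable names are made up):

```perl
package MyApp::HasDatabase;   # hypothetical sketch of the real role
use Moo::Role;

# Build the handle lazily so classes consuming this role can be
# constructed in unit tests without touching a real database.
has dbh => (
    is      => 'lazy',
    builder => '_build_dbh',
);

sub _build_dbh {
    my $self = shift;
    require DBI;
    return DBI->connect( $ENV{MYAPP_DSN}, $ENV{MYAPP_DB_USER}, $ENV{MYAPP_DB_PASS} );
}

1;
```

Any class that says `with 'MyApp::HasDatabase';` then gets `$self->dbh` for free.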


The biggest improvements were in things that I "stole" from Dancer. (Naturally, Dancer would do things better than I.) This is my Legacy App's route for displaying and accepting feedback entries. It does not show any authentication checks. It handles feeding back an array of entries for an item ("list"), a single entry (GET), and saving an entry (POST):

sub _route_feedback {
    my $self = shift;
    my (undef, $sub_action, $code) = split '/', $self->route;
    $code ||= $sub_action;
    $self->_set_status('400 Bad Request');   # start with 400
    my $feedback = MyApp::Feedback->new;
    for ($sub_action) {
        when ("list") {
            my $feedbacks = $feedback->list($code);
            $self->_set_tmp( to_json($feedbacks) );
            $self->_set_content_type('application/json; charset=UTF-8');
            $self->_set_status('200 OK') if $feedbacks;
        }
        default {
            for ($self->method) {
                when ('GET') {
                    my $row = $feedback->get($code)
                        or return $self->_route_error;
                    $self->_set_tmp( to_json($row) );
                    $self->_set_content_type('application/json; charset=UTF-8');
                    $self->_set_status('200 OK') if $row;
                }
                when ('POST') {
                    my $params = $self->body_parameters
                        or return $self->_route_error;
                    $params = from_json($params);
                    my $result = $feedback->save($params);
                    $self->_set_status('200 OK') if $result;
                    $self->_set_content_type('application/json; charset=UTF-8');
                }
            }
        }
    }
}


Here are those same routes in Dancer:

prefix '/feedback' => sub {
    my $feedback = MyApp::Feedback->new;

    get '/list/:id' => sub {
        return $feedback->list( param 'id' );
    };

    get '/:code' => sub {
        return $feedback->get( param 'code' );
    };

    post '' => sub {
        return $feedback->save( scalar params );
    };
};

Dancer gives me a lot for free. It is a lot simpler. There's still no authentication shown here, but everything else is done. (And I can use an authentication plugin to make even that easy.)
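Because a Dancer2 app is just a PSGI app, routes like these can even be exercised from a test script with Plack::Test. Here is a sketch with a tiny inline app standing in for the real one (the package name and route body are made up):

```perl
use strict;
use warnings;
use Test::More;
use Plack::Test;
use HTTP::Request::Common;

# A tiny inline Dancer2 app standing in for the real routes.
{
    package MyApp::Web;
    use Dancer2;
    set serializer => 'JSON';
    get '/feedback/list/:id' => sub {
        return [ { code => route_parameters->get('id') } ];
    };
}

my $test = Plack::Test->create( MyApp::Web->to_app );
my $res  = $test->request( GET '/feedback/list/42' );

ok( $res->is_success, 'list route responds' );
like( $res->header('Content-Type'), qr{application/json}, 'serialized as JSON' );

done_testing();
```

No web server needed: the test talks to the app in-process, which fits the "keep it testable" theme nicely.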


For the front-end, we have options on how to use Dancer. We could have Dancer deliver the HTML files that contain AngularJS. Or, we could have the web server deliver them, as there is nothing special about them that says Dancer must deliver them. In fact, this is especially easy if our AngularJS code is a Single Page App, which is a single static HTML file with AngularJS "routes". If we did this, and needed to handle authentication, we could look at using JSON Web Tokens.

Now starring Dancer

In hindsight, I probably should have moved to Dancer right away. The Legacy App was a pain to work with, as I built my own Routing module for it, and I also built my own Auth checking module. Dancer makes all this simpler.

In the process, though, I learned something...

Dancer is better?

I learned you can use tools improperly. You can do Dancer "wrong". You can write tag soup in anything, even the best modern tools.

You can stuff all your business logic into Template::Toolkit tags. You can stuff logic into Dancer routes. You can do AngularJS "wrong" (I probably do).

Dancer is better when (thanks to Matt S Trout for these):

  • Routes contain code specific to the Web.
  • Routes call non-Dancer modules (where business logic lives; again, Web App + App).
  • The route returns the data in the appropriate format.

These make it easy to test. You are effectively talking to your back-end code as if it's an API. Because it is.

The point is: start improving somewhere. Maybe you cannot write tests in everything, but you can try to write smart code.

Lessons learned

  • Separate concerns
  • Keep it testable
  • Just start somewhere

The end. Or maybe the beginning...

Perl Dancer Conference 2015 Report - Conference Days

In my last post, I shared about the Training Days from the Perl Dancer 2015 conference, in Vienna, Austria. This post will cover the two days of the conference itself.

While there were several wonderful talks, Gert van der Spoel did a great job of writing recaps of all of them (Day 1, Day 2), so here I'll cover the ones that stood out most to me.

Day One

Dancer Conference, by Alexis Sukrieh (used with permission)

Sawyer X spoke on the State of Dancer. One thing mentioned, which came up again later in the conference, was: Make the effort, move to Dancer 2! Dancer 1 is frozen. There have been some recent changes to Dancer:

  • Middlewares for static files, so these are handled outside of Dancer
  • New Hash::MultiValue parameter keywords (route_parameters, query_parameters, body_parameters; covered in my earlier post)
  • Delayed responses (asynchronous) with delayed keyword:
    • Runs on the server after the request has finished.
    • Streaming is also asynchronous, feeding the user chunks of data at a time.

Items coming soon to Dancer may include: Web Sockets (supported in Plack), per-route serialization (currently enabling a serializer such as JSON affects the entire app — later on, Russell released a module for this, which may make it back into the core), Dancer2::XS, and critic/linter policies.

Thomas Klausner shared about OAuth & Microservices. Microservices are a good tool to manage complexity, but you might want to aim for "monolith first", according to Martin Fowler, and only later break up your app into microservices. In the old days, we had "fat" back-ends, which did everything and delivered the results to a browser. Now, we have "fat" front-ends, which take info from a back-end and massage it for display. One advantage of the microservice way of thinking is that mobile devices (or even third parties) can access the same APIs as your front-end website.

OAuth allows a user to login at your site, using their credentials from another site (such as Facebook or Google), so they don't need a password for your site itself. This typically happens via JavaScript and cookies. However, to make your back-end "stateless", you could use JSON Web Tokens (JWT). Thomas showed some examples of all this in action, using the OX Perl module.
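To make the JWT idea concrete: a token is just two base64url-encoded JSON segments plus an HMAC signature over them. A minimal sketch using only core modules (in real code, use a vetted module such as Crypt::JWT; the claim values here are made up):

```perl
use strict;
use warnings;
use MIME::Base64 qw(encode_base64url);
use Digest::SHA  qw(hmac_sha256);
use JSON::PP     qw(encode_json);

# Assemble an HS256 JWT by hand, purely for illustration.
sub make_jwt {
    my ( $claims, $secret ) = @_;
    my $header  = encode_base64url( encode_json( { alg => 'HS256', typ => 'JWT' } ) );
    my $payload = encode_base64url( encode_json($claims) );
    my $sig     = encode_base64url( hmac_sha256( "$header.$payload", $secret ) );
    return "$header.$payload.$sig";
}

my $token = make_jwt( { 'sub' => 'user42', exp => time + 3600 }, 'not-a-real-secret' );
print substr( $token, 0, 20 ), "...\n";
```

Since the back-end can verify the signature on every request, it does not need to keep session state, which is exactly what makes JWTs attractive for stateless APIs.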

One thing I found interesting that Thomas mentioned: Plack middleware is the correct place to implement most of the generic part of a web app. The framework is the wrong place. I think this mindset goes along with Sawyer's comments about Web App + App in the Training Days.

Mickey Nasriachi shared his development on PONAPI, which implements the JSON API specification in Perl. The JSON API spec is a standard for creating APIs. It essentially absolves you from having to make decisions about how you should structure your API.

Panorama from the south tower of St. Stephen's cathedral, by this author

Gert presented on Social Logins & eCommerce. This built on the earlier OAuth talk by Thomas. Here are some of the pros/cons to social login which Gert presented:

  • Pros - customer:
    • Alleviates "password fatigue"
    • Convenience
    • Brand familiarity (with the social login provider)
  • Pros - eCommerce website:
    • Expected customer retention
    • Expected increase in sales
    • Better target customers
    • "Plug & Play" (if you pay) — some services exist to make it simple to integrate social logins, where you just integrate with them, and then you are effectively integrated with whatever social login providers they support. These include Janrain and LoginRadius.
  • Cons - customer:
    • Privacy concerns (sharing their social identity with your site)
    • Security concerns (if their social account is hacked, so are all their accounts where they have used their social login)
    • Confusion (especially on how to leave a site)
    • Usefulness (no address details are provided by the social provider in the standard scope, so the customer still has to enter extra details on your site)
    • Social account hostages (if you've used your social account to login elsewhere, you are reluctant to shut down your social account)
  • Cons - eCommerce website:
    • Legal implications
    • Implementation hurdles
    • Usefulness
    • Provider problem is your problem (e.g., if the social login provider goes down, all your customers who use it to login are unable to login to your site)
    • Brand association (maybe you don't want your site associated with certain social sites)
  • Cons - social provider:
    • ???

Šimun Kodžoman spoke on Dancer + Meteor = mobile app. Meteor is a JavaScript framework for both server-side and client-side. It seems one of the most interesting aspects is you can use Meteor with the Android or iOS SDK to auto-generate a true mobile app, which has many more advantages than a simple HTML "app" created with PhoneGap. Šimun is using Dancer as a back-end for Meteor, because the server-side Meteor aspect is still new and unstable, and is also dependent on MongoDB, which cannot be used for everything.

End Point's own Sam Batschelet shared his work on Space Camp, a new container-based setup for development environments. This pulls together several pieces, including CoreOS, systemd-nspawn, and etcd to provide a futuristic version of DevCamps.

Day Two

Conference goers, by Sam (used with permission)

Andrew Baerg spoke on Taming the 1000-lb Gorilla that is Interchange 5. He shared how they have endeavored to manage their Interchange development in more modern ways, such as using unit tests and DBIC. One item I found especially interesting was the use of DBIx::Class::Fixtures to allow saving bits of information from a database to keep with a test. This is helpful when you have a bug from some database entry which you want to fix and ensure stays fixed, as databases can change over time, and without a "fixture" your test would not be able to run.

Russell Jenkins shared HowTo Contribute to Dancer 2. He went over the use of Git, including such helpful commands and tips as:

  • git status --short --branch
  • Write good commit messages: one line summary, less than 50 characters; longer description, wrapped to 72 characters; refer to and/or close issues
  • Work in a branch (you shall not commit to master)
  • "But I committed to master" --> branch and reset
  • git log --oneline --since=2.weeks
  • git add --fixup <SHA1 hash>
  • The use of branches named with "feature/whatever" or "bugfix/whatever" can be helpful (this is Russell's convention)

There are several Dancer 2 issues tagged "beginner suitable", so it is easy for nearly anyone to contribute. The Dancer website is also on GitHub. You can even make simple edits directly in GitHub!

It was great to have the author of Dancer, Alexis Sukrieh, in attendance. He shared his original vision for Dancer, which filled a gap in the Perl ecosystem back in 2009. The goal for Dancer was to create a DSL (Domain-specific language) to provide a very simple way to develop web applications. The DSL provides "keywords" for use in the Dancer app, which are specific to Dancer (basically extra functionality for Perl). One of the core aspects of keeping it simple was to avoid the use of $self (a standby of object-oriented Perl, one of the things that you just "have to do", typically).

Alexis mentioned that Dancer 1 is frozen — Dancer 2 full-speed ahead! He also shared some of his learnings along the way:

  • Fill a gap (define clearly the problem, present your solution)
  • Stick to your vision
  • Code is not enough (open source needs attention; marketing matters)
  • Meet in person (collaboration is hard; online collaboration is very hard)
  • Kill the ego — you are not your code

While at the conference, Alexis even wrote a Dancer2 plugin, Dancer2::Plugin::ProbabilityRoute, which allows you to do A/B Testing in your Dancer app. (Another similar plugin is Dancer2::Plugin::Sixpack.)

Also check out Alexis' recap.

Finally, I was privileged to speak as well, on AngularJS & Dancer for Modern Web Development. Since this post is already pretty long, I'll save the details for another post.


In summary, the Perl Dancer conference was a great time of learning and building community. If I had to wrap it all up in one insight, it would be: Web App + App — that is, your application should be a compilation of: Plack middleware, Web App (Dancer), and App (Perl classes and methods).

Perl Dancer Conference 2015 Report - Training Days

I just returned from the Perl Dancer Conference, held in Vienna, Austria. It was a jam-packed schedule of two days of training and two conference days, with five of the nine Dancer core developers in attendance.

[image of Vienna]

Kohlmarkt street, Wien, by this author

If you aren't familiar with Perl Dancer, it is a modern framework for Perl for building web applications. Dancer1 originated as a port of Ruby's Sinatra project, but has officially been replaced with a rewrite called Dancer2, based on Moo, with Dancer1 being frozen and only receiving security fixes. The Interchange 5 e-commerce package is gradually being replaced by Dancer plugins.

Day 1 began with a training on Dancer2 by Sawyer X and Mickey Nasriachi, two Dancer core devs. During the training, the attendees worked on adding functionality to a sample Dancer app. Some of my takeaways from the training:

  • Think of your app as a Dancer Web App plus an App. These should ideally be two separate things, where the Dancer web app provides the URL routes for interaction with your App.
  • The lib directory contains all of your application. The recommendation for large productions is to separate your app into separate namespaces and classes. Some folks use a routes directory just for routing code, with lib reserved for the App itself.
  • It is recommended to add an empty .dancer file to your app's directory, which indicates that this is a Dancer app (other Perl frameworks do similarly).
  • When running your Dancer app in development, you can use plackup -R lib bin/app.psgi which will restart the app automatically whenever something changes in lib.
  • Dancer handles all the standard HTTP verbs, except note that we must use del, not delete, as delete conflicts with the Perl keyword.
  • There are new keywords for retrieving parameters in your routes. Whereas before we only had param or params, it is now recommended to use:
    • route_parameters,
    • query_parameters, or
    • body_parameters
    • all of which can be used with ->get('foo') which is always a single scalar, or ->get_all('foo') which is always a list.
    • These allow you to specify which area you want to retrieve parameters from, instead of being unsure which param you are getting, if identical names are used in multiple areas.

Day 2 was DBIx::Class training, led by Stefan Hornburg and Peter Mottram, with assistance from Peter Rabbitson, the DBIx::Class maintainer.

DBIx::Class (a.k.a. DBIC) is an Object Relational Mapper for Perl. It exists to provide a standard, object-oriented way to deal with SQL queries. I am new to DBIC, and it was a lot to take in, but at least one advantage I could see was helping a project be able to change database back-ends, without having to rewrite code (cue PostgreSQL vs MySQL arguments).

I took copious notes, but it seems that the true learning takes place only as one begins to implement and experiment. Without going into too much detail, some of my notes included:

  • Existing projects can use dbicdump to quickly get a DBIC schema from an existing database, which can be modified afterwards. For a new project, it is recommended to write the schema first.
  • DBIC allows you to place business logic in your application (not your web application), so it is easier to test (once again, the recurring theme of Web App + App).
  • The ResultSet is a representation of a query before it happens. On any ResultSet you can call ->as_query to find the actual SQL that is to be executed.
  • DBIx::Class::Schema::Config provides credential management for DBIC, and allows you to move your DSN/username/password out of your code, which is especially helpful if you use Git or a public GitHub.
  • DBIC is all about relationships (belongs_to, has_many, might_have, and has_one). many_to_many is not a relationship per se but a convenience.
  • DBIx::Class::Candy provides prettier, more modern metadata, but cannot currently be generated by dbicdump.
  • For deployment or migration, two helpful tools are Sqitch and DBIx::Class::DeploymentHandler. Sqitch is better for raw SQL, while DeploymentHandler is for DBIC-managed databases. These provide easy ways to migrate, deploy, upgrade, or downgrade a database.
  • Finally, RapidApp can read a database file or DBIC schema and provide a nice web interface for interacting with a database. As long as you define your columns properly, RapidApp can generate image fields, rich-text editors, date-pickers, etc.

The training days were truly like drinking from a firehose, with so much good information. I am looking forward to putting this into practice!

Stay tuned for my next blog post on the Conference Days.

NOAA Marine Sanctuaries in Liquid Galaxy


End Point enjoys a strong presence in the aquarium and museum community with our ability to deploy the Liquid Galaxy display platform. Our content team recently embedded 14 graphic overlays in the Aquarium of the Pacific and Monterey Bay National Marine Sanctuary Liquid Galaxy systems. These layers highlight National Marine Sanctuary PDF maps created by the National Oceanic and Atmospheric Administration (NOAA). The Office of National Marine Sanctuaries serves as the trustee for a network of 14 protected areas encompassing more than 170,000 square miles from American Samoa to the Florida Keys. As marine sanctuaries, however, these locations are not going to see as many humans as Yosemite or Yellowstone. Embedding the maps on the Liquid Galaxy allows viewers to dynamically explore a marine sanctuary map and its relationship to the larger landscape in Google Earth, bringing these remote locations to many more interactions with the public.

We are happy to share some of the media that goes into these presentations: a short YouTube video features the marine sanctuary overlays in Google Earth. If you would like to explore the marine sanctuary overlays on your desktop in Google Earth, the .kmz files are hosted with Google Drive.

End Point’s 20th anniversary meeting

End Point held a company-wide meeting at our New York City headquarters for two days on October 1 and 2. We had an excellent two days of presentations, discussions, and socializing with each other.

In addition to our main Manhattan office we have an office in Tennessee, and many of us work throughout the world from home offices. Because of this, we usually work together through text chat, phone, video call, and other remote means. Everyone traveling to New York City for this gathering was a great chance to get to know each other more personally.

The meeting itself was prefaced by a meetup at the Metropolitan Museum of Art, which turned out to be a fun game of hide-and-seek, trying to find each other throughout the museum's exhibits.

We're 20 years old!

This gathering was a special occasion because this year we are celebrating our 20th anniversary as a company! Day one of our meetings began with introductory comments by End Point's founders, Rick Peltzman, CEO, and Ben Goldstein, President. Together with Jon Jensen, CTO, they took a look back at where we've been, where we are now, and where we're going.

Rick had this message for our clients and friends in August, the month the company was founded:

Hello and happy birthday to us! This week marks End Point’s 20th anniversary. Congratulations to all our friends, clients, business partners, advisers, and especially our gifted engineers over these many years that have been the core of what makes this company successful.

We started as a two-person company out of New York, building simple websites in the infant stages of the oncoming internet boom. We now are 55 people strong and counting, throughout the world. Our skillsets have expanded to include dozens of technologies and services. And we’re extremely proud of the emergence of our Liquid Galaxy division. End Point is no longer your father’s internet company!

Along the way we weathered stock market crashes and bubble bursts, three U.S. presidents, amazing triumphs and heartbreaking human events. Still, through it all, End Point has adapted, grown, and has always lived up to its core principles: providing excellent support, deep knowledge of all things internet, and great client relationships.

Thank you again for helping us thrive, improve, and position ourselves for an even more exciting next 20 years!

Remote work tips

Moving on, two of our company directors, Ron Phipps and Richard Templet, took the floor to tell us about some remote work hacks, or tips and techniques, they use. Ron recommends using multiple monitors to increase productivity, dedicating one monitor to tools for time tracking, chat, calendar, etc. so they're always in sight and easy to access. Ron summarized how he records his time by writing notes down before starting the work, then refining the description and recording the time spent as he goes.

Ron also talked about the value of using voice or video calls with Google Hangouts or Skype to break through communication logjams. We use lots of good tools including email, IRC, Flowdock, Trello, Redmine, and Google Calendar, but they are inefficient for rapid, in-depth conversations. When there's confusion or misunderstanding in a discussion, speaking together in real-time makes it much easier to clear things up. Using the phone also helps us establish better relationships with each other and our clients, which, in turn, improves our work together.

Some other tips from Ron: Keep notes and download apps for whatever tools your team or projects are using. Use separate browsers or browser profiles for work and personal stuff. When traveling, relying solely on WiFi is a bad idea. Bringing a 50-100 foot Ethernet cable and an extra router has saved a few of our employees before.

Richard shared some tips as well:

  • Keep some things offline so your work doesn't shudder to a halt whenever the network cuts out.
  • Taking notes with pen and paper works without power or a network, helps you get things into your head, and is much simpler than using an app or website.
  • Bringing a backup power supply or battery when traveling can save you from a bad situation.
  • Whenever working at home, setting up a spot that is your office helps cut down on outside distractions.
  • Use screen or tmux on remote servers so you don't lose work when the network drops.
  • Pair or team programming can do miracles for productivity! Use a shared screen or tmux session while talking on the phone with a headset.

Perusion history

Greg Hanson and Mike Heins then reviewed the history of their company Perusion which merged with End Point in July 2015. Greg and Mike met when Greg was looking for someone to create a website for his computer hardware store and later they went into the consulting business together.

Perusion started with just a few clients, getting business by word of mouth. Their first big client was Vervanté, an on-demand publishing firm built on Interchange. Vervanté started small but has grown to a large business supporting thousands of authors.

We have written up more about Perusion in the blog post Perusion has joined End Point!


Piotr Hankiewicz and Greg Davidson talked about their work with Stance, a company founded in 2009 that creates and sells stylish socks. Most of the work we've done has been on the "product wall," which was challenging because of the sheer number of products and many ways to filter them. We started working with Stance in 2014, when they wanted help replacing their Magento site and redesigning it to be more responsive. We use Snapshot.js to very efficiently sort and filter large datasets. It also lets us keep the number of AngularJS models low, which keeps the site fast.

The product wall has around 3000 SKUs, and we've built a complex JavaScript system which makes it possible to sort and filter. It's even possible to filter the products by a combination of many things: color, size, thickness, price, collection, etc. There's a search function which is also integrated with the site.



Carjojo

Josh Williams and Patrick Lewis talked about Carjojo, a company that helps consumers find detailed information and history on cars they're interested in buying.

Josh works on the back-end of their site, creating a REST API with Django and TastyPie. TastyPie makes it very easy to write functions that return data from the database, though things get complicated when returning datasets more complex than what TastyPie is built to handle.

The front-end is handled by Patrick, who creates a modern Angular-based JavaScript web application. There are two main parts of the front-end; one for filtering cars when you have a more general idea of what kind of car you're looking for, and the other for when you have a more specific idea of what you're looking for. They will both result in a detail page for a vehicle.

This architecture lends itself to a simple development process: It starts with the web designer, who gives Patrick a mockup of a page he wants created. Patrick implements the front-end app code, figures out what data is necessary to make the page work, and then Josh works on the backend to implement functions to return that data.

Code quality and testing

Kamil Ciemniewski and Marina Lohova talked about testing code. They started by talking about the difference between core testing (functionality of a program), compatibility testing (across browsers and devices), and usability testing.

Automated testing is a great way to make sure everything's working. It is very useful for catching things that the developer may not catch if they're testing manually. Automated testing can also easily cover odd scenarios, such as null values and empty search results (and other "fuzzing" of input data).

Kamil also talked about the importance of preventing bugs versus fixing them when they happen. It's much better for everyone involved if a bug is prevented or avoided rather than fixed after it's been found. It means the user can do what they want to, the client feels like they are getting what they're paying for, and the developer feels satisfied with their work.

These are some ways to help prevent bugs coming up:

  • Use declarative syntax rather than imperative.
  • Always be clear about types.
  • Break code into small chunks.
  • Create terse code that is easy to read and documents itself.
  • Stick to standard libraries when possible.
  • Learn common programming paradigms.
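
The first two tips can be illustrated with a small JavaScript sketch (the data here is invented for illustration). The imperative version tracks loop state by hand, while the declarative version describes the result and leaves less room for off-by-one bugs:

```javascript
var orders = [
    { id: 1, total: 40, paid: true },
    { id: 2, total: 25, paid: false },
    { id: 3, total: 60, paid: true }
];

// Imperative: manual loop state, easy to get subtly wrong
var paidTotal = 0;
for (var i = 0; i < orders.length; i++) {
    if (orders[i].paid) {
        paidTotal += orders[i].total;
    }
}

// Declarative: say what we want, not how to loop
var paidTotalDecl = orders
    .filter(function (o) { return o.paid; })
    .reduce(function (sum, o) { return sum + o.total; }, 0);

console.log(paidTotal, paidTotalDecl); // both 100
```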

Having clear communication with the client is also an important part of testing, as they're really the only ones qualified to judge the quality of the product they envisioned.


Agile development methodologies

Wojciech Ziniewicz and Brian Gadoury talked about Agile theory and practice. First, they briefly covered "waterfall development," which starts with defining requirements, then goes on to design, development, testing, and then maintenance. In reality, requirements change, design needs revision, development takes longer than expected, testing may be skipped or skimped on, and maintenance is never perfect.

The Agile Manifesto was written in 2001, and emphasizes a focus on adapting to change, a simpler process, and tight feedback loops. "Agile development" has become a collection of management and development methodologies. It’s intended to be light on the process overhead, iterative and incremental, and to be helpful when done right.

There are now many flavors of Agile, such as Extreme Programming (XP), Scrum, Crystal, Kanban, Lean, DSDM, etc.

XP focuses on things like short development cycles and many checkpoints. It relies on great communication and tools to help with frequent small releases, like good devops. It also requires pair programming. According to a study done by the University of Utah and North Carolina State University, pair programmers spend 15% more time on problems, but the resulting code has 15% fewer defects. 96% of programmers in the study said they enjoy pair programming more than solo programming. Pair programming also makes knowledge-sharing easy.

Scrum involves short "sprints", usually one or two weeks long, with a daily standup meeting along the way: a short, disciplined meeting where you discuss what you've done since the last meeting, any blockers to your work, and what you are and will be doing in the near future.

One interesting agile technique is planning poker, which uses playing cards to create estimates for work, with the numbers representing the difficulty of a task. Everyone flips their cards over at the same time, and people with notably high or low estimates are given a chance to explain their reasoning. The Dunning-Kruger effect is a very good reason to use this: it surfaces cases where someone is estimating too low, or where the rest of the team expects a task to be much easier than the estimator does. There is a nice plugin for Google Hangouts for doing planning poker remotely.

Test driven development (TDD) is another good agile technique. TDD involves writing tests before code, running the test (it'll fail), writing code to fix the test problems, then testing again to confirm it's all working as expected. It can get in the way sometimes, but usually ends up making you write better code.
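
The TDD cycle can be sketched in a few lines of JavaScript (the `slugify` function and its spec are invented for illustration): write the assertions first, watch them fail, then write just enough code to make them pass.

```javascript
// Step 1: write the test first (it fails while slugify doesn't exist)
function testSlugify() {
    console.assert(slugify('Hello World') === 'hello-world');
    console.assert(slugify('  Perl  Dancer ') === 'perl-dancer');
}

// Step 2: write just enough code to make the test pass
function slugify(title) {
    return title.trim().toLowerCase().replace(/\s+/g, '-');
}

// Step 3: run the test again and confirm it passes
testSlugify();
```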

Several agile methodologies encourage measuring velocity: assigning points to tasks, tracking how many points the team completes per iteration, and using that rate to gauge the team's capacity for new work. In other words, rather than using points to represent time, use them to represent difficulty, and improve the team's speed at completing tasks over time.
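
As a concrete sketch (the sprint numbers are invented for illustration), velocity is just the points actually completed per sprint, averaged, and that average then caps what the team commits to next:

```javascript
// Points completed in the last few sprints
var completedPoints = [21, 18, 24, 19];

// Velocity: average points the team actually finishes per sprint
var velocity = completedPoints.reduce(function (a, b) { return a + b; }, 0)
             / completedPoints.length;

console.log(velocity); // 20.5

// Plan the next sprint: take backlog tasks until estimates reach velocity
var backlog = [8, 5, 5, 3, 8, 2];
var planned = [];
var committed = 0;
backlog.forEach(function (points) {
    if (committed + points <= velocity) {
        planned.push(points);
        committed += points;
    }
});
console.log(planned, committed); // [ 8, 5, 5, 2 ] 20
```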

Continuous Integration

Zdeněk Maxa talked about another Agile technique, Continuous Integration (CI). CI is a method used in software development where developers commit and push to the code base frequently. A CI server automatically runs unit tests and/or builds the software packages and reports on the success or any problems introduced by recent commits or merges. CI tests and builds are usually done at least once a day, if not after every single push. This kind of rapid re-alignment can work wonders in helping avoid last-minute conflict merges and provide quality assurance.

Project estimation

David Christensen talked about project estimation. Successful agreements for new projects come through good communication, scoping, and estimation. Estimation varies in difficulty much like the engineering itself; sometimes it is easy, for example when the project involves things we’ve done many times and people who know the area of work well have time to work on it. Other estimates are more difficult, when unclear objectives, uncertainty, or lack of experience in the field get in the way.

Depending on the circumstance, it can be useful to put together a very broad estimate with a wide range, so that the customer has a rough idea of the size of the project and can decide whether to pursue it at all. Such a rough estimate is much quicker and simpler than putting lots of time into project analysis, only to find out the project is far outside of the budget range.

A large project that requires a more exact estimate may call for a smaller paid discovery project, which involves a deeper investigation into the project and ferreting out hidden pitfalls and risks. It can be incredibly useful for when a more exact estimate of time and/or cost is needed.

David says we need to avoid unrealistic engineer optimism and be honest about estimates. Overpromising leads to unhappy clients and unhappy management. Clients need to make informed decisions, and giving them only a best-case scenario isn’t good for that. To that end, we can solicit input from more experienced people and those who are subject matter experts in relevant areas.


Electrical problem in the building!

Our day had some extra excitement when our office building was evacuated due to an electrical problem that may have posed a fire risk. We split up for short walks around the neighborhood, until Ben Witten found us a great temporary meeting place at Rise New York, a co-working space focused on helping financial startups work together. The rest of the afternoon our meeting continued there, a comfortable and convenient place just down the street from our office!

Continue reading about day 2 of our meeting!

MediaWiki extension.json change in 1.25

I recently released a new version of the MediaWiki "Request Tracker" extension, which provides a nice interface to your RequestTracker instance, allowing you to view the tickets right inside of your wiki. There are two major changes I want to point out. First, the name has changed from "RT" to "RequestTracker". Second, it is using the brand-new way of writing MediaWiki extensions, featuring the extension.json file.

The name change rationale is easy to understand: I wanted it to be more intuitive and easier to find. A search for "RT" ends up finding references to the WikiMedia RequestTracker system, while a search for "RequestTracker" finds the new extension right away. Also, the old name was too short and failed to indicate to people what it was. The "rt" tag used by the extension stays the same. So, to produce a table showing all open tickets for user 'alois', you still write:

<rt u='alois'></rt>

The other major change was to modernize it. As of version 1.25 of MediaWiki, extensions are encouraged to use a new system to register themselves with MediaWiki. Previously, an extension would have a PHP file named after the extension that was responsible for doing the registration and setup—usually by mucking with global variables! There was no way for MediaWiki to figure out what the extension was going to do without parsing the entire file, and thereby activating the extension. The new method relies on a standard JSON file called extension.json. Thus, in the RequestTracker extension, the file RequestTracker.php has been replaced with the much smaller and simpler extension.json file.

Before going further, it should be pointed out that this is a big change for extensions, and was not without controversy. However, as of MediaWiki 1.25 it is the new standard for extensions, and I think the project is better for it. The old way will continue to be supported, but extension authors should be using extension.json for new extensions, and converting existing ones over. As an aside, this is another indication that JSON has won the data format war. Sorry, XML, you were too big and bloated. Nice try YAML, but you were a little *too* free-form. JSON isn't perfect, but it is the best solution of its kind. For further evidence, see Postgres, which now has outstanding support for JSON and JSONB. I added support for YAML output to EXPLAIN in Postgres some years back, but nobody (including me!) was excited enough about YAML to do more than that with it. :)

The extension.json file asks you to fill in some standard metadata fields about the extension, which are then used by MediaWiki to register and set up the extension. Another advantage of doing it this way is that you no longer need to add a bunch of ugly include_once() function calls to your LocalSettings.php file. Now, you simply call the name of the extension as an argument to the wfLoadExtension() function. You can even load multiple extensions at once with wfLoadExtensions():

## Old way:
$wgRequestTrackerURL = '';

## New way:
wfLoadExtension( 'RequestTracker' );
$wgRequestTrackerURL = '';

## Or even load three extensions at once:
wfLoadExtensions( array( 'RequestTracker', 'Balloons', 'WikiEditor' ) );
$wgRequestTrackerURL = '';

Note that configuration changes specific to the extension still must be defined in the LocalSettings.php file.

So what should go into the extension.json file? The extension development documentation has some suggested fields, and you can also view the canonical extension.json schema. Let's take a quick look at the RequestTracker/extension.json file. Don't worry, it's not too long.

    "manifest_version": 1,
    "name": "RequestTracker",
    "type": "parserhook",
    "author": [
        "Greg Sabino Mullane"
    "version": "2.0",
    "url": "",
    "descriptionmsg": "rt-desc",
    "license-name": "PostgreSQL",
    "requires" : {
        "MediaWiki": ">= 1.25.0"
    "AutoloadClasses": {
        "RequestTracker": "RequestTracker_body.php"
    "Hooks": {
        "ParserFirstCallInit" : [
    "MessagesDirs": {
        "RequestTracker": [
    "config": {
        "RequestTracker_URL": "",
        "RequestTracker_DBconn": "user=rt dbname=rt",
        "RequestTracker_Formats": [],
        "RequestTracker_Cachepage": 0,
        "RequestTracker_Useballoons": 1,
        "RequestTracker_Active": 1,
        "RequestTracker_Sortable": 1,
        "RequestTracker_TIMEFORMAT_LASTUPDATED2": "FMMonth DD, YYYY",
        "RequestTracker_TIMEFORMAT_CREATED": "FMHH:MI AM FMMonth DD, YYYY",
        "RequestTracker_TIMEFORMAT_CREATED2": "FMMonth DD, YYYY",
        "RequestTracker_TIMEFORMAT_RESOLVED": "FMHH:MI AM FMMonth DD, YYYY",
        "RequestTracker_TIMEFORMAT_RESOLVED2": "FMMonth DD, YYYY",
        "RequestTracker_TIMEFORMAT_NOW": "FMHH:MI AM FMMonth DD, YYYY"

The first field in the file is manifest_version, and simply indicates the extension.json schema version. Right now it is marked as required, and I figure it does no harm to throw it in there. The name field should be self-explanatory, and should match your CamelCase extension name, which will also be the subdirectory where your extension will live under the extensions/ directory. The type field simply tells what kind of extension this is, and is mostly used to determine which section of the Special:Version page an extension will appear under. The author is also self-explanatory, but note that this is a JSON array, allowing for multiple items if needed. The version and url are highly recommended. For the license, I chose the dirt-simple PostgreSQL license, whose only fault is its name. The descriptionmsg is what will appear as the description of the extension on the Special:Version page. As it is a user-facing text, it is subject to internationalization, and thus rt-desc is converted to your current language by looking up the language file inside of the extension's i18n directory.

The requires field only supports a "MediaWiki" subkey at the moment. In this case, I have it set to require at least version 1.25 of MediaWiki - as anything lower will not even be able to read this file! The AutoloadClasses key is the new way of loading code needed by the extension. As before, this should be stored in a php file with the name of the extension, an underscore, and the word "body" (e.g. RequestTracker_body.php). This file contains all of the functions that perform the actual work of the extension.

The Hooks field is one of the big advantages of the new extension.json format. Rather than worrying about modifying global variables, you can simply let MediaWiki know what functions are associated with which hooks. In the case of RequestTracker, we need to do some magic whenever a <rt> tag is encountered. To that end, we need to instruct the parser that we will be handling any <rt> tags it encounters, and also tell it what to do when it finds them. Those details are inside the wfRequestTrackerParserInit function:

function wfRequestTrackerParserInit( Parser $parser ) {
    $parser->setHook( 'rt', 'RequestTracker::wfRequestTrackerRender' );
    return true;
}

The config field provides a list of all user-configurable variables used by the extension, along with their default values.

The MessagesDirs field tells MediaWiki where to find your localization files. This should always be in the standard place, the i18n directory. Inside that directory are localization files, one for each language, as well as a special file named qqq.json, which gives information about each message string as a guide to translators. The language files are of the format "xxx.json", where "xxx" is the language code. For example, RequestTracker/i18n/en.json contains English versions of all the messages used by the extension. The i18n files look like this:

$ cat en.json
{
  "rt-desc"       : "Fancy interface to RequestTracker using <code>&lt;rt&gt;</code> tag",
  "rt-inactive"   : "The RequestTracker extension is not active",
  "rt-badcontent" : "Invalid content args: must be a simple word. You tried: <b>$1</b>",
  "rt-badquery"   : "The RequestTracker extension encountered an error when talking to the RequestTracker database",
  "rt-badlimit"   : "Invalid LIMIT (l) arg: must be a number. You tried: <b>$1</b>",
  "rt-badorderby" : "Invalid ORDER BY (ob) arg: must be a standard field (see documentation). You tried: <b>$1</b>",
  "rt-badstatus"  : "Invalid status (s) arg: must be a standard field (see documentation). You tried: <b>$1</b>",
  "rt-badcfield"  : "Invalid custom field arg: must be a simple word. You tried: <b>$1</b>",
  "rt-badqueue"   : "Invalid queue (q) arg: must be a simple word. You tried: <b>$1</b>",
  "rt-badowner"   : "Invalid owner (o) arg: must be a valid username. You tried: <b>$1</b>",
  "rt-nomatches"  : "No matching RequestTracker tickets were found"
}

$ cat fr.json
{
  "@metadata": {
     "authors": [
         "Josh Tolley"
     ]
  },
  "rt-desc"       : "Interface sophistiquée de RequestTracker avec l'élement <code>&lt;rt&gt;</code>.",
  "rt-inactive"   : "Le module RequestTracker n'est pas actif.",
  "rt-badcontent" : "Paramètre de contenu « $1 » est invalide: cela doit être un mot simple.",
  "rt-badquery"   : "Le module RequestTracker ne peut pas contacter sa base de données.",
  "rt-badlimit"   : "Paramètre à LIMIT (l) « $1 » est invalide: cela doit être un nombre entier.",
  "rt-badorderby" : "Paramètre à ORDER BY (ob) « $1 » est invalide: cela doit être un champs standard. Voir le manuel utilisateur.",
  "rt-badstatus"  : "Paramètre de status (s) « $1 » est invalide: cela doit être un champs standard. Voir le manuel utilisateur.",
  "rt-badcfield"  : "Paramètre de champs personalisé « $1 » est invalide: cela doit être un mot simple.",
  "rt-badqueue"   : "Paramètre de queue (q) « $1 » est invalide: cela doit être un mot simple.",
  "rt-badowner"   : "Paramètre de propriétaire (o) « $1 » est invalide: cela doit être un mot simple.",
  "rt-nomatches"  : "Aucun ticket trouvé"
}

One other small change I made to the extension was to allow both ticket numbers and queue names to be used inside of the tag. To view a specific ticket, one was always able to do this:

<rt>6567</rt>
This would produce the text "RT #6567", with information on the ticket available on mouseover, and hyperlinked to the ticket inside of RT. However, I often found myself using this extension to view all the open tickets in a certain queue like this:

<rt q="dyson"></rt>

It seems easier to simply add the queue name inside the tags, so in this new version one can simply do this:

<rt>dyson</rt>
If you are running MediaWiki 1.25 or better, try out the new RequestTracker extension! If you are stuck on an older version, use the RT extension and upgrade as soon as you can. :)

Intl - JavaScript numbers and dates formatting, smart string comparison


*** WARNING *** At the time of writing, the features mentioned below are not yet supported by Safari or by most mobile browsers.

It's been almost three years since Ecma International published the 1st version of the "ECMAScript Internationalization API Specification", and it's now widely supported by most browsers. It introduced a new object called Intl. Let's see what it can do.
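
Given the support warning above, it's worth feature-detecting Intl before relying on it. Here's a minimal sketch (the `formatPrice` helper and its crude fallback are my own illustration, not part of the spec):

```javascript
function formatPrice(amount, locale, currency) {
    // Use Intl when the browser provides it
    if (typeof Intl !== 'undefined' && typeof Intl.NumberFormat === 'function') {
        return new Intl.NumberFormat(locale, {
            style: 'currency',
            currency: currency
        }).format(amount);
    }
    // Crude fallback for browsers without Intl support
    return currency + ' ' + amount.toFixed(2);
}

console.log(formatPrice(99.5, 'en-GB', 'GBP'));
```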

To make it easier imagine that we have a banking system with a possibility of having accounts in multiple currencies. Our user is Mr. White, a rich guy.

Intl powers

Number formatting

Mr. White has four accounts in four different currencies: British pound, Japanese yen, Swiss franc, and Moroccan dirham. If we want a list of current balances with the correct currency symbols, it's pretty simple:

// locales and balances object
var accounts = [
    {
        locale: 'en-GB',
        balance: 165464345,
        currency: 'GBP'
    },
    {
        locale: 'ja-JP',
        balance: 664345,
        currency: 'JPY'
    },
    {
        locale: 'fr-CH',
        balance: 904345,
        currency: 'CHE'
    },
    {
        locale: 'ar-MA',
        balance: 4345,
        currency: 'MAD'
    }
];

// now print how rich Mr. White is!
accounts.forEach(function accountsPrint (account) {
    console.log(new Intl.NumberFormat(account.locale, {style: 'currency', currency: account.currency}).format(account.balance));
});

For the last two accounts, the output looks like:

"CHE 904 345"
"د.م.‏ ٤٬٣٤٥"

In a real application you'd typically use the same locale for a view/language/page, the different examples above are just to show the power of Intl!

Date and time easier

Mr. White changed his UI language to Norwegian. How to show dates and weekdays without a big effort?

// we need a list of the current week's dates and weekdays
var startingDay = new Date();
var thisDay = new Date();
var options = {weekday: 'long', year: 'numeric', month: 'long', day: 'numeric'};
for (var i = 0; i < 7; i++) {
    thisDay.setDate(startingDay.getDate() + i);
    console.log(new Intl.DateTimeFormat('nb-NO', options).format(thisDay));
}

The output is:

"onsdag 23. september 2015"
"torsdag 24. september 2015"
"fredag 25. september 2015"
"lørdag 26. september 2015"
"søndag 27. september 2015"
"mandag 28. september 2015"
"tirsdag 29. september 2015"

Trust me, it's correct (;p).

String comparison

Mr. White has a list of his clients from Sweden. He uses his UI in German, as it's the default language, but Mr. White is Swedish.

var clients = ["Damien", "Ärna", "Darren", "Adam"];
clients.sort(new Intl.Collator('de').compare); 

Here Mr. White will expect to find Mr. Ärna at the end of the list. But in the German alphabet Ä sorts in a different place than in Swedish, which is why the output is:

["Adam", "Ärna", "Damien", "Darren"]

To make Mr. White happy we need to modify our code a bit:

var clients = ["Damien", "Ärna", "Darren", "Adam"];
clients.sort(new Intl.Collator('sv').compare); 

Now we are sorting using a Collator object with the Swedish locale. The output is different now:

["Adam", "Damien", "Darren", "Ärna"]


Be careful when using the Intl object -- its implementation is still not perfect, and it's not supported by all browsers. There is a nice library called Intl.js, by Andy Earnshaw. It's nothing more than a compatibility implementation of the ECMAScript Internationalization API. You can use it to get all the Intl features now, without worrying about differences between browser implementations.

Intro to DimpleJS, Graphing in 6 Easy Steps

Data visualization is a big topic these days, given the giant amount of data being collected, and graphing that data is important for understanding it easily. Of course there are many tools available for this purpose, but one of the more popular choices is D3.

D3 is very versatile, but it can be more complicated than necessary for simple graphing. So, what if you just want to spin up some graphs quickly? I was recently working on a project where we were trying to do just that. That is when I was introduced to DimpleJS.

The advantage of using DimpleJS rather than plain D3 is speed: it allows you to quickly create customizable graphs with your data, gives you easy access to the underlying D3 objects, and is intuitive to code. I've also found the creator, John Kiernander, to be very responsive on Stack Overflow when I ran into issues.

I was really impressed with how flexible DimpleJS is. You can make a large variety of graphs quickly and easily: update the labels on the graph and on the axes, create your own tooltips, add colors and animations, and more.

I thought I'd make a quick example to show just how easy it is to start graphing.

Step 1

After including Dimple in your project, you simply create a div and give it an id to be used in your JavaScript.

<script src=""></script>
<script src=""></script>
<div id="chart" style="background:grey"></div>

Step 2

In your JavaScript you use Dimple to create an SVG, targeting the element with your id (in this case "chart"); I've also given it some size options.

var svg1 = dimple.newSvg("#chart", 600, 400);

Step 3

Then you get your data set, whether through an API call, hardcoded, or some other source. I've hardcoded some sample data of weeks and miles run for two runners.

var data1 = [
    [
      {week: 'week 1', miles: 1},
      {week: 'week 2', miles: 2},
      {week: 'week 3', miles: 3},
      {week: 'week 4', miles: 4}
    ],
    [
      {week: 'week 1', miles: 2},
      {week: 'week 2', miles: 4},
      {week: 'week 3', miles: 6},
      {week: 'week 4', miles: 8}
    ]
];

Step 4

Next, you set up your axes. You can create multiple x and y axes with Dimple, but for the sake of this example I will just create one x and one y. The two that I use most are category and measure. Category is used for string values, here the runners' names. One word of caution from the docs for measure axes: "If the field is numerical the values will be aggregated, otherwise a distinct count of values will be used." This means that if I had two runners who were both named Bill in the same data set, and say the first Bill ran 1 mile in week 1 and the second Bill ran 2 miles in week 1, Dimple would graph one x-value Bill and one y-value 3, rather than two Bills, one with 1 mile and one with 2.

var xAxis = chart1.addCategoryAxis("x", "week");
var yAxis = chart1.addMeasureAxis("y", "miles");
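
The measure-axis aggregation described above is easy to reproduce in plain JavaScript (this is an illustration of the concept, not Dimple's internal code): grouping by category and summing the measure collapses the two Bills into one point.

```javascript
var rows = [
    { runner: 'Bill', week: 'week 1', miles: 1 },
    { runner: 'Bill', week: 'week 1', miles: 2 }  // a second Bill, same week
];

// Group by (runner, week) and sum miles, as a measure axis would
var aggregated = {};
rows.forEach(function (row) {
    var key = row.runner + '|' + row.week;
    aggregated[key] = (aggregated[key] || 0) + row.miles;
});

console.log(aggregated); // { 'Bill|week 1': 3 } -- one point at 3 miles, not two
```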

Step 5

Now we've set up the axes, so we want to actually plot the points. In Dimple we call this "adding a series". We are telling it what kind of graph we want, and what to call the data (in this case, the runner's name), and we assign it the data we'd like to use.

var s1 = chart1.addSeries("Bill", dimple.plot.line); = data1[0];

var s2 = chart1.addSeries("Sarah", dimple.plot.line); = data1[1];

Step 6

Lastly, we simply tell Dimple to actually draw it:

chart1.draw();
Here's the code in full, along with a working JSBin showing the output:

var svg1 = dimple.newSvg("#chart", 600, 400);
var chart1 = new dimple.chart(svg1);
var xAxis = chart1.addCategoryAxis("x", "week");
var yAxis = chart1.addMeasureAxis("y", "miles");

var data1 = [
    [
      {week: 'week 1', miles: 1},
      {week: 'week 2', miles: 2},
      {week: 'week 3', miles: 3},
      {week: 'week 4', miles: 4}
    ],
    [
      {week: 'week 1', miles: 2},
      {week: 'week 2', miles: 4},
      {week: 'week 3', miles: 6},
      {week: 'week 4', miles: 8}
    ]
];

var s1 = chart1.addSeries("Bill", dimple.plot.line); = data1[0];
var s2 = chart1.addSeries("Sarah", dimple.plot.line); = data1[1];

chart1.draw();