Class is in Session: CartoDB

A few weeks ago, some members of our Liquid Galaxy content team headed to Gilt, which hosted a class on CartoDB.

Since we deal with location-based data in our work with Liquid Galaxy, it seemed like the ideal class for learning more about this popular geospatial mapping tool. CartoDB is a cloud-based mapping solution that lets users easily manage data and create maps from it. The true allure of CartoDB is its ability to produce beautiful visualizations, which is why it’s often referred to as the “Instagram of mapping”.

The platform is built on open source software, including PostgreSQL and PostGIS. There is extensive use of JavaScript on the front end, and Node.js for the back-end APIs. CartoDB is packaged so that the user doesn’t need much coding experience, yet options still exist for those who want to go further with it using SQL and CSS.

Andrew Hill from CartoDB led the four-hour class and provided us with a detailed summary of CartoDB’s origins and its current wide use by companies like Twitter and The New York Times, as well as several nonprofit organizations. Every data set uploaded is an API endpoint, and SQL processes and packages the data any way you want, returning it as JSON. One of the coolest features of CartoDB is that the visualizations are dynamic: whenever the dataset is altered, the changes are automatically reflected in the map visualization.
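For example, once a table is uploaded, running an arbitrary query against it is just an HTTP request to CartoDB’s SQL API, which returns the matching rows as JSON. A sketch (the account and table names here are made up):

http://youraccount.cartodb.com/api/v2/sql?q=SELECT%20*%20FROM%20mytable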

We spent a significant amount of time creating our own visualizations, and I was astounded by the application's ability to deliver beautiful visualizations in a matter of seconds. Below is an example.

Have you used CartoDB in your applications? We'd love to see your examples!

YAPC::NA 2014, Day Three (and wrap-up)

My final day at YAPC::NA (Yet Another Perl Conference, North American flavor) is summarized below. (For Day Two and Day One, see the links.)

The conference organizers know their audience: day one started at 8 AM, day two at 9 AM, and the final day at 10 AM. (I took advantage of this lenient schedule to go for a swim in the hotel pool and grab a nice off-site breakfast. — Oh, dear, does my boss read this blog? :-)

I attended several medium-sized sessions during the day:

  • Designing and Implementing Web Data Services in Perl

    Michael McClennen, who programs for the Geology Department at the University of Wisconsin, outlined a flexible approach to providing an API to a complex data store. This supports a front-end used for displaying the location of fossils around the world, which is actually quite impressive. A typical URL for a data request looks like:

    http://paleobiodb.org/data1.1/colls/summary.json?
    lngmin=-180&lngmax=180&latmin=-90&latmax=90&limit=all&show=time&level=3&interval_id=14
    
    

    (The URL above is wrapped for our blogger format; there should be no whitespace anywhere in it.)

    This identifies a specific version of the service ("data1.1"), so that any backwards-incompatible changes to the API won't necessarily bring down a client. The requested format, JSON, is embedded in the request, too.

  • Dancer: Getting to Hello, World

    R Geoffrey Avery presented this talk, which was less about getting the application code together than about putting in place a fairly complex infrastructure:

    • PSGI
    • Plack
    • nginx
    • gcc
    • starman
    as well as setting up appropriate permissions, etc.

    Dancer was somewhat incidental to this talk. It's another web framework (a way of mapping structured URLs onto a web service, so that a given URL runs a given chunk of code and delivers output in a particular format: HTML, JSON, XML, or what-have-you). Setting it up on a bare system can be a chore, especially if you aren't a rugged, fearless system administrator type, so Avery gave us mortals a way to brute-force the installation without a lot of prior experience.
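    Once that infrastructure is in place, the application code itself is tiny. For reference, a minimal app in classic Dancer syntax looks something like this:

    use Dancer;

    # Map GET / to a code block that returns the response body
    get '/' => sub { return 'Hello, World!' };

    # Start the built-in development server
    dance;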

  • Templates as a service – with swig.js

    This was a remarkable, rapid tour of an effort by Logan Bell and others at Shutterstock, in which the swig.js template system is set up to be provided as a web service. In other words, a Dancer application runs JavaScript inside of Perl to fill out HTML templates, which get delivered back to an invoking Perl application (and then, one assumes, served up as the output of that application). I found all this just a bit mind-blowing, and I'm looking forward to giving it a try.

  • How Cognitive Linguistics Can Help You Become A (More) Bad-Ass Developer

    This was a great place to wind up my technical-talk journey, because it wasn't actually a technical talk at all.

    Aside: YAPC has adopted a special track of talks called "Awesome && !Perl" (awesome and not Perl), in which presenters can talk about pretty much anything they want to. One session featured instruction on how to roast your own coffee. Others were somewhat closer to the spirit of the conference, but all were awesome from what I heard.

    I can't do this talk justice in the short space I have, but let me try as follows: when you are thinking about an application, especially in designing it in an object-oriented fashion, you will quite often use a metaphor or a system of them: this object "is" a customer, that object "is" a purchase, etc. Then your system verbs reflect this metaphor:

    $customer->bill_for_service(@params);
    ...
    $purchase->refund($amount);
    ...
    
    and so on. This process goes on at almost every level of system design, and learning to recognize when we do it, and how to do it well, is an important insight.

YAPC wound down (if that can be said, given the pace) with a third series of lightning talks.

  • Donations were sought for YAPC administration, because the job of organizing the convention has become larger than one person's free time can support.
  • How to raise geek children.
  • A preview of and invitation to the Netherlands Perl Workshop in 2014.
  • And a review of an effort that normalized and centralized all the regular expressions in an enterprise into a Perl module: part numbers, lot numbers, equipment IDs, etc., all served up symbolically.

Finally, we came to not one but two keynote addresses. The first, by Sawyer X of Booking.com, asked us to remember and focus on the joy of what we do. We get bogged down in the details of programming and dealing with users, requirements, and the like, but at heart we are problem-fixers and puzzle-solvers. That should be enough to spur some feelings of joy in our hearts.

John Anderson gave the final address, which returned to the theme of "Perl as a dying language" vs. its maturity, and asked us to be ambassadors of Perl. In particular, he noted that while CPAN (which distributes free software modules to thousands of programmers) has been copied by other communities, the testing approach used to maintain those modules and their compatibility across versions of Perl has not been so widely adopted.

Summary Time

YAPC::NA 2014 recorded 366 registered attendees (299 paid or signed up to present), coming from 13 countries and 69 Perl local user groups ("Perl mongers", in the lingo). 127 talks were submitted, 97 accepted. There were at least a dozen things I knew of but could not attend due to a conflict. Difficult choices had to be made.

Can I summarize something like 30 hours of experience in a paragraph? No, but I can hope to convey how much fun it was. I came away armed with at least a half-dozen things I want to explore right away, today if I can fit it into my schedule. Another half-dozen got tucked away "just in case". I'd like to thank everyone involved in YAPC::NA for a job well done. And thanks to End Point for giving me the opportunity to explore this.

Please visit their website, yapcna.org, for links to the talk slides, and check out the video library of the talks; I know I will, if only because I couldn't be everywhere at once!

Rails Performance with Skylight

Back at RailsConf, I met a couple of the creators of Skylight.io, a recently launched Ruby on Rails profiler. I was eager to try it out after unconvincing experiences with New Relic, but first I had to get through a pretty big upgrade from Rails 2.3 to 4.1 on H2O. I survived the upgrade and moved on to profiling.

Installation of Skylight.io was super simple, and the installation screen provided real-time feedback during gem installation and configuration. The web app had data to share within a minute or so. At the moment, Skylight offers a free month-long trial to get started, with paid plans after that. Skylight reports request metrics (referenced by controller#action): requests per minute and time per request. It lets you sort results by those metrics combined (Response Time x Requests Per Minute = Agony), individually, or alphabetically. The Agony-sorted option highlights the actions that are candidates for the most impactful changes. One interesting note: because our application uses full page caching, the requests recorded by Skylight do not include static cached requests, so the Skylight data represents the work the Rails application does to generate non-cached content and apply various writes.


Skylight app screenshot offering various ways to sort requests. Controller names blurred out.

Once you drill down to a specific controller#action method, Skylight provides a waterfall of the various processes, including view rendering and database hits. It highlights potential problem areas where the same query executes repeatedly in one request (n+1). There's a lot of data and interactivity available in the waterfall view.


Skylight request waterfall screenshot. Table names blurred out.

With my recent upgrade from Rails 2.3 to Rails 4.1, I initially chose the simple, happy, and wise path of minimal refactoring, which did not take advantage of improved cache management in Rails 4 or eager loading strategies. Armed with Skylight metrics, I was able to apply a number of changes to improve performance, described below.

Data Model Challenges

Before I go into the performance details, I want to describe the inherent challenge in the application's data model, which combines nesting of listed items with polymorphism. In the diagram below, ItemA, ItemB, ItemC, and ItemD are all Rails models. A model of type ItemA has a list of items (of class ListItem). Each of those list items points to another item via a polymorphic relationship (to class ItemA, ItemB, ItemC, or ItemD). A nested referenced item can include further nesting. Nesting is allowed up to 4 levels deep, and infinite nested loops are not allowed. When the top-level ItemA loads, some metrics are pulled from the aggregate of all of its nested list items, which requires all nested items to be loaded from the database. Because of this nested data model, one must pay special attention to eager loading in Rails (via the includes() method or a default scope). In some cases eager loading of all nested items is necessary; in other cases it is only a performance burden when the data is not needed. This nested polymorphic data model has created some challenges in terms of performance and cache invalidation.

ItemA
  ListItem => ItemA
    ListItem => ItemA
      ListItem => ItemB
    ListItem => ItemA
      ListItem => ItemB
    ListItem => ItemB
  ListItem => ItemB
  ListItem => ItemC
  ListItem => ItemD
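
In Rails terms, the relationships described above might be modeled roughly like this (a sketch; the association names are assumptions, not the application's actual code):

class ItemA < ActiveRecord::Base
  has_many :list_items # the ordered list belonging to this item
end

class ListItem < ActiveRecord::Base
  belongs_to :item_a
  # Each list item points at an ItemA, ItemB, ItemC, or ItemD
  belongs_to :item, :polymorphic => true
end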

Repeating Queries

Skylight reported a number of (n+1) scenarios. This was a relatively simple improvement with a couple of changes:

  • updating the default_scope of various models to include frequently needed associations
  • updating specific queries to use the includes(:some_association) method to eagerly load those associations

I also found an opportunity via Sunspot, a Solr-based Rails search gem, to eagerly load associations on search result objects. Here are some code examples:

default_scope { includes(:some_association) } # example default scope in model
SomeModel.includes(:some_association).limit(5) # example eager loading associations on query
SomeModel.search(:include => :user) # example Sunspot search with eager loading

These updates proved extremely valuable in terms of minimizing database hits by reducing repeated queries.

Cache Management

One of the problem areas that Skylight highlighted was that many of our writes were taking quite a while. In the Rails 2.3 app, Rails Sweepers were used extensively to perform manual cache expiration after specific actions (e.g. create or update). With the update to Rails 4.1, the code can take advantage of better Rails cache key management as well as Russian Doll caching, eliminating the need for manual cache management. The application still uses full page caching in some instances, so some cache management is required to clear the fully cached pages, but the fragment cache management has improved dramatically.

# Example of cache key based on item only
<% cache(item) do -%>
# Stuff here
<% end -%>

# Example of cache key based on item and item.user
# Fragment cache will expire when item.user or item is updated
<% cache([item.user, "listed-item", item]) do -%>
# Stuff here
<% end -%>

Removing Unnecessary AJAX Requests

Although this wasn't a specific issue highlighted by Skylight, I did take the opportunity to investigate where AJAX requests could be reduced. In one case, this meant moving from an eager loading strategy via AJAX to a lazy loading strategy. In another scenario, it meant utilizing Rails' render_to_string method, which allowed me to return both view content and object data in a single JSON response, instead of making two requests that return different data types (JSON followed by HTML).

# Example of render_to_string to return HTML and JSON data from single AJAX request
content = render_to_string("path/to/view.html.erb", :locals => { :item => item })
render :json => { :some_key => item,
                  :content => content,
                  :other_data => some_other_data
                }

Database Management

Because Skylight highlighted a few areas where writes were taking excessively long, I revisited options for more efficient interaction with the database. This included changes such as using update_column instead of update_attribute, which eliminated redundant cache expiration logic, as well as minor changes to minimize the number of writes applied to the database where applicable. Those are both duh! updates to a seasoned Rails developer, but having a profiler point out the most agonizing requests (including updates) forced me to dig in deep on specific actions.
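
As a sketch of that first change (the model and column here are hypothetical): update_attribute skips validations but still runs callbacks and touches updated_at, which is what triggered the redundant cache expiration, while update_column writes straight to the database.

# Runs callbacks and touches updated_at, invalidating caches:
item.update_attribute(:view_count, item.view_count + 1)

# Writes the column directly, skipping callbacks and the timestamp touch:
item.update_column(:view_count, item.view_count + 1)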

Conclusion

My experience with Skylight has been positive. Skylight is opinionated about what data it provides, but what it does present is actionable, compared to other profilers that can present an overwhelming amount of information and metrics. Having provided feedback on Skylight myself, I know they are continuously making updates and improvements to the service. I definitely suggest trying Skylight out to profile your application.

YAPC::NA 2014, Day Two

YAPC::NA 2014 (in Orlando, FL) continues after a brief interruption for sleep ... see my previous post for the beginning of the story. And now, the exciting middle of our story:

This time I'm giving you a much less chronological treatment; instead, we're starting with the biggest impression, then moving on to less gripping but still important items. Remember, all talks are available on YouTube.

Charles Stross on the Future of Perl

The second day concluded with a keynote from another celebrity, author Charles Stross. Stross is no stranger to me, as I've read a book or two from his published works. (Hearing him speak will prompt me to add a few more to my Amazon wish list.) Stross has worked as a programmer (or as he put it, "I have been paid to argue with computers"), but nowadays he is entirely a writer of science fiction (or, "I tell lies for money").

His talk was a futurist's view of the Internet of Things, computer programming, and Perl. He noted that most technologies (e.g., railroads) go through a kind of sigmoid curve, which in computer technology has been called "Moore's Law", but which Stross feels is about to be shown to be no longer in effect, due to physical constraints on the manufacture of integrated circuits. (In a nutshell, we can't make chips a lot faster than they are now, because we'd have to etch the circuits at a scale below that of a single atom.)

So instead of speed, the immediate breakthroughs are going to be in price. Stross imagined a city where every sidewalk section of concrete had a solar-powered chip embedded in it with Bluetooth technology. That chip could interact with the chips in your child's clothing and the cars traveling in the street to predict and prevent a tragedy as the child chases a ball. Or it could receive data from the chip in your clothing to determine that your uneven gait indicated you were injured or incapacitated and summon assistance. Of course, it could also determine all sorts of other things about you that are nobody's business, but that's a different story.

Stross feels Perl is well-positioned to become (or actually, remain) the glue that holds these millions or billions of Internet-enabled things together. In 10 or 20 years as this world becomes the new normal, Perl will be a seasoned, mature technology with a proven record of doing this same job, just on a smaller scale. His observations were witty, thoughtful, and well received.

RapidApp: Turnkey Ajax-y Web Apps

Henry Van Styn gave an intriguing introduction to RapidApp, which is a Catalyst web application framework and application generator that creates full-featured database applications from nothing more than a database connection. If your database is well-designed, with good naming conventions, types, and foreign keys, you can spin up a CRUD-capable, professional-looking starter app in minutes. In fact, he did so live, during the talk.

Once you have such a starter application, it can be extended and refined on both the front end and the back end. Since it has fully RESTful URLs, you can even embed bits of it in other applications through the <iframe> tag.

DBIx::Class, the Perl ORM

Arthur Schmidt's presentation on DBIx::Class gave me an appetite to learn more about this system. It seems to take a much different approach than what I've used most, which is Rose::DB. My only disappointment was that the speaker had no great experience with Rose and could not compare and contrast the two, so I guess I'll have to do that on my own someday.
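
In the meantime, the basic flavor of DBIx::Class looks something like this (a sketch; the schema and table names are invented):

# Connect to a schema and query a result set
my $schema = My::Schema->connect('dbi:Pg:dbname=mydb');
my @chatty = $schema->resultset('Character')->search(
    { speechcount => { '>' => 100 } },
);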

Game Night at YAPC::NA

I am an inveterate game player. I have hundreds of them around the house, I organize game sessions whenever my friends' schedules permit, and twice a year I open my home to 30-50 people so we can play games from dawn until dusk. Imagine my joy at learning that my fellow Perl enthusiasts shared my other passion. Game Night took place in the very ballroom where we had been meeting all day. Well-supplied with fajitas and beer, a large majority of the attendees sat down for games of Fluxx, Cards Against Humanity, and Magic: the Gathering.

Liquid Galaxy Technology Showcase at Situation Interactive

Situation Interactive hosted its first annual Tech Showcase on June 10th, part of its ongoing event series promoting innovation through community collaboration.

End Point was chosen as one of three companies representing exciting, innovative technology, and we showed off our interactive display platform, the Liquid Galaxy, to Situation's internal staff and a select group of clients. Our presentation focused on various features of the Liquid Galaxy, including:

  • Google Earth navigation with digital content overlays
  • Panoramic photography and videos using krPano
  • 3D modeling within Google Earth
  • Customized tours

As you can see in the photos below, participants were able to view any part of Google Earth, pano photos, and even the interior of leading museums and coral reefs (as Google Street View expands beyond just the sidewalks).

This Tech Showcase gave us an opportunity to show off these new features of the Liquid Galaxy to potential partners and clients. We are always happy to work with our friends at Situation, who have been clients of ours for a few years now. We look forward to growing our relationships with other creative agencies and entertainment venues. Do you work at a creative agency? Do you think the Liquid Galaxy might fit in with your plans or designs? Please contact us at sales@endpoint.com.

YAPC::NA 2014, Day One

YAPC (Yet Another Perl Conference) is an annual gathering of Perl developers (and non-developers) to talk about Perl: how to do it, how to get other people to do it, and how we will all be doing it next year (or decade, if all goes well). There are flavors of YAPC set in North America, Europe, etc.

I attended my first-ever YAPC, starting today in Orlando, FL (which apparently makes me a VIP: a Very Important Perl-user, as the community stands on its head the idea that the old fogies are the important people; it's the new blood at the conference that gets them all excited).

In no particular order, here's what I remember of my whirlwind tour of YAPC::NA, Day One.

We were welcomed by Chris Prather, and informed that the conference would be live-streamed on the "yapcna" YouTube channel. Those videos are already up here, so you can follow along or take a detour to the several talks I had to miss.

Dan Wright, treasurer for the Perl Foundation, gave an overview of that virtuous organization's activities for the past year. Basically, they are the most visible philanthropic facet of the Perl community, giving grants to (among other things) support developers who are engaged in fundamental Perl support.

Mark Keating gave an energetic (almost frenetic) talk on the Life and Death of Perl: at various times in the last 10 years, Perl has been declared "dead", mostly due to the use of various (flawed and/or skewed) statistics, such as the negative growth in Perl jobs. However, Perl is still being written, written about, and talked about in great volumes: the reports of its demise can be traced to the downturn in programming jobs in general.

Larry Wall (yes, that Larry Wall -- author of the One True Programming Language) spoke at length about Perl RFCs: not to discuss the hundreds of features requested for the language, but to highlight some general principles that emerged as these features were proposed, considered, modified, accepted or rejected. For instance, "YAGNI": Ya Ain't Gonna Need It. Sometimes a feature seems so intrinsically cool that you just want to embrace it, but as a language maintainer you realize that its innate usefulness just "ain't gonna" crop up that often, so convoluting the syntax or semantics isn't worth the risk.

Much of Larry's talk dealt with the advent of Perl 6, which is coming soon and will shake up the language at least as much as Perl 5 did when Perl 4 was still what people used. Larry's "sacred" goal: to keep Perl as Perl-like as possible. Quote:

"We've got a golden opportunity to turn Perl into whatever we like. Let's not take it."

This was the first time I'd heard Larry speak. He touched briefly on his bout with cancer, and that he was now one year cancer-free -- which prompted a great, congratulatory outburst from the room.

From here, we jumped into the first round of lightning talks. I can't do them justice, as they were here and gone almost before I could write down the titles. One discussed a "universal" stemming library. Stemming is the process of linguistic analysis that finds a root word, usually for search indexing ("hacker", "hacked", and "hacking" would all be indexed under "hack"). The library is an attempt to put almost two dozen languages under one umbrella, so that code processing a natural language doesn't care which language it is.

Another talk gave advice on how to write about Perl for a Perl programming audience. Some advice was humorous and tongue-in-cheek: design your article title using the tried and true formula of "$N things every Perl programmer should know!", "$X ways to do $Y in Perl". Other bits were more to the point: have an opinion, don't just report the facts.

I was quite interested in a presentation about Perl on NetBeans, which is a kind of universal IDE for programmers (more than just an editor: a source code analyzer, a configuration manager, a code formatter, a documentation support tool, and more). It was particularly interesting because it was developed by an "outsider": not someone with decades of Perl experience, but someone with enough Perl background to know what to do, and a sufficient lack of expertise to know what a beginner needs. I look forward to installing NetBeans to see what it can do. (I've not had much patience with IDEs in the past, but I'm willing to give this one a bit of my time.)

DTrace was the subject of another short talk. This one focused on problem-solving (debugging), particularly for applications where a standard static approach isn't viable. For instance, a typical approach is to instrument an application with output statements, or to run it in a debugger (such as the capable Perl debugger). But if the problem is in a production system, or the buggy event is hard to predict or reproduce, DTrace can provide many more options. I wasn't able to follow everything here, but I caught some things about detecting events in the system that you normally wouldn't be able to instrument: e.g., when such-and-such a file is changed, log a stack trace. This feature allowed the presenter to track down a bug that wasn't even caused by the code: the file was being altered correctly by the system, but the Puppet system administration configuration was periodically replacing the correct file with a "vanilla" version!

The last of the talks I will report on here concerned a Perl module called DBIx::Introspector. This provides a means for investigating a database connection (or definition) to determine what type of database it is (e.g., Postgres vs. MySQL). This may sound trivial (why would you not know what your database is?), but in fact since database implementation details can be different in crucial ways (for instance, SQL syntax), any sort of generic database-agnostic module (ORM, utility, etc.) may need to have DB-specific code abstracted out. In an example close-to-home, our very own DevCamp tool has need of this sort of abstraction.

Day 2 promises to be just as action-packed. I'm live-tweeting this via @murwiz, and the hashtag #yapcna is our unifying battle-cry.

Version 5 of Bucardo database replication system


Goat & Kid by Flickr user Bala Sivakumar

Bucardo 5, the next generation of the async multimaster replication system, has been released. This major release removes the previous limitation of two source databases, allowing you to have as many sources (aka masters) and as many targets (aka slaves) as you wish. Bucardo can also replicate to other targets, including MySQL, MariaDB, Oracle, SQLite, MongoDB, and Redis. Bucardo has been completely rewritten, and this version is more powerful and efficient than the previous one, known as Bucardo 4. You can always find the latest version by visiting the Bucardo wiki.

This article will show a quick demonstration of Bucardo. Future posts will explore its capabilities further: for now, we will show how easy it is to get basic multimaster replication up and running.

For this demo, I used a quick and disposable server from Amazon Web Services (AWS, specifically a basic t1.micro server running Amazon Linux). If you want to follow along, it's free and simple to create your own instance. Once it is created and you have SSH'ed in as the ec2-user account, we can start to install PostgreSQL and Bucardo.

# Always a good idea:
$ sudo yum update
# This also installs other postgresql packages:
$ sudo yum install postgresql-plperl
# Create a new Postgres cluster:
$ initdb btest

We cannot start Postgres up yet, as this distro uses both /var/run/postgresql and /tmp for its socket directory. Once we adjust the permissions of the first directory, we can start Postgres, and then create our first test database:

$ sudo chmod 777 /var/run/postgresql
$ pg_ctl -D btest -l logfile start
$ createdb shake1

Next, we need something to replicate! For a sample dataset, I like to use the open source Shakespeare project. It's a small, free, simple schema that is easy to load. There's a nice little project on GitHub that contains a ready-made Postgres schema, so we will load that into our new database:

$ sudo yum install git
$ git clone -q https://github.com/catherinedevlin/opensourceshakespeare.git
$ psql shake1 -q -f opensourceshakespeare/shakespeare.sql
# You can safely ignore the 'role does not exist' errors

We want to create duplicates of this database, to act as the other sources. In other words, other servers that have the identical data and can be written to. Simple enough:

$ createdb shake2 -T shake1
$ createdb shake3 -T shake1

Bucardo has some dependencies that need installing. You may have a different list depending on your distro: this is what Amazon Linux needed when I wrote this. (If you are lucky, your distro may have Bucardo already available, in which case many of the steps below can be replaced with e.g. "yum install bucardo"; just make sure it is version 5 or better, e.g. by checking "yum info bucardo".)

$ sudo yum install  perl-ExtUtils-MakeMaker  perl-DBD-Pg \
> perl-Encode-Locale  perl-Sys-Syslog perl-boolean \
> perl-Time-HiRes  perl-Test-Simple  perl-Pod-Parser
$ sudo yum install cpan
$ echo y | cpan DBIx::Safe

The Perl module DBIx::Safe was not available via yum on this system, hence the install through CPAN. Once all of that is done, we are ready to install Bucardo. We'll grab the official tarball, verify it, untar it, and run make install:

$ wget -nv http://bucardo.org/Bucardo.tar.gz
$ wget -nv http://bucardo.org/Bucardo.tar.gz.asc
$ gpg -q --keyserver pgp.mit.edu --recv-key 14964AC8
$ gpg --verify Bucardo.tar.gz.asc
$ tar xfz Bucardo.tar.gz
$ ln -s Bucardo-5.0.0 bucardo
$ cd bucardo
$ perl Makefile.PL
$ make
$ sudo make install

Let's make some small adjustments via the bucardorc file (which sets some global information). Then we can run the "bucardo install", which creates the main bucardo database, containing the information the Bucardo daemon will need:

$ mkdir pid
$ echo -e "piddir=pid\nlogdest=." > .bucardorc
$ bucardo install --batch --quiet
Creating superuser 'bucardo'

Now that Bucardo is installed and ready to go, let's set up the replication. In this case, we are going to have three of our databases replicate to each other. We can do all of this in only two commands:

$ bucardo add dbs s1,s2,s3 dbname=shake1,shake2,shake3
Added databases "s1","s2","s3"
$ bucardo add sync bard dbs=s1:source,s2:source,s3:source tables=all
Added sync "bard"
Created a new relgroup named "bard"
Created a new dbgroup named "bard"
  Added table "public.chapter"
  Added table "public.character"
  Added table "public.character_work"
  Added table "public.paragraph"
  Added table "public.wordform"
  Added table "public.work"

With the first command, we told Bucardo how to connect to three databases, and gave them names (s1, s2, s3) for Bucardo to refer to them by. You can also specify the port and host, but in this case the default values of 5432 and no host (Unix sockets) were sufficient.
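
For example, adding a fourth database on a remote host might look like this (the hostname here is made up):

$ bucardo add db s4 dbname=shake4 host=db4.example.com port=5433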

The second command creates a named replication system, called a sync, and assigns it the name "bard". It needs to know where and how to replicate, so we tell it to use the three databases s1,s2, and s3. Each of these is to act as a source, so we append that information as well. Finally, we need to know what to replicate. In this case, we simply want all tables (or to be more precise, all tables with a primary key or a unique index). Notice that Bucardo always puts databases and tables into named groups - in this case, it was done automatically, and the dbgroup and relgroup are simply named after the sync.

Let's verify that replication is working, by checking that a changed row replicates to all systems involved in the sync:

$ bucardo start
$ psql shake1 -c \
> "update character set speechcount=123 where charname='Hamlet'"
UPDATE 1
$ for i in {1,2,3}; do psql shake$i -tc "select \
> current_database(), speechcount from character \
> where charname='Hamlet'"; done | grep s
 shake1       |     123
 shake2       |     123
 shake3       |     123

We can also take a peek at the Bucardo log file, "log.bucardo", to see the replication happening:

$ tail -2 log.bucardo
(25181) KID (bard) Delta count for s1.public."character": 1
(25181) KID (bard) Totals: deletes=2 inserts=2 conflicts=0

There are two deletes and two inserts because the changed row was first deleted from, and then inserted (via COPY, technically) into, the other two databases. Next, let's see how Bucardo handles a conflict. We will have the same row changed on all the servers, which should lead to a conflict:

$ for i in {1,2,3}; do psql shake$i -tc \
> "update character set speechcount=$i$i$i \
> where charname='Hamlet'"; done
UPDATE 1
UPDATE 1
UPDATE 1

Looking in the logs shows that we did indeed have a conflict, and that it was resolved. The default conflict resolution declares the last database to be updated the winner. All three databases now have the same winning row:

$ tail log.bucardo
(25181) KID (bard) Delta count for s1.public."character": 1
(25181) KID (bard) Delta count for s2.public."character": 1
(25181) KID (bard) Delta count for s3.public."character": 1
(25181) KID (bard) Conflicts for public."character": 1
(25181) KID (bard) Conflicts have been resolved
(25181) KID (bard) Totals: deletes=2 inserts=2 conflicts=1

$ for i in {1,2,3}; do psql shake$i -tc \
> "select current_database(), speechcount \
> from character where charname='Hamlet'"; done | grep s
 shake1       |     333
 shake2       |     333
 shake3       |     333

Sometimes while developing this demo, Bucardo was so fast that conflicts did not happen: because the updates were sequential, Bucardo could replicate one change before the next update occurred. The ability to pause a sync can be very handy here, as well as at other times when you need a sync to temporarily stop running:

$ bucardo pause bard
Syncs paused: bard
$ psql shake1 -c "update character set speechcount=1234 where charname='Hamlet'"
UPDATE 1
$ psql shake2 -c "update character set speechcount=4321 where charname='Hamlet'"
UPDATE 1
$ bucardo resume bard
Syncs resumed: bard

$ tail log.bucardo
(27344) KID (bard) Delta count for s1.public."character": 1
(27344) KID (bard) Delta count for s2.public."character": 1
(27344) KID (bard) Conflicts for public."character": 1
(27344) KID (bard) Conflicts have been resolved
(27344) KID (bard) Totals: deletes=2 inserts=2 conflicts=1

There is a lot more to Bucardo 5 than what is shown here. Future posts will cover other things it can do, from replicating to non-Postgres systems such as Oracle, MySQL, or MongoDB, to using custom conflict resolution mechanisms, to transforming data on the fly while replicating. If you have any questions, use the comments below, or drop a line to the Bucardo mailing list at bucardo-general@bucardo.org.

This major release would not have been possible without the help of many people over the years who have contributed code, raised bugs, tested Bucardo out, and asked (and/or answered!) important questions. Please see the Changes file for a partial list. Thanks to all of you, and special thanks to Jon Jensen, who started this whole project, many moons ago.

Laziness is a virtue

Laziness is a virtue. Blessed Saint Larry told me so. And yet I am of little faith ...

Here I was, banging my head on the keyboard, trying to solve a vertical alignment issue. (The following is a bit simplified for presentation here.)

<td>
<ul class="floaty">
<li> Item 1</li>
<li> Item 2</li>
</ul>
<div class="sticky">
blah blah blah
</div>
</td>

The <ul> element was supposed to float to the right of the cell, and its individual <li> elements also floated, so that the list would fill up from right to left as more items were added:
One item:
  • 1
Two items:
  • 1
  • 2
Three items:
  • 1
  • 2
  • 3

etc., while the "sticky" div was supposed to stay on the left. The challenge was when the div got too tall, or the number of list items caused it to wrap around to a new row; the vertical alignment to keep everything nice and centered is probably achievable in CSS, but I decided to be lazy:

<td style="vertical-align: middle">
<div class="sticky">
blah blah blah
</div>
</td>
<td style="vertical-align: middle">
<ul class="floaty">
<li> Item 1</li>
<li> Item 2</li>
</ul>
</td>

Duh. Seriously. Table cells do a bang-up job of vertical alignment under the worst of conditions. And I'm already in the middle of a table, so the work here was just a matter of adding the appropriate colspan attributes elsewhere to account for my column's bifurcation.

P.S.: I love the word "bifurcate" and work it into conversation when I can.

Liquid Galaxy engineer job opening

We are looking for a full-time, salaried engineer to help us further develop our software, infrastructure, and hardware integration for the Liquid Galaxy created by Google. Liquid Galaxy is an impressive panoramic system for Google Earth and other applications.

What is in it for you?

  • Work from your home office, or from our offices in New York City or Tennessee (Tri-Cities area)
  • Flexible full-time work hours
  • Benefits including health insurance and 401(k) retirement savings plan
  • Annual bonus opportunity
  • Ability to move without being tied to your job location

What you will be doing:

  • Develop new software involving panoramic video, Google Earth, content management, Interactive Spaces, and ROS (Robot Operating System)
  • Improve the system with automation, monitoring, and customizing configurations to customers’ needs
  • Provide remote and occasional on-site troubleshooting and support for Liquid Galaxy at customer locations
  • Build tours and supporting tools for emerging markets

What you will need:

  • Strong programming experience with Java, Python, C/C++, Ruby, Perl, and/or shell
  • Experience with automation tools such as Chef, Ansible, Salt, and Puppet
  • Linux system administration skills
  • Sharp troubleshooting ability
  • A customer-centered focus
  • Strong verbal and written communication skills
  • Experience directing your own work, and working remotely as part of a team
  • Enthusiasm for learning new technologies

Bonus points for experience:

  • Contributing to open source projects
  • Working with geospatial systems, SketchUp, Google Building Maker, Blender, 3D modeling
  • Packaging software (e.g. dpkg/apt or RPM/Yum), building custom OS images, PXE booting
  • Doing image and video capture and processing, or working with kernel drivers
  • Using SQL and databases (relational or non-relational)
  • With Web server and client technology (HTML, CSS, JavaScript, etc.)

About us

End Point is a technology consulting company founded in 1995 and based in New York City, with 35 full-time employees working mostly remotely from home offices. We serve over 200 clients ranging from small family businesses to large corporations, using a variety of open source technologies. Our team works together using ssh, Screen and tmux, IRC, Google+ Hangouts, Skype, and even regular phones.

How to apply

Please email us an introduction to jobs@endpoint.com to apply. Include a resume and your GitHub or other URLs that would help us get to know you. We look forward to hearing from you!

SELinux, PHP and FTP issues

Sometimes it feels like working with SELinux is much like playing Whack-A-Mole: you manage to squash one bug or issue and another appears elsewhere.
A similar situation happened to one of our customers when he tried connecting via FTP from his PHP code (through Apache).
After much debugging and a lot more Googling, it turned out to be just a matter of enabling the right SELinux boolean setting.

In order to verify that it really is SELinux's fault, we usually keep an eye on the "/var/log/audit/audit.log" log file and then temporarily set SELinux to "Permissive" mode with:

setenforce 0

In our case things started working as expected, so we knew that it was SELinux's fault, though we had no "AVC (denial)" error in the audit.log file, in either Enforcing or Permissive mode.
When this kind of situation happens, it's usually a matter of finding which SELinux booleans need to be toggled.
To discover which SELinux boolean is blocking the wanted behavior, we need to temporarily disable the "dontaudit" setting by using:

semodule -DB

and then continue looking at the audit.log file. In our case we found that the setting of interest was "httpd_can_network_connect".
First we verified that it really was set to off:

getsebool httpd_can_network_connect

If it is actually set to "off", then go on with the next steps; otherwise you'll probably need to investigate somewhere else.
Next, set the SELinux boolean to "on" and put SELinux back into "Enforcing" mode by running:

setsebool httpd_can_network_connect=1
setenforce 1

Now check again that the code is still running as expected, and if so, make the SELinux boolean persist across reboots:

setsebool -P httpd_can_network_connect=1

If you toggled the "dontaudit" setting, remember to re-enable it or you'll end up with a very noisy log file:

semodule -B

If everything went well, your PHP code connecting via FTP should now be working. If that's not the case, keep searching for errors, and let us know in the comments what your problem was.

Feel free to skim through our other articles for some ideas and hints:

* SELinux fix for sudo PAM audit_log_acct_message() failed
* SELinux and the need of talking about problems
* SELinux Local Policy Modules
* Passenger and SELinux

DAD Trouble

I never thought I'd say it, but these days technology is simply moving too fast for DAD. It's just the way it is. Of course it's not DAD's fault, it's just the world doesn't want to wait.

Before I get to that, I want to mention some trouble we'd recently started seeing with nginx failing to start on boot. It's just been on our most recently obtained servers, both Debian-based (including Ubuntu) and RHEL-based installations. Some were Linode VMs, others were bare-metal hardware systems. After boot, once we got in to see what was happening, nginx would happily start manually. The only clue was one line that had been left in the error log:

2014/06/14 23:33:20 [emerg] 2221#0: bind() to [2607:f0d0:2001:103::8]:80 failed (99: Cannot assign requested address)

And it wasn't just nginx; Apache httpd in one instance gave us similar trouble:

Starting httpd: (99)Cannot assign requested address: make_sock: could not bind to address [2600:3c00::f03c:91ff:fe73:687f]:80
no listening sockets available, shutting down

As an interim fix, since at the moment these systems only had one IPv6 address each, we told nginx or httpd to listen on all addresses. But not liking to leave a mystery unsolved, once we were able to schedule a long enough maintenance window to reboot a system a few times and see what was going on, we found that the interface was in a "tentative" state for a short interval.
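
You can watch for that state with iproute2; while DAD is still in progress, the address is flagged as tentative (output abridged):

$ ip -6 addr show dev eth0
    inet6 2600:3c00::f03c:91ff:fe73:687f/64 scope global tentative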

That was the clue we needed. For some reason, the boot process was allowed to continue before DAD (Duplicate Address Detection) had a chance to decide whether the interface was allowed to use the provided IPv6 address. It's probably been happening all along, but the servers affected earlier just didn't boot fast enough to try binding before the interface was ready. Now things are faster, and service start-up was winning the race.

For us, the addresses are either static or autoconfigured, and we're confident that a duplicate address situation won't be a problem. So we turned off dad_transmits by setting this in sysctl.conf:

net.ipv6.conf.all.dad_transmits = 0


Success! No more bind problems on boot preventing a service from starting.

Incidentally, I believe this has been solved in Debian by making the interface wait until it's out of the "tentative" state, but it doesn't look like that's been backported to the current stable release. It should be in Ubuntu as of the current LTS (14.04), however.

Happy Father's Day!

Integrating Facebook SDK and HybridAuth PHP library

There are a few different libraries out there for integrating your site with Facebook and other social networking sites. I recently added "Login with Facebook" to a client's PHP site utilizing the Facebook JavaScript SDK. The documentation on Facebook's site is pretty good (although it could use a few more examples). Beyond the login feature, this client also wanted to offer a checkbox for "Post a message to Facebook about your order", and the way they wanted it done required a PHP library to make calls to the Facebook Graph API directly.

I chose to use the HybridAuth PHP library which is a wrapper for integrating many different social networking sites using a plugin system (Facebook, Twitter, Google, other OpenID services, etc). Likewise, the docs for HybridAuth were sufficient to get the examples up and running for me. The problem was that none of the examples or documentation fit my scenario, where I already have the login set up and working with the JavaScript SDK but want to utilize the PHP library for posting to a user's feed.

When attempting to connect to Facebook with HybridAuth, it kept trying to log the user in again. The main problem was that the access token received from the JavaScript SDK was not getting passed in to HybridAuth correctly, so it was attempting to get a new access token. The solution I finally got working seems a little dirty (it doesn't go through a standard method call or documented API), but it works. Here is the snippet of PHP code that sets the Facebook access token so that HybridAuth will use it instead of fetching a new one:

    // Connect to Facebook and get the user's profile
    try {
        $hybridauth = new Hybrid_Auth( $config );

        // Set some session variables needed for HybridAuth
        Hybrid_Auth::storage()->set( 'hauth_session.facebook.is_logged_in', 1 );
        Hybrid_Auth::storage()->set( 'hauth_session.facebook.token.access_token', $_POST['fb_access_token'] );
        $hybrid_config = require $config;
        $fb_config     = $hybrid_config['providers']['Facebook'];
        $fb_app_id     = $fb_config['key']['id'];
        $_SESSION['fb_'. $fb_app_id .'_access_token'] = $_POST['fb_access_token'];
        $_SESSION['fb_'. $fb_app_id .'_user_id']      = $_POST['fb_uid'];

        // Now we connect to FB using the given access token for this user
        $adapter      = $hybridauth->getAdapter( "facebook" );
        $user_profile = $adapter->getUserProfile();
    }
    catch( Exception $e ) {
        return 'FB Connection Failed: got an error! error message=' . $e->getMessage() . ', error code='. $e->getCode();
    }
    ...

When a user successfully logs in with the JavaScript SDK, you are given an authResponse object which contains, among other things, the user's Facebook ID and an access token. I pass these two values to PHP using an HTTPS ajax call (a POST, as you can see above). These two values are needed by HybridAuth and are stored in certain session variables, which must be set before you call the $hybridauth->getAdapter() method. The order is important; otherwise HybridAuth won't use the access token you've set and will treat the user as not yet logged in.
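
For completeness, the JavaScript side might look something like this. This is a sketch: the endpoint name and the use of jQuery are assumptions, while FB.login and the authResponse fields come from the Facebook JavaScript SDK:

    FB.login(function(response) {
        if (response.authResponse) {
            // Pass the access token and user ID to the PHP back end over HTTPS
            $.post('/facebook_connect.php', {
                fb_access_token: response.authResponse.accessToken,
                fb_uid:          response.authResponse.userID
            });
        }
    }, { scope: 'publish_actions' });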

Android Developer Tools via Google Chrome

Recently I was working on a website on my Android phone, and I found myself needing Chrome's Developer Tools. However, Developer Tools are not included in the Android version of Chrome for many reasons, including lack of screen real estate.
So, I looked around, and I found a solution: using a USB cable and ADB (Android Debug Bridge), you can do debugging on an Android device with Chrome's Developer Tools from your desktop.

To show you exactly what I mean, here's a short video demonstrating this:

So, how does one work this magic? There are several ways, but I'll talk about the one that I used. For this method, you need to have Google Chrome version 31 or higher installed on both your Android device and your development machine.

First, you have to enable Android debugging on your device. From android.com:

  • On most devices running Android 3.2 or older, you can find the option under Settings > Applications > Development.
  • On Android 4.0 and newer, it's in Settings > Developer options.
    • Note: On Android 4.2 and newer, Developer options is hidden by default. To make it available, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.

Next, connect your device with a USB cable and, on your development machine, go to about:inspect and check Discover USB Devices. After a second or two, your device should show up like this:

To open Dev Tools for a tab, just click "inspect" below it. The buttons next to "inspect" only appear for tabs open in Chrome, but you can open Developer Tools for any app that uses WebView, whether it's in Chrome or not.

And there you go! Fully featured Chrome Developer Tools for your Android device on your development machine. More information, including a way to do this with earlier versions of Chrome, can be found at the Android Developer site.

Why Can't I Edit this Database Table? Don't Forget the Client!

A client of mine recently informed me of an issue he'd been having for years: he was unable to edit a single table in his database. He uses Access to connect to a MySQL database via ODBC, and his database has a few dozen tables, all of which are editable except this one. He reported that when trying to edit just this one table, he could put the cursor into any of the fields, but any attempt to change the data was blocked. As he put it, "It's like the keyboard won't respond."

We confirmed through conversation that the issue was not a MySQL permissions problem (not that I would have expected MySQL permissions to result in such client behavior). We also confirmed that when using a different application to connect to MySQL with Perl's DBI, the table was editable just like the rest of the database. At this point I didn't have any good suspects (as neither Access nor ODBC is my strong suit) and agreed to bring the issue up with the rest of the End Point engineering team.

After sending out a description of the problem, it wasn't long before Josh Williams responded. He had seen this sort of behavior with Access before: the client will lock out a table that does not have a unique key defined. Not surprisingly, it turned out this particular table's implied primary key was in fact a non-unique index. I applied a primary key to the field, dropped the original index, and received confirmation from the client that the table was now editable like the rest of the database.
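
For the record, the fix was a single statement along these lines (the table, column, and index names here are made up):

ALTER TABLE orders
  DROP INDEX idx_order_id,
  ADD PRIMARY KEY (order_id);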

Setting up your database is a combination of server and client behaviors. While most of the focus goes into configuring the server, if you encounter unusual circumstances, don't forget the possibility that a given client may also have requirements impacting actions normally confined to the realm of the server.

OpenWest Conference Recap

A few weeks ago, the Utah Open Source Foundation put on its seventh annual conference, known as OpenWest. Spencer already wrote about his experience at the conference. Family concerns kept me from attending much of it, so as time has permitted I've been reviewing some of the conference videos as they've come out. The schedule demonstrates a promising evolution as the conference expands and improves. The early years' schedules always struck me as a bit heavy on front-end development and a limited set of currently popular technologies, and necessarily so, given the smaller base of attendees and supporters. But recent years and increasing maturity have brought a very well-rounded conference, and this year tickets sold out.

This year's keynotes included Utah's enthusiastic Lieutenant Governor, Spencer Cox, speaking on technology in the state. Though this is a regional conference with attendees from all over the western United States, the issues in question cross state lines as governments turn increasingly to technology, and infrastructure ties together even the very remote and rural areas that comprise much of the West. Cox's video, available here, describes the growth of internet service in Utah as seen through the eyes of CentraCom, a communications company Cox's family helped found.

The next keynote came from OpenWest regular Pete Ashdown, founder and sole owner of conference sponsor XMission, a Utah internet service provider. In the media attention surrounding the National Security Agency, Ashdown and XMission received quite a bit of publicity for making clear their policy to ignore government requests for ISP data except when accompanied by what Ashdown called "proper warrants", and his keynote on Internet Liberty was one I was sad to miss in person. Video of his talk is available here.

My own presentation at the end of the first day covered the essentials of dimensional modeling, as described in a blog post I wrote some time ago. Especially considering the constant hype around "big data" in recent years, the relatively little attention paid to properly modeling those data within a relational database in useful ways is surprising. This may reflect the fact that levels of hype and levels of intellectual rigor don't necessarily correspond, but in larger part it demonstrates the spread of non-relational databases into the "big data" field.

The conference traditionally incorporates a "family day", where topics extend beyond software development into ... whatever the conference organizers are willing to accept. In this year's conference swag, Utah "maker space" theTransistor added an Arduino-based kit for attendees to solder together. The family day track also featured presentations and labs specifically for younger nerds, beginner development classes, and interesting projects that don't really fit in other tracks. My wife and I held a workshop on fermented foods, covering stuff like sauerkraut and sourdough, while other sessions included the traditional annual PGP keysigning, workshops from Perl luminary Mark Dominus, and a full "Young Technologist" track designed both to help kids get into technology, and to support parents in caring for their geeky progeny.

I've enjoyed watching OpenWest develop from a relatively small conference of limited focus into a well-organized regional educational experience. Thanks to its sponsors for their support, and its organizers for their excellent work.

Elixir — a step in a never ending journey

Every now and then a new programming language is born. In fact, since the not-so-distant introduction of the earliest programming languages, we’ve got about 693 of them! (At least that’s what Wikipedia says.)

Why can’t we settle for just one, or at least just a handful? Creating a new programming language certainly isn’t the easiest task on earth. It’s one thing to have fun with syntax and lexers, but something completely different to provide all the tooling and libraries. In fact, programming language authors are held hostage by their own creations: there’s always a multitude of things to do, which makes leading such a project basically a full-time job.

Why are those languages sprouting all the time then? The answer is simple: out of necessity.


Pitfalls of computer programming


Most of today’s mainstream programmers choose object-oriented programming as their paradigm of choice. It solves the problems of procedural programming… we could say: in a classy way. You can find its advocates everywhere. In fact you don’t even need to search — they will yell at you from just about every corner of the Internet.

Truth be told, it’s one of the things that makes producing new software possible. Some of today’s projects couldn’t even be created with procedural programming at the same cost; the tax of complex architecture would simply be too high. That’s especially true for commercial products, where time and budget play a crucial role.

It’s equally true that this approach produces its own set of traps to fall into. Even though you’ve got a plethora of design patterns at your disposal (http://www.oodesign.com/), you will still fall into some of these traps.

The reason is that the whole concept is by its nature very bug-prone. There are a couple of things that get us into trouble. One is the inability to quickly and clearly reason about the flow of programs. In the object-oriented world, the flow of a higher-level action may be buried in a large nest of class definitions spanning several objects. In every class definition, the result and effects of a method may affect the results and effects of many other methods in different objects. Even at the level of one class, the result and effects of a method call depend on the state of that object. This covers processes with a fog that cannot be cleared even after hours of testing and debugging. We have to pay this mental-energy tax just because we’re using the most widely applauded technology.


Hello?… Real world here… You are a business!


In a computer programmer's paradise there are no deadlines. There aren't any troublemaking users either. You could spend 10 years developing something you believe in, collect a Nobel Prize for how awesome it is, and then shut the whole thing off before you break anything. You could make the code as clear as you like. Your programming heroes would gaze at you in amazement from the covers of their books. All design patterns used. The code written in the Right Way.

Here's a newsflash for you, though: you are a business! Even if you just work for a business, you still have deadlines, no? You still have to deal with users and investors. And your realistic estimates almost never sound reasonable to them. Welcome to the real world.

Almost everything you do has a price tag. Time rules software projects; in fact, there is no such thing as free time. But it's not only about time: silly bugs can have a profound impact on how a business is perceived. Will users come back to a web shop after seeing an HTTP 500 error while trying to check out?

At the end of the day, all we do is business. That makes certain features more important than others when it comes to choosing technologies. As far as I understand it, three of the most important factors are:

  • time-to-market
  • maintainability
  • difficulty of shooting yourself in the foot

When I was learning Haskell a while ago, I was mostly concerned with this third factor. Everyone knows how easy it is to shoot yourself in the foot with a dynamically typed programming language. At the same time, a very strict type system can make productivity drop like a rock. Haskell seemed to have a great solution to this dilemma. Being a functional language, it allows you to narrow or widen the range of types a function can operate on. It's polymorphism Done Right: it doesn't restrict functions to a certain branch of the type hierarchy the way OOP languages do, and with type classes you can also keep them extensible in the future. I won't give any examples; you can look them up on the Internet if you want. The message is simple: Haskell makes you productive and makes your results correct at the same time.

All this holds true for simple code. In the real world, you'll have lots of app-specific type classes and lots of type aliases just to stay sane while reading function signatures. The learning curve is also pretty dramatic. For simple cases it's enough to use monads; of all of category theory, knowing monads and functors is all one needs to write very simple Haskell programs. But when you need to create something more complex, you need a good understanding of the rest, and you have to spend a lot of time planning out the right architecture.

Looking at what it takes with Yesod (Haskell's most popular web framework) to achieve results similar to what one can achieve in Rails, the time-to-market cost is ridiculously high. And what kind of innovation is that, if we get worse results? It is true that once your Haskell code compiles, it is very probably bug-free, and you cannot say the same about any Ruby code, even with thousands of lines of unit tests.


Getting the competitive edge


About two years ago, a new programming language was born. As usually happens, it didn't introduce any new paradigm. In fact, it was built around ideas known from languages that have been around for almost 30 years now.

Elixir is a programming language built for the Erlang virtual machine. Its compiler produces the same kind of BEAM bytecode that comes out of Erlang's compiler. The two languages are very much alike, and libraries developed in one of them can easily be used in the other. Elixir, however, adds a couple of niceties that Erlang doesn't have. It also features a nice, clean syntax that many Ruby programmers can immediately get.
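
To make that interoperability concrete, here's a minimal sketch (illustrative snippets, not from any particular project). Erlang's standard library modules are reachable from Elixir as atom-named modules:

    # Erlang standard library modules appear in Elixir as atoms
    # like :math, :lists, and :crypto. Calling them is an ordinary
    # function call, not a foreign-function bridge, because both
    # languages compile to the same BEAM bytecode.
    :math.pi()                  #=> 3.141592653589793
    :lists.reverse([1, 2, 3])   #=> [3, 2, 1]
    :crypto.hash(:md5, "hello") #=> a 16-byte binary digest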

Above all, it enhances a developer's toolbox with tools that improve all three aspects of a good business technology. Created by José Valim, one of the best-known open source contributors in the Rails world, it includes a great set of features that make Rails developers feel almost "at home". Because it is a functional language, though, it significantly reduces the number of potential kinds of bugs. Its polymorphism model isn't based on type classes but on easy-to-grasp protocols, sketched below. And thanks to the philosophy inherited from Erlang, maintainability reaches new heights.
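
Here's a minimal sketch of a protocol; the Describable protocol and its implementations below are hypothetical, made up purely for illustration:

    # A protocol defines a set of functions; any data type can opt in
    # later by providing an implementation, so dispatch stays open for
    # extension without a class hierarchy.
    defprotocol Describable do
      @doc "Returns a short, human-readable description of a value."
      def describe(value)
    end

    defimpl Describable, for: Integer do
      def describe(n), do: "the integer #{n}"
    end

    defimpl Describable, for: List do
      def describe(list), do: "a list with #{length(list)} element(s)"
    end

    Describable.describe(42)      #=> "the integer 42"
    Describable.describe([1, 2])  #=> "a list with 2 element(s)"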

Superficially, one might think of Elixir as a kind of marriage between Ruby and Haskell. Ruby, being a very dynamic language, enhances a developer's productivity in many ways, and its syntax allows the creation of custom DSLs, which boost productivity even more. Haskell, as a functional language, makes the test-and-fix phase of projects significantly shorter. As with other functional languages, the resulting code tends to be very terse and easy to grasp just from looking at it. In fact, some libraries aren't all that well documented, but it's not hard to work out what they do just from quickly skimming the code.
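
Much of that DSL-building power comes from Elixir's Lisp-style macros, which come up again below. As a toy sketch, here's a hypothetical my_unless macro that re-creates Elixir's own unless:

    defmodule MyMacros do
      # A macro receives the syntax tree of its arguments and returns
      # new syntax, so even control flow can be defined in a library.
      defmacro my_unless(condition, do: block) do
        quote do
          if unquote(condition) do
            nil
          else
            unquote(block)
          end
        end
      end
    end

    # Macros must be required so they are available at compile time.
    require MyMacros
    MyMacros.my_unless(1 > 2, do: IO.puts("one is not greater than two"))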

Elixir shares these benefits, but it takes a developer's productivity to the next level by introducing Lisp-style macros like the toy one sketched above. It also shares the same base infrastructure libraries that Erlang has, making it an amazing fit for developing highly available solutions. Erlang-based solutions are known to achieve insanely high reliability. Joe Armstrong, a creator of the Erlang language, once said:

The AXD301 has achieved a NINE nines reliability (yes, you read that right, 99.9999999%). Let’s put this in context: 5 nines is reckoned to be good (5.2 minutes of downtime/year). 7 nines almost unachievable ... but we did 9.

Why is this? No shared state, plus a sophisticated error recovery model.

You can achieve the same with Elixir. Because it runs on the same virtual machine, it features the same hot code reloading ability, which means you don't have to shut down your application just to introduce a quick patch or enhancement. Thanks to the famous OTP library, you can use so-called supervisors too. They define strategies for dealing with failures, e.g. automatically restarting a part of the app that has just crashed. This is very inexpensive because, in Erlang and Elixir, applications are built out of very lightweight processes. Processes do not share any memory with one another, which keeps them loosely coupled; they communicate only through messages, and a sender doesn't need to know whether the receiver lives on the same machine or on another one in the network. All this makes writing highly scalable solutions a cinch.
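
Here's a minimal sketch of that message-passing model (a toy example, not production supervision code):

    # Spawn an isolated, lightweight process and exchange messages
    # with it. Processes share no memory; only messages travel.
    parent = self()

    child = spawn(fn ->
      receive do
        {:ping, from} -> send(from, {:pong, self()})
      end
    end)

    send(child, {:ping, parent})

    receive do
      {:pong, pid} -> IO.puts("got pong from #{inspect(pid)}")
    after
      1_000 -> IO.puts("timed out waiting for a pong")
    end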


There's no one right answer, and there never will be


This post is meant as an outline of a small part of what's available outside of The Mainstream. Even though many of the ideas presented here have been known to the programming world for many years, they have never really broken through into what is considered mainstream. I think we can be more ambitious than that. Collectively, as developers, we can do much better than we're doing now. That's why I'm so excited about any project that tries to address real programming issues from a much deeper perspective than a couple of syntactic-sugar niceties. It's nice to have pleasant-looking code, but it's even nicer to have pleasing end results. I don't think Elixir is the answer to all our troubles, but at least it seems to steer in the right direction.

You can find the specifics of its syntax, features, and tutorials on its website: http://elixir-lang.org/

Unable to Bcc in mail, Spree 2.0 Stable Rails 3.2.14

Hello again, all. As usual, I was working on a Spree Commerce website and recently encountered an issue when trying to Bcc order confirmation emails. Others have been asking about this on GitHub and on the Spree mailing list, so it seemed time to write about the problem and the solution that worked for me.

First, I'd like to briefly describe the use case. As with any typical e-commerce site, a user visits the site, adds some items to their cart, and checks out. Afterwards, an order confirmation (order summary) email is sent to the user with their order details and any extra information provided by the seller.

Spree pretty much handles all of this for you automatically. But what if you, as the business owner, would like a copy of this email? Easy enough: if you review the Spree documentation, you will see simple instructions for the "Mail Method Settings" to set up in the Spree Admin Interface.

OK, so let's say you follow all the instructions, start placing test orders (or receiving real ones), and you're still not getting bcc'd. This is where it gets tricky, so let's check a few things:

  1. Check the logs
    • Check to see if Spree/Rails is attempting to send the confirmation email to your Bcc recipient
  2. Try development, test, & production modes
    • If the Bcc email is not getting sent, keep following along for the solution
  3. Ensure the interceptor works by adding a value to "INTERCEPT EMAIL ADDRESS" via the admin. If emails are being intercepted, you know the interceptor is working. Why is that important? Because that is also where the Bcc code lives.
    • core/lib/spree/mail_interceptor.rb
      module Spree
        module Core
          class MailInterceptor
            def self.delivering_email(message)
              return unless MailSettings.override?

              if Config[:intercept_email].present?
                message.subject = "#{message.to} #{message.subject}"
                message.to = Config[:intercept_email]
              end

              if Config[:mail_bcc].present?
                message.bcc ||= Config[:mail_bcc]
              end
            end
          end
        end
      end
  4. You can ensure that more than one email can be sent by updating

    mail(to: @order.email, from: from_address, subject: subject)

    to be an array of emails, like

    mail(to: [@order.email, "some_other_email@somewhere.com"], from: from_address, subject: subject)

    • core/app/mailers/spree/order_mailer.rb
      module Spree
        class OrderMailer < BaseMailer
          def confirm_email(order, resend = false)
            @order = order.respond_to?(:id) ? order : Spree::Order.find(order)
            subject = (resend ? "[#{Spree.t(:resend).upcase}] " : '')
            subject += "#{Spree::Config[:site_name]} #{Spree.t('order_mailer.confirm_email.subject')} ##{@order.number}"
            mail(to: @order.email, from: from_address, subject: subject)
          end

          def cancel_email(order, resend = false)
            @order = order.respond_to?(:id) ? order : Spree::Order.find(order)
            subject = (resend ? "[#{Spree.t(:resend).upcase}] " : '')
            subject += "#{Spree::Config[:site_name]} #{Spree.t('order_mailer.cancel_email.subject')} ##{@order.number}"
            mail(to: @order.email, from: from_address, subject: subject)
          end
        end
      end

At this point, you've verified that the interceptor works and that the mailer can definitely send more than one email. Here is the final piece, which isn't mentioned anywhere on the Spree site or in any of the resolutions to the posts I had found: you may need to contact your hosting provider and ask them to make any necessary adjustments to allow the Bcc messages through. I saw that several people had gotten hung up on this, and I hope this post saves you some time. If you are still having trouble, please feel free to reach out in the comments. You can also refer to the Spree issue I created on this topic some time ago.

CSS Conf US 2014 — Part Two

More Thoughts on Getting Vertical, Testing and Icon Fonts

Without further ado, I've written up another batch of my notes covering three more great talks from CSS Conf US in Amelia Island, FL last week.

Antoine Butler — Embrace the Vertical

Antoine shared his observation that vertical media queries are available to CSS developers but not often used. With the vast array of devices accessing the web today, vertical media queries can be a useful tool for adapting your content effectively. Antoine walked us through examples of how he applied this technique in a couple of his projects. The first was a prototype for Wikipedia. While Wikipedia has gone with a separate mobile site (e.g. en.m.wikipedia.org/), he started with the HTML from the desktop site and applied some vertical media queries to make the content much more digestible. Take a look at his code to see how it works.

The second example Antoine demonstrated was for the navigation at Volkswagen. The client wanted to display an unlimited number of items in the secondary navigation. Once again Antoine applied vertical media queries to handle the varying number of navigation elements based on the device height. Check out his adaptive sticky vertical navigation code for a closer look.

Slides from this talk are available here: Embrace the Vertical.

Christophe Burgmer — If your CSS is happy and you know it...

This was a really interesting talk about testing your CSS visually with a tool Christophe has been developing called CSS Critic. Christophe covered some of the existing CSS/HTML testing tools like Selenium and found that while they worked well they didn't meet his needs entirely. He wanted a way to visually diff the changes that were made and to be able to write tests for his UI code. For example, when the "accepted" version of the page changed visually, he wanted to be notified and decide whether or not to accept the proposed change.

Christophe demoed the tool for us, and it was really cool to see a visual diff in the browser. For each change that was introduced, screenshots of the old version, the new version, and the difference between them were displayed. The user can then accept the change or reject it. You can view the tool in action on the CSS Critic site. Under the hood, CSS Critic uses some other nifty projects including Wraith, PhantomCSS, CasperJS and Hardy. Christophe also mentioned csste.st as a site which curates information on all of these topics and projects.

Slides from this talk are available here: If your CSS is happy and you know it...

Zach Leatherman — Bulletproof Icon Fonts

Zach wrote a great article on Bulletproof Accessible Icon Fonts earlier this year, and his talk was along similar lines. He chronicled some of the challenges and pitfalls worth knowing about in order to support icon fonts in your sites and applications. Browser support varies a great deal, and Zach cited John Holt Ripley's Unify unicode support charts as a helpful reference. He also works on the a-font-garde project, which documents best (er, bulletproof) practices for working with icon fonts today.

Stay Tuned

Watch for one more post later this week with the last batch of talks from the conf!

vim-airline: A lightweight status/tabline for Vim

My standard Vim configuration makes use of around 30 different plugins and I consider vim-airline to be one of the most indispensable because of its built-in functionality and superb integration with a variety of other Vim plugins. It's a great starting point for anyone looking to extend their Vim setup with additional plugins.

I became interested in vim-airline the first time I saw screenshots of it; the color schemes, custom glyphs[1] and indicators immediately revealed value beyond the basic status bar that a stock Vim installation provides. After installing it and spending some time using it, I discovered additional benefits from its integration with other plugins such as Fugitive, Syntastic and CtrlP. vim-airline provides a common platform for integrating the display indicators of plugins from various authors into one view, and presents them all with a consistent style.

A quick comparison of Vim with vim-airline installed:


vs. a standard Vim installation:


reveals new indicators for the current Vim mode, git branch, open buffers, and line endings. Integrating other plugins can add additional indicators for syntax errors, trailing whitespace, and more.

Using vim-airline also helped me adopt built-in Vim functionality that I had previously overlooked. I had not used Vim's tab features, favoring a single split window, but after enabling vim-airline's tabline feature I am now in the habit of using a combination of tabs and split windows to organize my workspace, and I find it easier to keep track of multiple files at a time.

I recommend reading through the documentation for vim-airline, trying it out and then installing some of the other plugins that it integrates with in order to develop your own preferred set of Vim plugins. I spend most of my workday in a Vim session and consider it a good investment to research different plugins and features that can increase productivity and developer happiness.

[1] Note that some of the nice-looking but non-standard characters available in vim-airline require the use of a patched font; pre-patched versions of many popular monospaced fonts are available in the powerline-fonts repository on GitHub and are easy to install (I'm a fan of Adobe's Source Code Pro font).