End Point

Welcome to End Point's blog

Ongoing observations by End Point people.

Another Round of Tidbits: Browser Tools, Performance, UI

It's been a while since my last blog article sharing End Point tidbits: bits of information passed around the End Point team that don't necessarily merit individual blog posts, but are worth mentioning and archiving. Here are some notes I've been collecting since that last post:

  • Skeuocard and creditcard.js are intuitive user interface (JS, CSS) plugins for credit card form inputs (card number, security code, billing name).

    Skeuocard Screenshot
  • UX Stack Exchange is a Stack Exchange network site for user experience and user interface Q&A.
  • wpgrep is an available tool for grepping through WordPress databases.
  • Here is a nifty little tool that analyzes GitHub commits to report on language convention, e.g. space vs. tab indentation & spacing in argument definitions.

    Example comparison of single vs. double quote convention in JavaScript.
  • Ag (The Silver Searcher) is a document-searching tool similar to ack, with improved speed. There's also an Ag plugin for vim.
  • GitHub released Atom earlier this year. Atom is a desktop application text editor; features include Node.js support, modular design, and a full feature list to compete with existing text editors.
  • SpeedCurve is a web performance tool built on WebPagetest data. It focuses on providing a beautiful user interface and minimizing data storage.

    Example screenshot from SpeedCurve
  • Here is an interesting article by Smashing Magazine discussing mobile strategy for web design. It covers a wide range of challenges that come up in mobile web development.
  • Reveal.js, deck.js, Impress.js, Shower, and showoff are a few open source tools available for in-browser presentation support.
  • Have you seen Firefox's 3D view? It's a 3D representation of the DOM hierarchy. I'm a little skeptical of its value, but the documentation outlines a few use cases such as identifying broken HTML and finding stray elements.

    Example screenshot of Firefox 3D view
  • Here is an interesting article discussing how to approach sales by presenting a specific solution and alternative solutions to clients, rather than the generic "Let me know how I can help." approach.
  • A coworker asked about web-based SMS providers for sending text messages to customers' cell phones. Recommended services included txtwire, Twilio, The Callr, and Clickatell.

Updating Firefox and the Black Screen

If you are updating your Firefox installation for Windows and you get a puzzling black screen of doom, here's a handy tip: disable graphics acceleration.

The symptoms here are that after you upgrade Firefox to version 33, the browser will launch into a black screen, possibly with a black dialog box (it's asking if you want to choose Firefox to be your default browser). Close this as you won't be able to do much with it.

Launch Firefox by holding down the SHIFT key and clicking on the Firefox icon. It will ask if you want to reset Firefox (Nope!) or launch in Safe mode (Yes).

Once you get to that point, click the "Open menu" icon (three horizontal bars, probably at the far right of your toolbar). Choose "Preferences", "Advanced", and uncheck "Use hardware acceleration when available".

Close Firefox, relaunch as normal, and you should be AOK. You can try re-enabling graphics acceleration if and when your graphics driver is updated.

Reference: here.

Postgres copy schema with pg_dump


Manny Calavera (animated by Lua!)
Image by Kitt Walker

Someone on the #postgresql IRC channel was asking how to make a copy of a schema; presented here are a few solutions and some wrinkles I found along the way. The goal is to create a new schema based on an existing one, in which everything is an exact copy. For all of the examples, 'alpha' is the existing, data-filled schema, and 'beta' is the newly created one. It should be noted that creating a copy of an entire database (with all of its schemas) is very easy: CREATE DATABASE betadb TEMPLATE alphadb;

The first approach for copying a schema is the "clone_schema" plpgsql function written by Emanuel Calvo. Go check it out, it's short. Basically, it gets a list of tables from the information_schema and then runs CREATE TABLE statements of the format CREATE TABLE beta.foo (LIKE alpha.foo INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING DEFAULTS). This is a pretty good approach, but it does leave out many types of objects, such as functions, domains, FDWs, etc. as well as having a minor sequence problem. It's also slow to copy the data, as it creates all of the indexes before populating the table via INSERT.

My preferred approach for things like this is to use the venerable pg_dump program, as it is in the PostgreSQL 'core' and its purpose in life is to smartly interrogate the system catalogs to produce DDL commands. Yes, parsing the output of pg_dump can get a little hairy, but that's always preferred to trying to create DDL yourself by parsing system catalogs. My quick solution follows.

pg_dump -n alpha | sed '1,/with_oids/ {s/ alpha/ beta/}' | psql

Sure, it's a bit of a hack in that it expects a specific string ("with_oids") to exist at the top of the dump file, but it is quick to write and fast to run; pg_dump creates the tables, copies the data over, and then adds in indexes, triggers, and constraints. (For an explanation of the sed portion, visit this post). So this solution works very well. Or does it? When playing with this, I found that there is one place in which this breaks down: assignment of ownership to certain database objects, especially functions. It turns out pg_dump will *always* schema-qualify the ownership commands for functions, even though the function definition right above it has no schema, but sensibly relies on the search_path. So you see this weirdness in pg_dump output:

--
-- Name: myfunc(); Type: FUNCTION; Schema: alpha; Owner: greg
--
CREATE FUNCTION myfunc() RETURNS text
    LANGUAGE plpgsql
    AS $$ begin return 'quick test'; end$$;

ALTER FUNCTION alpha.myfunc() OWNER TO greg;

Note the fully qualified "alpha.myfunc". This is a problem, and the sed trick above will not replace this "alpha" with "beta", nor is there a simple way to do so, without descending into a dangerous web of regular expressions and slippery assumptions about the file contents. Compare this with the ownership assignments for almost every other object, such as tables:

--
-- Name: mytab; Type: TABLE; Schema: alpha; Owner: greg
--
CREATE TABLE mytab (
    id integer
);

ALTER TABLE mytab OWNER TO greg;

No mention of the "alpha" schema at all, except inside the comment! Before going into why pg_dump is acting like that, I'll present my current favorite solution for making a copy of a schema: using pg_dump and some creative renaming:

$ pg_dump -n alpha -f alpha.schema
$ psql -c 'ALTER SCHEMA alpha RENAME TO alpha_old'
$ psql -f alpha.schema
$ psql -c 'ALTER SCHEMA alpha RENAME TO beta'
$ psql -c 'ALTER SCHEMA alpha_old RENAME TO alpha'

This works very well, with the obvious caveat that for a period of time, you don't have your schema available to your applications. Still, a small price to pay for what is most likely a relatively rare event. The sed trick above is also an excellent solution if you don't have to worry about setting ownerships.
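To see what the sed expression from earlier is actually doing, here is a self-contained sketch run against a few invented lines that mimic the top of a dump file. The address range 1,/with_oids/ limits the substitution to the lines from the start of the file through the first line matching "with_oids" (the SET default_with_oids line, which appears just after the search_path setting), so occurrences of "alpha" later in the dump, such as inside your data, are left alone:

```shell
printf '%s\n' \
  'SET search_path = alpha, pg_catalog;' \
  'SET default_with_oids = false;' \
  'INSERT INTO mytab VALUES (1); -- alpha stays put here' \
| sed '1,/with_oids/ {s/ alpha/ beta/}'
# Output:
# SET search_path = beta, pg_catalog;
# SET default_with_oids = false;
# INSERT INTO mytab VALUES (1); -- alpha stays put here
```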

Getting back to pg_dump, why is it schema-qualifying some ownerships, despite a search_path being used? The answer seems to lie in src/bin/pg_dump/pg_backup_archiver.c:

    /*
     * These object types require additional decoration.  Fortunately, the
     * information needed is exactly what's in the DROP command.
     */
    if (strcmp(type, "AGGREGATE") == 0 ||
        strcmp(type, "FUNCTION") == 0 ||
        strcmp(type, "OPERATOR") == 0 ||
        strcmp(type, "OPERATOR CLASS") == 0 ||
        strcmp(type, "OPERATOR FAMILY") == 0)
    {
        /* Chop "DROP " off the front and make a modifiable copy */
        char       *first = pg_strdup(te->dropStmt + 5);

Well, that's an ugly but elegant hack, and it explains why the schema name keeps popping up for functions, aggregates, and operators: because their names can be tricky to reconstruct, pg_dump chops apart the already-existing DROP statement built for the object, which unfortunately is schema-qualified. Thus, we get the redundant (and sed-busting) schema qualification!

Even with all of that, it is still always recommended to use pg_dump when trying to create DDL. Someday Postgres will have a DDL API to allow such things, and/or commands like MySQL's SHOW CREATE TABLE, but until then, use pg_dump, even if it means a few other contortions.

Liquid Galaxy at the Ryder Cup 2014



End Point was proud to present the Liquid Galaxy for the French Golf Federation at this year’s Ryder Cup in Gleneagles, Scotland. The French Golf Federation will be hosting the cup in 2018 at Le Golf National, which is just outside of Paris and is also the current venue of the French Open.

Throughout the event, thousands of people came in and tried out the Liquid Galaxy. The platform displayed one of its many hidden talents and allowed golf fans from around the world to find and show off their home courses. One of the most interesting things to witness was watching golf course designers accurately guess the date of the satellite imagery based on which course changes were present.


This deployment presented special challenges: a remote location (the bustling tented village adjacent to the course) with a combination of available hardware from our European partners and a shipment from our Tennessee office. Despite these challenges, we assembled the system, negotiated the required network connectivity, deployed the custom interface, and delivered a great display for our sponsoring partners. The event was a great success and all enjoyed the unseasonably mild Scottish weather.




Rails Recursive Sorting for Multilevel Nested Array of Objects

Whenever you display data as a list of records, sorting them in a particular order is recommended. Most of the time, Rails handles data as an array, an array of objects, or a nested array of objects (a tree). We would like a general sorting mechanism to display the records in ascending or descending order, to provide a decent view to end users. Luckily, Ruby provides a sorting method called sort_by (available on any Enumerable, arrays included) which sorts an array of objects by specific object values.

Simple Array:

Trivially, an array can be sorted using the built-in “sort” method:
my_array = ['Bob', 'Charlie', 'Alice']

my_array = my_array.sort  # (or, in place: my_array.sort!)
This is the most basic way to sort elements in an array and is part of Ruby’s built-in API.
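A quick sketch of sort and its in-place variant (the values here are illustrative):

```ruby
my_array = ['Bob', 'Charlie', 'Alice']

sorted = my_array.sort  # returns a new sorted array
my_array.sort!          # sorts the array in place

sorted    # => ["Alice", "Bob", "Charlie"]
my_array  # => ["Alice", "Bob", "Charlie"]

# A block can customize the comparison, e.g. descending order:
my_array.sort { |a, b| b <=> a }  # => ["Charlie", "Bob", "Alice"]
```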

Array of Objects:

Usually, a Rails result set will be an array of objects that should be sorted based on specific attributes of the objects in the array. Here is a sample array of objects which needs to be sorted by date and fullname.
s_array =
[  
    {
        "date"=> "2014-05-07",
        "children"=> [],
        "fullname"=> "Steve Yoman"
    },
    {
        "date"=> "2014-05-06",
        "children"=> [],
        "fullname"=> "Josh Tolley"
    }
]

Solution:

1) Simple sorting

We can use the Rails sort_by method to sort the array of objects by date and fullname in order:
s_array = s_array.sort_by{|item| [ item['date'], item['fullname'] ]}
sort_by is passed a block which operates on each item and returns a value to be used as that item's sort key (here, an array of the two attribute values). Because Ruby arrays compare element by element, they can serve as sort keys as long as their elements are themselves comparable; since we are returning string attributes, we get this for free.

2) Handling case on strings

Sometimes sorting directly on the object attribute will produce undesirable results, for instance if there is inconsistent case in the data. We can further normalize the case of the string used to get records to sort in the expected order:
s_array = s_array.sort_by{|item| [ item['date'], item['fullname'].downcase ]}
Here again we are returning an array to be used as a sort key, but we are using a normalized version of the input data to return.
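To see why the normalization matters, consider two entries sharing a date (the names here are invented). Raw string comparison is by character code, so every uppercase letter sorts before every lowercase one:

```ruby
people = [
  { 'date' => '2014-05-07', 'fullname' => 'steve yoman' },
  { 'date' => '2014-05-07', 'fullname' => 'Tom Jones' }
]

# Raw comparison: 'T' (code 84) sorts before 's' (code 115).
raw = people.sort_by { |item| [item['date'], item['fullname']] }
raw.map { |p| p['fullname'] }  # => ["Tom Jones", "steve yoman"]

# Case-normalized comparison gives the order a user expects.
normalized = people.sort_by { |item| [item['date'], item['fullname'].downcase] }
normalized.map { |p| p['fullname'] }  # => ["steve yoman", "Tom Jones"]
```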

Multilevel Nested Array of Objects

Sometimes the objects in an array will themselves contain arrays as elements, and the nesting continues for multiple levels. Sorting this kind of structure requires recursion, so that every level of the array of objects is sorted based on the specific attributes of its objects. The following array nests arrays and objects alternately:

m_array =
[
    {
        "name"=> "Company",
        "children"=> [
            {
                "name"=> "Sales",
                "children"=> [
                    {
                        "date"=> "2014-05-07",
                        "children"=> [],
                        "fullname"=> "Steve Yoman"
                    },
                    {
                        "date"=> "2014-05-06",
                        "children"=> [],
                        "fullname"=> "Josh Tolley"
                    }
                ]
            },
            {
                "name"=> "Change Requests",
                "children"=> [
                    {
                        "name"=> "Upgrade Software",
                        "children"=> [
                            {
                                "date"=> "2014-05-01",
                                "children"=> [],
                                "fullname"=> "Selvakumar Arumugam"
                            },
                            {
                                "date"=> "2014-05-02",
                                "children"=> [],
                                "fullname"=> "Marina Lohova"
                            }
                        ]
                    },
                    {
                        "name"=> "Install Software",
                        "children"=> [
                            {
                                "date"=> "2014-05-01",
                                "children"=> [],
                                "fullname"=> "Selvakumar Arumugam"
                            },
                            {
                                "date"=> "2014-05-01",
                                "children"=> [],
                                "fullname"=> "Josh Williams"
                            }
                        ]
                    }
                ]
            }
        ]
    }
]

Solution:

In order to tackle this, we will want to sort all of the sub-levels of the nested objects in the same way, so we will define a recursive function to handle it. We will also want some additional error handling.

In this specific example, we know each level of the data contains a “children” attribute, which contains an array of associated objects. We write our sort_multi_array function to recursively sort any such arrays it finds, which will in turn sort all children by name, date and case-insensitive fullname:
def sort_multi_array(items)
  items = items.sort_by{|item| [ item['name'], item['date'], item['fullname'].to_s.downcase ]}
  items.each{ |item| item['children'] = sort_multi_array(item['children']) if (item['children'].nil? ? [] : item['children']).size > 0 }
  items
end

m_array = sort_multi_array(m_array)

You can see that we first sort the passed-in array according to the object-specific attributes, then check whether each item's 'children' attribute contains any elements, and if so, sort that array using the same function. This supports any number of levels of recursion on this data structure.
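Running the function over a minimal two-level structure shows the recursion in action. This is a self-contained sketch: the definition is repeated from above, and the data is a pared-down version of the earlier example:

```ruby
def sort_multi_array(items)
  items = items.sort_by { |item| [item['name'], item['date'], item['fullname'].to_s.downcase] }
  items.each { |item| item['children'] = sort_multi_array(item['children']) if (item['children'].nil? ? [] : item['children']).size > 0 }
  items
end

tree = [
  { 'name' => 'Sales', 'children' => [
    { 'date' => '2014-05-07', 'children' => [], 'fullname' => 'Steve Yoman' },
    { 'date' => '2014-05-06', 'children' => [], 'fullname' => 'Josh Tolley' }
  ] }
]

sorted = sort_multi_array(tree)
sorted.first['children'].map { |c| c['fullname'] }
# => ["Josh Tolley", "Steve Yoman"]  (children re-ordered by date)
```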

Notes about this implementation:

1. Case-insensitive sorting

The best practice when sorting strings is to normalize them to a single case (i.e. upper or lower) for the comparison. This ensures that records show up in the order the user expects, rather than in raw character-code order:
item['fullname'].downcase

2. Handling null values in case conversion

Nil values in the attributes need to be handled during the string manipulation to avoid unexpected errors. Here we convert to a string before applying the case conversion, since nil.to_s returns an empty string:
item['fullname'].to_s.downcase

3. Handling null values in array size check

Nil values in the array attributes likewise need to be handled during the sorting process to avoid unexpected errors. Here we guard against the possibility of item['children'] being nil, and if it is, we use an empty array instead:
(item['children'].nil? ? [] : item['children']).size
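As an aside, the same nil guard can be written more compactly with Ruby's || operator, since nil is falsy:

```ruby
children = nil

(children.nil? ? [] : children).size  # => 0
(children || []).size                 # => 0, equivalent and shorter
```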

Parsing Email Addresses in Rails with Mail::Address

I've recently discovered the Mail::Address class and have started using it for storing and working with email addresses in Rails applications. Working with an email address as an Address object rather than a String makes it easy to retrieve different parts of the address and I recommend trying it out if you're dealing with email addresses in your application.

Mail is a Ruby library that handles email generation, parsing, and sending. Rails' own ActionMailer module is dependent on the Mail gem, so you'll find that Mail has already been included as part of your Rails application installations and is ready for use without any additional installation or configuration.

The Mail::Address class within the library can be used in Rails applications to provide convenient, object-oriented ways of working with email addresses.

The class documentation provides some of the highlights:

a = Address.new('Patrick Lewis (My email address) <patrick@example.endpoint.com>')
a.format       #=> 'Patrick Lewis <patrick@example.endpoint.com> (My email address)'
a.address      #=> 'patrick@example.endpoint.com'
a.display_name #=> 'Patrick Lewis'
a.local        #=> 'patrick'
a.domain       #=> 'example.endpoint.com'
a.comments     #=> ['My email address']
a.to_s         #=> 'Patrick Lewis <patrick@example.endpoint.com> (My email address)'

Mail::Address makes it trivial to extract the username, domain name, or just about any other component part of an email address string. Also, its #format and #to_s methods allow you to easily return the full address as needed without having to recombine things yourself.

You can also build a Mail::Address object by assigning email and display name strings:

a = Address.new
a.address = "patrick@example.endpoint.com"
a.display_name = "Patrick Lewis"
a #=> #<Mail::Address:69846669408060 Address: |Patrick Lewis <patrick@example.endpoint.com>| >
a.display_name = "Patrick J. Lewis"
a #=> #<Mail::Address:69846669408060 Address: |"Patrick J. Lewis" <patrick@example.endpoint.com>| >
a.domain #=> "example.endpoint.com"

This provides an easy, reliable way to generate Mail::Address objects, surfacing input errors if the supplied address or display name is not parseable.

I encourage anyone who's manipulating email addresses in their Rails applications to try using this class. I've found it especially useful for defining application-wide constants for the 'From' addresses in my mailers; by creating them as Mail::Address objects I can access their full strings with display names and addresses in my mailers, but also grab just the email addresses themselves for obfuscation or other display purposes in my views.

Liquid Galaxy at UNC Chapel Hill


End Point has brought another academic Liquid Galaxy online! This new display platform is now on the storied campus of the University of North Carolina at Chapel Hill. With a strong background in programming and technology, UNC wanted a premier interactive platform to showcase the GIS data and other presentations the school's researchers are putting together.

Neil Elliott, our hardware manager for Liquid Galaxy, first assembled, preconfigured, and tested the computer stack at our facility in Tennessee, bringing together the head node, display nodes, power control units, switches, and cases to build a “Liquid Galaxy Express”: an entirely self-contained Liquid Galaxy unit that fits in just under one cubic meter. From there, Neil drove the computers and custom-built frame directly to Chapel Hill. As he described it: “It’s a great drive from our office in Tennessee over the mountains to Chapel Hill. When I arrived, the UNC staff was on hand to help assemble things and lay out the space, and it all went very quickly. We were live by 4pm that same day.”

Overall, the installation took just over 6 hours, including assembly and final configuration. The University’s library and IT staff were on hand to assist with the frame assembly and mounting the dazzling 55” Samsung commercial displays. That evening, students were exploring and the staff was tweeting out Vines of the new display:



"From the moment the installer closed his tool box, students have been lining up non-stop to try the screens," said Amanda Henley, the Library's Geographic Information Systems Librarian. Located on the 2nd floor of the Davis Library, the Liquid Galaxy is open to students and staff for research and exploration. End Point is looking forward to working closely with the staff at UNC, as well as with their CompSci teams on developing great new functionality for the Liquid Galaxy. If you would like to get a Liquid Galaxy at your school, call us!