End Point

News

Welcome to End Point's blog

Ongoing observations by End Point people.

Installing Python in Local Directory

On one of our client's Ubuntu 10.04 machines, I needed to upgrade Python from 2.6 to 2.7. Unfortunately, after installing Python 2.7 from apt, virtualenv (version 1.4.5) did not work correctly. This bug was fixed in a newer virtualenv version, but no Ubuntu packages were available for it.

I thought about trying something else: why not install all the software locally, in my home directory on the server? When virtualenv creates a new environment, it copies the Python executable into the virtualenv directory.

First I installed pythonbrew, a great tool for installing many different Python versions in a local directory.

$ curl -kL http://xrl.us/pythonbrewinstall | bash

Then I activate pythonbrew with:

$ source "$HOME/.pythonbrew/etc/bashrc"

And install the Python version I want:

$ pythonbrew install 2.7.3

The installation took a couple of minutes. The script downloaded the tarball with the Python source code for the requested version, compiled it, and installed it. It wrote all of its output to a log file, which I watched by running the command below in another console:

$ tail -f $HOME/.pythonbrew/log/build.log

You can also add the following lines to ~/.bashrc to activate this Python when starting a new bash session:

[[ -s "$HOME/.pythonbrew/etc/bashrc" ]] && source $HOME/.pythonbrew/etc/bashrc
pythonbrew switch 2.7.3

Sourcing the pythonbrew script changed the active Python version:

$ python --version
Python 2.6.5

$ source $HOME/.pythonbrew/etc/bashrc

$ python --version
Python 2.7.3

As you can see, Python from my local installation is used:

$ which python
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python

The only thing left is to create the virtual environment for the new Python version. I use virtualenvwrapper for managing virtualenvs, so the obvious way to create a new environment is:

$ mkvirtualenv --no-site-packages envname

Unfortunately, it creates an environment with the wrong Python version:

$ which python
/home/szymon/.virtualenvs/envname/bin/python

$ python --version
Python 2.6.5

So let's try to tell virtualenvwrapper which Python executable should be used:

$ deactivate 

$ rmvirtualenv envname
Removing envname...

$ mkvirtualenv --no-site-packages -p /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python envname

Unfortunately this ended with an error:

Running virtualenv with interpreter /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/python
New python executable in envname/bin/python
Traceback (most recent call last):
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/site.py", line 67, in 
    import os
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/os.py", line 49, in 
    import posixpath as path
  File "/home/szymon/.virtualenvs/envname/lib/python2.7/posixpath.py", line 17, in 
    import warnings
ImportError: No module named warnings
ERROR: The executable envname/bin/python is not functioning
ERROR: It thinks sys.prefix is '/home/szymon/.virtualenvs' (should be '/home/szymon/.virtualenvs/envname')
ERROR: virtualenv is not compatible with this system or executable

The problem is that the virtualenv version used by virtualenvwrapper doesn't work with Python 2.7. As I wrote at the beginning, there is no newer version available via apt. The solution is pretty simple: let's just install newer virtualenv and virtualenvwrapper versions using pip.

$ pip install virtualenv
Requirement already satisfied: virtualenv in /usr/lib/pymodules/python2.6
Installing collected packages: virtualenv
Successfully installed virtualenv

As you can see, there is a problem: the pip from the system installation was used, and there is no pip installed in my local Python 2.7. However, there is easy_install:

$ which pip
/usr/bin/pip

$ which easy_install
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/easy_install

So let's use it for installing virtualenv and virtualenvwrapper:

$ easy_install virtualenv virtualenvwrapper

I checked the whole installation procedure once again, and it turned out that there had been a network error while downloading pip, which I hadn't noticed. If everything goes OK, pip should be installed, and you should be able to install virtualenv and virtualenvwrapper using pip as well:

$ pip install virtualenv virtualenvwrapper

Cool, let's check which version is installed:

$ which virtualenv
/home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/virtualenv

$ virtualenv --version
1.8.4

Before creating the brand new virtual environment, I have to activate the new virtualenvwrapper. I have the following line in my ~/.bashrc file:

source /usr/local/bin/virtualenvwrapper.sh

I just have to change it to the line below and log in once again:

source /home/szymon/.pythonbrew/pythons/Python-2.7.3/bin/virtualenvwrapper.sh

Let's now create the virtual environment using the brand new Python version:

$ mkvirtualenv --no-site-packages -p $HOME/.pythonbrew/pythons/Python-2.7.3/bin/python envname

I want to use this environment each time I log into this server, so I've added this line to my ~/.bashrc:

workon envname

Let's check if it works. I've logged out and logged in to my account on this server before running the following commands:

$ which python
/home/szymon/.virtualenvs/envname/bin/python

$ python --version
Python 2.7.3

Looks like everything is OK.

Crossed siting; or How to Debug iOS Flash issues with Chrome

This situation had all the elements of a programming war story: unfamiliar code, an absent author, a failure that only happens in production, and a platform inaccessible to the person in charge of fixing this: namely, me.

Some time ago, an engineer wrote some JavaScript code to replace a Flash element on a page with an HTML5 snippet, for browsers that don't support Flash (looking at you, iOS). For various reasons, that code didn't make it to production. Fast forward many months: that engineer has left for another position, so I'm asked to test the code and get it into production.
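
For context, the substitution works roughly like the sketch below. This is a minimal illustration with a hypothetical #flash-banner element; the detection logic and element id are assumptions, not the original code:

// Minimal sketch: if Flash is unavailable, fetch an HTML5 fragment and swap it in.
// The element id, fragment URL, and detection approach are assumptions for illustration.
function flashSupported() {
  // Rough plugin check; real-world detection is messier (especially on older IE).
  return navigator.mimeTypes &&
         navigator.mimeTypes['application/x-shockwave-flash'] !== undefined;
}

$(function() {
  if (!flashSupported()) {
    $.ajax({
      url: 'ajax/newstuff.html',
      dataType: 'html',
      success: function(html) {
        // Replace the Flash element with the HTML5 snippet returned by the server
        $('#flash-banner').replaceWith(html);
      }
    });
  }
});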

Of course, it works fine. My only test platform is an iPod, but it looks great there. Roll it out, and ker-thunk: it doesn't work. Debugging JavaScript on an iPod is less than optimal, so I enlisted others with Apple devices and found that it mostly failed, but maybe worked a few times, depending on [SOMETHING].

To make matters a bit worse, the Apache configurations for the test and production environments differed, just enough to raise my suspicions and convince me it was worth investigating. Once I went down that path, it was tough to jar myself loose from that suspicion.

I tried disabling Flash in Firefox to trigger the substitution, but that didn't seem to have the desired effect (the replacement didn't happen at all, which is a different failure from the replacement happening and breaking). I tried a browser emulation site (which shall remain nameless for this post, as I don't think they are bad at what they do, but they don't emulate iOS browsers in this capacity).

Eventually we disabled Flash in Chrome (by visiting the chrome://plugins page). That unveiled the hidden error:

XMLHttpRequest cannot load http://www.somewhere.com/ajax/newstuff.html. Origin http://somewhere.com is not allowed by Access-Control-Allow-Origin.

There's the crux of it: the browser was sitting on an address which appeared different from that of the AJAX target.

The site involved is an Interchange site, and the page constructed a URL using the [area] tag, which makes a fully-qualified URL from a fragment like "ajax/newstuff". That URL was treated by the iOS browsers as a cross-origin request, because it didn't precisely match the origin where the browser found the page. The error was not visible in the Safari browser on my iPod, and the browsers I had access to which could have displayed the error weren't suffering from it.

I replaced the [area] tag with a plain relative URL and the problem disappeared.

TL;DR:

               $.ajax({
-                  url: "[area href=|ajax/newstuff|]",
+                  url: "ajax/newstuff.html",
                   type: 'html',

The original code triggered a cross-origin request failure.

To ask or not to ask? Debug first.

When you jump head first into a project, the ramp-up will likely lead to questions galore. In the eagerness to get things done, it seems like the best thing to do when stuck is to just ask the seasoned developers to tell you how to move forward. After all, they did build the application. However, when to reach out for help can depend on the deadline and priority of the task at hand, as well as your subjective definition of "stuck." Knowing when it's too early, just right, or too late to get help can be a tricky thing. Here are some things to consider when reaching out for help early:

Pros.
 1. Time/money is of the essence and getting a quick answer is best.
 2. Time saved debugging a particular issue that does not further your understanding of the application can be applied elsewhere.

Cons.
 1. You risk losing a learning opportunity by throwing in the towel too early.
 2. You risk looking lazy or unprepared if the person whom you are reaching out to believes you could have done more.
 3. Developers are busy, too.

All cases being different, there is no single right time to reach out for help, but steps can be taken to ensure that you have done your part. First, get better at reading source code. The more you practice reading and making sense of other people's code, the quicker you will build a complete mental model of the application, and the quicker you'll be able to debug. Brandon Bloom (snprbob86) wrote an excellent post regarding this. Second, create your own stack trace. The application may have many entry points. Pick one, feed it some input, and trace it down the rabbit hole. A whiteboard may come in handy here. Third, make heavy use of the logs. The code should already be sprinkled with plenty of log messages, but feel free to add more wherever you feel they will be useful.

These three tips will go a long way in helping you debug smarter, which should allow you to find solutions more quickly. In the event that a white flag must be raised and another developer's help requested, the efforts you made attempting to debug will have a positive impact on your productivity as well as your team's perception of you as a programmer.

Kamelopard Updates

I've just pushed Kamelopard v0.0.10 to RubyGems. With the last couple of releases, Kamelopard has gained a couple of useful features I felt deserved some additional attention.

Support for viewsyncrelay actions

Many of our Liquid Galaxy tours require more than just Google Earth. For instance, it's not uncommon to want audio or video media to play at a certain point in the tour. We may want our Liquid Galaxy enabled panoramic image viewer to start up and display an image, or perhaps we need to signal some other external process. Unfortunately Google Earth tours don't support configuration to run these actions directly upon reaching a certain point, but there are alternatives. Google Earth synchronizes nodes in a Liquid Galaxy with what are called ViewSync packets, which tell all the slave nodes exactly where a master node's camera is positioned, in terms of latitude, longitude, tilt, etc. We can watch this traffic to determine the master node's progress through a tour and trigger actions at defined locations, and we use an application called viewsyncrelay (available here) to do exactly that. We configure viewsyncrelay to run certain actions when the ViewSync traffic matches a set of constraints. For instance, an action might require ViewSync packets to fall within a certain latitude, longitude, altitude, and heading, and a particular previous action might have to run first in order to activate this one.

This works well enough for most purposes, but the viewsyncrelay configuration files can become complicated, and difficult to debug. Here's where Kamelopard comes in. Now, the same code that creates a tour can create viewsyncrelay actions. Here's an example. The code is still a bit unwieldy; it will get simpler and more elegant in future versions, after we've seen the best ways people come up with to use the feature.

require 'rubygems'
require 'kamelopard'

name_folder 'test'
name_document 'test'
pt = point 100, 100
pl = placemark 'test placemark', :geometry => pt
get_folder << pl

Kamelopard::VSRAction.new('action name',
    :action => 'mplayer play_this_video.webm',
    :constraints => {
        :latitude => to_constraint(band(100, 0.1).collect{ |l| lat_check(l) }),
        :longitude => to_constraint(band(100, 0.1).collect{ |l| long_check(l) }),
    }
)

write_kml_to 'doc.kml'
write_actions_to 'actions.yml'

In addition to the VSRAction object, these changes introduce a few new functions, including those shown here: band(a, b) returns a +/- b, in an array; lat_check() and long_check() ensure each value in the array is a valid latitude or longitude; and to_constraint() turns this validated array into a string suitable for use in a viewsyncrelay constraint. As I mentioned, this may prove awkward, but it's a first step. This code creates a file called actions.yml, ready for use in viewsyncrelay:

---
actions:
- name: action name
  input: ALL
  action: mplayer play_this_video.webm
  repeat: DEFAULT
  constraints:
    :latitude: ! '[-80.1, -79.9]'
    :longitude: ! '[99.9, 100.1]'

Master/slave modes

Liquid Galaxy tours often consist of two different KML files: one to run on the master, and another to run on each of the slaves. In particular, the slave versions generally don't include the screen overlay and network link objects used by the master, but there are plenty of other objects you might not want on the slaves. In the past we've had to edit the KML files manually to remove unnecessary objects, which is of course error-prone, sometimes difficult, and something we had to redo every time we ran our Kamelopard script and created a new tour version. Now Kamelopard supports tagging objects as "master-only" and creating KML documents in either normal or master-only mode, making this process entirely automatic. Here's an example:

require 'rubygems'
require 'kamelopard'

name_folder 'test'
name_document 'test'
pt = point 100, 100
pl = placemark 'test placemark', :geometry => pt
get_folder << pl

get_folder.master_only = true
write_kml_to 'slave.kml'
get_document.master_mode = true
write_kml_to 'master.kml'

This results in two files, called master.kml and slave.kml. Only master.kml contains the Folder object:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>test</name>
    <visibility>1</visibility>
    <open>0</open>
    <Folder>
      <name>test</name>
      <visibility>1</visibility>
      <open>0</open>
      <Placemark>
        <name>test placemark</name>
        <visibility>1</visibility>
        <open>0</open>
        <Point>
          <coordinates>100.0, 100.0, 0</coordinates>
          <extrude>0</extrude>
          <altitudeMode>clampToGround</altitudeMode>
        </Point>
      </Placemark>
    </Folder>
  </Document>
</kml>

The slave is essentially empty, because we didn't tell Kamelopard about any objects that weren't master-only:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>test</name>
    <visibility>1</visibility>
    <open>0</open>
  </Document>
</kml>

New repository

Since Kamelopard is developed primarily for our use in creating Liquid Galaxy tours, it has been moved from GitHub to a portion of the code.google.com Liquid Galaxy project. Clone the repository at https://code.google.com/p/liquid-galaxy/ to get your own copy, or install the gem from RubyGems to play around.

Configuring RailsAdmin 0.0.5 with CKeditor 3.7.2

If you like adventures, read on! Recently I went through a really tough one with RailsAdmin 0.0.5 and Ckeditor 3.7.2 in production mode. I only needed to enable the WYSIWYG editor for one of the fields in the admin, yet it turned out to be a bit more than just that.

After I installed the ckeditor gem, created the custom config file as described in the Ckeditor gem README, and added ckeditor support to the field as suggested by the RailsAdmin configuration tutorial, both the frontend and the backend were broken in production mode with JavaScript errors. So what did I do wrong?

The problem with frontend

After careful investigation it turned out that the ckeditor files were not loading on the frontend, but my custom ckeditor configuration file was. And because CKEDITOR was not defined anywhere, the following code in my config.js failed:

CKEDITOR.editorConfig = function( config )
 {
   config.toolbar = 'Basic';
   config.toolbar_Basic =
   [
     ['Source', 'Bold', 'Italic', 'NumberedList', 'BulletedList', 'Link', 'Unlink']
   ];

   config.enterMode = CKEDITOR.ENTER_BR;
   config.shiftEnterMode = CKEDITOR.ENTER_BR;
   config.autoParagraph = false;
 }

as I was getting

ReferenceError: CKEDITOR is not defined

Duh! I found a lot of complaints about the ckeditor gem's production-mode loading issues due to incorrect interaction with the asset pipeline, as well as a solution that required updating to 3.7.3. But at this point I did not need CKEditor on the frontend at all, so I decided to put a sanity check into the custom config file, which would be useful to have anyway. None of the READMEs provided a sample custom config file or any notes on loading order, so I had to improvise:

if (typeof(CKEDITOR) != 'undefined') {
  CKEDITOR.editorConfig = function( config )
  {
   config.toolbar = 'Basic';
   config.toolbar_Basic =
   [
     ['Source', 'Bold', 'Italic', 'NumberedList', 'BulletedList', 'Link', 'Unlink']
   ];

   config.enterMode = CKEDITOR.ENTER_BR;
   config.shiftEnterMode = CKEDITOR.ENTER_BR;
   config.autoParagraph = false;
  }
}

All JavaScript errors were gone on the frontend and the website looked good again.

The problem with RailsAdmin

RailsAdmin version 0.0.5 was still acting up: none of the WYSIWYG fields were rendered, leaving bare textarea boxes exposed. At this point I could not simply upgrade the RailsAdmin gem in the hope that the error would go away, because of possible regressions, so I looked deeper into the problem, only to find more JavaScript errors on the pages.

RailsAdmin rendered Ckeditor-enhanced textareas with the following markup:

<textarea cols="48" data-options="{"jspath":"/assets/ckeditor/ckeditor.js","
base_location":"/assets/ckeditor/","options":{"customConfig":"/assets/ckeditor/config.js"}}"
data-richtext="ckeditor" id="testimonial_content" name="testimonial[content]" rows="3">

Please note the explicit hard-coded reference to "/assets/ckeditor/config.js". During asset precompilation the Ckeditor gem compiles the source from the vendor/assets/ckeditor folder into a resource package that looked like this:

$ ls public/assets/ckeditor/
application.js
application.js.gz
application-1f3fd70816d8a061d7b04d29f1e369cd.js
application-1f3fd70816d8a061d7b04d29f1e369cd.js.gz
application-450040973e510dd5062f8431400937f4.css
application-450040973e510dd5062f8431400937f4.css.gz
application.css
application.css.gz
ckeditor-b7995683312f8b17684b09fd0b139241.pack
ckeditor.pack
filebrowser
images
plugins
skins
lang

and, apparently, plain ckeditor.js was just not there. In order to provide the necessary source files I had to explicitly add them to the precompile array for the production environment in config/environments/production.rb:

config.assets.precompile += %w( ckeditor/* )

Happy end

The necessary files showed up after I ran the rake assets:precompile task, and the ckeditor fields rendered beautifully in the admin.

It's always best to stick to the newest gem versions possible, but when that option isn't available, perhaps this article will help!

Git as rsync

I had a quick-and-dirty problem to solve recently:

The clients had uploaded many assorted images to a development camp, but the .gitignore meant those updates were not picked up when we committed and pushed and rolled out to the live site. Normally, one would just rsync the files, but for various reasons this was not practical.

So my solution, which I think can get filed under "Stupid 'git' tricks (as opposed to Tricks of a Stupid Git)":

(on the source repo)

$ git checkout -b images_update
$ git add -f path-to-missing-images
$ git commit -m "Do not push me! I'm just a silly temporary commit"

("add -f" forces the images into the index, overriding our gitignore settings)

(on the target repo)

$ git remote add images /path/to/source/repo
$ git fetch images
$ git checkout -f images/images_update path-to-missing-images
$ git remote rm images
$ git reset HEAD path-to-missing-images

That last "git reset" is because the newly-restored images will be git-added by default, and we didn't want them committed to the central repo.

So what did we do here? For those dumbfounded by the level of silly, we used git to record the state of all the files in a certain path; then we pulled them back out into another location without disturbing anything already there. But as a benefit, we do have a record of what happened, in case we need to reproduce it.

Detecting Bufferbloat

Bufferbloat is a topic that has been gaining broader attention, but it is still not widely understood. This post will walk you through the basics of bufferbloat and how to determine if you are a victim of it.

A Brief Synopsis of the Bufferbloat Problem

The topic of bufferbloat has been explained far and wide, but I'll add to the conversation too, focusing on brevity. This summary is based on the highly informative and technical talk Bufferbloat: Dark Buffers in the Internet, a Google Tech Talk by Jim Gettys. There is an assumption in the design of TCP that if there is network congestion, there will be timely packet loss. This packet loss triggers well-designed TCP flow control mechanisms which can manage the congestion. Unfortunately, engineers designing consumer-grade routers and modems (as well as all sorts of other equipment) misunderstood or ignored this assumption and, in an effort to prevent packet loss, added large FIFO (first-in-first-out) buffers. If users congest a network chokepoint, typically an outgoing WAN link, the device's large buffers are filled with packets by TCP and held instead of being dropped. This "bufferbloat" prevents TCP from controlling flow and instead results in significant latency.
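
To get a rough sense of scale (illustrative numbers, not figures from the talk): a modem with a 256 KB buffer in front of a 1 Mbit/s uplink can queue roughly 256 × 8 ≈ 2,000 kbit of data, which takes about two seconds to drain. Once TCP fills that buffer, every packet, including interactive SSH or DNS traffic, waits behind a two-second queue instead of being dropped early so that TCP can back off.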

Detecting Bufferbloat

All that's required to experience bufferbloat is to saturate a segment of your network which has one of these large FIFO buffers. Again, the outgoing WAN link is usually the easiest place to do this, but it can also happen on low-speed WiFi links. I experienced this myself when installing Google's Music Manager, which proceeded to upload my entire iTunes library in the background, at startup, using all available bandwidth. (Thanks Google!) I detected the latency using mtr. Windows and OS X do not offer such a fancy tool, but you can simply ping your WAN gateway and see the lag.


Music Manager enabled, bufferbloat, slow ping to WAN gateway


Music Manager paused, no bufferbloat, fast ping to WAN gateway

Managing Bufferbloat

Unfortunately, there are no easy answers out there right now for many users. Often we cannot control the amount of bandwidth a piece of software will try to use, or the equipment given to us by an ISP. If you are looking for a partial solution to the problem, check out CeroWrt, a fork of the OpenWrt firmware for routers. It makes use of the best available technologies for combating bufferbloat. Additionally, be on the lookout for any software that might saturate a network segment, such as BitTorrent, Netflix streaming, or large file transfers over weak WiFi links.

Ghost Table Cells in IE9

What's this about ghosts?

I recently came across an arcane layout issue in my work on the new RJ Matthews site. The problem was specific to Internet Explorer 9 (IE9). The related CSS styles had been well tested and rendered consistently across a variety of browsers including IE7 and 8. Everything was fine and dandy until some new content was introduced into the page for a "Quickview" feature. While all of the other browsers continued to behave and render the page correctly, the layout would break in random and confusing ways in IE9. The following screenshots compare the correct layout with an example of the broken layout in IE9.

Correct grid layout:

Correct grid

Broken layout in IE9:

IE9 ghost cells

The Stage

The following is a list of the factors at work on the page in question:

  • Internet Explorer 9
  • Browser mode: IE9, Document mode: IE9 standards
  • Some content manipulation performed via JavaScript (and jQuery in this case)
  • Lots of table cells

Debugging

The page included a list of products. The first "page" of twelve results was shown initially while JavaScript split the rest of the list into several additional pages. Once this JavaScript pagination function was complete, users could cycle through products in bite-sized pieces.
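
As a rough illustration, that kind of client-side pagination can be as simple as the sketch below. The .product-grid and .product selectors and the page size are hypothetical; this is not the site's actual function:

// Minimal sketch: split product cells into "pages" of twelve and show only the first.
var PAGE_SIZE = 12;
var $products = $('.product-grid .product');

$products.each(function(i) {
  // Tag each product with the page it belongs to, and hide everything past the first page
  var page = Math.floor(i / PAGE_SIZE);
  $(this).attr('data-page', page).toggle(page === 0);
});

function showPage(page) {
  // Called by the next/previous controls to cycle through pages
  $products.hide().filter('[data-page="' + page + '"]').show();
}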

My first thought was that the issue might be related to CSS or JavaScript. I tested and debugged the styles thoroughly, tweaked them, and edited the underlying HTML structure to see if that might resolve the problem. I also tested the JavaScript and compared the original HTML with the parts which had been paginated via JavaScript. No dice.

When changes fixed the paginated HTML, the bug appeared in the initial HTML. Other changes resolved the issue in the original HTML, but then it appeared in the paginated HTML.

I inspected the table with the Chrome Developer Tools console and also with the Developer Tools in IE9. There did not appear to be any differences between the rows which rendered properly and those which were skewed.

Bugging Out

At this point I began to research the issue and discovered it was a bug in the IE9 browser. This Microsoft forum post describes the issue and includes responses from Microsoft stating that it will not be fixed. It also includes a sample application which demonstrates the issue. Thankfully, I tested and verified that the problem has been fixed in Internet Explorer 10.

This explained the many, seemingly random ways I had seen the grid break. At times the cells were squished and pushed to the left. This was because the ghost cell had been added at the end of a row. Other times the cells were shifted to the right (as seen in the screenshot above). In this case, the ghost cell had been added to the middle of a row.

The Fix

Further digging revealed that the issue was related to whitespace between table cells. The solution was fairly simple: use a regular expression to remove all whitespace between the table elements:

$('#problem-table').html(function(i, el) {
  return el.replace(/>\s*</g, '><');
});

With all of the whitespace removed from the affected <table>, IE9 rendered the page correctly.

Getting started with Heroku

It's becoming increasingly popular to host applications on a nice cloud-based platform like Engine Yard or Heroku.

Here is a little guide showing how to join the development of a Heroku-based project. In Heroku terms it's called "collaborating on the project". The official tutorial does provide answers to most of the questions, but I would like to enhance it with my thoughts and experiences.

First essential question: how to get your hands on the app source code?

I wish Heroku provided something like the DevCamps service, so you wouldn't need to go through the hassle of launching the application locally and dealing with the database and system processes needed for development. With Heroku, the code does need to be cloned to the local environment, like this:

$ heroku git:clone --app my_heroku_app

Second, how to commit the changes?

I got this error when trying to push to the repository:

! Your key with fingerprint xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx is not authorized
to access my_heroku_app.
fatal: The remote end hung up unexpectedly

Turned out I needed to add the new identity to my local machine.

Also, if you previously had a Heroku account with a different email address, it's essential to create a new SSH key just for the application you are collaborating on: Heroku does not allow the same SSH key to be used for different accounts.

Here is the full sequence:

$ ssh-keygen -t rsa -C "yourname@yourdomain.com" -f  ~/.ssh/id_rsa_heroku
$ ssh-add ~/.ssh/id_rsa_heroku

and, finally

$ heroku keys:add ~/.ssh/id_rsa_heroku.pub
$ git push heroku master

The code is not only pushed with this command, but it also gets immediately deployed on the server.

Finally, how to run the application console?

I use the application console a lot to debug, troubleshoot, and check things after a deployment.

With Heroku, it's the Heroku Toolbelt "run" command that triggers all the usual command-line routines. The "-a" parameter is necessary to specify the application:

$ heroku run -a my_heroku_app script/rails console

That's it! Nice & easy!

Install SSL Certificate from Network Solutions on nginx

Despite nginx serving pages for 12.22% of the web's million busiest sites, Network Solutions does not provide instructions for installing SSL certificates on nginx. This article provides the exact steps for chaining the intermediary certificates for use with nginx.

Chaining the Certificates

Unlike Apache, nginx does not allow intermediate certificates to be specified in a separate directive, so we must combine the server certificate, the intermediates, and the root in a single file. The zip file provided by Network Solutions contains a number of certificates, but no instructions on the order in which to chain them together. Network Solutions' instructions for installing on Apache provide a hint, but let's make it clear.

cat your.site.com.crt UTNAddTrustServer_CA.crt NetworkSolutions_CA.crt > chained_your.site.com.crt

This follows the general convention of "building up" to a trusted "root" authority by appending each intermediary. In this case UTNAddTrustServer_CA.crt is the intermediary while NetworkSolutions_CA.crt is the parent authority. With your certificates now chained together properly, use the usual nginx directives to configure SSL.

listen                 443;
ssl                    on;
ssl_certificate        /etc/ssl/chained_your.site.com.crt;
ssl_certificate_key    /etc/ssl/your.site.com.key;

As always, make sure your key file is secure by giving it minimal permissions.

chmod 600 your.site.com.key

I hope this little note helps nginx users looking to install a Network Solutions SSL certificate.

jQuery Performance Tips: Slice, Filter, parentsUntil

I recently wrote about working with an intensive jQuery UI interface to emulate highlighting text. During this work, I experimented with jQuery optimization quite a bit. In the previous blog article, I mentioned that in some cases the number of DOM elements I was traversing exceeded 44,000, which caused significant performance issues in all browsers. Here are a few things I was reminded of, or learned, throughout the project.

  • console.profile, console.time, and the Chrome timeline are all tools that I used during the project to some extent. I typically used console.time the most to identify which methods were taking the most time.
  • Caching elements is a valuable performance tool, as it's typically faster to run jQuery calls on a cached jQuery selector rather than reselecting the elements. Here's an example:
    Slower:
    //Later in the code
    $('.items').do_something();

    Faster:
    //On page load
    var cached_items = $('.items');
    //Later in the code
    cached_items.do_something();
    
  • The jQuery .filter operator came in handy, and gave a bit of a performance bump in some cases.
    Slower:
    $('.highlighted');

    Faster:
    cached_items.filter('.highlighted');
    
  • jQuery slicing from a cached selection was typically much faster than reselecting or selecting those elements by class. If retrieving slice boundaries is inexpensive and there are a lot of elements, slice was extremely valuable.
    Slower:
    cached_items.filter('.highlighted');

    Faster:
    cached_items.slice(10, 100);
    
  • Storing data on elements was typically faster than parsing an id out of the HTML markup (class or id). If it's inexpensive to add data values to the HTML markup, this was a valuable performance tool. Note that it's important to test whether the jQuery version in use automatically parses the data value to an integer (a small sketch of that check follows this list).
    Slower:
    //given <tt id="r123">
    var slice_start = parseInt($('tt#r123')
                        .attr('id')
                        .replace(/^r/, ''));

    Faster:
    //given <tt id="r123" data-id="123">
    var slice_start = $('tt#r123')
                        .data('id');
    
  • Advanced jQuery selectors offered a performance gain over jQuery iterators. In the example below, it's faster to use the :has and :not selectors combined rather than iterating through each parent.
    Slower:
    $.each($('p.parent'), function(i, el) {
      //if el does not have any visible spans
      //  do_something()
    });

    Faster:
    $('p.parent:not(:has(span:visible))')
      .do_something();
    
  • The jQuery parentsUntil method was a valuable tool for avoiding a search of the entire document or a large set of elements. In cases where the children were already defined in a subset selection, I used parentsUntil to select all parents up to a specific DOM element.
    Slower:
    $('#some_div p.parent:not(:has(span:visible))')
      .do_something();

    Faster:
    subset_of_items
      .parentsUntil('#some_div')
      .filter(':not(:has(span:visible))')
      .do_something();
    
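Following up on the data-attribute tip above, here is a quick way to guard against jQuery versions that do or do not coerce data values to numbers (hypothetical markup, not code from the project):

//given <tt id="r123" data-id="123">
var raw = $('tt#r123').data('id');

// Newer jQuery versions convert numeric data-* strings to numbers automatically;
// older ones return the raw string, so normalize before using it as a slice index.
var slice_start = (typeof raw === 'number') ? raw : parseInt(raw, 10);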

The best takeaway I can offer here is that it was almost always more efficient to work with as precise a set of selected elements as possible, rather than reselecting from the whole document. The various methods such as filter, slice, and parentsUntil helped define that precise set of elements.