Welcome to End Point's blog

Ongoing observations by End Point people.

Oceanographic Museum of Monaco Liquid Galaxy

In December End Point installed a Liquid Galaxy at the spectacular and renowned Musée Océanographique de Monaco, which is breathtakingly situated on a cliff overlooking the Mediterranean. The system, donated by Google, will be officially presented on January 21st to H.S.H. Prince Albert II of Monaco, who is the Honorary President of the Oceanographic Institute of which the museum is a major part.

https://plus.google.com/photos/116748432819762979484/albums/5824395228310553057/5824395228004170738

End Point set up and configured the system. Our expertise was also called on by Google to create and adapt Google Earth Tours focused on the world's oceans, including a tour about Ocean Acidification. In addition, End Point engineers developed a customized panoramic photo viewer for the remarkable Catlin Seaview Survey panoramas, which document and provide a baseline dataset for the earth's coral reefs.

Many thanks are due to Jenifer Austin Foulkes, Google's Ocean Program Manager, and to Jason Holt of Google for their work in supporting this project.

It is difficult to speak highly enough about the Musée Océanographique de Monaco. Prince Albert I of Monaco was an oceanographer himself and created the museum in 1901 with a vision of bringing art and ocean science together. The royal family of Monaco has maintained its support of ocean science through the years, and Monaco today is a center for oceanography and for organizations concerned with the state of the world's oceans. The museum has a wonderful aquarium, compelling exhibits, and is actively engaged in ocean science. I wasn't aware of it before visiting the museum, but Jacques Cousteau was the Director of the Museum for over thirty years. I always liked that guy. :-)

France 3 TV covered the public opening of the Musée Océanographique de Monaco's Liquid Galaxy: http://cote-d-azur.france3.fr/2012/12/22/le-musee-oceanographique-de-monaco-accueille-le-simulateur-liquid-galaxy-de-google-169527.html

It's been a tremendous privilege to be a part of this project and I am happy to close out 2012 with this blog post reflecting on it. In 2013 End Point will continue to support this wonderful addition to the museum with remote system and content updates.

Photo provided by Olivier Dufourneaud

Piggybak: End of Year Update

Over the last few months, my coworkers and I have shared several updates on Piggybak progress (October 2012 Piggybak Roadmap, November 2012 Piggybak Roadmap Status Update). Piggybak is an open source Ruby on Rails ecommerce platform, mountable as a Rails engine, developed and maintained by End Point. Here's a brief background on Piggybak followed by an end of year update with some recent Piggybak news.

A Brief Background

Over the many years that End Point has been around, we've amassed a large amount of experience working with various ecommerce frameworks, open source and proprietary. A large portion of End Point's recent development work (we also offer database, hosting, and Liquid Galaxy support) has been with Interchange, a Perl-based open source ecommerce framework, and Spree, a Ruby on Rails-based open source ecommerce framework. Things came together for Piggybak earlier this year when a new client project prompted the need for a more flexible and customizable Ruby on Rails ecommerce solution. Piggybak also leveraged earlier work I did with lightweight Sinatra-based cart functionality.

Jump ahead a few months, and now Piggybak is a strong base for an ecommerce framework, with several extensions providing advanced ecommerce features. Features already built and reported on include real-time shipping lookup (USPS, UPS, and FedEx support), an improved Piggybak installation process, and gift certificate, discount, and bundle discount support.

Recent Progress

Since the last general update, we've tackled a number of additional changes:

  • SSL support: The Piggybak core now supports SSL for checkout, leveraging the lightweight rack-ssl-enforcer gem. A Piggybak config variable specifying that checkout should be secure must be set to true in the main Rails application, which then enforces HTTPS on the checkout pages. This configuration is not ideal if the main Rails application requires more complex management of secure pages.
  • Minor bug fixes & cleanup: The updates below include minor refactoring and/or bug fixes to the Piggybak core:
    • Moved order confirmation email delivery out of the controller, so that a failure to send the confirmation email does not break order processing.
    • RailsAdmin DRY cleanup
    • Abilities (CanCan) cleanup to require less manual coding, simplifying the CanCan ability model.
    • Breakdown of orders/submit.html.erb, which allows for easier override of checkout page elements.
    • Tax + coupons bug fixes.
    • RailsAdmin upgraded to a recent version.
  • Heroku tutorial: Running Piggybak on Heroku was described in this blog article.
  • Advanced taxonomy or product organization: An extension for advanced product organization (e.g. categories, subcategories) was released, but we still plan to add more documentation regarding its functionality and use.
  • Bundle discount support: Another extension for bundle discount support was released. Bundle discount offers the ability to give customers discounts when a bundle or set of products has been added to the cart. Barrett shared his experiences in creating this extension in this article.
  • Fancy jQuery tour: I wrote about a new Piggybak demo tour that I created for checking out the features of Piggybak.
  • Advanced product optioning: Another extension for advanced product option support (e.g. size, color) was released a couple of months ago, but this recent article provides more documentation on its functionality and use.
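As a sketch of the SSL support mentioned above: rack-ssl-enforcer is configured as Rack middleware, and can restrict HTTPS enforcement to a subset of paths. The exact regexp and config location below are illustrative, not Piggybak's internal setup.

```ruby
# config/application.rb (illustrative; requires the rack-ssl-enforcer gem)
# Force HTTPS only for checkout pages, leaving the rest of the site on HTTP.
config.middleware.use Rack::SslEnforcer, only: [%r{^/checkout}], strict: true
```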

What's Next?

At this point, one of our big goals is to grow the Piggybak portfolio and see many of the extensions in action. We'd also like to improve the Piggybak core and extension documentation to help get folks up and running on Piggybak quickly. In addition to documentation and portfolio growth, additional features we may focus on are:

  • Product reviews & ratings support
  • Saved address/address book support
  • Wishlist, saved cart functionality

A few large features that are on our wishlist that may need client sponsorship for build-out are:

  • Multiple shipping addresses per order: This allows users to select multiple shipping addresses per order. I implemented this functionality for Paper Source just over a year ago. It would likely be developed as an extension that requires several non-trivial Piggybak core overrides.
  • Subscription support: The Piggybak Google Group has expressed interest in subscription support, which also is not trivial.
  • Point-based credit support
  • Multi-store architecture: End Point is very familiar with multi-store architecture, which allows multiple stores to be supported via one code base. I shared some of the options in this blog article.
  • One deal at a time support: This is another popular feature that End Point has been involved with for Backcountry.com sites Steep and Cheap, WhiskeyMilitia.com, and Chainlove.com.

Get Involved

If you are interested in helping develop Piggybak, don't hesitate to jump on the Piggybak Google Group or tackle one of the Piggybak GitHub issues.

If you are interested in getting your site up and running using Piggybak, contact End Point right now!

Find your Perl in Other Shells

Often when programming, it turns out the best tools for the job are system tools, even in an excellent language like Perl. Perl makes this easy with a number of ways you can allocate work to the underlying system: backtick quotes, qx(), system(), exec(), and open(). Virtually anyone familiar with Perl is familiar with most or all of these ways of executing system commands.
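As a quick refresher, those forms look roughly like this (a minimal sketch; the commands and the list/single-string distinction matter later in this post):

```perl
# Each line is an independent example, not one program to run top to bottom.
my $out  = `ls -l`;              # backticks capture STDOUT
my $out2 = qx(ls -l);            # qx() is the same as backticks
system('ls', '-l');              # run and wait; returns the exit status
open(my $fh, '-|', 'ls', '-l')   # read the command's output as a filehandle
    or die "cannot fork: $!";
exec('ls', '-l');                # replace the current process; never returns on success
```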

What's perhaps less familiar, and a bit more subtle, is what Perl really does when handing these off to the underlying system to execute. The docs for exec() tell us the following:

       exec LIST
       exec PROGRAM LIST
[snip]
            If there is more than one argument in LIST, or if LIST is an
            array with more than one value, calls execvp(3) with the
            arguments in LIST.  If there is only one scalar argument or an
            array with one element in it, the argument is checked for shell
            metacharacters, and if there are any, the entire argument is
            passed to the system's command shell for parsing (this is
            "/bin/sh -c" on Unix platforms, but varies on other platforms).

That last parenthetical is a key element when we "shell out" and expect certain behavior. Perl is going to use /bin/sh. But I don't; I use /bin/bash. And I am very happy to ignore this divergence ... until I'm not.

Without considering any of these issues, I had leveraged a shell command to do a nifty table comparison between supposedly replicated databases to find where the replication had failed. The code in question was the following:

$ diff -u \
> <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung) \
> <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung2)

The above code produced exactly the results I was looking for, so I integrated it into my Perl code via qx() and ran it. Doing so produced the following surprising result:

$ ./foo.pl
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `diff -u <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung) <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung2)'

I worked with an End Point colleague and figured out the problem: <() is process substitution, which is supported by bash but not by the Bourne shell (sh). Further, there is no way to instruct Perl to use /bin/bash as its target shell.
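You can confirm the difference at the command line: invoking /bin/bash explicitly with -c makes process substitution work regardless of what /bin/sh points to, which is exactly what the Perl fix does via open().

```shell
# Process substitution <( ) is a bashism; force /bin/bash to guarantee it works.
/bin/bash -c 'diff <(printf "a\nb\n") <(printf "a\nb\n") && echo "bash: OK"'
```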

In order to access a different shell and leverage the desired features, I had to use the invocation described for PROGRAM LIST, but identify the shell itself as my program. While I'm unaware of any way to accomplish this with backticks, I was certainly able to do so using Perl's open():

my $cmd = q{diff -u <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung) <(psql -c "COPY (SELECT * FROM foo ORDER BY foo_id) TO STDOUT" -U chung chung2)};

open(my $pipe, '-|', '/bin/bash', '-c', $cmd)
    or die "Error opening pipe from diff: $!";
while (<$pipe>) {
    # do my stuff
}
close($pipe);

Now results between command line and invocation from Perl are consistent. And now I understand for future reference how to control which shell I find my Perl in.

Redirect from HTTP to HTTPS before basic auth

While reviewing PCI scan results for a client, I found that the scanner flagged a private admin URL for requesting HTTP basic auth over plain HTTP. The admin portion of the site has its own authentication method and is served completely over HTTPS. We have a second layer of protection with basic auth, but the problem is that the basic auth username and password could be snooped, since the URL can also be reached via HTTP.

My initial research and attempts at fixing the problem did not work out as intended, until I found this blog post on the subject. It laid out all of the approaches I had already tried, and then presented a new solution.

I followed the recommended hack, which is to use SSLRequireSSL in a Location block matching the admin, plus a custom 403 ErrorDocument. That 403 ErrorDocument does a bit of munging of the URL and redirects from HTTP to HTTPS. The instructions in the blog did have one issue: in our environment I could not serve the 403 document from the admin; I had to put it in an area accessible over plain HTTP and by the public. I'm not sure how it could work served from a URL that requires SSL and is protected by basic auth. The reason this hack works is that SSLRequireSSL is processed before any auth requirements, and the 403 ErrorDocument is served when SSL is not being used.
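The setup described above might look like the following Apache sketch. The paths, realm name, and error document location are illustrative, not our client's actual configuration; note that the 403 document lives outside the protected Location so it is reachable over plain HTTP.

```apache
<Location /admin>
    SSLRequireSSL
    # SSLRequireSSL is evaluated before auth, so over HTTP the 403
    # document below is served before basic auth is ever requested.
    AuthType Basic
    AuthName "Admin"
    AuthUserFile /etc/httpd/admin.htpasswd
    Require valid-user
    ErrorDocument 403 /errors/require-ssl.html
</Location>
# /errors/require-ssl.html is publicly accessible over HTTP and contains
# the snippet that rewrites the current URL from http:// to https://.
```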

Now hopefully the scanner will be happy (as happy as a scanner can be), since HTTPS is always required when /admin appears in the URL, and an error is presented otherwise, before basic auth is ever requested.

Announcing Ruby gem: email_verifier

How many times have you tried to provide a really nice validation solution for your fields containing user emails? Most of the time, the best we can come up with is some long and incomprehensible regex we found on Stack Overflow or somewhere else on the Internet.

But that's really only a partial solution. As tricky as it is to get email format correctness right using regular expressions, a format check doesn't provide any assurance that the email address a user entered actually exists.

But it does a great job of catching some typos and misspellings... right?

Yes, but I'd argue that it doesn't cover the full range of such data entry errors. A user could type 'whatever' and traditional regex validation would do a great job of flagging it as not really an email address. But what concerns me here are all those situations where I fat-finger kaml@endpoint.com instead of kamil@endpoint.com.

Some would argue at this point that it's still recoverable, since I can find out about the error on the next page in the submission workflow. But I don't want to spend several more minutes going through the whole process again (possibly filling out tens of form fields along the way).

And look at this issue from the point of view of a web application owner: you'd like to be sure that all those leads in your database point to real people, and that some percentage of them will eventually pay you real money, making you a living. What if even 10% of those email addresses are invalid (well-formed, but pointing to no real mailbox) due to user error? What would that potentially mean to you in cash?

The Solution

Recently, I faced this email validation question for mobixa.com. (By the way, if you own a smartphone that you'd like to sell, there is no better place than mobixa.com to do it!)

I'd like to announce the result of that work here and now. Please give a warm welcome to a newborn citizen of the RubyGems society: email_verifier

How does it work?

Email verifier takes a different approach to email validation. Instead of checking just the format of the address in question, it actually connects to the mail server and pretends to send a real mail message. You could call it 'asking the mail server whether the recipient exists'.
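The idea can be sketched in plain Ruby with the standard library. This is a hypothetical illustration of the approach, not the gem's internals: look up the domain's MX record, open an SMTP session, and issue MAIL FROM / RCPT TO without ever sending message data. The method name and default sender address are my own inventions.

```ruby
require 'resolv'
require 'net/smtp'

def mailbox_exists?(address, from: "realname@realdomain.com")
  _local, domain = address.split("@")
  return false unless domain                       # not even shaped like an email
  # Find the lowest-preference MX host for the domain
  mx = Resolv::DNS.open { |dns| dns.getresources(domain, Resolv::DNS::Resource::IN::MX) }
                  .min_by(&:preference)
  return false unless mx
  Net::SMTP.start(mx.exchange.to_s, 25) do |smtp|
    smtp.mailfrom(from)   # present ourselves as a real sender
    smtp.rcptto(address)  # a 250 reply here means the mailbox is accepted
  end
  true
rescue StandardError
  false # 550 "mailbox unavailable", DNS failure, timeout, etc.
end
```

Note that some servers accept every RCPT TO (catch-all domains), so a true result means "not provably wrong" rather than "guaranteed deliverable".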

How to use it?

Add this line to your application's Gemfile:

gem 'email_verifier'

And then execute:

$ bundle

Or install it yourself as:

$ gem install email_verifier

Some SMTP servers will not allow the check unless you present yourself as a real user. So the first thing you need to do is put something like this in an initializer or in your application.rb file:

EmailVerifier.config do |config|
  config.verifier_email = "realname@realdomain.com"
end

Then add this to your model:

validates_email_realness_of :email

Or - if you'd like to use it outside of your models:

EmailVerifier.check(youremail)

This method will return true or false, or will throw an exception with nicely detailed info about what's wrong.

Read More about the extension at Email verifier RDoc or try to sell your smartphone back here at Mobixa.com.

Verify Addresses the Easy Way with SmartyStreets


Adding an address form is a pretty common activity in web apps, and even more so in ecommerce web apps. Validations on forms allow us to guide the user to fill out all required fields and to make sure the fields conform to basic formats. Until now, going further and verifying that an address actually exists in the real world was a difficult enough task that most developers wouldn't bother with it. Imagine, though, the cost to the merchant who ships something to the wrong state because the customer accidentally selected "SD" (South Dakota) when they thought they were selecting "SC" (South Carolina), a simple enough mistake to make and one that wouldn't be caught by most address forms. In today's ecommerce world customers expect deliveries to be fast and reliable; in this case the customer would have to wait until the package is returned to the merchant marked "Address Unknown", only to wait even longer for the reshipment. Even worse for the merchant, maybe the package never gets returned.

SmartyStreets is a web API that I recently implemented for our client Mobixa, a web app that lets people sell their used mobile phones. Mobixa sends shipping labels and payment checks to customers so that they can mail in their phones and get paid. Wrong addresses can delay the whole process, and Mobixa wanted to reduce the number of bad addresses being keyed in by customers. SmartyStreets provides an easy way for developers to add address verification to their web forms, so that addresses are verified against the US Postal Service's record of valid addresses. SmartyStreets is CASS-certified, meaning it meets USPS accuracy requirements.

The big advantage of SmartyStreets is that adding address verification to a form can be as easy as adding a link to their jQuery-based plugin and a script tag with your SmartyStreets API key. The plugin autodetects address fields, and once a minimum of three fields is entered (address, city, state), it displays an indicator sprite and sends an async request to the API for verification. Verification has three possible outcomes:

  1. The address is verified. A valid address displays a verified image next to the zip code, and all of the address fields are modified with a more "correct" address, "correct" being defined as what matches the USPS official record for the matched address. Zip codes are extended with the carrier route code, so "92806" becomes "92806-3433". An address modification might, for example, change "1735 pheasant" to "1731 N Pheasant St.". Casing and spelling corrections are also applied.

  2. The address is ambiguous. An ambiguous address is one that returns multiple matches. Say, for example, that "1 Rosedale St." could be either "1 N Rosedale Street" or "1 S Rosedale Street". In this case a popup is displayed which allows the user to select the correct address, or to override the suggestions and continue with the address they entered.

  3. The address is invalid. The user is informed that the address is invalid, and both the invalid and the ambiguous cases offer the user two additional choices. "Double check the address" reruns the validation after the user has modified the address. "I certify that what I typed is correct" allows the user to continue with the address as typed. This last choice is important because it gives the user the power to proceed with what they want, instead of forcing them to conform to the address validation.

Checks are performed when SmartyStreets senses it has enough address information to run a search. During this time the form's submit button is disabled until the check completes. Once a check has run, it will not run again unless the user elects to "Double check" the address; this is a good design choice that prevents the user from getting stuck in an infinite loop of sorts.

Our implementation of SmartyStreets for Mobixa included customizing it to do things a little differently than the out-of-the-box defaults. The jQuery plugin comes with many events and hooks for customization, and if you want to go your own way you can implement everything on the frontend yourself, save for the API call. The documentation on the website is useful, and the developers of SmartyStreets conveniently answered my questions via an Olark chat window.

The costs of SmartyStreets are the time to implement it in your app, a monthly fee based on the number of API calls, and a slight change to your UI flow: the user must wait for the API call to complete before submitting the form. I don't always implement validation when I have an address form in an app, but when I do, I like to use SmartyStreets.

SFTP virtual users with ProFTPD and Rails: Part 1

I recently worked on a Rails 3.2 project that used the sweet Plupload JavaScript/Flash upload tool to upload files to the web app. To make it easier for users to upload large and/or remote files to the app, we also wanted to let them upload via SFTP. The catch was, our users didn't have SFTP accounts on our server, and we didn't want to get into the business of creating and managing SFTP accounts. Enter: ProFTPD and virtual users.

ProFTPD's virtual users concept allows you to point ProFTPD at a SQL database for your user and group authentication. This means SFTP logins don't need actual system logins (although you can mix and match if you want). Naturally, this is perfect for dynamically creating and destroying SFTP accounts. Give your web app the ability to create disposable SFTP credentials and automatically clean up after the user is done with them, and you have a self-maintaining system.

Starting from the inside-out, you need to configure ProFTPD to enable virtual users. Here are the relevant parts from our proftpd.conf:

##
# Begin proftpd.conf excerpt. For explanation of individual config directives, see the 
# great ProFTPD docs at http://www.proftpd.org/docs/directives/configuration_full.html
##
DefaultServer off
Umask 002
AllowOverwrite on

# Don't reference /etc/ftpusers
UseFtpUsers off



# Enable SFTP
SFTPEngine on

# Enable SQL based authentication
SQLAuthenticate on

# From http://www.proftpd.org/docs/howto/CreateHome.html
# Note that the CreateHome params are kind of touchy and easy to break.
CreateHome on 770 dirmode 770 uid ~ gid ~

# chroot them to their home directory
DefaultRoot ~

# Defines the expected format of the passwd database field contents. Hint: An
# encrypted password will look something like: {sha1}IRYEEXBUxvtZSx3j8n7hJmYR7vg=
SQLAuthTypes OpenSSL

# That '*' makes that module authoritative and prevents proftpd from
# falling through to system logins, etc
AuthOrder mod_sql.c*

# sftp_users and sftp_groups are the database tables that must be defined with
# the following column names. You can have other columns in these tables and
# ProFTPD will leave them alone. The sftp_groups table can be empty, but it must exist.
SQLUserInfo sftp_users username passwd uid sftp_group_id homedir shell
SQLGroupInfo sftp_groups name id members

SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPHostKey /etc/ssh/ssh_host_dsa_key

SFTPCompression delayed
SFTPAuthMethods password
RequireValidShell no

# SQLLogFile is very verbose, but helpful for debugging while you're getting this working
SQLLogFile /var/log/proftpd_sql.sql

## Customize these for production
SQLConnectInfo database@localhost:5432 dbuser dbpassword

# The UID and GID values here are set to match the user that runs our web app because our
# web app needs to read and delete files uploaded via SFTP. Naturally, that is outside
# the requirements of a basic virtual user setup. But in our case, our web app user needs
# to be able to cd into a virtual user's homedir, and run a `ls` in there. Also, note that
# setting these two IDs here (instead of in our sftp_users table) *only* makes sense if
# you are using the DefaultRoot directive to chroot virtual users.
SQLDefaultUID  510
SQLDefaultGID  500


The CreateHome piece was the trickiest to get working just right for our use case, for two reasons: we needed our web app to be able to read/delete the uploaded files, and we wanted ProFTPD to create those home directories itself. (It only creates a home directory once a user successfully logs in via SFTP. That means you can be liberal in your UI about generating credentials that may never get used, without worrying about a ton of empty home directories lying about.)
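For reference, the tables named by the SQLUserInfo and SQLGroupInfo directives above might look like this. This is a hedged sketch in PostgreSQL: the column names must match the directives, but the types and extra columns are up to you.

```sql
-- Hypothetical schema matching "SQLUserInfo sftp_users username passwd uid
-- sftp_group_id homedir shell" and "SQLGroupInfo sftp_groups name id members".
CREATE TABLE sftp_users (
    id            serial PRIMARY KEY,
    username      varchar(255) NOT NULL UNIQUE,
    passwd        varchar(255) NOT NULL,  -- e.g. '{sha1}IRYEEXBUxvtZSx3j8n7hJmYR7vg='
    uid           integer,                -- unused here; SQLDefaultUID applies
    sftp_group_id integer,
    homedir       varchar(255) NOT NULL,
    shell         varchar(255) DEFAULT '/bin/false'
);

-- Can be empty, but must exist.
CREATE TABLE sftp_groups (
    id      serial PRIMARY KEY,
    name    varchar(255) NOT NULL,
    members varchar(255)
);
```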

That's it for the introductory "Part 1" of this article. In Part 2, I'll show how we generate credentials, the workflow behind displaying those credentials, and our SftpUser ActiveRecord model that handles it all. In Part 3, I'll finish up by running through exactly how our web app accesses these files, and how it cleans up after it's done.

Advanced Product Options (Variants) in Piggybak

About a month ago, Tim and I developed and released a Piggybak extension piggybak_variants, which provides advanced product optioning (or variant) support in Piggybak. Piggybak is an open source Ruby on Rails ecommerce platform developed and maintained by End Point. Here, I discuss the background and basics of the extension.

Motivation & Background

The motivation for this extension was the common ecommerce need for product options (e.g. size, color), where each variation shares high-level product information such as a title and description, but variants have different options, quantities available, and prices. Being intimately familiar with Spree, another open source Ruby on Rails ecommerce framework, we decided to borrow from Spree's product optioning data model after seeing its flexibility succeed across many projects. The resulting model is similar to Spree's, but differs a bit owing to Piggybak's mountability design.


Spree's data model for advanced product optioning. A product has many variants. Each variant has and belongs to many option values. A product also has many options, which define which option values can be assigned to it.

Piggybak Variants Data Model


Option configuration data model in Piggybak

The data model starts with option configurations, which specify which class an option belongs to. For example, a Shirt model may have options Size and Color, and this is stored in the option configurations table. Each option has a name (e.g. Size, Color) and a position for sorting (e.g. 1, 2). The option configuration references an option and assigns a klass to it (in this case Shirt). Another example would be a Picture Frame with option configurations for Dimensions and Finish.


Option value configuration in Piggybak

After option configurations are defined, one defines option values for each option configuration. For example, the option Color might have values Red, Blue, and Green with positions 1, 2, and 3, and the option Size might have values Small, Medium, and Large with positions 1, 2, and 3.


After options, option configurations, and option values are defined, we are ready to create our variants. Per the above data model, a variant has and belongs to many option values (through the option_values_variants join table) and must have one value per option. In our Shirt example, a variant must have one Color option value and one Size option value assigned to it. A variant belongs to a specific sellable item (Shirt) through a polymorphic relationship, which is consistent with Piggybak's mountability design of allowing different classes to be sellable items. Finally, a variant has_one piggybak_sellable and accepts piggybak_sellable attributes in a nested form, meaning each variant has one sellable record containing quantity, pricing, and cart description information.

What this gives us is a sellable item (Shirt) with many variants, where each variant has option values as well as sellable information such as quantity available, price, and description in cart. Below I'll provide a few screenshots of what this looks like in the admin and front-end interface.
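The relationships described above can be sketched in plain Ruby, with Structs standing in for the ActiveRecord models. This is purely illustrative of the data model, not the extension's code.

```ruby
# Structs stand in for Option, OptionValue, and Variant records.
Option      = Struct.new(:name, :position)          # e.g. Size, Color
OptionValue = Struct.new(:option, :name, :position) # e.g. Size/Small
Variant     = Struct.new(:option_values, :price, :quantity)

size  = Option.new("Size", 1)
color = Option.new("Color", 2)
small = OptionValue.new(size, "Small", 1)
red   = OptionValue.new(color, "Red", 1)

# A variant carries exactly one value per option, plus sellable data.
variant = Variant.new([small, red], 19.99, 10)

# Display option values sorted by their option's position (Size before Color):
names = variant.option_values.sort_by { |ov| ov.option.position }.map(&:name)
# => ["Small", "Red"]
```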

How to Use the Plugin

To install the extension, the following steps must be applied:

  1. Add the gem to the Gemfile and run bundle install
  2. Install and run the extension migrations with rake piggybak_variants:install:migrations and rake db:migrate
  3. Add acts_as_sellable_with_variants to any model that should have variants. You may need to add appropriate attr_accessible settings in your model as well, depending on your attribute accessibility settings.
  4. In the admin, define option configurations and option values for each option, then create variants for your sellable instances.
  5. Finally, add <%= variant_cart_form(@instance) %> to your sellable item's show page to render the cart form.

These steps are similar to Piggybak's core behavior for adding non-variant sellable items.
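Step 3 above might look like the following sketch; the class name and attributes here are illustrative, and only acts_as_sellable_with_variants comes from the extension itself.

```ruby
# Hypothetical sellable model with variants enabled.
class Frame < ActiveRecord::Base
  acts_as_sellable_with_variants
  attr_accessible :name, :description  # adjust per your attribute accessibility settings
end
```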

Screenshots

The Piggybak demo uses this extension for selling several product options of photography frames. The images and captions below represent the variants extension for this use case.


The Frame class has two options assigned to it (Frame Size and Frame Finish). Since Frame Size has a position equal to one and Frame Finish has a position equal to two, Frame Size will show as the first option on the product page.


The Frame Finish option is assigned to the Frame class and it has four option values (Black, Cherry, Bronze, and Iron).


On the Frame edit page, 8 variants are created to represent the combinations of 2 Frame Sizes and 4 Frame Finishes.
Each variant has pricing, quantity, and cart description information, as well as additional sellable fields.


And the product page shows the options and option values for that item, displayed based on Position and Size data.
When each option value is triggered, appropriate pricing information is displayed.

Conclusion

The goal of this extension was to provide variant functionality that is not necessarily required to be used with Piggybak. Piggybak can still be leveraged without this extension to provide simple single product option add to cart functionality. The Piggybak cart only examines what elements are in the cart based on the sellable_id and the quantity, which is the driving force of the core Piggybak architecture as well as this extension.

Stay tuned for additional updates to the Piggybak Ruby on Rails Ecommerce platform or contact End Point today to start leveraging our Ruby on Rails ecommerce expertise on your next project!

Lazy AJAX

Don't do this, at least not without a good reason. It's not the way to design AJAX interfaces from scratch, but it serves well in a pinch, when you have an existing CGI-based page and you don't want to spend a lot of time rewriting it.

I was in a hurry, and the page involved was a seldom-used administration page. I was attempting to convert it into an AJAX-enabled setup, wherein the page would stand still, but various parts of it could be updated with form controls, each of which would fire off an AJAX request, and use the data returned to update the page.

However, one part of it just wasn't amenable to this approach, or at least not quick-and-dirty. This part had a relatively large amount of inline interpolated (Interchange) data. (If you don't know what Interchange is, substitute "PHP" in that last sentence and you'll be close enough.) I wanted to run the page back through the server-side processing, but I only cared about (and would discard all but) one element of the page.

My lazy-programmer's approach was to submit the page itself as an AJAX request:

$.ajax({
    url: '/@_MV_PAGE_@',
    data: {
        'order_date': order_date,
        'shipmode' : shipmode
    },
    method: 'GET',
    async: true,
    success: function(data, status){
        $('table#attraction_booklet_order').replaceWith(
            $(data).find('#attraction_booklet_order').get(0)
        );
        $('table#attraction_booklet_order').show();
    }
}); 

In this excerpt, "MV_PAGE" is a server-side macro that evaluates to the current page's path. The element I care about is a rather complex HTML table containing all sorts of interpolated data. So I'm basically reloading the page, or at least that restricted piece of it. The tricky bit, unfamiliar to jQuery newcomers, is that you can parse out something from the returned document much as you would from the current document.

Again, don't do this without a reason. When I have more time, I'll revisit this and improve it, but for now it's good enough for the current needs.

tmux and SecureCRT settings

Richard gave me a call today to show the wonders of tmux. Unfortunately, right off the bat I couldn't see color and there were a bunch of accented a's dividing the panes. After some trial and error, and finding this post on the subject, we got it working. The key is to configure SecureCRT to use xterm + ANSI colors, set the character set to UTF-8, and enable "Use Unicode line drawing code points". Hooray! I'll be trying out tmux in day-to-day use to see if it will replace or augment screen for me.

Update Your (Gnu) Screen Config on the Fly

An Indispensable Tool

I use Screen constantly in my work at End Point. It is an indispensable tool that I would not want to operate without. It's so handy to resume where I left off after I've detached or when my connection drops unexpectedly. This is likely preaching to the choir but if you are not already using Screen and/or tmux, start now.

The Scenario

I often find myself in the following situation:

  1. SSH into a server
  2. Fire up a new Screen session
  3. Create several windows for editing files, tailing logs etc
  4. Realize the default Screen configuration is inadequate or does not exist.
  5. Facepalm \O/

While my needs are fairly minimal, I do like to bump up the scrollback buffer and display the list of windows in the status line.

Screen example
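For reference, those two tweaks look something like this in a minimal ~/.screenrc (a sketch; the hardstatus format string is just one common choice, so adjust to taste):

```
# ~/.screenrc -- minimal example

# Increase the scrollback buffer (the default is only 100 lines)
defscrollback 5000

# Display the window list in a status line at the bottom of the screen
hardstatus alwayslastline "%{= kw}%-w%{= BW}%n %t%{-}%+w"
```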

There are a couple of options at this point. I could put up with the default / non-existent configuration or create a config file and manually re-create the session and all of the windows to pick up the configuration changes. Neither of these options was desirable. I wanted to be able to update the configuration and have all of the existing windows pick up the changes. After asking around a little I ended up taking a look at the manual and discovered the `source` command.

Use the source (command)

The source command can be used to load or reload a Screen configuration file. It can be invoked from inside a Screen session like so:

C-a :source /absolute/path/to/config_file

It is important to note that you must use the absolute path to the config file. There are exceptions, which can be found in the Screen man pages, but I found it easier to just use the absolute path. Once the source command has been issued, the configuration will be applied to all existing windows! This was exactly what I was looking for. Armed with this information, I copied my local .screenrc to the system clipboard, created a new config file on the server, and applied it to my session using the `source` command.

Works with tmux too

I like to use tmux as well and was happy to find it has a similar feature. The source-file command (`source` is aliased as well) is invoked in exactly the same way:

C-prefix :source /absolute/path/to/config_file

After issuing the source-file command, all of the windows and panes in the current session will pick up the configuration changes.

Changing the Default Directory

Another related issue I often run into is wishing I had started my Screen or tmux session in a different directory. By default, when you start a Screen or tmux session, all new windows (and panes) will be opened from the same directory where Screen or tmux was invoked. However, this directory can be changed for existing sessions.

For Screen, the chdir command can be used:

C-a :chdir /some/new/directory

In tmux, the default-path command can be used:

C-prefix :default-path /some/new/directory

After issuing the chdir or default-path commands, all new windows and panes will be opened from the specified directory.

I hope this has been helpful — feel free to add your own Screen and tmux tips in the comments!


Piggybak Extensions: A Basic How-To Guide

This article outlines the steps to build an extension for Piggybak. Piggybak is an open-source Ruby on Rails ecommerce platform created and maintained by End Point. It is developed as a Rails Engine and is intended to be mounted on an existing Rails application. If you are interested in developing an extension for Piggybak, this article will help you identify the steps you need to take to have your extension leveraging the Piggybak gem, and integrating smoothly into your app.

Introduction

The Piggybak platform is lightweight and relies on Rails metaprogramming practices to integrate new extensions. The best references to use alongside your development are the previously developed extensions found here.

It is likely that your extension will tie into the admin interface; Piggybak utilizes the RailsAdmin gem for this purpose.

Setting up the Development Environment

A convenient way to start building out your extension is to develop against the demo app found here. The demo app utilizes the Piggybak gem and comes with sample data to populate the e-commerce store.

The Piggybak demo app sample data is exported for a PostgreSQL database. To use this data (suggested), you should be prepared to do one of the following:

  • be using PostgreSQL and understand how to work with the existing data dump
  • transform this data dump to another database format that fits your database flavor of choice
  • ignore the sample data and create your own

Creating the Extension (Gem, Engine)

In a folder outside of the project utilizing the Piggybak gem, create a mountable Rails engine:

$ rails plugin new [extension_name] --mountable

The "mountable" option makes your engine namespace-isolated.

Next, update your app's Gemfile to include the extension under development:

gem "piggybak_new_extension", :path => "/the/path/to/the/extension"

Run `bundle install` to install the extension in your application, then restart your application.

Special Engine Configuration

Your extension will rely on the engine.rb file to integrate with Piggybak. A sample engine.rb for the piggybak_bundle_discounts extension can be found here. Let's walk through this file to see how bundle discounts are implemented as a Piggybak extension.

Make sure you are requiring any of your classes at the top of your engine.rb file, e.g.:

require 'piggybak_bundle_discounts/order_decorator'

The code below is decorating the Piggybak::Order class, which is a helpful pattern to use when you wish to enhance class capabilities across engines. In the bundle discount case, the decorator adds several active record callbacks.

config.to_prepare do
  Piggybak::Order.send(:include, ::PiggybakBundleDiscounts::OrderDecorator)
end
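In plain Ruby, this decorator pattern boils down to injecting a module into an existing class. Here is a standalone sketch (class and module names are illustrative, not Piggybak's actual API):

```ruby
# A minimal stand-in for a host engine class such as Piggybak::Order.
class Order
  def total
    100
  end
end

# A decorator module, analogous to an OrderDecorator. In a real
# extension, self.included is where ActiveRecord callbacks
# (e.g. after_save) would be registered on the host class.
module OrderDecorator
  def self.included(base)
    base.class_eval do
      def discount
        10
      end

      def discounted_total
        total - discount
      end
    end
  end
end

# Inject the decorator exactly as the engine does in config.to_prepare.
Order.send(:include, OrderDecorator)

puts Order.new.discounted_total  # prints 90
```

The advantage of `config.to_prepare` in the real engine is that the include is re-applied on every request in development mode, when Rails reloads classes.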

An order is composed of many line items, which are used to calculate the balance due. More information on the line item architecture is described here. If your extension needs to register new line item types to the order, you may use something similar to the following code to set up the information regarding this new line item type.

config.before_initialize do
  Piggybak.config do |config|
    config.extra_secure_paths << "/apply_bundle_discount"
    config.line_item_types[:bundle_discount] = { 
      :visible => true,
      :allow_destroy => true,
      :fields => ["bundle_discount"],
      :class_name => "::PiggybakBundleDiscounts::BundleDiscount",
      :display_in_cart => "Bundle Discount",
      :sort => config.line_item_types[:payment][:sort]
    } 
    config.line_item_types[:payment][:sort] += 1
  end
end

Does your extension need client-side support? Piggybak utilizes the asset pipeline, so you will need to register your assets here to have them precompiled.

initializer "piggybak_bundle_discounts.precompile_hook" do |app|
  app.config.assets.precompile += ['piggybak_bundle_discounts/piggybak_bundle_discounts.js']
end

Finally, since Piggybak utilizes RailsAdmin for its admin system, we need to register the models following the RailsAdmin documentation.

initializer "piggybak_bundle_discounts.rails_admin_config" do |app|
  RailsAdmin.config do |config|
    config.model PiggybakBundleDiscounts::BundleDiscount do
      navigation_label "Extensions"
      label "Bundle Discounts"

      edit do
        field :name
        field :multiply do 
          help "Optional"
        end 
        field :discount
        field :active_until
        field :bundle_discount_sellables do 
          active true
          label "Sellables"
          help "Required"
        end
      end
    end

    config.model PiggybakBundleDiscounts::BundleDiscountSellable do
      visible false
      edit do
        field :sellable do          
          label "Sellable"
          help "Required"
        end
      end
    end
  end
end

What else?

From here, extension development can follow standard Rails engine development, which allows for support of its own models, controllers, views, and additional configuration. Any database migrations inside an extension must be copied to the main Rails application to be applied.

You may also need to be aware of how Piggybak integrates with CanCan to ensure that CanCan permissions on your extension models are set correctly.

End Point created and maintains the Piggybak project. Much of the inspiration for Piggybak comes from our expert engineers' ecommerce experience working on and contributing to platforms such as Spree, RoR-e, and Interchange. If you are interested in talking with us about your next ecommerce project, or have an ecommerce project that needs support, contact us today.

Is AVS for International Customers Useless?

Any ecommerce site that sells "soft goods", some digitally delivered product, has to deal with a high risk of credit card fraud, since their product is usually received instantly and relatively easily resold. Most payment processors can make use of AVS (Address Verification System). It usually works well for cards issued by United States banks with customers having a U.S. billing address, but its track record with international customers and banks has been less than stellar.

AVS compares a buyer's address information with what the bank has on file for the card's billing address. To reduce false negatives, that comparison is limited to the postal code and the numeric part of the street address. The lack of consistent AVS implementation by non-U.S. banks, and the variety of postal codes seen outside the U.S., Canada, and the U.K., mean problems creep in for most international orders.
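To make the comparison concrete, here is an illustrative sketch (not any processor's actual API) of reducing a billing address to the two values AVS typically compares:

```ruby
# Hypothetical helper: extract the numeric part of the street address
# and a normalized postal code -- the only two values AVS usually checks.
def avs_fields(street, postal_code)
  {
    :street_number => street[/\d+/].to_s,          # first run of digits, e.g. "123"
    :postal_code   => postal_code.gsub(/\s+/, "")  # strip internal spacing
  }
end

avs_fields("123 Main St, Apt 4", "10001")
# => { :street_number => "123", :postal_code => "10001" }
```

You can see why non-U.S. addresses cause trouble: an alphanumeric postal code like "SW1A 1AA" survives this normalization, but whether the issuing bank compares it at all depends entirely on that bank's AVS support.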

Any time you reject an order, whether it's for a legitimately incorrect billing address, a bank/AVS problem, or any other reason, you're increasing the likelihood of losing the customer's business, having them retry and cost you more in payment processing fees, or having them call your customer service line in frustration.

Also note that the AVS result is only available after a transaction is authorized or processed (and the payment processor has thus charged a fee), not before.

So what can you do as a conscientious developer? There are a number of approaches, all with drawbacks as you'll see:

Don't use AVS for foreign cards

That's the simple approach. Skip AVS altogether, at least for international orders, and let the chips fall where they may. By "international order", I mean "one that has a non-US billing address". Merchants tend to think in terms of "foreign" and "domestic" credit cards, and it's true that it's possible to determine the country of the bank based on the card number. See the Wikipedia articles on Bank card number and List of Issuer Identification Numbers for some mostly-accurate information. However, you really need a current "BIN" (Bank Identification Number) database, and those cost money and must be massaged into the format your ecommerce system needs. Oh, and usually your ecommerce system won't know anything about BIN numbers and you'll need custom programming to consult a BIN database.

So most merchants can't actually determine whether a credit card number is from a U.S. or foreign bank, and they fall back to rough estimates such as assuming that a non-U.S. billing address means a non-U.S. bank, then skipping or weakening the AVS check for those orders.

It seems like that would work, but it's wrong so often that it's not very useful. Customers with a U.S. billing address may have a card issued by a foreign bank that doesn't support AVS, and strict AVS checks for them will mean they can't complete the order. Customers with a foreign billing address may have a card issued by a U.S. bank that does support AVS. And some foreign banks do support AVS, just to keep things interesting.

In any case, disabling or weakening AVS may allow more fraudulent transactions than the merchant is willing to stomach. But this minimizes the grief you incur because of false positives (and the ensuing held funds and customer service calls).

AVS on the whole amount

A charge that fails an AVS check will result in the funds being held on the customer's card until removed, which can cause unease or downright hostility on the part of that customer, especially if you are charging something large compared to the customer's available credit.

Furthermore, many people's cards have a very low credit limit, and there are a lot more people with low remaining balances than you might first think.

In addition, customers using debit cards will be especially cranky if their funds are held because of a failed AVS check. They use that same account for writing checks, withdrawing cash from the ATM, etc., so when you tie up their real money in an account (not just some credit), it feels to them like you have stolen their money.

AVS on a "test charge"

You can submit a "test charge" (usually just $1) against the card to "fish out" the AVS response before making the full charge. This is quite useful if the full charge would be significant; it's less handy for small amounts.

This approach has a serious drawback: it means you are making an additional authorization request for each sale, which doubles transaction fees, and may bump the seller into a higher tier of transaction costs because of the total number of requests.

It is also increasingly noticed by customers in the era of computer banking. Many banks will show pending transactions including these little $1 charges, even after the full charge has come in, inducing over-anxious customers to call your customer service line and waste everyone's time.

Tragic calculus

There's really no magic bullet to correct the issues involved in AVS processing: you can sanitize the data, and cushion the shock for a customer when your website declines their card due to a failed AVS check, but in the end you have to resolve the tragic calculus of whether you lose more to fraud or to abandoned carts, and how many customer service calls you can afford to handle. It is not amusing for customer service reps to have to explain many times per day to anxious customers how the credit card processing system works, often directly contradicting the customers' banks' own explanations that may be so dumbed-down as to be simply incorrect.

Certainly the value of AVS as fraud prevention depends a lot on how you implement it. Perhaps it's time to consider whether additional customization of your order processing is in order, to balance customer satisfaction and processing charges while keeping fraud to a minimum.

Acknowledgments

This post was extensively edited and extended by Jon Jensen, who has seen plenty of this pain first-hand as well.

Custom validation with authlogic: Password can't be repeated.

I recently worked on a small security system enhancement for one of my projects: the user must not be able to repeat his or her password for at least ten cycles of change. Here is a little recipe for all the authlogic users out there.

We will store the ten latest passwords in the users table.

def self.up
  change_table :users do |t|
    t.text :old_passwords
  end
end

The database value will be serialized and deserialized into a Ruby array.

class User
  serialize :old_passwords, Array
end

If the crypted password field has changed, the current crypted password and its salt are added to the head of the array. The array is then sliced to hold only ten passwords.

def update_old_passwords
  if self.errors.empty? and send("#{crypted_password_field}_changed?")
    self.old_passwords ||= []
    self.old_passwords.unshift({:password => send("#{crypted_password_field}"), :salt =>  send("#{password_salt_field}") })
    self.old_passwords = self.old_passwords[0, 10]
  end
end
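Stripped of the Rails callbacks, the rotation above is just an unshift followed by a slice. A standalone sketch (method name is illustrative):

```ruby
MAX_OLD_PASSWORDS = 10

# Add the newest crypted password to the head of the history,
# then cap the history at the ten most recent entries.
def push_old_password(history, crypted, salt)
  history.unshift(:password => crypted, :salt => salt)
  history[0, MAX_OLD_PASSWORDS]
end

history = []
12.times { |i| history = push_old_password(history, "crypt#{i}", "salt#{i}") }

history.length            # => 10
history.first[:password]  # => "crypt11" (the newest entry stays at the head)
```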

The method is triggered after validation, before save.

class User
  after_validation :update_old_passwords
end

Next, we need to determine if the password has changed, excluding the very first time when the password is set on the new record.

class User < ActiveRecord::Base
  def require_password_changed?
    !new_record? && password_changed?
  end
end

The validation method itself is implemented below. The idea is to iterate through the stored password salts and encrypt the current password with them using the authlogic mechanism, and then check if the resulting crypted password is already present in the array.

def password_repeated?
  return if self.password.blank?
  found = self.old_passwords.any? do |old_password|
    args = [self.password, old_password[:salt]].compact
    old_password[:password] == crypto_provider.encrypt(args)
  end
  self.errors.add_to_base "New password must be different from the last 10 passwords used." if found
end
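Here is the same check in isolation, substituting SHA-1 for authlogic's configured crypto provider (the real provider and salt handling come from your acts_as_authentic configuration; the names below are illustrative):

```ruby
require 'digest'

# Stand-in for authlogic's crypto provider.
module FakeCryptoProvider
  def self.encrypt(args)
    Digest::SHA1.hexdigest(Array(args).join)
  end
end

# Re-encrypt the candidate password with each stored salt and
# compare against the stored crypted values.
def password_repeated?(candidate, old_passwords)
  old_passwords.any? do |old|
    old[:password] == FakeCryptoProvider.encrypt([candidate, old[:salt]].compact)
  end
end

old_passwords = [
  { :password => FakeCryptoProvider.encrypt(["secret", "s1"]), :salt => "s1" },
  { :password => FakeCryptoProvider.encrypt(["older",  "s2"]), :salt => "s2" }
]

password_repeated?("secret", old_passwords)   # => true
password_repeated?("changed", old_passwords)  # => false
```

Note that each stored entry keeps its own salt; the candidate password must be hashed once per historical salt, which is why the check iterates rather than hashing just once.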

Now we can plug the validation into the configuration.

class User < ActiveRecord::Base
  acts_as_authentic do |c|
    c.validate :password_repeated?, :if => :require_password_changed?
  end
end

Done!

Interactive Piggybak Demo Tour

A new interactive tour of Piggybak and the Piggybak demo has been released at piggybak.org. Piggybak is an open source Ruby on Rails ecommerce framework built as a Rails 3 engine and intended to be mounted on existing Rails applications.

The tour leverages jTour (a jQuery plugin) and guides you through the homepage, navigation page, product page, cart and checkout pages, gift certificate page, advanced product option page, and WYSIWYG-driven page. The tour also highlights several of the Piggybak plugins available and installed in the demo, such as plugins that introduce advanced product navigation, advanced product optioning, and gift certificate functionality. Below are a few screenshots from the demo.

An interesting side note of developing this tour: while I found many nice jQuery-driven tour plugins available for free or at a small cost, jTour was the only one offering decent multi-page tour functionality.


Here is the starting point of Piggybak tour.

The Piggybak tour adds an item to the cart during the tour.

The Piggybak tour highlights advanced product navigation
in the demo.

The Piggybak tour highlights features and functionality
on the one-page checkout.

If you'd like to check out the interactive tour, visit the Piggybak demo page here and click on the link to begin the tour! Or contact End Point right now to get started on your next Ruby on Rails ecommerce project!


Mobixa: A Client Case Study

A few weeks ago we saw the official (and successful!) website launch for one of our clients, Mobixa. Mobixa will buy back your used iPhones and/or provide you with information about when you should upgrade your existing phone and sell it back. Currently, Mobixa is buying back iPhones and advising on iPhones and Androids. End Point has been working with Mobixa for several months now. This article outlines some of the interesting project notes and summarizes the diverse skill set End Point applied to this website.

Initial Framework

Mobixa initially wanted a proof-of-concept website without significant investment in development architecture, because the long-term plan and likelihood of success were somewhat unknown at the project outset. The initial framework consisted of basic HTML combined with a bit of logic driven by PHP. After a user submitted their phone information, data was sent to Wufoo via a Wufoo-provided PHP-based API, and data was further handled from Wufoo. Wufoo is an online form builder that has nice export capabilities and integrates painlessly with MailChimp.

This initial architecture was suitable for collecting user information, having minimal local database needs and allowing external systems (e.g. Wufoo, MailChimp) to handle much of the user logic. However, it became limiting when the idea of user persistence came into play – the long-term goal will be to allow users to modify previous submissions and look up their order information, essentially a need for basic user account management functionality. For that reason, we made a significant switch in the architecture, described below.

Framework #2: Rails 3

Because of the limiting nature of a database-less application with externally managed data, and as business needs for users increased, we decided to make the move to Rails. End Point has a large team of Rails developers, Rails is a suitable framework for developing applications efficiently, and we are experienced in working with Rails plugins such as RailsAdmin, Devise, and CanCan, which immediately provide a configurable admin interface, user authentication, and authorization to the application. In the process of moving to Rails, we eliminated the middleman Wufoo and now integrate with the shipping fulfillment center and MailChimp directly.

The current Mobixa site runs on Rails 3 with Nginx and Unicorn, backed by PostgreSQL. It leverages End Point's DevCamps to allow multiple developers to simultaneously add features and maintain the site painlessly, and uses RailsAdmin, Devise, and CanCan. It features a responsive design and uses advanced jQuery techniques. The focus of the site is still a simple HTML page that passes user-entered information to the local database, but several user management features have been added, as well as the ability to sell back multiple phones at a time.

MailChimp Integration

In my search for a decent Rails MailChimp integration gem, I found gibbon. Gibbon is fairly simple - it's an API wrapper for interacting with MailChimp. Any API capabilities and methods available in MailChimp can be called via Gibbon. The integration looks something like this:

# user model 
def update_mailchimp
  gb = Gibbon.new(*api_key*, { :timeout => 30 })

  info = gb.list_member_info({ :id => *list_id*, :email_address => self.email })

  if info["success"] == 1
    gb.listUpdateMember({ :id => *list_id*,
                          :email_address => self.email,
                          :merge_vars => self.mailchimp_data })
  else
    gb.list_subscribe({ :id => *list_id*,
                        :email_address => self.email,
                        # additional new user arguments #
                        :merge_vars => self.mailchimp_data })
  end
end

The above method instantiates a connection to MailChimp and checks if the user is already subscribed to the MailChimp list. If the user is subscribed, the listUpdateMember method is called to update the user's subscription information. Otherwise, list_subscribe is called to add the user to the MailChimp list.

What's Next?

In addition to expanding the product buyback capabilities, we expect to integrate additional features such as external-API driven address verification, social media integration, referral management, and more advanced user account management features. The project will continue to involve various members of our team such as Richard, Greg D., Tim, Kamil, Josh W. and me.

Slash URL

There's always more to learn in this job. Today I learned that Apache web server is smarter than me.

A typical SEO-friendly solution to Interchange pre-defined searches (item categories, manufacturer lists, etc.) is to put together a URL that includes the search parameter, but looks like a hierarchical URL:

/accessories/Mens-Briefs.html
/manufacturer/Hanes.html

Through the magic of actionmaps, we can serve up a search results page that looks for products which match on the "accessories" or "manufacturer" field. The problem comes when a less-savvy person adds a field value that includes a slash:

accessories: "Socks/Hosiery"
or
manufacturer: "Disney/Pixar"

Within my actionmap Perl code, I wanted to redirect some URLs to the canonical actionmap page (because we were trying to short-circuit a crazy Web spider, but that's beside the point). So I ended up (after several wild goose chases) with:

my $new_path = '/accessories/' .
   Vend::Tags->filter({body => (join '%2f' => (grep { /\D/ } @path)),
       op => 'urlencode', }) .
   '.html';

By this I mean: I put together my path out of my selected elements, joined them with a URL-encoded slash character (%2f), and then further URL-encoded the result. This was counter-intuitive, but as you can see at the first link in this article, it's necessary because Apache is smarter than you. Well, than me anyway.
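The double encoding is easy to see in plain Ruby, with CGI.escape standing in for Interchange's urlencode filter:

```ruby
require 'cgi'

parts = ["Socks", "Hosiery"]

# Join with an already-encoded slash (%2f), then URL-encode the whole
# string so the percent sign itself becomes %25, yielding %252f.
joined  = parts.join('%2f')   # "Socks%2fHosiery"
encoded = CGI.escape(joined)  # "Socks%252fHosiery"

path = "/accessories/#{encoded}.html"
# => "/accessories/Socks%252fHosiery.html"
```

The point is that a single round of decoding yields %2f rather than a literal slash, so the web server doesn't treat it as a path separator; the application gets the slash back only after its own decoding pass.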