Looking at development environments with DevCamps and Vagrant

Most web developers have practices and tools they are used to for getting their work done, and for most this means setting up a workstation with everything needed to edit, compile, test, and push code somewhere for sharing or deployment. This is a very common practice even though it is fraught with problems: getting a database set up properly, configuring a web server and any other services (memcached, redis, mongodb, etc.), and many more issues.

Hopefully at some point you realize the pain that is involved in doing everything on your workstation directly and start looking for a better way to do web development. In this post I will be looking at some ways to do this better: using a virtual machine (VM), Vagrant, and DevCamps.

Using a VM for development

One way to improve things is to use a local virtual machine for your development (for example, with VirtualBox or VMware Fusion). You can edit your code normally on your workstation, but execute and test it in the VM. This keeps your workstation "clean", moving all those dependencies (database, web server, etc.) off your workstation and into the VM. It also gets your dev environment closer to production, if not identical. Sounds nice, but let's break down the pros and cons.

Benefits of using a VM
  • Dev environment closely matches production.
  • Execute and test code in a dedicated machine (not your workstation directly).
  • Allows for multiple projects to be worked on concurrently (one VM per project).
  • Exposes the developer to the Operations (systems administration) side of the web application (always a good thing).
  • Developer can edit files using their favorite text editor locally on the workstation (but will need to copy files to the VM as needed).
Problems with using a VM
  • Need to create and configure the VM. This could be very time consuming and error prone.
  • Still need to install and configure all services and packages. This could also be time consuming and error prone.
  • Backups of your work/configuration/everything are your own responsibility (extremely unlikely to happen).
  • Access to your dev environment is extremely limited, so probably only you can access it and test things on it. There is no way for a QA engineer or business owner to test or demo your work.
  • Inexperienced developers can break things, or change them to no longer match production (install arbitrary packages, different versions than what is in production, screw up the db, screw up Apache configuration, etc.).
  • If working with an established database, then downloading a dump, installing, and getting the database usable is time consuming and error prone. ("I just broke my dev database!" can be a complete blocker for development.)
  • The developer needs to set up networking for the VM in order to ssh to it, copy files back and forth, and point a web browser to it. This may include manually setting up DNS, or /etc/hosts entries, or port forwarding, or more complex setups.
  • If using SSL with the web application, then the developer also needs to generate and install the SSL cert and configure the web server correctly.

Vagrant

What is Vagrant? It is a set of tools that make it easier to use a virtual machine for your web development. It attempts to lessen many of the problems listed above through automation. By design it also makes some assumptions about how you are using the VM. For example, it assumes that you have the source code for your project in a directory on your workstation and would prefer to use your favorite text editor on those files. Instead of expecting you to continually push updated files to your VM, it sets up a corresponding directory on the VM and keeps the two in sync for you (using shared folders, NFS, Samba, or rsync). It also sets up the networking for accessing the VM, usually with port forwarding, so you don't have to worry about that.
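As a rough illustration, here is a minimal Vagrantfile sketch for the kind of setup described above; the box name, forwarded port, synced folder, and provisioning script are placeholder assumptions, not taken from any particular project:

Vagrant.configure("2") do |config|
  # Base box to build the VM from (placeholder)
  config.vm.box = "ubuntu/trusty64"

  # Forward the VM's web server port so http://localhost:8080 reaches it
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Keep the project source in sync between the workstation and the VM
  config.vm.synced_folder ".", "/vagrant"

  # Provision the VM (could just as easily be puppet, chef, or salt)
  config.vm.provision "shell", path: "provision.sh"
end

With that in place, `vagrant up` creates and provisions the VM and `vagrant ssh` connects to it.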

Benefits of Vagrant
  • Same as those listed above for using a VM, plus...
  • Flexible configuration (Vagrantfile) for creating and configuring the VM.
  • Automated networking for the VM with port forwarding. Abstracted ssh access (don't need to set up a hostname for the VM, simply type `vagrant ssh` to connect). Port forwarded browser access to the VM (usually http://localhost:8080, but configurable).
  • Sync'd directory between your workstation and the VM for source code. Allows for developers to use their favorite text editor locally on their workstation without needing to manually copy files to the VM.
  • Expects the use of a configuration management system (like puppet, chef, salt, or bash scripts) to "provision" the VM (which could help with proper and consistent setup).
  • Through the use of Vagrant Cloud you can get a generated url for others to access your VM (makes it publicly available through a tunnel created with the command `vagrant share`).
  • Configuration (Vagrantfile and puppet/chef/salt/etc.) files can be maintained/reviewed by Operations engineers for consistency with production.
Problems with Vagrant
  • Still need to install and configure all services and packages. This is lessened with the use of a configuration management tool like puppet, but you still need to create/debug/maintain the puppet configuration and setup.
  • Backups of your work/configuration/everything are your own responsibility (extremely unlikely to happen). This may be lessened for VM configuration files, assuming they are included in your project's VCS repo along with your source code.
  • Inexperienced developers can still break things, or change them to no longer match production (install arbitrary packages, different versions than what is in production, screw up the db, screw up Apache configuration, etc.).
  • If working with an established database, then downloading a dump, installing, and getting the database usable is time consuming and error prone. ("I just broke my dev database!" can be a complete blocker for development.)
  • If using SSL with the web application, then the developer also needs to generate and install the SSL cert and configure the web server correctly. This might be lessened if puppet (or whatever) is configured to manage this for you (but then you need to configure puppet to do that).

DevCamps

The DevCamps system takes a different approach. Instead of using VMs for development, it utilizes a shared server for all development. Each developer has their own account on the camps server and can create/update/delete "camps" (which are self-contained environments with all the parts needed). There is an initial setup for using camps which needs thorough understanding of the web application and all of its dependencies (OS, packages, services, etc.). For each camp, the system will create a directory for the user with everything related to that camp in it, including the web application source code, their own web server configuration, their own database with its own configuration, and any other resources. Each camp is assigned a camp number, and all services for that camp run on different ports (based on the camp number). For example, camp 12 may have Apache running on ports 9012 (HTTP) and 9112 (HTTPS) and MySQL running on port 8912. The developer doesn't need to know these ports, as tools allow for easier access to the needed services (commands like `mkcamp`, `re` for restarting services, `mysql_camp` for access to the database, etc.).
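To make the port convention concrete, here is a small illustrative sketch (in Ruby) of how a camp number maps onto service ports; the base numbers are just the ones from the example above and vary between camps installations, which compute all of this for you:

# Illustrative only: the camp number is folded into per-service base ports.
def camp_ports(camp_number)
  {
    http:  9000 + camp_number,  # camp 12 => 9012
    https: 9100 + camp_number,  # camp 12 => 9112
    mysql: 8900 + camp_number,  # camp 12 => 8912
  }
end

camp_ports(12)  # => { http: 9012, https: 9112, mysql: 8912 }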

DevCamps has been designed to address some of the pain usually associated with development environments. Developers usually do not need to install anything, since all dependencies should already be installed on the camps server (which should be maintained by an Operations engineer who can keep the packages, versions, etc. consistent with production). Having all development on a server allows Operations engineers to back up all dev work fairly easily. Databases do not need to be downloaded or manually set up; they are set up initially with the camps system, and then running `mkcamp` clones the database and sets it up for you. Running `refresh-camp --db` allows a developer to delete their camp's database and get a fresh clone, ready to use.

Benefits of DevCamps
  • Each developer can create/delete camps as needed, allowing for multiple camps at once and multiple projects at once.
  • Operations engineers can manage/maintain all dependencies for development, ensuring everything is consistent with production.
  • Backups of all dev work are easy (an Operations engineer just needs to back up the camps server).
  • Developer does not need to configure services (camp templating system auto-generates needed configuration for proper port numbers), such as Apache, nginx, unicorn, MySQL, Postgres, etc.
  • SSL certificates can be easily shared/generated/installed/etc. automatically with the `mkcamp` script. Dev environments can easily have HTTPS without the developer doing anything.
  • Developers should not have permissions to install or change system packages or services. Thus inexperienced developers should not be able to break the server or other developers' environments, or install arbitrary software. Screwing up their database or web server config can be fixed by creating a new camp, refreshing their existing one, or having an Operations engineer fix it for them (since it is on a central server they already have access to, rather than some VM who knows where).
Problems with DevCamps
  • Since all camps live on a shared server running on different ports, this will not closely match production in that way. However, this may not be significant if nearly everything else does closely match production.
  • Adding a new dependency (for example, adding mongodb, or upgrading the version of Apache) may require quite a bit of effort and will affect all camps on the server: an Operations engineer will need to install the needed packages and add/change the needed configuration in the camps system and templates.
  • Using your favorite text editor locally on your workstation doesn't really work since all code lives on the server. It is possible to SFTP files back and forth, but this can be tedious and error prone.
  • Many aspects of the Operations (systems administration) side of the web application are hidden from the developer (this might also be considered a benefit).
  • All development is on a single server, which may be a single point of failure (if the camps server is down, then all development is blocked for all developers).
  • One camp can use up more CPU/RAM/disk/etc. than others and drive up the server's load, affecting the performance of all other camps.

Concluding Thoughts

It seems that Vagrant and DevCamps certainly have some good things going for them. I think it might be worth some thought and effort to try to meld the two together somehow, to take the benefits of both and reduce the problems as much as possible. Such a system might look like this:

  • Utilize vagrant commands and configuration, but have all VMs live on a central VM server. Thus allowing for central backups and access.
  • Source code and configuration live on the server/VM, but a sync'd directory is set up (an sshfs mount point? see the sketch after this list) to allow for local editing of text files on the workstation.
  • VMs created should have restricted access, preventing developers from installing arbitrary packages, versions, screwing up the db, etc.
  • Configuration for services (database, web server, etc.) should be generated/managed by Operations engineers for consistency (utilizing puppet/chef/salt/etc.).
  • Databases should be cloned from a local copy on the VM server, thus avoiding the need to download anything and reducing setup effort.
  • SSL certs should be copied/generated locally on the VM server and installed as appropriate.
  • Sharing access to a VM should not depend on Vagrant Cloud, but instead should use some sort of internal service on the VM server to automate VM hostname/DNS for browser and ssh access to the VM.
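
For the sync'd directory idea mentioned above, an sshfs mount is one simple way it could work; the hostname and paths here are placeholders:

# Mount the camp/VM's source directory locally for editing (placeholder host and paths)
sshfs dev@vmserver.example.com:/home/dev/camp12 ~/camp12

# ... edit files in ~/camp12 with your favorite local editor ...

# Unmount when finished
fusermount -u ~/camp12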

I'm sure there are more pros and cons that I've missed. Add your thoughts to the comments below. Thanks.

Liquid Galaxy for the Daniel Island School

This past week, End Point had the distinct pleasure of sending a Liquid Galaxy Express (the highly portable version of the platform) to the Daniel Island School in Charleston, South Carolina. Once it arrived, we provided remote support to their staff setting up the system. Through the generous donations of Mason Holland, Benefitfocus, and other donors, this PK-8 grade school is now the first school in the country below the university level with a Liquid Galaxy on campus.

From Claire Silanowicz, who coordinated the installation:

Mason Holland was introduced to the Liquid Galaxy system while visiting the Google Headquarters in San Francisco several months ago. After deciding to donate it to the Daniel Island School here in Charleston, SC, he brought me on to help with the project. I didn't know much about the Liquid Galaxy at first, but quickly realized how cool of a project this was going to be. With some help, I assembled a team of about 8 Benefitfocus employees to help with installation and long-term implementation. Benefitfocus is full of employees who are so passionate about innovative technology, and Mason's involvement with Benefitfocus was a perfect way to connect the company to the community. We had one meeting before the installation date to go over the basics and a few days later 5 of us were at the school unpacking boxes and assembling the 7-screen display. Once it was completed and turned on, we were all in awe. We went from the Golden Gate Bridge to the Duomo in Florence, Italy in a matter of seconds. We traveled to our homes and went to see our office building on street view. After going back to the school in the days to follow, I realized we only touched the tip of the iceberg. The faculty at the school had discovered the museums and underwater views that Google has managed to capture.

The Liquid Galaxy isn't known for revolutionizing the way children learn, but I firmly believe it is going to do just that at the Daniel Island School. The teachers and faculty are so excited to incorporate this new technology into their curricula. They have a unique opportunity to take this technology and make it an integral part of their teaching. I hope that in the future, other elementary and middle schools can have the Liquid Galaxy system so that teachers all over the country can collaborate and take advantage of everything it has to offer!

STEM education is becoming ever more important in the fast-moving economy of the 21st century. With a Liquid Galaxy these young students are exposed at a very early age to the wonders of geography, geology, urban development, oceanography, and demographics, not to mention the technological wonderment the platform itself invokes in young minds: with seven 55" screens mounted on an arced frame, a touchscreen podium, and a rack of computers nearby, the Liquid Galaxy is a visually impressive piece of technology regardless of what is being shown on the screens.

This installation is another in a string of academic and educational deployments for the platform. End Point provides 24-hour monitoring and remote support for Liquid Galaxies at Westfield University in Massachusetts, the University of Kansas, the National Air & Space Museum in Washington DC, the Oceanographic Museum in Monaco, and a host of other educational institutions. We also work closely with researchers at the Lleida campus in Spain and the University of Western Sydney in Australia. We know of other Liquid Galaxies on campuses in Colorado, Georgia, Oklahoma, and Israel.

We expect great things from these students, and hope that some may eventually join us at End Point as developers!

Spree Commerce, Take Care When Offering Free Shipping Promotion

Hello again, all. I was recently working on another site built on Spree Commerce, a Ruby on Rails based e-commerce platform. As many of you know, Spree Commerce comes with Promotions. According to the Spree Commerce documentation, Promotions are:

"... used to provide discounts to orders, as well as to add potential additional items at no extra cost. Promotions are one of the most complex areas within Spree, as there are a large number of moving parts to consider."

The promotions feature can be used to offer discounts like free shipping, buy one get one free, etc. The client on this particular project had asked for the ability to provide a coupon for free shipping. Presumably this would be a quick and easy addition, since these types of promotions are included in Spree.

The site in question makes use of Spree's Active Shipping Gem, and plugs in the UPS Shipping API to return accurate and timely shipping prices with the UPS carrier.

The client offers a variety of shipping methods, including Flat Rate Ground, Second Day Air, 3 Day Select, and Next Day Air. Often, Next Day Air shipping costs several times more than Ground: if something costs $20 to ship Ground, it could easily cost around $130 to ship Next Day Air.

When creating a free shipping Promotion in Spree it’s important to understand that by default it will be applied to all shipping methods. In this case, the customer could place a small order, apply the coupon and receive free Next Day Air shipping! To take care of this you need to use Promotion Rules. Spree comes with several built-in rules:

  • First Order: The user’s order is their first.
  • ItemTotal: The order’s total is greater than (or equal to) a given value.
  • Product: An order contains a specific product.
  • User: The order is by a specific user.
  • UserLoggedIn: The user is logged in.

As you can see there is no built in Promotion Rule to limit the free shipping to certain shipping methods. But fear not, it’s possible to create a custom rule.

module Spree
  class Promotion
    module Rules
      class RestrictFreeShipping < PromotionRule
        MATCH_POLICIES = %w(all)

        # The promotion is only eligible when the order is shipping via
        # UPS Flat Rate Ground.
        def eligible?(order, options = {})
          order.shipment.shipping_method.admin_name == "UPS Flat Rate Ground"
        end
      end
    end
  end
end

Note that you have to create a partial for the rule, as per the documentation.
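
Depending on the Spree version, a custom rule typically also needs to be registered so that the admin interface knows about it. Something along these lines in an initializer should do it (a sketch only; the exact registration call may vary between Spree versions):

# config/initializers/spree.rb
# Register the custom rule so it appears in the admin promotions UI.
Rails.application.config.spree.promotions.rules << Spree::Promotion::Rules::RestrictFreeShipping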

Then, in config/locales/en.yml I added a name and description for the rule.

en:
  spree:
    promotion_rule_types:
      restrict_free_shipping:
        name: Restrict Free Shipping To Ground
        description: If somebody uses a free shipping coupon it should only apply to ground shipping

The last step was to restart the app and configure the promotion in the Spree Admin interface.

Postfix IPv6 preference

On a Debian GNU/Linux 7 ("wheezy") system with both IPv6 and IPv4 networking set up, running Postfix 2.9.6 as an SMTP server, we ran into a mildly perplexing situation. The mail logs showed that for outgoing mail to MX servers we knew had IPv6 addresses, the IPv6 address was used only occasionally, while the IPv4 address was used often. We expected it to always use IPv6 unless there was some problem, and that has been our experience on other mail servers.

At first we suspected some kind of flaky IPv6 setup on this host, but that turned out not to be the case. The MX servers themselves are fine using only IPv6. In the end, it turned out to be a Postfix configuration option called smtp_address_preference:

smtp_address_preference (default: any)

The address type ("ipv6", "ipv4" or "any") that the Postfix SMTP client will try first, when a destination has IPv6 and IPv4 addresses with equal MX preference. This feature has no effect unless the inet_protocols setting enables both IPv4 and IPv6. With Postfix 2.8 the default is "ipv6".

Notes for mail delivery between sites that have both IPv4 and IPv6 connectivity:

The setting "smtp_address_preference = ipv6" is unsafe. It can fail to deliver mail when there is an outage that affects IPv6, while the destination is still reachable over IPv4.

The setting "smtp_address_preference = any" is safe. With this, mail will eventually be delivered even if there is an outage that affects IPv6 or IPv4, as long as it does not affect both.

This feature is available in Postfix 2.8 and later.

That documentation made it sound as if the default had changed to "ipv6" in Postfix 2.8, but at least on Debian 7 with Postfix 2.9, it was still defaulting to "any", thus effectively randomly choosing between IPv4 and IPv6 on outbound SMTP connections where the MX record pointed to both.

Changing the option to "ipv6" made Postfix behave as expected.
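
Concretely, that change amounts to a single line in /etc/postfix/main.cf (which can also be set with postconf) followed by a reload; a sketch:

# In /etc/postfix/main.cf:
smtp_address_preference = ipv6

# Or equivalently from the command line, then reload Postfix:
postconf -e 'smtp_address_preference = ipv6'
postfix reload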

Ubuntu upgrade gotchas

I recently upgraded my main laptop to Ubuntu 14.04, and had to solve a few issues along the way. Ubuntu is probably the most popular Linux distribution. Although it is never my first choice (that would be FreeBSD or Red Hat), Ubuntu is superb at working "out of the box", so I often end up using it, as the other distributions all have issues.

Ubuntu 14.04.1 is a major "LTS" version, where LTS is "long term support". The download page states that 14.04 (aka "Trusty Tahr") comes with "five years of security and maintenance updates, guaranteed." Alas, the page fails to mention the release date, which was July 24, 2014. When a new version of Ubuntu comes out, the OS will keep nagging you until you upgrade. I finally found a block of time in which I could survive without my laptop, and started the upgrade process. It took a little longer than I thought it would, but went smoothly except for one issue:

Issue 1: xscreensaver

During the install, the following warning appeared:

"One or more running instances of xscreensaver or xlockmore have been detected on this system. Because of incompatible library changes, the upgrade of the GNU libc library will leave you unable to authenticate to these programs. You should arrange for these programs to be restarted or stopped before continuing this upgrade, to avoid locking your users out of their current sessions."

First, this is a terrible message. I'm sure it has caused lots of confusion, as most users probably do not know what xscreensaver and xlockmore are. Is it so hard for the installer to tell which one is in use? Why in the world can the installer not simply stop these programs itself?! The solution was simple enough: in a terminal, I ran:

pgrep -l screensaver
pkill screensaver
pgrep -l screensaver

The first command was to see if I had any programs running with "screensaver" in their name (I did: xscreensaver). As it was the only program that matched, it was safe to run the second command, which stopped xscreensaver. Finally, I re-ran the pgrep to make sure it was stopped and gone. Then I did the same thing with the string "lockmore" (which found no matches, as I expected). Once xscreensaver was turned off, I told the upgrade to continue, and had no more problems until after Ubuntu 14.04 was installed and running. The first post-install problem appeared after I suspended the computer and brought it back to life - no wireless network!

Issue 2: no wireless after suspend

Once suspended and revived, the wireless would simply not work. Everything looked normal: networking was enabled, wifi hotspots were detected, but a connection could simply not be made. After going through bug reports online and verifying the sanity of the output of commands such as "nmcli nm" and "lshw -C network", I found a solution. This was the hardest issue to solve, as it had no intuitive solution, nothing definitive online, and was full of red herrings. What worked for me was to *remove* the suspension of the iwlwifi module. I commented out the line from /etc/pm/config.d/modules, in case I ever need it again, so the file now looks like this:

# SUSPEND_MODULES="iwlwifi"

Once that was commented out, everything worked fine. I tested by doing sudo pm-suspend from the command-line, and then bringing the computer back up and watching it automatically reconnect to my local wifi.

Issue 3: color diffs in git

I use the command-line a lot, and a day never goes by without heavy use of git as well. On running a "git diff" in the new Ubuntu version, I was surprised to see a bunch of escape codes instead of the usual pretty colors I was used to:

ESC[1mdiff --git a/t/03dbmethod.t b/t/03dbmethod.tESC[m
ESC[1mindex 108e0c5..ffcab48 100644ESC[m
ESC[1m--- a/t/03dbmethod.tESC[m
ESC[1m+++ b/t/03dbmethod.tESC[m
ESC[36m@@ -26,7 +26,7 @@ESC[m ESC[mmy $dbh = connect_database();ESC[m
 if (! $dbh) {ESC[m
    plan skip_all => 'Connection to database failed, cannot continue testing';ESC[m
 }ESC[m
ESC[31m-plan tests => 543;ESC[m
ESC[32m+ESC[mESC[32mplan tests => 545;ESC[m

After poking around with terminal settings and the like, a coworker suggested I simply tell git to use an intelligent pager with the command git config --global core.pager "less -r". The output immediately improved:

diff --git a/t/03dbmethod.t b/t/03dbmethod.t
index 108e0c5..ffcab48 100644
--- a/t/03dbmethod.t
+++ b/t/03dbmethod.t
@@ -26,7 +26,7 @@ my $dbh = connect_database();
 if (! $dbh) {
    plan skip_all => 'Connection to database failed, cannot continue testing';
 }
-plan tests => 543;
+plan tests => 545;

Thanks Josh Williams! The above fix worked perfectly. I'm a little unsure of this solution as I think the terminal and not git is really to blame, but it works for me and I've seen no other terminal issues yet.

Issue 4: cannot select text in emacs

The top three programs I use every day are ssh, git, and emacs. While trying (post-upgrade) to reply to an email inside mutt, I found that I could not select text in emacs using ctrl-space. This is a critical problem, as it is an extremely important feature to lose in emacs. The problem was pretty easy to track down: the program "ibus" was intercepting all ctrl-space calls for its own purposes. I have no idea why ctrl-space was chosen, as it has been used by emacs since before Ubuntu was even born (the technical term for this is "crappy default"). Fixing it requires visiting the ibus-setup program. You can reach it via the system menu by going to Settings Manager, then scrolling down to the "Other" section and finding "Keyboard Input Methods". Or you can simply run ibus-setup from your terminal (no sudo needed).


The ibus-setup window

However you get there, you will see a section labelled "Keyboard Shortcuts". There you will see a "Next input method:" text box, with the inside of it containing <Control>space. Aha! Click on the three-dot button to the right of it, and change it to something more sensible. I decided to simply add an "Alt", such that going to the next input method will require Ctrl-Alt-Space rather than Ctrl-Space. To make that change, just select the "Alt" checkbox, click "Apply", click "Ok", and verify that the text box now says <Control><Alt>space.

So far, those are the only issues I have encountered using Ubuntu 14.04. Hopefully this post is useful to someone running into the same problems. Perhaps I will need to refer back to it in a few years(?) when I upgrade Ubuntu again! :)

The Beauty of IPMI

For our Liquid Galaxy installations, we use a master computer known as a "head node" and a set of slave computers known as "display nodes." The slave computers all PXE-boot from the head node, which directs them to boot from a specific ISO disk image.

In general, this system works great. We connect to the head node and from there can communicate with the display nodes. We can boot them, change their ISO, and do all sorts of other maintenance tasks.

There are two main settings that we change in the BIOS to make things run smoothly. First is that we set the machine to power on when AC power is restored. Second, we set the machine's boot priority to use the network.

Occasionally, though, the CMOS battery has an issue, and the BIOS settings get lost.  How do we get in and boot the machine up? This is where ipmitool has really become quite handy.

Today we had a problem with one display node at one of our sites. It seems that all of the machines in the Liquid Galaxy were rebooted, or otherwise powered off and then back on. One of them just didn't come up, and it was causing me much grief.  We have used ipmitool in the past to be able to help us administer the machines.

IPMI stands for Intelligent Platform Management Interface, and it gives the administrator some non-operating-system-level access to the machine.  Most vendors have some sort of management interface (HP's iLO, Dell's DRAC), including our Asus motherboards.  The open source ipmitool is the tool we use on our Linux systems to interface with the IPMI module on the motherboard.

I connected to the head node and ran the following command and got the following output:

    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis status
    System Power         : off
    Power Overload       : false
    Power Interlock      : inactive
    Main Power Fault     : false
    Power Control Fault  : false
    Power Restore Policy : always-off
    Last Power Event     : ac-failed
    Chassis Intrusion    : inactive
    Front-Panel Lockout  : inactive
    Drive Fault          : false
    Cooling/Fan Fault    : true

While Asus's Linux support is pretty lacking, and most of the options we find here don't work with the open source ipmitool, we did find "System Power : off" in the output, which is a pretty good indicator of our problem.  This tells me that the BIOS settings have been lost for some reason, as we had previously set the system to power on when AC power was restored.  I ran the following to tell it to boot into the BIOS, then powered on the machine:

    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis bootdev bios
    admin@headnode:~ ipmitool -H 10.42.41.33 -I lanplus -P 'xxxxxx' chassis power on

At this point, the machine is ready for me to be able to access the BIOS through a terminal window. I opened a new terminal window and typed the following:

    admin@headnode:~ ipmitool -H ipmi-lg2-3 -U admin -I lanplus sol activate
    Password:

After typing in the password, I get the ever-helpful dialog below:

    [SOL Session operational.  Use ~? for help]

I didn't bother with the ~? because I knew that the BIOS would eventually just show up in my terminal. There are, however, other commands that pressing ~? would show.

See, look at this terminal version of the BIOS that we all know and love!



Now that the BIOS was up, it's as if I was really right in front of the computer typing on a keyboard attached to it. I was able to get in and change the settings for the APM, so that the system will power on upon restoration of AC power. I also verified that the machine is set to boot from the network port before saving changes and exiting. The next thing I knew, the system was booting up PXE, which then pointed it to the proper ISO, and then it was all the way up and running.

And this, my friends, is why systems should have IPMI. I state the obvious here when I say that life as a system administrator is so much easier when one can get into the BIOS on a remote system.

PyOhio 2014: Python FTW!

Just got back from PyOhio a couple of days ago. Columbus is my old stomping grounds, so it's always nice to get back there. PyOhio had been on my TODO list for a number of years now, but every time it seemed like something else got in the way. This year I figured it was finally time, and I'm quite glad it worked out.

While everything of course centered around Python, many of the talks covered other tools or projects. I return with a much better view of good technologies like Redis, Ansible, Docker, ØMQ, Kafka, Celery, asyncio in Python 3.4, Graphite, and much more that isn't coming to mind at the moment. I have lots to dig into now.

It also pleased me to see so much Postgres love! I mean, clearly, once you start to use it you'll fall in love, that's without question. But the hall track was full of conversations about how various people were using Postgres, what it tied in to in their applications, and various tips and tricks they'd discovered in its functionality. Just goes to prove that Postgres == ♥.

Naturally PostgreSQL is what I spoke on; PL/Python, specifically. It actually directly followed a talk on PostgreSQL's LISTEN/NOTIFY feature. I was a touch worried about overlap considering some of the things I'd planned, but it turns out the two talks more or less dovetailed from one to the other. It was unintentional, but it worked out very well.

Anyway, the slides are available, but the talk wasn't quite structured in the normal format of having those slides displayed on a projector. Instead, in a bit of an experiment, the attendees could hit a web page and bring up the slides on their laptops or such. That slide deck opened a long-polling socket back to the server, and the web app could control the slide movement on those remote screens. That let the projector keep to a console session that was used to illustrate PL/Python and PostgreSQL usage. As you might expect, the demo included a run through the PL/Python and related code that drove that socket. Hopefully the video, when it's available, caught some of it.

The sessions were recorded on video, but one thing I hadn't expected was how that influenced which talks I tried to attend. Knowing that the more software-oriented presentations would be available for viewing later, I opted for hardware-oriented topics where available, or other talks where being present seemed like it would have much more impact. I also didn't feel rushed between sessions on the occasions where I got caught in a hall track conversation or checked out something in the open spaces area (in one sense, a dedicated hall track room).

Overall, it was a fantastic conference, and a great thank you goes out to everyone who helped make it happen!