Welcome to End Point’s blog

Ongoing observations by End Point people

Gem Dependency Issues with Rails 5 Beta

The third-party gem ecosystem is one of the biggest selling points of Rails development, but the addition of a single line to your project's Gemfile can introduce literally dozens of new dependencies. A compatibility issue in any one of those gems can bring your development to a halt, and the transition to a new major version of Rails requires even more caution when managing your gem dependencies.

In this post I'll illustrate this issue by showing the steps required to get rails_admin (one of the two most popular admin interface gems for Rails) up and running even partially on a freshly-generated Rails 5 project. I'll also identify some techniques for getting unreleased and forked versions of gems installed as stopgap measures to unblock your development while the gem ecosystem catches up to the new version of Rails.

After installing the current beta3 version of Rails 5 with gem install rails --pre and creating a Rails 5 project with rails new, I decided to address the first requirement of my application, an admin interface, by installing the popular Rails Admin gem. The RubyGems page for rails_admin shows that its most recent release, 0.8.1 from mid-November 2015, lists Rails 4 as a requirement. And indeed, trying to install rails_admin 0.8.1 in a Rails 5 app via bundler fails with a dependency error:

Resolving dependencies...
Bundler could not find compatible versions for gem "rails":
In snapshot (Gemfile.lock):
rails (= 5.0.0.beta3)

In Gemfile:
rails (< 5.1, >= 5.0.0.beta3)

rails_admin (~> 0.8.1) was resolved to 0.8.1, which depends on
rails (~> 4.0)

I took a look at the GitHub page for rails_admin and noticed that recent commits make reference to Rails 5, which is an encouraging sign that its developers are working on adding compatibility with Rails 5. Looking at the gemspec in the master branch on GitHub shows that the rails_admin gem dependency has been broadened to include both Rails 4 and 5, so I updated my app's Gemfile to install rails_admin directly from the master branch on GitHub:

gem 'rails_admin', github: 'sferik/rails_admin'

This solved the above dependency of rails_admin on Rails 4 but revealed some new issues with gems that rails_admin itself depends on:

Resolving dependencies...
Bundler could not find compatible versions for gem "rack":
In snapshot (Gemfile.lock):
rack (= 2.0.0.alpha)

In Gemfile:
rails (< 5.1, >= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
actionmailer (= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
actionpack (= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
rack (~> 2.x)

rails_admin was resolved to 0.8.1, which depends on
rack-pjax (~> 0.7) was resolved to 0.7.0, which depends on
rack (~> 1.3)

rails (< 5.1, >= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
actionmailer (= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
actionpack (= 5.0.0.beta3) was resolved to 5.0.0.beta3, which depends on
rack-test (~> 0.6.3) was resolved to 0.6.3, which depends on
rack (>= 1.0)

rails_admin was resolved to 0.8.1, which depends on
sass-rails (< 6, >= 4.0) was resolved to 5.0.4, which depends on
sprockets (< 4.0, >= 2.8) was resolved to 3.6.0, which depends on
rack (< 3, > 1)

This bundler output shows a conflict where Rails 5 depends on rack 2.x while rails_admin's rack-pjax dependency depends on rack 1.x. I ended up resorting to a Google search which led me to the following issue in the rails_admin repo: https://github.com/sferik/rails_admin/issues/2532
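To see the shape of this conflict in isolation, we can ask rubygems' own requirement classes directly. A quick sketch (the '~> 2.0' here is my approximation of Rails 5's actual '~> 2.x' requirement, and the version numbers are illustrative):

```ruby
require 'rubygems'  # provides Gem::Requirement and Gem::Version

# Rails 5's actionpack wants rack 2.x; rack-pjax 0.7 wants rack 1.x.
rails_rack_req = Gem::Requirement.new('~> 2.0')  # approximation of "~> 2.x"
pjax_rack_req  = Gem::Requirement.new('~> 1.3')

rack2 = Gem::Version.new('2.0.0')
rack1 = Gem::Version.new('1.6.4')

puts rails_rack_req.satisfied_by?(rack2)  # true
puts pjax_rack_req.satisfied_by?(rack2)   # false
puts rails_rack_req.satisfied_by?(rack1)  # false
puts pjax_rack_req.satisfied_by?(rack1)   # true
# No single rack version satisfies both requirements, so resolution fails.
```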

Installing rack-pjax from GitHub:

gem 'rack-pjax', github: 'afcapel/rack-pjax', branch: 'master'

resolves the rack dependency conflict, and bundle install now completes without error. Things are looking up! At least until you run the rails g rails_admin:install generator and are presented with this mess:

/Users/patrick/.rbenv/versions/2.3.0/lib/ruby/gems/2.3.0/gems/actionpack-5.0.0.beta3/lib/action_dispatch/middleware/stack.rb:108:in `assert_index': No such middleware to insert after: ActionDispatch::ParamsParser (RuntimeError)
from /Users/patrick/.rbenv/versions/2.3.0/lib/ruby/gems/2.3.0/gems/actionpack-5.0.0.beta3/lib/action_dispatch/middleware/stack.rb:80:in `insert_after'

This error is more difficult to understand, especially given the fact that the culprit (the remotipart gem) is not actually mentioned anywhere in the error. Thankfully, commenters on the above-mentioned rails_admin issue #2532 were able to identify the remotipart gem as the source of this error and provide a link to a forked version of that gem which allows rails_admin:install to complete successfully (albeit with some functionality still not working).

In the end, my Gemfile looked something like this:

gem 'rails_admin', github: 'sferik/rails_admin'
# Use github rack-pjax to fix dependency versioning issue with Rails 5
# https://github.com/sferik/rails_admin/issues/2532
gem 'rack-pjax', github: 'afcapel/rack-pjax'
# Use forked remotipart until following issues are resolved
# https://github.com/JangoSteve/remotipart/issues/139
# https://github.com/sferik/rails_admin/issues/2532
gem 'remotipart', github: 'mshibuya/remotipart', ref: '3a6acb3'

That's a total of three unreleased gem versions, including a forked remotipart that breaks some functionality, just to get rails_admin installed and running well enough to start working with. It also leaves some technical debt in the form of comments about follow-up tasks: revisiting the various gems as new versions are released for Rails 5 compatibility.

This process has been a reminder that when working in a Rails 4 app it's easy to take for granted the ability to install gems and have them 'just work' in your application. When dealing with pre-release versions of Rails, don't be surprised when you have to do some investigative work to figure out why gems are failing to install or work as expected.

My experience has also underscored the importance of understanding all of your application's gem dependencies and having some awareness of their developers' intentions when it comes to keeping their gems current with new versions of Rails. As a developer it's in your best interest to minimize the number of dependencies in your application, because adding just one gem (which turns out to have a dozen of its own dependencies) can greatly increase the potential for encountering incompatibilities.

Postgres concurrent indexes and the curse of IIT

Postgres has a wonderful feature called concurrent indexes. It allows you to create indexes on a table without blocking reads OR writes, which is quite a handy trick. There are a number of circumstances in which one might want to use concurrent indexes, the most common one being not blocking writes to production tables. There are a few other use cases as well, including:

  • Replacing a corrupted index
  • Replacing a bloated index
  • Replacing an existing index (e.g. better column list)
  • Changing index parameters
  • Restoring a production dump as quickly as possible

Photograph by Nicholas A. Tonelli

In this article, I will focus on that last use case, restoring a database as quickly as possible. We recently upgraded a client from a very old version of Postgres to the current version (9.5 as of this writing). The fact that use of pg_upgrade was not available should give you a clue as to just how old the "very old" version was!

Our strategy was to create a new 9.5 cluster, get it optimized for bulk loading, import the globals and schema, stop write connections to the old database, transfer the data from old to new, and bring the new one up for reading and writing.

The goal was to reduce the application downtime as much as reasonably possible. To that end, we did not want to wait until all the indexes were created before letting people back in, as testing showed that the index creations were the longest part of the process. We used the "--section" flags of pg_dump to create pre-data, data, and post-data sections. All of the index creation statements appeared in the post-data file.

Because the client determined that it was more important for the data to be available, and the tables writable, than it was for them to be fully indexed, we decided to try using CONCURRENT indexes. In this way, writes to the tables could happen at the same time that they were being indexed - and those writes could occur as soon as the table was populated. That was the theory anyway.

The migration went smoothly - the data was transferred over quickly, the database was restarted with a new postgresql.conf (e.g. turning fsync back on), and clients were able to connect, albeit with some queries running slower than normal. We parsed the post-data file and created a new file in which all the CREATE INDEX commands were changed to CREATE INDEX CONCURRENTLY. We kicked that off, but after a certain amount of time, it seemed to freeze up.
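Rewriting the post-data file can be done with a short script. Here is a minimal sketch of that step in Ruby (the real input was the pg_dump post-data file; the index statement below is made up):

```ruby
# Rewrite CREATE INDEX statements to CREATE INDEX CONCURRENTLY,
# preserving UNIQUE and leaving already-concurrent statements alone.
def make_concurrent(sql)
  sql.gsub(/\bCREATE(\s+UNIQUE)?\s+INDEX\b(?!\s+CONCURRENTLY)/) do
    "CREATE#{$1} INDEX CONCURRENTLY"
  end
end

puts make_concurrent('CREATE INDEX sales_date_idx ON sales (sold_at);')
# CREATE INDEX CONCURRENTLY sales_date_idx ON sales (sold_at);
```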


The frogurt is also cursed.

Looking closer showed that the CREATE INDEX CONCURRENTLY statement was waiting, and waiting, and never able to complete - because other transactions were not finishing. This is why concurrent indexing is both a blessing and a curse. The concurrent index creation is so polite that it never blocks writers, but this means processes can charge ahead, none the wiser that the create index statement is waiting on them to finish their transactions. When you also have a misbehaving application that stays "idle in transaction", it's a recipe for confusion. (Idle in transaction is what happens when your application keeps a database connection open without doing a COMMIT or ROLLBACK.) A concurrent index can only finish being created once every transaction that has referenced the table has completed. The problem was that because the create index did not block, the app kept chugging along, spawning new processes that all ended up idle in transaction.

At that point, the only way to get the concurrent index creation to complete was to forcibly kill all the other idle in transaction processes, forcing them to roll back and causing a lot of distress for the application. In contrast, a regular index creation would have caused other processes to block on their first attempt to access the table, then carry on once the creation was complete, and nothing would have had to roll back.
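For the record, on Postgres 9.2 and later the forcible cleanup amounts to something like SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle in transaction'. A small sketch of the selection logic, with hypothetical rows standing in for pg_stat_activity output:

```ruby
# Pick out the backend pids stuck "idle in transaction" so they can be
# passed to pg_terminate_backend(). The rows here mimic pg_stat_activity;
# in real life this is a single SQL statement run via psql.
def pids_to_terminate(activity_rows)
  activity_rows.select { |r| r[:state] == 'idle in transaction' }
               .map    { |r| r[:pid] }
end

sessions = [
  { pid: 101, state: 'active' },
  { pid: 102, state: 'idle in transaction' },
  { pid: 103, state: 'idle' },
]
p pids_to_terminate(sessions)  # [102]
```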

Another business decision was made - the concurrent indexes were nice, but we needed the indexes, even if some had to be created as regular indexes. Many of the indexes were able to be completed (concurrently) very quickly - and they were on not-very-busy tables - so we plowed through the index creation script, and simply canceled any concurrent index creations that were being blocked for too long. This only left a handful of uncreated indexes, so we simply dropped the "invalid" indexes (these appear when a concurrent index creation is interrupted), and reran with regular CREATE INDEX statements.

The lesson here is that nothing comes without a cost. The overly polite concurrent index creation is great at letting everyone else access the table, but it also means that large complex transactions can chug along without being blocked - only to have all of their work rolled back if they must be killed. In this case, things worked out: we did 99% of the indexes as CONCURRENT and the remaining ones as regular. All in all, the use of concurrent indexes was a big win, and they are still an amazing feature of Postgres.

Cybergenetics Helps Free Innocent Man

We all love a good ending. I was happy to hear that one of End Point’s clients, Cybergenetics, was involved in a case this week to free a falsely imprisoned man, Darryl Pinkins.

Darryl was convicted of a crime in Indiana in 1991. In 1995 Pinkins sought the help of the Innocence Project. His attorney Frances Watson and her students turned to Cybergenetics and their DNA interpretation technology called TrueAllele® Casework. The TrueAllele DNA identification results exonerated Pinkins. The Indiana Court of Appeals dropped all charges against Pinkins earlier this week and he walked out of jail a free man after fighting for 24 years to clear his name.

TrueAllele can separate out the people who contributed their DNA to a mixed DNA evidence sample. It then compares the separated out DNA identification information to other reference or evidence samples to see if there is a DNA match.

End Point has worked with Cybergenetics since 2003 and consults with them on security, database infrastructure, and website hosting. End Point congratulates Cybergenetics on their success in being part of the happy ending for Darryl Pinkins and his family!

More of the story is available at Cybergenetics’ Newsroom or the Chicago Tribune.

We are bigger than VR gear - Liquid Galaxy

Nowadays, virtual reality is one of the hottest topics in tech, with VR enabling users to enter immersive environments built up by computer technology. I attended Mobile World Congress 2016 a few weeks ago, and it was interesting to see people sit next to one another yet totally ignore one another, each immersed in their own virtual reality world.

When everyone is so addicted to their little magic boxes, they tend to lose their connections with people around them. End Point has developed a new experience in which users can watch and share their virtually immersive world together. This experience is called the Liquid Galaxy.

When a user stands in front of a Liquid Galaxy, surrounded by a multitude of huge screens arranged in a semicircle, they put not only their eyes but their whole body into an unprecedented 3D space. These screens are big enough to cover the audience's entire peripheral vision and bring great visual stimulation from all directions. When using the Liquid Galaxy system, users become fully immersed in the system and the imagery they view.


Movie Night at End Point

This digital chamber can be considered a sort of VR movie theater, where an audience can enjoy the same content, and probably the same bucket of popcorn! While this setup makes the Liquid Galaxy a natural fit for any sort of exhibit, many End Point employees have also watched full-length feature movies on the system during our monthly Movie Night at our Manhattan headquarters. This sort of shared experience is not possible with typical VR gear, because unlike VR the Liquid Galaxy serves a larger audience and presents stories in a more interactive way.


For most meetings, exhibitions, and other special occasions, the Liquid Galaxy helps to provide an amazing and impactful experience to the audience. Any scenario can be built for users to explore, and geospatial data sets can be presented immersively.

With the ability to serve a group of people simultaneously, Liquid Galaxy increases the impact of content presentation and brings a revolutionary visual experience to its audiences. If you'd like to learn more, you can call us at 212-929-6923, or contact us here.

Liquid Galaxy for Real Estate

The Liquid Galaxy, an immersive and panoramic presentation tool, is the perfect fit for any time you want to grab the attention of your audience and leave a lasting impression. The system has applications in a variety of industries (which include museums and aquariums, hospitality and travel, research libraries at universities, events, and real estate, to name a few), but none rivals the demand we see from real estate.

The Liquid Galaxy provides an excellent tool for real estate brokerages and land use agencies to showcase their properties with multiple large screens showing 3D building models and complete Google Earth data. End Point can configure the Liquid Galaxy to highlight specific buildings, areas on the map, or any set of correlated land use data, which can then be shown in a dazzling display that forms the centerpiece of a conference room or lobby. We can program the Liquid Galaxy to show floor plans, panoramic interior photos, and even Google Street View “walking tours” around a given property.

A Liquid Galaxy in your office will provide your firm with a sophisticated and cutting edge sales tool. You will depart from the traditional ways of viewing, presenting, and even managing real estate sites by introducing your clients to multiple prime locations and properties in a wholly unique, professional and visually stunning manner. We can even highlight amenities such as mass transit, road usage, and basic demographic data for proper context.

The Liquid Galaxy allows your clients an in-depth contextual tour of multiple listings in the comfort of your office without having to travel to multiple locations. Liquid Galaxy brings properties to the client instead of taking the client to every property. This saves time and energy for both you and your prospective clients, and sets your brokerage apart as a technology leader in the market.

If you'd like to learn more about the Liquid Galaxy, you can call us at 212-929-6923, or contact us here.

Client web browser logging

Introduction

The current state of web browser development is still problematic. We have multiple browsers, each with plenty of versions, running on multiple operating systems and devices. All of this makes it impossible to be sure that our code will work on every possible browser and system, unfortunately. With proper testing we can make our product stable and good enough for production, but we can't expect that everything will go smoothly. Somewhere out there is always a user sitting in a small office with outdated software, Internet Explorer 6 for example. Usually you want to support as many users as possible; here I will explain how to find the ones running into problems. Then you just need to decide whether an issue is worth fixing for them.

Browser error logging

What can really help us, and is really simple to do, is browser error logging. Every time an error occurs on the client side (the browser will generate an error that the user most likely won't see), we can log it on the server side, even with a stack trace. Let's see an example:

window.onerror = function (errorMsg, url, lineNumber, column, errorObj) {
    // Send the error details to the server; the stack trace lives on
    // errorObj.stack, which serializes better than the Error object itself
    $.post('//your.domain/client-logs', {
        errorMsg: errorMsg,
        url: url,
        lineNumber: lineNumber,
        column: column,
        errorObj: errorObj && errorObj.stack
    });

    // Returning false tells the browser to run its own error handler as well
    return false;
};

What do we have here? We assign a function to window.onerror. Every time an uncaught error occurs, this function is called with several arguments:

  • errorMsg - the error message, usually describing why the error occurred (for example: "Uncaught ReferenceError: heyyou is not defined"),
  • url - the current URL,
  • lineNumber - the script line number where the error happened,
  • column - the same as above, but the column number,
  • errorObj - the most important part: an error object with a stack trace included.

What to do with this data? You will probably want to send it to a server and save it, so you can go through the log from time to time, as we do in our example:

$.post('//your.domain/client-logs', {
    errorMsg: errorMsg,
    url: url,
    lineNumber: lineNumber,
    column: column,
    errorObj: errorObj && errorObj.stack
});

It's very helpful: with proper unit and functional testing, the errors generated are usually minor, but sometimes you may discover a critical issue before a larger number of clients does. That is a big win.

JSNLog

JSNLog is a library that helps with client error logging. You can find it here: http://jsnlog.com/. I can fully recommend it; it also handles AJAX calls, timeouts, and much more.

Client error notification

If you want to be serious and professional, every issue should be reported to the user in some way. On the other hand, this can be risky if users end up spammed with notifications about minor errors. It's not easy to find the best solution, because it's not easy to assign a priority to an error.

From experience: if you have a system where users are logged in, you can create a simple script that sends an email to the affected user asking about the issue. You can set a limit to avoid sending too many messages. If users are interested, they can always reply and explain the issue; usually they appreciate the interest.
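The limit mentioned above can start as a simple per-user counter. A minimal in-memory sketch (names are hypothetical, and a real implementation would persist the counts and reset them periodically):

```javascript
var MAX_EMAILS_PER_USER = 3;
var sentCounts = {};  // userId -> number of notification emails sent

// Returns true if we may still email this user about an error,
// and records the send if so.
function shouldNotify(userId) {
  var count = sentCounts[userId] || 0;
  if (count >= MAX_EMAILS_PER_USER) {
    return false;
  }
  sentCounts[userId] = count + 1;
  return true;
}

console.log(shouldNotify('u1'));  // true  (1st email)
console.log(shouldNotify('u1'));  // true
console.log(shouldNotify('u1'));  // true
console.log(shouldNotify('u1'));  // false (limit reached)
```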

Error logging in Angular

It's worth mentioning how we can handle error logging in the Angular framework, with useful stack traces and error descriptions.

First we need to override default log functions in Angular:

angular.module('logToServer', [])
  .service('$log', function () {
    this.log = function (msg) {
      JL('Angular').trace(msg);
    };
    this.debug = function (msg) {
      JL('Angular').debug(msg);
    };
    this.info = function (msg) {
      JL('Angular').info(msg);
    };
    this.warn = function (msg) {
      JL('Angular').warn(msg);
    };
    this.error = function (msg) {
      JL('Angular').error(msg);
    };
  });

Then we override the default exception handler to use our logger:

factory('$exceptionHandler', function () {
    return function (exception, cause) {
      JL('Angular').fatalException(cause, exception);
      throw exception;
    };
  });

We also need an interceptor to handle AJAX call errors. This one uses $q to pass the rejection along after logging it:

factory('logToServerInterceptor', ['$q', function ($q) {
    var myInterceptor = {
      'request': function (config) {
          config.msBeforeAjaxCall = new Date().getTime();

          return config;
      },
      'response': function (response) {
        if (response.config.warningAfter) {
          var msAfterAjaxCall = new Date().getTime();
          var timeTakenInMs = msAfterAjaxCall - response.config.msBeforeAjaxCall;

          if (timeTakenInMs > response.config.warningAfter) {
            JL('Angular.Ajax').warn({ 
              timeTakenInMs: timeTakenInMs, 
              config: response.config, 
              data: response.data
            });
          }
        }

        return response;
      },
      'responseError': function (rejection) {
        var errorMessage = "timeout";
        if (rejection && rejection.status && rejection.data) {
          errorMessage = rejection.data.ExceptionMessage;
        }
        JL('Angular.Ajax').fatalException({
          errorMessage: errorMessage,
          status: rejection.status,
          config: rejection.config
        }, rejection.data);

        return $q.reject(rejection);
      }
    };

    return myInterceptor;
  }]);

Here it is all together:

angular.module('logToServer', [])
  .service('$log', function () {
    this.log = function (msg) {
      JL('Angular').trace(msg);
    };
    this.debug = function (msg) {
      JL('Angular').debug(msg);
    };
    this.info = function (msg) {
      JL('Angular').info(msg);
    };
    this.warn = function (msg) {
      JL('Angular').warn(msg);
    };
    this.error = function (msg) {
      JL('Angular').error(msg);
    };
  })
  .factory('$exceptionHandler', function () {
    return function (exception, cause) {
      JL('Angular').fatalException(cause, exception);
      throw exception;
    };
  })
  .factory('logToServerInterceptor', ['$q', function ($q) {
    var myInterceptor = {
      'request': function (config) {
          config.msBeforeAjaxCall = new Date().getTime();

          return config;
      },
      'response': function (response) {
        if (response.config.warningAfter) {
          var msAfterAjaxCall = new Date().getTime();
          var timeTakenInMs = msAfterAjaxCall - response.config.msBeforeAjaxCall;

          if (timeTakenInMs > response.config.warningAfter) {
            JL('Angular.Ajax').warn({ 
              timeTakenInMs: timeTakenInMs, 
              config: response.config, 
              data: response.data
            });
          }
        }

        return response;
      },
      'responseError': function (rejection) {
        var errorMessage = "timeout";
        if (rejection && rejection.status && rejection.data) {
          errorMessage = rejection.data.ExceptionMessage;
        }
        JL('Angular.Ajax').fatalException({
          errorMessage: errorMessage,
          status: rejection.status,
          config: rejection.config
        }, rejection.data);

        return $q.reject(rejection);
      }
    };

    return myInterceptor;
  }]);

This should handle most of the errors that could happen in the Angular framework. Here I used the JSNLog library to handle sending logs to a server.

Almost the end

There are multiple techniques for logging errors on the client side. Which one you choose doesn't really matter; what matters is that you do it. It takes very little time to set up and pays off handsomely in the end.

How to Build a Skyscraper

Another talk from MountainWest RubyConf that I enjoyed was How to Build a Skyscraper by Ernie Miller. This talk was less technical and instead focused on teaching principles and ideas for software development by examining some of the history of skyscrapers.

Equitable Life Building

Constructed from 1868 to 1870 and considered by some to be the first skyscraper, the Equitable Life Building was, at 130 feet, the tallest building in the world at the time. An interesting problem arose when designing it: it was too tall for stairs. If a lawyer’s office was on the seventh floor of the building, he wouldn’t want his clients to walk up six flights of stairs to meet with him.

Elevators and hoisting systems existed at the time, but they had one fatal flaw: there were no safety systems if the rope broke or was cut. While working on converting a sawmill to a bed frame factory, a man named Elisha Otis had the idea for a system to stop an elevator if its rope is cut. He and his sons designed the system and implemented it at the factory. At the time, he didn’t think much of the design, and didn’t patent it or try to sell it.

Otis’ invention became popular when he showcased it at the 1854 New York World’s Fair with a live demo. Otis stood in front of a large crowd on a platform and ordered the rope holding it to be cut. Instead of plummeting to the ground, the platform was caught by the safety system after falling only a few inches.

Having a way to safely and easily travel up and down many stories literally flipped the value proposition of skyscrapers upside down. Where lower floors had been more desirable because they were easy to access, higher floors became more coveted: they were just as easy to reach, but gained the advantages that come with height, such as better air, more light, and less noise. A solution that seems unremarkable to you might just change everything for others.

When the Equitable Life Building was first constructed, it was described as fireproof. Unfortunately, it didn’t work out quite that way. On January 9, 1912, the timekeeper for a cafe in the building started his day by lighting the gas in his office. Instead of disposing properly of the match, he distractedly threw it into the trashcan. Within 10 minutes, the entire office was engulfed in flame, which spread to the rest of the building, completely destroying it and killing six people.

Never underestimate the power of people to break what you build.

Home Insurance Building

The Home Insurance Building, constructed in 1884, was the first building to use a fireproof metal frame to bear the weight of the building, as opposed to using load-bearing masonry. The building was designed by William LeBaron Jenney, who was struck by inspiration when his wife placed a heavy book on top of a birdcage. From Wikipedia:

According to a popular story, one day he came home early and surprised his wife who was reading. She put her book down on top of a bird cage and ran to meet him. He strode across the room, lifted the book and dropped it back on the bird cage two or three times. Then, he exclaimed: “It works! It works! Don’t you see? If this little cage can hold this heavy book, why can’t an iron or steel cage be the framework for a whole building?”

With this idea, he was able to design and build the Home Insurance Building to be 10 stories and 138 feet tall while weighing only a third of what the same building in stone would have weighed. Find inspiration from unexpected places.

Monadnock Building

The Monadnock Building was designed by Daniel Burnham and John Wellborn Root. Burnham preferred simple and functional designs and was known for his stinginess while Root was more artistically inclined and known for his detailed ornamentation on building designs. Despite their philosophical differences, they were one of the world’s most successful architectural firms.

One of the initial sketches (shown) for the building included Ancient Egyptian-inspired ornamentation with slight flaring at the top. Burnham didn’t like the design, as illustrated in a letter he wrote to the property manager:

My notion is to have no projecting surfaces or indentations, but to have everything flush .... So tall and narrow a building must have some ornament in so conspicuous a situation ... [but] projections mean dirt, nor do they add strength to the building ... one great nuisance [is] the lodgment of pigeons and sparrows.

While Root was on vacation, Burnham worked to re-design the building to be straight up-and-down with no ornamentation. When Root returned, he initially objected to the design but eventually embraced it, declaring that the heavy lines of the Egyptian pyramids captured his imagination. We can learn a simple lesson from this: Learn to embrace constraints.

When construction was completed in 1891, the building was a total of 17 stories (including the attic) and 215 feet tall. At the time, it was the tallest commercial structure in the world. It is also the tallest load-bearing brick building constructed. In fact, to support the weight of the entire building, the walls at the bottom had to be six feet (1.8 m) wide.

Because of the soft soil of Chicago and the weight of the building, it was designed to settle 8 inches into the ground. By 1905, it had settled that much and several inches more, which led to the reconstruction of the first floor. By 1948, it had settled 20 inches, making the entrance a step down from the street. If you only focus on profitability, don’t be surprised when you start sinking.

Fuller Flatiron Building

The Flatiron building, constructed in 1902, was also designed by Daniel Burnham, although Root had died of pneumonia during the construction of the Monadnock building. The Flatiron building presented an interesting problem because it was to be built on an odd triangular plot of land. In fact, the building was only 6 and a half feet wide at the tip, which obviously wouldn’t work with the load-bearing masonry design of the Monadnock building.

So the building was constructed using a steel-frame structure that would keep the walls to a practical size and allow them to fully utilize the plot of land. The space you have to work with should influence how you build and you should choose the right materials for the job.

During construction of the Flatiron building, New York locals called it “Burnham’s Folly” and began to place bets on how far the debris would fall when a wind storm came and knocked it over. However, an engineer named Corydon Purdy had designed a steel bracing system that would protect the building from wind four times as strong as it would ever feel. During a 60-mph windstorm, tenants of the building claimed that they couldn’t feel the slightest vibration inside the building. This gives us another principle we can use: Testing makes it possible to be confident about what we build, even when others aren’t.

40 Wall Street v. Chrysler Building


40 Wall Street
Photo by C R, CC BY-SA 2.0

The stories of 40 Wall Street and the Chrysler Building start with two architects, William Van Alen and H. Craig Severance. Van Alen and Severance established a partnership in 1911 which became very successful. However, as time went on, their personal differences caused strain in the relationship, and they separated on unfriendly terms in 1924. Soon after the partnership ended, they found themselves in competition with one another: Severance was commissioned to design 40 Wall Street, while Van Alen would be designing the Chrysler Building.

The Chrysler Building was initially announced in March of 1929, planned to be 808 feet tall. Just a month later, Severance one-upped Van Alen by announcing his design for 40 Wall Street, coming in at 840 feet. By October, Van Alen announced that the steelwork of the Chrysler Building was finished, making it the tallest building in the world at over 850 feet. Severance wasn’t particularly worried, as he already had plans in motion to build higher. Even after reports came in that the Chrysler Building had a 60-foot flagpole at the top, Severance made further changes so that 40 Wall Street would be the taller of the two. These plans were enough for the press to declare that 40 Wall Street had won the race to build the highest, since construction of the Chrysler Building was too far along for it to be built any higher.


The Chrysler Building
Photo by Chris Parker, CC BY-ND 2.0

Unfortunately for Severance, the 60-foot flagpole wasn’t a flagpole at all. It was part of a 185-foot steel spire which Van Alen had designed, built, and shipped to the construction site in secret. On October 23rd, 1929, the pieces of the spire were hoisted to the top of the building and installed in just 90 minutes. The spire was initially mistaken for a crane, and it wasn’t until four days after its installation that it was recognized as a permanent part of the building, making the Chrysler Building the tallest in the world. When all was said and done, 40 Wall Street came in at 927 feet at a cost of $13,000,000, while the Chrysler Building finished at 1,046 feet and cost $14,000,000.

There are two morals we can learn from this story: There is opportunity for great work in places nobody is looking and big buildings are expensive, but big egos are even more so.

Empire State Building

The Empire State Building was built in just 13 months, from March 17, 1930, to April 11, 1931. Its primary architects were Richmond Shreve and William Lamb, who were part of the team assembled by Severance to design 40 Wall Street. They were joined by Arthur Harmon to form Shreve, Lamb, & Harmon. Lamb’s partnership with Shreve was not unlike that of Van Alen and Severance or Burnham and Root. Lamb was more artistic in his architecture, but he was also pragmatic, using his time and design constraints to shape the design and characteristics of the building.

Lamb completed the building drawings in just two weeks, designing from the top down, which was a very unusual method. When designing the building, Lamb made sure that even when he was making concessions, using the building would be a pleasant experience for those who mattered. Lamb was able to complete the design so quickly because he reused previous work, specifically the Reynolds Building in Winston-Salem, NC, and the Carew Tower in Cincinnati, Ohio.

In November of 1929, Al Smith, who commissioned the building as head of Empire State, Inc., announced that the company had purchased land next to the plot where the construction would start, in order to build higher. Shreve, Lamb, and Harmon were opposed to this idea since it would force tenants of the top floors to switch elevators on the way up, and they were focused on making the experience as pleasant as possible.

John Raskob, one of the main people financing the building, wanted the building to be taller. While looking at a small model of the building, he reportedly said “What this building needs is a hat!” and proposed his idea of building a 200-foot mooring tower for a zeppelin at the top of the building, despite several problems such as high winds making the idea unfeasible. But Raskob felt that he had to build the tallest building in the world, despite all of the problems and the higher cost that a taller building would introduce because people can rationalize anything.

There are two more things we should note about the story of the Empire State Building. First, despite the fact that it was designed from the top down, it wasn’t built that way. No matter how something is designed, it needs to be built from the bottom up. Second, the Empire State Building was a big accomplishment in architecture and construction, but it came at no small cost: five people died during its construction. That may seem like a small number considering the scale of the project, but we should remember that no matter how important speed is, it’s not worth losing people over.

United Nations Headquarters

The United Nations Headquarters was constructed between 1948 and 1952. It wasn’t built to be particularly tall—less than half the height of the Empire State Building—but it came with its own set of problems. As you can see in the picture, the wide faces of the building are almost completely covered in windows. These windows offer great lighting and views, but when the sun shines on them, they generate a lot of heat, not unlike a greenhouse. Unless you’re building a greenhouse, you probably don’t want that. It doesn’t matter how pretty your building is if nobody wants to occupy it.

The solution to the problem had been created years before by an engineer named Willis Carrier, who invented an “Apparatus for Treating Air” (now called an air conditioner) to keep the paper in a printing press from wrinkling. By creating the air conditioner, Carrier didn’t just make something cool. He made something cool that everyone can use. Without it, buildings like the UNHQ could never have been built.

Willis (or Sears) Tower


Willis Tower, left

The Willis Tower was built between 1970 and 1973. Fazlur Rahman Khan was hired as the structural engineer for the tower, which needed to be very tall in order to house all of the employees of Sears. A conventional steel-frame design wouldn’t work well in Chicago (also known as the Windy City), since steel-frame buildings tend to bend and sway in heavy winds, which can cause discomfort for people on higher floors, even causing seasickness in some cases.

To solve the problem, Khan invented a “bundled tube structure”, which put the structure of a building on the outside as a thin tube. Using the tube structure not only allowed Khan to build a higher tower, but it also increased floor space and cost less per unit area. But these innovations only came because Khan realized that the higher you build, the windier it gets.

Taipei 101

Taipei 101 was constructed from 1999 to 2004 near the Pacific Ring of Fire, the most seismically active part of the world. Earthquakes present very different problems from wind, since they affect a building at its base instead of at its top. Because of its location, the building needed to withstand both typhoon-force winds (up to 130 mph) and extreme earthquakes, which meant it had to be designed to be both structurally strong and flexible.

To accomplish this, the building was constructed with high-performance steel, 36 columns, and 8 “mega-columns” packed with concrete connected by outrigger trusses which acted similarly to rubber bands. During the construction of the building, Taipei was hit by a 6.8-magnitude earthquake which destroyed smaller buildings around the skyscraper, and even knocked cranes off of the incomplete building, but when the building was inspected it was found to have no structural damage. By being rigid where it has to be and flexible where it can afford to be, Taipei 101 is one of the most stable buildings ever constructed.

Of course, being flexible introduces the problem of discomfort for people in higher parts of the building. To solve this problem, Taipei 101 was built with a massive 728-ton (1,456,000 lb) tuned mass damper, which helps to offset the swaying of the building in strong winds. We can learn from this damper: When the winds pick up, it’s good to have someone (or something) at the top pulling for you.

Burj Khalifa

The newest and tallest building on our list, the Burj Khalifa was constructed from 2004 to 2009. With the Burj Khalifa, design problems came with incorporating adequate safety features. After the terrorist attacks of September 11, 2001, the problem of evacuation became more prominent in construction and design of skyscrapers. When it comes to an evacuation, stairs are basically the only way to go, and going down many flights of stairs can be as difficult as going up them—especially if the building is burning around you. The Burj Khalifa is nearly twice as tall as the old World Trade Center, and in the event of an emergency, walking down nearly half a mile of stairs won’t work out.

So how do the people in the Burj Khalifa get out in an emergency? Well, they don’t. Instead, the Burj Khalifa is designed with periodic safe rooms protected by reinforced concrete and fireproof sheeting that will protect the people inside for up to two hours during a fire. Each room has a dedicated supply of air, delivered through fire-resistant pipes. These safe rooms are placed every 25 floors or so, because a safe space won’t do any good if it can’t be reached.

You may know that the most common cause of death in a fire is actually smoke inhalation, not the fire itself. To deal with this, the Burj Khalifa has a network of high-powered fans throughout the building that blow clean air from outside into it and keep the stairwells leading to the safe rooms clear of smoke. A very important part of this design is that it pushes the smoke, and the toxic elements it carries, out of the way.

It’s important to remember that these safe spaces, as useful as they may be, are not a substitute for rescue workers coming to aid the people trapped in the building. The safe rooms are only there to protect people who can’t help themselves until help can come. Because, after all, what we build is only important because of the people who use it.

Thanks to Ernie Miller for a great talk! The video is also available on YouTube.