Cuenca, Ecuador

Given that End Point uses a distributed office structure and that I do almost all of my work online, in theory I should be able to work just about anywhere there's a good Internet connection. That sounds neat, but is it in fact true? Well, earlier this year I put the theory to a serious test. By nature I'm not all that adventurous a traveller, but my wife is, and she's fluent in Spanish. Our teenage sons are going to be grown men all too soon, so if we were ever to take the plunge into living abroad as a family, I realized it was now or never.

In looking for a place to go we had some criteria:

  • a safe place in Latin America without an excess of walls
  • good or at least reasonable Internet connectivity
  • soccer training for our 15-year-old
  • high school in Spanish for our 16-year-old.

We are very lucky that my friend Tovias suggested his home town (more or less) and volunteered his family to look after us there!

So, in February I picked up and relocated with my family to Cuenca, Ecuador, for just over three months. I worked. My wife, Gina, homeschooled one of our sons and generally kept us all going in this beautiful, historic city.

Here's a Gigapan I took of Cuenca from Turi above the city:

Some scattershot facts, impressions and comments on Cuenca:

  • Cuenca's the third largest city in Ecuador. The population is about 475,000.
  • The elevation of the city is about 8,300 feet.
  • The city has a temperate climate, even though it is close to the equator, at a latitude of 2° 53' 0" S. The altitude has a lot to do with this, of course.
  • The people are nice!
  • I found the standard of living to be "decent"—in the full sense of the word. Certainly incomes are well below those in the US, but prices are too, and I saw almost no abject poverty.
  • Ecuador uses the US dollar as its national currency and the dollar goes a surprisingly long way. (I hope the US doesn't let Ecuador and End Point down by defaulting on its debt.)
  • There are a lot of Ecuadorians from Cuenca and the nearby region who are living in the New York Metro Area. You can see the impact of their sending money home. I met a surprising number of people at random who had spent time either living in or visiting New York or New Jersey.
  • I found myself well above average in height for the first time in my life, but the novelty wore off quickly.
  • The public services are good!
    • Buses are just 25 cents, although some belch too much smoke.
    • Taxi fares run from $1.50 for shorter rides to $2.00 for longer rides; a very long trip will cost $3.00. (Okay, the city is much smaller than NYC, but the fares per distance are still much, much less. Gasoline is more than $1.25/gallon cheaper than in the US, and wages are much lower too. The cabs are smaller than US cabs, but that's just fine by me.)
    • I liked the cleanliness of the city.
    • There are some nice parks and they're reasonably well maintained.
  • It's a pleasure to walk around the city! There's an old city within the city, and it is especially nice to stroll through. There are four fast-flowing little rivers running through town, and I kept thinking they should put in some waterwheels or turbines and tap a portion of all that ferocious energy.
  • There's not much English spoken in the shops, which is good if you are trying to learn Spanish.
  • There are lots of free cultural events.
  • The city felt safe and crime is low. I was told that Guayaquil, Ecuador's largest city, is dangerous, and that Quito, the capital, also has a noticeable crime problem. Perhaps being number three in size or a better standard of living accounts for Cuenca's relative safety.
  • Cuenca's a university town. It's great to see so many students.
  • There are a lot of Americans who are retiring to Cuenca. They know a good deal when they see one.
  • People are required to vote in elections.

This gets its own special bullet:

  • There are plenty of nice restaurants in Cuenca and the prices are especially easy on an American's wallet. Our friends, the Ramons, have a small restaurant serving comidas típicas where we enjoyed many excellent meals. The food is wholesome, tasty, and served with a family's care; soup, main course, and beverage together come to $1.50. My son Cris got regular cooking lessons there, and I got a couple of lessons in making pasteles, too. (I'll need a bunch more lessons to get it right.) My appreciation for how hard some people work in good cheer to make ends meet, counting and making their blessings as they go, was reinforced by our every meal there. By all means, stop by our friends' restaurant and tell Damian or Nube that you read about their restaurant in this blog article and that I sent you! (Broken Spanish is welcome!) It's called "El Truquito del Sabor" and is on Cacique Dumar, an extension of Calle Larga across Avenida Huaina Capac. It's catacorner to the Museo del Banco Central. The museum has very well done exhibits and there's an Inca ruin behind it. It is one of the must-see places in the city.

So what about working from Cuenca? (That's what I thought this article was going to be about.)

While we were in Cuenca I worked a lot. In fact, I worked more than usual on a consistent basis. Part of this was that I didn't spend time driving my son back and forth to soccer training twice a week (he was able to take a bus to and from training five days a week), playing much soccer myself (although I did play a few pickup games on Saturdays in the park, which were a blast), going to weeknight social engagements, or doing home maintenance tasks (not that I'm much on those to begin with, but in Cuenca I had none at all to handle). My exceptionally long work hours in Cuenca probably also had something to do with feeling guilty for getting away on such a great trip, but not wanting it to be mistaken for a vacation. For me, part of the adventure of this trip was expanding my work environment to a global dimension.

Still, we did get out most every weekend to explore some great areas within easy drives of the city, thanks to the Ramon family, who couldn't have been more hospitable. Here's a Gigapan I took with them in el Parque Nacional de Cajas, which borders Cuenca.

The Continental Divide runs through Cajas, and this panorama was taken there at about 13,000 feet.

The great furnished apartment we rented was wired for cable and included Internet service. I had the apartment's management agent, Michael Berger of Cuenca Condos, an American who is responsible, good to deal with, and computer-knowledgeable, upgrade us to the "CM 1800" plan, which boasted 1800 Kbps down and 400 Kbps up. Here's a rate sheet for the cable Internet service. I found the connection generally reliable, the speed mostly as advertised, and good enough to support my IP voice conversations with good quality. I did notice a regular nightly outage of about ten to fifteen minutes around midnight.

I recall having three outages of between one and five hours at our apartment during working hours over the fifteen weeks I was in Cuenca. Also, one day there were several hours of seriously degraded performance, which corresponded with some very stormy weather. Of course outages of this level are a problem in business and can be very stressful. A corollary of Murphy's Law dictates that service disruptions will always happen at the worst times. I realized going in that it would be prudent to have a backup connection. In my home office in the US I have service from multiple providers, so to have less than this in Ecuador would have been silly. Michael showed me the ropes on how to use the cell phone data service on demand. It was decidedly cool. To charge up a USB dongle for Internet service, you remove the SIM card from the dongle, insert it into a cell phone, and do a handshake following the instructions on a scratch-off card with a secret code that you can buy at stores throughout the city. This gets the SIM card charged up with however much service you've paid for, after which you reinsert the SIM card into the dongle. Since I used it as a backup service, just charging it up for one day at a time for $3 was perfect. (It would have been better if I'd done a couple of trial runs before having to use the service under pressure, but that's another story.) I didn't expect that I would be able to have Skype calls over this type of Internet connection, but eventually I did give that a try, and, lo and behold, to my surprise the connection was just fine.

Personally, I've been slow to move to an IP telephone. At End Point until somewhat recently we required everybody to have an old-fashioned land-line telephone. Most of us still have them, along with whatever other phone types we may have. For the longest time, as far as I was concerned, the quality and reliability of IP telephone connections and cell phones just didn't cut it for professional conversations. But of course IP telephony has greatly improved, and this was going to be the only way I could reasonably do business in the US and around the world from Ecuador. I used Skype (for which I had a Skype-In number, which I am still using), Google Talk, and we also got a "Magic Jack". The Magic Jack is a USB gizmo for Windows and Mac machines that you plug a regular old phone into. It is simple to install, you get a US phone number with it, and it's cheap. We used this as our family phone. We didn't find the quality of the calls up to that of Skype and Google Talk connections, but it was useful at times.

Before we went to Cuenca my wife made various friendly contacts there through Couch Surfing, and one, Juan Fernando Granda of Consultores Gramer, was kind enough to advise me on Internet connections and more in Cuenca. (Juan Fernando suggested the Magic Jack.) Consultores Gramer is an agile development company specializing in GeneXus development that Juan Fernando and his partner Alex Merchan founded. It was interesting to hear about IT and web development in Ecuador and business practices there. I was surprised to learn that, because of banking regulations, there aren't any ecommerce websites in Ecuador where card transactions can be finalized online. Also, Juan Fernando pointed out that because of the low labor costs in Ecuador there is less incentive to automate some things. Nevertheless, IT is inevitably advancing and I saw plenty of enthusiastic laptop, cell phone and Internet use. Alex gave me a good demo of Consultores Gramer's system, which is built with GeneXus and which they customize further using it. He's masterful with GeneXus. Since GeneXus is a proprietary Windows-based system, End Point won't be doing development with it, but if you need some good GeneXus developers I have a fine group to recommend.

With a couple of months behind me since my return from Ecuador I have some other points to share about our "test" of carrying on my work with End Point while living abroad:

  • It took a lot of preparation to do what we did, including getting our house closed up and arrangements made in advance for housing, school and soccer on the other end. I'm incredibly lucky that my wife took care of the vast bulk of this and that we got such extraordinary support from our friends.
  • The reset in location and routine for me actually increased my productivity.
  • It was a wonderful and special experience for our family.
  • We would have been hard pressed to have done any better for our sons' educations.
  • Working long days in English (typically 12+ hours/day) slowed down my learning Spanish, but my Spanish sure improved more than it would have if I had just taken a course or two at home.
  • There's a risk in going someplace great like Cuenca: you may fall in love with it and want to move there permanently.

Company Presentation: jQuery and Rails

Yesterday, I gave a company presentation on jQuery and Rails. The talk covered details on how jQuery and Rails work together to build rich web applications, with a considerable amount of focus on AJAX methods. Check out the slides here:

One piece of knowledge I took away from the talk is how different the Rails 3 approach to unobtrusive AJAX is from Rails 2 helpers like link_to_remote and remote_form_for. Mike Farmer recommended reading the rails.js source here to see how onclick behavior is handled in Rails 3.
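
To make the contrast concrete, here is a rough sketch (the "thing" routes and the Delete link are hypothetical). The Rails 2 helper writes the Ajax call inline, while the Rails 3 equivalent only emits a data-remote attribute and leaves the click handling to rails.js or jquery_ujs:

<%# Rails 2: generates an inline onclick with a Prototype Ajax.Request call %>
<%= link_to_remote "Delete", :url => thing_path(@thing), :method => :delete %>

<%# Rails 3: generates <a href="..." data-method="delete" data-remote="true">; %>
<%# rails.js (or jquery_ujs) watches for clicks on [data-remote] elements      %>
<%= link_to "Delete", thing_path(@thing), :method => :delete, :remote => true %>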

Rails Optimization: Advanced Techniques with Solr

Recently, I've been involved in optimization on a Rails 2.3 application. The application had pre-existing fragment caches throughout the views with the use of Rails sweepers. Fragment caches are used throughout the site (rather than action or page caches) because the application has a fairly complex role management system that manages edit access at the instance, class, and site level. In addition to server-side optimization with more fragment caching and query clean-up, I did significant asset-related optimization, including extensive use of CSS sprites, combining JavaScript and CSS requests wherever applicable, and optimizing images with tools like pngcrush and jpegtran. Unfortunately, even with the server-side and client-side optimization, my response times were still sluggish, and the server response was the most time-consuming part of the request for a certain type of page that's expected to be hit frequently:

A first stop in optimization was to investigate whether memcached would speed up the site, described in this article. Unfortunately, that did not improve the speed much.

Next, I re-examined the debug log to see what was taking so much time. The debug log looked like this (note that table names have been changed):

Processing ThingsController#index (for 174.111.14.48 at 2011-07-12 16:32:04) [GET]
  Parameters: {"action"=>"index", "controller"=>"things"}
Thing Load (441.2ms)  SELECT * FROM "things" WHERE ("things"."id" IN (22,6,23,7,35,24,36,25,14,9,37,26,15,...)) 
Rendering template within layouts/application
Rendering things/index
Cached fragment hit: views/all-tags (1.6ms)
Rendered things/_nav_search (3.3ms)
Rendered shared/_sort (0.2ms)
Cached fragment hit: ...
Cached fragment hit: ...
Cached fragment hit: ...
Rendered ...
Rendered ...
Completed in 821ms (View: 297, DB: 443) | 200 OK [http://www.mysite.com/things]

From the debug log, we can point out:

  • The page loads in 821ms according to Rails, which is similar to the time reported in the waterfall shown above.
  • The page is loading several cached fragments, which is good.
  • The biggest time-suck of the page loading is a SELECT * FROM things ...

To rule out any database slowness due to missing indexes, I examined the query speed via console (note that this application runs on PostgreSQL):

=> EXPLAIN ANALYZE SELECT * FROM "things" WHERE ("things"."id" IN (22,6,23,7,35,24,36,25,14,9,37,26,15,...));
                                                 QUERY PLAN                                                 
------------------------------------------------------------------------------------------------------------
 Seq Scan on things  (cost=0.00..42.19 rows=24 width=760) (actual time=0.023..0.414 rows=25 loops=1)
   Filter: (id = ANY ('{22,6,23,7,35,24,36,25,14,9,37,26,15,...}'::integer[]))
 Total runtime: 0.452 ms
(3 rows)

The query here is on the scale of 1000 times faster than the loading of the objects from the ThingsController. It's well known that object instantiation in Ruby is slow. There's not much I can do to speed up the pure performance of object instantiation except possibly 1) upgrade to Ruby 1.9 or 2) try something like JRuby or Rubinius, both of which are out of the scope of this project.
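
To convince yourself that the time goes to building the ActiveRecord objects rather than to the query itself, a quick console comparison works; here's a rough sketch using the (anonymized) Thing model and the ids from the log above:

require 'benchmark'

ids = [22, 6, 23, 7, 35, 24, 36, 25, 14, 9, 37, 26, 15]

# Fetch the raw rows only -- no ActiveRecord objects are built
raw = Benchmark.realtime do
  Thing.connection.select_all("SELECT * FROM things WHERE id IN (#{ids.join(',')})")
end

# Fetch the same rows and instantiate them as ActiveRecord objects (Rails 2.3 finder)
ar = Benchmark.realtime do
  Thing.find(:all, :conditions => { :id => ids })
end

puts "raw rows: #{(raw * 1000).round}ms, ActiveRecord objects: #{(ar * 1000).round}ms"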

My next best option is to investigate using Rails low-level caching here to cache my objects pulled from the database, but there are a few challenges with this:

  • The object instantiation is happening as part of a Solr (via sunspot) query, not a standard ActiveRecord lookup.
  • The Solr object that's retrieved is used for pagination with the will_paginate gem.
  • Rails low-level caches can only store serializable objects. The Solr search object and WillPaginate::Collection object (a wrapper around an array of elements that can be paginated) are not serializable, so I must determine a suitable structure to store in the cache.

Controller

After troubleshooting, here's what I came up with:

@things = Rails.cache.fetch("things-search-#{params[:page]}-#{params[:tag]}-#{params[:sort]}") do  
  things = Sunspot.new_search(Thing)

  things.build do
    if params.has_key?(:tag)
      with :tag_list, CGI.unescape(params[:tag])
    end
    with :active, true
    paginate :page => params[:page], :per_page => 25
    order_by params[:sort].to_sym, :asc
  end
  things.execute!
  t = things.hits.inject([]) { |arr, h| arr.push(h.result); arr }
  { :results => t,  
    :count => things.total }
end 
@things = WillPaginate::Collection.create(params[:page], 25, @things[:count]) { |pager| pager.replace(@things[:results]) }

Here's how it breaks down:

  • My cache key is based on the page number, tag information, and sort type, shown in the argument passed to Rails.cache.fetch:
    @things = Rails.cache.fetch("things-search-#{params[:page]}-#{params[:tag]}-#{params[:sort]}") do  
    ###
    end 
    
  • This block creates a Solr search object, sets the search details, and builds the result set. In this particular search, we pull things that have an :active value of true, may or may not have a specific tag, paginate at 25 per page, and order by the :sort parameter:
      things = Sunspot.new_search(Thing)
    
      things.build do
        if params.has_key?(:tag)
          with :tag_list, CGI.unescape(params[:tag])
        end
        with :active, true
        paginate :page => params[:page], :per_page => 25
        order_by params[:sort].to_sym, :asc
      end
      things.execute!
    
  • things is my Sunspot/Solr object. I build an array of the Solr result set items and record the total number of things found. A hash that contains an array of "things" and a total count is my serializable cacheable object.
      t = things.hits.inject([]) { |arr, h| arr.push(h.result); arr }
      { :results => t,  
        :count => things.total }
    
  • The tricky part here is building a WillPaginate::Collection object after pulling the cached data, since a WillPaginate object is also not serializable. This needs to know what the current page is, things per page, and total number of things found to correctly build the pagination links, but it doesn't require that you have all the other "things" available:
    @things = WillPaginate::Collection.create(params[:page], 25, @things[:count]) { |pager| pager.replace(@things[:results]) }
    

View

My view contains the standard will_paginate reference:

There are <%= pluralize @things.total_entries, 'Thing' %> Total
<%= will_paginate @things %>

And I pass the result set to a partial as a collection to display my listed items:

<%= render :partial => 'shared/single_thing', :collection => @things %>

Sweepers

Another thing to get right here is clearing the low-level cache with Rails sweepers. I have a fairly standard Sweeper setup similar to the one described here. I utilize two ActiveRecord callbacks (after_save, before_destroy) in my sweeper to clear the cache, shown below.

class ThingSweeper < ActionController::Caching::Sweeper
  observe Thing

  def after_save(record)
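    # delete_matched removes every cache entry whose key matches the pattern
    # (the file store supports this; the memcached store does not implement it)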
    Rails.cache.delete_matched(%r{things-search*})

    # expire_fragment ...
  end

  def before_destroy(record)
    Rails.cache.delete_matched(%r{things-search*})
 
    # expire_fragment ...
  end
end

With the changes described here (caching a serializable hash with the Solr results and total count, generating a WillPaginate::Collection object, and defining the sweepers to clear the cache), I saw great improvements in performance. The standard "index" page request does not hit the database at all for users who are not logged in, nor does it experience the sluggish object instantiation. My waterfall now looks like this:

And running 100 requests at a concurrency of 1 on the system (running in production on a development server) shows the requests are averaging 165ms, which is decent. After I wrote this post, I did even more optimization on a different page type in the application that I hope to share in a future blog post.

Note: Ideally, it would be better to cache individual objects so that I would not have to expire entire search caches on every save or delete. However, I could not find methods in Solr that allow us to pull a list of ids for the result set without building the result set.

Announcing pg_blockinfo!

I'm pleased to announce the initial release of pg_blockinfo. It is a tool to examine your PostgreSQL heap data files, written in Perl.

Similar in purpose to pg_filedump, it is used to display (and soon validate) buffer-page-level information for PostgreSQL page/heap files.

pg_blockinfo aims to work in a portable and non-destructive way, using read-only "mmap", sys-level IO functions, and "unpack" in order to minimize any Perl overhead.

What we buy for the compromise of writing this in Perl instead of C is two-fold:

  1. portability/low impact: pg_blockinfo has no dependencies other than Perl and several core Perl modules, and will work in environments where you can't or won't easily install other packages or compile based on specific headers.
  2. expressibility — while not currently supported in full, one of pg_blockinfo's future goals is to allow you to specify criteria for display of both page-level and tuple-level info. It will allow you to define arbitrary Perl expressions to filter the objects you're looking at (i.e., pages, tuples, etc; think "grep" but on a tuple level). It will support a DSL for querying based off of the named fields as well as allow you to supply arbitrary Perl for scanning for any criteria.

Requirements

We require a perl version with PerlIO ":mmap" support, which basically means any perl >= 5.8. We do not require any non-core perl modules; currently we only use Data::Dumper and Getopt::Long for debugging and option parsing respectively, the former only when requested.

Getting pg_blockinfo

The canonical git repo for development for pg_blockinfo is located at github: http://github.com/machack666/pg_blockinfo/

For the development repo, simply run:

$ git clone git://github.com/machack666/pg_blockinfo.git

Or you can just grab the current script directly here.

Using pg_blockinfo

To get help, including available options, canonical and alternate/abbreviated names of recognized fields, and range syntax:

$ pg_blockinfo -h

To dump all fields for all page headers for all pages in a relation:

$ pg_blockinfo /path/to/relfile

To include only specific fields in the output, you can specify multiple -f options and/or include multiple fields per -f argument by comma-delimiting them. Field specifiers are processed in order, so only the final logical set will be included.

"all" is a special shorthand type which will expand to all known columns. pg_blockinfo may support other shorthand groups in the future. When no fields are provided explicitly, "all" is implicitly assumed.

There are both positive and negative field inclusions. An example of a positive inclusion is:

$ pg_blockinfo /path/to/relfile -f prune_xid,tli

This will display only the indicated fields for all blocks in relfile. To include all fields *except* certain ones, prefix their names with a '-' sign:

$ pg_blockinfo -f -pagesize_version /path/to/relfile

This will display all page header fields in all blocks with the exception of the pagesize_version header field.

One consequence of the way these field display options are designed (particularly going forward, with additional field/tuple display options) is that you can define a "view" of the column data using a shell alias, then add/remove columns/criteria by passing additional -f options to it:

# using this as a shorthand to display just those fields
$ alias lsn='pg_blockinfo -f lsn_seq,lsn_off,tli'
$ lsn -f -tli /path/to/foo                          # remove fields from the display
$ lsn -f prune_xid /path/to/foo                     # or add to the list as well

Similar functionality is available for selecting specific blocks using the range option (-r or -b), which lets you specify a range of blocks to look at instead of the entire file.

$ pg_blockinfo -r 2-49 /path/to/relfile
$ pg_blockinfo -r -100 /path/to/relfile
$ pg_blockinfo -r 2,4,120-140,0xFF-0x1100 /path/to/relfile

Range options can be provided multiple times, each with one or more comma-delimited block-range specifications. Blocks are numbered from 0, can be provided in decimal or hexadecimal (when prefixed with 0x), and can appear singly or in a range (bounded or unbounded) when separated by a hyphen.

Planned future features/TODO

In no particular order:

  • dump tuples/tuple headers.
  • better output/interpretation of bitflags.
  • support alternate structures to allow detection/specification of different target versions of the page/tuple headers.
  • allow querying/filtering pages/tuples.
  • validation/sanity checking of various pages.
  • actual extraction of ranges in the heap file.
  • extract/dump tuples by raw ctid.
  • allow arbitrary expressions to define powerful filtering options when querying/displaying information about the tuples/data files.
  • detection of invalid toast tuple pointers/corrupted lz_compressed data (will require a connection to the active system catalog).
  • detect relfile type?
  • mvcc queries against tuples at a given arbitrarily-constructed snapshot
  • detect xids that are invalid (i.e. map to non-existent pages in the pg_clog directory).
  • better/shorter name?

I look forward to any feedback, patches, or other improvements/interest.

Raw Caching Performance in Ruby/Rails

Last week, I set up memcached with a Rails application in hopes of further improving performance after getting a recommendation to pursue it. We're already using many Rails low-level caches and fragment caches throughout the application because of its complex role management system. Those are stored on NFS on a NetApp filer, and I was hoping that switching to memcached would speed things up. Unfortunately, my HTTP request performance tests (using ab) did not back this up: file caching on NFS with the NetApp was about 20% faster than memcached in my tests.

I brought this up to Jon, who suggested we run performance tests on the caching mechanism only rather than testing caching via full http requests, given how many layers of the stack are involved and influence the overall performance number. From the console, I ran the following:

$ script/console   # This app is on Rails 2.3
> require 'benchmark'
> Rails.cache.delete("test")
> Rails.cache.fetch("test") { [SomeKlass.first, SomeKlass.last] }
> # to emulate what would potentially be stored with low-level cache
> Benchmark.bm(15) { |x| x.report("times:") { 10000.times do; Rails.cache.fetch("test"); end } } 
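
Each of the caching configurations tested below is just a different cache_store setting in the environment config; a minimal sketch, with the paths and memcached address as illustrative placeholders:

# config/environments/production.rb (Rails 2.3)
config.cache_store = :mem_cache_store, 'localhost:11211'        # memcached
# config.cache_store = :file_store, '/dev/shm/rails_cache'      # tmpfs RAM disk
# config.cache_store = :file_store, '/var/cache/rails_cache'    # local ext4
# config.cache_store = :file_store, '/mnt/netapp/rails_cache'   # NFS on the NetApp filer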

We ran the console test with a few different caching configurations, with the results shown below. The size of the cached data here was ~2KB.

cache_store                                           avg time/request
:mem_cache_store                                      0.00052 sec
:file_store, tmpfs (local virtual memory RAM disk)    0.00020 sec
:file_store, local ext4 filesystem                    0.00017 sec
:file_store, NetApp filer NFS over gigabit Ethernet   0.00022 sec

I also ran the console test with a much larger cache size of 822KB, with much different results:

cache_store                                           avg time/request
:mem_cache_store                                      0.00022 sec
:file_store, tmpfs (local virtual memory RAM disk)    0.01685 sec
:file_store, local ext4 filesystem                    0.01639 sec
:file_store, NetApp filer NFS over gigabit Ethernet   0.01591 sec

Conclusion

It's interesting to note here that the file-system caching outperformed memcached on the smaller cache, but memcached far outperformed the file-system caching on the larger cache. Ultimately, this difference is negligible compared to additional Rails optimization I applied after these tests, which I'll explain in a future blog post.

Google+

Over the weekend, I dug into Google+ a bit. I wanted to share a few notes about the experience with my coworkers and the world.

Speed

Google has done a great job on performance from what I can tell. They've followed their own recommendations on optimization by doing things like implementing CSS sprites, caching static assets, and gzipping content. I can't do performance tests on my authenticated account at WebPageTest.org, but the perceived performance is great.

User Interface

Parts of the user interface look similar to Facebook. But Google deviated from their norm of utilitarian/pragmatic design, according to this article, and I appreciate their focus on aesthetics here. This, combined with the speed, makes for a delightful user experience.

Circles

Google offers limited sharing functionality with what they call "Circles", similar to Facebook groups. In my case, I have the following circles:

  • Friends
  • Family
  • End Point
  • Web People (professional contacts)
  • Photography

Circles are integrated into every part of Google+, which makes it easy to limit posts, photos, videos or other content to individuals or circles. In addition to limiting sharing permissions, you can also limit viewing "Stream" (akin to Facebook's News Feed) activity to circles; for example, I might only be interested in looking at my "Friends" stream on the weekend, while I might be more inclined to look at my "End Point" stream during the week when I'm in work mode :)

Conference Calling or "Hangouts"

Yesterday, I tried Google Hangouts with a friend from my Windows laptop. I was required to install software on my laptop to run the chat app. Hangouts are similar to Skype video conferencing but are currently limited to 10 people. The quality of the conference call was comparable to Skype, and this feature is an obvious competitor to Skype. Skype has always been a bit finicky on Ubuntu, so Hangouts will be a clear replacement for me if they run smoothly on my Ubuntu laptop(s). I expect the conference calling feature set to grow over time — it'll be interesting to see what comes out.

Other Stuff

Jon mentioned that the Android app for Google+ is nice. I haven't examined Sparks much, but it is similar to sharing in Facebook. I also just read that the iPhone app for Google+ is on its way (from a Google+ contact). Finally, I'd consider shifting my photo backups from Flickr to Picasa if future Picasa updates justify the switch, which would allow me to easily share and tag photos, as well as back up those 2-3 MB photos!

Conclusion

I'm impressed with the speed and usability of Google+. I'm interested to see the upcoming features and equally interested to see how much cross-posting will happen between Google+ and Facebook, a point brought up by Dave Jenkins, who has also written about Google+ after receiving an early invite.

Home router problems with .0 IP address

In our work, the occasional mysterious problem surfaces that makes me appreciate how tractable and sane the majority of our challenges are. Here I'll tell the story of one of those mysterious problems.

In Internet routing of IPv4 addresses, there's nothing inherently special about an IP address that ends in .0, .255, or anything else. It all depends on the subnet. In the days before CIDR (Classless Inter-Domain Routing) brought us arbitrary subnet masks, there were classes of routing, most commonly A, B, and C. And the .0 and .255 addresses were special.
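
To make that concrete, here's a quick Ruby illustration (with made-up example addresses) of why a .0 address is an ordinary host under CIDR:

require 'ipaddr'

# A /23 spans two former class-C blocks, so an address ending in .0 in the
# middle of the range is just another host, not a network address.
net  = IPAddr.new("203.0.112.0/23")     # covers 203.0.112.0 - 203.0.113.255
host = IPAddr.new("203.0.113.0")        # ends in .0, but is a plain host here

puts net.include?(host)                 # => true
puts net.to_range.first                 # => 203.0.112.0   (the network address)
puts net.to_range.last                  # => 203.0.113.255 (the broadcast address)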

That was a long time ago, but it can still cause occasional trouble today. One of our hosting providers assigned us an IP address ending in .0, which we used for hosting a website. It worked fine, and was in service for many months before we heard any reports of trouble.

Then we heard a report from one of our clients that they could not access that website from their home, but they could from their office. We couldn't ever figure out why.

Next one of our own employees found that he could not access the website from his home, but he could from other locations.

Finally we had enough evidence when a friend from the open source community also could not access that website from his home.

The common factor was the router each was using:

  • Belkin G Wireless Router Model F5D7234-4 v4
  • Belkin F5D9231-4 v1
  • and a third, whose owner thought it was a Belkin but was not able to provide the exact model.

We moved the website to a different IP address on the same server, and they had no problem accessing it.

The routers are obviously broken, but there's little sense arguing about that. For now we avoid using any .0 IP address, because there will always be a few people who can't reach it.