Welcome to End Point's blog

Ongoing observations by End Point people.

MongoDB and OpenStack - OSI Days 2014, India

The 11th edition of Open Source India, 2014 was held in Bengaluru, India. The two-day conference featured three parallel tracks of tech talks and workshops spread across various Open Source technologies.

In-depth look at Architecting and Building solutions using MongoDB

Aveekshith Bushan and Ranga Sarvabhouman from MongoDB started off the session with a comparison of the hardware cost of storage systems in earlier days and today. Storage hardware used to be very expensive, so the approach was to filter data down before storing it in the database. That meant results could only be generated from the filtered data; there was no option to go back and process the raw source data. Now that storage has become cheap, we can store the raw data first, do all of our filtering and processing on it, and then distribute the results.

Earlier,
        Filter -> Store -> Distribute
Present,
        Store -> Filter -> Distribute

Since we are now storing huge amounts of data, we need a processing system that can handle and analyse it efficiently. Data today is growing explosively, and the three Vs characterize this growth of (Big) Data: we need to handle a huge Volume of a wide Variety of data arriving with high Velocity. MongoDB is designed around these requirements.

MongoDB simply stores data as documents without any data type constraints, which helps it ingest huge amounts of data quickly. Constraint checks are left to the application level to increase storage speed on the database end, though MongoDB does recognise data types once a document is stored. In simple words, the philosophy is: why check the same things (data types or other constraints) in two places (application and database)?
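For instance, two documents with completely different shapes can live in the same collection with no schema declared up front. The mongo shell session below is only an illustration; the collection and field names are made up:

db.events.insert({ type: "click", page: "/home", ts: new Date() })
db.events.insert({ type: "purchase", amount: 499, currency: "INR", items: ["A100", "B200"] })

Both inserts succeed, and the application alone decides what each document should contain.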

MongoDB stores related data together in a single document and fetches it with a single disk seek; avoiding multiple disk seeks makes retrieval fast. In a relational database, related data is spread across different tables, which leads to multiple disk seeks to retrieve the complete data for an entity. MongoDB doesn't support joins, but it does offer references to other collections (tables) without imposing foreign key constraints.
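As a sketch of the two approaches (again with made-up collection and field names): embedding keeps everything in one document and one read, while a reference stores only an identifier that the application must look up itself, with no foreign key enforcement.

// Embedding: the order and its line items come back in a single read
db.orders.insert({
  _id: 1,
  customer: { name: "Asha", city: "Bengaluru" },
  items: [ { sku: "A100", qty: 2 }, { sku: "B200", qty: 1 } ]
})

// Referencing: store only the customer's _id and fetch it separately when needed
db.orders.insert({ _id: 2, customer_id: 42, items: [ { sku: "C300", qty: 5 } ] })
db.customers.findOne({ _id: 42 })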

As per the db-engines rankings, MongoDB sits at the top of the NoSQL database world. It also provides certain key features, which I remember from the session:
    • Sub-documents duplicate data, but this improves performance (since storage is cheap, the duplication doesn't cost much)
    • Auto-sharding (Scalability)
    • Sharding helps parallel access to the system
    • Range Based Sharding 
    • Replica Sets (High availability)
    • Secondary indexes available
    • Indexes are the single most tunable part of the MongoDB system 
    • Partition across systems 
    • Rolling upgrades
    • Schema free
    • Rich document based queries
    • Read from secondary
When do you need MongoDB?
    • Your data grows beyond the capacity of a relational database system
    • You need high performance for online requests
Finally, the speakers emphasized understanding your use case clearly and choosing the right MongoDB features to get effective performance.

OpenStack Mini Conf


A special half-day OpenStack mini conference was organised in the second half of the first day. The talks ranged from OpenStack basics to in-depth topics. I have summarised all the talks here to give an idea of the OpenStack software platform.

OpenStack is an Open Source cloud computing platform for provisioning Infrastructure as a Service (IaaS). The DevStack project is a wonderful way to set up OpenStack in a development environment quickly and easily. The OpenStack project's well-written documentation clearly explains everything. In addition, anyone can contribute to OpenStack with the help of the How to Contribute guide; the project uses the Gerrit review system and the Launchpad bug tracking system.
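For anyone who wants to try it, getting a single-node development cloud running with DevStack is essentially a clone-and-run affair. A rough sketch (run it as a normal user on a disposable VM, not a production machine):

$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ ./stack.sh    # builds and starts a single-node OpenStack environment

stack.sh asks for a few passwords (or reads them from a local configuration file) and then installs and wires up the core services for you.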

OpenStack has multiple components that provide the various features of Infrastructure as a Service. Here is the list of OpenStack components and the purpose of each one.

Nova (Compute) - manages the pool of compute resources
Cinder (Block Storage) - provides storage volumes to machines
Neutron (Network) - manages networks and IP addresses
Swift (Object Storage) - provides a distributed, highly available (replicated) object storage system
Glance (Image) - provides a repository to store disk and server images
KeyStone (Identity) - provides a common authentication system across all components
Horizon (Dashboard) - provides a GUI for users to interact with OpenStack components
Ceilometer (Telemetry) - provides service usage and billing reports
Ironic (Bare Metal) - provisions bare metal machines instead of virtual machines
Sahara (Map Reduce) - provisions Hadoop clusters for big data processing

OpenStack services are usually mapped to AWS services to better understand the purpose of the components. The following table depicts the mapping of similar services in OpenStack and AWS:

OpenStack       AWS
Nova            EC2
Cinder          EBS
Neutron         VPC
Swift           S3
Glance          AMI
KeyStone        IAM
Horizon         AWS Console
Ceilometer      CloudWatch
Sahara          Elastic MapReduce

Along with the overview of the OpenStack architecture, there were a couple of in-depth talks, which are listed below with slides.

That was a wonderful Day One of OSI 2014, which helped me get a better understanding of MongoDB and OpenStack.

New Liquid Galaxy website in Portuguese!

End Point Corporation is pleased to announce the official launch of its new website in Portuguese! The site, http://liquidgalaxy.pt.endpoint.com/, officially signals the arrival of End Point's Liquid Galaxy in Brazil and aims to provide service to all current and future customers in one of the largest and most dynamic markets in South America.

With a population of more than 200 million, Brazil is also a quick adopter of new technologies, with a considerable number of industry leaders that can benefit directly from a Liquid Galaxy deployment. This includes a solid commodities sector, a growing real estate market, tourism, and a vibrant media industry, all strong candidates for the new technology.

Brazil is also the entry point to the South American market in general. We are confident that as we increase penetration in the Brazilian market, other opportunities in the region will follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: "We're excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio, and the tech conferences I attend there are always overbooked."

If you would like to know more about this technology, please contact us: vendas@endpoint.com

Brazilian Portuguese Liquid Galaxy website launch!

End Point Corporation is pleased to announce the official launch of its new Brazilian Portuguese Liquid Galaxy website! The site, found at http://liquidgalaxy.pt.endpoint.com/ officially signals the arrival of End Point’s Liquid Galaxy to Brazil, and aims to provide service to all current and future customers in what is South America’s largest and most dynamic market.

With a population over 200 million, Brazil is also a quick adopter of new technologies with sizeable industry sectors that can benefit directly from the implementation of a Liquid Galaxy. This includes a massive commodities sector, booming real estate, tourism and a vibrant media market, all of which are strong candidates for the technology.

Brazil is also a logical entry-point into the larger South American market in general. We’re confident that as we increase market penetration in Brazil, other opportunities in the region will soon follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: “We’re excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio whenever I go there for tech conferences, which are always booked to overflowing levels.”

If you have international business in South America, or are based in Brazil and would like to know more about this great technology, please contact us at vendas@endpoint.com

Create a sales functionality within Spree 2.3 using Spree fancy

Introduction

I recently started working with Spree and wanted to learn how to implement some basic features. I focused on one of the most common needs of any e-commerce business - adding a sale functionality to products. To get a basic understanding of what was involved, I headed straight to the Spree Developer Guides. As I was going through the directions, I realized it was intended for the older Spree version 2.1. This led to me running into a few issues as I went through it using Spree's latest version 2.3.4. I wanted to share with you what I learned, and some tips to avoid the same mistakes I made.

Set-up

I'll assume you have the prerequisites it lists including Rails, Bundler, ImageMagick and the Spree gem. These are the versions I'm running on my Mac OS X:
  • Ruby: 2.1.2p95
  • Rails: 4.1.4
  • Bundler: 1.5.3
  • ImageMagick: 6.8.9-1
  • Spree: 2.3.4

What is Bundler? Bundler provides a consistent environment for Ruby projects by tracking and installing the exact gems and versions that are needed. You can read more about the benefits of using Bundler on their website. If you're new to Ruby on Rails and/or Spree, you'll quickly realize how useful Bundler is when updating your gems.

After you've successfully installed the necessary tools for your project, it's time to create our first Rails app, which will then be used as a foundation for our simple Spree project, called mystore.

Let's create our app

Run the following commands:

$ rails new mystore
$ cd mystore
$ gem install spree_cmd

*Note: you may get a warning that you need to run bundle install before trying to start your application since spree_gateway.git isn't checked out yet. Go ahead and follow those directions, I'll wait.

Spree-ify our app

We can add the e-commerce platform to our Rails app by running the following command:

spree install --auto-accept

If all goes well, you should get a message that says, "Spree has been installed successfully. You're all ready to go! Enjoy!". Now the fun part - let's go ahead and start our server to see what our demo app actually looks like. Run rails s to start the server and open up a new browser page pointing to the URL localhost:3000.
*Note - when you navigate to localhost:3000, watch your terminal - you'll see a lot of processes running in the background as the page loads simultaneously in your browser window. It can be pretty overwhelming, but as long as you get a "Completed 200 OK" message in your terminal, you should be good to go! See it below:


Our demo app actually comes with an admin interface ready to use. Head to your browser window and navigate to http://localhost:3000/admin. The login Spree instructs you to use is spree@example.com and password spree123.

Once you login to the admin screen, this is what you should see:


Once you begin to use Spree, you'll soon find that the most heavily used areas of the admin panel include Orders, Products, Configuration and Promotions. We'll be going into some of these soon.

Extensions in 3.5 steps

The next part of the Spree documentation suggests adding the spree_fancy extension to our store to update the look and feel of the website, so let's go ahead and follow the next few steps:

Step 1: Update the Gemfile

We can find our Gemfile by going back to the terminal and, within the mystore directory, typing ls to see a list of all the files and subdirectories within the Spree app. You will see the Gemfile there - open it using your favorite text editor. Add the following line at the end of your Gemfile, and save it:
gem 'spree_fancy', :git => 'git://github.com/spree/spree_fancy.git', :branch => '2-1-stable'

Notice the branch it mentions is 2-1-stable. Since you just installed Spree, you are most likely using the latest version, 2-3-stable. I changed the branch in the above gem line to '2-3-stable' to reflect the Spree version I'm currently using. After completing this step, run bundle install to install the gem using Bundler.
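For reference, here is the adjusted line as it sits in my Gemfile (this assumes you are also on Spree 2-3-stable; match the branch to whichever version you installed):

gem 'spree_fancy', :git => 'git://github.com/spree/spree_fancy.git', :branch => '2-3-stable'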

Now we need to copy over the migrations and assets from the spree_fancy extension by running this command in your terminal within your mystore application:

$ bundle exec rails g spree_fancy:install

Step 1.5: We've hit an error!

At this point, you've probably hit a LoadError, and we can no longer see our beautiful Spree demo app, instead getting an error page which says "Sprockets::Rails::Helper::AssetFilteredError in Spree::Home#index" at the top. How do we fix this?

Within your mystore application directory, open the config/initializers/assets.rb file and edit the last line of code by uncommenting it and typing:

Rails.application.config.assets.precompile += %w( bx_loader.gif )

Now restart your server and you will see your new theme!

Step 2: Create a sales extension

Now let's see how to create an extension instead of using an existing one. According to the Spree tutorial, we first need to generate an extension - remember to run this command from a directory outside of your Spree application:

$ spree extension simple_sales
Once you do that, cd into your spree_simple_sales directory. Next, run bundle install to update your Spree extension.

Now you can create a migration that adds a sale_price column to variants using the following command:

bundle exec rails g migration add_sale_price_to_spree_variants sale_price:decimal

Once your migration has been generated, open db/migrate/XXXXXXXXXXXX_add_sale_price_to_spree_variants.rb and add in the changes as shown in the Spree tutorial:

class AddSalePriceToSpreeVariants < ActiveRecord::Migration
  def change
    add_column :spree_variants, :sale_price, :decimal, :precision => 8, :scale => 2
  end
end
Now let's switch back to our mystore application so that we can add our extension before continuing any development. Within mystore, add the following to your Gemfile:
gem 'spree_simple_sales', :path => '../spree_simple_sales'
You will have to adjust the path ('../spree_simple_sales') depending on where you created your sales extension.

Now it's time to bundle install again, so go ahead and run that. Now we need to copy our migration by running this command in our terminal:

$ rails g spree_simple_sales:install

Step 3: Adding a controller Action to HomeController

Once the migration has been copied, we need to extend the functionality of Spree::HomeController and add an action that selects "on sale" products. Before doing that, we need to make sure to change our .gemspec file within the spree_simple_sales directory (remember: this is outside of our application directory). Open up the spree_simple_sales.gemspec file in your text editor and add the following line to the list of dependencies:

s.add_dependency 'spree_frontend'

Run bundle.

Run $ mkdir -p app/controllers/spree to create the directory structure for our controller decorator. This is where we will create a new file called home_controller_decorator.rb and add the following content to it:

module Spree
  HomeController.class_eval do
    def sale
      @products = Product.joins(:variants_including_master).where('spree_variants.sale_price is not null').uniq
    end
  end
end

As Spree explains it, this script will select just the products that have a variant with a sale_price set.

Next step - add a route to this sales action in our config/routes.rb file. Make sure your routes.rb file looks like this:

Spree::Core::Engine.routes.draw do
  get "/sale" => "home#sale"
end

Let's set a sale price for the variant

Normally, to change a variant attribute, we could do it through the admin interface, but we haven’t created this functionality yet. This means we need to open up our rails console:
*Note - you should be in the mystore directory

Run $ rails console

The next steps are taken directly from the Spree documentation:

“Now, follow the steps I take in selecting a product and updating its master variant to have a sale price. Note, you may not be editing the exact same product as I am, but this is not important. We just need one “on sale” product to display on the sales page.”

> product = Spree::Product.first
=> #<Spree::Product id: 107377505, name: "Spree Bag", description: "Lorem ipsum dolor sit amet, consectetuer adipiscing...", available_on: "2013-02-13 18:30:16", deleted_at: nil, permalink: "spree-bag", meta_description: nil, meta_keywords: nil, tax_category_id: 25484906, shipping_category_id: nil, count_on_hand: 10, created_at: "2013-02-13 18:30:16", updated_at: "2013-02-13 18:30:16", on_demand: false>

> variant = product.master
=> #<Spree::Variant id: 833839126, sku: "SPR-00012", weight: nil, height: nil, width: nil, depth: nil, deleted_at: nil, is_master: true, product_id: 107377505, count_on_hand: 10, cost_price: #<BigDecimal:7f8dda5eebf0,'0.21E2',9(36)>, position: nil, lock_version: 0, on_demand: false, cost_currency: nil, sale_price: nil>

> variant.sale_price = 8.00
=> 8.0

> variant.save
=> true

Hit Ctrl-D to exit the console.

Now we need to create the page that renders the product that is on sale. Let’s create a view to display these “on sale” products.

Create the required views directory by running: $ mkdir -p app/views/spree/home

Create a file in your new directory called sale.html.erb and add the following to it:

<%= render 'spree/shared/products', :products => @products %>

Now start your rails server again and navigate to localhost:3000/sale to see the product you listed on sale earlier! Exciting stuff, isn't it? The next step is to actually reflect the sale price instead of the original price by fixing our sales price extension using Spree Decorator.

Decorate your variant

Create the required directory for your new decorator within your mystore application: $ mkdir -p app/models/spree

Within your new directory, create a file called variant_decorator.rb and add:

module Spree
  Variant.class_eval do
    alias_method :orig_price_in, :price_in
    def price_in(currency)
      return orig_price_in(currency) unless sale_price.present?
      Spree::Price.new(:variant_id => self.id, :amount => self.sale_price, :currency => currency)
    end
  end
end

The original price_in method is now aliased as orig_price_in, and it is used unless a sale_price is present, in which case the sale price on the product's master variant is returned instead.

In order to ensure that our modification to the core Spree functionality works, we need to write a couple of unit tests for variant_decorator.rb. We need a full Rails application present to test it against, so we can create a barebones test_app to run our tests against.

Run the following command from the root directory of your EXTENSION: $ bundle exec rake test_app

It will begin the process by saying "Generating dummy Rails application…" - great, you're on the right path!

Once you finish creating your dummy Rails app, run the rspec command and you should see the following output:

No examples found.

Finished in 0.00005 seconds
0 examples, 0 failures

Now it’s time to start adding some tests by replicating your extension’s directory structure in the spec directory: $ mkdir -p spec/models/spree

In your new directory, create a file called variant_decorator_spec.rb and add this test:

require 'spec_helper'

describe Spree::Variant do
  describe "#price_in" do
    it "returns the sale price if it is present" do
      variant = create(:variant, :sale_price => 8.00)
      expected = Spree::Price.new(:variant_id => variant.id, :currency => "USD", :amount => variant.sale_price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end

    it "returns the normal price if it is not on sale" do
      variant = create(:variant, :price => 15.00)
      expected = Spree::Price.new(:variant_id => variant.id, :currency => "USD", :amount => variant.price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end
  end
end

Deface overrides

Next we need to add a field to our product admin page, so we don’t have to always go through the rails console to update a product’s sale_price. If we directly override the view that Spree provides, whenever Spree updates the view in a new release, the updated view will be lost, so we’d have to add our customizations back in to stay up to date.

A better way to override views is to use Deface, a Rails library that lets you customize views without directly editing the underlying view file. All view customizations live in ONE location, app/overrides, which makes sure your app is always using the latest implementation of the view provided by Spree.

  1. Go to mystore/app/views/spree, create an admin/products directory, and create the file _form.html.erb inside it.
  2. Copy the full file NOT from Spree’s GitHub but from your Spree backend. You can think of your Spree backend as the area to edit your admin (among other things) - the spree_backend gem contains the most updated _form.html.erb - if you use the one listed in the documentation, you will get some Method Errors on your product page.

In order to find the _form.html.erb file in your spree_backend gem, navigate to your app, and within that, run the command: bundle show spree_backend

The result is the location of your spree_backend. Now cd into that location, and navigate to app/views/spree/admin/products - this is where you will find the correct _form.html.erb. Copy the contents of this file into the newly created _form.html.erb file within your application’s directory structure you just created: mystore/app/views/spree/admin/products.
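If you prefer to do the copy in a single step, a command along these lines, run from the root of mystore, should also work (it assumes the app/views/spree/admin/products directory from step 1 already exists):

$ cp "$(bundle show spree_backend)/app/views/spree/admin/products/_form.html.erb" app/views/spree/admin/products/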

Now we want to add a field container for sale price after the price field container. Create another override by adding a new file in your application's app/overrides directory called add_sale_price_to_product_edit.rb with the following content:

Deface::Override.new(:virtual_path => 'spree/admin/products/_form',
  :name => 'add_sale_price_to_product_edit',
  :insert_after => "erb[loud]:contains('text_field :price')",
  :text => "
    <%= f.field_container :sale_price do %>
      <%= f.label :sale_price, raw(Spree.t(:sale_price) + content_tag(:span, ' *')) %>
      <%= f.text_field :sale_price, :value =>
        number_to_currency(@product.sale_price, :unit => '') %>
      <%= f.error_message_on :sale_price %>
    <% end %>
  ")

The last step is to update our model in order to get an updated product edit form. Create a new file in your application’s app/models/spree directory called product_decorator.rb. Add the following content:

module Spree
  Product.class_eval do
    delegate_belongs_to :master, :sale_price
  end
end

Now you can check whether it worked by heading to http://localhost:3000/admin/products and editing one of the products. Once you're on the product edit page, you should see a new field container called SALE PRICE. Add a sale price in the empty field and click Update. Once completed, navigate to http://localhost:3000/sale to find an updated list of products on sale.

CONCLUSION

Congratulations, you've created the sales functionality! If you're using Spree 2.3 to create a sales functionality for your application, I would love to know what your experience was like. Good luck!

Can we Server Name Indicate yet?

The encryption times, they are a-changin'.

Every once in a while I take a look at the state of SNI, in the hope that we're finally ready to put it to wide-scale use.
It started a few years back when IPv6 got a lot of attention, though in reality very few end-user ISPs had IPv6 connectivity at that time.  (And very few still do!  But that's another angry blog post.)  So, essentially, IPv4 was still the only option, and thus SNI was still important.

Then earlier this year Microsoft dropped [public] support for Windows XP.  Normally this is one of those things that would be pretty far off my radar, but Internet Explorer on XP is one of the few clients* that doesn't support SNI.  So at that time, with hope in my heart, I ran a search through the logs on a few of our more active servers, only to find that roughly 5% of the hits were from MSIE on Windows XP.  So much for that.

(* Android < 3.0 has the same problem, incidentally.  But it constituted only 0.2% of the hits, so I'm not as worried about the lack of support in that case.)

Now in fairly quick succession a couple other things have happened: SSLv3 is out, and SSL certificates with SHA-1 signatures are out.  This has me excited.  I'll tell you why in a moment.

First, now that I've written "SNI" four times at this point I should probably tell you that it stands for Server Name Indication, and basically means the client sends the intended server name very early in the connection process.  That gives the server the opportunity to select the correct certificate for the given name, and present it to the client.
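You can watch SNI happen (or not) yourself with openssl's s_client; the hostname below is just a placeholder, and depending on your openssl version, SNI may only be sent when you pass -servername explicitly:

$ # With -servername, the client sends SNI and the server can pick the matching certificate
$ openssl s_client -connect www.example.com:443 -servername www.example.com

$ # Without it, older clients send no SNI, and you get the default certificate for that IP
$ openssl s_client -connect www.example.com:443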

If at this point you're yelling "of course!" at the screen, press Ctrl-F and search for "SSLv3" below.

For the rest, pull up a chair, it's time for a history lesson.

Back in the day, when a web browser wanted to connect to a site it performed a series of steps: it looked up the given name in DNS to get an IP address, connected to that IP address, and then requested the path in the form of "GET /index.html".  Quite elegant, fairly straightforward.  And notice the name only matters to the client, which uses it for the DNS look-up; to the server it doesn't matter at all.  The server accepts a connection on an IP address and responds to the request for a specific path.

A need arose for secure communication.  Secure Sockets Layer (SSL) establishes an encrypted channel over which private information can be shared.  In order to fight man-in-the-middle attacks, a certificate exchange takes place: when the connection is made the server offers up its certificate and the client (among other things I'm glossing over) confirms that the name on the certificate matches the name it thinks it tried to connect to.  Note that the situation is much the same as above, in that the client cares about the name, but the server just serves up what it has associated with the IP address.

Around the same time, the Host header appeared.  Finally the browser had a way to send the name of the site it was trying to access over to the server; it simply becomes part of the request.  What a simple thing it is, and what a world it opens up on the server side.  Instead of associating a single site with each IP address, a single web server listening on one address can examine the Host header and serve a virtually unlimited number of completely distinct sites.
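For illustration, this is all the Host header amounts to on the wire; two sites on the same IP address are distinguished by nothing more than this one line:

GET /index.html HTTP/1.1
Host: www.example.com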

However, there's a problem.  Both were great advances at the time but, unfortunately, they were mutually exclusive.  SSL is established first, immediately upon connection, and afterwards the HTTP communication happens over the secure channel.  But the Host header is part of the HTTP request.  Spot the problem yet?  The server has to serve up a certificate before the client has an opportunity to tell the server which site it wants.  If the server hosts multiple sites and serves up the wrong certificate, the name doesn't match what the client expects, and at best the browser displays a big, scary warning to the end user, or at worst refuses to continue communicating.

There are a few work-arounds, but this is already getting long and boring.  If you're really curious, search for "Subject Alternative Name" and try to imagine why it's an inordinately expensive solution when the list of sites a server needs to support changes.

So for a long time, that was the state of things.  And by a long time, I mean almost 20 years, from when these standards were released.  In computing, that's a long time.  Thus I'd hoped that by now, SNI would be an option as the real solution.

Fast forward to the last few months.  We've had the news that SSLv3 isn't to be trusted, and the news that SHA-1 signatures on SSL certificates are pretty un-cool.  SHA-256 is in, and major CAs are now using that signature by default.  Why does this have me excited?  In the case of the former, Windows XP pre-SP3 only supports up to SSLv3, so any site that has mitigated the POODLE vulnerability is already excluding those clients.  Similarly, in the case of the latter, pre-IE8 clients are excluded by sites that have moved to SHA-2 certificates.

Strictly speaking we're not 100% there, as a fully up-to-date Internet Explorer on Windows XP is still compatible with these recent SSL ecosystem changes.  But the sun is setting on this platform, and maybe soon we'll be able to start putting this IPv4-saving technology into use.

Soon.

Dear PostgreSQL: Where are my logs?

From Flickr user Jitze Couperus

When debugging a problem, it's always frustrating to get sidetracked hunting down the relevant logs. PostgreSQL users can select any of several different ways to handle database logs, or even choose a combination. But especially for new users, or those getting used to an unfamiliar system, just finding the logs can be difficult. To ease that pain, here's a key to help dig up the correct logs.

Where are log entries sent?

First, connect to PostgreSQL with psql, pgadmin, or some other client that lets you run SQL queries, and run this:
foo=# show log_destination ;
 log_destination 
-----------------
 stderr
(1 row)
The log_destination setting tells PostgreSQL where log entries should go. In most cases it will be one of four values, though it can also be a comma-separated list of any of those four values. We'll discuss each in turn.

SYSLOG

Syslog is a complex beast, and if your logs are going here, you'll want more than this blog post to help you. Different systems have different syslog daemons, those daemons have different capabilities and require different configurations, and we simply can't cover them all here. Your syslog may be configured to send PostgreSQL logs anywhere on the system, or even to an external server. For your purposes, though, you'll need to know what "ident" and "facility" you're using. These values tag each syslog message coming from PostgreSQL, and allow the syslog daemon to sort out where the message should go. You can find them like this:
foo=# show syslog_facility ;
 syslog_facility 
-----------------
 local0
(1 row)

foo=# show syslog_ident ;
 syslog_ident 
--------------
 postgres
(1 row)
Syslog is often useful, in that it allows administrators to collect logs from many applications into one place, to relieve the database server of logging I/O overhead (which may or may not actually help anything), or any number of other interesting rearrangements of log data.
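As one concrete sketch, on a system using rsyslog, a rule matching the local0 facility shown above might send PostgreSQL's messages to their own file; the path here is just a placeholder, and the details vary between syslog daemons:

local0.*    /var/log/postgresql/postgresql.log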

EVENTLOG

For PostgreSQL systems running on Windows, you can send log entries to the Windows event log. You'll want to tell Windows to expect the log values, and what "event source" they'll come from. You can find instructions for this operation in the PostgreSQL documentation discussing server setup.

STDERR

This is probably the most common log destination (it's the default, after all) and can get fairly complicated in itself. Selecting "stderr" instructs PostgreSQL to send log data to the "stderr" (short for "standard error") output pipe most operating systems give every new process by default. The difficulty is that PostgreSQL or the applications that launch it can then redirect this pipe to all kinds of different places. If you start PostgreSQL manually with no particular redirection in place, log entries will be written to your terminal:
[josh@eddie ~]$ pg_ctl -D $PGDATA start
server starting
[josh@eddie ~]$ LOG:  database system was shut down at 2014-11-05 12:48:40 MST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  statement: select syntax error;
ERROR:  column "syntax" does not exist at character 8
STATEMENT:  select syntax error;
In these logs you'll see me starting the database, connecting to it from some other terminal, and issuing the obviously erroneous command "select syntax error". But there are several ways to redirect this elsewhere. The easiest is with pg_ctl's -l option, which essentially redirects stderr to a file, in which case the startup looks like this:
[josh@eddie ~]$ pg_ctl -l logfile -D $PGDATA start
server starting
Finally, you can also tell PostgreSQL to redirect its stderr output internally, with the logging_collector option (which older versions of PostgreSQL named "redirect_stderr"). This can be on or off, and when on, collects stderr output into a configured log directory.

So if you see log_destination set to "stderr", a good next step is to check logging_collector:
foo=# show logging_collector ;
 logging_collector 
-------------------
 on
(1 row)
In this system, logging_collector is turned on, which means we have to find out where it's collecting logs. First, check log_directory. In my case, below, it's an absolute path, but by default it's the relative path "pg_log". This is relative to the PostgreSQL data directory. Log files are named according to a pattern in log_filename. Each of these settings is shown below:
foo=# show log_directory ;
      log_directory      
-------------------------
 /home/josh/devel/pg_log
(1 row)

foo=# show data_directory ;
       data_directory       
----------------------------
 /home/josh/devel/pgdb/data
(1 row)

foo=# show log_filename ;
          log_filename          
--------------------------------
 postgresql-%Y-%m-%d_%H%M%S.log
(1 row)
Documentation for each of these options, along with settings governing log rotation, is available here.
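Pulling those settings together, a postgresql.conf fragment for a collector-based setup might look like the following sketch (the values are examples, not recommendations):

logging_collector = on
log_directory = 'pg_log'                          # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d                             # start a new file daily...
log_rotation_size = 10MB                          # ...or when the current file reaches this size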

If logging_collector is turned off, you can still find the logs using the /proc filesystem, on operating systems equipped with one. First you'll need to find the process ID of a PostgreSQL process, which is simple enough:
foo=# select pg_backend_pid() ;
 pg_backend_pid 
----------------
          31950
(1 row)
Then, check /proc/YOUR_PID_HERE/fd/2, which is a symlink to the log destination:
[josh@eddie ~]$ ll /proc/31113/fd/2
lrwx------ 1 josh josh 64 Nov  5 12:52 /proc/31113/fd/2 -> /var/log/postgresql/postgresql-9.2-local.log

CSVLOG

The "csvlog" mode creates logs in CSV format, designed to be easily machine-readable. In fact, this section of the PostgreSQL documentation even provides a handy table definition if you want to slurp the logs into your database. CSV logs are produced in a fixed format the administrator cannot change, but it includes fields for everything available in the other log formats. For these to work, you need to have logging_collector turned on; without logging_collector, the logs simply won't show up anywhere. But when configured correctly, PostgreSQL will create CSV format logs in the log_directory, with file names mostly following the log_filename pattern. Here's my example database, with log_destination set to "stderr, csvlog" and logging_collector turned on, just after I start the database and issue one query:
[josh@eddie ~/devel/pg_log]$ ll
total 8
-rw------- 1 josh josh 611 Nov 12 16:30 postgresql-2014-11-12_162821.csv
-rw------- 1 josh josh 192 Nov 12 16:30 postgresql-2014-11-12_162821.log
The CSV log output looks like this:
[josh@eddie ~/devel/pg_log]$ cat postgresql-2014-11-12_162821.csv 
2014-11-12 16:28:21.700 MST,,,2993,,5463ed15.bb1,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"database system was shut down at 2014-11-12 16:28:16 MST",,,,,,,,,""
2014-11-12 16:28:21.758 MST,,,2991,,5463ed15.baf,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"database system is ready to accept connections",,,,,,,,,""
2014-11-12 16:28:21.759 MST,,,2997,,5463ed15.bb5,1,,2014-11-12 16:28:21 MST,,0,LOG,00000,"autovacuum launcher started",,,,,,,,,""
2014-11-12 16:30:46.591 MST,"josh","josh",3065,"[local]",5463eda6.bf9,1,"idle",2014-11-12 16:30:46 MST,2/10,0,LOG,00000,"statement: select 'hello, world!';",,,,,,,,,"psql"
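If you do want those CSV logs in the database, and assuming you have created the postgres_log table exactly as that documentation section describes, loading a file is a single COPY (the path is just my example file, and server-side COPY requires superuser privileges):

foo=# COPY postgres_log FROM '/home/josh/devel/pg_log/postgresql-2014-11-12_162821.csv' WITH csv;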

Finding specific git commit at a point in time

When using git, being able to track down a particular version of a file is an important debugging skill. The common use case for this is when someone is reporting a bug in your project, but they do not know the exact version they are using. While normal software versioning resolves this, bug reports often come in from people using the HEAD of a project, and thus the software version number does not help. Finding the exact set of files the user has is key to being able to duplicate the bug, understand it, and then fix it.

How you get to the correct set of files (which means finding the proper git commit) depends on what information you can tease out of the user. There are three classes of clues I have come across, each of which is solved a different way. You may be given clues about:

  1. Date: The date they downloaded the files (e.g. last time they ran a git pull)
  2. File: A specific file's size, checksum, or even contents.
  3. Error: An error message that helps guide to the right version (especially by giving a line number)

Finding a git commit by date

This is the easiest one to solve. If all you need is to see how the repository looked around a certain point in time, you can use git checkout with git-rev-parse to get it. I covered this in detail in an earlier post, but the best answer is below. For all of these examples, I am using the public Bucardo repository at git clone git://bucardo.org/bucardo.git

$ DATE='Sep 3 2014'
$ git checkout `git rev-list -1 --before="$DATE" master`
Note: checking out '79ad22cfb7d1ea950f4ffa2860f63bd4d0f31692'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 79ad22c... Need to update validate_sync with new columns

Or if you prefer xargs over backticks:

$ DATE='Sep 3 2014'
$ git rev-list -1 --before="$DATE" master | xargs -Iz git checkout z

What about the case in which there were multiple important commits on the given day? If the user doesn't know the exact time, you will have to make some educated guesses. You might add the -p flag to git log to examine what changes were made and how likely they are to interact with the bug in question. If it is still not clear, you may just want to have the user mail you a copy or a checksum of one of the key files, and use the method below.

Once you have found the commit you want, it's a good idea to tag it right away. This applies to any of the three classes of clues in this article. I usually add a lightweight git tag immediately after doing the checkout. Then you can easily come back to this commit simply by using the name of the tag. Give it something memorable and easy, such as the bug number being reported. For example:

$ git checkout `git rev-list -1 --before="$DATE" master`
## Give a lightweight tag to the current commit
$ git tag bug_23142
## We need to get back to our main work now
$ git checkout master
## Later on, we want to revisit that bug
$ git checkout bug_23142
## Of course, you may also want to simply create a branch

Finding a git commit by checksum, size, or exact file

Sometimes you can find the commit you need by looking for a specific version of an important file. One of the "main" files in the repository that changes often is your best bet for this. You can ask the user for the size, or just a checksum of the file, and then see which repository commits have a matching entry.

Finding a git commit when given a checksum

As an example, a user in the Bucardo project has encountered a problem when running HEAD, but all they know is that they checked it out sometime in the last four months. They also run "md5sum Bucardo.pm" and report that the MD5 of the file Bucardo.pm is 767571a828199b6720f6be7ac543036e. Here's the easiest way to find which version of the repository they are using:

$ SUM=767571a828199b6720f6be7ac543036e
$ git log --format=%H \
  | xargs -Iz sh -c \
    'echo -n "z "; git show z:Bucardo.pm | md5sum' \
  | grep -m1 $SUM \
  | cut -d " " -f 1 \
  | xargs -Iz git log z -1
xargs: sh: terminated by signal 13
commit b462c256e62e7438878d5dc62155f2504353be7f
Author: Greg Sabino Mullane 
Date:   Fri Feb 24 08:34:50 2012 -0500

    Fix typo regarding piddir

I'm using variables in these examples both to make copy and paste easier, and because it's always a good idea to save away constant but hard-to-remember bits of information. The first part of the pipeline grabs a list of all commit IDs: git log --format=%H.

We then use xargs to feed the list of commit IDs one by one to a shell. The shell grabs a copy of the Bucardo.pm file as it existed at the time of that commit and generates an MD5 checksum of it. We echo the commit ID on the same line, as we will need it later on. So each output line now contains the commit hash and the MD5 of its Bucardo.pm file.

Next, we pipe this list to grep so we only match the MD5 we are looking for. We use -m1 to stop processing once the first match is found (this is important, as the extraction and checksumming of files is fairly expensive, so we want to short-circuit it as soon as possible). Once we have a match, we use the cut utility to extract just the commit ID, and pipe that back into git log. Voila! Now we know the very last time the file existed with that MD5, and can checkout the given commit. (The "terminated by signal 13" is normal and expected)

You may wonder if a sha1sum would be better, as git uses those internally. Sadly, the process remains the same, as the algorithm git uses to generate its internal SHA1 checksums is sha1("blob " . length(file) . "\0" . contents(file)), and you can't expect a random user to compute that and send it to you! :)
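If you want to see that formula in action, here is a quick sketch, run from a checkout containing Bucardo.pm, that computes the blob hash by hand and compares it to git's own answer:

$ # Build "blob <size>\0<contents>" manually and hash it
$ printf 'blob %s\0' "$(wc -c < Bucardo.pm)" | cat - Bucardo.pm | sha1sum

$ # git computes the same value internally
$ git hash-object Bucardo.pm

Both commands should print the same SHA1, which is exactly why a plain sha1sum of the file never matches anything git stores.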

Finding a git commit when given a file size

Another piece of information the user can give you very easily is the size of a file. For example, they may tell you that their copy of Bucardo.pm weighs in at 167092 bytes. As this file changes often, it can be a unique-enough marker to help you determine when they checked out the repository. Finding the matching size is a matter of walking backwards through each commit and checking the file size of every Bucardo.pm as it existed:

$ SIZE=167092
$ git rev-list --all \
  | while read commit
 do if git ls-tree -l -r $commit \
  | grep -q -w $SIZE
 then echo $commit
 break
 fi
 done
d91807d59a6326e48077311e96e4d5730f24304c

The git ls-tree command generates a list of all blobs (files) for a given commit. The -l option tells it to also print the file size, and the -r option asks it to recurse. So we use git rev-list to generate a list of all the commits (by default, these are output from newest to oldest). Then we pass each commit to the ls-tree command, and use grep to see if that number appears anywhere in the output. If it does, grep returns truth, making the if statement fire the echo, which shows us the commit. The break ensures we stop after the first match. We now have the (probable) commit that the user checked the file out of. As we are not matching by filename, it's probably a good idea to double-check by running git ls-tree -l -r on the given commit.

Finding a git commit when given a copy of the file itself

This is very similar to the size method above, except that we are given the file itself, not the size, so we need to generate some metadata about it. You could run a checksum or a filesize and use one of the recipes above, or you could do it the git way and find the SHA1 checksum that git uses for this file (aka a blob) by using git hash-object. Once you find that, you can use git ls-tree as before, as the blob hash is listed next to the filename. Thus:

$ HASH=`git hash-object ./bucardo.clue`
$ echo $HASH
639b247aab027b79bda788182c8b6785ed319662
$ git rev-list --all \
  | while read commit
 do if git ls-tree -r $commit \
  | grep -F -q $HASH
 then echo $commit
 break
 fi
 done
cd1d776307204cb77a731aa1b15c3c43a275c70e

Finding a git commit by error message

Sometimes the only clue you are given is an error message, or some other snippet that you can trace back to one or more commits. For example, someone once mailed the list to ask about this error that they received:

DBI connect('dbname=bucardo;host=localhost;port=5432',
  'bucardo',...) failed: fe_sendauth: no password supplied at 
  /usr/local/bin/bucardo line 8627.

A quick glance at line 8627 of the file "bucardo" in HEAD showed only a closing brace, so it must be an earlier version of the file. What was needed was to walk backwards in time and check that line for every commit until we find one that could have triggered the error. Here is one way to do that:

$ git log --format=%h \
  | xargs -n 1 -I{} sh -c \
  "echo -n {}; git show {}:bucardo | head -8627 | tail -1" \
  | less
## About 35 lines down:
379c9006     $dbh = DBI->connect($BDSN, 'bucardo'...

Therefore, we can do a "git checkout 379c9006" and see if we can solve the user's problem.

These are some of the techniques I use to hunt down specific commits in a git repository. Are there other clues you have run up against? Better recipes for hunting down commits? Let me know in the comments below.