End Point

News

Welcome to End Point's blog

Ongoing observations by End Point people.

MySQL to PostgreSQL Migration Tips

I recently was involved in a project to migrate a client's existing application from MySQL to PostgreSQL, and I wanted to record some of my experiences in doing so in the hopes they would be useful for others.

Note that this list should not be considered exhaustive; it was taken from my notes of issues encountered and things we had to take into consideration during this migration process.

Convert the schema

The first step is to create the equivalent schema in your PostgreSQL system, generated from the original MySQL schema.

We used `mysqldump --compatible=postgresql --no-data` to get a dump which matched PostgreSQL's quoting rules. This file still required some manual editing to clean up some of the issues, such as removing MySQL's "Engine" specification after a CREATE TABLE statement, but the result was a script with which we were able to create a skeleton PostgreSQL database with the correct database objects, names, types, etc.
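
Much of that manual cleanup can itself be scripted; as a rough sketch only (hypothetical database name, and you will want to review the result by hand):

mysqldump --compatible=postgresql --no-data mydb > schema.sql
# strip the table options (ENGINE, CHARSET, etc.) that PostgreSQL will not accept
sed -i -E 's/\) ENGINE=[^;]*;/);/' schema.sql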

Some of the considerations here include database collation/charset. MySQL supports multiple collations/charsets per database; in this case everything was already stored as UTF-8, which matched the encoding of the PostgreSQL database, so no additional changes were needed here. Otherwise, it would have been necessary to note the original encoding of the individual tables and convert them to UTF-8 in the next step.

We needed to make the following modifications for datatypes:

MySQL Datatype      PostgreSQL Datatype
tinyint             int
int(NN)             int
blob                text*
datetime            date
int unsigned        int
enum('1')           bool
longtext            text
varbinary(NN)       bytea

* Note: this was the right choice given the data, but not necessarily in general.

A few other syntactic changes were needed as well; for example, MySQL's UNIQUE KEY clause in a CREATE TABLE statement needs to be just UNIQUE.
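
Putting the datatype mapping and the UNIQUE KEY change together, a made-up table (not one from the actual schema) converts roughly like this, with the AUTO_INCREMENT default handled separately below:

-- MySQL (as dumped with --compatible=postgresql)
CREATE TABLE "line_items" (
  "id" int(11) unsigned NOT NULL AUTO_INCREMENT,
  "sku" varchar(32) NOT NULL,
  "qty" tinyint NOT NULL,
  "notes" longtext,
  PRIMARY KEY ("id"),
  UNIQUE KEY "sku" ("sku")
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- PostgreSQL
CREATE TABLE "line_items" (
  "id" integer NOT NULL,
  "sku" varchar(32) NOT NULL,
  "qty" integer NOT NULL,
  "notes" text,
  PRIMARY KEY ("id"),
  UNIQUE ("sku")
);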

Some of the MySQL indexes were defined as FULLTEXT indexes as well, which was a keyword PostgreSQL did not recognize. We made note of these, then created just normal indexes for the time being, intending to review to what extent these actually needed full text search capabilities.

Some of the AUTO_INCREMENT fields did not get their DEFAULT value set correctly to a sequence, because those columns ended up as plain integers rather than being declared as serial, so we used the following script to correct this:

-- cleanup missing autoincrement fields

WITH
datasource AS (
    SELECT
        k.table_name,
        k.column_name,
        atttypid::regtype,
        adsrc
    FROM
        information_schema.key_column_usage k
    JOIN
        pg_attribute
    ON
        attrelid = k.table_name :: regclass AND
        attname = k.column_name
    LEFT JOIN
        pg_attrdef
    ON
        adrelid = k.table_name :: regclass AND
        adnum = k.ordinal_position
    WHERE
        table_name IN (
            SELECT table_name::text FROM information_schema.key_column_usage WHERE constraint_name LIKE '%_pkey' GROUP BY table_name HAVING count(table_name) = 1
        ) AND
        adsrc IS NULL AND
        atttypid = 'integer' ::regtype
),
frags AS (
    SELECT
        quote_ident(table_name || '_' || column_name || '_seq') AS q_seqname,
        quote_ident(table_name) as q_table,
        quote_ident(column_name) as q_col
    FROM
        datasource
),
queries AS (
    SELECT
        'CREATE SEQUENCE ' || q_seqname || ';
' ||
        'ALTER TABLE ' || q_table || ' ALTER COLUMN ' || q_col || $$ SET DEFAULT nextval('$$ || q_seqname || $$');
    $$ ||
        $$SELECT setval('$$ || q_seqname || $$',(SELECT max($$ || q_col || ') FROM ' || q_table || '));
' AS query
    FROM frags
)
SELECT
    COALESCE(string_agg(query, E'\n'),$$SELECT 'No autoincrement fixes needed';$$) AS queries FROM queries
\gset

BEGIN;
:queries
COMMIT;

Basically, the idea is that we look for all tables with a defined integer primary key (hand-waving a bit by using the _pkey suffix in the constraint name) but without a current default value, then generate the SQL to create a sequence and set the column's default value to nextval() for the sequence in question. We also generate SQL to scan the table and set that sequence to the next appropriate value for the column. (Since this is for a migration and we know we are the only user accessing these tables, we can ignore MVCC.)

Another interesting thing about this script is that we utilize psql's ability to store results in a variable, using the \gset command, then we subsequently execute this SQL by interpolating that corresponding variable in the same script.

Convert the data


The next step was to prepare the data load from a MySQL data-only dump. Using a similar dump recipe as for the initial import, we used: `mysqldump --compatible=postgresql --no-create-info --extended-insert > data.sql` to save the data in a dump file so we could iteratively tweak our approach to cleaning up the MySQL data.

Using our dump file, we attempted a fresh load into the new PostgreSQL database. This failed initially due to multiple issues, including ones of invalid character encoding and stricter datatype interpretations in PostgreSQL.

What we ended up doing was creating a filter script to handle all of the "fixup" issues. This involved decoding the data and re-encoding it to ensure we were dealing with proper UTF-8, performing some context-sensitive datatype conversions, etc.
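
The filter itself was specific to this dataset, but here is a minimal sketch of its shape (the substitutions shown are hypothetical examples; in practice each one was driven by whatever error the next load attempt produced):

#!/usr/bin/env perl
use strict;
use warnings;
use Encode ();

while (my $line = <STDIN>) {
    # force each line into valid UTF-8, substituting anything undecodable
    $line = Encode::encode('UTF-8', Encode::decode('UTF-8', $line, Encode::FB_DEFAULT));
    # MySQL zero dates are not valid dates in PostgreSQL
    $line =~ s/'0000-00-00( 00:00:00)?'/NULL/g;
    print $line;
}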

Additional schema modifications


As we were already using a filter script to process the data dump, we decided to take the opportunity to fix up some warts in the current table definitions. This included some fields which were varchar but should actually have been numeric or integer; as this was a non-trivial schema (100 tables), we used PostgreSQL's system views to identify a list of columns which should be numeric but currently were not.

Since this was an ecommerce application, we identified columns that were likely candidates for data type reassignment based on field names *count, *qty, *price, *num.
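
A query along these lines (the name patterns come from this particular schema; adjust for yours) gives a starting list of suspect columns:

SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND data_type IN ('character varying', 'text')
  AND (column_name LIKE '%count' OR column_name LIKE '%qty'
       OR column_name LIKE '%price' OR column_name LIKE '%num');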

Once we identified the fields in question, I wrote a script to generate the appropriate ALTER TABLE statements to first drop the default, change the column type, then set the new default. This was done via a mapping between table/column name and desired output type.
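
For one hypothetical column, the generated statements looked roughly like:

ALTER TABLE order_items ALTER COLUMN qty DROP DEFAULT;
ALTER TABLE order_items ALTER COLUMN qty TYPE integer USING NULLIF(trim(qty), '')::integer;
ALTER TABLE order_items ALTER COLUMN qty SET DEFAULT 0;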

Convert the application


The final (and, needless to say, most involved) step was to convert the actual application itself to work with PostgreSQL. Despite the fact that both databases speak SQL, we had to come up with solutions for the following issues:

Quotation styles


MySQL is more lax with its quoting styles, so some of this migration involved hunting down differences in quoting. The codebase contained lots of double-quoted string literals, which PostgreSQL interprets as identifiers, as well as differences in the quoting of column names (backticks for MySQL, double quotes for PostgreSQL). These had to be identified wherever they appeared and fixed to use a consistent quoting style.

Specific unsupported syntax differences:


INSERT ON DUPLICATE KEY

MySQL supports the INSERT ON DUPLICATE KEY UPDATE syntax. Modifying these queries involved creating a special UPSERT-style function to support the different options in use in the code base. We isolated the uses of INSERT ON DUPLICATE KEY UPDATE and sorted them into two broad categories: those which did a straight record replace and those which did some other sort of modification. I wrote a utility script (detailed later in this article) which served to replicate the logic needed to handle these as the application would expect.
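
A call site then changes from the MySQL form to a call to the generated function; the table and column names here are hypothetical:

-- MySQL:
--   INSERT INTO cart_items (cart_id, sku, qty) VALUES (1, 'ABC', 2)
--     ON DUPLICATE KEY UPDATE qty = VALUES(qty);
-- PostgreSQL, using a function produced by the generator script shown later:
SELECT upsert_cart_items(1, 'ABC', 2);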

Upcoming versions of PostgreSQL are likely to incorporate an INSERT ... ON CONFLICT UPDATE/IGNORE syntax, which would provide a more direct way to migrate these sorts of queries.

INSERT IGNORE

MySQL's INSERT ... IGNORE syntax allows you to insert a row and effectively ignore a primary key violation, assuming that the rest of the row is valid. You can handle this case by creating a similar UPSERT function, as in the previous point. Again, this case will be easily resolved if PostgreSQL adopts the INSERT ... ON CONFLICT IGNORE syntax.

REPLACE INTO

MySQL's REPLACE INTO syntax effectively does a DELETE followed by an INSERT; it basically ensures that a specific version of a row exists for the given primary key value. We handled this case by modifying these queries to do an unconditional DELETE for the primary key in question followed by the corresponding INSERT, ensuring both are done within a single transaction so the result is atomic.
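
For example, with a hypothetical sessions table keyed on session_id:

-- MySQL: REPLACE INTO sessions (session_id, session_data) VALUES ('abc123', '...');
BEGIN;
DELETE FROM sessions WHERE session_id = 'abc123';
INSERT INTO sessions (session_id, session_data) VALUES ('abc123', '...');
COMMIT;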

INTERVAL syntax

Date interval syntax can be slightly different in MySQL: intervals may be unquoted in MySQL, but must be quoted in PostgreSQL. This project necessitated hunting down several instances and quoting the specific INTERVAL literals.
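
For example, against a hypothetical orders table:

-- MySQL accepts an unquoted interval:
--   SELECT * FROM orders WHERE created_at > now() - INTERVAL 7 DAY;
-- PostgreSQL needs it quoted:
SELECT * FROM orders WHERE created_at > now() - INTERVAL '7 days';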

Function considerations


last_insert_id()

Many times when you insert a record into a MySQL table, later references to it are looked up using the last_insert_id() SQL function. These sorts of queries need to be modified to use the equivalent functionality based on PostgreSQL sequences, such as the currval() function.
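
For example, assuming a customers table whose id column is backed by the customers_id_seq sequence:

INSERT INTO customers (name) VALUES ('Alice');
-- instead of SELECT last_insert_id():
SELECT currval('customers_id_seq');
-- or skip the second query entirely:
INSERT INTO customers (name) VALUES ('Bob') RETURNING id;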

GROUP_CONCAT()

MySQL has the GROUP_CONCAT function, which serves as a (string) "join" of sorts. We emulate this behavior in PostgreSQL by using the string_agg aggregate function with the delimiter of choice.
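
For example, with hypothetical table and column names:

-- MySQL: SELECT order_id, GROUP_CONCAT(sku SEPARATOR ', ') FROM order_items GROUP BY order_id;
SELECT order_id, string_agg(sku, ', ') FROM order_items GROUP BY order_id;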

CONCAT_WS() - expected to be an issue, but was not; PostgreSQL has this function

PostgreSQL has included a CONCAT_WS() function since PostgreSQL 9.1, so this was not an issue with the specific migration, but could still be an issue if you are migrating to an older version of PostgreSQL.

str_to_date()

This function does not exist directly in PostgreSQL, but can be simulated using to_date(). Note however that the format string argument differs between MySQL and PostgreSQL's versions.
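
For example:

-- MySQL:      STR_TO_DATE('21/11/2014', '%d/%m/%Y')
-- PostgreSQL (note the different format specifiers):
SELECT to_date('21/11/2014', 'DD/MM/YYYY');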

date_format()

MySQL has a date_format() function which transforms a date type to a string with a given format option. PostgreSQL has similar functionality using the to_char() function; the main difference here lies in the format string specifier.
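
For example, against the same hypothetical orders table:

-- MySQL:      DATE_FORMAT(created_at, '%Y-%m-%d %H:%i')
-- PostgreSQL:
SELECT to_char(created_at, 'YYYY-MM-DD HH24:MI') FROM orders;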

DateDiff()

DateDiff() does not exist in PostgreSQL; these calls are handled by transforming them into the equivalent date arithmetic using the subtraction (-) operator.
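
For example, with hypothetical columns:

-- MySQL:      SELECT DATEDIFF(shipped_at, ordered_at) FROM orders;
-- PostgreSQL: subtracting two dates yields an integer number of days
SELECT shipped_at::date - ordered_at::date FROM orders;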

rand() to random()

This is more or less a simple function rename, as equivalent functionality for returning a random float between 0.0 and 1.0 exists in both MySQL and PostgreSQL; only the function name differs. One caveat: the argument to MySQL's rand(N) is a seed rather than a scale, so in either database, if you want a random number between 0 and N you scale the result yourself, e.g. random() * N in PostgreSQL.

IF() to CASE WHEN ELSE

MySQL has an IF() function which returns the second argument if the first argument evaluates to true, and otherwise returns the third argument. This can be trivially converted from IF(expression1, arg2, arg3) to the equivalent PostgreSQL syntax: CASE WHEN expression1 THEN arg2 ELSE arg3 END.

IFNULL() to COALESCE()

MySQL has a function IFNULL() which returns the first argument if it is not NULL, otherwise it returns the second argument. This can effectively be replaced by the PostgreSQL COALESCE() function, which serves the same purpose.

split_part()

MySQL has a built-in function called split_part() which allows you to access a specific index of an array delimited by a string. PostgreSQL also has this function, however the split_part() function in MySQL allows the index to be negative, in which case this returns the part from the right-hand side.

in MySQL:
split_part('a banana boat', ' ', -1) => 'boat'
in PostgreSQL:
split_part('a banana boat', ' ', -1) => ERROR:  field position must be greater than zero

I fixed this issue by creating a custom plpgsql function to handle this case. (In my specific case, all of the negative indexes were -1; i.e., the last element in the array, so I created a function to return only the substring occurring after the last instance of the delimiter.)
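
A minimal sketch of that helper, written here as a plain SQL function rather than the plpgsql original, and with a hypothetical name:

CREATE OR REPLACE FUNCTION split_part_last(str text, delim text)
RETURNS text LANGUAGE sql IMMUTABLE AS $$
    SELECT reverse(split_part(reverse(str), reverse(delim), 1));
$$;

SELECT split_part_last('a banana boat', ' ');  -- => 'boat'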

Performance considerations


You may need to revisit COUNT(*) queries

MySQL's MyISAM tables have a very fast COUNT(*), since the engine maintains an exact row count (made possible by its table-level locking), whereas PostgreSQL uses MVCC and must perform a full table scan to see which rows are visible to the calling snapshot. This assumption may need to be revisited in order to come up with an equivalently performant query.
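
When only a rough number is needed, one common substitute is the planner's estimate from the system catalogs (hypothetical table name):

SELECT reltuples::bigint AS approximate_row_count
FROM pg_class
WHERE relname = 'orders';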

GROUP BY vs DISTINCT ON

MySQL is much more (*ahem*) flexible when it comes to GROUP BY/aggregate queries, allowing some columns to be excluded in a GROUP BY or an aggregate function. Making the equivalent query in PostgreSQL involves transforming the query from SELECT ... GROUP BY (cols) to a SELECT DISTINCT ON (cols) ... and providing an explicit sort order for the rows.
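
For example, to get one row per customer from a hypothetical orders table:

-- MySQL tolerated: SELECT customer_id, id, total FROM orders GROUP BY customer_id;
SELECT DISTINCT ON (customer_id) customer_id, id, total
FROM orders
ORDER BY customer_id, id DESC;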

More notes

Don't be afraid to script things; in fact, I would go so far as to suggest that everything you do should be scripted. This process was complicated, and there were lots of moving parts that had to move in tandem. Changes were being made to the site itself concurrently, so we were testing against a dump of the original database taken at a specific point in time. Having everything scripted ensured that the process was repeatable and testable, and that we could get to a specific point in the process without having to remember anything done off-the-cuff.

In addition to scripting the actual SQL/migrations, I found it helpful to script the solutions to various classifications of problems. I wrote some scripts which I used to create some of the scaffolding/boilerplate for the tables involved. This included a script which would create an UPSERT function for a specific table given the table name, which was used when replacing the INSERT ON DUPLICATE KEY UPDATE queries. The generated function could then be tailored to handle more complex logic beyond a simple UPDATE. (One instance here was an INSERT ON DUPLICATE KEY UPDATE which increased the count of a specific field in the table instead of replacing the value.)

#!/usr/bin/env perl
# -*-cperl-*-

use strict;
use warnings;

use DBI;
use Data::Dumper;

my $table = shift or die "Usage: $0 <table>\n";
my @cols = @ARGV;

my $dbh = DBI->connect(...);

my @raw_cols = @{ $dbh->column_info(undef, 'public', $table, '%')->fetchall_arrayref({}) };
my %raw_cols = map { $_->{COLUMN_NAME} => $_ } @raw_cols;

die "Can't find table $table\n" unless @raw_cols;

my @missing_cols = grep { ! defined $raw_cols{$_} } @cols;

die "Referenced non-existing columns: @missing_cols\n" if @missing_cols;

my %is_pk;

unless (@cols) {
    @cols = map { $_->{COLUMN_NAME} } @raw_cols;
}

my @pk_cols = $dbh->primary_key(undef, 'public', $table);

@is_pk{@pk_cols} = (1)x@pk_cols;

my @data_cols = grep { ! $is_pk{$_} } @cols;

die "Missing PK cols from column list!\n" unless @pk_cols == grep { $is_pk{$_} } @cols;
die "No data columns!\n" unless @data_cols;

print <<EOF
CREATE OR REPLACE FUNCTION
    upsert_$table (@{[
    join ', ' => map {
        "_$_ $raw_cols{ $_ }->{pg_type}"
    } @cols
]})
RETURNS void
LANGUAGE plpgsql
AS \$EOSQL\$
BEGIN
    LOOP
        UPDATE $table SET @{[
    join ', ' => map { "$_ = _$_" } @data_cols
]} WHERE @{[
    join ' AND ' => map { "$_ = _$_" } @pk_cols
]};
        IF FOUND THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO $table (@{[join ',' => @cols]}) VALUES (@{[join ',' => map { "_$_" } @cols]});
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again.
        END;
    END LOOP;
END
\$EOSQL\$
;
EOF
;

This script creates an upsert function for a given table, updating all columns by default, while also allowing you to specify a subset of columns to upsert.

I also wrote scripts which could handle/validate some of the column datatype changes. Since there were large numbers of columns to change, often multiple in the same table, I had this script create a single ALTER TABLE statement with multiple ALTER COLUMN ... TYPE ... USING clauses, and let me specify the actual method by which each column change was to take place. These included several different approaches depending on the target data type, but generally they were to solve cases where fairly legitimate data was not accepted by PostgreSQL's input parsers. These included how to interpret blank fields as integers (in some cases we wanted 0, in others NULL), weird numeric formatting (leaving off digits before or after the decimal point), etc.

We had to fix up missing defaults for AUTO_INCREMENT columns in several locations. The tables were created with the proper datatype, but we had to find the tables which matched a specific naming convention, create and associate a sequence, set the proper default, etc. (This was detailed above.)

There was a fair amount of iteration and customization in this process, as a fair amount of the data was not in the expected format. Each round generally involved attempting to alter the table from within a transaction and finding the next datum for which the conversion to the expected type did not work; this would result in a modification of the USING clause of the ALTER TABLE ... ALTER COLUMN ... TYPE statement to accommodate the specific issue.

In several cases, there were only a couple records which had bad/bunko data, so I included explicit UPDATE statements to update those data values via primary key. While this felt a bit "impure", it was a quick and preferred solution to the issue of a few specific records which did not fit general rules.

Integrate Twilio in Django

Twilio


Twilio is an HTTP API that allows you to build powerful voice and SMS apps. The goal of this blog post is to make building SMS applications in Django as simple as possible.

There is already a Twilio Python helper library available. The open source twilio-python library lets us write Python code to make HTTP requests to the Twilio API.

Installation


The easiest way to install the twilio-python library is with pip, a package manager for Python.

Simply run the following command in a terminal:

$ pip install twilio

Twilio API Credentials

To integrate the Twilio API into a Django application, we need the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN variables, which can be found in your Twilio account dashboard and are used to authenticate against the Twilio API. You will also need the Twilio phone number you are sending from; the code below reads it from settings as TWILIO_PHONE_NUMBER.

You'll need to add them to your settings.py file:

TWILIO_ACCOUNT_SID = 'ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
TWILIO_AUTH_TOKEN = 'YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY'
TWILIO_PHONE_NUMBER = '+1XXXXXXXXXX'

Create a New App


We are going to interact with people using SMS, so I prefer to create an app named communication. I am assuming you've already installed Django.

Run the following command in a terminal:

$ django-admin.py startapp communication

We will need to register the new app in our Django project.
Add it to your INSTALLED_APPS tuple in your settings.py file:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    'communication',
    ...
)

Create the Model


Now we’ll open up communication/models.py to start creating models for our app.

from django.db import models


class SendSMS(models.Model):
    to_number = models.CharField(max_length=30)
    from_number = models.CharField(max_length=30)
    body = models.TextField()  # referenced later by the form
    sms_sid = models.CharField(max_length=34, default="", blank=True)
    account_sid = models.CharField(max_length=34, default="", blank=True)
    price = models.DecimalField(max_digits=7, decimal_places=4, null=True, blank=True)  # referenced later by the view
    price_unit = models.CharField(max_length=3, default="", blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    sent_at = models.DateTimeField(null=True, blank=True)
    delivered_at = models.DateTimeField(null=True, blank=True)
    status = models.CharField(max_length=20, default="", blank=True)


and run the syncdb command after defining the model:

$ python manage.py syncdb

It will create the necessary database tables for our app.

Create utils.py file


Create a new file at communication/utils.py and put the following code in it:

from django.conf import settings

import twilio
import twilio.rest


def send_twilio_message(to_number, body):
    client = twilio.rest.TwilioRestClient(
        settings.TWILIO_ACCOUNT_SID, settings.TWILIO_AUTH_TOKEN)

    return client.messages.create(
        body=body,
        to=to_number,
        from_=settings.TWILIO_PHONE_NUMBER
    )


Testing send_twilio_message


Open the Django shell (python manage.py shell) and run the following commands.

>>> from communication.utils import send_twilio_message
>>> sms = send_twilio_message('+15005550006', 'Hello Endpointer,')
>>> print sms.sid
SM97f8ac9321114af1b7fd4463ff8bd038

Getting a sid back means that everything on the backend is working fine, and we can proceed to work on the front end.

Create Form

Let's create a form to gather the data. Open (or create) communication/forms.py and paste the following code into it:

from django import forms

from .models import SendSMS


class SendSMSForm(forms.ModelForm):

    class Meta:
        model = SendSMS
        fields = ('to_number', 'body')


The View: CreateView


from decimal import Decimal

from django.conf import settings
from django.core.urlresolvers import reverse_lazy
from django.utils.encoding import force_text
from django.utils.timezone import now
from django.views.generic.edit import CreateView

from .forms import SendSMSForm
from .models import SendSMS
from .utils import send_twilio_message


class SendSmsCreateView(CreateView):
    model = SendSMS
    form_class = SendSMSForm
    template_name = 'communication/sendsms_form.html'
    success_url = reverse_lazy('send_sms')

    def form_valid(self, form):
        number = form.cleaned_data['to_number']
        body = form.cleaned_data['body']
        # call twilio
        sent = send_twilio_message(number, body)

        # save the form, filling in the Twilio response details
        send_sms = form.save(commit=False)
        send_sms.from_number = settings.TWILIO_PHONE_NUMBER
        send_sms.sms_sid = sent.sid
        send_sms.account_sid = sent.account_sid
        send_sms.status = sent.status
        send_sms.sent_at = now()
        if sent.price:
            send_sms.price = Decimal(force_text(sent.price))
            send_sms.price_unit = sent.price_unit
        send_sms.save()

        return super(SendSmsCreateView, self).form_valid(form)



Defining URLS

The URL configuration tells Django how to match a request’s path to your Python code. Django looks for the URL configuration, defined as urlpatterns, in the urls.py file in your project:

from django.conf.urls import patterns, url

from .views import SendSmsCreateView

urlpatterns = patterns('',
    url(
        regex=r'^communication/send/sms/$',
        view=SendSmsCreateView.as_view(),
        name='send_sms'
    ),
)
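
The snippet above lives in the communication app's urls.py (that is what the relative .views import implies). A minimal sketch of hooking it into the project-level urls.py, assuming the default project layout:

from django.conf.urls import include, patterns, url

urlpatterns = patterns('',
    url(r'^', include('communication.urls')),
)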

Creating the Template

Now that we’ve defined a URL for our list view, we can try it out. Django includes a server suitable for development purposes that you can use to easily test your project:

If you visit the http://127.0.0.1:8000/communication/send/sms/ in your browser, though, you’ll see an error: TemplateDoesNotExist.

This is because we have not defined the template file yet. So now create sendsms_form.html file in templates/communication/ and put the following code in it:

<form method="post">{% csrf_token %}
  {% for field in form %}{{ field }} {{ field.errors }}{% endfor %}
  <input type="submit" value="Send SMS" />
</form>

Now reload the http://127.0.0.1:8000/communication/send/sms/ in your browser. Assuming everything is okay, you should then see the following form:


Fill out the form, and hit the submit button to send your SMS.

CONCLUSION

Congratulations, your SMS is successfully sent. Good luck!

When Postgres will not start

One of the more frightening things you can run across as a DBA (whether using Postgres or a lesser system) is a crash followed by a complete failure of the database to start back up. Here's a quick rundown of the steps one could take when this occurs.

The first step is to look at why it is not starting up by examining the logs. Check your normal Postgres logs, but also check the filename passed to the --log argument for pg_ctl, as Postgres may not have even gotten far enough to start normal logging. Most of the time these errors are not serious, are fairly self-explanatory, and can be cured easily - such as running out of disk space. When in doubt, search the web or ask in the #postgresql IRC channel and you will most likely find a solution.

Sometimes the error is more serious, or the solution is not so obvious. Consider this problem someone had in the #postgresql channel a while back:

LOG: database system was interrupted while in recovery at 2014-11-03 12:43:09 PST
HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 1883/AF9458E8
LOG: unexpected pageaddr 1882/BAA7C000 in log file 6275, segment 189, offset 10993664
LOG: redo done at 1883/BDA7A9A8
LOG: last completed transaction was at log time 2014-10-25 17:42:53.836929-07
FATAL: right sibling's left-link doesn't match: block 6443 links to 998399 instead of expected 6332 in index "39302035"

As you can see, Postgres has already hinted you may be in deep trouble with its suggestion to use a backup. The Postgres daemon completely fails to start because an index is corrupted. Postgres has recognized that the B-tree index no longer looks like a B-tree should and bails out.

For many errors, the next step is to attempt to start Postgres in single-user mode. This is similar to "Safe mode" in Windows - it starts Postgres in a simplified, bare-bones fashion, and is intended primarily for debugging issues such as a failed startup. This mode is entered by running the 'postgres' executable directly (as opposed to having pg_ctl do it), and passing specific arguments. Here is an example:

$ /usr/bin/postgres --single -D /var/lib/pgsql93/data -P -d 1

This starts up the 'postgres' program (used to be 'postmaster'), enters single-user mode, specifies where the data directory is located, turns off system indexes, and sets the debug output to 1. After it is run, you will have a simple prompt. From here you can fix your problem, such as reindexing bad indexes, that may have caused a normal startup to fail. Use CTRL-d to exit this mode:

$ /usr/bin/postgres --single -D /var/lib/pgsql93/data -P -d 1
NOTICE:  database system was shut down at 2014-11-20 16:51:26 UTC
DEBUG:  checkpoint record is at 0/182B5F8
DEBUG:  redo record is at 0/182B5F8; shutdown TRUE
DEBUG:  next transaction ID: 0/1889; next OID: 12950
DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
DEBUG:  oldest unfrozen transaction ID: 1879, in database 1
DEBUG:  oldest MultiXactId: 1, in database 1
DEBUG:  transaction ID wrap limit is 2147485526, limited by database with OID 1
DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1

PostgreSQL stand-alone backend 9.3.5
backend> [CTRL-d]
NOTICE:  shutting down
NOTICE:  database system is shut down

If you are not able to fix things with single-user mode, it's time to get serious. This would be an excellent time to make a complete file-level backup. Copy the entire data directory to a different server or at least a different partition. Make sure you get everything in the pg_xlog directory as well, as it may be symlinked elsewhere.

Time to use pg_resetxlog, right? No, not at all. Use of the pg_resetxlog utility should be done as an absolute last resort, and there are still some things you should try first. Your problem may have already been solved - so the next step should be to upgrade Postgres to the latest revision. With Postgres, a revision (the last number in the version string) is always reserved for bug fixes only. Further, changing the revision is almost always as simple as installing a new binary. So if you are running Postgres version 9.0.3, upgrade to the latest in the 9.0 series (9.0.18 as of this writing). Check the release notes, make the upgrade, and try to start up Postgres.

Still stumped? Consider asking for help. For fast, free help, try the #postgresql IRC channel. For slightly slower free help, try the pgsql-general mailing list. For both of these options, the majority of the subscribers are clustered near the US Eastern time zone, so response times will be faster at 3PM New York time versus 3AM New York time. For paid help, you can find a Postgres expert (such as End Point!) at the list of professional services at postgresql.org.

The next steps depend on the error, but another route is to hack the source code for Postgres to work around the error preventing the startup. This could mean, for example, changing a FATAL exception to a WARNING, or other trickery. This is expert-level stuff, to be sure, but done carefully can still be safer than pg_resetxlog. If possible, try this on a copy of the data!

If you have done everything else, it is time to attempt using pg_resetxlog. Please make sure you read the manual page about it before use. Remember this is a non-reversible, possibly data-destroying command! However, sometimes it is the only thing that will work.

If you did manage to fix the problem - at least enough to get Postgres to start - the very next item is to make a complete logical backup of your database. This means doing a full pg_dump right away. This is especially important if you used pg_resetxlog. Dump everything, then restore it into a fresh Postgres cluster (upgrading to the latest revision first if needed!). The pg_dump will not only allow you to create a clean working version of your database, but is a great way to check on the integrity of your data, as it by necessity examines every row of data you have. It will *not* check on the sanity of your indexes, but there are other ways to do that, the simplest being to do a REINDEX DATABASE on each database in your cluster.
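
A minimal sketch of that dump/restore/reindex sequence, assuming the damaged cluster is on port 5432 and a fresh cluster is listening on port 5433:

pg_dumpall -p 5432 > full_logical_backup.sql
psql -p 5433 -d postgres -f full_logical_backup.sql
reindexdb --all -p 5433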

All of these steps, including pg_resetxlog, may or may not help. In the "left-link doesn't match" example at the top, nothing was able to fix the problem (not single-user mode, nor a more recent revision, nor pg_resetxlog). It's possible that the data could have been recovered by hacking the source code or using tools to extract the data directly, but that was not necessary as this was a short-lived AWS experiment. The consensus was it was probably a hardware problem. Which goes to show that you can never totally trust your hardware or software, so always keep tested, frequent, and multiple backups nearby!

MongoDB and OpenStack - OSI Days 2014, India

The 11th edition of Open Source India, 2014 was held at Bengaluru, India. The two-day conference was filled with three parallel tracks of tech talks and workshops spread across various Open Source technologies.

In-depth look at Architecting and Building solutions using MongoDB

Aveekshith Bushan & Ranga Sarvabhouman from MongoDB started off the session with a comparison of the hardware costs involved with storage systems then and now. In earlier days, storage hardware was very expensive, so the approach was to filter the data down before storing it in the database; we could only generate results from that filtered data and didn't have the option to process the original source data. Now that storage has become cheap, we can store the raw data first and do all of our filtering and processing afterwards, then distribute the results.

Earlier,
        Filter -> Store -> Distribute
Present,
        Store -> Filter -> Distribute

Since we are now storing huge amounts of data, we need a processing system that can handle and analyze the data efficiently. Data today grows very quickly, and the 3 Vs characterize that growth of (Big) Data: we need to handle a huge Volume of a wide Variety of data arriving at high Velocity. MongoDB does certain things to satisfy these requirements.

MongoDB simply stores the data as documents without any data type constraints, which helps it store huge amounts of data quickly. It leaves constraint checks to the application level to increase storage speed on the database end, but it does recognize the data types after the data is stored as a document. In simple words, the philosophy is: why check the same things (datatypes or other constraints) in two places (application and database)?

MongoDB stores all of an entity's relations in a single document and fetches the data in a single disk seek; by avoiding multiple disk seeks, this results in faster retrieval of data. In a relational database, by contrast, the relations are stored in different tables, which leads to multiple disk seeks to retrieve the complete data for an entity. MongoDB doesn't support joins, but it has a Reference option to refer to another collection (table) without imposing foreign key constraints.

As per the db-engines rankings, MongoDB is at the top of the NoSQL database world. It also provides certain key features, which I remember from the session:
    • Sub-documents duplicate data, but help performance (since storage is cheap, the duplication doesn't cost much)
    • Auto-sharding (scalability)
    • Sharding enables parallel access to the system
    • Range-based sharding
    • Replica sets (high availability)
    • Secondary indexes available
    • Indexes are the most tunable part of the MongoDB system
    • Partitioning across systems
    • Rolling upgrades
    • Schema free
    • Rich document-based queries
    • Reads from secondaries
When do you need MongoDB?
    • When the data grows beyond the capacity of a relational database system
    • When you need performance for online requests
Finally, the speakers emphasized understanding your use case clearly and choosing the right MongoDB features to get effective performance.

OpenStack Mini Conf


A special half-day OpenStack mini conference was organised during the second half of the first day. The talks ranged from the basics to an in-depth look at the OpenStack project. I have summarised all the talks here to give an idea of the OpenStack software platform.

OpenStack is an Open Source cloud computing platform for provisioning Infrastructure as a Service (IaaS). There is a wonderful project called DevStack for setting up OpenStack in a development environment in the easiest and fastest way, and the OpenStack project's well-written documentation clearly explains everything. In addition, anyone can contribute to OpenStack with the help of the How to contribute guide; the project uses the Gerrit review system and the Launchpad bug tracking system.

OpenStack has multiple components to provide various features in Infrastructure as a Service. Here is the list of OpenStack components and the purpose of each one:

Nova (Compute) - manages the pool of compute resources
Cinder (Block Storage) - provides storage volumes for machines
Neutron (Network) - manages networks and IP addresses
Swift (Object Storage) - provides a distributed, highly available (replicated) storage system
Glance (Image) - provides a repository to store disk and server images
KeyStone (Identity) - enables a common authentication system across all components
Horizon (Dashboard) - provides a GUI for users to interact with OpenStack components
Ceilometer (Telemetry) - provides service usage and billing reports
Ironic (Bare Metal) - provisions bare metal instead of virtual machines
Sahara (Map Reduce) - provisions Hadoop clusters for big data processing

OpenStack services are usually mapped to AWS services to better understand the purpose of the components. The following table depicts the mapping of similar services in OpenStack and AWS:

OpenStack     AWS
Nova          EC2
Cinder        EBS
Neutron       VPC
Swift         S3
Glance        AMI
KeyStone      IAM
Horizon       AWS Console
Ceilometer    CloudWatch
Sahara        Elastic MapReduce

Along with the overview of the OpenStack architecture, there were a couple of in-depth talks, which are listed below with their slides.
That was a wonderful Day One of OSI 2014, which helped me get a better understanding of MongoDB and OpenStack.

New Liquid Galaxy website in Portuguese!

End Point Corporation is pleased to announce the official launch of its new website in Portuguese! The site, http://liquidgalaxy.pt.endpoint.com/, officially signals the arrival of End Point's Liquid Galaxy in Brazil and aims to provide service to all current and future customers in one of the largest and most dynamic markets in South America.

With a population of over 200 million, Brazil is also a quick adopter of new technologies, with a considerable number of industry leaders who can benefit directly from deploying a Liquid Galaxy. This includes a solid commodities sector, a booming real estate market, tourism, and a vibrant media market, all strong candidates for the new technology.

Brazil is also the entry point into the broader South American market. We are confident that as we increase penetration in the Brazilian market, other opportunities in the region will follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: "We are excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio; I regularly attend the tech conferences there, which are always packed."

If you would like to know more about this technology, please contact us at vendas@endpoint.com

Brazilian Portuguese Liquid Galaxy website launch!

End Point Corporation is pleased to announce the official launch of its new Brazilian Portuguese Liquid Galaxy website! The site, found at http://liquidgalaxy.pt.endpoint.com/ officially signals the arrival of End Point’s Liquid Galaxy to Brazil, and aims to provide service to all current and future customers in what is South America’s largest and most dynamic market.

With a population over 200 million, Brazil is also a quick adopter of new technologies with sizeable industry sectors that can benefit directly from the implementation of a Liquid Galaxy. This includes a massive commodities sector, booming real estate, tourism and a vibrant media market, all of which are strong candidates for the technology.

Brazil is also a logical entry-point into the larger South American market in general. We’re confident that as we increase market penetration in Brazil, other opportunities in the region will soon follow. Dave Jenkins, our VP of Sales and Marketing, offers the following: “We’re excited to see this expansion into Brazil. I always see great things coming out of São Paulo and Rio whenever I go there for tech conferences, which are always booked to overflowing levels.”

If you have international business in South America, or are based in Brazil and would like to know more about this great technology, please contact us at vendas@endpoint.com

Create a sales functionality within Spree 2.3 using Spree fancy

Introduction

I recently started working with Spree and wanted to learn how to implement some basic features. I focused on one of the most common needs of any e-commerce business - adding a sale functionality to products. To get a basic understanding of what was involved, I headed straight to the Spree Developer Guides. As I was going through the directions, I realized it was intended for the older Spree version 2.1. This led to me running into a few issues as I went through it using Spree's latest version 2.3.4. I wanted to share with you what I learned, and some tips to avoid the same mistakes I made.

Set-up

I'll assume you have the prerequisites it lists including Rails, Bundler, ImageMagick and the Spree gem. These are the versions I'm running on my Mac OS X:
  • Ruby: 2.1.2p95
  • Rails: 4.1.4
  • Bundler: 1.5.3
  • ImageMagick: 6.8.9-1
  • Spree: 2.3.4

What is Bundler? Bundler provides a consistent environment for Ruby projects by tracking and installing the exact gems and versions that are needed. You can read more about the benefits of using Bundler on their website. If you're new to Ruby on Rails and/or Spree, you'll quickly realize how useful Bundler is when updating your gems.

After you've successfully installed the necessary tools for your project, it's time to create our first Rails app, which will then be used as a foundation for our simple Spree project called mystore.

Let's create our app

Run the following commands:

$ rails new mystore
$ cd mystore
$ gem install spree_cmd

*Note: you may get a warning that you need to run bundle install before trying to start your application since spree_gateway.git isn't checked out yet. Go ahead and follow those directions, I'll wait.

Spree-ify our app

We can add the e-commerce platform to our Rails app by running the following command:

spree install --auto-accept

If all goes well, you should get a message that says, "Spree has been installed successfully. You're all ready to go! Enjoy!". Now the fun part - let's go ahead and start our server to see what our demo app actually looks like. Run rails s to start the server and open up a new browser page pointing to the URL localhost:3000.
*Note - when you navigate to localhost:3000, watch your terminal - you'll see a lot of processes running in the background as the page loads simultaneously in your browser window. It can be pretty overwhelming, but as long as you get a "Completed 200 OK" message in your terminal, you should be good to go! See it below:


Our demo app actually comes with an admin interface ready to use. Head to your browser window and navigate to http://localhost:3000/admin. The login Spree instructs you to use is spree@example.com and password spree123.

Once you login to the admin screen, this is what you should see:


Once you begin to use Spree, you'll soon find that the most heavily used areas of the admin panel include Orders, Products, Configuration and Promotions. We'll be going into some of these soon.

Extensions in 3.5 steps

The next part of the Spree documentation suggests adding the spree_fancy extension to our store to update the look and feel of the website, so let's go ahead and follow the next few steps:

Step 1: Update the Gemfile

We can find our Gemfile by going back to the terminal, and within the mystore directory, type ls to see a list of all the files and subdirectories within the Spree app. You will see the Gemfile there - open it using your favorite text editor. Add the following line to the last line of your Gemfile, and save it:
gem 'spree_fancy', :git => 'git://github.com/spree/spree_fancy.git', :branch => '2-1-stable'

Notice the branch it mentions is 2-1-stable. Since you just installed Spree, you are most likely using the latest version, 2-3-stable. I changed my branch in the above gem to '2-3-stable' to reflect the Spree version I'm currently using. After completing this step, run bundle install to install the gem using Bundler.

Now we need to copy over the migrations and assets from the spree_fancy extension by running this command in your terminal within your mystore application:

$ bundle exec rails g spree_fancy:install

Step 1.5: We've hit an error!

At this point, you've probably hit a LoadError, and we can no longer see our beautiful Spree demo app, instead getting an error page which says "Sprockets::Rails::Helper::AssetFilteredError in Spree::Home#index" at the top. How do we fix this?

Within your mystore application directory, navigate to the config/initializers/assets.rb file and edit the last line of code by uncommenting it and typing:

Rails.application.config.assets.precompile += %w( bx_loader.gif )

Now restart your server and you will see your new theme!

Step 2: Create a sales extension

Now let's see how to create an extension instead of using an existing one. According to the Spree tutorial, we first need to generate an extension - remember to run this command from a directory outside of your Spree application:

$ spree extension simple_sales

Once you do that, cd into your spree_simple_sales directory. Next, run bundle install to update your Spree extension.

Now you can create a migration that adds a sale_price column to variants using the following command:

bundle exec rails g migration add_sale_price_to_spree_variants sale_price:decimal

Once your migration is complete, navigate in your terminal to db/migrate/XXXXXXXXXXXX_add_sale_price_to_spree_variants.rb and add in the changes as shown in the Spree tutorial:

class AddSalePriceToSpreeVariants < ActiveRecord::Migration
  def change
    add_column :spree_variants, :sale_price, :decimal, :precision => 8, :scale => 2
  end
end

Now let's switch back to our mystore application so that we can add our extension before continuing any development. Within mystore, add the following to your Gemfile:

gem 'spree_simple_sales', :path => '../spree_simple_sales'

You will have to adjust the path ('../spree_simple_sales') depending on where you created your sales extension.

Now it's time to bundle install again, so go ahead and run that. Now we need to copy our migration by running this command in our terminal:

$ rails g spree_simple_sales:install

Step 3: Adding a controller Action to HomeController

Once the migration has been copied, we need to extend the functionality of Spree::HomeController and add an action that selects “on sale” products. Before doing that, we need to make sure to change our .gemspec file within the spree_simple_sales directory (remember: this is outside of our application directory). Open up the spree_simple_sales.gemspec file in your text editor and add the following line to the list of dependencies:

s.add_dependency 'spree_frontend'

Run bundle.

Run $ mkdir -p app/controllers/spree to create the directory structure for our controller decorator. This is where we will create a new file called home_controller_decorator.rb and add the following content to it:

module Spree
  HomeController.class_eval do
    def sale
      @products = Product.joins(:variants_including_master).where('spree_variants.sale_price is not null').uniq
    end
  end
end

As Spree explains it, this script will select just the products that have a variant with a sale_price set.

Next step - add a route to this sales action in our config/routes.rb file. Make sure your routes.rb file looks like this:

Spree::Core::Engine.routes.draw do
  get "/sale" => "home#sale"
end

Let's set a sale price for the variant

Normally, to change a variant attribute, we could do it through the admin interface, but we haven’t created this functionality yet. This means we need to open up our rails console:
*Note - you should be in the mystore directory

Run $ rails console

The next steps are taken directly from the Spree documentation:

“Now, follow the steps I take in selecting a product and updating its master variant to have a sale price. Note, you may not be editing the exact same product as I am, but this is not important. We just need one “on sale” product to display on the sales page.”

> product = Spree::Product.first
=> #<Spree::Product id: 107377505, name: "Spree Bag", description: "Lorem ipsum dolor sit amet, consectetuer adipiscing...", available_on: "2013-02-13 18:30:16", deleted_at: nil, permalink: "spree-bag", meta_description: nil, meta_keywords: nil, tax_category_id: 25484906, shipping_category_id: nil, count_on_hand: 10, created_at: "2013-02-13 18:30:16", updated_at: "2013-02-13 18:30:16", on_demand: false>

> variant = product.master
=> #<Spree::Variant id: 833839126, sku: "SPR-00012", weight: nil, height: nil, width: nil, depth: nil, deleted_at: nil, is_master: true, product_id: 107377505, count_on_hand: 10, cost_price: #<BigDecimal:7f8dda5eebf0,'0.21E2',9(36)>, position: nil, lock_version: 0, on_demand: false, cost_currency: nil, sale_price: nil>

> variant.sale_price = 8.00
=> 8.0

> variant.save
=> true

Hit Ctrl-D to exit the console.

Now we need to create the page that renders the product that is on sale. Let’s create a view to display these “on sale” products.

Create the required views directory by running: $ mkdir -p app/views/spree/home

Create a file in your new directory called sale.html.erb and add the following to it:

<%= render 'spree/shared/products', :products => @products %>

Now start your rails server again and navigate to localhost:3000/sale to see the product you listed on sale earlier! Exciting stuff, isn't it? The next step is to actually reflect the sale price instead of the original price by fixing our sales price extension using Spree Decorator.

Decorate your variant

Create the required directory for your new decorator within your mystore application: $ mkdir -p app/models/spree

Within your new directory, create a file called variant_decorator.rb and add:

module Spree
  Variant.class_eval do
    alias_method :orig_price_in, :price_in
    def price_in(currency)
      return orig_price_in(currency) unless sale_price.present?
      Spree::Price.new(:variant_id => self.id, :amount => self.sale_price, :currency => currency)
    end
  end
end

The original price_in method is aliased as orig_price_in and is still used unless a sale_price is present, in which case a price based on the sale price of the product's master variant is returned.

In order to ensure that our modification to the core Spree functionality works, we need to write a couple of unit tests for variant_decorator.rb. We need a full Rails application present to test it against, so we can create a barebones test_app to run our tests against.

Run the following command from the root directory of your EXTENSION: $ bundle exec rake test_app

It will begin the process by saying “Generating dummy Rails application…” - great! you’re on the right path.

Once you finish creating your dummy Rails app, run the rspec command and you should see the following output: No examples found.

Finished in 0.00005 seconds
0 examples, 0 failures

Now it’s time to start adding some tests by replicating your extension’s directory structure in the spec directory: $ mkdir -p spec/models/spree

In your new directory, create a file called variant_decorator_spec.rb and add this test:

require 'spec_helper'

describe Spree::Variant do
  describe "#price_in" do
    it "returns the sale price if it is present" do
      variant = create(:variant, :sale_price => 8.00)
      expected = Spree::Price.new(:variant_id => variant.id, :currency => "USD", :amount => variant.sale_price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end

    it "returns the normal price if it is not on sale" do
      variant = create(:variant, :price => 15.00)
      expected = Spree::Price.new(:variant_id => variant.id, :currency => "USD", :amount => variant.price)

      result = variant.price_in("USD")

      result.variant_id.should == expected.variant_id
      result.amount.to_f.should == expected.amount.to_f
      result.currency.should == expected.currency
    end
  end
end

Deface overrides

Next we need to add a field to our product admin page, so we don’t have to always go through the rails console to update a product’s sale_price. If we directly override the view that Spree provides, whenever Spree updates the view in a new release, the updated view will be lost, so we’d have to add our customizations back in to stay up to date.

A better way to override views is to use Deface, a Rails library that lets you customize views without directly editing the underlying view file. All view customizations will be in ONE location, app/overrides, which makes sure your app is always using the latest implementation of the view provided by Spree.

  1. Go to mystore/app/views/spree and create an admin/products directory and create the file _form.html.erb.
  2. Copy the full file NOT from Spree’s GitHub but from your Spree backend. You can think of your Spree backend as the area to edit your admin (among other things) - the spree_backend gem contains the most updated _form.html.erb - if you use the one listed in the documentation, you will get some Method Errors on your product page.

In order to find the _form.html.erb file in your spree_backend gem, navigate to your app, and within that, run the command: bundle show spree_backend

The result is the location of your spree_backend. Now cd into that location, and navigate to app/views/spree/admin/products - this is where you will find the correct _form.html.erb. Copy the contents of this file into the newly created _form.html.erb file within your application’s directory structure you just created: mystore/app/views/spree/admin/products.

Now we want to add a field container for the sale price after the price field container, so we need to create another override. Create a new file in your application's app/overrides directory called add_sale_price_to_product_edit.rb and add the following content:

Deface::Override.new(:virtual_path => 'spree/admin/products/_form',
  :name => 'add_sale_price_to_product_edit',
  :insert_after => "erb[loud]:contains('text_field :price')",
  :text => "
    <%= f.field_container :sale_price do %>
      <%= f.label :sale_price, raw(Spree.t(:sale_price) + content_tag(:span, ' *')) %>
      <%= f.text_field :sale_price, :value =>
        number_to_currency(@product.sale_price, :unit => '') %>
      <%= f.error_message_on :sale_price %>
    <% end %>
  ")

The last step is to update our model in order to get an updated product edit form. Create a new file in your application’s app/models/spree directory called product_decorator.rb. Add the following content:

module Spree
  Product.class_eval do
    delegate_belongs_to :master, :sale_price
  end
end

Now you can check to see if it worked by heading to http://localhost:3000/admin/products and you should edit one of the products. Once you’re on the product edit page, you should see a new field container called SALE PRICE. Add a sale price in the empty field and click on update. Once completed, navigate to http://localhost:3000/sale to find an updated list of products on sale.

CONCLUSION

Congratulations, you've created the sales functionality! If you're using Spree 2.3 to create a sales functionality for your application, I would love to know what your experience was like. Good luck!