Welcome to End Point’s blog

Ongoing observations by End Point people

Postgres 9.5: three little things

The recent release of Postgres 9.5 has many people excited about the big new features such as UPSERT (docs) and row-level security (docs). Today I would like to celebrate three of the smaller features that I love about this release.

Before jumping into my list, I'd like to thank everyone who contributes to Postgres. I did some quick analysis and found that 85 people, from Adrien to Zeus, contributed to version 9.5 of Postgres, at least according to the git logs. Of course, the real number is higher, as that doesn't take into account people helping out on the #postgresql channel, running buildfarm animals, doing packaging work, keeping the infrastructure running, etc. Thanks to you all!
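The "quick analysis" need be nothing fancy; here is a sketch of the idea, assuming a local clone of the Postgres git repository (REL9_4_0 and REL9_5_0 are the real release tags):

```shell
# Count distinct commit author names between two Postgres releases.
# Run inside a clone of the Postgres git repository:
#   git log --format='%an' REL9_4_0..REL9_5_0 | sort -u | wc -l
# The counting stage is plain sort/uniq; demonstrated on a tiny input:
printf 'Adrien\nZeus\nAdrien\n' | sort -u | wc -l   # → 2
```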

Feature: REINDEX VERBOSE

The first feature is one I have wanted for a long time - a verbose form of the REINDEX command. Thanks to Sawada Masahiko for adding this. Like VACUUM, REINDEX gets kicked off and then gives no progress or information until it finishes; but while VACUUM has long had a VERBOSE option to get around this, REINDEX gave you no clue as to which index it was working on, or how much work each index took to rebuild. Here is a normal reindex, along with another 9.5 feature, the ability to reindex an entire schema:

greg=# reindex schema public;
## What seems like five long minutes later...
REINDEX

The new syntax uses parentheses to support VERBOSE and any other future options. If you are familiar with EXPLAIN's newer options, you may see a similarity. More on the syntax in a bit. Here is the much improved version in action:

greg=# reindex (verbose) schema public;
INFO:  index "foobar_pkey" was reindexed
DETAIL:  CPU 11.00s/0.05u sec elapsed 19.38 sec.
INFO:  index "foobar_location" was reindexed
DETAIL:  CPU 5.21s/0.05u sec elapsed 18.27 sec.
INFO:  index "location_position" was reindexed
DETAIL:  CPU 9.10s/0.05u sec elapsed 19.70 sec.
INFO:  table "public.foobar" was reindexed
INFO:  index "foobaz_pkey" was reindexed
DETAIL:  CPU 7.04s/0.05u sec elapsed 19.61 sec.
INFO:  index "shoe_size" was reindexed
DETAIL:  CPU 12.26s/0.05u sec elapsed 19.33 sec.
INFO:  table "public.foobaz" was reindexed
REINDEX

Why not REINDEX VERBOSE TABLE foobar? Or even REINDEX TABLE foobar WITH VERBOSE? There was a good discussion of this on pgsql-hackers when this feature was being developed, but the short answer is that parentheses are the correct way to do things moving forward. Robert Haas summed it up well:

The unparenthesized VACUUM syntax was added back before we realized that that kind of syntax is a terrible idea. It requires every option to be a keyword, and those keywords have to be in a fixed order. I believe the intention is to keep the old VACUUM syntax around for backward-compatibility, but not to extend it. Same for EXPLAIN and COPY.

The psql help command shows the new syntax:

greg=# \h REINDEX
Command:     REINDEX
Description: rebuild indexes
Syntax:
REINDEX [ ( { VERBOSE } [, ...] ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } name
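The parenthesized option works at every level the command supports, so you can be as targeted as you like; for example:

-- Verbose reindex of a single table or a single index:
REINDEX (VERBOSE) TABLE foobar;
REINDEX (VERBOSE) INDEX foobar_pkey;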

Feature: pg_ctl defaults to "fast" mode

The second feature in Postgres 9.5 I am happy about is the change in pg_ctl's default shutdown mode from "smart" to "fast". The help output of pg_ctl explains the different modes fairly well:

pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server.

Usage:
  pg_ctl stop    [-W] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]
...
Shutdown modes are:
  smart       quit after all clients have disconnected
  fast        quit directly, with proper shutdown
  immediate   quit without complete shutdown; will lead to recovery on restart

In the past, the default was "smart", which often meant your friendly neighborhood DBA would type "pg_ctl restart -D data", then watch the progress dots slowly marching across the screen, until they remembered that the default mode of "smart" is kind of dumb - as long as there is a single connected client, the restart will not happen. Thus, the DBA had to cancel the command and rerun it as "pg_ctl restart -D data -m fast". Then they would vow to remember to add the -m switch next time. And promptly forget it the next time they did a shutdown or restart. :) Now pg_ctl has a much better default. Thanks, Bruce Momjian!

Feature: new 'cluster_name' option

When you run a lot of different Postgres clusters on your server, as I tend to do, it can be hard to tell them apart, as the version and port are not reported in the ps output. I sometimes have nearly a dozen different clusters running, due to testing different versions and different applications. Similar in spirit to the application_name option, the new cluster_name option solves the problem neatly by allowing a custom string to be put into the process title. Thanks to Thomas Munro for inventing this. So instead of this:

greg      7780     1  0 Mar01 pts/0    00:00:03 /home/greg/pg/9.5/bin/postgres -D data
greg      7787  7780  0 Mar01 ?        00:00:00 postgres: logger process   
greg      7789  7780  0 Mar01 ?        00:00:00 postgres: checkpointer process   
greg      7790  7780  0 Mar01 ?        00:00:09 postgres: writer process   
greg      7791  7780  0 Mar01 ?        00:00:06 postgres: wal writer process   
greg      7792  7780  0 Mar01 ?        00:00:05 postgres: autovacuum launcher process   
greg      7793  7780  0 Mar01 ?        00:00:11 postgres: stats collector process  
greg      6773     1  0 Mar01 pts/0    00:00:02 /home/greg/pg/9.5/bin/postgres -D data2
greg      6780  6773  0 Mar01 ?        00:00:00 postgres: logger process   
greg      6782  6773  0 Mar01 ?        00:00:00 postgres: checkpointer process   
greg      6783  6773  0 Mar01 ?        00:00:04 postgres: writer process   
greg      6784  6773  0 Mar01 ?        00:00:02 postgres: wal writer process   
greg      6785  6773  0 Mar01 ?        00:00:02 postgres: autovacuum launcher process   
greg      6786  6773  0 Mar01 ?        00:00:07 postgres: stats collector process

One can adjust the cluster_name inside each postgresql.conf (for example, to 'alpha' and 'bravo'), and get this:

greg      8267     1  0 Mar01 pts/0    00:00:03 /home/greg/pg/9.5/bin/postgres -D data
greg      8274  8267  0 Mar01 ?        00:00:00 postgres: alpha: logger process   
greg      8277  8267  0 Mar01 ?        00:00:00 postgres: alpha: checkpointer process   
greg      8278  8267  0 Mar01 ?        00:00:09 postgres: alpha: writer process   
greg      8279  8267  0 Mar01 ?        00:00:06 postgres: alpha: wal writer process   
greg      8280  8267  0 Mar01 ?        00:00:05 postgres: alpha: autovacuum launcher process   
greg      8281  8267  0 Mar01 ?        00:00:11 postgres: alpha: stats collector process  
greg      8583     1  0 Mar01 pts/0    00:00:02 /home/greg/pg/9.5/bin/postgres -D data2
greg      8590  8583  0 Mar01 ?        00:00:00 postgres: bravo: logger process   
greg      8592  8583  0 Mar01 ?        00:00:00 postgres: bravo: checkpointer process   
greg      8591  8583  0 Mar01 ?        00:00:04 postgres: bravo: writer process   
greg      8593  8583  0 Mar01 ?        00:00:02 postgres: bravo: wal writer process   
greg      8594  8583  0 Mar01 ?        00:00:02 postgres: bravo: autovacuum launcher process   
greg      8595  8583  0 Mar01 ?        00:00:07 postgres: bravo: stats collector process
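For reference, the change behind the listing above is a single line in each cluster's postgresql.conf; note that cluster_name only takes effect at server start:

# postgresql.conf for the cluster in "data":
cluster_name = 'alpha'

# postgresql.conf for the cluster in "data2":
cluster_name = 'bravo'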

There are a lot of other things added in Postgres 9.5. I recommend you visit this page for a complete list, and poke around on your own. A final shout out to all the people who are continually improving the tab-completion of psql. You rock.

Improve SEO URLs for Interchange search pages

This is an article aimed at beginner-to-intermediate Interchange developers.

A typical approach to a hierarchical Interchange site is:

Categories -> Category -> Product

I.e., you list all your categories as links, each of which opens up a search results page filtering the products by category, with links to the individual product pages via the flypage.

Recently I upgraded a site so the category URLs were a bit more SEO-friendly. The original category filtering search produced these lovely specimens:

/search.html?fi=products&st=db&co=1&sf=category&se=Shoes&op=rm
   &sf=inactive&se=yes&op=ne&tf=category&ml=100

but what I really wanted was:

/cat/Shoes.html

Such links are easier to communicate to users, more friendly to search engines, less prone to breakage (e.g., by getting word-wrapped in email clients), and avoid exposing details of your application (here, we've had to admit publicly that we have a table called "products" and that some items are "inactive"; a curious user might decide to see what happens if they change "sf=inactive&se=yes" to some other expression).

Here's how I attacked this.

Creating a category listing page

First, I copied my "results.html" page to "catpage.html". That way, my original search results page can continue to serve up ad hoc search results.

The search results were displayed via:

[search-region]
...
[/search-region]

I converted this to a database query:

[query sql="SELECT * FROM products WHERE NOT inactive AND category = [sql-quote][cgi category][/sql-quote]"
 type=list prefix=item]
...
[/query]

I chose to use a prefix other than the default since it would avoid having to change so many tags in the page, and now both the original search page and new catpage would look much the same internally (and thus, if desired, I could refactor them in the future).

Note that I've defined part of the API for this page: the category to be searched is set in a CGI variable called "category".

In my specific case, there was additional tinkering with this tag, because I had nested [query] tags already in the page within the search-region.

Creating a "cat" actionmap

In order to translate a URL containing SEO-friendly "/cat/Shoes.html" into my search, I need an actionmap. Here's mine; it's very simple.

Actionmap cat <<"CODE"
sub {
  my $url = shift;
  my @url_parts = split '/' => $url;
  shift @url_parts if $url_parts[0] eq 'cat';

  $CGI->{mv_nextpage} = 'catpage.html';
  $CGI->{category} = shift @url_parts;
  return 1;
}
CODE

Actionmaps are called when Interchange detects that a URL begins with the actionmap's name; here "cat". They are passed a parameter containing the URL fragment (after removing all the site stuff). Here, that would be (e.g.) "/cat/Shoes". We massage the URL to get our category code, and set up the page to be called along with the CGI parameter(s) it expects.

Cleaning up the links

At the start of this article I noted that I may have a page listing all my categories. In my original setup, this generated links using a construction like this:


  <a href="/search.html?fi=products&st=db&co=1&sf=category&se=Shoes&op=rm&sf=inactive&se=yes&op=ne&tf=category&ml=100">Shoes</a>

Now my links are the much simpler:

<a href="/cat/Shoes.html">Shoes</a>

In my specific case, these links were generated within a [query] loop, but the approach is the same.

Note: the Strap demo supports SEO-friendly URLs out of the box, and is included with the latest Interchange 5.10 release.

Full Screen Gallery with Supersized and video slides

I was recently looking to build a full screen image and video gallery for our client Mission Blue - something similar to the Google Maps interface you can see in the screenshot below:

After scouring the Internet for a suitable jQuery plugin, I finally decided on Supersized, a full screen background slideshow plugin for jQuery.

After downloading the library, include it on the page:

<link href="/wp-content/plugins/wp-supersized/theme/supersized.shutter.css?ver=4.2.2" id="supersized_theme_css-css" media="all" rel="stylesheet" type="text/css" />
<script src="/wp-includes/js/jquery/ui/effect.min.js?ver=1.11.4" type="text/javascript"></script>
<script src="/wp-content/plugins/wp-supersized/js/jquery.easing.min.js?ver=1.3" type="text/javascript"></script>
<script src="/wp-content/plugins/wp-supersized/js/jquery.easing.compatibility.js?ver=1.0" type="text/javascript"></script>
<script src="/wp-content/plugins/wp-supersized/js/jquery.animate-enhanced.min.js?ver=0.75" type="text/javascript"></script>
<script type='text/javascript' src='/wp-content/plugins/wp-supersized/js/supersized.3.2.7.min.js?ver=3.2.7'></script>

Basic functionality

Let's create a variable that will hold all the images in the slideshow:

var images = [];
images.push({
  type: 'IMAGE',
  image: 'img1.jpg',
  title: 'Image 1',
  thumb: 'img1_thumb.jpg',
  url: 'http://www.endpoint.com'
});
images.push({
  type: 'YOUTUBE',
  image: 'screenshot1.jpg',
  title: 'YouTube slide',
  videoid: 'abc12345678',
  thumb: 'screenshot1_thumb.jpg',
  url: 'https://www.youtube.com/watch?v=abc12345678'
});

Let's initialize Supersized:

jQuery.supersized({
  slideshow: 1,
  autoplay: 0,
  min_width: 0,
  min_height: 0,
  vertical_center: 1,
  horizontal_center: 1,
  fit_always: 0,
  fit_portrait: 1,
  fit_landscape: 0,
  slide_links: 'blank',
  thumb_links: 1,
  thumbnail_navigation: 1,
  slides: images,
  mouse_scrub: 0
});

Customizing the toolbar

The Supersized shutter theme expects certain element IDs for its thumbnail tray and caption, so the markup is simple:

<div id="thumb-tray" class="load-item">
  <div id="thumb-back"></div>
  <div id="thumb-forward"></div>
</div>
<div id="slidecaption"></div>

Customizing the screen image size

I didn't want to have the full screen image as it was a little overwhelming for the user. I wanted the black bars just like in the Google interface. Supersized allows for easy customization. This CSS did the trick:

#supersized, #supersized li {
  width: 70% !important;
  left: 0 !important;
  right: 0 !important;
  top: 1px !important;
  margin:auto;
}

Introducing video (YouTube) slides

First, I added the YouTube API:

<script type="text/javascript" src="https://www.youtube.com/iframe_api"></script>

Then I added a couple of CSS styles:

#supersized .player {
  margin: auto;
  display: block;
}

Finally, I went into the Supersized library source and modified it. To allow the video slides to appear, I added a new condition for the slide type 'YOUTUBE':

base._renderSlide = function(loadPrev, options) {
  var linkTarget = base.options.new_window ? ' target="_blank"' : '';
  var imageLink = (base.options.slides[loadPrev].url) ? "href='" + base.options.slides[loadPrev].url + "'" : "";
  var slidePrev = base.el + ' li:eq(' + loadPrev + ')';
  var imgPrev = $('<img src="' + base.options.slides[loadPrev].image + '"/>');

  if (base.options.slides[loadPrev].type == 'YOUTUBE') {
    imgPrev.load(function () {
      var video = $('<div class="player" id="player'+ base.options.slides[loadPrev].videoid + '"></div>');
      video.appendTo(slidePrev);
      var player = new YT.Player('player' + base.options.slides[loadPrev].videoid, {
        height: 390,
        width: 640,
        videoId: base.options.slides[loadPrev].videoid
      });
    });// End Load
  }
  else {
    imgPrev.appendTo(slidePrev).wrap('<a ' + imageLink + linkTarget + '></a>').parent().parent().addClass('image-loading ' + options['class']);

    imgPrev.load(function () {
      $(this).data('origWidth', $(this).width()).data('origHeight', $(this).height());
      base.resizeNow();// Resize background image
    });// End Load
  }
};

Final Result

This is how the gallery looks with the customizations:

This is what a video slide looks like:

Hope you found this writeup useful!

Medium-inspired Parallax Blur Effect For WordPress

Are you running a WordPress blog, but secretly dying to have that Medium parallax blur effect? I recently implemented this, and would like to share it with you. By the way, while I was working on this article, the effect was removed from Medium, which only makes having one on your website more precious.

Let's assume that we have our custom theme class MyTheme. In functions.php:

class MyThemeBaseFunctions {
  public function __construct() {
    add_image_size('blurred', 1600, 1280, true);
    add_filter('wp_generate_attachment_metadata', array($this,'wp_blur_attachment_filter'));
  }
}

We added a custom image size blurred, and a callback wp_blur_attachment_filter on wp_generate_attachment_metadata, which is where the magic happens.

Before jumping into that, let's talk a little about ImageMagick, a powerful image processing library that we will use to create the blurred effect. After some experimenting, I found that the image needed to be darkened, and then a regular blur applied with sigma=20. You can read more about these settings at ImageMagick Blur Usage. I used Gaussian blur at first, but found the processing extremely slow, with little difference in the end result compared to other blur methods.

Now we are ready to write a blurring function:

public function wp_blur_attachment_filter($image_data) {
  if ( !isset($image_data['sizes']['blurred']) )
    return $image_data;
  $upload_dir = wp_upload_dir();
  $src = $upload_dir['path'] . '/' . $image_data['sizes']['large']['file'];
  $destination = $upload_dir['path'] . '/' . $image_data['sizes']['blurred']['file'];
  $imagick = new \Imagick($src);
  $imagick->blurImage(0, 20, Imagick::CHANNEL_ALL);
  $imagick->modulateImage(75, 105, 100);
  $imagick->writeImage($destination);
  return $image_data;
}

I darken the image:

$imagick->modulateImage(75, 105, 100);

And I blur the image:

$imagick->blurImage(0, 20, Imagick::CHANNEL_ALL);

Now we are able to use the custom image size in the template. Place the helper function in functions.php:

public static function non_blurred($src) {
  $url = get_site_url() . substr($src, strrpos($src, '/wp-content'));
  $post_ID = attachment_url_to_postid($url);
  list($url, $width, $height) = wp_get_attachment_image_src($post_ID, 'large');
  return $url;
}

public static function blurred($src) {
  $url = get_site_url() . substr($src, strrpos($src, '/wp-content'));
  $post_ID = attachment_url_to_postid($url);
  list($url, $width, $height) = wp_get_attachment_image_src($post_ID, 'blurred');
  return $url;
}

And now use it in the template like this:

<div class="blurImg">
  <div style="background-image: url('<?php echo MyTheme::non_blurred(get_theme_mod( 'header' )); ?>')"></div>
  <div style="background-image: url('<?php echo MyTheme::blurred(get_theme_mod('header')); ?>'); opacity: 0;" class="blur"></div>
</div>
<header></header>

Add CSS:

.blurImg {
  height: 440px;
  left: 0;
  position: relative;
  top: 0;
  width: 100%;
  z-index: -1;
}

.blurImg > div {
  background-position: center center;
  background-repeat: no-repeat;
  background-size: cover;
  height: 440px;
  position: fixed;
  width: 100%;
}

header {
  padding: 0 20px;
  position: absolute;
  top: 0;
  width: 100%;
  z-index: 1;
}

Add JavaScript magic sauce to gradually replace the non-blurred image with the blurred version as the user scrolls:

(function() {
  jQuery(window).scroll(function() {
    var H = 240;
    var oVal = jQuery(window).scrollTop() / H;
    jQuery(".blurImg .blur").css("opacity", oVal);
  });
}).call(this);

Generating the blurred version can be very strenuous on the server. I would often receive the error PHP Fatal error: Maximum execution time of 30 seconds exceeded. There are ways to work around that. One way is to blur the image more cheaply by shrinking it, blurring the small copy, and resizing it back up. Another way is to use a background job, something like this:

add_action( 'add_attachment', array($this, 'wp_blur_attachment_filter') );
add_action( 'wp_blur_attachment_filter_hook', 'wp_blur_attachment_filter_callback');
function wp_blur_attachment_filter_callback($path) {
  $path_parts = pathinfo($path);
  $imagick = new \Imagick($path);
  $imagick->blurImage(0, 20, Imagick::CHANNEL_ALL);
  $imagick->modulateImage(75, 105, 100);
  $destination = dirname($path) . "/" . $path_parts['filename'] . "_darken_blur." . $path_parts['extension'];
  $imagick->writeImage($destination);
}

public function wp_blur_attachment_filter($post_ID) {
  $path = get_attached_file( $post_ID );
  wp_schedule_single_event(time(), 'wp_blur_attachment_filter_hook', array($path));
}
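As for the first workaround, here is a rough sketch of the shrink-blur-resize idea (untested; the sigma of 5 on the quarter-size copy is just a guess at what visually approximates the original sigma=20 at full size):

// Blur a quarter-size copy, then scale back up: far cheaper than
// blurring at full resolution, at the cost of some fidelity.
$imagick = new \Imagick($src);
$width  = $imagick->getImageWidth();
$height = $imagick->getImageHeight();
$imagick->resizeImage((int)($width / 4), (int)($height / 4), \Imagick::FILTER_LANCZOS, 1);
$imagick->blurImage(0, 5, \Imagick::CHANNEL_ALL);
$imagick->modulateImage(75, 105, 100);
$imagick->resizeImage($width, $height, \Imagick::FILTER_LANCZOS, 1);
$imagick->writeImage($destination);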

Or better yet, use cloud image processing — I wrote about that here.

I hope you found this writeup useful!

PostgreSQL Point-in-time Recovery: An Unexpected Journey

With all the major changes and improvements to PostgreSQL's native replication system through the last few major releases, it's easy to forget that there can be benefits to having some of the tried and true functionalities from older PostgreSQL versions in place.

In particular, with the ease of setting up Hot Standby/Streaming Replication, it's easy to get replication going with almost no effort. Replication is great for redundancy, scaling, and backups; however, it does not solve all potential data-loss problems. For best results it should be used in conjunction with Point-in-time Recovery (PITR) and the archiving features of PostgreSQL.

Background

We recently had a client experience a classic blunder with their database: performing a manual UPDATE without wrapping it in a transaction block. The table in question was the main table in the application, and the client had done an unqualified UPDATE, unintentionally setting a specific field to a constant value for every row instead of targeting the specific row they thought they were going for.

Fortunately, the client had backups. Unfortunately, the backups alone would not be enough: being a snapshot of the data from earlier in the day, restoring one would have lost all of the changes made throughout the day.

This resulted in a call to us to help out with the issue. We fortunately had information about precisely when the errant UPDATE took place, so we were able to use this information to help target a PITR-based restore.

The Approach

Since we did not want to lose other changes made in this database cluster either before or after this mistake, we came up with the following strategy which would let us keep the current state of the database but just recover the field in question:

  1. Create a parallel cluster for recovery.
  2. Load the WAL until just before the time of the event.
  3. Dump the table in question from the recovery cluster.
  4. Load the table in the main cluster with a different name.
  5. Use UPDATE FROM to update the field values for the table with their old values based on the table's Primary Key.
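Step 5 in SQL, with illustrative names (say the recovered copy was loaded as widgets_recovered, the primary key is id, and the clobbered column is price):

UPDATE widgets AS w
   SET price = r.price
  FROM widgets_recovered AS r
 WHERE w.id = r.id;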

In practice, this worked out pretty well, though of course there were some issues that had to be worked around.

PostgreSQL's PITR relies on its WAL archiving mechanism combined with taking regular base backups of the data directory. As part of the archive setup, you choose the strategies (such as the frequency of the base backups) and ensure that you can recover individual WAL segment files when desired.
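As a refresher, the archiving side of that setup is just a few postgresql.conf settings; a minimal sketch (the archive path here is illustrative):

wal_level = archive      # or hot_standby, if the cluster also feeds replicas
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'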

In order for the above strategy to work, you need hardware to run it on. The client proposed their standby server, which was definitely equipped to handle this and did not have much load. The client had initially suggested that we could break the replication, but we recommended against that: the server had sufficient disk space, and keeping the replica intact avoided the future work and risk of rebuilding it afterward.

We copied over the daily base backup into its own directory/mount point here, adjusted the recovery.conf file to point to the local WAL directory, and copied the necessary WAL files from the archive location to the pg_xlog directory of the new cluster. We also had to adjust a few parameters in the new cluster, most notably the "port" parameter to run the cluster on a different port. We also used the timestamp of the incident as a target for the recovery.conf's recovery_target_time setting. After starting up the new cluster and letting things process, we were able to dump the table in question and finish the recovery on the master.
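The relevant parts of the recovery cluster's recovery.conf looked something like this (the path and timestamp are illustrative, not the client's actual values):

restore_command = 'cp /recovery/wal/%f "%p"'
recovery_target_time = '2016-03-01 14:15:00'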

Some issues did come up that required expert-level knowledge of the system, as well as some good luck in the timing of the event. We had to locate several of the WAL files in the initial archive on the primary server due to some issues with the (inherited by us) configuration. Also, given the timing of the event and the amount of time it took to create the parallel cluster, we were able to create the new instance before the next nightly base backup ran, which was fortunate: since the client had things configured to keep only a single base backup around, missing that window would have left us unable to resolve the issue.

Lessons Learned

With any issue there are takeaways, so what are they here?

  • Always use explicit transactions when manually modifying data, or modify your production environment's .psqlrc to add \set AUTOCOMMIT off.
  • Not all data-loss situations can be fixed with replication alone; Point-in-time Recovery is absolutely still relevant these days.
  • It helps to have a PostgreSQL expert on hand day or night. End Point offers 24x7 PostgreSQL support, which you can engage by getting ahold of us here.

Breaking Bash

Recently I managed to break the bash shell in an interesting and puzzling way. The initial symptoms were very frustrating: a workflow process we use here (creating a development camp) failed for me, but for no one else. That was at least a clue that it was me, not the workflow process.

Eventually, I narrowed down the culprit to the "grep" command (and that was more through luck than steadfast Sherlock-like detective work).

$ grep foo bar

grep: foo: No such file or directory

Eh? grep is mis-parsing the arguments! How does that happen?

So I began to study my bash environment. Eventually I came up with this fascinating little typo:

export GREP_OPTIONS='—color=auto'

That's supposed to be:

export GREP_OPTIONS='--color=auto'

but it got recorded in my .bashrc as an em-dash, not a double-dash. (My guess is that I cut-and-pasted it from a web page where someone over-helpfully "typeset" this command.)
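One quick way to catch such invisible impostors is to grep your startup files for any non-ASCII bytes; a demonstration with a throwaway file (the \342\200\224 octal escape is the UTF-8 encoding of an em-dash):

```shell
# Recreate the bad line in a scratch file, then hunt for non-ASCII bytes:
printf "export GREP_OPTIONS='\342\200\224color=auto'\n" > /tmp/demo_bashrc
LC_ALL=C grep -n '[^ -~]' /tmp/demo_bashrc   # flags line 1
```

The `[^ -~]` bracket expression matches any byte outside the printable ASCII range, which is exactly where smuggled "smart" punctuation hides.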

Ironically, this typo is innocuous under Bash 3.x, but when you slot it into a Bash 4.x installation, all heck busts loose.

Using Google Analytics to understand and grow your business

Google Analytics, a web analytics service offered by Google, is a very handy tool for understanding your audience. It allows you to understand where traffic comes from and what resonates with your audience, which has led to Google Analytics being the most widely used web analytics service on the internet. If you understand your website’s traffic, you then have the ability to focus your website and content to optimize engagement and growth.

With Google Analytics, you have the ability to see traffic from all channels. This will lead to clear insights, and will help you understand what's working and what's not:
  • Organic - traffic from search engines which is not paid for
  • Paid Search - visitors that clicked on one of your paid advertisements (also known as Pay-Per-Click or PPC)
  • Direct - visitors that typed your website address directly into the browser (includes bookmarks)
  • Social - traffic from sites that are considered to be social networking sites
  • Referral - visitors that arrived from 3rd party referrals
  • Email - visitors that are directed from an email
  • Display - visitors directed from video and display advertising

It will be helpful to walk through an example. Say you launch an email marketing campaign, and want to understand how your audience responded to your content. First, you can check how many people clicked through from the email to your website. From there, you can see how they navigated the site. Who came to the webpage? Did they take the actions you were hoping they would take? How long did they spend viewing the page? Where did they click? Did a click lead to a conversion or sale?

Prior to coming to End Point, I was working as an Analytic Strategist at a digital media agency. One of my biggest clients was luxury jewelry company Tiffany & Co. My responsibilities included analyzing seasonal and promotional trends to develop forecasts, broken out by channel. I would also evaluate the marketing effectiveness of initiatives by keeping a close eye on the user experience and navigational behavior, and provide recommendations throughout the customer journey.

Many jewelry promotions are seasonal, so we were constantly making changes to optimize the page and ensure traffic was navigating the website as we hoped. I made sure to keep in mind the 4 P's - Product, Price, Place, and Promotion - ensuring that the layout of the site, price points, and specials were always in line with what the audience was most looking for during the season.

This experience of understanding client goals, and helping them achieve those goals through the success of their websites, has proven to be a valuable skill since joining End Point's team. Beyond having the ability to help clients with their websites, this knowledge has also been helpful internally. I keep a close eye on where our website traffic is coming from, including seeing which blog posts resonate with different audiences. I have made recommendations for adjustments to poor-performing pages, and taken time to analyze pages with high bounce rates, to see whether the bounce rates are due to the page not being well-constructed or due to traffic not being properly directed. I also take note of any industry changes, and make website adjustments to accommodate those changes. For example, End Point has deep skills in many programming languages, and if we see one language gaining popularity we will make a point to highlight it more prominently.

I look forward to continuing to use Google Analytics to strengthen our company websites and to help our clients strengthen theirs. Please don't hesitate to reach out at ask@endpoint.com if you would like to learn more.