
Smartrac's Liquid Galaxy at National Retail Federation

Last week, Smartrac exhibited at the retail industry’s BIG Show, NRF 2017, using a Liquid Galaxy with custom animations to showcase their technology.

Smartrac provides data analytics to retail chains by tracking physical goods with NFC and Bluetooth tags, following them all the way from the factory to the distribution center to retail stores. It's a complex but complete solution. How best to visualize all that data and show the incredible value that Smartrac brings? Seven screens with real-time maps in Cesium, 3D store models in Unity, and browser-based dashboards, of course.

End Point has been working with Smartrac for a number of years as a development resource on their SmartCosmos platform, helping them with UX and back-end database interfaces. This work included development of REST-based APIs for data handling, as well as a Virtual Reality project utilizing the Unity game engine to visualize data and marketing materials on several platforms, including the Oculus Rift, the Samsung Gear VR, and WebGL.

Bringing that back-end work forward on a highly visible platform for the retail conference was a natural extension for them, and the Liquid Galaxy fulfilled that role perfectly. The large Liquid Galaxy display allowed Smartrac to showcase some of their tools on a much larger scale.

For this project, End Point deployed two new technologies for the Liquid Galaxy:
  • Cesium Maps - Smartrac had two major requirements for their data visualizations: show the complexity of the solution and its global reach, while keeping the map data available offline wherever possible to avoid relying on sketchy Internet connections at the convention center (a constant risk). For this, we deployed Cesium instead of Google Earth, as it allowed for a fully offline tileset that we could store locally on the server, as well as providing a rich data visualization set (we've shown other examples before).
  • Unity3D Models - Smartrac also wanted to show how their product tracking works in a typical retail store. Rather than trying to wire up a whole live solution during the short setup period of a convention, however, they made the call to visualize everything with Unity, a very popular 3D rendering engine. Given the multiple screens of the Liquid Galaxy, and our ability to adjust the view angle for each screen in the arch around the viewers, this Unity solution was immersive and told their story quite well.
Smartrac showcased multiple scenes that combined 3D content with live data, labels superimposed on maps, and a multitude of supporting metrics. End Point developers worked on custom animations to show their tools in an engaging demo. During the convention, Smartrac had representatives leading attendees through the Liquid Galaxy presentations to show their data. Video of these presentations can be viewed below.



Smartrac’s Liquid Galaxy received positive feedback from everyone who saw it, exhibitors and attendees alike. Smartrac felt it was a great way to walk through their content, and attendees both enjoyed the content and were intrigued by the display they were seeing it on. Many attendees who had never seen a Liquid Galaxy before inquired about it.

If you’d like to learn more about Liquid Galaxy, the new projects we are working on, or having custom content developed, please visit our Liquid Galaxy website or contact us here.

TriSano Case Study


Overview

End Point has been working with state and local health agencies since 2008. We host disease outbreak surveillance and management systems and have expertise providing clients with the sophisticated case management tools they need to deliver in-house analysis, visualization, and reporting, combined with the flexibility to comply with changing state and federal requirements. End Point provides the hosting infrastructure, database, reporting systems, and customizations that agencies need in order to serve their populations.

Our work with health agencies is a great example of End Point’s ability to draw on our experience with open source technology and Ruby on Rails, to manage and back up large secure datasets, and to integrate reporting systems in order to build and support a full-stack application. We will discuss one such client in this case study.

Why End Point?

End Point is a good fit for this project because of our expertise in several areas, including reporting and hosting. End Point has a long history of consulting expertise in PostgreSQL and Ruby on Rails, which are the core technologies behind this application.

Also, End Point specializes in customizing open-source software, which can save not-for-profit and state agencies valuable budget dollars they can invest in other social programs.

Due to the secure nature of the medical data in these databases, we and our clients must adhere to all HIPAA and CDC policies regarding data handling, server hosting, and staff authorization and access auditing.




Team

Steve Yoman

Steve serves as the project manager for both communication and internal development in End Point’s relationship with the client. Steve brings many years of project management experience to this job and does a great job keeping track of every last detail, quote, and contract item.


Selvakumar Arumugam

Selva is one of those rare engineers who is gifted with both development and DevOps expertise. He is the main developer on daily tasks related to the disease tracking system. He also does a great job navigating a complex hosting environment and has helped the client make strides towards their future goals.


Josh Tolley

Josh is one of End Point’s most knowledgeable database and reporting experts. Josh’s knowledge of PostgreSQL is extremely helpful to make sure that the data is secure and stable. He built and maintains a standalone reporting application based on Pentaho.




Application

The disease tracking system consists of several applications including a web application, reporting application, two messaging areas, and SOAP services that relay data between internal and external systems.

TriSano: The disease tracking web app is an open source Ruby on Rails application based on the TriSano product, originally built at the Collaborative Software Initiative. This is a role-based web application where large amounts of epidemiological data can be entered manually or by data transfer.

Pentaho: Pentaho is a reporting and business intelligence application that, in this project, runs against the PostgreSQL database. It allows you to run a separate reporting service or embed reports into your website. Pentaho has a community version and an enterprise version; the enterprise version is used on this particular project. The reporting application provides OLAP services and dashboards and generates both ad hoc and static reports. Josh Tolley customized Pentaho so that the client can download or create custom reports depending on their needs.

Two Messaging Area applications: The TriSano system also serves as the central repository for messaging feeds used to collect data from local health care providers, laboratories throughout the state, and the CDC.

SOAP services: SOAP services running between the TriSano web app, the Pentaho reporting application, and the client’s data systems translate messages into the correct formats and relay the information to each application.

Into the Future

Based on the success of more than nine years of work on this project, the client continues to work with their End Point team to manage their few non-open-source software licenses, create long-term security strategies, and plan and implement everything related to continuous improvement and changes in epidemiology tracking. We partner with the client so they can focus their efforts on reading results and planning for the future health of their citizens. This ongoing partnership is something End Point is very proud to be a part of, and we hope to continue our work in this field well into the future.

End Point Rings the Morning Bell for Small Business



Recently Chase unveiled a digital campaign for Chase for Business, asking small businesses to submit videos of themselves ringing their own morning bells when they open for business. Chase would then select one video each day to post on their website and to play on their big screen in Times Square.

A few months back, Chase chose to feature End Point in their competition! They sent a full production team to our office to film us ringing the morning bell.

In preparation, we built a Liquid Galaxy presentation for Chase on our content management system. The presentation consisted of two scenes. In scene 1, we had “Welcome to Liquid Galaxy” written out across the outside four screens, displayed the End Point Liquid Galaxy logo on the center screen, and set the system to orbit around the globe. In scene 2, the Liquid Galaxy flies to Chase’s headquarters in New York City and orbits around their office. Two bells ring, each shown across two screens; the bell videos used were courtesy of Rayden Mizzi and St Gabriel's Church. Our logo continues to display on the center screen, and the Chase for Business website is shown on another screen as well.

The video that Chase created (shown above) features our CEO Rick giving an introduction of our company and then clicking on the Liquid Galaxy’s touchscreen to launch into the presentation.

We had a great time working with Chase, and were thrilled that they chose to showcase our company as part of their work to promote small businesses! To learn more about the Liquid Galaxy, you can visit our Liquid Galaxy website or contact us here.

Using Awk to beautify grep searches

Recently we've seen a sprouting of re-implementations of many popular Unix tools. With the expansion of communities built around new languages and platforms, it seems that, apart from the novelty of the technologies, the ideas on how to use them stay the same. There are more and more solutions to the same kinds of problems:

  • text editors
  • CSS pre-processors
  • find-in-files tools
  • screen scraping tools
  • ... many more ...

In this blog post I'd like to tackle the problem from yet another perspective. Instead of resorting to "new and cool" libraries and languages (grep implemented in language X), I'd like to use what's already out there in terms of tooling to build a nice search-in-files tool for myself.

Search in files tools

It seems that for many people it's very important to have a "search in files" tool that they really like. Some of the nice work we've seen so far includes tools like the_silver_searcher and ripgrep, among others.

These are certainly very nice. But as the goal of this post is to build something out of the tooling found in any minimal Unix-like installation, they won't work here. They either need to be compiled or require Perl to be installed, which isn't available everywhere (e.g. not on a default FreeBSD install, though it's obviously available via the ports).

What I really need from the tool

I do understand that for some developers, waiting 100 ms longer for the search results might be too long. I'm not like that though. Personally, all I care about when searching is how the results are presented. I also like the consistency of using the same approach across the many machines I work on. We're often working on remote machines at End Point. The need to install e.g. the Rust compiler just to get the ripgrep tool is too time consuming and hence doesn't contribute to getting things done faster. The same goes for e.g. the_silver_searcher, which needs to be compiled too. What options do I have then?

Using good old Unix tools

The "find in files" functionality is covered fully by the Unix grep tool. It allows searching for a given substring but also "Regex" matches. The output can not only contain only the lines with matches, but also the lines before and after to give some context. The tool can provide line numbers and also search recursively within directories.

While I'm not into speeding it up, I'd certainly love to play with its output because I do care about my brain's ability to parse text and hence: be more productive.

The usual output of grep:

$ # searching inside of the ripgrep repo sources:
$ egrep -nR Option src
(...)
src/search_stream.rs:46:    fn cause(&self) -> Option<&StdError> {
src/search_stream.rs:64:    opts: Options,
src/search_stream.rs:71:    line_count: Option<u64>,
src/search_stream.rs:78:/// Options for configuring search.
src/search_stream.rs:80:pub struct Options {
src/search_stream.rs:89:    pub max_count: Option<u64>,
src/search_stream.rs:94:impl Default for Options {
src/search_stream.rs:95:    fn default() -> Options {
src/search_stream.rs:96:        Options {
src/search_stream.rs:113:impl Options {
src/search_stream.rs:160:            opts: Options::default(),
src/search_stream.rs:236:    pub fn max_count(mut self, count: Option<u64>) -> Self {
src/search_stream.rs:674:    pub fn next(&mut self, buf: &[u8]) -> Option<(usize, usize)> {
src/worker.rs:24:    opts: Options,
src/worker.rs:28:struct Options {
src/worker.rs:38:    max_count: Option<u64>,
src/worker.rs:44:impl Default for Options {
src/worker.rs:45:    fn default() -> Options {
src/worker.rs:46:        Options {
src/worker.rs:72:            opts: Options::default(),
src/worker.rs:148:    pub fn max_count(mut self, count: Option<u64>) -> Self {
src/worker.rs:186:    opts: Options,
(...)

What my eyes would like to see is more like the following:

$ mygrep Option src
(...)
src/search_stream.rs:
 46        fn cause(&self) -> Option<&StdError> {
 ⁞    
 64        opts: Options,
 ⁞    
 71        line_count: Option<u64>,
 ⁞    
 78    /// Options for configuring search.
 ⁞    
 80    pub struct Options {
 ⁞    
 89        pub max_count: Option<u64>,
 ⁞    
 94    impl Default for Options {
 95        fn default() -> Options {
 96            Options {
 ⁞    
 113   impl Options {
 ⁞    
 160               opts: Options::default(),
 ⁞    
 236       pub fn max_count(mut self, count: Option<u64>) -> Self {
 ⁞    
 674       pub fn next(&mut self, buf: &[u8]) -> Option<(usize, usize)> {

src/worker.rs:
 24        opts: Options,
 ⁞    
 28    struct Options {
 ⁞    
 38        max_count: Option<u64>,
 ⁞    
 44    impl Default for Options {
 45        fn default() -> Options {
 46            Options {
 ⁞    
 72                opts: Options::default(),
 ⁞    
 148       pub fn max_count(mut self, count: Option<u64>) -> Self {
 ⁞    
 186       opts: Options,
(...)

Fortunately, even the tiniest Unix-like system installation already has all we need to make this happen, without installing anything else. Let's take a look at how we can modify the output of grep with awk to achieve what we need.

Piping into awk

Awk has been in Unix systems for many years — it's older than me! It is a programming language interpreter designed specifically to work with text. In Unix, we can use pipes to direct the output of one program to the standard input of another in the following way:

$ oneapp | secondapp

The idea with our searching tool is to use what we already have and pipe it between the programs to format the output as we'd like:

$ egrep -nR Option src | awk -f script.awk

Notice that we used egrep even though in this simple case we didn't need to; fgrep or plain grep would have been sufficient.
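
For the record, any of these three would do here, since the string Option contains no regex metacharacters:

$ # fixed strings only:
$ fgrep -nR Option src

$ # basic regular expressions:
$ grep -nR Option src

$ # extended regular expressions (what egrep gives us):
$ grep -EnR Option src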

Very quick introduction to coding with Awk

Awk is one of the forefathers of languages like Perl and Ruby. In fact some of the ideas I'll show you here exist in them as well.

The structure of awk programs can be summarized as follows:

BEGIN {
  # init code goes here
}

# "body" of the script follows:

/pattern-1/ {
  # what to do with the line matching the pattern?
}

/pattern-n/ {
  # ...
}

END {
  # finalizing
}

The interpreter provides default versions for all three parts: a "no-op" for BEGIN and END and "print each line unmodified" for the "body" of the script.
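
That means a program can consist of just a pattern or just an action. For instance, both of the following simply copy their input unchanged; the first relies on the default pattern of matching every line, the second on the default action of printing it:

$ awk '{ print }' file.txt
$ awk '1' file.txt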

Each line is exploded into columns based on the "separator", which by default is any number of consecutive whitespace characters. One can change it via the -F switch or by assigning the FS variable inside the BEGIN block. We'll do just that in our example.
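
For example, both of the following treat the colon as the separator and print the first field of every line of /etc/passwd (the user name); one sets it with the -F switch, the other inside BEGIN:

$ awk -F ':' '{ print $1 }' /etc/passwd
$ awk 'BEGIN { FS=":" } { print $1 }' /etc/passwd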

The "columns" that lines are being exploded into can be accessed via the special variables:

$0 # the whole line
$1 # first column
$2 # second column
# etc

The FS variable can contain a pattern too. So, for example, if we had a file with the following contents:

One | Two | Three | Four
Eins | Zwei | Drei | Vier
One | Zwei | Three | Vier

The following assignment would make Awk explode lines into proper columns:

BEGIN {
  # a multi-character FS is treated as a regular expression;
  # including the spaces around the pipe keeps them out of the fields:
  FS=" \\| "
}

# the ~ operator gives true if left side matches
# the regex denoted by the right side:
$1 ~ "One" {
  print $2
}

Running this script against that file would give us:

$ cat file.txt | awk -f script.awk
Two
Zwei

Simple Awk coding to format the search results

Armed with this simple knowledge, we can tackle the problem we stated in the earlier part of this post:

BEGIN {
  # the output of grep in the simple case
  # contains:
  # <file-name>:<line-number>:<file-fragment>
  # let's capture these parts into columns:
  FS=":"
  
  # we are going to need to "remember" if the <file-name>
  # changes, to print its name and to do that only
  # once per file:
  file=""
  
  # we'll be printing line numbers too; the non-consecutive
  # ones will be marked with the special line with vertical
  # dots; let's have a variable to keep track of the last
  # line number:
  ln=0
  
  # we also need to know we've just encountered a new file
  # not to print these vertical dots in such case:
  filestarted=0
}

# let's process every line except the "--" group separators and
# the notices grep prints when a binary file matches the pattern:
!/(--|Binary)/ {

  # remember: $1 is the first column which in our case is
  # the <file-name> part; the file variable is used to
  # store the most recently processed file name; if they
  # don't match, then we know we've encountered a new
  # file name:
  if($1 != file && $1 != "")
  {
    file=$1
    print "\n" $1 ":"
    ln = $2
    filestarted=0
  }

  # if the line number is greater than the last one by more
  # than one, then we're dealing with a result from a
  # non-consecutive line; let's mark the gap with vertical dots:
  if($2 > ln + 1 && filestarted != 0)
  {
    print "⁞"
  }

  # the substr function returns a substring of a given one
  # starting at a given index; we need to print out the
  # search result found in a file; here's a gotcha: the results
  # may contain the ':' character as well! simply printing
  # $3 could potentially leave out some portions of it;
  # this is why we're using the whole line, cutting off the
  # part we know for sure we don't need:
  out=substr($0, length($1 ":" $2 ": "))

  # let's deal with only the lines that make sense:
  if($2 >= ln && $2 != "")
  {
    # sprintf function matches the one found in C lang;
    # here we're making sure the line numbers are properly
    # spaced:
    linum=sprintf("%-4s", $2)
    
    # print <line-number> <found-string>
    print linum " " out
    
    # assign last line number for later use
    ln=$2
    
    # ensure that we know that we've "started" the current file:
    filestarted=1
  }
}

Notice that the "middle" part of the script (the one with the patterns and actions) gets run in an implicit loop, once for each input line.

To use the above awk script you could wrap it up with the following shell script:

#!/bin/bash

egrep -nR "$@" | awk -f script.awk

Here we're very trivially (and somewhat naively) passing all the arguments given to the script on to egrep with the use of "$@".

This is, of course, a simple solution. Some care needs to be applied when trying to make it work with the -A, -B and -C switches, though that's not difficult either. All it takes is to e.g. pipe the output through sed (another great Unix tool, the "stream editor") to replace the initial '-' characters in the [filename]-[line-number] prefixes so they match the awk script's assumption of ':' as the separator.
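
Here is a rough sketch of that idea. It is untested and assumes file names that don't themselves contain a dash-digits-dash sequence: grep prints context lines as <file>-<line>-<text>, so the wrapper rewrites that prefix with sed before the awk script sees it.

#!/bin/bash

# Context lines use '-' instead of ':' between the file name, the line
# number and the text; rewrite that prefix so script.awk can treat them
# like ordinary matches. The '--' group separators are already filtered
# out inside script.awk.
egrep -nR "$@" | sed -E 's/^([^:]+)-([0-9]+)-/\1:\2:/' | awk -f script.awk

With that in place, passing e.g. -C 2 among the arguments should produce context lines that the awk script formats just like the matches themselves.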

In praise of "what-already-works"

A simple script like the one shown above can easily be placed in your GitHub, Bitbucket or GitLab account and fetched with curl on whichever machine you're working on. With one call to curl and maybe another one to put the script somewhere in your local PATH, you gain a productivity-enhancing tool that doesn't require anything other than what you already have.
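
For example (the URLs below are just placeholders for wherever you actually keep the two files):

$ curl -fsSL https://example.com/you/dotfiles/raw/master/script.awk -o ~/bin/script.awk
$ curl -fsSL https://example.com/you/dotfiles/raw/master/mygrep -o ~/bin/mygrep
$ chmod +x ~/bin/mygrep
$ # remember to point the wrapper at the full path of the awk script,
$ # e.g. awk -f ~/bin/script.awk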

I'll keep learning about "what we already have" so as not to fall too much into "what's hot and new" unnecessarily.

Liquid Galaxy at CES

This past week, End Point attended and exhibited at CES, a global consumer electronics and consumer technology trade show that takes place every January in Las Vegas, Nevada. End Point’s Liquid Galaxy was set up in the Gigabyte exhibit at Caesars Palace.

Gigabyte invited us to set up a Liquid Galaxy in their exhibit space because they believe the Liquid Galaxy is the best showpiece for their Brix hardware. The Brix, or “Brix GTX Pro” in this case, offers a 6th generation Intel i7 processor and NVIDIA GTX 950 graphics (for high-performance applications such as gaming) in a small and sleek package (12 in. long, 9 in. wide, 1 in. tall). Since each Brix GTX Pro offers 4 display outputs, we needed only two Brix units to run all 7 screens and the touchscreen, and one more to power the headnode!

This was the first time we have powered a Liquid Galaxy with Gigabyte Brix units, and the hardware proved to be extremely effective. It is a significantly sleeker solution than hardware situated in a server rack. It is also very cost-effective.

We created custom content for Gigabyte on our Content Management System. A video of one of our custom presentations can be viewed below. We built the presentation so that the GTX Pro product webpage was on the left-most screen and the GTX Pro landing webpage was on the right-most screen. A custom Gigabyte video built for CES covered the center three screens. The Gigabyte logo was displayed on the 2nd screen to the left. In the background, the system was set to orbit on Google Earth. This presentation built for Gigabyte, which includes graphics, webpages, videos, and KML, demonstrates many of the capabilities of End Point’s Content Management System and the Liquid Galaxy.

In addition to being a visually dazzling display tool for Gigabyte to show off to its network of customers and partners, Liquid Galaxy was a fantastic way for Gigabyte to showcase the power of their Brix hardware. The opportunity to collaborate with Gigabyte on the Liquid Galaxy was a welcome one, and we look forward to further collaboration.

To learn more about the Liquid Galaxy, you can visit our Liquid Galaxy website or contact us here.