
Wrocloverb 2017 part 2 - The Elixir Hype

One of the main reasons I attend Wrocloverb almost every year is that it's a great forum for confronting ideas. It's almost a tradition to have at least two very enjoyable discussion panels during this conference. One of them was devoted to Elixir and to why the Ruby community [1] is so hyped about it.


Why is Elixir "sold" to us as a "new, better Ruby" when its underlying principles are totally different? Won't this produce Elixir programmers who don't understand Elixir (like Rails programmers who don't know Ruby)?


The panelists briefly discussed the history of Elixir:

José Valim (who created Elixir) had worked on thread safety in Rails and was searching for better concurrency approaches for web frameworks. He felt that a lot was lacking in Erlang, and Elixir is his approach to better exceptions and a better developer experience.

They then jumped to Elixir's main goals, which are:
  • Compatibility with Erlang (all datatypes)
  • Better tooling
  • Improving the developer experience

After that, they started speculating about problems that Elixir solves and Ruby on Rails doesn't:

Ruby on Rails addresses many problems in ways that may seem somewhat archaic in the ever-scaling world of 2017. There are alternative approaches, e.g. the actor model, which is implemented in Ruby by Celluloid and in Scala by Akka; Elixir and Phoenix (Elixir's Rails-like framework) get their own actor model through Erlang's lightweight processes.

Phoenix ("Rails for Elixir") is just an Elixir app; unlike Rails, it is not separate from the language it is built in. Moreover, Elixir compiles to the same bytecode as Erlang and runs on the same virtual machine, so:

Erlang = Elixir = Phoenix

A discussion followed in which the panelists speculated about the cost of the jump from Rails to Elixir:

The Java-to-Rails jump was driven by business needs and productivity. There's no such driver for the jump to Phoenix/Elixir. Elixir code is more verbose (fewer instance variables; all arguments are passed explicitly to functions).

My conclusions

A reason why this discussion was somewhat shallow and pointless is that those two worlds have different problems.

Elixir solves a lot of technical scaling problems thanks to Erlang's virtual machine. Such problems are only a small part of what typical Ruby problem solvers deal with on a daily basis. Hearing Elixir and Ruby on Rails developers talk past each other was probably the first sign that there is no single hyped technology right now. Each problem can be addressed by many tools and frameworks.

[1] Wrocloverb describes itself as the "best Java conference in the Ruby world", a deliberately tongue-in-cheek tagline.

Wrocloverb 2017 part 1

Wrocloverb is a single-track 3-day conference that takes place in Wrocław (Poland) every year in March.

Here's a subjective list of the most interesting talks from the first day:

# Kafka / Karafka (by Maciej Mensfeld)

Karafka is another library that simplifies Apache Kafka usage in Ruby. It lets Ruby on Rails apps benefit from horizontally scalable message buses in a pub-sub (publish/subscribe) or producer/consumer topology; a minimal command-line sketch follows the list below.

Why Kafka is (probably) a better message/task broker for your app:
- broadcasting is a real power feature of Kafka (HTTP lacks it)
- the author claims it's easier to operate than ZeroMQ/RabbitMQ
- it's namespaced with topics (similar to the Robot Operating System)
- Karafka is presented as a great replacement for ruby-kafka and Poseidon
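
To make the pub-sub idea concrete, here's a minimal sketch (mine, not from the talk) using the stock Kafka command-line tools; the topic name and message are made up, and it assumes a 2017-era Kafka broker with ZooKeeper running locally:

# create a topic with 3 partitions
kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 3 --topic orders

# publish one message to the topic
echo '{"order_id":1}' | kafka-console-producer.sh \
  --broker-list localhost:9092 --topic orders

# read the whole topic; every consumer group gets its own full copy of the
# stream, which is the "broadcasting" feature mentioned above
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic orders --from-beginning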



# Machine Learning to the Rescue (Mariusz Gil)


This talk was devoted to the author's Machine Learning success (and failure) stories.

The author underlined that Machine Learning is a process, and proposed the following workflow:


  1. define a problem
  2. gather your data
  3. understand your data
  4. prepare and condition the data
  5. select & run your algorithms
  6. tune the algorithms' parameters
  7. select the final model
  8. validate the final model (test using production data)

Mariusz described a few ML problems that he has dealt with in the past. One of them was a project designed to estimate the cost of a code review. He outlined the process of tuning the input data. Here's what comprised the input for the code review cost estimation:
  • number of lines changed
  • number of files changed
  • efferent coupling
  • afferent coupling
  • number of classes
  • number of interfaces
  • inheritance level
  • number of method calls
  • LLOC metric (logical lines of code)
  • LCOM metric (lack of cohesion of methods, i.e. whether the single responsibility principle is followed)

# Spree lightning talk by sparksolutions.co

One of the lightning talks was devoted to Spree. Here's some interesting recent data from the Spree world:

  • Spree has about 700 contributors
  • it's very modular
  • it's API-driven
  • it's one of the biggest repos on GitHub
  • it has a very large number of extensions
  • it drives thousands of stores worldwide
  • Spark Solutions is a maintainer
  • popular companies that use Spree: GoDaddy, Goop, Casper, Bonobos, Littlebits, Greetabl
  • it supports Rails 5, Rails 4.2, and Rails 3.x

The author also released the newest stable version, 3.2.0, during the talk.

Half-day GlusterFS training in Selangor, Malaysia

On January 21, 2017, I had an opportunity to join a community-organized training session focused on GlusterFS, an open source distributed network filesystem. The training was not strictly structured; various experts shared their knowledge, and GlusterFS was introduced to those who were new to it. The first session was delivered by Mr Adzmely Mansor from NexoPrima, who shared a bit of his view on GlusterFS and related technologies.

Mr Haris, a freelance Linux expert, later led a GlusterFS technical class. We created two virtual machines (using VirtualBox) to understand how GlusterFS works in a hands-on scenario, with Ubuntu 16.04 as the guest OS. We used DigitalOcean's GlusterFS tutorial as a base of reference. The commands below roughly detail what we did during the training.

In GlusterFS the unit of storage is called a "brick", and a volume can span many bricks across nodes. As Ubuntu already had the related packages in its repository, we could simply use apt-get for the installation. Our class notes were loosely based on DigitalOcean's GlusterFS article here. (Note: the article was based on Ubuntu 12.04, so some of its steps could be omitted.)

The GlusterFS packages could be installed as a superuser with the following command:

apt-get install glusterfs-server

Since we were using bridged networking for the VMs during the demo, we simply edited /etc/hosts in each VM so they could reach each other by hostname instead of typing IP addresses manually.

root@gluster2:~# cat /etc/hosts|grep gluster
192.168.1.11	gluster1
127.0.0.1	gluster2

Here we probe the remote host to verify that it is reachable:

root@gluster2:~# gluster peer probe gluster1
peer probe: success. Host gluster1 port 24007 already in peer list
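
Cluster membership can be double-checked at any point with another standard GlusterFS command (this wasn't part of our class notes):

gluster peer status

It lists each known peer with its hostname, UUID, and connection state.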

The following commands create the storage volume, replicated across both nodes. Later, whatever we put into the mounted volume will also be reachable on the other gluster node.

# create a 2-way replicated volume from the /data directory on each node
gluster volume create datastore1 replica 2 transport tcp gluster1:/data gluster2:/data
# "force" is needed here because the bricks live on the root partition
gluster volume create datastore1 replica 2 transport tcp gluster1:/data gluster2:/data force
gluster volume start datastore1

Most of these steps can be found in the link I gave above. Now let's see what happens once the mounting part is done.
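
The mount step itself isn't shown in my notes; a minimal sketch of it, following the DigitalOcean article's pattern (the glusterfs-client package name and the mount point are assumptions on my part), would be:

apt-get install glusterfs-client
mkdir /datastore1
mount -t glusterfs gluster1:/datastore1 /datastore1

Files written under the mount point then go through GlusterFS and are replicated to every brick in the volume.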

root@gluster2:~# cd /datastore1/
root@gluster2:/datastore1# touch blog
root@gluster2:/datastore1# ls -lth
total 512
-rw-r--r-- 1 root root  0 Mar 14 21:33 blog
-rw-r--r-- 1 root root 28 Jan 21 12:15 ujian.txt

The same output can be retrieved from gluster1:

root@gluster1:/datastore1# ls -lth
total 512
-rw-r--r-- 1 root root  0 Mar 14 21:33 blog
-rw-r--r-- 1 root root 28 Jan 21 12:15 ujian.txt


Mr. Adzmely explaining the overall picture of GlusterFS



Mr. Haris explaining the technical implementation of GlusterFS

In terms of applications, redundancy-based storage is good for situations where you have a file being updated on several servers and you need to ensure the file is available for retrieval even if one of the servers is down. One audience member shared his experience deploying GlusterFS at his workplace (a university) to handle new student registration intake. If you have ever used Samba or NFS, this is somewhat similar, but GlusterFS is much more advanced. I recommend additional reading here.

Shell efficiency: mkdir and mv

Little tools can be a nice improvement. Not everything needs to be thought-leaderish.

For example, once upon a time in my Unix infancy I didn't know that mkdir has the -p option to make intervening directories automatically. So back then, in order to create the path a/b/c/ I would've run: mkdir a; mkdir a/b; mkdir a/b/c when I could instead have simply run: mkdir -p a/b/c.

In working at the shell, particularly on my own local machine, I often find myself wanting to move one or several files into a different location, to file them away. For example:

mv -i ~/Downloads/Some\ Long\ File\ Name.pdf ~/some-other-long-file-name.tar.xz ~/archive/new...

at which point I realize that the subdirectory of ~/archive that I want to move those files into does not yet exist.

I can't simply move to the beginning of the line and change mv to mkdir -p without removing my partially-typed ~/archive/new....

I can go ahead and remove that, and then after I run the command I have to change the mkdir back to mv and add back the ~/archive/new....

In one single day I found I was doing that so often that it became tedious, so I re-read the GNU coreutils manpage for mv to see if there was a relevant option I had missed or a new one that would help. And I searched the web to see if a prebuilt tool is out there, or if anyone had any nice solutions.

To my surprise I found nothing suitable, but I did find some discussion forums full of various suggestions, many brushoffs, and ill-conceived ideas that either didn't work for me or seemed much overengineered.

The solution I came up with was very simple. I've been using it for a few months and am happy enough with it to share it and see if it helps anyone else.

In zsh (my main local shell) add to ~/.zshrc:

mkmv() {
    # $argv[-1] is the last argument: the destination directory
    mkdir -p -- "$argv[-1]"
    mv "$@"
}

And in bash (which I use on most of the many servers I access remotely) add to ~/.bashrc:

mkmv() {
    # ${!#} expands to the last positional parameter: the destination directory
    mkdir -p -- "${!#}"
    mv "$@"
}

To use: Once you realize you're about to try to move files or directories into a nonexistent directory, simply go to the beginning of the line (^A in standard emacs keybindings) and type mk in front of the mv that was already there:

mkmv -i ~/Downloads/Some\ Long\ File\ Name.pdf ~/some-other-long-file-name.tar.xz ~/archive/new...

It makes the directory (or directories) and then completes the move.

There are a few important considerations that I didn't foresee in my initial naive implementation:

  • Having the name be somethingmv meant less typing than something requiring me to remove the mv.
  • For me, it needs to support not just moving one thing to one destination, but rather a whole list of things. That meant accessing the last argument (the destination) for the mkdir.
  • I also needed to allow through arguments to mv such as -i, -v, and -n, which I often use.
  • The -- argument to mkdir ensures that we don't accidentally end up with any other options and that we can handle a destination with a leading - (however unlikely that is).
  • The mv command needs to have a double-quoted "$@" so that the original parameters are each expanded into double-quoted arguments, allowing for spaces and other shell metacharacters in the paths. (See the zsh and bash manpages for details on the important difference in behavior of "$@" compared to "$*" and either of them unquoted; a short demo follows this list.)
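
To see that quoting difference in action, here's a tiny bash demo (show_args is a hypothetical name, just for illustration):

show_args() {
    printf '<%s>\n' "$@"   # quoted: one line per original argument
    printf '<%s>\n' $*     # unquoted: arguments get re-split on whitespace
}
show_args "a b" c
# "$@" prints <a b> then <c>; $* prints <a>, <b>, then <c>

With a destination path containing a space, the unquoted form would hand mv entirely wrong arguments.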

This doesn't support GNU extensions to mv such as a --target-directory that precedes the source paths. I don't use that interactively, so I don't mind.

Because this is such a small thing, I avoided for years bothering to set it up. But now that I use it all the time I'm glad I have it!

360° Panoramic Video on Liquid Galaxy



End Point’s Liquid Galaxy recently had an exciting breakthrough! We can now show 360° panoramic video effectively and seamlessly on our Liquid Galaxy systems.

Since our integration of seamless 360° panoramic video, our marketing and content teams have been loading up our system with the best 360° panoramic videos. The Liquid Galaxy system at our headquarters office has 360° panoramic videos from National Geographic, The New York Times, Google Spotlight, gaming videos, and more.


The technical challenge with panoramic video is to get multiple instances of the video player app to work in sync. On the Liquid Galaxy, you're actually seeing 7 different video player apps all running at the same time and playing the same video file, but with each app showing a slightly adjusted slice of the 360° panoramic video. This synchronization and angle-adjustment is at the heart of the Liquid Galaxy platform, and allows a high-resolution experience and surrounding immersion that doesn't happen on any other video wall system.

We anticipate that our flawless 360° panoramic video will resonate with many industries, one of which is the gaming industry. Below we've included a video showing how 360° gaming videos appear on Liquid Galaxy. Combined with all the other exciting capabilities of Liquid Galaxy, we anticipate that the ability to view angle-adjusted and 3D-immersive 360° video on the system will be a huge crowd-pleaser.

If you are interested in learning more about 360° panoramic video on Liquid Galaxy, don’t hesitate to contact us or visit our Liquid Galaxy website for more information.