Web Security Services Roundup

Security is often difficult to get right, especially when reliable and up-to-date information is hard to find and the process of testing can be confusing and complicated. We have a lot of history and experience working on the security of websites and servers, and we’ve found many tools and websites very helpful. Here is a collection of them.

Server-side security

There are a number of tools available that can scan your website to check for common vulnerabilities and the quality of SSL/TLS configuration, as well as give great tips on how to improve security for your website.

  • Qualys SSL Labs Server Test takes a simple domain name, performs a series of tests from a variety of clients, and returns a simple letter grade (from A+ down to F) indicating the quality of your SSL/TLS configuration, as well as a detailed summary for a host of configuration options. It covers certificate keys and algorithms; TLS and SSL configurations; cipher suites; handshakes on a wide variety of platforms including Android, iOS, Chrome, Firefox, Internet Explorer and Edge, Safari, and others; common protocols and vulnerabilities; and other details.
  • HTTP Security Report does a similar scan, but provides a much more simplified summary of a website, with a numeric score from 0 to 100. It gives a simple, easy to understand list of results, with a green check mark or a red X to indicate whether something is configured for security or not. It also provides short paragraphs explaining settings and recommended configurations.
  • HT-Bridge SSL/TLS Server Test is very similar to Qualys SSL Labs Server Test, but provides some valuable extra information, such as PCI-DSS, HIPAA, and NIST guidelines compliance, as well as industry best practices and basic analysis of third-party content.
  • securityheaders.io is another letter-grade scan, but focuses on server headers only. It provides simple explanations for each recommended server header and links to guides on how to configure them correctly.
  • Observatory by Mozilla scans and gives information on HTTP, TLS, and SSH configuration, as well as simple summaries from other websites, including Qualys, HT-Bridge, and securityheaders.io as covered above.
  • SSL-Tools is focused on SSL and TLS configuration and certificates, with tools to scan websites and mail servers, check for common vulnerabilities, and decode certificates.
  • Microsoft Site Scan performs a series of simple tests, focused more on general website guidelines and best practices, including tests for outdated libraries and plugins which can be a security issue.
  • testssl.sh, the final website scanning tool I’ll cover, is a more advanced bash script that covers many of the same things these other websites do, but provides lots of options for fine-tuning test methods, returned information, and testing abnormal configurations. It’s also open source and doesn’t rely on any third parties.
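
For example, a testssl.sh scan can be as simple as cloning the repository and pointing the script at your site (a quick sketch; www.example.com stands in for your own domain):

# Fetch the script and scan a site; nothing to install beyond bash and openssl
git clone --depth 1 https://github.com/drwetter/testssl.sh.git
cd testssl.sh
./testssl.sh https://www.example.com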

These websites provide valuable information on SSL/TLS which can be used to create a secure, fast, and functional server configuration:

  • Security/Server Side TLS on the Mozilla wiki is a fantastic page which provides great summaries, recommendations, and reference information on many TLS topics, including handshakes, OCSP Stapling, HSTS, HPKP, certificate ciphers, and common attacks.
  • Mozilla SSL Configuration Generator is a simple tool that generates boilerplate server configuration files for common servers, including Apache and Nginx, and specific server and OpenSSL versions. It also allows you to target “modern”, “intermediate”, or “old” clients and servers, which will give the best configuration possible for each level.
  • Is TLS Fast Yet? is a great, simple, and to-the-point informational website which explains why TLS is so important and how to improve its performance so it has the smallest impact possible on your website’s speed.
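
If you want to inspect a handshake by hand before reaching for these sites, OpenSSL’s s_client subcommand will print the certificate chain and the negotiated protocol and cipher (example.com stands in for your own domain):

# -servername sends SNI so the right certificate is served;
# redirecting stdin from /dev/null ends the session after the handshake.
openssl s_client -connect example.com:443 -servername example.com </dev/null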

Client-side security

These websites provide information and diagnostic tools to ensure that you are using a secure browser.

  • badssl.com gives a list of links to subdomains with various SSL configurations, including badly configured SSL, so you can have a good idea of what a well-configured website looks like versus one with errors in configuration, weak ciphers or key exchange protocols, or insecure HTTP forms (see the curl example after this list).
  • IPv6 Test checks your network and browser for IPv6 support, showing you your ISP, reverse DNS pointers, both your IPv4 and IPv6 addresses, and giving an idea of when your computer or network may have problems with dual-stack IPv4 + IPv6 remote hosts or DNS.
  • How’s My SSL? and Qualys Labs SSL Client Test both check your browser for support of SSL/TLS versions, protocols, ciphers, and features, as well as susceptibility to common vulnerabilities.
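
You can also exercise the badssl.com endpoints mentioned above from the command line; curl should refuse the deliberately broken certificates. (expired and self-signed are among the published test subdomains; check the badssl.com index for the current list.)

curl -I https://expired.badssl.com        # should fail: certificate has expired
curl -I https://self-signed.badssl.com    # should fail: self-signed certificate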

General Tools

  • NeverSSL is a simple website that promises to never use SSL. Many public wifi networks require you to go through a payment or login page, but attempts to reach a well-secured website such as Google, Facebook, Twitter, or Amazon can be blocked before that page appears. NeverSSL provides an easy and simple way to trigger that login page.
  • crt.sh is a search engine for public TLS certificate information. It provides a history of certificates for a given domain name, with information including issuer and issue date, as well as an advanced search (see the example after this list).
  • Digital Attack Map is an interactive map showing DDoS attacks across the world.
  • The Internet-Wide Scan Data Repository is a public archive of scans across the internet, intended for research and provided by the University of Michigan Censys Team.
  • take-a-screenshot.org is a simple website that shows how to take a screenshot on a variety of operating systems and desktop environments. It’s a fantastic tool to help less technically-minded people share their screens or issues they’re having.
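
As a quick example of the crt.sh search mentioned above, the service can also be queried programmatically; appending output=json to a search URL returns the certificate history as JSON (piping through jq is just one way to inspect it):

# List certificates logged for a domain; jq pretty-prints the first record
curl -s 'https://crt.sh/?q=example.com&output=json' | jq '.[0]'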

Custom eCommerce Development

"Custom eCommerce" means different things to different people and organizations. For some eCommerce shopping cart sites that pump out literally hundreds of websites a year, it may mean you get to choose from a dizzying array of "templates" that can set your website apart from others.
For others, it may be a slightly more involved arrangement where you can "create categories" to group display of your products on your website ... after you have entered your products into a prearranged database schema.
There are many levels of "custom" out there. Generally speaking, the closer you get to true "custom", the more accurate the term "development" becomes.
It is very important for your business that you decide what fits your needs, and that you match your needs to a platform or company that can provide appropriate services. As you can imagine, this will depend entirely on your business.

Example scenarios

For example, a small one- or two-person business that does fulfillment of online orders may be well suited for a pre-built approach, where you pay a monthly fee to simply log into an admin, add your products and some content, and the company does the rest. It handles all of the "details."
A slightly larger company with maybe 5-10 employees, and possibly a staff member with some understanding of websites, may choose to purchase a package that requires more customization and company-related input, and perhaps even design or a choice of templates.
From this level up, decisions become far more important and complex. For example, even though the previously described company may be perfectly suited with the choice described, if sales are expected to increase dramatically in the near future, or if the company is in a niche market where custom accounting or regulations require specific handling of records, a more advanced approach may be warranted.

What we do

The purpose of this post is not to give you guidelines as to what sort of website service you should buy, or consultancy you should hire for your company. Rather it is to point out some of the types of things that we at End Point do for companies that need a higher level of custom eCommerce development. In fact, the development we do is not limited to eCommerce.

We offer a full range of business consultancy and IT development services. We can guide you through many areas of your business development. True, we primarily provide services to companies that sell things on the web. But we also provide support for inventory management systems in your warehouses, accounts receivable / payable integration with your websites, management of your POS (point of sale) machines, strategic pricing for seasonal products with expiry dates, and the list goes on.

Real-life scenarios

The following is a real-life example of services we have provided for one client.

Case Study: Vervante

Consultant vs Service

Hopefully these real-life scenarios serve as an example of how complex business needs can be, and why an out-of-the-box "eCommerce" website will not work in every circumstance. When you find a good business consultant, you will know it. A consultant will not try to make your business fit into their template; they will listen to you and then assemble and tailor products to fit your business.
Regardless of the maturity of your business, very seldom will a single "system" or "website" cover all of your business needs. More likely it will be a collection of systems, and which systems you choose and how they work together will determine success or failure. The more mature your business, the broader the scope of systems required to support its growing requirements.
So whether you are a sole proprietor getting started with your business, or a CTO tasked with organizing and optimizing the many systems in your organization, understanding what type of service or partner you need is the first step. In the future I will spotlight a few other examples of how we have assisted businesses in growing and improving how they do business.

Client Case Study: Vervante - Publishing, Production and Fulfillment Services

A real-life scenario

The following is a real-life example of services we have provided for one of our clients.
Vervante Corporation provides print-on-demand and order fulfillment services for thousands of customers, in their case "authors". Vervante needed a way for these authors to keep track of their products: essentially, an inventory management system. So we designed a complete system from the ground up that gives Vervante's authors many custom functions that simply are not offered in a pre-built package anywhere.
This is also a good time to mention that you should always view your web presence, in fact your business itself, as a process, not a one-time "setup". Your products will change, your customers will change, the web will change, everything will change. If you want your business to be successful, you will change.

Some Specifics

While it is beyond the scope of this case study to describe all of the programs that were developed for Vervante, it is valuable to sample just a few of the areas to understand how diverse a single business can be. Here are a few of the functions we have built from scratch over several years to provide Vervante, their authors, and even their vendors with efficient processes for their daily business needs.

Requirements

  1. Author Requirement - First, in some cases the best approach to a problem is to use someone else's solution! Vervante's authors have large data files that are converted to a product, and then shipped on demand as the orders come in. So we initially provided a custom file transfer process so that customers could directly upload their files to a server we set up for Vervante. Soon Vervante's rapid growth outpaced the efficacy of this system, so we investigated and determined that the most efficient and cost-effective approach was to incorporate a third-party service. We recommended a well-known file transfer service and wrote a program to communicate with its API. Now a client can easily describe and upload large files to Vervante.
    [Video: View File Save Process]
  2. Storage Requirement - The remote storage of these large files caused Vervante dramatic inefficiency in access times, as they worked daily on these files to format, organize, and create product masters. So we needed to provide Vervante with a local file server on their local network (LAN), where the files could be rapidly accessed and manipulated.
    This was a challenge, as Vervante did not have IT personnel on site. So we purchased an appropriate server, set up everything in our offices, and shipped the complete server to them! They plugged the server into their local network, and with a long phone call, we had the server up and running and remotely managed.
  3. Author Requirement - On the website, the authors first wanted to see what they had in inventory. Some customers provided Vervante with product components that needed to be included with a complete product, while others relied on Vervante to build all components of their products. They also requested a way to set minimum inventory stock requirements.
    So we built an interface that would allow authors to:
    (a) See their current stock levels for all products,
    (b) View outstanding orders for these items,
    (c) Set minimum inventory levels that they would like to have maintained at the fulfillment warehouse.
    For example a finished product may consist of a book, a CD, and a DVD. A customer may supply the CD and require Vervante to produce the book and the DVD "on demand" for the product. We created a system that tracked all items at a "base" item level, and then allowed Vervante to "build" products with as many of these "base" items as necessary, to create the final product. The base items could be combined to create an item, and two or more items could be combined to produce yet another item. It is a recursive item inventory system, built from scratch specifically for Vervante.
  4. Vervante Vendor (fulfillment warehouse) Requirement - Additionally, the fulfillment warehouse that receives, stores, builds and ships end user products, needed access to this system. They had several needs including:
    • Retrieving pending orders for shipment
    • Creating packing / pick slips for the orders
    • Creating shipping labels for orders
    • Managing returns
    • Inputting customer-supplied inventory
    • Inputting fulfillment-created inventory
    In our administrative user interface for the fulfillment house, we developed a series of customer specific processes to address the above needs. Here is a high level example of how a few of the items on the list above are achieved:
    • The fulfillment house logs into the user admin first thing in the morning, and prints the outstanding orders.
    • The "orders" are formatted similar to a packing slip, and each slip has all line items of the order, and a bar code imprinted on the slip.
    • This document is used as a "pick" slip, and is placed in a "pick" basket. The user then goes through the warehouse, gathers the appropriate items, and when complete the order is placed on a feed belt to the shipper location.
    • When the basket lands in front of the shipper, that person checks the contents of the basket against the slip, and then uses a bar code scanner to scan the order. That scan triggers a query into our system that returns all applicable shipping data into an Endicia or UPS system.
    • A shipping label is created, and the shipping cost and tracking information is returned to our system.
    • Additionally the inventory is decremented accordingly when the order receives a shipping label and tracking number.
  5. Requirements: administrative / accounting - Vervante also needed an administrative / accounting arm, designed to control all of the accounting functions such as:
    • Recording customers' fulfillment charges
    • Recording customers' sales (Vervante sells product for the customers as well as fulfilling outside orders)
    • Determining fulfillment vendor fees and payments
    • Tracking shipping costs
    • Monthly billing of all customers
    • Monthly payments for all customers
    • Interface with in-house accounting systems and keeping systems in sync
    • Tracking and posting outside order transactions
The processes described above are just a few of those we developed from scratch and matched to Vervante's needs, and they are only a tiny portion of their system.

Last, but not least

Oh, and one other interesting fact: When Vervante first came to us several years ago, they had fewer than 20 customers. Today, they provide order fulfillment and print on demand services for nearly 4000 customers. So when we say to plan ahead for growth, we have experience in that area.

How to split Git repositories into two

Ever wondered how to split your Git repo into two repos?

[Diagram: a source repo containing dir1 through dir8, split into repo A (dir3, dir4, dir7) and repo B (dir1, dir2, dir5, dir8)]

First you need to find out what files and directories you want to move to separate repos. In the above example we're moving dir3, dir4 and dir7 to repo A, and dir1, dir2, dir5 and dir8 to repo B.

Steps

What you need to do is go through each and every commit in Git history, for every branch, and filter out commits that modify directories you don't care about in your new repo. The only flaw of this method is that it leaves those empty, filtered-out commits in the history (though see the note on --prune-empty below).

Track all branches

First we need to start tracking all branches locally:

# Create a local tracking branch for every remote branch except HEAD and master
for i in $(git branch -r | grep -vE "HEAD|master" | sed 's/^[ ]\+//'); do
  git checkout --track "$i"
done

Then copy your original repo to two separate dirs: repo_a and repo_b.

cp -a source_repo repo_a
cp -a source_repo repo_b

Filter the history

The following command deletes all dirs that exclusively belong to repo B, leaving us with repo A. Filtering is not limited to directories; you can also provide relative paths to files.

cd repo_a
git filter-branch --index-filter 'git rm --cached -r dir1 dir2 dir5 dir8 || true' -- --all

And the mirror image, removing repo A's dirs to create repo B:

cd repo_b
git filter-branch --index-filter 'git rm --cached -r dir3 dir4 dir7 || true' -- --all

Note that the `|| true` prevents git from failing in the early stages of the history, where the dirs being filtered out did not yet exist.
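
Git also has flags that tidy this up: git rm's --ignore-unmatch replaces the || true trick, and filter-branch's --prune-empty drops the emptied commits mentioned earlier. Here is the repo A command rewritten with both (a sketch; verify the flags on your Git version):

cd repo_a
git filter-branch --prune-empty --index-filter \
  'git rm --cached -r --ignore-unmatch dir1 dir2 dir5 dir8' -- --all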

Look at the list of branches once again (in both repos):

git branch -l

Set new origins and push

In every repo, we need to remove the old origin and set up a new one. After that's done, we're ready to push.

Remove old origin:

git remote rm origin

Add new origin:

git remote add origin git@github.com:YourOrg/repo_a.git

Push all tracked branches:

git push origin --all
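
Note that --all pushes branches only; if the original repo had tags you want to carry over, push them separately:

git push origin --tags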

That's it!

How to check process duration in Linux with the "ps" command

Sometimes we want to know how long a certain process has been running. It turns out the "ps" command can easily assist us with that. According to the "ps" manual, etime reports the elapsed time in [[DD-]hh:]mm:ss format, while etimes reports it in seconds.

From "ps" manpage:

etime       ELAPSED   elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
etimes      ELAPSED   elapsed time since the process was started, in seconds.

To display the elapsed time in [[DD-]hh:]mm:ss format:

ps -p "pid" -o etime

or in seconds:

ps -p "pid" -o etimes

In both cases, replace "pid" with your intended process ID.

To report the output more usefully, we can combine -o etime or -o etimes with another output field, command, which shows the executed command along with its absolute path:

ps -p "28590" -o etime,command
ELAPSED COMMAND
21:45 /usr/bin/perl ./fastcgi-wrapper.pl 7999

We can also get the start date of the process' execution:

najmi@ubuntu-ampang:~$ ps -p 21745 -o etime,command,start
    ELAPSED COMMAND                      STARTED
 1-19:47:45 /usr/lib/firefox/firefox      Aug 02


What if we do not want to look up the PID manually, and (since we are sure of its name) just use the name of the running application instead? We can simply use pgrep or pidof:

najmi@ubuntu-ampang:~$ ps -p $(pgrep firefox) -o pid,cmd,start,etime
  PID CMD                          STARTED     ELAPSED
21745 /usr/lib/firefox/firefox      Aug 02  2-04:29:36

najmi@ubuntu-ampang:~$ ps -p $(pidof firefox) -o pid,cmd,start,etime
  PID CMD                          STARTED     ELAPSED
21745 /usr/lib/firefox/firefox      Aug 02  2-04:29:42

What if the command spawned many processes? Take the Chrome browser as an example:
najmi@ubuntu-ampang:~$ ps -p $(pidof chrome) -o pid,comm,cmd,start,etime
error: process ID list syntax error

Usage:
 ps [options]

 Try 'ps --help <simple|list|output|threads|misc|all>'
  or 'ps --help <s|l|o|t|m|a>'
 for additional help text.

For more details see ps(1).

The best way I have found so far is to loop over the PIDs. pidof seems more accurate at matching the exact application name (string) that we feed into it.

With pgrep:
najmi@ubuntu-ampang:~$ for i in `pgrep chrome`;do ps -p $i -o pid,comm,cmd,start,etime|tail -n +2;done
 2255 chrome          /opt/google/chrome/chrome - 08:05:43    02:55:17
 4990 chrome          /opt/google/chrome/chrome - 10:39:16       21:44
 5567 chrome          /opt/google/chrome/chrome - 10:53:13       07:47
 9448 chrome          /opt/google/chrome/chrome -   Jul 31  3-12:25:08
10033 chrome          /opt/google/chrome/chrome     Jul 27  8-10:43:42
10044 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:42
10050 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:42
10187 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:39
10234 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:37
10236 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:43:37
19440 chrome          /opt/google/chrome/chrome - 22:30:34    12:30:26
20229 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:51:25
20514 chrome          /opt/google/chrome/chrome - 22:52:25    12:08:35
20547 chrome          /opt/google/chrome/chrome - 22:52:36    12:08:24
21009 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:27:11
22458 chrome          /opt/google/chrome/chrome -   Jul 27  8-03:44:07
22474 chrome-gnome-sh /usr/bin/python3 /usr/bin/c   Jul 27  8-03:44:07
23681 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:33:45
23691 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:33:45
23870 chrome          /opt/google/chrome/chrome - 00:15:15    10:45:45
24544 chrome          /opt/google/chrome/chrome - 00:40:17    10:20:43
25116 chrome          /opt/google/chrome/chrome - 00:51:31    10:09:29
25466 chrome          /opt/google/chrome/chrome - 00:59:55    10:01:05
29060 chrome          /opt/google/chrome/chrome - 02:15:42    08:45:18
With pidof:
najmi@ubuntu-ampang:~$ for i in `pidof chrome`;do ps -p $i -o pid,comm,cmd,start,etime|tail -n +2;done
29060 chrome          /opt/google/chrome/chrome - 02:15:42    08:47:40
25466 chrome          /opt/google/chrome/chrome - 00:59:55    10:03:27
25116 chrome          /opt/google/chrome/chrome - 00:51:31    10:11:51
24544 chrome          /opt/google/chrome/chrome - 00:40:17    10:23:05
23870 chrome          /opt/google/chrome/chrome - 00:15:15    10:48:07
23691 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:36:07
23681 chrome          /opt/google/chrome/chrome -   Aug 03  1-03:36:07
22458 chrome          /opt/google/chrome/chrome -   Jul 27  8-03:46:29
21009 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:29:33
20547 chrome          /opt/google/chrome/chrome - 22:52:36    12:10:46
20514 chrome          /opt/google/chrome/chrome - 22:52:25    12:10:57
20229 chrome          /opt/google/chrome/chrome -   Aug 03  1-09:53:47
19440 chrome          /opt/google/chrome/chrome - 22:30:34    12:32:48
10236 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:45:59
10234 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:45:59
10187 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:01
10050 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:04
10044 chrome          /opt/google/chrome/chrome -   Jul 27  8-10:46:04
10033 chrome          /opt/google/chrome/chrome     Jul 27  8-10:46:04
 9448 chrome          /opt/google/chrome/chrome -   Jul 31  3-12:27:30
 5567 chrome          /opt/google/chrome/chrome - 10:53:13       10:09
 4990 chrome          /opt/google/chrome/chrome - 10:39:16       24:06
 2255 chrome          /opt/google/chrome/chrome - 08:05:43    02:57:39
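
Looping works, but ps can also select processes by command name directly with the -C option, which avoids the loop (and pgrep/pidof) entirely and prints the same columns for every matching process:

ps -C chrome -o pid,comm,cmd,start,etime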

There is another tool, called stat, which records the timestamps of a file, for a slightly different purpose. Stay tuned for the next blog post!

JBoss 4/5/6 to WildFly migration tips

Introduction

Recently, we took over a big Java project that ran on the old JBoss 4 stack. Since we know how dangerous outdated software is for a business, we and our client agreed that the most important task was to upgrade the server stack to the latest WildFly version.

It’s definitely not an easy job, but it’s a worthwhile investment to sleep well and not worry about software problems.

This time it was even more work because of a complicated and undocumented application. That’s why I wanted to share some tips and resolutions for issues I encountered.

Server configuration

You can set up the server using the multiple configuration files in the standalone/configuration directory.

I recommend using the standalone-full.xml file for most setups; it contains the default full stack, as opposed to standalone.xml.

You can also set up application-specific configuration using various configuration XML files (https://docs.jboss.org/author/display/WFLY10/Deployment+Descriptors+used+In+WildFly). Remember to keep the application-specific configuration on the classpath.
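
For example, to boot WildFly with the full profile instead of the default one (standalone.sh also accepts the short -c form; paths assume the standard distribution layout):

./bin/standalone.sh --server-config=standalone-full.xml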

Quartz as a message queue

The Quartz library was used as a native message queue in previous JBoss versions. If you struggle trying to use its resource adapter with WildFly, just skip it. It’s definitely too much work, even if it’s possible.

In the latest WildFly version (10.1 as of today) the default message queue library is ActiveMQ. It has almost the same API as the old Quartz, so it’s easy to use.

Quartz as a job scheduler

We had multiple cron-like jobs to migrate as well. All the jobs used Quartz to schedule runs.

The best solution here is to update Quartz to the latest version (yes!) and use the new API (http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06.html) to create CronTriggers for the jobs.

// Quartz 2.x fluent API; assumes myJobKey identifies an already-registered JobDetail
import static org.quartz.TriggerBuilder.newTrigger;
import static org.quartz.CronScheduleBuilder.cronSchedule;

Trigger trigger = newTrigger()
    .withIdentity("trigger3", "group1")
    .withSchedule(cronSchedule("0 42 10 * * ?"))
    .forJob(myJobKey)
    .build();

You can use the same cron syntax (e.g. 0 42 10 * * ?) as in the 12-year-old Quartz version. Yes!

JAR dependencies

In WildFly you can set up an internal module for each JAR dependency. It can be pretty time-consuming to create declarations for more than 100 libraries (exactly 104 in our case). We decided to use Maven to handle our application’s dependencies and skip declaring modules in WildFly. Why? In our opinion it’s better to encapsulate everything in an EAR file and keep the WildFly configuration minimal, as we won’t use this server for any other application in the future.

Just keep your dependencies on the classpath and you will be fine.

JBoss CLI

I really prefer the bin/jboss-cli.sh interface to the web interface to handle deployments. It’s a powerful tool and much faster to work with than clicking through the UI.
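
For example, connecting and deploying an EAR from the shell might look like this (a sketch; the EAR path is a placeholder, and --force redeploys an application that already exists):

./bin/jboss-cli.sh --connect --command="deploy /path/to/app.ear --force"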

JNDI path

If you can’t access your JNDI definition, try to use the global namespace. Up to Java EE 6, developers defined their own JNDI names, and these names had global scope. This doesn’t work anymore. To access a previously globally scoped name, use this pattern: java:global/OLD_JNDI_NAME.

The java:global namespace was introduced in Java EE 6.
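
If you’re unsure what names are actually bound, the naming subsystem can dump the whole JNDI tree through the CLI (an operation worth verifying on your WildFly version):

./bin/jboss-cli.sh --connect --command="/subsystem=naming:jndi-view"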

Reverse proxy

To configure a WildFly application with a reverse proxy you need to, of course, set up a virtual host with a reverse proxy declaration.

In addition, you must add an attribute to the server’s http-listener in the standalone-full.xml file. The attribute is proxy-address-forwarding and must be set to true.

Here is an example declaration for the Undertow subsystem (a sketch; the exact namespace version depends on your WildFly release):

<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    <server name="default-server">
        <!-- proxy-address-forwarding makes WildFly honor X-Forwarded-* headers -->
        <http-listener name="default" socket-binding="http" proxy-address-forwarding="true"/>
        <host name="default-host" alias="localhost"/>
    </server>
</subsystem>

Summary

If you’re considering an upgrade to WildFly, I can recommend it: it’s much faster than JBoss 4/5/6, scalable, and fully prepared for modern applications.

co:collective Doable Innovation Software

co:collective is a growth accelerator that works with leadership teams to conceive and execute innovation in the customer experience using a proprietary methodology called StoryDoing©.

Doable, one of co:collective’s recent innovations, is a cloud-based platform designed to empower employees to meaningfully contribute and collaborate on ideas that move their business forward. The tool allows companies to solicit ideas from employees at all levels of an organization, filter down those ideas, make decisions as a team, and then implement a project, all the while collaborating in a fun and easy-to-use application. Over 200 companies across multiple sectors use Doable to create new products, new features, and to problem-solve to keep their business growing.

co:collective engaged End Point’s front-end developer, Kamil Ciemniewski, to work with their in-house development team led by Tommy Dunn. Kamil joined the Doable effort to refactor the Doable application, which was moving from a Ruby on Rails application to an Angular frontend with a Ruby on Rails backend. Kamil has been working with Doable since March to complete the application rewrite, and the project has gone immensely well. Kamil brings extensive frontend and backend knowledge to co:collective, and together they’ve been able to refactor the site to be more efficient and powerful than before.