
End Point Liquid Galaxy at GEOINT Symposium

End Point Liquid Galaxy will be coming to San Antonio to participate in the GEOINT 2017 Symposium. We are excited to demonstrate our geospatial capabilities on an immersive and panoramic seven-screen Liquid Galaxy system. We will be exhibiting at booth #1012 from June 4-7.

On the Liquid Galaxy, complex data sets can be explored and analyzed in a 3D immersive fly-through environment. Presentations can highlight specific data layers combined with video, 3D models, and browsers for maximum communications efficiency. The end result is a rich, highly immersive, and engaging way to experience your data.

Liquid Galaxy’s extensive capabilities include ArcGIS, Cesium, Google Maps, Google Earth, LIDAR point clouds, realtime data integration, 360 panoramic video, and more. The system always draws huge crowds at conferences; people line up to try out the system for themselves.

End Point has deployed Liquid Galaxy systems around the world, including for many high-profile clients such as Google, NOAA, CBRE, the National Air & Space Museum, Hyundai, and Barclays. Our clients use our content management system to create immersive and interactive presentations that tell engaging stories to their users.

GEOINT is hosted and produced by the United States Geospatial Intelligence Foundation (USGIF). It is the nation’s largest gathering of industry, academia, and government, including the Defense, Intelligence, and Homeland Security communities as well as commercial, federal/civil, and state and local geospatial intelligence stakeholders.

We look forward to meeting you at booth #1012 at GEOINT. In the meantime, if you have any questions please visit our website or email ask@endpoint.com.

Age comparison in Bash for files and processes

Do you want your script to run a command only if the elapsed time for a given process is greater than X?

Well, bash does not inherently understand a time comparison like:

if [ 01:23:45 -gt 00:05:00 ]; then
    foo
fi

However, bash can compare timestamps of files using -ot and -nt for "older than" and "newer than", respectively. If the launch of our process includes creation of a PID file, then we are in luck! At the beginning of our loop, we can create a file with a specific age and use that for quick and simple comparison.

For example, if we only want to take action when the process we care about was launched more than 24 hours ago, try:

touch -t "$(date --date=yesterday +%Y%m%d%H%M.%S)" "$STAMPFILE"

Then, within your script loop, compare the PID file with the $STAMPFILE, like this:

if [ "$PIDFILE" -ot "$STAMPFILE" ]; then
    foo
fi

And of course if you want to be sure you're working with the PID file of a process which is actually responding, you can try to send it signal 0 to check:

if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    foo
fi
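
Putting the pieces together, here is a minimal sketch of how the whole loop might look (the PID file path and the foo command are placeholders for your own process and action):

#!/bin/bash

PIDFILE=/var/run/myprocess.pid   # placeholder: PID file created when the process launched
STAMPFILE=$(mktemp)              # scratch file to carry our reference timestamp

# Give the stamp file a timestamp of 24 hours ago
touch -t "$(date --date=yesterday +%Y%m%d%H%M.%S)" "$STAMPFILE"

# Act only if the process is still alive AND was launched more than 24 hours ago
if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    if [ "$PIDFILE" -ot "$STAMPFILE" ]; then
        foo   # placeholder for the command you want to run
    fi
fi

rm -f "$STAMPFILE"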

Postal code pain and fun

We do a lot of ecommerce development at End Point. You know the usual flow as a customer: select products, add them to the shopping cart, then check out. Checkout asks questions about the buyer, payment, and delivery, at least. Some online sales are for “soft goods”, downloadable items that don’t require a delivery address. But many online sales are still for physical goods to be delivered to an address, and for that, a postal code or zip code is usually required.

No postal code?

I say usually because there are some countries that do not use postal codes at all. An ecommerce site that expects to ship products to buyers in one of those countries needs to allow for an empty postal code at checkout time. Otherwise, customers may leave thinking they aren’t welcome there. The more creative among them will make up something to put in there, such as “00000” or “99999” or “NONE”.

Someone has helpfully assembled and maintains a machine-readable (in Ruby, easily convertible to JSON or other formats) list of the countries that don’t require a postal code. You may be surprised to see on the list such countries as Hong Kong, Ireland, Panama, Saudi Arabia, and South Africa. Some countries on the list actually do have postal codes but do not require them or commonly use them.

Do you really need the customer’s address?

When selling both downloadable and shipped products, it would be nice to not bother asking the customer for an address at all. Unfortunately even when there is no shipping address because there’s nothing to ship, the billing address is still needed if payment is made by credit card through a normal credit card payment gateway — as opposed to PayPal, Amazon Pay, Venmo, Bitcoin, or other alternative payment methods.

The credit card Address Verification System (AVS) allows merchants to ask a credit card issuing bank whether the mailing address provided matches the address on file for that credit card. Normally only two parts are checked: (1) the numeric part of the street address, for example, “123” if “123 Main St.” was provided; (2) the zip or postal code, normally only the first 5 digits for US zip codes. AVS often doesn’t work at all with non-US banks and postal codes.

Before sending the address to AVS, validating the format of postal codes is simple for many countries: 5 digits in the US (allowing an optional -nnnn for ZIP+4), and 4 or 5 digits in most other countries — see the Wikipedia List of postal codes in various countries for a high-level view. Canada is slightly more complicated: 6 characters total, alternating letters and digits, formally with a space in the middle, like K1A 0B1, as explained in Wikipedia’s components of a Canadian postal code.

So most countries’ postal codes can be validated in software with simple regular expressions, to catch typos such as transpositions and missing or extra characters.
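
For instance, here is a quick sketch of such checks in bash (the US and Canadian patterns follow the formats described above; a real checkout would choose the pattern based on the selected country):

#!/bin/bash

# US: 5 digits, optionally followed by -nnnn for ZIP+4
us_zip='^[0-9]{5}(-[0-9]{4})?$'

# Canada: letter-digit-letter, optional space, digit-letter-digit (e.g. K1A 0B1).
# Real Canadian codes also never use a few letters (D, F, I, O, Q, U), omitted here for brevity.
ca_postal='^[A-Za-z][0-9][A-Za-z] ?[0-9][A-Za-z][0-9]$'

code="$1"   # postal code passed as the first argument

if [[ $code =~ $us_zip ]]; then
    echo "matches the US zip code format"
elif [[ $code =~ $ca_postal ]]; then
    echo "matches the Canadian postal code format"
else
    echo "no match"
fi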

UK postcodes

The most complicated postal codes I have worked with are the United Kingdom’s, because they can be from 5 to 7 characters, with an unpredictable mix of letters and numbers, normally formatted with a space in the middle. The benefit they bring is that they encode a lot of detail about the address, and it’s possible to catch transposed-character errors that would be missed in a purely numeric postal code. The Wikipedia article Postcodes in the United Kingdom has the gory details.

It is common to use a regular expression to validate UK postcodes in software, and many of these regexes are to some degree wrong. Most let through many invalid postcodes, and some disallow valid codes.

We recently had a client get a customer report of a valid UK postcode being rejected during checkout on their ecommerce site. The validation code was using a regex that is widely copied in software in the wild:

[A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}

(This example removes support for the odd exception GIR 0AA for simplicity’s sake.)

The customer’s valid postcode that doesn’t pass that test was W1F 0DP, in London, which the Royal Mail website confirms is valid. The problem is that the regex above doesn’t allow for F in the third position, as that was not valid at the time the regex was written.

This is one problem with being too strict in validations of this sort: The rules change over time, usually to allow things that once were not allowed. Reusable, maintained software libraries that specialize in UK postal codes can keep up, but there is always lag time between when updates are released and when they’re incorporated into production software. And copied or customized regexes will likely stay the way they are until someone runs into a problem.

The ecommerce site in question is running on the Interchange ecommerce platform, which is based on Perl, so the most natural place to look for an updated validation routine is on CPAN, the Perl network of open source library code. There we find the nice module Geo::UK::Postcode which has a more current validation routine and a nice interface. It also has a function to format a UK postcode in the canonical way, capitalized (easy) and with the space in the correct place (less easy).

It also presents us with a new decision: Should we use the basic “valid” test, or the “strict” one? This is where it gets a little trickier. The “valid” check uses a regex approach that will still let through some invalid postcodes, because it doesn’t know what all the current valid delivery destinations are. The “strict” check uses a comprehensive list of all the “outcode” data — which, as you can see if you look at that source code, is extensive.

The bulkiness of that list, and its short shelf life — the likelihood that it will become outdated and reject a future valid postcode — makes strict validation checks like this of questionable value for basic ecommerce needs. Often it is better to let a few invalid postcodes through now so that future valid ones will also be allowed.

The ecommerce site I mentioned also does in-browser validation via JavaScript before ever submitting the order to the server. Loading a huge list of valid outcodes would waste a lot of bandwidth and slow down checkout loading, especially on mobile devices. So a more lax regex check there is a good choice.
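
For reference, a deliberately lax check along those lines might look like this (shown here in bash; the same regex works just as well in JavaScript for the in-browser check). The pattern is a common permissive form, not the module’s strict outcode list, and by design it will accept some postcodes that don’t actually exist:

#!/bin/bash

# Permissive UK postcode shape: one or two letters, a digit, an optional
# letter or digit, an optional space, then a digit and two letters,
# e.g. "W1F 0DP". (The odd exception GIR 0AA is again ignored.)
uk_postcode='^[A-Za-z]{1,2}[0-9][A-Za-z0-9]? ?[0-9][A-Za-z]{2}$'

if [[ $1 =~ $uk_postcode ]]; then
    echo "plausible UK postcode"
else
    echo "rejected"
fi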

When Christmas comes

There’s no Christmas gift of a single UK postal code validation solution for all needs, but there are some fun trivia notes in the Wikipedia page covering Non-geographic postal codes:

A fictional address is used by UK Royal Mail for letters to Santa Claus:

Santa’s Grotto
Reindeerland XM4 5HQ

Previously, the postcode SAN TA1 was used.

In Finland the special postal code 99999 is for Korvatunturi, the place where Santa Claus (Joulupukki in Finnish) is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi.

In Canada the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters come in to Santa Claus each Christmas, including from outside of Canada, and they are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code:

SANTA CLAUS
NORTH POLE H0H 0H0

In Belgium bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is:

Sinterklaas
Spanjestraat 1
0612 Hemel

This translates as “1 Spain Street, 0612 Heaven”. In French, the street is called “Paradise Street”:

Saint-Nicolas
Rue du Paradis 1
0612 Ciel

That UK postcode for Santa doesn’t validate in some of the regexes, but the simpler Finnish, Canadian, and Belgian ones do, so if you want to order something online for Santa, you may want to choose one of those countries for delivery. :)

Designing a Computer Science Program for Free (or Cheap)

This blog post is for people like me who are interested in improving their knowledge about computers, software, and technology in general, but are inundated with an abundance of resources and no clear path to follow. Many online courses tend not to have any real structure. While it's great that this knowledge is available to anyone with access to the internet, it often feels overwhelming and confusing. I always enjoy a little more structure to my studies, much like in a traditional college setting. So, to that end, I began to look at MIT's OpenCourseWare and compare it to their actual curriculum.

I'd like to begin by acknowledging that some time ago Scott Young completed the MIT Challenge, where he "attempted to learn MIT’s 4-year computer science curriculum without taking classes". My friend Najmi here at End Point also shared a great website with me to "Teach Yourself Computer Science". So, this is not the first post to try to make sense of all the free resources available to you; it's just one that tries to help organize a coherent plan of study.

Methodology

I wanted to mimic MIT's real CS curriculum. I also wanted to limit my studies to Computer Science only, while stripping out anything not strictly related. It's not that I am not interested in things like speech classes or more advanced mathematics and physics, but I wanted to be pragmatic about the amount of time I have each week to put in to study outside of my normal (very busy) work week. I imagine anyone reading this would understand and very likely agree.

I examined MIT's course catalog. They have 4 undergraduate programs in the Department of Electrical Engineering and Computer Science:

  • 6-1 program: Leads to the Bachelor of Science in Electrical Science and Engineering.
  • 6-2 program: Leads to the Bachelor of Science in Electrical Engineering and Computer Science, for those whose interests cross this traditional boundary.
  • 6-3 program: Leads to the Bachelor of Science in Computer Science and Engineering.
  • 6-7 program: For students specializing in computer science and molecular biology.

Because I wanted to stick to what I believed would be most practical for my work at End Point, I selected the 6-3 program. With my intended program selected, I also decided that the full course load for a bachelor's degree was not really what I was interested in. Instead, I just wanted to focus on the computer science related courses (with maybe some math and physics, but only if needed to understand any of the computer courses).

So, looking at the requirements, I began to determine which classes I'd require. Once I had this, I could then begin to search the MIT OpenCourseWare site to ensure the classes are offered, or find suitable alternatives on Coursera or Udemy. As is typical, there are General Requirements and Departmental Requirements. So, beginning with the General Institute Requirements, let's start designing a computer science program with all the fat (non-computer science) cut out.


General Requirements:

[The table of MIT's General Institute Requirements appeared here in the original post.]

I removed whatever was not computer science related. As I mentioned, I was aware I might need to add some math/science back in. So, for the time being this left me with:

[The trimmed-down list of requirements appeared here in the original post.]
Notice that it says

one subject can be satisfied by 6.004 and 6.042[J] (if taken under joint number 18.062[J]) in the Department Program

It was unclear to me what "if taken under joint number 18.062[J]" meant (nor could I find clarification), but as will be shown later, 6.004 and 6.042[J] are in the departmental requirements, so let's commit to taking those two, which leaves the requirement of one more REST course. After some Googling I found the list of REST courses here. So, if you're reading this to design your own program, please remember that we will commit to 6.004 and 6.042[J] later, and go here to select your REST course.

So, now on to the General Institute Requirements Laboratory Requirement. We only need to choose one of three:

  • 6.01: Introduction to EECS via Robot Sensing, Software and Control
  • 6.02: Introduction to EECS via Communications Networks
  • 6.03: Introduction to EECS via Medical Technology


So, to summarize the general requirements, we will take 4 courses:

[The list of those four courses appeared here in the original post.]

Major (Computer Science) Requirements:

[The table of departmental requirements appeared here in the original post.]


In keeping with the idea that we want to remove non-essential and non-CS courses, let's remove the speech class. So here we have a nice summary of what we discovered above in the General Requirements, along with details of the computer science major requirements:

[That summary appeared here in the original post.]


As stated, let's look at the lists of Advanced Undergraduate Subjects and Independent Inquiry Subjects so that we may select one from each:

[Those lists appeared here in the original post.]



Lastly, it's stated that we must

Select one subject from the departmental list of EECS subjects

A link is provided to do so; however, it brings you here, and I cannot find a list of courses there. I believe that this link no longer takes you to the intended location. A Google search brought up a similar page, but with a list of courses, as can be seen here. So, I will pick one from that page.

The next step was to find the associated courses on MIT OpenCourseWare.

Sample List of Classes

So, now you will be able to follow the links I provided above to select your classes. I was not always able to find courses that matched by exact name and/or course number; sometimes I had to read the description and look through several courses which seemed similar. I will provide my own list in case you'd just like to use mine:

[My list of classes appeared here in the original post.]

Conclusion

So there you have it, please feel free to comment with any of your favorite resources.

The New Earth



As many of you may have seen, earlier this week Google released a major upgrade to the Google Earth app. Overall, it's much improved: sharper, and a deeper experience for viewers. We will be upgrading and incorporating it across our managed fleet of Liquid Galaxies over the next two months, after we've had a chance to fully test its functionality and polish the integration points, but here are some observations on how we see this updated app impacting the overall Liquid Galaxy experience.

  • Hooray! The new Earth is here! The new Earth is here! Certainly, this is exciting for us. The Google Earth app plays a central role in the Liquid Galaxy viewing experience, so a major upgrade like this is a most welcome development. So far, the reception has been positive, and we anticipate it will get even better as people really sink their teeth into the capabilities and mashup opportunities this browser-based Earth presents.

  • We tested some pre-release versions of this application and successfully integrated them with the Liquid Galaxy. We are very happy with how we are able to view-synchronize unique instances of the new Google Earth across displays, with appropriate geometrically configured offsets.

  • What to look for in this new application:
    • Stability: The new Google Earth runs as a Native Client (NaCl) application in the Chrome browser. This is an enormous advance for Google Earth. As an application running in Chrome, it is instantly accessible to billions of users with established expectations. And because the new Google Earth runs in Chrome, its developers no longer need to engage in the minutiae of supporting multiple desktop operating systems; they can instead concentrate on the core functionality of Google Earth and leverage the enormous amount of work the Chrome developers do to keep Chrome a cross-platform application.
    • Smoother 3D: The (older) Google Earth sometimes has a sort of "melted ice cream" look to the 3D buildings in many situations. Often, buildings fail to fully load from certain viewpoints. From what we're seeing so far, the 3D renderings in the New Earth appear to be a lot sharper and cleaner.
    • Browser-based possibilities: As focus turns more and more to browser-based apps, and as JavaScript libraries continue to mature, the opportunities and possibilities for how to display various data types, data visualizations, and interactions really start to multiply. We can already see this with the sort of deeper stories and knowledge cards that Google is including in the Google Earth interface. We hope to take the ball and run with it, as the Liquid Galaxy can already handle a host of different media types. We might exploit layers, smart use controls, realtime content integration from other available databases, and... okay, I'm getting ahead of myself.

  • The new Google Earth makes a major point of featuring stories and deeper contextual information, rather than just ogling the terrain: as pretty as the Grand Canyon is to look at, knowing a little about the explorers, trails, and history makes it a much nicer experience to view. We've gone through the same evolution with the Liquid Galaxy: it used to be just a big Google Earth viewer, but we quickly realized the need for more context and usable information to create a richer interaction with viewers, combining Earth with Street View, panoramic video, 3D objects, etc. It's why we built a content management system to create presentations with scenes. We anticipate that the knowledge cards and deeper information that Google is integrating here will only strengthen that interaction.

We are looking to roll out the new Google Earth to the fleet in the next couple of months. We need to do a lot of testing and then update the Liquid Galaxies with minimal (or no) disturbance to our clients, many of whom rely on the platform as a daily sales and showcasing tool for their businesses. As always, if you have any questions, please reach us directly via email or phone.

Job opening: Web developer

We are looking for another talented software developer to consult with our clients and develop web applications for them in Ruby on Rails, Django, AngularJS, Java, .NET, Node.js, and other technologies. If you like to solve business problems and can take responsibility for getting a job done well without intensive oversight, please read on!

End Point is a 20-year-old web consulting company based in New York City, with 45 full-time employees working mostly remotely from home offices. We are experts in web development, databases, and DevOps, collaborating using SSH, Screen/tmux, chat, Hangouts, Skype, and good old phones.

We serve over 200 clients ranging from small family businesses to large corporations. We use open source frameworks in a variety of languages including JavaScript, Ruby, Java, Scala, Kotlin, C#, Python, Perl, and PHP, tracked by Git, running mostly on Linux and sometimes on Windows.

What is in it for you?

  • Flexible full-time work hours
  • Paid holidays and vacation
  • For U.S. employees: health insurance subsidy and 401(k) retirement savings plan
  • Annual bonus opportunity
  • Ability to move without being tied to your job location

What you will be doing:

  • Work from your home office, or from our offices in New York City and the Tennessee Tri-Cities area
  • Consult with clients to determine their web application needs
  • Build, test, release, and maintain web applications for our clients
  • Work with open source tools and contribute back as opportunity arises
  • Use your desktop platform of choice: Linux, macOS, Windows
  • Learn and put to use new technologies
  • Direct much of your own work

What you will need:

  • Professional experience building reliable server-side apps
  • Good front-end web skills with responsive design using HTML, CSS, and JavaScript, including jQuery, Angular, Backbone.js, Ember.js, etc.
  • Experience with databases such as PostgreSQL, MySQL, SQL Server, MongoDB, CouchDB, Redis, Elasticsearch, etc.
  • A focus on needs of our clients and their users
  • Strong verbal and written communication skills

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.

Please email us an introduction to jobs@endpoint.com to apply. Include a resume, your GitHub or LinkedIn URLs, or whatever else would help us get to know you. We look forward to hearing from you! Full-time employment seekers only, please; this role is not for agencies or subcontractors.

The mystery of the disappearing SSH key

SSH (Secure Shell) is one of the programs I use every single day at work, primarily to connect to our clients' servers. Usually it is a rock-solid program that simply works as expected, but recently I discovered it behaving quite strangely: a server I had visited many times before was now refusing my attempts to log in. The underlying problem turned out to be a misguided decision by the developers of OpenSSH to deprecate DSA keys. How I discovered this problem is described below, along with two solutions.

The use of the ssh program is not simply limited to logging in and connecting to remote servers. It also supports many powerful features, one of the most important being the ability to chain multiple connections with the ProxyCommand option. By using this, you can "login" to servers that you cannot reach directly, by linking together two or more servers behind the scenes.

As an example, let's consider a client named "Acme Anvils" that strictly controls access to its production servers. They make all SSH traffic come in through a single server, named dmz.acme-anvils.com, and only on port 2222. They also only allow certain public IPs to connect to this server, via whitelisting. On our side, End Point has a server, named portal.endpoint.com, that I can use as a jumping-off point; it has a fixed IP that we can give to our clients to whitelist. Rather than logging in to "portal", getting a prompt, and then logging in to "dmz", I can simply add an entry in my ~/.ssh/config file to automatically create a tunnel between the servers, at which point I can reach the client's server by typing "ssh acmedmz":

##
## Client: ACME ANVILS
##

## Acme Anvil's DMZ server (dmz.acme-anvils.com)
Host acmedmz
User endpoint
HostName 555.123.45.67
Port 2222
ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p

Notice that the "Host" name may be set to anything you want. The connection to the client's server uses a non-standard port, and the username changes from "greg" to "endpoint", but all of that is hidden away from me as now the login is simply:

[greg@localhost]$ ssh acmedmz
[endpoint@dmz]$

It's unusual that I'll actually need to do any work on the dmz server, of course, so the tunnel gets extended another hop to the db1.acme-anvils.com server:

##
## Client: ACME ANVILS
##

## Acme Anvil's DMZ server (dmz.acme-anvils.com)
Host acmedmz
User endpoint
HostName 555.123.45.67
Port 2222
ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p

## Acme Anvil's main database (db1.acme-anvils.com)
Host acmedb1
User postgres
HostName db1
ProxyCommand ssh -q acmedmz nc -w 180s %h %p

Notice how the second ProxyCommand references the "Host" of the section above it. Neat stuff. When I type "ssh acmedb1", I'm actually connecting to the portal.endpoint.com server, then immediately running the netcat (nc) command in the background, then going through netcat to dmz.acme-anvils.com and running a second netcat command on *that* server, and finally going through both netcats to log in to the db1.acme-anvils.com server. It sounds a little complicated, but quickly becomes part of your standard tool set once you wrap your head around it. After you update your .ssh/config file, you soon forget about all the tunneling and feel as though you are connecting directly to all your servers. That is, until something breaks, as it did recently for me.

The actual client this happened with was not "Acme Anvils", of course, and it was a connection that went through four servers and three ProxyCommands, but for demonstration purposes let's pretend it happened on a simple connection to the dmz.acme-anvils.com server. I had not connected to the server in question for a long time, but I needed to make some adjustments to a tail_n_mail configuration file. The first login attempt failed completely:

[greg@localhost]$ ssh acmedmz
endpoint@dmz.acme-anvils.com's password: 

Although the connection to portal.endpoint.com worked fine, the connection to the client server failed. This is not an unusual problem: it usually signifies that either ssh-agent is not running, or that I forgot to feed it the correct key via the ssh-add program. However, I quickly discovered that ssh-agent was working and contained all my usual keys. Moreover, I was able to connect to other sites with no problem! On a hunch, I tried breaking down the connections into manual steps. First, I tried logging in to the "portal" server. It logged me in with no problem. Then I tried to login from there to dmz.acme-anvils.com - which also logged me in with no problem! But trying to get there via ProxyCommand still failed. What was going on?

When in doubt, crank up the debugging. For the ssh program, using the -v option turns on some minimal debugging. Running the original command from my computer with this option enabled quickly revealed the problem:

[greg@localhost]$ ssh -v acmedmz
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
debug1: Reading configuration data /home/greg/.ssh/config
debug1: /home/greg/.ssh/config line 1227: Applying options for acmedmz
debug1: Reading configuration data /etc/ssh/ssh_config
...
debug1: Executing proxy command: exec ssh -q greg@portal.endpoint.com nc -w 180s 555.123.45.67 2222
...
debug1: Authenticating to dmz.acme-anvils.com:2222 as 'endpoint'
...
debug1: Host 'dmz.acme-anvils.com' is known and matches the ECDSA host key.
...
debug1: Skipping ssh-dss key /home/greg/.ssh/greg2048dsa.key - not in PubkeyAcceptedKeyTypes
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/greg/.ssh/greg4096rsa.key
debug1: Next authentication method: password
endpoint@dmz.acme-anvils.com's password: 

As highlighted above, the problem is that my DSA key (the "ssh-dss key") was rejected by my ssh program. As we will see below, DSA keys are rejected by default in recent versions of the OpenSSH program. But why was I still able to log in when not hopping through the middle server? The solution lies in the fact that when I use the ProxyCommand, *my* ssh program is negotiating with the final server, and is refusing to use my DSA key. However, when I ssh to the portal.endpoint.com server, and then on to the next one, the second server has no problem using my (forwarded) DSA key! Using the -v option on the connection from portal.endpoint.com to dmz.acme-anvils.com reveals another clue:

[greg@portal]$ ssh -v -p 2222 endpoint@dmz.acme-anvils.com
...
debug1: Connecting to dmz [1234:5678:90ab:cd::e] port 2222.
...
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/greg/.ssh/endpoint2.ssh
debug1: Authentications that can continue: publickey,password
debug1: Offering DSA public key: /home/greg/.ssh/endpoint.ssh
debug1: Server accepts key: pkalg ssh-dss blen 819
debug1: Authentication succeeded (publickey).
Authenticated to dmz ([1234:5678:90ab:cd::e]:2222).
...
debug1: Entering interactive session.
[endpoint@dmz]$

If you look closely at the above, you will see that we first offered an RSA key, which was rejected, and then successfully offered a DSA key. This means that the endpoint@dmz account has a DSA, but not an RSA, public key inside its ~/.ssh/authorized_keys file. Since I was able to connect to portal.endpoint.com, its ~/.ssh/authorized_keys file must have my RSA key.

For the failing connection, ssh was able to use my RSA key to connect to portal.endpoint.com, run the netcat command, and then continue on to the dmz.acme-anvils.com server. However, this connection failed as the only key my local ssh program would provide was the RSA one, which the dmz server did not have.

For the working connection, ssh was able to connect to portal.endpoint.com as before, and then into an interactive prompt. However, when I then connected via ssh to dmz.acme-anvils.com, it was the ssh program on portal, not my local computer, which negotiated with the dmz server. It had no problem using a DSA key, so I was able to login. Note that both keys were happily forwarded to portal.endpoint.com, even though my ssh program refused to use them!

The quick solution to the problem, of course, was to upload my RSA key to the dmz.acme-anvils.com server. Once this was done, my local ssh program was more than happy to log in by sending the RSA key along the tunnel.
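
Since the tunnel alias already worked for password logins, the new key can be pushed through it directly. A minimal sketch, assuming the public half of the RSA key from the debug output above lives next to the private key:

# Push the RSA public key through the existing tunnel; you will be
# prompted for the endpoint@dmz password one last time.
ssh-copy-id -i ~/.ssh/greg4096rsa.key.pub acmedmz

# Verify that key authentication now works end to end:
ssh acmedmz true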

Another solution to this problem is to instruct your SSH programs to recognize DSA keys again. To do this, add this line to your local SSH config file ($HOME/.ssh/config), or to the global SSH config file (/etc/ssh/ssh_config):

PubkeyAcceptedKeyTypes +ssh-dss

As mentioned earlier, this whole mess was caused by the OpenSSH program deciding to deprecate DSA keys. Their rationale for targeting all DSA keys seems a little weak at best: certainly I don't feel that my 2048-bit DSA key is in any way a weak link. But the writing is on the wall now for DSA, so you may as well replace your DSA keys with RSA ones (and an ed25519 key as well, in anticipation of when ssh-agent is able to support them!). More information about the decision to force out DSA keys can be found in this great analysis of the OpenSSH source code.
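
If you are ready to retire your own DSA keys, generating replacements is quick. A sketch (the filenames and comments here are only suggestions):

# A 4096-bit RSA key, accepted essentially everywhere
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "replacement RSA key"

# An ed25519 key as well, in anticipation of wider support
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "replacement ed25519 key"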