What did we learn from an astronomer’s hacker hunt in the ’80s? Apparently, not too much

Jo Christian Oterhals
6 min read · Dec 29, 2020

Computer security has seen its share of mind-boggling news lately. None more mind-boggling than the news about how alleged Russian hackers installed a backdoor into the IT monitoring product SolarWinds Orion. Through this they gained entry into the computer systems of several US agencies and departments — ironically even into the systems of a cyber security company (FireEye) and Microsoft itself. The news made me think of my own history with computer security, and down memory lane I went.

One particular day in late July or early August 1989, my parents, my sister and I were driving home from a short summer vacation. During a short stop in a largish city, I had found a newsstand carrying foreign magazines. There I’d bought a copy of PC/Computing’s September issue (to this day I don’t understand why American magazines go on sale a couple of months before the cover date) so that I had something to make the time in the backseat pass faster.

Among articles about the relatively new MS-DOS replacement OS/2 (an OS co-developed with IBM that Microsoft would come to orphan the minute they launched Windows 3.0 and understood the magnitude of the success they had on their hands) and networking solutions from Novell (which Windows would kill as well, albeit more indirectly), the magazine carried an excerpt from the book “The Cuckoo’s Egg” by a guy named Clifford Stoll. Although I had bought the magazine for technical information such as the stuff mentioned above, this excerpt stood out. It was a mesmerising story about how an astronomer-turned-IT-guy stumbled upon a hacker, and how he, with interest but virtually no support from the FBI, CIA and NSA, almost single-handedly traced the hacker’s origins back to a sinister government-sponsored organisation in then-communist East Germany.

This is the exact moment I discovered that my passion — computers — could form the basis of a great story.

Coastal Norway, where I grew up, is probably as far from the author’s native San Francisco as anything; at least our book stores were. So it wasn’t until the advent of Amazon.com some years later that I was able to order a copy of the book. Luckily, the years that had passed had not diminished the story. Granted, the Internet described by the author Clifford Stoll was a little more clunky than the slightly more modern dial-up internet I ordered the book on. But subtract the World Wide Web from the equation, and the difference between his late-eighties internet and my mid-nineties modem version wasn’t all that big. My Internet was as much a monochrome world of telnet and text-only servers as it was a colourful web. Email, for instance, was something I managed by telnetting into an HP Unix server and using the command line utility pine to read and send messages.

What struck me about the story was that the hacker’s success was very often enabled by sloppy system administration; one could arguably say that naïve or ignorant assumptions by sysadmins all across the US made the hack possible. Why sloppy administration and ignorant assumptions? Well, part of the reason was that the Internet was largely run by academia back then. Academia was (and is) a culture of open research and sharing of ideas and information. As such it’s not strange that the sysadmins of that time assumed that the users of their computer systems had good intentions too.

But no one had considered that the combination of several (open) sources of information and documents stored on these servers could add up to a very comprehensive insight into, say, the Space Shuttle program or military nuclear research. Actually, the main downside to unauthorised usage had to do with cost: processing power was expensive at the time and far from a commodity. Users were billed for their computer usage. Billing was in fact the reason why Stoll started his hacker hunt: there were a few cents’ worth of computer time that couldn’t be accounted for. Finding out whether this was caused by a bug or something else was the primary goal of Mr. Stoll’s hunt. What the hacker had spent this time on was — at first — a secondary issue at best.

With that in mind it’s maybe not so strange that one of the most common errors was not changing default passwords on multi-user computers connected to the internet. One of the systems shipping with default passwords was the now virtually extinct VAX/VMS operating system for Digital’s VAX minicomputer series. This was one of the things Mr. Stoll found out by logging each and every interaction the hacker, using Stoll’s compromised system as a gateway, had with other systems (the description of how he logged all this by wiring printers up to physical ports on the minicomputer, rewiring the whole thing every time the hacker logged on through another port, is by itself worth reading the book for). Using these default accounts, the hacker not only gained access to such computers — they got root privileges as well.

In the 30+ years that have passed since I read the book, I had convinced myself of two things: 1) that we’ve learned not to use default passwords anymore, and 2) that VMS systems exhibiting this kind of backdoor are long gone.

Well, I believed these things until a few weeks ago. That’s when I stumbled onto a Reddit post — now deleted, but a cached version is still available on the Wayback Machine. Here the redditor explained how he’d identified 33 remaining VAX/VMS systems still on the Internet:

About a year ago I read the book “A Cuckoo’s Egg”, written in 1989. It included quite a bit of information pertaining to older systems such as the VAX/VMS. I went to Censys (when their big data was still accessible easily and for free) and downloaded a set of the telnet (port 23) data. A quick grep later and I had isolated all the VAX/VMS targets on the net. Low and behold, of the 33 targets (I know, really scraping the bottom of the barrel here) more than half of them were still susceptible to default password attacks literally listed in a book from almost 3 decades ago. After creating super user accounts I contacted these places to let them know that that they were sporting internet facing machines using default logins from 1989. All but one locked me out. This is 31 years later… The future will be a mess, kids
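
For the technically curious, the filtering step he describes is nothing more exotic than pattern matching over a dump of banner data. Below is a minimal Python sketch of that idea; the file name, JSON field names and banner strings are my own illustrative assumptions rather than the actual Censys export format, and it deliberately stops at identifying banners — not at logging in to anything.

    # Minimal sketch: filter a Censys-style export of telnet (port 23) banner
    # grabs, one JSON object per line, for things that look like VAX/VMS.
    # File name, field names and marker strings are illustrative assumptions.
    import json

    def find_vms_banners(path="telnet_banners.jsonl"):
        """Yield (ip, banner) pairs whose telnet banner looks like VAX/VMS."""
        # Strings that classic VMS login banners tend to contain.
        markers = ("VAX/VMS", "OpenVMS", "Username:")
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip malformed lines in the dump
                banner = record.get("banner", "")
                ip = record.get("ip", "unknown")
                if any(marker in banner for marker in markers):
                    yield ip, banner

    if __name__ == "__main__":
        for ip, banner in find_vms_banners():
            print(ip, banner.splitlines()[0] if banner else "")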

I applaud the redditor who discovered this. Because isn’t what he found a testament to something breathtakingly incompetent and impressive at the same time? Impressive in the sense that someone has been able to keep these ancient systems alive on the internet for decades; incompetent because the sysadmins have ignored the most well-documented security flaw of those systems for well over a quarter of a century.

So maybe this starts to answer the question posed in the title: Did we learn anything from this?

Yes, of course we did. If we look past the VMS enthusiasts out there, computer security is very different now than back then. Unencrypted communication is hardly used anywhere anymore. Security is provided by multilayered hardware and software solutions. In addition, not only are password policies widely enforced on users, but two-factor and other extra layers of authentication are used as well.
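
To make the two-factor point a little more concrete, here is a minimal sketch of how a time-based one-time password (TOTP, the mechanism behind most authenticator apps, described in RFC 6238) is computed, using only Python’s standard library. The shared secret is a made-up example, and none of this is taken from any particular system mentioned in this article.

    # Sketch of a time-based one-time password (TOTP) generator.
    # The secret below is a made-up example; real deployments share the secret
    # with an authenticator app and verify the code server-side on top of the
    # ordinary password.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Return the current TOTP code for a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval             # 30-second time step
        msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    if __name__ == "__main__":
        # Example secret; in practice this comes from enrolment, not source code.
        print(totp("JBSWY3DPEHPK3PXP"))

The server stores the same secret, computes the code for the current time window and compares; a stolen password alone is no longer enough to get in.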

But the answer is also No. While organisations such as my workplace — which is not in the business of having secrets — have implemented many of the newest security measures, this autumn we learned that the Norwegian parliament — which is in the business of having secrets — hasn’t. It had weak password policies and no two-factor authentication for its email system.

Consequently, it recently became an easy target for Russian hackers. I obviously don’t know the reasoning behind the weak security. But my guess is that the IT department assessed the digital competence of the members of parliament and concluded that it was too low for them to handle strong passwords and two-factor authentication.

And this is perhaps where the security of yesteryear and the security of today differ the most: as we close in on 2021, weak security is a conscious choice. It is the same as leaving the door wide open, and any good sysadmin knows it.

The ignorance exhibited in the case of the Norwegian parliament borders, in my opinion, on criminal negligence — although I guess no one will ever have to face the consequences. What it does prove, however, is that while systems can be as good as anything, people are still the weakest link in any such system.

In sum, I think my answer to the initial question is an uneasy Maybe. We still have some way to go before what Cliff Stoll taught us 32 years ago has become second nature.
