Additional Case Studies

30 June 2003

Many years ago a manufacturer invited one of us to test a new piece of hardware, just about to be brought to market, for security weaknesses.

The device was called a "switch". It connected terminals--not workstations--to a central timesharing computer. The idea was that the customer would connect dozens or hundreds of terminals to the switch, as well as whatever big iron they owned. Their users could then, seated at their desks in front of a terminal, connect to any host in the complex. (In the days before Ethernet, this was Hot Stuff.)

One day after most system tests were complete, the project manager invited us (rather grudgingly, we thought) to see if we could compromise the security of the new device. The risk of concern was that some user could manage to get connected to a host he or she wasn't supposed to. Such an occurrence was supposed to be impossible: the designers believed that the "connection manager", the application that arbitrated connections, was hack-proof. Still, we were told to drop by one night, after the day shift testers were done, and see what havoc we could wreak. Everybody made arrangements for what was expected to be a long overnight session.

After a short tour, we were shown to one of the test terminals and invited to have at it. But just as we assumed the testing position, plans for the evening changed. "Hold on there," said the project manager. "We want to go to dinner first. You can start after we get back."

As we got up to leave, we jammed a sharpened No. 2 pencil into a space between the "o" and "p" keys on the VT105 terminal. The group walked briskly to the lab door, only to be called back by one of the lab denizens. "Something's wrong with the switch!"

A quick trip back to the test terminal showed that the connection manager had crashed, leaving the terminal connected to a privileged port. The test was over. The flag had been captured. Security measures developed over many months had been broken down in a matter of seconds by a sharp stab with an inanimate object.

What happened? One of the best features of the DEC VT105 terminal was its brisk auto-repeat feature. Hold down the space bar, for example, and the terminal would generate hundreds of characters in a few seconds. We knew that. We knew also that the processor controlling the switch had a slow clock rate and not much space for a stack. Lastly, we were aware that the command-parsing software inside the connection manager had been written in C to emulate the TOPS-20 "COMND JSYS", famous for its on-demand token completion and context-sensitive help. The parser was stack-hungry, so we fed it characters until the software came apart at the seams.

We see several lessons for security-conscious developers in this case study. (We learned a lot from it ourselves.) Let's note, first, that all the flaws we found seem to have been introduced during the design phase of the switch's controlling software. The switch--and a neat box it was, too, really state of the art--was, from the hardware side, good and solid.

Some further points to consider:

- If the application's design had been fault-tolerant, and its coders had checked for resource exhaustion, graceful degradation could have protected the connection manager from an ugly demise.

- If the designers had anticipated that the "privileged" connection manager could crash, they might have built in a trap to catch that event and shut down input to control ports. That would have been better than leaving the switch wide open.

- Most importantly, the designers needed to look beyond the mental model they held in their minds of a "switch". If they had been able instead to see what we saw (that is, a bunch of plastic keys, raised over springs, connected by wire to a printed circuit board) they might have been able to imagine, as we did, the damage a well-placed pencil could do.

Site Contents Copyright (C) 2002, 2003 Mark G. Graff and Kenneth R. van Wyk. All Rights Reserved.