The Evolution of Human Interface
By Rene Ritchie
One of my earliest childhood memories is visiting my father at work, watching the giant metal doors of IBM’s computer vault open, and seeing a cart stacked high with punch cards wheel out. The cart hit a bump in the ramp and toppled, sending hundreds and hundreds of carefully sequenced cards tumbling out across the floor as technicians, stricken, scrambled after them.
That moment taught me two things: computers were really cool, and punch cards were a really dumb idea. Not only were they fragile in many senses of the word, they were also inhuman, and they restricted computing to a relatively tiny circle of people.
They had to evolve, and that evolution had to be one based on the interface.
A few years later my father brought home an Apple II Plus, complete with a bright green monitor, pixels the size of chocolate chips, a floppy drive that required disks to be swapped near-constantly, and a command line interface (CLI) that let us program with a keyboard instead of cards. It was the first computer I, as a child, could use and start to understand.
Yet my mother and sister, brilliant women both, never took to it. They’d type out the occasional letter, but more often than not they’d get frustrated and go back to pen and paper. The CLI still constrained computing to geeks, albeit a wider circle of geeks.
My father eventually left IBM and started a consulting company. In his new office was a new type of computer — the Lisa. The computer was built around the screen, and a mouse hung off of it. Yet that screen wasn’t filled with command lines, but rather with a graphical user interface (GUI). The first time I was allowed to use it, I dragged everything I could find into the trash, so engaging and satisfying was the experience. (Not coincidentally, that was also the last time I was allowed to use it.)
Read the rest of this article by subscribing to The Loop magazine on the App Store from your iPhone or iPad.