Science fiction explores the effect of technological progress on society. It is ironic, then, that the majority of SF authors failed miserably to predict the impact of computers and information technology. Why does Google return no meaningful hits for “computer science fiction”? Is it not obvious that this term needs to exist if we wish to understand the next 50 years at all?
Looking back, it is clear that most SF authors made the same mistake: they took whatever computers existed at the time and envisioned larger, more powerful, smarter versions of the same kind of machine. The genre is filled with this kind of obvious — and, in retrospect, boring — extrapolation. In contrast, the interesting thing that has happened over the past few decades is the development and deployment of new kinds of computer-based systems, which are now pervasively embedded in the world. Something like 10 billion processors are manufactured each year, and only a tiny fraction of them look anything like a mainframe.
Of course there were other missteps, such as overlooking the impact of wireless communication. One of my favorite mispredictions comes from Neuromancer, where Gibson makes a point of showing us that pay phones still exist in the middle of the 21st century. People in Neuromancer have their brains wired directly into the network, but they don’t carry cell phones.
As a professional computer scientist, I consider part of my job to be a form of applied science fiction. Instead of trying to predict the impact of science, we try to make the impact happen. Doing this job well, however, requires real insight into the future. Therefore, one of the questions I’m interested in is: Who in SF got it right? That is, who saw beyond the obvious extrapolations about computer technology and really understood what the future might look like? And how did they do it?
The best single example of a computer science fiction author is Vernor Vinge. His story True Names preceded the cyberpunk movement and paved the way for subsequent work like Neuromancer and Snow Crash. Vinge’s books contain a high density of good ideas, on a par with some of the best earlier idea-driven writers like Asimov and Niven. A Fire Upon the Deep gets solid mileage out of pack animals that function as distributed systems and also provides a reasonable exploration of what really powerful machines might be like (“applied theology” indeed). Subsequently, A Deepness in the Sky gave us the term “software archeology” — an ugly but highly plausible window into the future of software maintenance — and is the first and perhaps only work in the sub-sub-genre “sensor network science fiction.” Of course, Vinge depicts pervasive embedded sensing as invariably leading to totalitarian control and then societal collapse.
There are two major CS-related themes running through Vinge’s work. The first is the singularity, and probably too much has been written about it elsewhere; it seems uninteresting to speculate about a point in time that is defined as being immune to speculation. The second centers on the fickle nature of control in computer systems. Very early on, Vinge foresaw the kinds of battles for control over networked resources that are playing out now in botnets and in corporate and government networks. Concepts like a trusted computing base and subversion via network attack look like they are fundamental, but our understanding of how these ideas will evolve is flawed at best. In fact, the entire history of computer security has been adversary driven: we put an insecure system out into the world, wait for it to be exploited, and then react. This happens over and over, and I have seen very few signs that a more proactive approach is emerging.
How did Vinge do such a good job? This is tough to analyze. Like all good extrapolators, he separated the fundamental from the incidental, and pushed the fundamentals forward in useful ways. He knew enough CS to avoid technical gaffes but somehow failed to let this knowledge interfere with his predictions. It is no coincidence that few of the now-common top-level uses of the Internet were thought of by computer scientists: we’re too busy dealing with other levels of the system. We know how computers are supposed to be used and this is a huge handicap.
As a group, the cyberpunk authors did a good job in seeing the impact of information technology, but as far as I can tell their contributions to computer science fiction were largely derived from things that Vinge and others had already predicted. It’s interesting that almost all of these authors insisted on making the network into a physical space accessed using VR. This was more than a plot device, I think: people really want cyberspace to be somehow analogous to physical space. The real-world failure of VR as a metaphor for understanding and navigating networked computer systems has been interesting to watch; I’m not sure that I understand what is going on, but as I see it the network simply does not demand to be understood as a physical space. Of course, spaces are useful for games and other kinds of interaction, but it’s not at all clear that VR will ever be the primary metaphor for interacting with the network. The poor state of interface technology (Where are those brain plugs? Why are we still using stupid mice?) is obviously a factor as well.
2 responses to “Computer Science Fiction”
Interesting point of view. I think I will check out some of the books by Vinge. Which one do you recommend I start with?
Thanks
Hi Dave, I’d start with either the two-part series containing _The Peace War_ and _Marooned in Realtime_, or the one containing _A Fire Upon the Deep_ and _A Deepness in the Sky_.