An important function of science fiction is to help us understand sociological, technological, and other aspects of our future. A really good SF novel — like some of those produced by Asimov, Clarke, Heinlein, Le Guin, Niven, and Vinge — is so full of ideas and possibilities that the reader’s mind is expanded a little.
Although there have been some astonishing predictive successes, such as H. G. Wells’ descriptions of the tank and the atomic bomb, only a small fraction of predictions in SF literature have come true. Even so, those of us working in the technology research sector — which I view basically as applied science fiction — would do well to scour the SF literature for ideas and use cases that can help drive and make sense of the low-level technical work that ends up consuming most of our working lives. Neuromancer is a good example of a book that was often wrong at the technical level, but its overall effect was so amazing that — even 25 years later — a technical person like me would be foolish not to have read it.
It’s puzzling that not many people working on sensor networks seem to have read Vernor Vinge’s novel A Deepness in the Sky. So far, out of the people I’ve talked to about it, only Kris Pister has read this book. Of course Kris was one of the originators and major early drivers of the smart dust vision. Perhaps most researchers don’t have the time or inclination to read SF? In any case, in 1999, when Vinge’s book came out, the “smart dust” vision was just getting off the ground with the help of DARPA’s NEST (Networked Embedded Systems Technology) program. The research group at UVA that I was a member of at the time was funded under NEST but, alas, I already had a PhD thesis topic and didn’t have the sense to switch over to this new and exciting area. It was only in 2001-2002, while working as a postdoc at Utah, that I got involved in sensor network research.
But back to the book. One of its central ideas is that following the Dawn Age — where we are now — humanity entered the Age of Failed Dreams where it became clear that many of humanity’s more ambitious goals — nanotechnology, artificial intelligence, anti-gravity, and sustained civilization — were unattainable. A Deepness in the Sky is set 8000 years later: humans are crawling around a limited part of the galaxy at well below the speed of light, using technologies not unfamiliar to us now. One of these technologies is the sensor network: a vast collection of tiny networked embedded systems, each of which contains a weak computer, a small power source, a packet radio, and some sensors and actuators.
The purpose of this piece is to compare and contrast Vinge’s vision of sensor networks with what we have right now, and to use this analysis to identify important open problems in sensor networking. I’ll do this by looking at some of the properties and capabilities that Vinge ascribes to his sensornet nodes, which are called Qeng Ho localizers (QHL from now on).
The QHL is black and about 1 mm long by 0.4 mm wide. The device’s thickness is not specified, but since it is difficult to detect on human skin and floats around in normal indoor air currents, it would probably have to be pretty thin. For purposes of discussion, let’s say it’s 0.1 mm thick.
At 2 mm by 2.5 mm, the Berkeley Spec, produced around 2002, is the smallest sensor network node that I’ve heard of. Unfortunately, the smallest actual effective node based on a Spec was quite a bit larger than the Spec itself since it needed (at least) external power and an antenna. This node type was never mass-produced. Today’s sensor network nodes, particularly when packaged and combined with a realistic battery, are quite bulky (I like to tell people: “It’s not smart dust if it hurts when someone throws it at you!”).
The QHL is powered by weak microwave pulses, preferably occurring around 10 times per second. It has very limited onboard energy storage and its internal capacitor runs out in a matter of minutes if it fails to receive energy from the environment. This is in sharp contrast with today’s battery-heavy sensor nodes that can often run for years with appropriate duty cycling. Since the QHL does not use exotic energy storage mechanisms (antimatter, nuclear, etc.), its capacity is bounded above by the amount of energy a 0.04 mm3 capacitor can store. The capacity is bounded below by the amount of energy required to self-destruct a node with a flash of light and “a tiny popping sound.”
Today’s production capacitors are limited to 30 Wh/kg. Some web searching failed to turn up a convincing upper limit on the energy density of an ultracapacitor, so let’s say it’s 500 Wh/kg: on a par with some of today’s best battery materials (we’ll also say 500 Wh/kg ≈ 500 Wh/l). This gives only 20 μWh of stored energy: enough to power one of today’s most energy-efficient microcontrollers for about a minute.
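As a sanity check on these numbers, here is the arithmetic in runnable form. The 500 Wh/l energy density, the full 0.04 mm³ node volume devoted to the capacitor, and a 1 mW average microcontroller draw are all assumptions, not facts from the book.

```python
# Back-of-envelope QHL energy budget (all inputs are assumptions).
CAP_VOLUME_L = 1.0e-3 * 0.4e-3 * 0.1e-3 * 1000.0  # 1 x 0.4 x 0.1 mm node, in liters
ENERGY_DENSITY_WH_PER_L = 500.0                   # optimistic ultracapacitor

stored_wh = ENERGY_DENSITY_WH_PER_L * CAP_VOLUME_L  # watt-hours
stored_j = stored_wh * 3600.0                       # joules

MCU_POWER_W = 1.0e-3  # assumed average draw of an energy-efficient MCU
runtime_s = stored_j / MCU_POWER_W

print(f"stored energy: {stored_wh * 1e6:.0f} uWh ({stored_j * 1e3:.0f} mJ)")
print(f"runtime at {MCU_POWER_W * 1e3:.0f} mW: {runtime_s:.0f} s")
```

Under those assumptions the node stores 20 μWh (72 mJ), which indeed powers a 1 mW processor for about a minute.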
Why don’t the QHLs have photovoltaic elements? Perhaps this would have interfered with the plot or else Vinge didn’t want to get into details that didn’t push the plot along. In any case, an alternative energy harvesting mechanism would seem like an obvious win.
All we know about the QHL’s compute capability is that “each was scarcely more powerful than a Dawn Age computer”: a PC from our era. So its core either runs at a couple of GHz or — more likely — it contains a large number of weaker, specialized processing units. This makes sense both from an energy efficiency point of view, and also because the QHL is more or less a fixed-function device. It is depicted as being retaskable, but not necessarily supporting arbitrary new functionality.
One of the most interesting capabilities of the QHL is that it can capture real-time, omnidirectional video. A device with similar capabilities based on today’s technology would be bulky; it would probably use six wide-FOV cameras. How could a small black speck accomplish this task? Well, lensless imaging is not in itself challenging: pinhole cameras work perfectly well, although their inefficiency would be a problem in low-light environments.
My napkin-sketch design for a sensor node with omnidirectional imaging has an outer skin made of an LCD-like material that can act as an electronic shutter; it displays a very rapidly changing pattern of slits. Below this skin is a thin layer of transparent material, under which is the light sensor. Due to the moving-slit design, the sensor’s sampling rate has to be much faster than the node’s effective frame rate, but that shouldn’t be a problem. The digital signal processing requirements for backing out diffraction patterns and otherwise turning the sensor data into high-quality omnidirectional video would be hellish. Probably there’s a nice pile of CS and EE PhD theses in there.
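One hedged sketch of what the easy part of that signal processing might look like: if each shutter pattern admits a known subset of the incoming light, a frame’s worth of readings forms a linear system that can be inverted to recover the scene. The scene size, pattern count, and noise level below are arbitrary, and real diffraction effects (the genuinely hellish part) are ignored.

```python
# Toy 1-D coded-aperture reconstruction; sizes and noise are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 64   # scene resolution (pixels)
m = 128  # number of distinct shutter patterns per frame

scene = rng.random(n)                  # unknown light field
patterns = rng.integers(0, 2, (m, n))  # binary open/closed slit patterns

# Each sensor reading sums the light admitted by one pattern.
readings = patterns @ scene
readings += rng.normal(0.0, 0.01, m)   # sensor noise

# Recover the scene by solving the overdetermined linear system.
recovered, *_ = np.linalg.lstsq(patterns.astype(float), readings, rcond=None)

print("max reconstruction error:", np.abs(recovered - scene).max())
```

With diffraction, motion, and 2-D geometry added in, this least-squares step turns into the pile of PhD theses mentioned above.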
The small size of the QHL would limit its imaging quality due to both small aperture and limited light-gathering power. Both problems could be solved using distributed imaging. In the simple case, photons are simply integrated over many nodes. Since there are many points of view, it would be possible to perform very accurate reconstruction of the 3-D positions of objects. An even more interesting capability — if the localization and time synchronization across nodes is extremely tight — would be for a group of nodes to act as an optical interferometer. In other words, if you scattered QHLs over an area 10m in diameter, you’d get a 10m synthetic-aperture telescope. Obviously this would only work on a very stable surface, and probably only in a vacuum. There would have to be a lot of nodes to get a decent amount of light gathering power.
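To get a feel for the payoff, here is the Rayleigh-criterion arithmetic under assumed sizes: a 1 mm single-node aperture, a 10 m patch of cooperating nodes, and 550 nm (green) light.

```python
# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D.
WAVELENGTH_M = 550e-9  # assumed: visible green light

def angular_resolution_rad(aperture_m: float) -> float:
    """Rayleigh-criterion angular resolution for a circular aperture."""
    return 1.22 * WAVELENGTH_M / aperture_m

single_node = angular_resolution_rad(1e-3)  # ~1 mm node
synthetic = angular_resolution_rad(10.0)    # 10 m field of nodes

print(f"single node: {single_node:.1e} rad")
print(f"10 m synthetic aperture: {synthetic:.1e} rad "
      f"({single_node / synthetic:.0f}x finer)")
```

Under these assumptions the synthetic aperture resolves detail 10,000 times finer than any single node, which is why the idea is tempting despite the brutal localization and synchronization requirements.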
The non-imaging sensors on the QHL seem relatively boring: temperature, pressure, vibration — these are on sensornet nodes today.
Vinge gives few details about the QHL’s actuators, but we know it is capable of inducing signals in computer networks and in the human nervous system. The likely mechanism would be induction using magnetic fields. Unless this actuator is incredibly sophisticated, a complex task such as implementing a heads-up display via the optic nerve would seem to require a collection of cooperating nodes. Perhaps to keep the story simple, Vinge shows a pair of nodes performing this function.
An intriguing capability ascribed to the QHL is building up layers of grit near a switch, to impede the switch’s operation. Presumably, nodes used their actuators to move fragments of ferrous material (maybe QHL nodes themselves were sacrificed) into the appropriate location. This activity is depicted as being very slow.
It would be interesting for nodes to use their magnetic actuators to power up other nodes. There would be substantial inefficiencies, but if one node found itself with lots of extra power (for example, if it were in a patch of bright light) then spreading the energy around would be the right thing to do.
The QHL is not depicted as being able to move itself; it is moved passively around its environment by riding on humans, air currents, etc. However, at least a bit of mobility is implied by the presence of a magnetic actuator and by the QHL’s ability to create a gritty surface on a switch.
The radio communication range of the QHL is not specified, but around 10m seems like a reasonable guess. Since nodes are too small for a traditional antenna, presumably they contain an internal fractal antenna. Probably the antenna has some beamforming capability since this, combined with localization, would permit highly energy-efficient communication with neighbors, and would also make it more difficult for eavesdroppers to detect communicating nodes. The inter-node bandwidth is not specified, but clearly it is sufficient to transmit multiple video streams. The frequencies used by the QHL are not discussed.
There is very little discussion about the protocols used by the QHLs, beyond this:
Ten Ksec after the microwave pulses were turned on, Ritser reported that the net had stabilized and was providing good data. Millions of processors, scattered across a diameter of four hundred meters.
Ten Ksec is about 2 hours and 47 minutes. This seems like a fairly long time for a network to stabilize, but then again “millions” is a lot of nodes. Obviously the QHL’s algorithms scale very well, but beyond that it sounds like the route formation and other parts of the protocol stack are not radically different from things that have already been implemented by the sensornet community.
The QHL determines its position relative to other nodes using time of flight of a radio signal. This is obviously the way to go; today’s signal strength based localization methods are akin to standing with a friend in a large, echoey building and trying to figure out how far apart you are by shouting. This kind of works, but not well.
The problem with time of flight is that it requires part of the sensor network node to be clocked very fast: on the order of 100 GHz to get millimeter-level localization. This doesn’t seem like a fundamental problem because only a handful of gates need to be clocked this fast; they can latch high-fidelity time values and then slower parts of the processor can digest them at their leisure.
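The clock-rate estimate is just the distance light travels in one clock period; the specific frequencies below are illustrative.

```python
# Range resolution of a time-of-flight timestamp vs. clock rate.
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(clock_hz: float) -> float:
    """Distance light travels in one clock period."""
    return C / clock_hz

for f_ghz in (1, 10, 100, 300):
    print(f"{f_ghz:4d} GHz -> {range_resolution_m(f_ghz * 1e9) * 1000:.1f} mm")
```

A 100 GHz latch gives roughly 3 mm resolution, and about 300 GHz is needed to get to 1 mm, so “on the order of 100 GHz” is the right ballpark.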
Time synchronization is not discussed but the localization capability implies that the QHL net is very tightly time synchronized.
The main thing we know about the QHL’s software is that it is large and complex enough to defeat a years-long “software archeology” effort directed at determining whether the nodes can be trusted (in the end, they cannot). The QHL is self-documenting: it will provide technical manuals when asked, but there are at least three versions of the manuals, some of which are quite coy about the nodes’ actual capabilities. There are plenty of trapdoors. All of this deception is based on the idea that there may be circumstances where it is possible or necessary to trick people into deploying a sensor network of QHLs, and then using its ubiquitous monitoring and actuation capability to take over.
I shouldn’t need to point out that “Programmer-Archaeologist” is a chilling, but very likely, future job description. Programmer-at-Arms is another great job coined by Vinge.
The internal design of the QHL is not discussed, but we can make a few guesses:
- Its skin is dominated by sensing surface.
- Its processing elements are as close as possible to the sensing surface, both to facilitate heat transfer and to reduce the distance that high-bandwidth raw sensor data must travel. Thus, the processors are probably structured as a shell just inside the skin.
- By volume, the interior is dominated by the capacitor, nonvolatile memory, and processing elements. Presumably these can be packed together in clever ways, saving space and weight when compared to a design that segregates them.
- The fractal antenna is spread throughout the interior of the node, but takes up only a small fraction of its volume.
Implications for Sensor Network Research
Among the Qeng Ho localizer’s capabilities, these seem to motivate hard but exciting research problems:
- Reading and inducing signals in nerves and wires that are close to nodes.
- Scaling protocols to millions of nodes.
- Robust, precise, and energy-efficient time-of-flight based localization.
- Efficient and precise omnidirectional beamforming from sub-mm scale antennas.
- Efficient energy harvesting from microwave sources that are well below levels that would harm animals and cause other problems.
- Real-time omnidirectional video.
- Light integration and other distributed image processing across large numbers of nodes.
I’d be remiss if I ignored one of Vinge’s larger messages, which is that cheap, effective sensor nodes lead to a government with the capacity for ubiquitous surveillance, which leads to a police state, which leads to societal collapse or worse. Can we do sensor network research in good conscience? Well, on one hand Vinge makes a valid point. On the other hand, the human race faces many dangers over the next hundred years and this one strikes me as probably not any more serious than a number of others. The problem, as always, is that the effect of a technology on society is tied up in complicated ways with how the technology ends up being used. There are intricate feedback loops and the overall effect is very hard to predict. Our hope, of course, is that in the time frame that sensor network researchers are targeting (5-50 years), the benefits outweigh the costs.
Acknowledgment: Neal Patwari helped me understand time-of-flight localization.