Towards Tinkers

The heroes of Vernor Vinge’s The Peace War are members of a scattered society of tinkers who — without any real industrial base — manage to develop and produce very high-tech devices including fast, small computers. I’m trying to figure out how realistic this is.

The software side seems entirely feasible. Today’s open source community has shown that small groups of talented, motivated people can create enormously sophisticated artifacts.

On the other hand, hardware is a problem. Vinge’s tinkers cannot exist in today’s world where fast computers are made only in billion-dollar fabs. Perhaps it will become possible to grow sophisticated chips using hacked bacteria, or maybe new fabrication technologies such as 3D printing will evolve to the point where they can produce realistic large-scale integrated circuits.

An end point for 3D printing technology is The Diamond Age’s matter compiler, which can produce almost anything based on a plan and a “feed” of energy and atoms. But of course in that book, the feed is a centrally controlled resource. The “seed” technology from the end of The Diamond Age is tinker-friendly, but very far-fetched for now.

The recent near-doubling of hard disk prices due to flooding in Thailand shows how fragile our supply chains are. It’s nice to think that alternate versions of some high-tech products could be produced by small, autonomous groups of people.

Online University

Yesterday someone in my department’s main office got a request from a student to receive credit for taking the now-infamous free online AI course from Stanford. It is routine for a university to award transfer credit for a course taken at a different school, but this case is trickier since a student taking the AI course isn’t enrolled at Stanford and doesn’t get credit there. This post — which will be disorganized because my thinking on this subject is not yet organized — looks at what the Stanford course, the Khan Academy, MIT’s OpenCourseWare initiative, and related efforts might mean for the future of college education.

Will there be a single, best course filling every niche, and everyone just takes that course? The analogy I’d like to make is between online courses and textbooks. Most subject areas are not dominated by a single textbook, but there is usually a small collection of textbooks that, together, are used as a basis for probably 80% of the courses. Personally I’d much rather learn from a textbook than from an online course — listening to someone talk is exceptionally inefficient. Why didn’t mass-market textbooks wipe out universities sometime during the 20th century? Because, of course, taking a class adds value beyond what can be found in the book. This value takes many forms:

  • A course comes as part of a broader “college experience” that many people want.
  • A course is part of an eventual degree that serves as a kind of certification.
  • Instructors are available to provide additional help.
  • Putting classmates in close proximity creates a sense of community and at least in some cases promotes learning.
  • A course is often part of a curriculum that has been designed in an integrated way.
  • A course serves as a forcing function, making it more difficult to put off learning the material.

I think we have to accept the conclusion that universities as we understand them today will be destroyed more or less to the extent that these sources of value can be provided by online education.

Let’s look at a couple of extremes. First, a course like Calculus I — a big lecture course at most universities. The experience of trying to learn integration while sitting in a big, crowded lecture is so bad that watching the lecture online almost seems attractive. It’s not hard to imagine these courses going away over the next 20 years. There seem to be various possibilities for how this will play out. One option is for each big university to offer its own online version of Calc I, but this is inefficient because only a few hundred or a few thousand of its students take the course each year. More likely, courses like this will be handled by a few large organizations (companies or forward-thinking universities) and most institutions will simply contract out Calc I by giving some fraction of the tuition to the course provider. Course providers will make money by scaling — first through outsourcing and increasingly through AI-based techniques for assisting and assessing students. My fear is that these online courses will suck very badly, in the same way that so many web applications suck today. However, realistically, the not-online Calc I course I took 20 years ago sucked too: lectures were boring and recitation was at 7:30am with a TA I truly could not understand.

At the other extreme from big service courses, we have things like the “Senior Software Project” course offered by many departments or the Android Projects class that I’m teaching now. These have a large amount of instructor involvement, a large amount of in-person interaction between students, grading cannot easily be automated, etc. I don’t want to say that online versions of these classes are impossible, but certainly they would have a very different character than the current versions. These courses represent the part of a college education that directly descends from the old apprenticeship system and it would — in the long run — be a big problem if this part of the college experience went away. Of course the most serious students would still apprentice themselves through hackerspaces, internships, and such — but people in for example the 50th through 80th percentiles would likely be left poorly prepared for their future careers by an online-only education.

The picture I am painting is that at least in the near term, universities and traditional college degrees survive, but some of their bread and butter gets eaten by highly scalable providers of low-level courses. There will be fierce competition among providers — similar to the current competition between textbook providers, but the stakes will be higher. As we move forward, some fraction of students — in particular, non-traditional students and those who otherwise don’t want the traditional college experience — will move towards online-only degree programs. At first these will provide an inferior education and therefore they will be sought out by students who just cannot make regular classes work, or who are primarily interested in a degree for its own sake. Perhaps, as time passes, telepresence and related technologies will manage to become solid enough that a real education can be attained online.

A Fire Upon The Deep — Retrospective and E-book

Over the last few weeks I re-read A Fire Upon The Deep, surely one of the top five works of computer science fiction. The proximate reason for the re-read was the upcoming release of a sequel, Children of the Sky, which I am impatiently awaiting.

I read the “special edition,” which contains about 1500 of the author’s annotations. This was a bit of a mixed bag. First, there are so many notes that it became tiresome to flip (electronically) back and forth between them and the main text. Second, a large fraction of the annotations are irrelevant because they apply to obsolete drafts, they contain formatting information for the publisher, or they are just too terse or arcane to figure out. Since Vinge went to the trouble of coding his annotations so they would be greppable, it would have been great if the special edition had exploited these tags — for example, by giving me the option of ignoring notes about typesetting. Around 10% of the annotations contain interesting material such as background information or deleted text; these show that Vinge worked very hard to make the story consistent and physically plausible, and that he was already thinking about the sequel 20 years ago.

One of the fun conceits in A Fire is that galaxy-scale bandwidth and latency constraints force most communication to look and feel just like Usenet did around 1990. While some of Vinge’s Usenet-centric humor (in particular the netnews headers) has become stale, much of it works if translated into terms of today’s message boards. In particular, the “net of a million lies” aspect is a lot more relevant today than it was a few decades ago.

Vinge is probably best known for coining the term technological singularity based on the idea that as a society’s technical prowess increases, progress becomes ever-faster until so much change occurs in such a short time that meaningful predictions about the end state are impossible. This notion does not play a role in A Fire. I’d argue that Vinge’s vision of the future of computer security would be a more appropriate lasting legacy than his thoughts about the singularity. This thread is present in most of his work, but in A Fire it is played up at a very large scale, with entire civilizations being wiped out or worse due to inappropriate network security measures. I shouldn’t need to add that we’re a lot closer to this situation now than we were when the book was written. This sequence stuck in my head even the first time I read the book:

The new Power had no weapons on the ground, nothing but a comm laser. That could not even melt steel at the frigate’s range. No matter, the laser was aimed, tuned civilly on the retreating warship’s receiver. No acknowledgment. The humans knew what communication would bring. The laser light flickered here and there across the hull, lighting smoothness and inactive sensors, sliding across the ship’s ultradrive spines. Searching, probing. The Power had never bothered to sabotage the external hull, but that was no problem. Even this crude machine had thousands of robot sensors scattered across its surface, reporting status and danger, driving utility programs. Most were shut down now, the ship fleeing nearly blind. They thought by not looking that they could be safe.

One more second and the frigate would attain interstellar safety.

The laser flickered on a failure sensor, a sensor that reported critical changes in one of the ultradrive spines. Its interrupts could not be ignored if the star jump were to succeed. Interrupt honored. Interrupt handler running, looking out, receiving more light from the laser far below…. a backdoor into the ship’s code, installed when the newborn had subverted the humans’ groundside equipment….

…. and the Power was aboard, with milliseconds to spare. Its agents — not even human equivalent on this primitive hardware — raced through the ship’s automation, shutting down, aborting. There would be no jump. Cameras in the ship’s bridge showed widening of eyes, the beginning of a scream. The humans knew, to the extent that horror can live in a fraction of a second.

There would be no jump. Yet the ultradrive was already committed. There would be a jump attempt, without automatic control a doomed one. Less than five milliseconds till the jump discharge, a mechanical cascade that no software could finesse. The newborn’s agents flitted everywhere across the ship’s computers, futilely attempting a shutdown. Nearly a light-second away, under the gray rubble at the High Lab, the Power could only watch. So. The frigate would be destroyed.

Apparently all bets are off when Satan is putting back doors in your critical control code.

Aside from the self-imposed problem of looking at every annotation, reading A Fire Upon the Deep on an iPad was a fairly friendly experience. Resolution and contrast were adequate. I liked how easy it was to listen to music while reading. I probably wouldn’t have been able to read in full sunlight, but then again I dislike reading regular books in full sunlight. The iPad is quite a bit heavier than a paperback and it has an annoying way of deciding to change the page orientation when I didn’t want that to happen. I’d consider reading another e-book but am not in a hurry.

One reason I read SF is that it helps me learn to think about alternate and future scenarios. This requires the author not only to have good ideas, but to be able to follow through with a world built on the consequences of those ideas. In terms of his ability to do these things, Vinge is one of the best modern SF authors. Only 50 days until Children of the Sky comes out…

Does a Simulation Really Need to Be Run?

At some point we’ll be able to run a computer simulation that contains self-aware entities. In this piece I’m not going to worry about little details such as how to tell if a simulated entity is self-aware or whether it’s even possible to run such a simulation. The goal, rather, is to look into some philosophical problems posed by simulations.

A computer simulation takes a model in some initial configuration and evolves the state of the model through repeated programmatic application of rules. Usually we run a simulation in order to better understand the dynamics of some process that is hard to study analytically or experimentally. The straightforward way to implement a simulator is to represent the system state in some explicit fashion, and to explicitly run the rule set on every element of the state at every time step. It seems clear that if our simulation includes a self-aware entity, this sort of execution is sufficient to breathe life into the entity. But what about other implementation options?

First, our simulator might be mechanically optimized by a compiler or similar tool that would combine or elide rules in certain situations. For example, if the simulation state contains a large area of empty cells, the optimizer might be able to avoid running the rule set at all in that part of the space. Can the entity being simulated “feel” or otherwise notice the compiler optimizations? No — as long as the compiler is correct, its transformations are semantics preserving: they do not affect the computation being performed. Of course a suitable definition of “do not affect” has to be formulated; typically it involves defining a set of externally visible program behaviors such as interactions with the operating system and I/O devices.

Compiler optimizations, however, are not the end of the story — algorithmic optimizations can be much more aggressive. I’ll use Conway’s Game of Life as the example. First a bit of background: Life is a two-state, rectangular cellular automaton governed by these rules (here I’m quoting from Wikipedia):

  • Any live cell with fewer than two live neighbors dies, as if caused by under-population.
  • Any live cell with two or three live neighbors lives on to the next generation.
  • Any live cell with more than three live neighbors dies, as if by overcrowding.
  • Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.

Life has been shown to be Turing complete, so clearly self-aware entities can be encoded in a Life configuration if they can be algorithmically simulated at all. A straightforward implementation of Life maintains two copies of the Life grid, using one bit per cell; at every step the rules are applied to every cell of the old grid, with the results being placed into the new grid. At this point the old and new grids are swapped and the simulation continues.
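
To make that concrete, here is a minimal Python sketch of the straightforward approach (the function and the glider example are mine, not taken from any particular Life package): the four rules above are applied to every cell at every step, a fresh grid is computed from the old one, and cells outside the grid are treated as dead.

    def life_step(old):
        """One step of Conway's Life; cells outside the grid are treated as dead."""
        rows, cols = len(old), len(old[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # count live neighbors, skipping the cell itself and off-grid cells
                live = sum(old[r + dr][c + dc]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0)
                           and 0 <= r + dr < rows and 0 <= c + dc < cols)
                # birth with exactly three neighbors, survival with two or three
                new[r][c] = 1 if live == 3 or (old[r][c] and live == 2) else 0
        return new

    # a glider, evolved for a few generations
    grid = [[0, 1, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [1, 1, 1, 0, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0]]
    for _ in range(4):
        grid = life_step(grid)

Allocating a fresh grid each step rather than swapping two preallocated buffers doesn’t change the essential point: every rule is applied to every cell at every time step.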

Hashlife is a clever optimization that treats the Life grid as a hierarchy of quadtrees. Because the maximum speed of signal propagation in a Life configuration is one cell per step, the future of a square’s interior is determined entirely by the square’s current contents, so whole squares can be evolved multiple steps into the future and the results memoized by hash code; identical squares never need to be evolved twice. Hashlife is amazing to watch: it starts out slow but as the hashtable fills up, it suddenly “explodes” into exponential progress. I recommend Golly. Hashlife is one of my ten all-time favorite algorithms.
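
The full algorithm is recursive and too much for a blog post, but the core trick, memoizing evolution results keyed by a block’s contents so that identical blocks are never evolved twice, fits in a few lines. The toy Python sketch below (my own simplification, not Golly’s code) caches the 2x2 center of every 4x4 block one step into the future; real Hashlife applies the same idea to hash-consed quadtree nodes and advances a 2^k-by-2^k square by 2^(k-2) steps at a time.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def center_after_one_step(block):
        """block is a 4x4 tuple of tuples of 0/1 cells; being hashable, it serves
        directly as the memoization key. Returns the 2x2 center one Life step
        later, which depends only on the block itself, since signals travel at
        most one cell per step."""
        def next_state(r, c):
            live = sum(block[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            return 1 if live == 3 or (block[r][c] and live == 2) else 0
        return ((next_state(1, 1), next_state(1, 2)),
                (next_state(2, 1), next_state(2, 2)))

Identical 4x4 blocks anywhere in the grid, or at any time step, hit the cache instead of being recomputed; this is exactly the redundancy Hashlife eliminates, just without the recursion that produces its exponential speedups.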

The weird thing about Hashlife is that time progresses at different rates at different parts of the grid. In fact, two cells that are separated by distance n can be up to n time steps apart. Another interesting thing is that chunks of the grid may evolve forward by many steps without the intermediate steps being computed explicitly. Again we ask: can self-aware Life entities “feel” the optimization? It would seem that they cannot: since the rules of their universe are not being changed, their subjective experience cannot change. (The Hashlife example is from Hans Moravec’s Mind Children.)

If Hashlife is a viable technology for simulating self-aware entities, can we extend its logic to simply replace the initial simulation state with the final state, in cases where we can predict the final state? This would be possible for simulated universes that provably wind down to a known steady state, for example due to some analogue of the second law of thermodynamics. The difference between Hashlife and “single simulation step to heat death” is only a matter of degree. Does this fact invalidate Hashlife as a suitable algorithm for simulating self-aware creatures, or does it imply that we don’t actually need to run simulations? Perhaps to make a simulation “real” it is only necessary to set up its initial conditions. (Greg Egan’s Permutation City is about this idea.)

Aside: Why don’t we have processor simulators based on Hashlife-like ideas? Although ISA-level simulators do not have the kind of maximum speed of signal propagation seen in cellular automata, the programs they run do have strong spatial and temporal locality, and I bet it’s exploitable in some cases. I’ve spent long hours waiting for simulations of large collections of simple processors, daydreaming about all of the stupid redundancy that could be eliminated by a memoized algorithm.

Techniques like Hashlife only go so far; to get more speedup we’ll need parallelism. In this scenario, the simulation grid is partitioned across processors, with inter-processor communication being required at the boundaries. An unfortunate thing about a straightforward implementation of this kind of simulation is that processors execute basically in lock step: at least at the fringes of the grid, no processor can be very far ahead of its neighbors. The synchronization required to keep processors in lock step typically causes slowdown. Another ingenious simulation speedup (developed, as it happens, around the same time as Hashlife) is Time Warp, which relaxes the synchronization requirements, permitting a processor to run well ahead of its neighbors. This opens up the possibility that a processor will at some point receive a message that violates causality: it needs to be executed in the past. Clearly this is a problem. The solution is to roll back the simulation state to the time of the message and resume execution from there. If rollbacks are infrequent, overall performance may increase due to improved asynchrony. This is a form of optimistic concurrency and it can be shown to preserve the meaning of a simulation in the sense that the Time Warp implementation must always return the same final answer as the non-optimistic implementation.
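
Below is a minimal, hypothetical Python sketch of the rollback half of the idea for a single logical process. The class and its methods are my own illustration; a real Time Warp implementation also sends anti-messages to cancel output produced during a mis-speculated run, and computes global virtual time to decide which old checkpoints can be discarded.

    class LogicalProcess:
        """Sketch of Time Warp-style rollback for one logical process.
        Simplified: no anti-messages, no global virtual time, no fossil collection."""

        def __init__(self):
            self.state = {}             # toy simulation state: counts of events seen
            self.lvt = 0                # local virtual time
            self.snapshots = [(0, {})]  # (timestamp, copy of state) checkpoints
            self.log = []               # (timestamp, event) pairs already executed

        def receive(self, timestamp, event):
            to_run = [(timestamp, event)]
            if timestamp < self.lvt:    # straggler message: causality violated
                to_run += self.rollback(timestamp)
            for ts, ev in sorted(to_run):
                self.execute(ts, ev)

        def execute(self, timestamp, event):
            self.state[event] = self.state.get(event, 0) + 1
            self.lvt = timestamp
            self.log.append((timestamp, event))
            self.snapshots.append((timestamp, dict(self.state)))

        def rollback(self, timestamp):
            # restore the newest checkpoint taken before the straggler's timestamp
            while len(self.snapshots) > 1 and self.snapshots[-1][0] >= timestamp:
                self.snapshots.pop()
            t, saved = self.snapshots[-1]
            self.state, self.lvt = dict(saved), t
            # everything executed speculatively past that point must be redone
            redo = [(ts, ev) for ts, ev in self.log if ts >= timestamp]
            self.log = [(ts, ev) for ts, ev in self.log if ts < timestamp]
            return redo

    lp = LogicalProcess()
    lp.receive(10, "rain")
    lp.receive(20, "fire")
    lp.receive(15, "flood")   # arrives late: roll back to t=10, then replay

The final state is the same as if the messages had arrived in timestamp order, which is the sense in which Time Warp preserves the meaning of the simulation, but the events past the rollback point really were executed twice.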

Time Warp places no inherent limit on the amount of speculative execution that may occur — it is possible that years of simulation time would need to be rolled back by the arrival of some long-delayed message. Now we have the possibility that a painful injury done to a simulated entity will happen two or more times due to poorly-timed rollbacks. Even weirder, an entity might be dead for some time before being brought back to life by a rollback. Depending on the content of the message from the past, it might not even die during re-execution. Do we care about this? Is speculative execution amoral? If we suppress time warping due to moral considerations, must we also turn off processor-level speculative execution? What if all of human history is a speculative mistake that will be wiped out when the giant processor in the sky rolls back to some prehistoric time in order to take an Earth-sterilizing gamma ray burst into account? Or perhaps, on the other hand, these speculative executions don’t lead to “real experiences” for the simulated entities. But why not?

The Hashlife and Time Warp ideas, pushed just a little, lead to very odd implications for the subjective experience of simulated beings. In question form:

First, what kind of simulation is required to generate self-awareness? Does a straightforward simulator with an explicit grid and explicit rule applications do it? Does Hashlife? Is self-awareness generated by taking the entire simulation from initial state to heat death in a single step? Second, what is the effect of speculative execution on self-aware entities? Do speculative birth and death, pain and love count as true experiences, or are they somehow invalid?

These questions are difficult — or silly — enough that I have to conclude that there’s something fundamentally fishy about the entire idea of simulating self aware entities. (Update: Someone told me about the Sorites paradox, which captures the problem here nicely.) Let’s look at a few possibilities for what might have gone wrong:

  1. The concept of self-awareness is ill-defined or nonsensical at a technical level.
  2. It is not possible to simulate self-aware entities because they rely on quantum effects that are beyond our abilities to simulate.
  3. It is not possible to simulate self-aware entities because self-awareness comes from souls that are handed out by God.
  4. We lack programming languages and compilers that consider self-awareness to be a legitimate side-effect that must not be optimized away.
  5. With respect to a simulation large enough to contain self-aware entities, effects due to state hashing and speculation are microscopic — much like quantum effects are to us — and are necessarily insignificant at the macroscopic level.
  6. All mathematically describable systems already exist in a physical sense (see Max Tegmark’s The Mathematical Universe) and therefore the fire, as it were, has already been breathed into all possible world-describing equations. Thus, while simulations give us windows into these other worlds, they have no bearing on the subjective experiences of the entities in those worlds.

The last possibility is perhaps a bit radical but it’s the one that I prefer: first because I don’t buy any of the others, and second because it avoids the problem of figuring out why the equations governing our own universe are special — by declaring that they are not. It also makes simulation arguments moot. One interesting feature of the mathematical universe is that even the very strange universes, such as those corresponding to simulations where we inject burning bushes and whatnot, have a true physical existence. However, the physical laws governing these would seem to be seriously baroque and therefore (assuming that the multiverse discourages complexity in some appropriate fashion) these universes are fundamentally less common than those with more tractable laws.

Do Small-RAM Devices Have a Future?

Products built using microcontroller units (MCUs) often need to be small, cheap, and low-power. Since off-chip RAM eats dollars, power, and board space, most MCUs execute entirely out of on-chip RAM and flash, and in many cases don’t have an external memory bus at all. This piece is about small-RAM microcontrollers, by which I roughly mean parts that use only on-chip RAM and that cannot run a general-purpose operating system.

Although many small-RAM microcontrollers are based on antiquated architectures like Z80, 8051, PIC, and HCS12, the landscape is changing rapidly. More capable, compiler-friendly parts such as those based on ARM’s Cortex M3 now cost less than $1 and these are replacing old-style MCUs in some new designs. It is clear that this trend will continue: future MCUs will be faster, more tool-friendly, and have more storage for a given power and/or dollar budget. Today’s questions are:

Where does this trend end up? Will we always be programming devices with KB of RAM or will they disappear in 15, 30, or 45 years?

I’m generally interested in the answer to these questions because I like to think about the future of computing. I’m also specifically interested because I’ve done a few research projects (e.g. this and this and this) where the goal is to make life easier for people writing code for small-RAM MCUs. I don’t want to continue doing this kind of work if these devices have no long-term future.

Yet another reason to be interested in the future of on-chip RAM size is that the amount of RAM on a chip is perhaps the most important factor in determining what sort of software will run. Some interesting inflection points in the RAM spectrum are:

  • too small to target with a C compiler (< 16 bytes)
  • too small to run multiple threads (< 128 bytes)
  • too small to run a garbage collected language (< 128 KB)
  • too small to run a stripped-down general-purpose OS such as μClinux (< 1 MB)
  • too small to run a limited configuration of a full-fledged OS (< 32 MB)

These numbers are rough. It’s interesting that they span six orders of magnitude — a much wider range of RAM sizes than is seen in desktops, laptops, and servers.

So, what’s going to happen to small-RAM chips? There seem to be several possibilities.

Scenario 1: The incremental costs of adding transistors (in terms of fabrication, effect on packaging, power, etc.) eventually become so low that small-RAM devices disappear. In this future, even the tiniest 8-pin package contains an MCU with many MB of RAM and is capable of supporting a real OS and applications written in PHP or Java or whatever. This future seems to correspond to Vinge’s A Deepness in the Sky, where the smallest computers, the Qeng Ho localizers, are “scarcely more powerful than a Dawn Age computer.”

Scenario 2: Small-RAM devices continue to exist but they become so deeply embedded and special-purpose that they play a role similar to that played by 4-bit MCUs today. In other words — neglecting a very limited number of specialized developers — they disappear from sight. This scenario ends up feeling very similar to the first.

Scenario 3: Small-RAM devices continue to exist into the indefinite future; they just keep getting smaller, cheaper, and lower-power until genuine physical limits are reached. Eventually the small-RAM processor is implemented using nanotechnology and it supports applications such as machines that roam around our bloodstreams, or even inside our cells, fixing things that go wrong there. As an aside, I’ve picked up a few books on nanotechnology to help understand this scenario. None has been very satisfying, and certainly none has gone into the kind of detail I want to see about the computational elements of nanotechnology. So far the best resource I’ve found is Chapter 10 of Freitas’s Nanomedicine Volume 1.

This third scenario is, I think, the most interesting case, not only because small-RAM devices are lovable, but also because any distant future in which they exist is pretty interesting. They will be very small and very numerous — bear in mind that we already manufacture more MCUs per year than there are people on Earth. What sensors and actuators will these devices be connected to? What will their peripherals and processing units look like? How will they communicate with each other and with the global network? How will we orchestrate their activities?

Externally Relevant Open Problems in Computer Science

Most academic fields have some externally relevant problems: problems whose solutions are interesting or useful to people who are totally ignorant of, and uninterested in, the field itself. For example, even if I don’t want to know anything about virology, I would still find a cure for the common cold to be an excellent thing. Even if I failed calculus, I would still find it interesting when physicists invent a room-temperature superconductor or figure out why the universe exists.

This piece is about computer science’s externally relevant open problems. I have several reasons for exploring this topic. First, it’s part of my ongoing effort to figure out which problems matter most, so I can work on them. Second, I review a lot of proposals and papers and often have a strong gut reaction that a particular piece of work is either useful or useless. An idea I wanted to explore is that a piece of research is useless precisely when it has no transitive bearing on any externally relevant open problem.

A piece of work has “transitive bearing” on a problem if it may plausibly make the problem easier to solve. Thus, an esoteric bit of theoretical physics may be totally irrelevant or it may be the key to room temperature superconductivity. Of course, nobody thinks they’re doing irrelevant work. Nevertheless, it’s hard to escape the conclusion that a lot of CS (and other) research is in fact irrelevant. My third motivation for writing this piece is that I think everyone should do this sort of analysis on their own, rather than just believing the accepted wisdom about which research topics are worth working on. The analysis is important because the accepted wisdom is — at best — a lagging indicator.

Below is my list. I’ll be interested to see what people think I’ve missed (but note that I’m deliberately leaving out problems like P vs. NP which I don’t think are externally relevant).

Giving More People Meaningful Control Over Computation

There are plenty of people — scientists, analysts, managers, etc. — who are not programmers but who would benefit from gaining better control over computers. I’m using the phrase “meaningful control over computation” as a bit of a hedge because it’s clear that most of these people don’t have 2-5 years to spare in which to become competent programmers. The goal is to give people the power that they need to solve their problems without miring them in the Turing tarpit. A good example is the class of spreadsheet programming languages which expose a useful kind of computation without most of the problems traditionally associated with programming. Overall, this problem is maybe 1/3 programming language design, 1/3 user interface design, and 1/3 domain specific.

Trustworthy Automation

Can we create computer systems that do what we want them to do? This problem encompasses both security and reliability. It’s a good problem to work on because solutions not only have short-term economic benefit but in the long run directly support getting humans off the rock, which as I’ve said before is something we need to be working very hard on.

The whole problem of specifying what we want systems to do is enormously hard, and in fact we generally have no precise ideas about this. Even highly mathematical objects like compilers are most commonly specified in human language, if at all. Moreover, the programming languages that we use to specify computations contain elements such as floating point computations, fixed-width integers, and excessive use of side effects — all of which seem designed to impede reasoning.

Intelligent Systems

Can computers be created that interface gracefully with humans? How can augmented cognition be used to sidestep limitations of humans and computers? Which sub-problems of AI are achievable and useful? Here I mean “intelligent” literally, not in the sense it is usually used, where it means “not as obviously bad as the previous thing.” Of course, the goal of AI may turn out to conflict with “trustworthy automation” but we’ll have to cross that bridge when we come to it.

Observing, Simulating, Modeling, and Predicting Everything

The universe produces a gigantic amount of data at scales from atoms to galaxies. Luckily, the theoretical bounds on the amount of computation that can be done using the available energy are very generous. The ugly little space heaters with which we currently compute have not even started to scratch the surface of what’s possible.

This one is pretty vague, but I’m not sure right now how to improve it.

Further Reading

Vernor Vinge and Hans Moravec have created some of the best unconstrained thinking about computers. Asimov was pretty good in his day, but Sladek nailed it. The transhumanist material on the web seems pretty bad, as is Kurzweil’s stuff. Dertouzos’s Unfinished Revolution was not inspiring. There must be more (good and bad) writing on this subject, please make suggestions.

Update from evening of 5/10: This list of external problems is of course subjective. I’d be very interested to see other people’s blog posts describing what they think CS has to offer the non-CS world.

How Can Computer Science Help Us Get To Mars?

SpaceX thinks it can get a person to Mars within 20 years. This seems optimistic, given that SpaceX does not enjoy the significant chunk of the USA’s federal budget that permitted NASA to get to the Moon on a relatively short time scale. Nevertheless, it’s a good goal, and presumably 50 years of improvements in space systems engineering will make up for some of the shortfall.

Since I strongly believe that getting humans off the rock needs to be a priority, I want to help. What are the CS problems that (1) can be addressed in a University and (2) will directly support getting off the rock? Since much of my research is aimed towards increasing embedded software reliability, I suspect that I’m already in the right ballpark, but once we get into the specifics there are a lot of ways to go astray. Perhaps I should see if I can do my next sabbatical at SpaceX. I recently read a book on how space systems can go wrong and (no surprise) there are very many ways, some of them software-related.