Products built using microcontroller units (MCUs) often need to be small, cheap, and low-power. Since off-chip RAM eats dollars, power, and board space, most MCUs execute entirely out of on-chip RAM and flash, and in many cases don’t have an external memory bus at all. This piece is about small-RAM microcontrollers, by which I roughly mean parts that use only on-chip RAM and that cannot run a general-purpose operating system.
Although many small-RAM microcontrollers are based on antiquated architectures like Z80, 8051, PIC, and HCS12, the landscape is changing rapidly. More capable, compiler-friendly parts, such as those based on ARM’s Cortex-M3, now cost less than $1, and they are replacing old-style MCUs in some new designs. It is clear that this trend will continue: future MCUs will be faster, more tool-friendly, and have more storage for a given power and/or dollar budget. Today’s questions are:
Where does this trend end up? Will we always be programming devices with a few KB of RAM, or will these parts disappear in 15, 30, or 45 years?
I’m generally interested in the answers to these questions because I like to think about the future of computing. I’m also specifically interested because I’ve done a few research projects (e.g. this and this and this) where the goal is to make life easier for people writing code for small-RAM MCUs. I don’t want to continue doing this kind of work if these devices have no long-term future.
Yet another reason to be interested in the future of on-chip RAM size is that the amount of RAM on a chip is perhaps the most important factor in determining what sort of software will run. Some interesting inflection points in the RAM spectrum are:
- too small to target with a C compiler (< 16 bytes)
- too small to run multiple threads (< 128 bytes)
- too small to run a garbage collected language (< 128 KB)
- too small to run a stripped-down general-purpose OS such as μClinux (< 1 MB)
- too small to run a limited configuration of a full-fledged OS (< 32 MB)
These numbers are rough. It’s interesting that they span six orders of magnitude — a much wider range of RAM sizes than is seen in desktops, laptops, and servers.
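To make the multithreading threshold above concrete, here is a minimal C sketch of the bookkeeping a toy round-robin kernel would need on a hypothetical 128-byte-RAM part. The structure layout, stack depths, and byte counts are illustrative assumptions, not figures from any real RTOS.

```c
/* Toy illustration: per-thread state on a hypothetical 128-byte-RAM MCU.
   All sizes below are made up for the example. */
#include <stdint.h>

#define NUM_THREADS 2
#define STACK_WORDS 16                 /* 32 bytes of stack per thread */

struct context {
    uint16_t pc;                       /* saved program counter        */
    uint16_t sp;                       /* saved stack pointer          */
    uint16_t regs[4];                  /* a few callee-saved registers */
};                                     /* 12 bytes per thread          */

static uint16_t stacks[NUM_THREADS][STACK_WORDS];  /* 64 bytes */
static struct context tcb[NUM_THREADS];            /* 24 bytes */
static uint8_t current;                             /*  1 byte  */

/* Pick the next thread round-robin. Even this skeleton already uses
   89 of the 128 bytes, leaving little room for application data; the
   actual register save/restore would be target-specific assembly. */
void yield(void)
{
    current = (uint8_t)((current + 1) % NUM_THREADS);
}
```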
So, what’s going to happen to small-RAM chips? There seem to be several possibilities.
Scenario 1: The incremental costs of adding transistors (in terms of fabrication, effect on packaging, power, etc.) eventually become so low that small-RAM devices disappear. In this future, even the tiniest 8-pin package contains an MCU with many MB of RAM and is capable of supporting a real OS and applications written in PHP or Java or whatever. This future seems to correspond to Vinge’s A Deepness in the Sky, where the smallest computers, the Qeng Ho localizers, are “scarcely more powerful than a Dawn Age computer.”
Scenario 2: Small-RAM devices continue to exist but they become so deeply embedded and special-purpose that they play a role similar to that played by 4-bit MCUs today. In other words — neglecting a very limited number of specialized developers — they disappear from sight. This scenario ends up feeling very similar to the first.
Scenario 3: Small-RAM devices continue to exist into the indefinite future; they just keep getting smaller, cheaper, and lower-power until genuine physical limits are reached. Eventually the small-RAM processor is implemented using nanotechnology and it supports applications such as machines that roam around our bloodstreams, or even inside our cells, fixing things that go wrong there. As an aside, I’ve picked up a few books on nanotechnology to help understand this scenario. None has been very satisfying, and certainly none has gone into the kind of detail I want to see about the computational elements of nanotechnology. So far the best resource I’ve found is Chapter 10 of Freitas’s Nanomedicine Volume 1.
This third scenario is, I think, the most interesting case, not only because small-RAM devices are lovable, but also because any distant future in which they exist is pretty interesting. They will be very small and very numerous — bear in mind that we already manufacture more MCUs per year than there are people on Earth. What sensors and actuators will these devices be connected to? What will their peripherals and processing units look like? How will they communicate with each other and with the global network? How will we orchestrate their activities?
19 responses to “Do Small-RAM Devices Have a Future?”
I think the answer is that small-RAM devices will continue to exist but will do so by taking over new domains. For example, if a 1 KB MCU were to cost ~$0.01, I think you would see it going into things like watches, simple calculators, and even flashlights.
IIRC, the first two are currently built using very simple ASICs. As for flashlights, the current generation of LED flashlights likely also contains ASICs to control their multiple modes (high, low, blink, strobe, etc.), their power regulators, and whatnot. Given a very low-cost, very low-power MCU, I don’t see why it wouldn’t be a better (i.e., cheaper) way to go.
I don’t see microcontrollers going away anytime soon.
One of the things I hate about microcontroller vendors is the lack of microcontrollers with large amounts of onboard RAM.
Most microcontrollers have far more internal flash than RAM, like 512 KB flash and 64 KB RAM. I wish I could find a part with 32 KB flash and 1 MB RAM.
Some projects need more RAM, but you don’t want to waste the PCB space and microcontroller pins to add it externally. I would rather have a bunch of internal RAM and use serial flash to load my program. The big benefit is that you can load and debug programs a lot more easily when they sit in RAM.
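A hedged sketch of the load-from-serial-flash idea, in C: a small boot stub copies the application image from an external SPI flash into on-chip RAM and jumps to it. The addresses, the image size, and the spi_flash_read() driver call are hypothetical placeholders, not a real vendor API.

```c
/* Hypothetical boot stub: copy the application out of SPI flash into
   on-chip RAM, then run it from RAM. Addresses and the SPI driver are
   placeholders for whatever the real board provides. */
#include <stdint.h>
#include <stddef.h>

#define APP_RAM_BASE   0x20000000u          /* assumed on-chip RAM address */
#define APP_FLASH_OFFS 0x0000u              /* image offset in SPI flash   */
#define APP_MAX_SIZE   (1024u * 1024u)      /* 1 MB of application RAM     */

/* Assumed to be supplied by a board-specific SPI flash driver. */
extern void spi_flash_read(uint32_t offset, uint8_t *dst, size_t len);

void boot_application(void)
{
    spi_flash_read(APP_FLASH_OFFS, (uint8_t *)APP_RAM_BASE, APP_MAX_SIZE);

    /* Jump to the image's entry point. A real loader would also set up
       the stack pointer and vector table, and on Cortex-M the address
       would need its Thumb bit set. */
    void (*entry)(void) = (void (*)(void))(uintptr_t)APP_RAM_BASE;
    entry();
}
```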
As long as there are microwave ovens, washers, dryers, fridges, and other electronic devices, there will be small embedded microcontrollers to do the work. I sure the heck don’t need Linux or Android running my washing machine.
Let us imagine two extreme futures of technology.
In the first, there are breakthroughs in memory, batteries (or power scavenging), and quantum computing. No longer will embedded computing need to worry about power constraints (which currently handicap all architectures that use small RAM to conserve power). Also, the quantum-computing breakthroughs will enable doubly exponential model checkers to zoom into practice, so this will be a bonanza for compiler/specification tools.
The second extreme is a world with a lingering economic downturn and persistent energy constraints on using larger amounts of memory for low-power embedded applications. Note that your Scenario 1 doesn’t worry about economic costs (say, the cost of building yet more high-cost fabs to get the technology needed).
Scenario 2 reminds me of the “why not use a smartphone everywhere” strategy. Why not just wait until the tech advances to make this possible? Sort of like the wait-for-faster-hardware (wink wink) approach to NP-hard problems: just let Moore’s Law take hold. Implicit here is the idea that the market for limited embedded devices is just not so interesting and may even shrink. What evidence do we have either way?
As for Scenario 3, in what sense will this be interesting? Yes, there would be more devices, big numbers, lots of connectivity. Are big numbers by themselves interesting? There are more of many things and commodities than ever before, but why is this interesting? What ideas or intrigue are you getting at? Is there some Kurzweilian Singularity in the offing?
Just thought I’d mention this: http://www.ccs.neu.edu/home/stamourv/picobit-ifl.pdf.
I believe it is a GC’d language that fits below the threshold mentioned in the post (but I don’t know too much about this area).
Robby, thanks, I hadn’t seen that paper. Yeah, the resource constants I mention are probably a bit generous. For example people have created fairly realistic Java implementations that run in ~10 KB of RAM.
For things that run in 10 KB it might be possible (maybe even practical) to implement GC via static analysis so that it can be as good as hand-written malloc/free code.
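As an illustration of what that end state can look like (this is not the technique from the PICOBIT paper linked above), here is a C sketch of allocation that is fully decided before run time: a fixed, statically sized pool with a free list, which a sufficiently clever analysis might emit in place of a general heap. The node type and pool size are made up for the example.

```c
/* Statically sized object pool: no heap, no GC, O(1) alloc and free.
   The node type and POOL_SIZE are arbitrary example choices. */
#include <stddef.h>
#include <stdint.h>

struct node {
    struct node *next;
    uint16_t value;
};

#define POOL_SIZE 8                     /* fixed at compile time */

static struct node pool[POOL_SIZE];
static struct node *free_list;

void pool_init(void)
{
    for (size_t i = 0; i + 1 < POOL_SIZE; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_SIZE - 1].next = NULL;
    free_list = &pool[0];
}

struct node *pool_alloc(void)           /* NULL when the pool is empty */
{
    struct node *n = free_list;
    if (n)
        free_list = n->next;
    return n;
}

void pool_free(struct node *n)
{
    n->next = free_list;
    free_list = n;
}
```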
Sorry, but I just had to point out the gross error that is often made. Garbage collected ≠ managed; in fact, these two properties are completely independent. Managed means that the compiled language is controlled by some sort of language runtime, whereas garbage collected means that memory is freed for you.
Sure, managed languages usually have GCs and vice versa, but that is not always true.
Small-RAM devices will be around for a while. While cost matters, the biggest problem is leakage! If you try to get down to nanowatts, RAM is the major energy sink when you have to duty-cycle.
@Jason Turner: As an example of a language with a GC but that is not managed: http://www.digitalmars.com/d/
TR, I don’t really believe in the singularity, though it’s fun to think about.
I think the interesting thing about large numbers of MCUs is that new styles of programming and otherwise interfacing with them become possible (and probably necessary). So I guess it’s a pretty nerdy kind of “interesting.”
Thomas, do you not foresee us getting RAM that consumes zero power when it’s not being used? I don’t know much about the HW side but I like to assume this is coming in the not-too-distant future.
Jesus said, “the poor will always be with you.” He might as well have been referring to small-memory devices. I go for scenario 3.
Jason, “managed language” is a recently-coined term that seems to be mainly used to describe Microsoft technologies. At least, I haven’t seen a good definition that is independent of MS stuff. Your definition isn’t good because “controlled by some sort of language runtime” is hopelessly vague. Does “managed” mean anything more than “safe” or “type safe”?
We’d better avoid the term “managed” until we can agree upon its meaning, but it is clear that the lack of GC does not necessarily imply manual memory management (in the sense of malloc/free) or being memory-unsafe.
One example might be Nova, the language for network processors by Lal George and Matthias Blume. Apparently, they went out of their way to avoid dynamic memory management (at least for control).
It’s unlikely that small-RAM parts will ever disappear, at least not until RAM takes up no physical space. I don’t see us defeating the laws of physics any time soon.
Part of the reason that today’s low-end microcontrollers are so cheap is that they can be fabricated by the thousands on a single silicon wafer. The wafer and fabrication steps are the most significant costs involved, so the final chip cost tracks die area: smaller features mean more chips per wafer, which means a lower cost per chip.
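To put toy numbers on that argument, here is a short C program; the wafer price, wafer area, and die areas are illustrative round figures, and edge loss, yield, packaging, and test costs are all ignored.

```c
/* Toy cost-per-die arithmetic: fixed wafer cost, shrinking die area.
   All numbers are illustrative, not real foundry pricing. */
#include <stdio.h>

int main(void)
{
    const double wafer_cost = 3000.0;         /* dollars per wafer            */
    const double wafer_area = 70000.0;        /* mm^2, roughly a 300 mm wafer */
    const double die_areas[] = { 4.0, 1.0 };  /* mm^2: older vs. shrunk die   */

    for (int i = 0; i < 2; i++) {
        double dies = wafer_area / die_areas[i];
        printf("%.1f mm^2 die -> %.0f dies/wafer -> $%.3f per die\n",
               die_areas[i], dies, wafer_cost / dies);
    }
    return 0;
}
```

Under these made-up numbers, shrinking the die from 4 mm² to 1 mm² drops the silicon cost per chip from roughly 17 cents to about 4 cents.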
In addition, the overwhelming majority of current and near-term embedded applications do not require much RAM to implement, as indicated by the huge demand for microcontrollers with very little RAM. Developers wouldn’t be buying these parts if they couldn’t actually use them.
I think it’s likely that all three of your scenarios will come to pass, actually. None of them excludes the possibility of the others.
MCUs in flashlights? Already here:
http://www.kickstarter.com/projects/527051507/hexbright-an-open-source-light
What’s most likely to happen is that ARM will end up taking over most MCUs, including those tiny 4-bit ones, with the amount of memory available slowly ramping up, but not much beyond the couple-of-hundred-KB stage. Microwave ovens, remote controls, etc., just don’t need a full-featured OS.
Some of these devices are now integrated into larger chips. For example, a high-end smartphone will contain several small CPU cores as well as the large one that runs the apps. These run various real-time tasks such as the Wi-Fi/3G protocols, GPS, etc. These cores run tasks for which external RAM is too slow, so they may not be given direct access to the external memory.
I would be surprised if the third scenario happens. The trend in electronics for decades has been towards more integration, not less. The economics of small parts makes little sense, as you have to devote significant silicon area to pins or radios, which is eliminated when the parts are integrated onto a larger SoC. Yes, the individual parts may cost a fraction of a dollar, but it’s the total cost that is interesting, not the cost of a single part.