Belated Introduction

When I started this blog I didn’t write an introductory post because, of course, I had no readers.  Lately Google Reader indicates that I have a bit over 100 subscribers so I thought it might be time for a quick introduction.

For a long time I was uninterested in blogging because I thought the only good blogs were highly focused ones, and I wasn’t interested in writing a blog like that.  Also, I wasn’t interested in taking on a new long-term time commitment since I already have enough of these.  Finally, the nature of my job (and life) means that I’ll likely post in a bursty fashion that would seem annoying to readers.

Obviously I eventually decided to start a blog.  The main triggering event was that I became extremely bored with the FSP blog (reading it is like going to a faculty meeting) which used to be perhaps my favorite blog and was the original source of my observation that a good blog should be focused.  In other words I decided that posting about random topics was OK and perhaps even interesting. Also, I realized that a blog doesn’t need to be a long-term commitment: if I get tired of blogging (and I expect this will happen in some number of years) I’ll just drop it until it seems interesting again.  The advent of good RSS readers means that bursty posting is not a problem (my view of the web solidified in roughly 1996 so I was effectively the last person to become a user of these readers).  Finally, I’m on sabbatical this year and have been taking a bit more time to think about the big picture than I used to.  Thinking, for me, usually leads to the desire to write.  It’s a common flaw among academics.

Anyway, back to the intro.  I’m an associate professor of Computer Science at the University of Utah, coming up on the end of my first sabbatical.  The sabbatical arrived at about the right time: the tenure runup left me feeling a little burned out and ready to think about what kind of directions I’d like to take my career in, in the long run.

I teach courses in computer systems, such as operating systems, embedded systems, and advanced compilers.  I enjoy teaching a lot and it only becomes a drag when students perform poorly or are apathetic. My research is broadly in computer systems and is specifically about compilers and embedded systems.  My work usually centers on software tools and is based on the idea that we can use software to create better software.  It’s a lot of fun and it’s a great thing to be doing from academia, since industry is generally very focused on the next product: companies don’t take the time to create tools that can add a lot of value in the longer run.

I supervise a group that currently has six students.  Running a group is fun for multiple reasons.  First, it’s neat to watch students change from their early, tentative selves into mature and confident researchers.  Second, the students (on a good day) act as amplifiers, getting far more work done than I could ever do alone.

Most of the time I have the best job in the world.  The thing I like most is that I can choose the problems I work on.  If I want to focus for multiple years on something that is of little interest to most people (e.g. compiler bugs), I can do that.  If I wish to bounce among projects in a flighty way, I can do that too.  It’s liberating.

I’m extremely fortunate to have healthy and inquisitive children, Jonas is 5 and Isaac is 3.  It’s hard to imagine life without them.  My wife Sarah is also a professor at Utah.  She, I think, slacked off on her work a bit when the kids were little (out of necessity — Utah’s parental leave policy only solidified after the boys were born) but lately she has been making up for lost time and is working on tons of projects; her career seems to be progressing well.  She’ll be lucky if she can spend another 10-15 years doing good research before (I predict) being roped into some sort of administrative job.

This summer we’ll have lived in Utah for 10 years, and it has been great.  Salt Lake City itself is OK, probably not much different from Indianapolis or Kansas City, but the setting is amazing.  Mountains can be found in all directions, and in fact the state contains entire mountain ranges that few people have heard of (The Wah Wahs?  The Deep Creeks?  The Silver Island Range?).  Just east of the city, and right outside my front door, is the Wasatch Range, climbing 7000′ above the city.  A bit further away are the red rock deserts including the incredible San Rafael Swell, which is barely close enough to visit on a day trip, but really much better suited to 3-4 day car camping trips. Utah’s vast West Desert fills around a third of the state; in this area you can camp out for days and drive hundreds of miles on dirt roads without seeing anyone.  My list of trips to take in Utah never gets shorter since every time I take a trip, I see a half dozen more places to go.  My hiking buddy (and local guidebook author) John Veranth has spent hugely more time outdoors in Utah than I have, and says the same thing is true for him too.

Anyway, to sum up: this blog is probably going to be active for a finite time.  Posting will be bursty and on totally random topics :).

Into the Brooks Range, Part 3

[Continued from Part 1 and Part 2.]

August 6 — We See Bears

Finally we were back to walking a wide river valley not unlike our first day hiking. To stay in the river bed, we had to pass through some dense thickets of willow brush. Since it’s very bad to surprise a brown bear, we made lots of noise. Later, while walking along the riverbank, Eric heard something and motioned for us to stop. Down below on the gravel, a big female bear was standing up on hind legs, making warning noises: we had violated her personal space, which in the arctic encompasses a much larger area than for example in southeast Alaska where the bear density is higher. She calmed down after we stopped coming nearer, and we saw that she had three cubs nearby. Eventually, they wandered off into the willow scrub and we moved on. Later, we camped in a really pretty site up on the river bank.

I hadn’t spent time with brown bears before this trip, and it was interesting to do so. Most of the time, of course, we were managing bear risk as opposed to dealing with actual bears. For example, we carried a separate cook tent and never stored food in our tents overnight. We each had a can of pepper spray at hand basically at all times. Statistically speaking, this stuff is supposedly more effective than carrying a firearm, and certainly it poses less risk to the carrier (though there always seemed to be the possibility of being forced to discharge it into the wind). Even Eric, who has extensive bear experience, was hard-pressed to explain how one might distinguish a mock charge from a real charge before it was too late. A few times we joked that if the bears had their act together, they’d deploy one in front while another snuck up behind us. Fundamentally, though, this kind of tactic is unnecessary since a single large bear could easily kill all of a five-person group like ours. Luckily, the Brooks Range bears are not at all habituated to humans; their suspicion about the new shapes and smells causes them to back off more often than not, and attacks are rare (though not unheard of).

[nggallery id=8]

All photos © William B. Thompson or John Regehr 2009.

August 7 — A Warm Day and a Swim

This was a sunny, warm day with generally easy walking. The Ivishak was finally deep enough to contain plausible fishing holes — Ben had carried his fly rod the whole trip waiting for this. But no luck, it was too early for the arctic char sea run. One excellent location had deep, clear water and Ben, Eric, and I couldn’t resist a quick dip to wash off a week’s worth of grime and sweat. I’d guess the water was around 50-55 degrees: cold enough to trigger the gasp and hyperventilation reflexes, but not producing a strong feeling of impending cardiac arrest.

In the evening we found a gorgeous campsite on the river bank and Ben fished again. Around 11:30 Eric started yelling for Ben to come up to camp: a bear was prowling around on the opposite bank. We watched it foraging for a while: it was acting natural and hadn’t heard us over the river noise. Before turning in we banged some pots and pans to make sure it knew we were there: this got its attention right away and it stood on hind legs to try to figure us out. It lost interest quickly and wandered off, but even so most of us took a pot or pan to bed that night as a noise-maker in case it came back to investigate further. As far as we know, it didn’t come back.

Throughout the trip, everyone else did a better job than I did in spotting animals; my vision is about 20/50 and I decided not to wear corrective glasses most of the time. Also, as Sarah enjoys pointing out, I’m not the most observant person in the world. Eric on the other hand has 20/15 vision and his job depends on spotting wildlife in difficult conditions. Throughout the trip we were seeing plenty of caribou and raptors plus a single moose; these sightings quickly became routine and I’m only mentioning the more interesting ones.

[nggallery id=9]

All photos © William B. Thompson or John Regehr 2009.

August 8 — Last Day Walking and a Wedding

Our last walking day was cloudy and cool. The steep valley walls made it best to stick to the gravel bars and we spent most of the day in sandals. The frequent river crossings were uncomfortably cold. Also, as more side drainages added water to the Ivishak, and as it rained around us, the crossings got deeper. They weren’t scary but certainly we had to focus on maintaining our footing in the current. By the end of the day, crossing the main channel would have been dicey.

Finally we arrived at the big alluvial fan containing the takeout air strip. Although we were certain the location was correct (Shannon had been there before, as the starting point of a rafting trip) we had no luck finding any wheel tracks. Shannon went out and put a makeshift windsock on the part of the fan where she thought Kirk would land.

In the evening we had a fun surprise: Shannon and Ben had decided to get married. They asked Eric if he would marry them, and he was happy to (an adult Alaska resident can officiate at a wedding in the state). It was a nice ceremony in the most beautiful possible setting. Afterwards, we had drinks — sort of. Ben had stashed a mini bottle of gin that we mixed up with fizzy electrolyte drink tablets.

Shannon and Ben are a neat couple. They live in a cabin near Denali NP. They do various kinds of work such as guiding in the summer and working in Antarctica in winter. It sounds like an interesting life and I like to secretly think that in some alternate universe I’d have done this kind of thing instead of, or at least prior to, becoming a professional academic.

Overnight, a front rolled through and we had hours of high winds mixed with rain and sleet. We were fortunate to have set up camp in the lee of a small rock outcrop, but even so the biggest gusts brought my tent ceiling more than halfway down to my head. For a while I was pretty sure the tent would collapse or else go airborne. However, it did not, perhaps because I had added three extra guy lines. Nobody slept much and in fact around midnight we found ourselves all outdoors in the miserable driving rain putting extra-large rocks on our tent stakes. Ben and Shannon’s tent had partially blown down and they had to realign it; Bill and Eric had pretty solid tents and I — having probably the least weather-worthy tent — was very lucky to have set it up the right way.

[nggallery id=10]

All photos © William B. Thompson or John Regehr 2009.

August 9 — In a Snowstorm and Not Getting Out

In the morning the winds had died and we found the snow line barely above camp. The cloud level was only a few hundred feet higher. Still, the weather was improving and we hoped the plane could make it in. Eric and I took a short hike, but we didn’t want to wander far in case Kirk arrived.

As the day progressed the weather deteriorated and we realized we were almost certainly in for an extra night. We moved the tents into a slightly more sheltered configuration in case the winds picked up. In the afternoon it began to snow pretty hard and we spent the rest of the day chatting in the cook tent and napping. We had little reserve food, so we ate an extremely light dinner before going to bed hungry.

During the night it kept snowing. My light tent let in every little gust of wind and I started to get cold. As part of a weight-saving plan I brought only a 30 degree sleeping bag, knowing that it would make hiking easier but that I would suffer if things went badly. So I shivered, wearing basically every single piece of clothing I had brought along, including the fleece top that had been serving as a pillow.

[nggallery id=11]

All photos © William B. Thompson or John Regehr 2009.

August 10 — Snow and Sun and Out

We woke to perhaps six inches of snow, which represented a new obstacle to getting out: the bush pilot can’t land if he can’t see the terrain. Someone told a story of a bush pilot who overflew his clients after a snowstorm and dropped them a shovel, circling while they cleared the strip. With the threat of a second extra night out, we rationed food pretty severely and stayed hungry.

As the day progressed it partially cleared and the snow began to burn off. It was incredibly pretty, definitely worth the discomfort and inconvenience. Sometime in the morning we heard a plane and rushed to take down tents — but the plane passed overhead. The rest of the day we read and napped, not wanting to stray far from the air strip. By late afternoon we were resigned to another night, but then around 6:30 Kirk showed up. We packed up and ran for the plane, not wanting to keep him there any longer than we had to. The flight out to Arctic Village was spectacular, with clear air this time.

It turned out Kirk had tried hard to get us out the previous day, but had been turned back by severe turbulence. His brother had also tried, from a different direction, also unsuccessfully. This was something interesting to learn about bush pilots: their clients’ lives are in their hands and they take this responsibility very seriously. This, in combination with the levels of skill and experience of the best pilots, helped put the cost of bush plane travel into perspective (it constituted one of the major parts of the total trip expense).

At Arctic Village it was clear that we weren’t going any further. An old guy with an ATV gave Eric and me a ride into town, where by some stroke of luck the store was still open. We stocked up on high-calorie foods and walked back to the air strip to wait for Ben and Shannon. When they arrived, we ate a huge amount of spaghetti and candy bars. Unfortunately, the little visitor center was locked up, so we slept on its covered porch. I burned a bit of sat phone time to tell Sarah all was well; luckily, she had been adequately briefed on the possibility we’d be stranded out.

[nggallery id=12]

All photos © William B. Thompson or John Regehr 2009.

August 11 — Heading Home

Wright Air Service was aware of our situation and sent an extra plane up to Arctic Village, putting us in Fairbanks by noon. Now we were two days late and Bill had missed his flights home. My parents were in Fairbanks with a car and Eric and I planned to ride down to Anchorage with them. However, Eric was stressed about work and stuff and flew home instead. I hadn’t yet missed my scheduled flight, a redeye late on the 11th, so we had a leisurely drive south. Luckily I had taken a shower in Bill’s hotel room or else we’d probably have driven with the windows open the whole way.

[nggallery id=13]

All photos © William B. Thompson or John Regehr 2009.


All pictures are by me or Bill Thompson. If you care, Bill’s pictures all have filenames starting with “dsc” whereas mine all start with “picture.” I shot a number of panoramas too.  We used identical equipment: Nikon D90 with the kit lens, an 18-105 zoom. Bill’s landscape pictures often look better than mine (at least partly) because he shot in raw format and then used a good converter, whereas I let the camera do everything.

Most of my gear performed admirably; the fact is that today’s mid-range equipment is overkill for almost any normal use. My favorite items were the Smartwool midweight wool long underwear top and bottom that became more or less a second skin for the cold parts of this trip. Completely comfortable, and not stinky like synthetic. My puffy synthetic Patagonia jacket was really nice as well, and way too warm for anything except sitting around or sleeping. The Arcteryx Bora 95 pack was awesome: big and bombproof. I have no idea when mine was made; I picked it up used from a guy who was moving away from SLC. Like all Arcteryx products, these are more or less prohibitively expensive to buy new. The La Sportiva Trango Trek GTX boots I took were about perfect: decent support, good waterproofness, and not unbearably heavy. After over a week in very tough country they look and work about like new. To carry water I took a single 1-liter Platypus bottle; these are super light and can be collapsed to pocket-size when empty. Probably the main drawbacks are the easily-dropped cap and the narrow opening, which makes it slow to fill. My old Cascade Designs Z-Rest performed about as well as it ever does, which is to say that it’s light and respectably comfortable, but really wants to be placed on soft ground.

A few items I was less happy with included the 30 degree Marmot synthetic sleeping bag that I got for cheap, which had very little loft. It weighed about the same as Shannon’s 5 degree bag from Western Mountaineering, which had so much loft it looked inflated. Seriously, you can bounce a quarter off that kind of bag. My Sierra Designs Sirius 2 tent was decent overall, but the open vestibules were a major drawback. First, they provided very little shelter for keeping items dry outside the tent. Second, they acted like wings to catch the wind: not good. Also, this tent is pretty short; I’m six feet and had to lie diagonally to keep from pressing head and feet against the tent ends.

Although I spend a lot of time outdoors and car camp as frequently as possible, my previous backpacking experience was minimal — probably no more than two weeks total, prior to this trip. So it was fun to refine my techniques and learn new tricks. One of the most useful was rigging a clothesline inside the tent so that socks and such could be dried overnight. Another good one was putting clothing and other items at the head and foot of my sleeping bag to keep it from getting wet from condensation due to touching the outside of the tent. A great pair of camp shoes can be improvised out of a pair of Tevas and a pair of neoprene socks.

Into the Brooks Range, Part 2

[Continued from Part 1, also see Part 3.]

August 3 — Over the Arctic Divide

Our third hiking day took us over a 5700′ mountain pass where the Wind, Ivishak, and Ribdon river drainages converge. Since the creek-bed of our side drainage was totally impassable, we climbed steep talus slopes, leaving the last tundra behind. Eventually the rocks leveled out and we came to a high bowl containing the Seefar glacier, which appears to be dead. We stopped and climbed its moraines a while: the basin was really spectacular and Bill and I were bummed that the thick smoke eliminated the possibility of good photographs.

To exit the Seefar bowl and get to the pass, we had to bypass a small waterfall on angle-of-repose scree. It was doable, but dicey with the big packs; we went one-by-one to avoid kicking rocks down on each other. After a bit more walking we reached the pass itself, which might as well have been on Mars; it’s as desolate a location as I’ve ever seen. It would have been nice to stick around for a while and climb the unnamed 7500′ peak nearby, but we wanted to get down to a suitable campsite. The mountains in this area have no doubt been climbed before, but definitely not very many times.

The talus slog down from the pass on the Ivishak side was steep, loose, and not much fun. It contained, however, a memorable site: the remains of a Grumman Goose that crashed in 1958, killing Clarence Rhode, the regional director of the US Fish and Wildlife Service, and two others, triggering Alaska’s then-biggest-ever search and rescue operation. The search was unsuccessful and the fate of the plane and its three occupants was a mystery until 1979 when the wreck was found by hikers (Debbie Miller’s book Midnight Wilderness describes this). The bent propellers showed that the plane was powered when it hit the mountain. The arctic climate preserves things well: a can of coffee near the wreck still contained recognizable coffee grounds, and a typed “permit to take wolves and/or coyotes from an airplane” was perfectly legible after 50 years outdoors. The site was disconcerting, probably especially so for Eric — a current employee of the US Fish and Wildlife service and a heavy user of small aircraft in the arctic.

We dropped down to a confluence of small sub-drainages that would have been a very small, rocky campsite, then continued until being stopped by a waterfall. We bypassed this and went to the next confluence, which contained a beautiful meadow where we stopped, exhausted after a very long day.

[nggallery id=5]

All photos © William B. Thompson or John Regehr 2009.

August 4 — Layover

Our schedule had some slack built into it for weather and other difficulties. However, now that we were over the pass most of the risk had disappeared so we decided to take a rest day. Unfortunately, we had descended so far from the pass that nobody had energy to hike back up to it to climb some peaks. We poked around, read books, and generally enjoyed a gorgeous sunny day outdoors. This confluence was some sort of caribou highway and small herds walked past our campsite all day. I poked back upstream to the waterfall that had blocked us the evening before; it was gorgeous. Bill and I hiked around the next bend in the river and saw a group of Dall sheep.

On this trip Eric read much of The Brothers Karamazov, which seemed like a fine choice. I brought along Little, Big, which I’d read before and loved. However, Shannon brought perhaps the best book: A Naturalist’s Guide to the Arctic. It is targeted at the interested layperson, and is packed with information about the kinds of things one wonders about while walking around this part of the world. What is the difference between hibernation and torpor? What is a tussock, exactly? What is the relationship between the moon’s phases and the “moon stays up” periods that correspond to the midnight sun? Among three PhDs, it is possible to speculate endlessly without any actual information, so this book was a godsend.

Although we missed the midnight sun, we didn’t have any real darkness on this trip. I’m used to fall camping trips in Utah where the nights are quite long; it felt really strange to not pack any light source at all for an extended backpacking trip, but that’s what we did. I’ve never had trouble sleeping in the light, luckily. I almost didn’t take a wristwatch on this trip, but I was really glad I did: lacking daylight-based cues about the time, I often had no idea at all what time it was.  Each day shortly after midnight, the sun rose in the northeast, made a near-complete circuit of the sky, and then set in the northwest around midnight. I took a very small tube of sunblock on this trip, guessing that the low sun angle plus likely clouds and rain would make sunburns unlikely; this wasn’t a very good decision.

Hiking in Utah, a person gets used to always filtering, purifying, or boiling water. Each of these is a pain, and one of the things I loved about the Brooks Range is that water is clean enough to drink straight from any moving source.

[nggallery id=6]

All photos © William B. Thompson or John Regehr 2009.

August 5 — Out of the Mountains

After the rest day, the going was easier. Our bodies were getting used to the packs, the packs were getting lighter as we ate food and burned fuel, and we were hiking downhill. However, in the morning we were still in a seriously mountainous area and the stream would often constrict to a narrow gorge. Luckily the tundra benches were wide and fairly level, so we stayed on them most of the day. By the time we made camp, we were back into a fairly wide river valley.

It was claimed, by people on this trip, that a certain kind of tundra moss makes a passable substitute for toilet paper. Not the dry, rough top side of the moss, but rather the soft, damp underside. I just wanted to mention this and won’t bother the reader with details.

[nggallery id=7]

All photos © William B. Thompson or John Regehr 2009.

[Continued in Part 3.]

Into the Brooks Range, Part 1

[Also see Part 2 and Part 3.]

In Summer 2009 I went on a 1.5-week backpacking trip in the Alaskan arctic with my brother Eric, my colleague and hiking buddy Bill, and our guides Shannon and Ben from Arctic Treks. It was an amazing trip through a very rugged part of the world. Not only did we not see any other people, but most days we saw no signs that people had ever been there. If the civilized world had ended, we wouldn’t have found out until nobody came to pick us up.

July 31 — Getting There

It took most of two days to get from Salt Lake City to our starting point: the highest airstrip on the Wind River, on the South Slope of the Brooks Range’s Philip Smith Mountains. Bill and I first flew up to Fairbanks, through Anchorage. Descending into Fairbanks, we couldn’t see anything at all due to smoke from wildfires, and it was lucky that we even got there — earlier in the day they were turning planes back. After dinner we did some last minute gear-sorting and met up with Ben and Shannon, who gave us our share of the group gear. We’d been aiming for 50 pounds but with a full load of fuel and food, mine was around 60; Ben took more than his share of gear and had 70 pounds. Most everything we took ended up being useful or necessary; our main luxury was a tent apiece for Eric, Bill, and me. As Eric puts it, “If weight is so much of a concern that I have to share a tent with a dude, I’m not going.” I hadn’t managed to shake a bad cold, and decided to leave behind a half-bottle of bourbon. I went to bed early; Eric, who lives in Anchorage, had to work late and didn’t show up until early morning.

Next morning we went to the air taxi company to take a single-engine Cessna turboprop to Arctic Village, a town of less than 200 people next to the Chandalar River at the foot of the Brooks Range. The flight could have been spectacular but we hardly saw anything, again due to smoke. The big gravel air strip is maybe a half mile out of town; we dropped packs and slapped bugs waiting for the bush plane. Bill, who has done a number of trips in the arctic, said it could be 15 minutes, could be all day. It wasn’t too long until Kirk Sweetsir and his little Cessna arrived, but we still had to wait — he first had to drop off a couple who had flown out of Fairbanks with us, who were hiking over the continental divide and then packrafting all the way to the Arctic Ocean, ending at Kaktovik, a pretty serious trip. When our turn came, Kirk flew us over (or rather through) a rugged and forbidding patch of mountains instead of heading up the Junjik river valley; it was spectacular, though still very hazy. Before landing, Kirk had spotted a moose and a brown bear. As the plane flew off it began to drizzle and we pitched tents on boggy ground near the airstrip. A little later, Kirk came back with Eric and Ben and then we were left alone in the wide river valley, probably 45 miles from the nearest people. Before Kirk left for the second time, Shannon double checked with him regarding the pickup day and location. This seemed like a fine idea.

The Alaska definition of “airstrip” is something like “someone landed there once.” Guys like Kirk have an amazing job but the level of flying skill, rapid risk assessment, and luck required to grow old in that line of work must be fantastic.

[nggallery id=2]

All photos © William B. Thompson or John Regehr 2009.

August 1 — Up the Wind River Valley

Our first walking day was in the miles-wide Wind River valley. Lacking trails, the main tension in this part of the world is between walking in the river channel, which is easy when on gravel bars but involves lots of river crossings and may be very brushy, and walking the bank, which is generally brushy, hilly, tussocky, and boggy. Tussocks are pillar- or mushroom-shaped tufts of grass that are raised about a foot above the surrounding terrain. If you try to step on them, they tend to flip over or otherwise give way, creating risk of injury. If you try to step between them, you also get unsure footing — the tussocks are so close together you can’t clearly see the gaps. Additionally, the gaps are usually filled with water or mud. You might think (at least, I certainly thought) that a sloped river bank would be well-drained, but somehow in this part of Alaska that is not the case. Although a tussock field looks inviting from afar, since the surface grass is nearly flat, it can be an amazingly effective obstacle to progress. Even a strong hiker wouldn’t expect to make more than about a half mile per hour in a nice, flat tussock field. Happily, on this trip we were mostly in the high country and didn’t get killed by tussock travel.

We ended up spending the first day mainly on the gravel bars, and except for Ben we stayed in sandals since otherwise the braided river would have forced multiple changes of footwear per hour. Ben had gaiters that appeared to keep his boots dry if he crossed quickly. This was the first of August and the river was low, so the crossings were inconvenient instead of scary.

We camped close to where the valley forked three ways and watched a snowstorm hitting the high mountains. Eric and I walked up a big alluvial fan and found a nice place to sit where we could be above the mosquitoes and scan the valley for bears. We didn’t see any, but had a good talk. This was one of the things I had been hoping would happen on this trip; Eric and I aren’t particularly close and in fact I’m not sure I have much of a handle on what kind of adult he’s become. Of course I still think of him as my little brother, though he is 35.

Eric is a field researcher for the US Fish and Wildlife Service, studying polar bears. As far as I can tell, it is the perfect job for him because he enjoys organization (capture work has hellish logistical requirements), is talented at statistics and modeling, and also has a strong commitment to field work. The field work is most often flying over arctic sea ice in a helicopter, darting a bear, and then landing to weigh it, take blood samples, and whatever else. His videos from bear-darting operations are amazing: a white bear is zig-zagging around on white ice under a white sky. Anyway, Eric has a ton of polar bear stories including some that are scary. It’s a neat job.

Throughout this trip, the mosquitoes were a constant low-grade annoyance. After too many years in Utah’s deserts my pain threshold for these pests is pretty low, and I ended up wearing my mosquito net for a while this second evening. Luckily I never felt the need to put it on after that, and in fact only seldom applied any DEET. In early August the mosquito season is just about over and by Alaska standards they were not at all bad. The Brooks Range mosquito adopts a slightly different strategy than those I’m used to: it wants to spend a bit of time hovering about a foot above your head before diving in to strike. I eventually learned that if I found a location where they couldn’t hover above me — such as sitting in my tent with the doors wide open — they wouldn’t bother me at all. Something nice about the arctic is that mosquitoes are the only pest: there are no chiggers, ticks, biting flies, or any of the other little critters that can make life difficult.

[nggallery id=3]

All photos © William B. Thompson or John Regehr 2009.

August 2 — Into the Mountains

On our second moving day the air was clear of smoke: the only really clear day on this whole trip. We walked up the middle fork of the three-way split, which rapidly narrowed as we made progress. Walking was easy on the benches, and the river crossings were at most calf-deep. We saw a wolverine pretty close, which was fun: it was chasing something on the opposite side of the creek and didn’t pay much attention to us. By the end of the day the valley had become a canyon and we were walking on talus, having gained enough elevation to leave most vegetation behind. We camped next to a beautiful but imposing waterfall that emerged from the side-drainage we had to hike into the next day.

You might wonder why three strong hikers who can read a map and have plenty of wilderness experience, arctic experience, and even hands-on bear experience would take a guided trip instead of rolling it ourselves. One reason is logistics: we’re busy people who live far from Alaska (well, Eric lives up there but he wasn’t actively involved with planning this trip) and having people on-location getting the gear together and setting up the bush plane was super handy. As our trip progressed I learned that it is pretty damn nice having someone else cook and clean up. In the mornings we generally slept quite late (arctic summer trips seem to tend in this direction due to the near-constant daylight) and Shannon or Ben always had coffee going when we got up. It is not a bad life to get out of the tent long after the morning chill has burned away and then sit around for an hour looking at maps, drinking two pots of coffee, and chatting. Shannon and Ben turned out to be great company, and certainly Bill and I, and Eric and I, would have gotten on each other’s nerves if it had been just the three of us. Finally, adding people lightens loads (early on, we hadn’t been sure Eric could come) and greatly increases bear safety.

[nggallery id=4]

All photos © William B. Thompson or John Regehr 2009.

[Continued in Part 2.]

Straight Man

Hank Devereaux, chair of the dysfunctional English department at a small university, is having a midlife crisis.  His wife, leaving town, fears he’ll be either in jail or the hospital before she returns — and she is not disappointed.  Straight Man is hilarious; I had to stop reading it in bed because it was too hard to giggle quietly enough to keep from waking Sarah up.  I don’t know anything about Russo, but it is clear that he has spent time in academia: his portrayal of the odd personalities and petty politics is too perfect to be guesswork.  This is one of the best books I’ve read in years, and I highly recommend it.


A few years ago, I ran across an entire academic paper that was plagiarized.  I was gratified when alerting the (original) author of the paper started a chain of events that eventually led to the pirated paper being retracted.  Another time, I was reviewing a paper and said to myself something like “that paragraph was really well written.”  Then I realized why I liked it: I had written the paragraph myself in an earlier paper.  Needless to say this paper was rejected with extreme prejudice. As these anecdotes indicate, as a researcher I’ve found plagiarism to be pretty easy to deal with. Since every conference proceedings or journal has a well-known “owner” who is loosely responsible for the integrity of its contents, it suffices to let this person know about the potential problem.

Web-based plagiarism is more difficult.  For example, in an article about the C language’s volatile qualifier, Nigel Jones showed a nice turn of phrase when he said that declaring everything as volatile is a bad idea “since it is essentially a substitute for thought.”  A few weeks ago, while looking for Nigel’s article, I did a Google search on his phrase and was surprised to see how many matches turned up.  Clicking through these links, I found that most of the sites reposting the material do not acknowledge Nigel as the original author; rather, the people who posted it appear to be taking credit for it.  Since Nigel’s article is quite good, finding one or two copies of it wouldn’t be surprising, but ten copies does seem a bit much.  What can we do about web-based plagiarism?  When the plagiarized content lives on a third-party site (e.g. Wikipedia) it may help to complain.  If plagiarism occurs on a site hosted by an individual, it is probably very difficult to deal with.

My worst experiences with plagiarism have been as an instructor.  The problem is that prosecuting instances of plagiarism, even when it is done properly, is time consuming and very stressful.  It’s not that institutional support is totally lacking — the U’s Student Code outlines a reasonable implementation of academic sanctions — but rather that the actual process of telling students that they’re going to fail a class because they cheated is hard.  One possibility is a tearful confession followed by pleas for mercy (and sometimes, by pleas from parents and other relatives, especially if the failing grade delays the student’s graduation). Another possibility is the student who denies everything, making life difficult because it becomes a bit of a he-said/she-said game. Fortunately, I’ve never been involved with a case where lawyers came into the picture, but I’ve heard this can happen.  In the end, the simplest course of action for overworked instructors is to not try very hard to find instances of plagiarism by students.  My guess is that this is the implicit choice made by most teachers because, unfortunately, it seems that if one looks for plagiarism very hard, one tends to find it.

Margin in Software Systems

Margin of safety is a fundamental engineering concept where a system is built to tolerate loads exceeding the maximum expected load by some factor.  For example, structural elements of buildings typically have a margin of safety of 100%: they can withstand twice the expected maximum load.  Pressure vessels have more margin, in the range 250%-300%, whereas the margin for airplane landing gear may be only 25%.  (All these examples are from the Wikipedia article.)

We can say that a software system has a margin of safety S with respect to some external load or threat L only when the expected maximum load Lmax can be quantified and the system can be shown to function properly when subjected to a load of (1+S)Lmax.  Software systems are notoriously low on margin: a single flaw will often compromise the entire system.  For example, a buffer overflow vulnerability in a networked application can permit an attacker to run arbitrary code at the same privilege level as the application, subverting its function and providing a vector of attack to the rest of the system.

Software is often defined to be correct when, for every input, it produces the correct output.  Margin is an orthogonal concern.  For example, there exist systems that are formally verified to be correct, such as CompCert and seL4, that have no margin at all with respect to flaws not covered by the proof — a bug or trojan in the assembler, linker, or loader invalidates the safety argument of either system.  Similarly, there exist systems that are obviously not correct but that have considerable margin.  For example, my Windows laptop has enough RAM that it can tolerate memory leak bugs for quite a while before finally becoming unusable.

There are other examples of margin in software systems:

  • Many classes of real-time systems have a characteristic utilization bound: a fraction of total available CPU time that, if not exceeded, permits all sub-computations to meet their time constraints.  Real safety-critical systems are usually designed to use less CPU time than their theoretical utilization bounds, providing margin against spurious interrupts or other unforeseen demands on the processor.
  • Forward error correction provides margin against data corruption.
  • n-version programming and replication provide margin respectively against software and hardware defects.
  • It is common to pick a cryptographic key larger than the smallest currently-unbreakable key, providing some margin against future improvements in processing power and cryptanalysis.

The piece that is missing, as far as I can tell, is a collection of broader results about how margin in software systems relates to overall system reliability, and how useful kinds of software margin can be obtained at acceptable cost.

What are some general ways to gain margin?  Overprovisioning a resource, as in the examples above, is very common.  Defense in depth is also important: many software systems have only two lines of defense against attack: safety checks at the application level, and safety checks in the OS kernel.  If both of these defenses fall — as is common — the entire system has been subverted.  Virtual machines, safe language runtimes, and similar mechanisms can add layers of defense, as can firewalls and other external barriers.

Something I want to see is “margin-aware formal methods.”  That is, ways to reason about software systems under attack or load.  The result, ideally, would be analogous to well-understood principles of safety engineering.  Examples already exist:

  • Symbolic robustness analysis is an early proof-of-concept technique for showing that small perturbations in the input to a control system result in only small changes to the control signal.
  • The critical scaling factor in scheduling theory is the largest degree of slowdown computations can incur before deadlines start being missed.
  • Byzantine fault tolerance is a body of theory showing that a distributed system can produce correct results if fewer than a third of its nodes are compromised.

An important obstacle to margin in software systems is the high degree of coupling between components.  Coupling can be obvious, as when multiple activities run in the same address space, or it can be insidiously indirect, including common failure modes in independent implementations, as Knight and Leveson observed in 1986.  There are many other kinds of coupling, including reliance on multiple copies of a flawed library, a single password used across multiple systems, etc.  It can be very hard to rule out all possible forms of coupling between components — for example, even if we use Byzantine fault tolerance and n-version programming, we can still be compromised if all chips come from the same (faulty or malicious) fab.

In summary: engineered physical systems almost always have margin with respect to failures.  Too often, software does not.  This should be fixed.  I want my OS and web browser to each come with a statement such as “We estimate that no more than 25 exploitable buffer overflows remain in this product, therefore we have designed it to be secure in the presence of up to 70 such problems.”

Picking a Research Topic in Computer Systems

This post is a collection of observations and advice for people who want to choose a research topic in computer systems.  I’m not claiming to be some kind of genius in this area, but I have enough ideas that they seemed worth writing down. This advice is probably most useful for graduate students in CS, but may also be helpful for junior profs and for undergrads interested in doing research.

Picking a research area

By “research area” I mean a sub-area of computer systems, such as object-oriented operating systems, distributed hash tables, or whatever.  This often ends up being a pretty pragmatic decision and it is understood that a good researcher can work concurrently in multiple areas or change areas as often as every few years.

The easiest way is to choose an area that a lot of other people are working on.  I’m quite serious, and this method has some important advantages.  First, all those people at MIT and Berkeley aren’t stupid; if they’re working on ultra-reliable byzantine transcoders, there’s probably something to it.  Second, if you work on the same thing others are working on, the odds are good they’ll be interested in your work and will be more willing to fund and publish it.  The potentially serious drawback is that you’ll likely miss the initial, exciting phase of research where lots of new things are being discovered and there’s not yet a lot of commonly accepted wisdom about what the best solutions are.  In the worst case, you’ll miss the entire main period of interest and arrive late when the best people have moved on to other topics.

The research area that you choose should have a chance of making a difference.  Systems research tends to be applied, and a great deal of it is engineering rather than science.  The focus, in most cases, is making something work better for someone.  The more people benefit, the better.

Fundamentally, the area you choose needs to be one that you have some natural affinity for.  You should enjoy working on it and your intuition should strongly suggest that all is not right: there is work that needs to be done.

If you’re a PhD student seeking a research assistantship, it would be practical to choose a research area that your advisor has a grant to work on.

Departing from the accepted wisdom

Every good research idea is a departure from the accepted wisdom, but it’s important to depart at the right level.  Consider these extremes:

  1. You reject the notion of binary computers.  Instead, ternary computation will sidestep most of the fundamental problems faced by computer science.  Everything from instruction sets to complexity theory must be thrown out and rebuilt from scratch.
  2. You reject the notion of the semicolon as a statement terminator in programming languages.  Instead, the dollar sign should be used.  A revolution in software productivity will ensue.

The first idea diverges from the status quo in a fundamental way, but it will be difficult to get buy-in from people.  The second departure is too minute to matter, and nobody will care even if you do a big user study showing that the dollar sign is better with p<0.05.  In both cases, the research idea does not feel like one that will change the world.  In contrast, some great examples of departing from the conventional wisdom in the right way can be found in David Patterson’s work: see RISC and RAID.

Focusing a skeptical personality

If the point is to challenge the accepted wisdom in some fashion, you can’t very well go believing everything people tell you.  Computer systems is not exactly a rigorous area of science and you will hear all manner of ridiculous explanations for things.

Good systems problems, in my experience, come from noticing something wrong, some discrepancy between how the world works and how it should work.  However, it doesn’t appear possible to do good work based on someone else’s observations.  For example, you can tell me that parallel programming is hard or software transactional memory is too slow, but the fact is that if I haven’t seen the problems for myself, if long and bitter experience hasn’t trained my intuition with a thousand little facts about what works and what does not, then I’m probably going to come up with an irrelevant or otherwise bad solution to the problem.

How does this work in practice?  You go about your business building kernels or web servers or whatever, and along the way there are a hundred irritations.  Most of them are incidental, but some are interesting enough that you start pulling at the thread.  Most of these threads die quickly when you discover that the problem was adequately solved elsewhere, is boring, or is too difficult to solve at present.  Every now and then it becomes clear that there’s a real problem to be solved, that nobody else is attacking it (at least in the right way), and that it would be fun to work on.  These are your potential research projects.  There are probably other ways to find research ideas in computer systems, but this is the only one I know of.

Here’s an anecdote.  As an instructor of embedded systems courses I’d long been annoyed by buggy cross compilers for microcontrollers.  Students who are struggling to write correct code do not need this extra level of hassle.  Finally, one day I was showing a small snippet of assembly code in lecture and a student noticed that it was wrong.  I assumed that it was a cut-and-paste error, but it turned out the compiler was mistranslating a 2-line function.  This seemed beyond the pale, so I wrote some more test cases and tested some more compilers and kept finding more and more bugs.  Even at this point, after I’d spent probably 40 hours on the problem, it was not at all clear that I was moving towards any kind of research result.  It was only after hundreds of additional hours of work that it became obvious the project had legs: everyone knows that embedded compilers contain bugs, but probably most people would be surprised that we were able to find (so far) 200 compiler bugs including major wrong-code bugs in every C compiler we’ve tested.  So in this case, the surprising result is quantitative.

A few more observations

  • The best research problems are often those that are not yet of major industrial interest, but that will be addressed by billion-dollar industries in 10-20 years.  Once a problem becomes the focus of intense industrial interest, it becomes a difficult target to attack from academia.  At the very least, you need a seriously new angle.
  • Don’t get into a research area late in its life cycle.
  • Thomas Edison said it’s 1% inspiration, 99% perspiration.  In computer systems it’s more like 99.9% perspiration: if you’re not careful you can build things for years without getting any research results.
  • It’s really not possible to figure out in advance which promising ideas are going to pan out and which ones are distractions.  Keep a number of different ideas in mind and work in directions that cut off as few options as possible.
  • Ignore sunk costs.  Always be ready to drop an idea, no matter how proud of it you are.
  • If you become aware of someone credible who is working on your exact idea, drop it.  First, at this point you have a 50% chance of winning the race to publication.  Second, duplicated work wastes time and tax dollars.  Life’s too short.
  • Often, problems and obstacles that seem insurmountable at first can be flipped around and turned into interesting features of your work or even advantages.
  • Periodic re-examination of assumptions is useful.  A recent example I like is Google Native Client.  Most efforts to isolate untrusted binary code just take whatever the compiler outputs.  The Google project gets good mileage by hacking the compiler so that the code doesn’t contain so many stupid things.  It’s a good idea — who said the compiler was set in stone?
  • If your project seems boring, think about dropping it.  If you’re not excited why would anyone else be?
  • Writing a couple of paragraphs about an idea has the effect of revealing bad ideas for what they are, and making good ideas better.  It’s almost comical how many of my ideas look silly when written down.  It’s definitely possible to read too much, but it’s probably impossible to write too much.
  • Smart people love to solve problems, regardless of their relevance.  Avoid the trap where you improve a result far beyond the point of diminishing returns.
  • Code tweaking is seductive; if you do it, consider it time spent on a hobby, not time spent doing research.
  • Once you’re invested in a research area, it’s tempting to stay there since you’re on top of the learning curve.  This can turn into a trap.  When it’s time to move on, just do it.  Many of the best researchers change areas several times during their careers.
  • Having a grand vision up-front is OK as long as it’s somewhat vague and doesn’t prevent you from seeing the actual situation.
  • Computer systems speak to you, in their own way.  Always listen to what they’re telling you.


Picking a good research problem is at least half the battle, especially for PhD students.  It’s worth studying why some ideas and approaches are great while others are boring.

The Compiler Doesn’t Care About Your Intent

A misunderstanding that I sometimes run into when teaching programming is that the compiler can and should guess what the programmer means.  This isn’t usually quite what people say, but it’s what they’re thinking.  A great example appeared in a message sent to the avr-gcc mailing list.  The poster had upgraded his version of GCC, causing a delay loop that previously worked to be completely optimized away.  (Here’s the original message in the thread and the message I’m quoting from.)  The poster said:

… the delay loop is just an example, of how simple, intuitive code can throw the compiler into a tizzy. I’ve used SDCC(for mcs51) where the compiler ‘recognises’ code patterns, and says “Oh, I know what this is – it’s a delay loop! – Let it pass.”(for example).

Another example comes from the pre-ANSI history of C, before the volatile qualifier existed.  The compiler would attempt to guess whether the programmer was accessing a hardware register, in order to avoid optimizing away the critical accesses.  I’m sure there are more examples out there (if you know of a good one, please post a comment or mail me).

The problem with this kind of thinking is that the correctness of code now depends on the compiler correctly guessing what the programmer means.  Since the heuristics used to guess aren’t specified or documented, they are free to change across compilers and compiler versions.  In the short term, these hacks are attractive, but in the long run they’re little disasters waiting to happen.
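To make the delay-loop example concrete, here is an illustrative sketch (not the poster’s actual code): the first function has no observable effect, so an optimizer is free to delete it, while the second states the intent in a way the language actually defines.

```c
/* A naive busy-wait delay: the loop has no observable effect, so an
   optimizing compiler may legally remove it entirely. */
void delay_naive(void) {
    unsigned long i;
    for (i = 0; i < 100000UL; i++) {
        /* empty body -- likely optimized away */
    }
}

/* Making the counter volatile forces the compiler to perform every
   load and store of i, so the loop (and hence the delay) survives
   optimization on any conforming compiler. */
void delay_portable(void) {
    volatile unsigned long i;
    for (i = 0; i < 100000UL; i++) {
        /* empty body -- but each iteration now has observable effects */
    }
}
```

The point is that delay_portable() does not rely on the compiler guessing that a loop “looks like” a delay; its behavior follows from the defined semantics of volatile.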

One of the great innovations of the past 50 years of programming language research was to separate the semantics of a language from its implementation.  This permits the correctness of an implementation to be judged solely by how closely it conforms to the standard, and it also permits programs to be reasoned about as mathematical objects.  C is not the language best suited to mathematical reasoning, but there are some excellent research projects that do exactly this.  For example, Michael Norrish’s PhD thesis formalized C in the HOL theorem prover, and Xavier Leroy’s CompCert compiler provably preserves the meaning of a C program as it is translated into PPC or ARM assembly.

Of course, the intent of the programmer does matter sometimes.  First, a well-designed programming language takes programmer intent into account and makes programs mean what they look like they mean.  Second, intent is important for readability and maintainability.  In other words, there are usually many ways to accomplish a given task, and good programmers choose one that permits subsequent readers of the code to easily grasp what is being done, and why.  But the compiler does not, and should not, care about intent.

C and C++ Make It Hard to Read a Register for Its Side Effects

[ This post was co-written with Nigel Jones, who maintains an excellent embedded blog, Stack Overflow.  Nigel and I share an interest in volatile pitfalls in embedded C/C++ and this post resulted from an email discussion we had.  Since we both have blogs, we decided to both post it.  However, since comments are not enabled here, all discussion should take place at Nigel’s post. ]

Once in a while one finds oneself having to read a device register without needing or caring what the value of the register is. A typical scenario is as follows. You have written some sort of asynchronous communications driver. The driver is set up to generate an interrupt upon receipt of a character. In the ISR, the code first of all examines a status register to see if the character has been received correctly (e.g. no framing, parity, or overrun errors). If an error has occurred, what should the code do? Well, in just about every system we have worked on, it is necessary to read the register that contains the received character — even though the character is useless. If you don’t perform the read, then you will almost certainly get an overrun error on the next character. Thus you find yourself in the position of having to read a register even though its value is useless. The question then becomes: how does one do this in C? In the following examples, assume that SBUF is the register holding the data to be discarded and that SBUF is understood to be volatile. The exact semantics of the declaration of SBUF vary from compiler to compiler.

If you are programming in C and if your compiler correctly supports the volatile qualifier, then this simple code suffices:

void cload_reg1 (void)
{
  SBUF;
}

This certainly looks a little strange, but it is completely legal C and should generate the requisite read, and nothing more. For example, at the -Os optimization level, the MSP430 port of GCC gives this code:

   mov &SBUF, r15

Unfortunately, there are two practical problems with this C code. First, quite a few C compilers incorrectly translate this code, although the C standard gives it an unambiguous meaning. We tested the code on a variety of general-purpose and embedded compilers, and present the results below. These results are a little depressing.

The second problem is even scarier. The problem is that the C++ standard is not 100% clear about what the code above means. On one hand, the standard says this:

In general, the semantics of volatile are intended to be the same in C++ as they are in C.

A number of C++ compilers, including GCC and LLVM, generate the same code for cload_reg1() when compiling in C++ mode as they do in C mode. On the other hand, several high-quality C++ compilers, such as those from ARM, Intel, and IAR, turn the function cload_reg1() into object code that does nothing. We discussed this issue with people from the compiler groups at Intel and IAR, and both gave essentially the same response. Here we quote (with permission) from the Intel folks:

The operation that turns into a load instruction in the executable code is what the C++ standard calls the lvalue-to-rvalue conversion; it converts an lvalue (which identifies an object, which resides in memory and has an address) into an rvalue (or just value; something whose address can’t be taken and can be in a register). The C++ standard is very clear and explicit about where the lvalue-to-rvalue conversion happens. Basically, it happens for most operands of most operators – but of course not for the left operand of assignment, or the operand of unary ampersand, for example. The top-level expression of an expression statement, which is of course not the operand of any operator, is not a context where the lvalue-to-rvalue conversion happens.

In the C standard, the situation is somewhat different. The C standard has a list of the contexts where the lvalue-to-rvalue conversion doesn’t happen, and that list doesn’t include appearing as the expression in an expression-statement.

So we’re doing exactly what the various standards say to do. It’s not a matter of the C++ standard allowing the volatile reference to be optimized away; in C++, the standard requires that it not happen in the first place.

We think the last sentence sums it up beautifully. How many readers were aware that the semantics of the volatile qualifier are significantly different between C and C++? The additional implication is that, as shown below, GCC, the Microsoft compiler, and Open64 are in error when compiling C++ code.

We asked about this on the GCC mailing list and received only one response, which was basically “Why should we change the semantics, since this will break working code?” This is a fair point. Frankly speaking, the semantics of volatile in C are a bit of a mess, and C++ makes the situation much worse by permitting reasonable people to interpret it in two totally different ways.

Experimental Results

To test C and C++ compilers, we compiled the following two functions to object code at a reasonably high level of optimization:

extern volatile unsigned char foo;

void cload_reg1 (void)
{
  foo;
}

void cload_reg2 (void)
{
  volatile unsigned char sink;
  sink = foo;
}

For embedded compilers that have built-in support for accessing hardware registers, we tested two additional functions where, as above, SBUF is understood to be a hardware register defined by the semantics of the compiler under test:

void cload_reg3 (void)
{
  SBUF;
}

void cload_reg4 (void)
{
  volatile unsigned char sink;
  sink = SBUF;
}

The results were as follows.


GCC

We tested version 4.4.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). No warnings are generated. The C++ compiler shows the same behavior as the C compiler.

Intel Compiler

We tested icc version 11.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler emits code loading from foo for both cload_reg1() and cload_reg2(), without giving any warnings. The C++ compiler emits a warning “expression has no effect” for cload_reg1() and this function does not load from foo. cload_reg2() does load from foo and gives no warnings.

Sun Compiler

We tested suncc version 5.10, hosted on x86 Linux and also targeting x86 Linux, using optimization level -O. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.


Open64

We tested opencc version 4.2.3, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.

LLVM / Clang

We tested Subversion revision 98508, which is between versions 2.6 and 2.7, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). A warning about unused value is generated for cload_reg1(). The C++ compiler shows the same behavior as the C compiler.

CrossWorks for MSP430

We tested version, hosted on x86 Linux, using optimization level -O. This compiler supports only C. foo was not loaded in cload_reg1(), but it was loaded in cload_reg2().


IAR Compiler

We tested version, hosted on Windows XP, using maximum speed optimization. The C compiler performed the load in all four cases. The C++ compiler did not perform the load for cload_reg1() or cload_reg3(), but did for cload_reg2() and cload_reg4().

Keil 8051

We tested version 8.01, hosted on Windows XP, using optimization level 8, configured to favor speed. The Keil compiler failed to generate the required load in cload_reg1() (but did at least give a warning), yet performed the load in all other cases, including cload_reg3(), suggesting that for the Keil compiler, IO register (SFR) semantics are treated differently from volatile variable semantics.


HI-TECH Compiler for PIC

We tested version 9.70, hosted on Windows XP, using Global optimization level 9, configured to favor speed. This was very interesting in that the results were almost a mirror image of the Keil compiler’s: the load was performed in all cases except cload_reg3(). Thus the HI-TECH semantics for IO registers and volatile variables also appear to be different – just the opposite of Keil! No warnings were generated by the HI-TECH compiler when it failed to generate code.

Microchip Compiler for PIC18

We tested version 3.35, hosted on Windows XP, using full optimization level. This rounded out the group of embedded compilers quite nicely in that it didn’t perform the load in either cload_reg1() or cload_reg3() – but did in the rest. It also failed to warn about the statements having no effect. This was the worst performing of all the compilers we tested.


Summary

The level of non-conformance among the C compilers, together with the genuine uncertainty as to what the C++ compilers should do, presents a real quandary. If you need the most efficient code possible, then you have no option other than to investigate what your compiler does. If you are looking for a generally reliable and portable solution, then the methodology in cload_reg2() is probably your best bet. However, it would be just that: a bet. Naturally, we (and the other readers of this blog) would be very interested to hear what your compiler does. So if you have a few minutes, please run the sample code through your compiler and let us know the results.


Acknowledgments

We’d like to thank Hans Boehm at HP, Arch Robison at Intel, and the compiler groups at both Intel and IAR for their valuable feedback that helped us construct this post. Any mistakes are, of course, ours.