Lately I’ve run across some posts (here and here, for example) based on the idea that for grad students and academics, working 40 hours per week is a good thing. Of course if this makes people happy then great — but I dislike the idea strongly. Most of the time, a week in which I work 40 hours sucks. If I’m working on interesting things, 40 hours is not enough. If I’m working on boring things, 40 hours is far too many. Either way, not so fun. 40 hours is a compromise week in which I don’t actually get a lot of work done, but I’m probably stuck in the office a lot.
On the other hand, here are some weeks that make me happy:
A video game spree, when I was younger. Hours worked: 0.
A hacking spree, back in grad school. Hours worked: 90.
A hacking or writing spree now. Hours worked: 60.
These weeks are great because they are focused and uncompromising. Instead of context switching, there is flow. The point is: whatever you’re doing, it’s better to just do it, even if this violates some arbitrary time-management ideal. People who hate their jobs often work 40-hour weeks; why be one of them?
Update from Friday 10/28: Lots of good discussion on hacker news. A few random clarifications:
This piece isn’t criticizing the 40-hour week as an idea; it’s criticizing the 40-hour week as an ideal, if that makes sense.
As a commenter noted, I’m married and have kids, which is why 60 hours is about the most I can realistically manage now. That’s a regular 40-hour week plus three hours every night after the kids are in bed. This isn’t that bad.
My personality is fairly obsessive and I average about six hours of sleep per night. At some level I’d like to sleep more, but when I try this, the extra time is spent lying awake — which I hate.
My job as a tenured professor gives me a lot of flexibility in when and where I work.
My core responsibilities — teaching and meetings, mostly — require probably 20 hours a week, averaged over the whole year. Thus, if I want to be a “full-time” researcher too, I’m suddenly working 60 hours.
In summary, commenter xarien at HN hit the nail on the head:
“I see a lot of posts disagreeing with the OP in one form or another and I think I know why. It takes a bit of OCD and a dash of perfectionism to empathize with the OP. Unfortunately, I know exactly how the OP feels.”
This post is about a tiny thing that makes a big difference in practice because I spend so much time writing. Usually, people compose paragraphs as monolithic blocks of text. For several years now, I’ve written paragraphs like this:
Integer overflow bugs in C and C++ programs are difficult to track
down and may lead to fatal errors or exploitable vulnerabilities.
%
Although a number of tools for finding these bugs exist, the
situation is complicated because not all overflows are bugs.
%
Better tools need to be constructed---but a thorough understanding
of the issues behind these errors does not yet exist.
For the non-LaTeX users out there, the percent symbol indicates a comment line. When this text is typeset, the sentences will flow together in the usual fashion. Why do I do it this way? The most important reason is that it calls my attention to the individual sentences in a paragraph. Frequent offenders like run-on sentences, paragraphs that lack a topic sentence, groups of sentences with repetitive structure, and paragraphs that contain a single sentence become trivial to spot. I often find that when I take normally formatted text — whether written by me or by a co-author — and split it into sentences like this, hidden problems in the writing become obvious. A secondary benefit comes out only when interacting with a revision control system: because editing individual sentences does not cause an entire paragraph to require re-wrapping, diffs are much easier to read and conflicts become less likely. The cost of writing this way is that I suspect it annoys co-authors sometimes.
There must be other tricks like this that people use — I’d be interested to learn about them. As a random example, when using some old word processor (MultiMate, I think — but modern word processors support this as well) I used to use a switch that made certain non-printing characters visible.
A Fire had two story lines. The first was set in a very high-tech environment and followed the expansion of the Blight — a Satan/Sauron character operating at galactic scale. The second was set in a medieval society of tines — intelligent packs of individually unintelligent dog-like creatures. The Children follows only the latter story, picking up shortly after A Fire ended. Ravna, the lone human adult, has awakened some of the human children who slept through the previous book. Perhaps unsurprisingly, they grow up into young adults with goals and viewpoints that set them at odds with Ravna and with various groups of tines.
All of Vinge’s fiction can be seen as an exploration of approximately three different routes to power that people tend to follow. First, there are nice people who are not actively seeking power, but who often manage to accrue it by building trust-based relationships with like-minded people. Second, there are nerds who are perhaps not totally personable, but who tinker and innovate endlessly, usually accomplishing much. Third, there are Machiavellian characters whose primary tools are deceit, manipulation, and betrayal. The importance of computer security in Vinge’s work is in service of this thread: in the future, those who control the machines will have all of the power.
A large part of the strength of Vinge’s work comes from his ability to create tension by balancing the abilities of characters following the three paths. The rest of the goodness comes from Vinge’s relentless imagination and ability to play through the consequences of his premises. While other space opera writers (Alastair Reynolds, Charles Stross, and Dan Simmons come to mind) start out strong, they usually lose control of all the story lines that they start, leading to confusing, muddled endings. Vinge, like Iain Banks, seems to have a much better idea of where he’s going, and gets there in style.
Children of the Sky is an obvious middle chapter, leaving plenty of loose ends for a subsequent book. Let’s hope it doesn’t take 20 more years to come out.
“Intel vs. Arm” returns 65,000 hits from Google, and in general much is made of the contrast between Intel’s near-dominance in the high-performance market and ARM’s near-dominance in the phone/tablet market. But it seems that:
Intel’s important asset is its massive fab capacity. In a web-based and mobile world, its aged ISA is increasingly irrelevant.
ARM has no fabs. They do, on the other hand, have great processor designs for the medium-performance, high-efficiency space.
This doesn’t sound to me like a battle of the titans. It sounds like a wonderful partnership waiting to happen. What am I missing? Of course I am aware of Intel’s ARM license, but that just makes this more mysterious.
Across your research projects, make sure there is potential for both short-term and long-term payoff.
Understand your institution’s retention, promotion, and tenure policies. More importantly, understand what is being left unsaid in those policies.
Get one good PhD student right away, like in your first year. Two or three would be great. For most people, six students would be a disaster.
Stop working with a student as soon as it becomes clear that the relationship isn’t going to work out.
Be careful about taking on a new course prep; try to do this only every two years or so.
Do not suck at teaching.
Take student course evaluation reports very seriously while also not taking them seriously at all.
Avoid projects that have massive infrastructure-building requirements.
Let students take ownership of their projects.
Keep doing research yourself; you’re not just a manager.
Learn office politics. Make friends and allies. Do not make enemies. Do not get caught in the middle of existing disputes. Speak up at faculty meetings, but not (often) to say how something was done at your previous institution.
Get on conference program committees right away. Conference chairs love hard-working PC members who are not yet saddled with all of the responsibilities that senior faculty accumulate.
Always be in the middle of several papers and proposals.
Become known for something (positive). Better yet, become famous for something.
Drop untenable hobbies and outsource household duties that you don’t enjoy.
Take advantage of early-career funding sources.
One of the hardest parts of being an assistant professor is knowing when to really listen to advice and when to nod politely while giving someone both middle fingers inside your brain. I fear that I almost always erred on the side of not listening, and I generally expect that promising young faculty will do the same. In other words, if a person has the courage and tenacity to actually pursue a research program that matters, surely they’re not going to spend a lot of time listening to random life advice.
The San Rafael Swell is a large uplifted area in southeast Utah that has eroded into numerous badlands and canyon systems. The Swell is not particularly well-known outside of Utah because it contains no visitor centers, motels, restaurants, or any other services — it’s the kind of place you enter with maps, plenty of water, and a full tank of gas. The network of bladed roads put in during the brief uranium boom in the 1950s is pervasive enough to make backpacking an unattractive prospect in much of the Swell. Chimney Canyon is an exception.
The other day I posted about a simple, low-effort way to improve the bug-finding performance of a random tester. We now have a draft paper about this topic; it’s joint work between my group at Utah and Alex Groce’s group at Oregon State. The key claim is:
… for realistic systems, randomly excluding some features from some tests can improve coverage and fault detection, compared to a test suite that potentially uses every feature in every test. The benefit of using a single inclusive default configuration — that every test can potentially expose any fault and cover any behavior, heretofore usually taken for granted in random testing — does not, in practice, make up for the fact that some features can, statistically, suppress behaviors.
Last spring I had a lucky conversation. I was chatting with Vikram Adve while visiting the University of Illinois, and we realized that we were working on very similar projects: figuring out what to do about integer overflow bugs in C and C++ programs. Additionally, Vikram’s student Will and my student Peng had independently created very similar LLVM-based dynamic checking tools for finding these bugs. As a researcher I find duplicated effort to be bad at several levels. First, it’s a waste of time and grant money. Second, as soon as one of the competing groups wins the race to publish its results, the other group is left with a lot of unpublishable work. After talking things through, we agreed to collaborate instead of compete. This was definitely a good outcome, since the resulting paper — submitted last week — is almost certainly better than what either group would have produced on its own. The point is to take a closer look at integer overflow than previous work had taken. This required looking for integer overflows in a lot of real applications and then studying these overflows. It turns out they come in many varieties, and the distinctions between them are very subtle. The paper contains all the gory details. The IOC (integer overflow checker) tool is here. We hope to convince the LLVM developers that IOC should be part of the default LLVM build.
We would be happy to receive feedback about the draft.