If you’re going to work on something, you might as well choose something that people care about.
The alternative is a nightmare: Choose a project. Work on it as hard as you can, and as cleverly as you can, for a long time. Then discover that nobody cares whether you have succeeded or not. That’s just a complete waste of resources.
Don’t choose a project just for “the challenge” or because it is an intriguing puzzle. The problem with puzzles and games is that even if you win, it’s still just a game. The world is full of non-made-up problems crying for a solution. With just a little extra wisdom up front, you can choose a real project that is just as interesting as any puzzle or game – and if you solve it, you’ve made the world a better place.
Also, you will find it easier to get resources and attract collaborators if you work on something that the real world cares about.
When picking projects, there are two fundamental forces in opposition. Or, to say it the other way, there are two big mistakes that can be made:
Mistake #1: Most people don’t know how to deal with risk, so they respond by being extremely risk-averse. They solve every problem in the conventional, non-creative, non-risky way. You might think that by never taking a risk, you never make a mistake – but that’s a mistake unto itself. Being overly risk-averse is unwise. You are never conspicuously wrong, but every day you pass up another opportunity to be conspicuously right by finding the new, creative solution. It’s like being a zombie – you’re not obviously dead, but you’re not really alive, either. Sooner or later, your competitors will find the new way of doing things. You’ll be left in the dust, and you’ll never know what went wrong.
Mistake #2 occurs among researchers and other people who are fortunate enough to be allowed some scope for creativity. The mistake is to take too many risks. This includes inventing things that will never be used.
Customers want the whole solution; they are not interested in partial solutions. The solution is like a big chain; the customers won’t tolerate a chain with missing links.
On the other hand, researchers have to start somewhere. Somebody has to create one isolated piece, and then another, and then another. More often than not, the pieces are created in no particular order, and we have to collect quite a few of them before we can start linking anything together.
The research world would be crippled if researchers were required to build every chain in order, link by link. It is extremely common for pieces to be invented in isolation, and linked up only later.
Still, there ought to be a plausible vision-story. There ought to be some sort of vision as to where each piece might plausibly fit into a useful chain. Otherwise it’s just an idle puzzle: even if you figure out the puzzle, nobody cares.
The vision-story doesn’t need to be rigorous or super-detailed, but it does need to be plausible. You need to check for what I call obvious show-stoppers.
To repeat: When you have an idea, you are not obliged to pursue the details of every possible series/parallel ramification. But it would be wise to check for obvious show-stoppers.
Sometimes when you have a new idea, the attempt to find a vision-story fails. Sometimes that means you should abandon the project, but not always. Consider the following analogy: You start out with an alleged duck-egg. You incubate it for a while. It turns into a really ugly duckling. If you really have your heart set on raising nice ducks, you have to give up at this point and start over with a new egg. Or you could stick with it and see if it turns into a swan. Usually it won’t. Usually it’s just an ugly, sickly mutant duck. But sometimes you get lucky.
The trick, then, is to manage risk wisely. Running no risks at all is unwise. Running too many risks, or running the wrong sort of risks, is also unwise. The trick is to run risks that will pay off, on average.
There is a formalism for evaluating the payoff, loosely modeled on the standard “business case” formalism:
That is, the Net Present Value is:
    NPV = Σ_i R_i P(R_i) e^(−λ t_i) − Σ_j C_j P(C_j) e^(−λ t_j)        (1)

where R_i is the i-th contribution to the revenue, C_j is the j-th contribution to the cost, t_i and t_j are the times (relative to the present) at which the revenue or cost will occur, P(...) is the probability that it will occur that way, and λ is the discount rate (roughly speaking, the interest rate).
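Equation 1 is easy to evaluate mechanically once the estimates are in hand. Here is a minimal sketch in Python; the function name `npv` and the example numbers (a 50% chance of 300 units of revenue three years out, against a certain cost of 100 now, at a 5% discount rate) are made up purely for illustration:

```python
import math

def npv(revenues, costs, discount_rate):
    """Net present value per equation 1.

    revenues and costs are lists of (amount, probability, time) tuples,
    where time is measured in years relative to the present.
    """
    def pv(amount, prob, t):
        # Each term: amount, weighted by its probability,
        # discounted back to the present by e^(-lambda * t).
        return amount * prob * math.exp(-discount_rate * t)

    return sum(pv(*r) for r in revenues) - sum(pv(*c) for c in costs)

# Hypothetical project: pay 100 now for certain; 50% chance of
# receiving 300 three years from now; 5% discount rate.
value = npv(revenues=[(300, 0.5, 3)], costs=[(100, 1.0, 0)], discount_rate=0.05)
```

As the text says, the hard part is not the arithmetic but estimating the amounts and probabilities; the same function applied to each competing method gives the comparison called for below.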
If there are multiple ways of solving the customer’s problem, evaluate the NPV of each. Do not just evaluate your favorite method in isolation. Do not just evaluate your favorite method and some straw-man alternative.
One way of organizing such an analysis is to use a spreadsheet with columns for the competing methods and rows for the advantages and disadvantages.
Usually, alas, the NPV formula can’t be applied with much precision, because it is based on costs and revenues that can only be estimated. (The precision can be somewhat improved by doing scenario planning, but that is beyond the scope of this note.)
But still the structure of the formula sheds light on some fundamental notions, including those discussed in section 4. But first, a simple example:
In Calandra’s parable about measuring the height of a building using a barometer (http://www.rbs0.com/baromete.htm), one of the methods is to drop the barometer, measure the time, and solve the formula S = ½ a t². First of all, that’s bad physics, because aerodynamic effects would introduce horrible irreproducible errors on top of systematic errors. But even if we could neglect the aerodynamic effects, it would be a foolish method because it flunks the payoff test. Remember, you must evaluate all the plausible alternatives. In this case, an obvious alternative would be to drop a golf ball rather than a barometer; the physics would work out at least as well, and the cost would be far less.
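For concreteness, here is the free-fall arithmetic behind the dropped-barometer method, in Python. The 3.0-second fall time is a made-up number, and the calculation deliberately ignores air resistance, which is exactly the assumption criticized above; note that the calculation is identical whether you drop a barometer or a golf ball:

```python
g = 9.81  # acceleration due to gravity, m/s^2 (ignoring air resistance)

def height_from_fall_time(t):
    # Solve S = (1/2) a t^2 for the drop height, with a = g.
    return 0.5 * g * t * t

h = height_from_fall_time(3.0)  # hypothetical measured fall time of 3.0 s
```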
Sometimes, when people are criticized for doing useless research, they respond by saying it is “long-term research” that will be useful “eventually” and they cite examples of discoveries that were made long ago that we still value today.
That is a completely bogus argument, based partly on ambiguity and partly on a non-understanding of the ideas in section 3.
First, we must remove the ambiguity between long-delayed impact and long-enduring impact. The exponential factors in equation 1 tell us that work with long-delayed impact has greatly-reduced value. The summation tells us that work with long-enduring impact has somewhat-increased value.
To repeat: If somebody starts talking about long-term research, demand clarification: is it long-delayed impact, or long-enduring impact? Long-enduring impact is good. Long-delayed impact is very bad. Ideally we want projects that have prompt and enduring impact.
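The asymmetry between the two kinds of “long-term” falls straight out of equation 1. A rough sketch in Python, with a made-up 5% discount rate and a made-up 30-year horizon: one unit of value delayed 30 years is worth only a fraction of a unit today, while one unit per year sustained for 30 years is worth many units today.

```python
import math

rate = 0.05  # assumed annual discount rate (illustrative)

# Long-delayed impact: one unit of value, delivered 30 years from now.
delayed = math.exp(-rate * 30)

# Long-enduring impact: one unit of value per year for 30 years, starting now.
enduring = sum(math.exp(-rate * t) for t in range(30))
```

With these numbers, the delayed payoff is discounted to roughly a fifth of a unit, while the enduring stream is worth roughly sixteen units, which is the quantitative content of “long-delayed impact has greatly-reduced value” and “long-enduring impact has somewhat-increased value.”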
Much of what happens in the world cannot be described by the usual laws of economics. All in all, the not-for-profit sector (including charities, hobbies, pets, and all that) is comparable in size to the government sector and the for-profit business sector.
I sit on the board of one not-for-profit organization, and belong to others.
I enjoy gardening as a hobby. I don’t pretend that it is an economical way to produce flowers or food; if you figure what my time is worth, I grow some astonishingly costly tomatoes. I don’t expect anybody to buy them at price=cost or anything like that.
Some people play chess as a hobby. It takes a certain amount of time, and produces nothing salable. Some people do amateur physics as a hobby. Once again, it produces nothing salable. All this is perfectly understandable.
The place where I get confused is when people do hobbyist-grade physics and expect taxpayers to pay for it. I don’t expect the government to subsidize my tomato-garden, and if I do “physics” that is of no interest to anybody but myself, I wouldn’t dare ask the government to pay for that, either.
If I do something to satisfy my own intellectual curiosity, I pay for it with my own resources. If I claim to do it to satisfy the public’s intellectual curiosity, I have a responsibility to focus my intellect on areas that the public is curious about.
As another example of a non-scientific reason for doing something, consider the Apollo project. There were political reasons for doing it, which were clearly articulated at the time. The political argument is understandable, even if you don’t happen to agree with it. In contrast, I have never seen anything approaching an understandable justification for the project in terms of the science.
“Science” should not be used as the explanation for projects that cannot be explained in rational terms. That is the exact opposite of what “science” ought to mean.
You may be able to find some things that are valuable now that were invented a long time ago “on a lark”. But there are not nearly as many such things as most people suppose. Selecting the data a posteriori is highly unscientific. For every lark that paid off, there are untold others that didn’t pay off, and selectively calling attention to the ones that did pay off is unfair. It seems obvious that investing at random, without regard to payoff, is not a good investment strategy.
Discoveries are often made out of order, as discussed in section 2. Life would be simpler if discoveries could be made in order, but they can’t, so we do what we can and fill in the blanks later.
There is a world of difference between doing things at random (without regard to value) and doing valuable things slightly out of order. Let’s be clear about this:
We are talking about shades of gray here. People who can only think in terms of black versus white will get it wrong every time. That is, we are talking about judgment here. You can find examples of bad judgment, such as the “experts” who scoffed at the Wright brothers. But that doesn’t mean we should react to occasional instances of bad judgment by never having any judgment at all.
The trick is to run risks wisely. Good researchers run risks all the time, risks that would cause ordinary mortals to instantly die of adrenalin poisoning. The risks don’t pay off 100% of the time. That’s why there are P(...) probability factors in equation 1. The only requirement is that the risks pay off often enough, and pay off big enough, that you win on average.
The research often pays off in ways that were not foreseen in detail. The wise research manager takes that into account. From time to time one has to make the argument that “this is almost certainly good for something, but I can’t yet tell you exactly what”. That’s very different from saying “this cannot possibly be useful”.
By way of analogy, consider fishing for tuna. There are two ways of catching tuna using hooks. (We won’t discuss nets.) Method #1 is to put a piece of bait on the hook and dangle it in the water until a tuna takes that bait and that hook all at once. Method #2 is to throw a bunch of chum in the water. The tuna show up in great numbers and go into a feeding frenzy. You then dangle hooks in the water and some of the tuna will get hooked. The accounting is tricky, because you can’t prove that a particular piece of attractant contributed to hooking a particular tuna (as you could with method #1), but it turns out that method #2 works fine on average.
It would be stupid to spend money on chum and then not bother to dangle the hooks. Similarly it would be stupid to chum with arugula or some other expensive substance that the tuna aren’t interested in.
So it is with research. The accounting is tricky. All we ask is that things work out on average. But averaging doesn’t give you a license to spend research money on arugula or other things that have no chance of bringing you closer to the goal.
Here’s another analogy: In the research business, we often speak of “hitting a home run” i.e. making a really great discovery. It’s hard enough to do that, and it becomes impossibly hard if the researcher is required to “call the shot” i.e. to specify exactly what part of the bleachers the ball will land in. So do your best. Hit the ball hard. Steer it enough that it doesn’t go foul, but don’t constrain yourself to hitting a single pre-determined seat. More often than not, your best effort will result in an inglorious strikeout. Sometimes it will be a sacrifice fly. Sometimes it will be a ground-rule double. Sometimes it will actually be a home run. We will let you try a few times and take the average. But remember, the averaging doesn’t give you a license to be stupid or lazy. Getting paid to do research is a privilege and a high honor, given only to those who try their best every time they come to the plate. Don’t abuse the privilege.