With this in mind, what's our best option for making the case that government investment in academic research is worthwhile and should be sustained, and, if anything, redoubled?
To me this isn't as obvious a proposition as it may sound, in part because over the course of my career I've seen quite a few failures of government research policy. Here are some examples that come to mind, and I could easily summon up additional ones:
- You may be aware that I worked with the French to revamp their air traffic control system architecture in the 1990s, and in that same timeframe built a reliable communications architecture for the US Navy AEGIS and a floor trading communication system for the New York Stock Exchange (this was back when I was commercializing a software library we created at Cornell). What was striking was that in all three cases, there were advanced prototyping and research programs dedicated to those projects that failed to produce the needed solutions. In fact the French ATC project approached me at the same time as a much-touted US FAA rebuild started, one that drew on cutting-edge ideas from industry. The US project collapsed just a few years later, and billions were lost. The French project was actually launched in a spirit of catch-up, and they approached me with some frustration that within France, the country lacked the technical know-how to solve the problem. The same is true of the other projects I mentioned: each of them turned to me after the "official" technology pipeline broke down and didn't deliver what they needed. So here we have three examples, and I could give many more, in which the main government R&D projects that should have solved the problems stumbled badly.
- Back in the 1992 timeframe, a program manager at DARPA proudly told me that he had made a gutsy decision to kill all DARPA research investments in database technologies. Today, as we look at the amazing successes that machine learning on big data has already delivered, it is sobering to realize that DARPA turned its back on the whole area at exactly the moment it should have been investing to stimulate these exciting opportunities, and to ensure that the military side of the research didn't get overlooked. (This latter point matters because the military often needs chocolate-flavored research in a world that limits itself to vanilla when focused purely on unclassified commercial opportunities. If nobody is pushing to create chocolate-flavored technology, it doesn't necessarily get done all by itself.)
- Under the Bush-Cheney administration, the head of DARPA was a fellow named Tony Tether, who had some sort of complex family tie to Dick Cheney. Tony felt that our research mostly helped Indian and Chinese students learn American technology secrets, which they would then sneak back home to commercialize. The upshot of this xenophobic view was that DARPA would only fund academic research as subcontracts to proposals from companies, and surprisingly often, one got strong hints that those companies should come from a group people started calling the "FOT four": the companies run by "friends of Tony." The evidence, years later? Tony did a ton of damage, but it harmed the military more than anyone else. Today's military has really struggled to take advantage of the cloud (the new "government cloud" projects at NSA are signs that we seem to finally be getting there), deep learning, and even the new Internet of Things technology spaces.
- A while back I remember being told by an NSF leader, in one of those hallway conversations with voices lowered, that I needed to quit pushing quite so hard for money for the top proposals in a particular research program where I was on the selection panel. The message was that, for political reasons, a certain percentage simply had to go to weak proposals from institutions that weren't known for doing quality research at all, and that by fighting to divert that earmarked money back into the pool (to spend it on work from a place like MIT), I was just making trouble.
A second side of the failing is the vanilla-flavored technology issue. If we ask why industry didn't have solutions that could succeed in the US FAA rebuild, or for that matter the French ATC project, or the NYSE of the time, what stands out is that US industry is heavily optimized by Wall Street to focus narrowly and exclusively on short-term commercial opportunities with a very large payback for success. By and large, industry and investors punish companies that engage in high-risk research even when there is a high possible reward (by definition, the work wouldn't be highly risky if it didn't run a big risk of failing, and investors want a high likelihood of a big payday). So nobody looks very hard at niche markets, at technical needs that will matter in 10 years but haven't yet become front and center in anyone's priorities, or at stabs in the dark -- attempts to find new ways to solve very stubborn problems, without even a premise that there might be some kind of high-reward payout in the near term (high-risk research usually at least has the potential to hit an identifiable home run, and within a fairly short period of time).
A third is the one I mentioned briefly in that final example. When the folks who write the checks deeply misunderstand the purpose of research, it makes perfect sense to them to insist that some portion of the money go to the big state school in their home district, or be dedicated to research by teams that aren't really representative of the very best research ideas but that do have other attributes the funding agency wants to promote. I'm all for diversity, and for charity to support higher education, seriously, but not to the point where my assessment of the quality of a research proposal would somehow change if I realized that it was put forward by a school that has never had a research effort before, or by a team with certain characteristics. To me, the quality of a research proposal should be judged purely by the idea and by whether the team behind it has the ability to deliver a success, not by other "political" considerations.
If I distill it down, it comes to the following. The value of research is really quite varied. One role is to invent breakthroughs, but I think this role can be overemphasized, and that doing so is harmful because it creates a kind of imbalance: while some percentage of research projects certainly should yield shiny new toys that the public can easily get excited about, if we only invest in cool toys, we end up with the kinds of harmful knowledge gaps I emphasized above.
So a second role is to bridge those knowledge gaps: to invest also in research that seeks to lower the conceptual barriers that prevent us from solving practical problems or fully leveraging new technologies. Often this style of research centers on demonstrating best-of-breed ways to leverage new technologies and new scientific concepts, and then reducing the insights to teaching materials that can accompany the demonstrations themselves. This second kind of research, in my view, is often overlooked: it comes down to "investing in engineering better solutions," and engineering can seem dull side by side with totally out-there novelties. Yet if we skimp on the hard work of turning an idea into something real, the idea never delivers its value.
And finally, we've touched upon a third role of research, which is to educate, but not merely to educate the public. We also have the role of educating industry thought leaders, students, government officials, and others in decision-making situations. This education involves showing what works and what doesn't, but also showing our clientele a style of facts-driven thinking and experimentally grounded validation that they might otherwise lack.
So these are three distinct roles, and for me, research really needs to play all three at once. Each of them requires a different style of investment strategy.
So where does all of this lead? Just as there isn't one style of research that is somehow so much better than all others as to dominate, I think there isn't a single answer, but really a multitude of narrow answers that live under a broader umbrella, and we need a new kind of attention to the umbrella. The narrow answers relate to the specific missions of the various agencies: NSF, DARPA, DOE, OSD, AFOSR, AFRL, ONR, ARO, NIH, etc. These, I think, are fairly well-defined; the real issue is to maintain attention on the needs of the technology consumers, and on the degree to which the funding programs offer balanced and full coverage of the high priority needs.
The umbrella question is the puzzle, and I think it is on the umbrella point that many past mistakes have centered, including the examples I highlighted.
What it comes down to is this: when we have serious technology failings, it points to a failure of oversight. Government leadership needs to include priority-setting, and it requires a process for deliberately parceling out ownership, at least for urgent technology questions of direct relevance to the nation's ability to do things that really matter, like operating the bulk electric power grid in ways that aren't only safe (we've been exceptionally good at power safety), but that are also defensible against new kinds of terrorist threats and effective in leveraging new kinds of energy sources. We have lots of narrow research, but a dearth of larger integrative work.
That's one example; the issue I'm worried about isn't specific to that case but is more generic. My point is that it was at this integrative step that the failures I've pointed to all originated: they weren't so much narrow failures as a loss of control at the umbrella level: a failure to develop a broad mission statement for government-sponsored research, broadly construed, and to push that mission down in ways that avoid leaving big gaps.
This issue of balance is, I think, often missed, and it has definitely been missing in the areas where I do my work: as noted in a previous one of these postings, there has been a huge tilt towards machine learning and AI -- great ideas -- but at the expense of research on systems of various kinds -- and this is the mistake. We need both kinds of work, and if we do one but starve the other, we end up with an imbalanced outcome that ill-serves the real consumers, be those the organizations that run the stock exchanges, power grids, or air traffic control systems, or even industry, where the best work ultimately sets broad directions for whole technology sectors.
A new crowd is coming to Washington, and sooner or later, they will be looking hard at how we spend money to incentivize research. Those of us doing the work need to think this whole question through, because we'll be asked, and we need to be ready with answers. It will come down to this: to the degree that we articulate our values, our vision, and our opportunities, there will be some opportunity for a realignment of goals. But we mustn't assume that the funding agencies will automatically get it right.
This will require considerable humility: we researchers haven't always gotten it right in the past, either. But it seems to me that there is a chance to do the right thing here, and I would hope that if all of us think hard not just in parochial terms about our own local self-interest, but also about the broader needs, we have a chance to really do something positive.