Matt Ridley: Specious arguments against government research funding.

I’m a clinician, but I’m actually also a translational scientist. It’s not uncommon for those of us in medicine involved in some combination of basic and clinical research to argue about exactly what that means. The idea is that translational science is supposed to be the process of “translating” basic science discoveries in the laboratory into medicine, be it in the form of drugs, treatments, surgical procedures, laboratory tests, diagnostic tests, or anything else that physicians use to diagnose and treat human disease. Trying to straddle the two worlds, to turn discoveries in basic science into usable medicine, is more difficult than it sounds. Many are the examples of promising discoveries that appeared as though they should have led to useful medical treatments or tests but, for whatever reason, didn’t work when attempted in humans.

Of course, if there’s one thing that the NIH and other funding agencies have been emphasizing, it’s been “translational research,” or, as I like to call it, translation über alles. Here’s the problem. If you don’t have basic science discoveries to translate, then translational science becomes problematic, virtually impossible even. Translational research depends upon a pipeline of basic science discoveries to form the basis for translational scientists to use as the starting point for developing new treatments and tests. Indeed, like many others who appreciate this, I’ve been concerned that in recent years, particularly with tight budgets, the NIH has been overemphasizing translational research at the expense of basic research.

So it was with interest and disappointment that I read Matt Ridley’s latest op-ed in the Wall Street Journal entitled “The Myth of Basic Science.” (Many of you shared my disappointment, to the point where I felt obligated to post a modified and reformatted version of this post—which first appeared on my not-so-super-secret other blog—to SBM.) In his article, Ridley tries to argue that it is innovation and technology that drive scientific discovery, not scientific discovery that drives technological breakthroughs. It’s a profoundly misguided argument that boils down to two central ideas and, more importantly, incorrectly conflates invention and innovation with basic science, failing to understand that innovation in technology often depends on basic science that is decades old. I’m quoting him directly because on Twitter he seems to be claiming that his critics are attacking straw men (they aren’t):

  • “Most technological breakthroughs come from technologists tinkering, not from researchers chasing hypotheses. Heretical as it may sound, ‘basic science’ isn’t nearly as productive of new inventions as we tend to think.”
  • “Governments cannot dictate either discovery or invention; they can only make sure that they don’t hinder it.” (Or, as he quotes others elsewhere, government funding of research is not particularly productive.)

Ridley starts out with what struck me as one of the stranger arguments I’ve ever seen, namely that because there has been so much parallel technological discovery, technology is fast on the way to “developing the kind of autonomy that hitherto characterized biological entities.” (One can’t help but wonder if he’s been watching too many Terminator movies.) Here’s a taste:

Suppose Thomas Edison had died of an electric shock before thinking up the light bulb. Would history have been radically different? Of course not. No fewer than 23 people deserve the credit for inventing some version of the incandescent bulb before Edison, according to a history of the invention written by Robert Friedel, Paul Israel and Bernard Finn.

The same is true of other inventions. Elisha Gray and Alexander Graham Bell filed for a patent on the telephone on the very same day. By the time Google came along in 1996, there were already scores of search engines. As Kevin Kelly documents in his book “What Technology Wants,” we know of six different inventors of the thermometer, three of the hypodermic needle, four of vaccination, five of the electric telegraph, four of photography, five of the steamboat, six of the electric railroad. The history of inventions, writes the historian Alfred Kroeber, is “one endless chain of parallel instances.”

All of which is true. However, the relevance of this observation to basic science being a “myth” is tenuous at best. So where’s the straw man? It comes later in the article:

Politicians believe that innovation can be turned on and off like a tap: You start with pure scientific insights, which then get translated into applied science, which in turn become useful technology. So what you must do, as a patriotic legislator, is to ensure that there is a ready supply of money to scientists on the top floor of their ivory towers, and lo and behold, technology will come clanking out of the pipe at the bottom of the tower.

But, wait, you say. Isn’t that what I just said, that there must be a continual flow of new scientific discoveries to be translated into therapies (or technologies)? Yes and no. First of all, no one who knows anything about science believes that “innovation can be turned on and off like a tap” or that you can just throw money at basic scientists and expect technology to come “clanking out of the pipe at the bottom of” the ivory tower. I have no doubt that there are politicians who believe that, but I bet Ridley would be hard-pressed to find a scientist involved in applied science, particularly medicine, who believes that. The process is way more complicated. Basic science is hit or miss; you can’t predict which discoveries will or will not be translatable into something useful. In medicine, for instance, it’s virtually impossible to predict whether, say, a newly discovered enzyme involved in cancer progression will turn out to be a useful drug target. Moreover, anyone who knows anything about basic science being translated into useful products knows that both kinds of science are important. You need the basic science as the grist for translational science; there must be a balanced approach. In the case of medicine (and because I’m a medical researcher I naturally concentrate mostly on medical research), complaints about the NIH are not that it’s funding translational research but that its emphasis has become unbalanced.

Indeed, unwittingly, Ridley’s examples actually support this view. For the sake of argument, let’s not get into the weeds of whether technological advances are becoming akin to a self-sustaining, evolving system in which human beings are “just along for the ride,” as Ridley puts it, because for what I’m about to say it really doesn’t matter whether that’s true or not. (Personally, I think Ridley’s view is exaggerated.) Think about why these various devices were invented in parallel by so many people, and I bet you’ll see where I’m going with this.

What if the reason for parallel inventions was that the necessary prerequisite discoveries in basic science and engineering had been made, thus making those inventions finally possible? By the early 1800s, the basic physics for photography, for instance, had been around for centuries, dating all the way back to the pinhole camera and the camera obscura. Optics had been worked out for microscopes and telescopes. All that was required was a means of recording images, and that took chemistry; a number of scientists and inventors were working on that problem, leading to the daguerreotype and William Fox Talbot’s silver images on paper. Given that so many people were working on the problem of photography at the time, it is not surprising that more than one discovered the chemistry needed to make photographs a reality.

Elsewhere, Ridley argues:

When you examine the history of innovation, you find, again and again, that scientific breakthroughs are the effect, not the cause, of technological change. It is no accident that astronomy blossomed in the wake of the age of exploration. The steam engine owed almost nothing to the science of thermodynamics, but the science of thermodynamics owed almost everything to the steam engine. The discovery of the structure of DNA depended heavily on X-ray crystallography of biological molecules, a technique developed in the wool industry to try to improve textiles.

This is a profound misunderstanding of how basic science is translated into useful products. For instance, it is true that there were steam engines before the laws of thermodynamics were worked out, and it’s true that the drive to improve steam engine design had a huge influence in the formalization of the laws of thermodynamics in the 19th century. Of course, one has to ask which steam engine Ridley is referring to, given that rudimentary steam engines date back to the first century AD and there were several varieties of steam engines developed in the 17th century. I’m guessing that what he means is the Newcomen engine, developed in 1712. Or perhaps he means James Watt’s steam engine, patented in 1781, which was the precursor to the steam engines that powered ships and industry in the 19th century and beyond. Whichever steam engine he means, Ridley’s description glosses over thermodynamic research done before the steam engine, such as Boyle’s Law, which led to Denis Papin building a steam digester, which was a closed vessel with a tightly fitting lid that built up a high pressure of steam. (In fact, Papin worked closely with Boyle from 1676-1679 to develop the steam digester.) Papin later added a steam release valve that kept his machine from exploding. Watching the valve move up and down, he came up with the idea of a piston and cylinder engine but didn’t follow through with his design himself. That was left to Thomas Savery and later Thomas Newcomen and then, decades later, to James Watt. In the 19th century, the steam engine was an excellent tool that helped scientists formalize the laws of thermodynamics. Basically, discoveries in thermodynamics, such as Boyle’s Law, facilitated designing the steam digester and steam engine and later improving the steam engine. In turn, engineering improvements in the steam engine contributed to the understanding of thermodynamics during the 19th century.

Why did I go through all this? It’s because, even if, as Ridley states, there once was a linear view of progress, of translation if you will, from basic science discoveries to products, be they medicines or the steam engine, that view is long gone. It is now understood that basic science drives the development of products and those products drive basic science. So, yes, elucidating the double helical structure of DNA was not possible until the development of X-ray crystallography. So what if X-ray crystallography was originally developed for the wool industry? If I go back another step, X-ray crystallography itself depends on the understanding of so much basic physics that it couldn’t exist until after (1) X-rays were discovered and (2) diffraction patterns and X-ray scattering were understood. These all depended on discoveries taking place over a roughly 25-year period from 1895, when X-rays were discovered, to 1920, by which time the technique of X-ray crystallography had been validated on several crystals. Without the basic science of X-rays, diffraction, scattering, and crystallography, the structure of DNA could not have been elucidated when it was, more than three decades later.

All of this also leaves aside the basic science in genetics and biochemistry from the preceding decades that had identified DNA as the basis of heredity, determined its chemical constituents, and gone a long way toward teasing out hints of how DNA might encode information, knowledge without which the X-ray crystallographic structure would have meant little.

As I said, you never know what basic science will discover, which basic science discoveries will lead to useful products, or ultimately what sorts of uses they will be put to.

Basically, Ridley is attacking a straw man version of basic science, and nothing in his article actually rebuts the “myth” of basic science that its title proclaims. Through it all, he seems not to understand the difference between R&D, which is “translational research,” research intended to result in a product or the improvement of a product, and basic research, which is, well, research with the intent of discovering new scientific knowledge. In a computer company, R&D might lead to new chips. In a car company, R&D might lead to a more efficient engine that can be produced more cheaply. In basic science research, the goal is not nearly as defined, and scientists don’t necessarily know what they will find or where their investigation will take them. In any case, there is nothing contradictory about a bunch of inventors or engineers tinkering and producing inventions like the electric light in parallel, because basic science and technology have to progress enough to produce the prerequisite understanding before such inventions become possible. When that happens, when the conditions are ripe for inventions like the telephone to be invented by several people, it’s because the basic science groundwork has been laid. It might have been laid decades earlier, with practical applications developed incrementally until a specific invention becomes possible, or, as in the case of Papin working with Boyle to invent a steam engine, the basic science groundwork and practical application might progress rapidly hand in hand.

As I read Ridley’s op-ed, I kept asking myself: What, exactly, is he getting at? Why did he choose these examples? So what if technological progress happens simultaneously in many places by many people? So what if technology is like a biological species, evolving in response to whatever selective pressures there might be?

Ridley’s purpose becomes clear when he starts citing Terence Kealey, a biochemist turned economist. Now, I had heard of Matt Ridley before, although not recently. He wrote a science book I enjoyed a long time ago, Genome: The Autobiography of a Species in 23 Chapters. However, besides that, I was not very familiar with him. Kealey, on the other hand, I had never heard of. So I Googled him, and I quickly learned that he is an adjunct scholar at the Cato Institute and is known for arguing that government money distorts the scientific enterprise. I also learned that he’s an anthropogenic global climate change denialist and even chaired the Global Warming Policy Foundation, which has been described as the “UK’s most prominent source of climate-change denial” and whose “review” of temperature records has been seriously criticized as incompetent and ideologically-driven.

OK, so Kealey is an anthropogenic climate change denialist, which casts his critical thinking skills with respect to science in great doubt, but maybe he knows economics:

For more than a half century, it has been an article of faith that science would not get funded if government did not do it, and economic growth would not happen if science did not get funded by the taxpayer. It was the economist Robert Solow who demonstrated in 1957 that innovation in technology was the source of most economic growth—at least in societies that were not expanding their territory or growing their populations. It was his colleagues Richard Nelson and Kenneth Arrow who explained in 1959 and 1962, respectively, that government funding of science was necessary, because it is cheaper to copy others than to do original research.

“The problem with the papers of Nelson and Arrow,” writes Mr. Kealey, “was that they were theoretical, and one or two troublesome souls, on peering out of their economists’ aeries, noted that in the real world, there did seem to be some privately funded research happening.” He argues that there is still no empirical demonstration of the need for public funding of research and that the historical record suggests the opposite.

This argument is strange. For one thing, no one that I’m aware of claims that “science would not get funded if the government didn’t do it and economic growth would not happen if science did not get funded by the taxpayer.” The question is what kinds of science would and wouldn’t be funded by private sources. Overwhelmingly, the kinds of science funded by nongovernmental sources tend to be R&D (e.g., pharmaceutical or technology companies, or lone inventors, doing research that can be directly translated into a product) or philanthropy-funded research (e.g., the Susan G. Komen Foundation, the March of Dimes, and other charitable organizations that fund research on a specific topic). So, yes, research would be funded without the government, but it would tend to be much more “translational” and/or targeted at specific problems.

There’s also this dubious assertion by Kealey, cited approvingly by Ridley:

After all, in the late 19th and early 20th centuries, the U.S. and Britain made huge contributions to science with negligible public funding, while Germany and France, with hefty public funding, achieved no greater results either in science or in economics. After World War II, the U.S. and Britain began to fund science heavily from the public purse. With the success of war science and of Soviet state funding that led to Sputnik, it seemed obvious that state funding must make a difference.

Huh? In the late 19th and early 20th centuries, Germany ruled physics, producing scientists like Max Planck, Albert Einstein, Werner Heisenberg, and many others, whose revolutionary discoveries laid the groundwork for modern quantum physics. Germany was a powerhouse in science back then (and still is, only nowhere near as dominant). Just look at the Nobel Prizes in the sciences! In the early 20th century, for example, Germany won 14 of the first 31 Nobel Prizes in Chemistry, and until 1965 Germany had won a larger percentage of science Nobel Prizes than any other country. I also note that the US didn’t catch up with France on that score until the 1940s. Obviously, Nobel Prizes are not in and of themselves a measure of how good a country is at science, but they do suggest where the most innovative research had been occurring one to a few decades earlier, given that the science that wins Nobel Prizes is usually at least a decade old, to allow time for its significance to become apparent. One can’t help but note that there is a correlation between the dominance of the US in Nobel Prizes and the start of government funding of science. Does correlation mean causation in this case? Not necessarily, given all the other factors that could impact this measure, but the observation is still a piece of data that at least calls Kealey’s assertion into serious doubt.

Basically, Ridley postulates the “myth” of basic science as a means of arguing that current patent policy is too stringent and protects monopoly (which is an arguable point) and that government funding “crowds out” private funding and prevents discoveries from being made:

To most people, the argument for public funding of science rests on a list of the discoveries made with public funds, from the Internet (defense science in the U.S.) to the Higgs boson (particle physics at CERN in Switzerland). But that is highly misleading. Given that government has funded science munificently from its huge tax take, it would be odd if it had not found out something. This tells us nothing about what would have been discovered by alternative funding arrangements.

And we can never know what discoveries were not made because government funding crowded out philanthropic and commercial funding, which might have had different priorities. In such an alternative world, it is highly unlikely that the great questions about life, the universe and the mind would have been neglected in favor of, say, how to clone rich people’s pets.

At this point, I gave up on Ridley. First, he’s downplaying the number of discoveries made with government funding, such as NIH and NSF funding. Also, the Internet is rather a big deal to dismiss so breezily as a “highly misleading” example, given how it has so thoroughly transformed our world over the last 25 years or so—mostly by private companies taking advantage of and building on the government-supported infrastructure and protocols. As for the last “what if” assertion, I did facepalm on that one, given that we actually do have a sort of “living experiment” going on right now regarding what happens when government funding dries up. The NIH budget has been more or less static for over a decade and thus has declined significantly in real dollars. As a result, private sources have stepped in. Have their priorities been better for the country? Not really:

Yet that personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.

And:

Historically, disease research has been particularly prone to unequal attention along racial and economic lines. A look at major initiatives suggests that the philanthropists’ war on disease risks widening that gap, as a number of the campaigns, driven by personal adversity, target illnesses that predominantly afflict white people — like cystic fibrosis, melanoma and ovarian cancer.

A Nature editorial describes the problem well:

We applaud and fully support the injection of more private money into science, whether clinical or basic. Nevertheless, it is important for each funding body to take into account the kinds of research being heavily supported by the others, to avoid putting all our eggs into a few baskets and shortchanging areas that may yet have crucial contributions to make.

Ridley, given his seeming free market proclivities, might prefer market-based philanthropy to fund science over government funding (and that is his right), but he is sadly deluded if he thinks that private sources don’t “distort” scientific priorities every bit as much as he accuses government funding of doing and quite possibly even more. At least governments try to look at what will benefit the nation (or large parts of the nation); private philanthropists might or might not do that. Many simply respond to personal interests, personal tragedies, and, sometimes, crackpot ideas. Now it’s true that the government is by no means immune to crackpot ideas (witness the NCCIH, formerly known as NCCAM), but funding such ideas has to work within already established rules for peer review. The same is not true of private funding, where the philanthropist or foundation can basically make up any rules he or it likes to determine what research gets funded. Indeed, if he wishes, a philanthropist can even fund a scientist just because he takes a fancy to the work, no peer review needed.

In the end, I would argue that science should be funded by both government and private sources. What the optimal balance is will depend on the country, its priorities, and its economic resources. Contrary to what Ridley and Kealey argue, it doesn’t have to be a zero-sum game. Even if Ridley is correct that technological discovery cannot be regulated by government (which he does argue with his likening of technology to an evolving biological organism), it would not follow that government should not fund research.

To see why, let’s revisit Ridley’s picture of technological progress as developing to become like a biological organism, complete with evolution in response to selective pressures. Now let’s carry that analogy farther than Ridley did. Just because evolution by natural selection still occurs in animals and plants doesn’t mean that selective breeding (i.e., guiding that evolution with human intent) doesn’t remain useful and effective in specific cases, such as breeding crops, dogs, horses, pigs, and other animals. Similarly, even if scientific and technological progress is evolving like species of organisms, that doesn’t mean that guiding the evolution of specific “species” of science by directing government funding toward human-decided priorities isn’t useful and effective.

In the end, all Ridley is arguing is that he prefers the science priorities of the private sector over priorities decided by government. To him, one (private funding of science) is desirable and “natural,” while the other (government funding of science) is somehow “distorting” and therefore “unnatural” and wrong. Yet there is no real reason, other than personal preference, to view government funding of science as somehow less “natural” than private funding. To do so invokes an argument from antiquity, namely that because private sources funded nearly all research until relatively recently, that must be the “natural” state of things. Nor can Ridley convincingly show any fundamental difference in how much the whims of a few private donors and various industries “distort” science compared to government. Ultimately, Ridley just likes how private sources influence research priorities but doesn’t like how government influences them. That’s really all there is to his essay on science funding: private sector good, government bad.
