By Zoey O’Toole

It has become fashionable in the media to lament a modern lack of “faith” in science. National Geographic’s March 2015 cover story by Joel Achenbach, “Why Do Many Reasonable People Doubt Science?”, exemplifies this trend with the caption “We live in an age when all manner of scientific knowledge—from climate change to vaccinations—faces furious opposition. Some even have doubts about the moon landing.” As the holder of a bachelor’s degree in physics who is quite proud of my father’s contribution to the moon landings, yet has the temerity to question the wisdom of widespread vaccines and GMOs, I decided it was time to debunk the idea that questioning vaccines is “anti-science.”

Despite his title, Achenbach makes the case that those who “doubt science” are not in fact “reasonable.” Rather, they are driven by emotion—what he calls intuitions or “naïve beliefs.” “We have trouble digesting randomness,” he says. “Our brains crave pattern and meaning,” implying that we use our own experiences to see patterns where none exist. Achenbach relies on logical inconsistencies and false assumptions that, taken together, make a better case against his thesis than for it—at least with regard to vaccine science.

“You Keep Using That Word. I Do Not Think It Means What You Think It Means.” – Inigo Montoya

In The Structure of Scientific Revolutions, philosopher Thomas Kuhn instigated a revolution of his own—in our understanding of how science progresses. Kuhn’s main idea is that scientific understanding is not simply a gradual accretion of knowledge, but is instead more episodic in nature: periods of “normal science,” the “puzzle-solving” guided by the prevailing paradigm, are punctuated by periods of “revolutionary science” as an old paradigm gives way to one that better explains the totality of observed phenomena. An established paradigm is generally not abandoned until overwhelming evidence accumulates that an alternate credible hypothesis does a better job of explaining the data.

As a science journalist for National Geographic, Achenbach ought to know Kuhn’s work, and indeed he seems to understand it when he says “Scientific results are always provisional, susceptible to being overturned by some future experiment or observation.” Having sat through many lectures on scientific theories once accepted and later discarded when they did not fully account for the data, I fully concur. Oddly, Achenbach undercuts that understanding with, “The media would also have you believe that science is full of shocking discoveries made by lone geniuses. Not so. The (boring) truth is that it usually advances incrementally, through the steady accretion of data and insights gathered by many people over many years.”

This statement is patently false. First off, the media tends to downplay, if not ignore, “lone geniuses” until their contributions are thoroughly accepted by the mainstream. Secondly, “the steady accretion of data and insights gathered by many people over many years” cannot by its nature bring about the biggest advancements in science—the scientific revolutions. As Wikipedia puts it,

In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Thomas Kuhn calls revolutionary science…. Those scientists who possess an exceptional ability to recognize a theory’s potential will be the first whose preference is likely to shift in favour of the challenging paradigm.

Eventually a “paradigm shift” occurs that ushers in a scientific revolution, resulting in an explosion of new ideas and directions for research. Achenbach recognizes this tension between the bolder and more conservative scientists to a degree:

Even for scientists, the scientific method is a hard discipline. Like the rest of us, they’re vulnerable to what they call confirmation bias— the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them.

While Achenbach acknowledges that, as human beings, scientists are subject to biases, he implies that those biases are held in check by the magical process of peer review. What he fails to mention, however, is the fact that peer review is so imperfect in practice that Richard Smith, former editor of the British Medical Journal, wrote in his 2006 article, “Peer Review: A Flawed Process at the Heart of Science and Journals”:

This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you’d expect by chance.

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked “publish” and “reject.” He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal composed only of papers that had failed peer review and see if anybody noticed. I wrote back “How do you know I haven’t already done it?”

Marcia Angell, M.D., former editor in chief of the New England Journal of Medicine, believes that problems with scientific research, especially pharmaceutical research, go much deeper than peer review. In May 2000 she wrote an editorial in the NEJM asking “Is Academic Medicine for Sale?” about the increasingly blurry lines between academic research and the pharmaceutical companies that fund it. The editorial was prompted by a research article written by authors whose conflict-of-interest disclosures were longer than the article itself. [Read more about her on page 29 of this issue.]

What determines who will be among the bold scientists who usher in a paradigm shift and those who oppose it? Those who can take a step back from the narrow focus of “normal science” to see the bigger picture will be those who possess the “exceptional ability to recognize a theory’s potential.” Pediatric neurologist and Harvard researcher Martha Herbert, M.D., Ph.D., describes this tension well:

Ironically the exquisite precision of our science may itself promote error generation. This is because precision is usually achieved by ignoring context and all the variation outside of our narrow focus, even though biological systems in particular are intrinsically variable and complex rather than uniform and simple. In fact our brains utilize this subtlety and context to make important distinctions, but our scientific methods mostly do not. The problems that come back to bite us then come from details we didn’t consider.

The ability, as Herbert describes it, to use “subtlety and context to make important distinctions” constitutes the difference between the scientific revolutionaries and those who defend an error long past the point it has been proven to be an error. It is an ability that Albert Einstein possessed to a larger degree than most. Einstein felt that “All great achievements of science must start from intuitive knowledge,” and claimed, “At times I feel certain I am right while not knowing the reason.” Gavin de Becker, private security expert and author of The Gift of Fear, considers intuition a valid form of knowledge that does not involve the conscious mind. Rather than denigrating intuition as an irrational response based on “naïve beliefs,” he teaches people to recognize, honor and rely upon their intuition in order to keep themselves and their loved ones safe.

Achenbach, on the other hand, claims that our intuition will lead us astray as it may prompt us to take actions that we would not regard as “rational.” With regard to an apparent cluster of cancers near a hazardous waste dump, he says:

To be confident there’s a causal connection between the [hazardous waste] dump and the [local cluster of] cancers, you need statistical analysis showing that there are many more cancers than would be expected randomly, evidence that the victims were exposed to chemicals from the dump, and evidence that the chemicals really can cause cancer.

That’s true, of course, but surely it’s not all that one would—or should—take into account when deciding whether or not to build one’s house next to the hazardous waste dump. Denying your intuitive urge to avoid the hazardous waste dump with the rational thought “science hasn’t proven it’s a problem yet” could turn out to be the worst decision you ever make.
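The statistical piece of the criterion Achenbach describes can be made concrete. A standard first step with a suspected disease cluster is a one-sided Poisson test: given the number of cancers you would expect in a population that size by chance alone, how surprising is the number actually observed? The sketch below uses only invented figures for illustration, not data from any real cluster.

```python
import math

def poisson_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected):
    how likely a count at least this high is under chance alone."""
    # Sum the probabilities of every count below the observed one...
    below = sum(
        math.exp(-expected) * expected**i / math.factorial(i)
        for i in range(observed)
    )
    # ...and take the complement to get the upper-tail probability.
    return 1.0 - below

# Hypothetical figures: 4 cancers expected in the area, 10 observed.
p = poisson_tail(observed=10, expected=4)
print(f"p-value = {p:.4f}")  # roughly 0.008
```

A p-value this small says only that the excess is unlikely to be chance; as Achenbach himself notes, evidence of exposure and evidence that the chemicals can cause cancer are separate questions the statistics cannot answer.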

Achenbach’s thesis ultimately fails due to his reliance on Yale University law professor Dan Kahan’s theory that people fall into two camps, those who have an “egalitarian” mindset and those who have a “hierarchical” and “individualistic” mindset. Kahan thinks that people who “doubt science” will believe whatever their tribe says “we believe,” because to do otherwise will get them thrown out of the tribe. This is crystallized by Marcia McNutt, editor of Science magazine: “We’ve never left high school. People still have a need to fit in, and that need to fit in is so strong that local values and local opinions are always trumping science.”

The problem with this viewpoint is that it is inherently contradictory. On the one hand, Achenbach pretends that only science that fits the prevailing viewpoint is worthy of note, when that is clearly not the case. For instance, he pretends that there is no other science than the infamous 1998 case study of 12 children written by Andrew Wakefield that supports a link between vaccines and autism, when there are in fact a large number of studies that do so.

Then Achenbach argues that we should ignore climate science that doesn’t fit the prevailing paradigm because, “It’s very clear, however, that organizations funded in part by the fossil fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics.” I tend to agree with Achenbach on this point. While I have a healthy distrust of scientific “consensus,” that is coupled with an even stronger skepticism of science conducted by an industry that stands to gain from the outcome of that science. Illogically, however, Achenbach doesn’t display the same skepticism toward science financed by an industry that controls the prevailing paradigm. Vaccines are one of the fastest-rising sectors in what Marcia Angell calls the most profitable industry for more than two decades, and the vast majority of vaccine science is conducted by manufacturers themselves or the CDC, which Robert F. Kennedy Jr. describes as a “cesspool of corruption” due to myriad conflicts of interest. [His overview is on page 8 of this issue.] Julie Gerberding, M.D., M.P.H., who left the CDC to run the vaccine division at Merck after overseeing research that “exonerates” vaccines in rising autism rates, was not an anomaly. And the situation is eerily similar when it comes to GMO safety studies conducted by Monsanto and rubber-stamped by the FDA.

The most ironic part of Achenbach’s piece is that, in practically the same breath he tells us to ignore science outside the consensus, he lauds scientists who are so dedicated to truth that they break with their “tribe” to report what they have observed, despite censure, loss of prestige, or even career. In other words, the very scientists operating outside the consensus! Few scientists have sacrificed more by speaking the truth than Andrew Wakefield. Prior to the publication of his case study, Wakefield was a well-respected gastroenterologist with a prestigious position at the Royal Free Hospital in London—a deeply entrenched member of the “tribe” who, as a result of standing behind his work, has since had his medical license revoked and almost never sees his name in print without the word “discredited” next to it. Yet Wakefield still performs work that undercuts the prevailing paradigm. By Achenbach’s own argument, Andrew Wakefield is inherently more credible than all the scientists clinging to the “vaccines are [all] safe and effective” consensus position.

Fear of betraying the tribe can never explain those who question vaccine safety. Time and time again I have heard of people losing friends, loved ones, and even jobs when they do so. It can be such a lonely position to take that many express profound relief when they find like-minded people online. In effect, having given up their place in the tribe, they must seek a new tribe. Evangelical Christians, the very people considered most likely to be “hierarchical individualists,” may have the loneliest road of all, as many of their organizations have come out strongly in support of the current vaccine program.

Doctors who express concerns are vilified by the media and a vitriolic group of self-identified “science” bloggers, despite the fact that many of them started out as vocal believers in the basic premise of vaccines. Surprisingly, there are still quite a few who have the courage to buck the tribe, including Bernadine Healy, M.D., former head of the National Institutes of Health (a de facto “tribal chief”), who, in a 2008 interview with CBS correspondent Sharyl Attkisson, disclosed that “when she began researching autism and vaccines she found credible published, peer-reviewed scientific studies that support the idea of an association. That seemed to counter what many of her colleagues had been saying for years. She dug a little deeper and was surprised to find that the government has not embarked upon some of the most basic research that could help answer the question of a link.”

The biggest problem with Achenbach’s piece, and every other piece that laments the “rejection of science,” is that it confuses rejection of technology with rejection of science. As Alice Dreger, professor at Northwestern University’s Feinberg School of Medicine, illustrates in her article in Pathways issue 35 (“The Hard Science Supporting Low-Tech Birth”), technology does not equal science. “In fact,” says Dreger, “if you look at scientific studies of birth, you find over and over again that many technological interventions increase risk to the mother and child rather than decreasing it.” She says that the technological aspects of medicine market well to our technology-obsessed and death-denying culture, while “a low-interventionist approach to medical care— no matter how scientific—does not.”

The “Precautionary Principle” asserts that, when in doubt, it is better to err on the side of caution. Intuitively and logically this should be obvious. Science can take a long time to prove something is harmful—so long, in fact, that many drugs have done tremendous damage before they were withdrawn: Thalidomide, Vioxx, DES and Darvon, to name a few. The Precautionary Principle isn’t anti-science. It even supports one of Achenbach’s goals—making efforts to avoid disastrous climate change. It would also support making sure GMOs can’t do systemic damage before licensing them, testing vaccines against true placebos and in the recommended combinations before giving them to every newborn in the country, and comparing health outcomes of vaccinated vs. unvaccinated people after licensing them. Hold on…this can’t be right. I’m recommending science!

When it comes down to it, science is a tool. And like any tool, it can be used ethically or unethically in pursuit of ends that range from sublime to unquestionably evil. Is it anti-science to deplore Mengele’s experiments on concentration camp captives? Or the ethics of the “Tuskegee Study of Untreated Syphilis in the Negro Male”? Was Hans Albrecht Bethe, head of the Theoretical Division at Los Alamos during the Manhattan Project, anti-science when he called upon other scientists to refuse to make atomic weapons? As a tool, science can serve corporate interests or it can serve humanity’s interests. When those interests are in opposition, it is not “anti-science” to insist that science serve humanity over corporations.