The scientific research that informs regulations falls prey to perverse incentives when it’s funded by the government doing the regulating. Here’s a better alternative.


Patrick J. Michaels is the director of the Center for the Study of Science at the Cato Institute. Michaels is a past president of the American Association of State Climatologists and was program chair for the Committee on Applied Climatology of the American Meteorological Society. He was a research professor of Environmental Sciences at the University of Virginia for thirty years.

For decades, many areas of science have been systematically distorted in order to advance regulatory policies that can deprive people of their property, and even their health. In the regulatory science arena—which includes areas such as climate change and regulation of ionizing radiation and other carcinogens or of fine particulate emissions—research is generally funded by the federal government.

Many distortions arise from the incentive structure for scientific advancement, noted in detail below. Others, often involving property issues, are based on a misuse of scientific authority, which itself derives from scientists who have benefited from the incentive structure. Again, these issues will be examined in detail.

Nonetheless, such distortions do not necessarily apply to all areas of regulatory science, nor are federal efforts always pernicious. A shining example of scientific success in an issue with substantial political overtones was the identification of the causative agent for AIDS and the development of treatments that greatly prolong the life of those with that condition. Here, we are interested in the opposite situation, which is dismally frequent.

The near‐​monopoly nature of science funding in certain fields induces distortions. Later in this chapter, I will provide a more market‐​based solution that should serve to diversify the scientific enterprise in the regulatory arena. But first we will examine four scientific distortions that should respond to our prescription: (a) the regulatory paradigm for ionizing radiation and carcinogens; (b) the “regulation by committee” of the mining of highly significant deposits of uranium in Virginia; (c) the distortion of the law to prevent exploitation of the world’s largest copper, gold, and molybdenum deposits; and (d) the increasing divergence between observed and modeled tropical temperatures. By exploring how each of these areas is distorted by the federal government’s de facto monopoly on funding research, and the incentives that funding creates, I hope to provide a clearer, more concrete picture of the kinds of problems a market‐​based and more open system of scientific funding will solve.

Ionizing Radiation and Carcinogens: Using the Wrong Dose–Response Model

An example of regulatory science gone bad involves ionizing radiation and carcinogens. Ionizing radiation comes from the portions of the electromagnetic spectrum with sufficient energy to remove an electron from an atom, which makes the atom a positively charged ion that is more reactive. X‐​rays, gamma rays, and the ultraviolet portion of sunlight are common examples. Carcinogens, possible causes of cancer, include ionizing radiation as well as a tremendous number of natural and anthropogenic compounds in our environment.

In a Cato Institute book titled Scientocracy, University of Massachusetts toxicologist Ed Calabrese writes:

Current regulations are based upon a deliberate misrepresentation of the scientific basis for the dose response for ionizing radiation‐​induced mutations by the former leaders of the radiation genetics community. These actions culminated in a successful attempt to manipulate the scientific community and the general public of the U.S. and world community by the prestigious U.S. National Academy of Sciences (NAS) Biological Effects of Atomic Radiation Committee (BEAR I)–Genetics Panel in 1956 when it recommended that the dose response for radiation‐​induced mutation be changed from a threshold to a linear dose response.

… This model, called the “linearity–no dose threshold” (LNT) searches for the lowest exposure to a carcinogen or ionizing radiation that is associated either with substantial mutations or cancer itself. That should be the starting point for any dose–response model, but it is not. Instead, a line is drawn backward from the detection threshold data point to the origin on the graph. The implication is obvious. By forcing the response through the origin, any exposure—including the most minuscule—is claimed to be dangerous. 1
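The difference between the two dose–response shapes can be made concrete with a toy calculation. The slope, threshold, and dose values below are invented purely for illustration; they are not drawn from any regulatory standard.

```python
# Toy comparison of the two dose-response models described above.
# The slope and threshold values are hypothetical illustrations only,
# not figures from any regulatory document.

def lnt_risk(dose, slope=0.01):
    """Linear no-threshold (LNT): the line is forced through the
    origin, so any nonzero exposure carries some predicted risk."""
    return slope * dose

def threshold_risk(dose, slope=0.01, threshold=50.0):
    """Threshold model: exposures below the threshold carry zero
    predicted risk; above it, risk rises linearly."""
    return max(0.0, slope * (dose - threshold))

tiny_dose = 1.0  # far below the hypothetical detection threshold
print(lnt_risk(tiny_dose))        # small but nonzero: "dangerous" under LNT
print(threshold_risk(tiny_dose))  # zero under the threshold model
```

Regulation built on the first function must treat every exposure, however minuscule, as a harm to be minimized; regulation built on the second treats exposures below the threshold as safe.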

Calabrese has extensively documented that strong evidence existed against the LNT when it was adopted for radiation. 2 He continued:

During the process of their assessment, the BEAR I 3 Genetics Panel would falsify and fabricate the research record concerning the estimate of radiation‐​induced genomic risk, as I have documented in detail. 4 It made a decision not to share the profound degree of uncertainty amongst the Panel members but rather, to misrepresent it by removing and changing data concerning estimates of genetic mutations in the U.S. population at a certain level of radiation exposure.

One must remember the times. The BEAR I panel met during the heyday of atmospheric testing of nuclear weapons. One constituent of the widely dispersed fallout from those tests was strontium‐​90. With a half‐​life of 28.8 years, strontium‐​90 is metabolized the same as calcium (being in the same column on the periodic table). Consequently, it is incorporated into animal bones and cow’s milk, which is consumed by growing children. By calling even the first ingested atom dangerous, the BEAR Committee heightened public fear and pressure for an atmospheric test ban. One can only imagine that the BEAR Committee thought it was performing virtuous public service by using the LNT model.

As Calabrese has repeatedly shown, the dose–response model is wrong. In fact, low doses of some forms of ionizing radiation, like sunlight, can confer health benefits, such as the final synthesis of vitamin D. It is also now known that our cells self‐​repair DNA, and that repair more than protects against the harmful effects of low‐​dose radiation.

Calabrese has meticulously documented that BEAR I was heavily influenced by Nobel Prize winner Hermann Muller’s work and his dominant personality. 5 Membership on panels like BEAR I is awarded only to people of considerable academic accomplishment, that is, those who have been very successful in the federal funding game, producing significant publications and therefore attaining academic promotion. The professional literature defines the dominant paradigm, which was (and remains) the LNT model. An invitation to BEAR I therefore presupposed both professional prestige and acceptance of the LNT model.

Use and Misuse of Authority by the National Academy of Sciences and Federal Committees

It is common for the National Academy of Sciences (NAS), via its subsidiary the National Research Council (NRC), to commission committees of prestigious academics to examine complicated issues like environmental risk. These committees are highly influential and can promote policies that may result in deprivation of people’s property rights, as we show here regarding large, privately owned deposits of uranium and copper.

In 1863, President Lincoln requested and signed legislation creating the NAS to “investigate, examine, experiment, and report upon any subject of science.” Of course, at the time, he was really interested in new technologies and ideas to successfully prosecute the Civil War. Woodrow Wilson, another believer in expansive government and likewise a wartime president, created the NRC (initially called the National Research Foundation) as the research arm of the NAS.

Because they are usually prominent scientists, NRC committee members for a given task are vetted and well‐​known by the NAS. In controversial areas, a majority view will be “balanced” by one or a few dissenters. By selecting the participants, the NAS gets the results it wants: the incentive structure of science creates biased bodies of literature, and the creators of that literature form biased review bodies. Federal agencies also provide a veneer of authority by forming committees analogous to those of the NRC but tasked with recommending policy options in particular provinces of the regulatory sphere.

Uranium Mining in Virginia: Authority and Misleading Science

On March 28, 1979, two remarkable things happened. Marline Uranium discovered what turned out to be the largest known uranium deposit in the United States, in Pittsylvania County, Virginia, hard along the North Carolina border. Virtually the entire deposit sits on and underneath a single private property, known as Coles Hill, for its owner, Walter Coles Sr. On the very same day, reactor number 2 at Three Mile Island, Pennsylvania, experienced a cooling malfunction that resulted in a partial meltdown. The release of radiation was very limited, but public reaction was stoked by a concurrent (and totally fictional) movie about a nuclear meltdown, The China Syndrome. The hoopla eventually resulted in a 1982 ban on uranium mining that can only be lifted by new Virginia legislation.

Exploitation of the deposit wasn’t economically feasible until uranium’s price began to rise early in the 21st century. Instead of selling mining rights and reaping an instant profit, Walter Coles tapped his son, Walter Jr., a Wall Street fund manager, to raise enough money to start up a new company, Virginia Uranium, which traded on the Toronto Stock Exchange. Coles raised considerable capital, but the success of the company depended on the lifting of the moratorium.

Coles wanted to play it straight. He proposed that the NAS commission a report on the state of science and technology regarding mining his deposit. He believed it would be fair and impartial, and Coles footed the entire bill for the report. The National Research Council empaneled a committee to do the report, Uranium Mining in Virginia: Scientific, Technical, Environmental, Human Health and Safety, and Regulatory Aspects of Uranium Mining and Processing in Virginia (UMV).

The NRC selected 14 prominent individuals to sit on the committee. Four were viewed as being likely to support exploitation, as they either had consulted in defense of the industry concerning environmental matters or had been involved in mining themselves. An ecosystem hydrologist and a uranium geochemist on the committee were harder to predict, although the hydrologist was a board member of a local environmental activist organization near his home institute, Frostburg State University, in the rugged Maryland panhandle. The vitae of the other eight clearly indicated that they were predisposed against exploitation, including that many were strict believers in the LNT model.

On December 19, 2011, the NRC released the report, knowing full well that one paragraph would be extensively cited: 6

If the Commonwealth of Virginia rescinds the existing moratorium on uranium mining, there are steep hurdles to be surmounted before mining and/​or processing could be established within a regulatory environment that is appropriately protective of the health and safety of workers, the public, and the environment. There is only limited experience with modern underground and open‐​pit uranium mining and processing practices in the wider United States, and no such experience in Virginia. 7

The core arguments creating the “steep hurdles” were geophysical: largely climatic and to a lesser extent tectonic. The climate argument can be summarized as “there have been some big floods in Virginia” and plenty of hurricane strikes. The main climate reference was a non‐​peer‐​reviewed University of Virginia News Letter article that I coauthored in 1982 in my role as the then–state climatologist. 8

The body of the UMV text repeatedly refers to “extreme” climate‐​related events (26 instances), and the notion of “extremes” captured the public discourse as a reason to keep the moratorium in place. In town hall meetings, newspaper editorials, and news stories, the specter of contamination of the water supply of Virginia Beach (175 air miles from Coles Hill) by a flood‐​induced impoundment failure of modestly radioactive mine tailings was repeatedly raised.

In UMV, the NRC made little attempt to compare its geophysical analysis of Virginia with conditions in other areas of the country or with the parts of the world where uranium mining or processing has taken or is successfully taking place. Further, no attempt was made to differentiate the localized Coles Hill climate and seismic risk from those of a general survey of the state. In reality, the climate extremes expected at Coles Hill are much smaller than those experienced in other regions where uranium is mined. UMV contained so many geophysical exaggerations that I published a peer‐​reviewed article on the topic.

The section on hurricanes is typical of the distortion that mars UMV:

In the period from 1933 to 1996, 27 hurricanes or tropical storms made landfall in Virginia … bringing with them the threats of flooding, high winds, and tornadoes. 9

The relevant citation for that statement is an obscure publication from the National Weather Service Forecast Office in Wakefield, Virginia, titled “Historical Hurricane Tracks, 1933–1998, Virginia and the Carolinas” (emphasis added). Examination of the historical tropical cyclone tracks from the National Hurricane Center reveals that only eight tropical storms (and no hurricanes) made landfall in Virginia itself from 1933 through 1998; 19 were in North and South Carolina. 10 UMV is clearly misleading in its documentation of Virginia’s general tropical cyclone history, an inaccuracy that leads to the assessment of a greater risk than is actually present in the state’s current climate. But who is going to check the august NRC for accuracy?

UMV represents a remarkable distortion that can only be characterized as abuse of scientific authority. It had consequences, too. Virginia Uranium (now Virginia Energy Resources) traded in the $5 range before UMV was published. Following its publication, the price dropped to about $0.04 per share. The estimated $7 billion valuation of the Coles Hill deposit became worthless because it could not be mined. 11 In effect, the misleading arguments in UMV resulted in a massive taking from the Coles family.

The Virginia Uranium fiasco is an indirect result of federal funding. All members of an NRC panel are distinguished in some fashion, meaning they have published enough research to be promoted. When the NAS wants an answer (actually, a specific answer), a potential panelist’s research and public record determine where they are likely to lie on a particular issue. NRC committees are hardly selected randomly.

But other, related bodies can directly use scientific authority against private property: panels selected by various federal agencies. We need to look to Alaska for a prime example.

Alaska’s Pebble Mine

In southern Alaska, about 150 miles west of Anchorage, lies what is currently thought to be the largest known copper, gold, and molybdenum deposit on earth. It is on private land zoned by the state of Alaska for mining. The deposit is drained by a small creek that empties into pristine Lake Iliamna, by volume the seventh‐​largest lake in the United States. Lake Iliamna ultimately drains into Bristol Bay, home to the world’s largest wild sockeye salmon fishery. The deposit is known as the Pebble Mine.

In the Pebble case, the U.S. Environmental Protection Agency (EPA) colluded with environmental organizations to create what could best be termed a science fiction to prevent the exploitation of private property. It is a textbook case of an empowered agency creating “science” with the sole purpose of executing a policy.

The permitting process for a mine like Pebble is governed by the 1970 National Environmental Policy Act (NEPA). An application for a mining permit is made to the U.S. Army Corps of Engineers and has to include a detailed environmental impact statement. In anticipation, by 2012, Pebble’s owner, Northern Dynasty Minerals of Vancouver, had already spent about $150 million on baseline ecological studies of the site, including the local hydrology and geology.

Instead of letting the Pebble project proceed with its own environmental impact statement, the EPA substituted its own assessment of the impact of the Pebble project on the Bristol Bay watershed for the formal and comprehensive NEPA decision process.

The EPA’s statement appeared in the form of a draft report. In May 2012, the EPA issued “An Assessment of the Potential Mining Impacts on Salmon Ecosystems of Bristol Bay, Alaska.” On April 20, shares of Northern Dynasty Minerals (NAK on the New York Stock Exchange) traded at $5.80; the stock was considered a fairly conservative investment and certainly a staple in many Canadian retirement accounts. After discovering the massive deposit, NAK had acquired major financial backing from one of the world’s largest mining concerns, Anglo American, which invested over $500 million in startup expenses for NAK. By May 25, NAK sold for $2.48. The draft EPA report had stripped nearly 60 percent of the stock’s value in a month.

According to the EPA, it became involved in the permitting of the project because of petitions against the mine from Native Alaskan tribes in 2010. Verbal statements from EPA employees and official agency documents reveal the existence of an internal EPA “options paper” that makes clear the agency opposed the mine on ideological grounds and had already decided to veto the proposal in the spring of 2010. The draft Bristol Bay report, a scientific fig leaf, was not released until two years later.

The EPA designed a fictional Pebble mine in its ultimate (2014) “Bristol Bay watershed assessment,” which it then used to preempt the real Pebble under the Clean Water Act. This fictional mine was a worst‐​case mine design. 12 The EPA charged a senior biological scientist named Philip North with designing an open‐​pit “mine” that would have no chance of being approved when reviewed by a professional mining engineer.

The mine plan fabrication is an egregious example of federal agency deception and distortion of “science” reported in a “scientific assessment.” The application of this process to deprive a person or corporation of property rights is hardly unique, as illustrated by the Virginia Uranium story. In this case, however, it was the direct action of a federal agency concocting a “scientific” rationale rather than a team of established scientists handpicked to sit on an NRC panel. This example is more about the broad scope of federal influence than the specific power of money in the careers of successful scientists.

Global Warming: How Incentives Distort

In academia, achievement of promotion and permanent positions requires large numbers of peer‐​reviewed publications, which, in climate research, can only be produced with massive financial support. Consequently, scientists seeking advancement must largely support whatever paradigm currently reigns in their field. Federal funders are cautious and are guided to that caution by those they rely on for advice. Large science programs challenging existing paradigms are rare.

Individual proposals that run counter to the reigning paradigm are also rarely approved, as the peer review will likely be highly critical. Peer reviewers tend to support research that doesn’t stray too far from the dominant paradigm, as they have likely done the same in their own careers. Therefore, scientists looking to challenge the status quo have difficulty finding the money they need to support their work. Consequently, to succeed, they revert to the existing paradigm. There is a further problem: given the costs involved in climate research, and given that research pushing established boundaries runs a higher risk of producing no publishable results, funders naturally feel more comfortable approving proposals that hew closer to already‐​established findings or paradigms. The cost of this caution is a reduction in scientific diversity, which is often a prerequisite for scientific breakthroughs.

In the world of climate change, the reigning paradigm is that large computer simulations—known as atmospheric general circulation models or total earth system models—are capable of producing reliable and realistic forecasts of climatic regimes that can be used with confidence to support various policies.

One would think that these models should be systematically tested, but the models suffer from having all been “tuned” to mimic the evolution of surface temperature since 1900. 13

This tuning makes independent testing of surface temperature predictions very difficult. As an alternative, economist Ross McKitrick and climate scientist John Christy isolated the strongest model signal above the surface, an enhanced warming that increases with height in the tropical atmosphere. Because it is far from the surface, it wasn’t tuned in. They found that, as a family, the models systematically predict between two and three times as much warming as has been observed over the past 60 years. 14

Political power and large financing accrue when a perceived need exists for large‐​scale regulations because of some existential threat. Global warming is often portrayed as such. And federal agencies, which tend to regress to existing paradigms, will preferentially fund models that produce large rates of warming with large consequences from that warming. They will also studiously avoid funding research that hypothesizes lessening effects or increasing adaptation, as dilution of a threat imperils future funding. It is noteworthy that the critical test described by McKitrick and Christy was not federally funded.

Gavin Schmidt—director of the National Aeronautics and Space Administration’s Goddard Institute for Space Studies, who is also in charge of its climate model—recently documented the incredible number of parameters that are “tuned” in climate models. 15 Clearly, one can produce pretty much any amount of future warming (or cooling) with such broad latitude. 16 Frédéric Hourdin, head of the French climate modeling team, notes that models are tuned to give what he calls an “anticipated acceptable result.” 17

Given the newly revealed plasticity of the models, there is an incentive to tune them to output significant warming. Without those forecasts, large funding for the paradigm—which believes in the utility of the highly tuned models—would dry up. As a result, the model‐​driven paradigm is one of substantial and consequential warming.

The problem with overtuning a model is that it can induce instabilities. The models are all tuned to simulate the warming of the early 20th century, almost half a degree Celsius over the 1910–1945 period, as largely a function of human changes in the atmosphere. But, in fact, atmospheric carbon dioxide in 1910 was only slightly elevated from its nominal 1850 background, so the only way a model could simulate this much warming would be with a climate sensitivity that is too high. This tuning is what likely generates the disparity between simulated and observed temperatures away from the tropical surface. 18

It turns out that only one model, the Russian INM-CM4, tracks the observed history. That is also the model with the least 21st‐​century warming. In other words, with one exception, there has been a massive systematic failure in the universe of climate models. The fact that they continue as the basis for policy is testimony to the power of incentives and the inability of a community to apply obvious corrective measures because of those incentives.

A Modest Libertarian Proposal

It is good to speculate on an ideal libertarian world. However, it’s fair to stipulate that the federal government is going to continue to fund regulatory science and also to stipulate that scientists will continue to respond to the incentives that will result in advancement and permanent employment.

Let’s also stipulate that the incentives to remain funded are strong, and they are creating biased canons of knowledge. Stanford’s Daniele Fanelli provided strong evidence for this fact when he uncovered a large and statistically significant increase in the number of papers reporting support for a stated (and usually funded) hypothesis. 19 Further, he noted, along with coauthor John Ioannidis, that the addition of an American author to an international team doubles the likelihood of a positive result. 20 A positive finding is one in which the data support a previously stated hypothesis. In fact, hypotheses rather than facts are largely the recipients of public largesse.

That’s strong evidence that scientists in fact do try to please their funders. 21 So here is a modest recommendation to fix regulatory science.

Establish a Cap‐​and‐​Trade System for Science Funding

A more diverse and richer science will emerge from a more diverse and richer source of funders. Imagine that a finite amount of federal money is available for research on global warming. Currently, in a highly competitive environment, scientists will strive for the most lurid and headline‐​grabbing results, which are preferentially published in flagship journals like Science or Nature. 22 But certainly other parties are interested in funding global warming science—maybe one of the major energy producers. If they were to contribute, say, 10 percent of the total global warming research outlay, the federal contribution could be dropped by the same amount. One can envision similar buy‐​in by the enormously wealthy World Wildlife Fund (with an annual budget of approximately $325 million 23 ), or major foundations like the Carnegie Endowment and the MacArthur Foundation.

Contributors then get proportional representation on the various supervisory boards as well as a choice of which ones they want to sit on. This benefit gives them important input into the programmatic direction of funded science areas. Making this membership public will encourage investigators with differing points of view to apply, and it can be assumed that their incentive is still continued funding. So like most scientists, they will continue to do research and cite research that supports their continued success.
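A minimal sketch of how proportional board representation might be computed under such a cap follows; the contributors, dollar amounts, and board size are all hypothetical, invented only to make the mechanism concrete.

```python
# Hypothetical contributions to a capped research budget, in millions
# of dollars. A 10 percent private buy-in offsets the federal share,
# as described above; all names and numbers are invented.
contributions = {
    "federal government": 90.0,
    "energy producer": 10.0,
}
board_seats = 20

# Each contributor's seats are proportional to its share of the total.
total = sum(contributions.values())
seats = {name: round(board_seats * amount / total)
         for name, amount in contributions.items()}
print(seats)
```

In practice, simple rounding can over- or under-allocate seats when there are many contributors, so a real scheme would need a largest‐​remainder rule or similar apportionment method.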

Diversifying the Canon

Funded results are usually published—otherwise funding has a way of drying up. Funding a broader base of bias will in fact diversify science, whereas not diversifying will narrow the scope of the canon.

Consider global warming and deaths related to urban heat waves. Everyone interested in the field remembers the great French heat wave in August 2003, with nearly 15,000 excess deaths. There was plenty of blame to go around. August is the prime vacation month in France, and so many physicians weren’t working. Further, many of the elderly were left in homes without air conditioning while their children were away on vacation. The more frequent such heat waves become, the more deaths there would be—or so it would seem.

But opposing evidence also exists. Large numbers of deaths result in greater pressure for adaptation (including more air conditioning) and for increasing public health awareness and health care availability. Three years later, in July 2006, a longer and even hotter heat wave hit France, but about 4,400 fewer deaths occurred than would have been expected for that long stretch of high temperatures.

We surely see many more papers—which means much more funded research—on increasing (or prospectively increasing) deaths rather than on decreasing mortality rates. The fact is, from the 1960s through the 1990s—despite warmer temperatures (which would have occurred even without global warming, thanks to the urban heat island effect)—population‐adjusted heat‐​related mortality declined in all studied U.S. urban areas, except in one (Seattle) where heat waves are rare. 24 Yet no substantial literature studies whether adaptation is pushing deaths below the elevated background that warming would otherwise produce. If you fund it, though, they will come. In fact, the paper showing reduced mortality was so groundbreaking that it won the Association of American Geographers “Paper of the Year” award in the “climate” section in 2005.

Or consider Ed Calabrese’s revolutionary research on dose response, which provides strong evidence that the current regulatory model—where the first photon of ionizing radiation or the first molecule of an ultimate poison is just as deadly as the bajillionth—is wrong. The only reason he has been so successful is that the military has an abiding interest in organismal responses in the low‐​dose ranges that service members are often exposed to. So the U.S. Air Force has been funding Calabrese’s work for years, not the National Institutes of Health, which largely funds work within the existing paradigms.

The regulatory science literature is richer and more diverse thanks to Calabrese’s more than 600 publications countering the LNT model. Scientific progress and scientific diversity go hand in glove.

Diversifying the Academy

Applying cap‐​and‐​trade to regulatory science funding should diversify the associated permanent faculty, as more diverse funding will diversify publications and therefore broaden the span of knowledge within universities and their departments. This approach can be good, both by fertilizing academic discussion within a faculty and by fostering student interaction with mentors more diversified in their research.

To return to the discussion of climate, disproportionate funding has been disbursed to modeling efforts. And judging from the volume of literature, very little has gone to rigorous testing of climate models (although within that sparse literature, evidence exists for systematic problems). Today, those advocating testing (or complaining about the lack of it) are marginalized within the community. Under the proposed cap, however, they could be preferentially supported by the funding sources within it that are not taxpayer based, something that cannot happen in today’s funding model. This modest proposal will demarginalize quantitative skepticism, something the current incentive structure prevents.

Summary and Conclusions

Regulatory science is being distorted by incentives for professional advancement interacting with a monopoly provider of paradigm‐​supporting funding. The result has been distortion of the canon of knowledge and the consequent distortion of rational policy outcomes. In addition, and across many areas of regulatory science, a monolithic paradigm structure is inhibiting the research diversity that is necessary to advance science. The prime examples in this chapter deal with global warming and regulation of ionizing radiation and carcinogens.

A related problem is that professional advancement confers authority, which is often used by prestigious organizations like the National Academy of Sciences and its associated National Research Council to pursue policy objectives that can be harmful to property rights and even human health. Prestige‐​based panels convened by various agencies, like the EPA, have the same unfettered power. Here are three examples: the EPA’s preemptive veto of the Pebble Mine under the Obama administration, 25 the BEAR I Committee’s adoption of the LNT model for radiation (subsequently broadly applied to toxic chemicals and other carcinogens), and the NRC’s misleading geophysical arguments devaluing the largest uranium deposit in the United States.

In response, I propose to substitute a cap‐​and‐​trade system for regulatory science research funding, in which public funds, which are currently spent largely in the service of existing paradigms, are partially displaced by more diverse interests. This approach should have the salutary effect of diversifying existing research, rewarding a more diverse faculty with promotion and advancement, creating a more diverse academy, and enhancing scientific progress.

1. Patrick J. Michaels and Terence Kealey, eds., Scientocracy: The Tangled Web of Public Science and Public Policy (Washington: Cato Institute, 2019), pp. 185–186.

2. Edward J. Calabrese, “Key Studies Used to Support Cancer Risk Assessment Questioned,” Environmental and Molecular Mutagenesis 52, no. 8 (2011): 595–606; Edward J. Calabrese, “How the US National Academy of Sciences Misled the World Community on Cancer Risk Assessment: New Findings Challenge Historical Foundations of the Linear Dose Response,” Archives of Toxicology 87, no. 12 (2013): 2063–81.

3. The National Academy of Sciences established the Biological Effects of Atomic Radiation panel in 1956.

4. Edward J. Calabrese, “On the Origins of the Linear No‐​Threshold (LNT) Dogma by Means of Untruths, Artful Dodges and Blind Faith,” Environmental Research 142 (2015): 432–42; Edward J. Calabrese, “LNTgate: How Scientific Misconduct by the U.S. NAS Led to Governments Adopting LNT for Cancer Risk Assessment,” Environmental Research 148 (2016): 535–46.

5. Calabrese, “On the Origins of LNT Dogma”; Calabrese, “LNTgate.”

6. Theo Emery, “Uranium Mining Debate in Virginia Takes a Step,” New York Times, December 19, 2011.

7. National Research Council, Uranium Mining in Virginia: Scientific, Technical, Environmental, Human Health and Safety, and Regulatory Aspects of Uranium Mining and Processing in Virginia (Washington: National Academies Press, 2012), p. 9.

8. Although the reference is dated 2001, the article was written in 1981. The only change in 2001 was to update the 30‐​year climate “normals” to 1971–2000. Bruce P. Hayden and Patrick J. Michaels, “Virginia’s Climate,” University of Virginia News Letter 57, no. 5 (1981): 17–20.

9. National Research Council, Uranium Mining in Virginia, p. 41.

10. Historical maps of North Atlantic tropical cyclones are available at the “Unisys Weather” webpage.

11. Virginia Energy Resources was serially unable to get Virginia’s moratorium on uranium mining reversed. Ultimately it argued in court that the ban was unconstitutional and that regulation of uranium mining was the sole province of the U.S. Department of Energy, into which the former Atomic Energy Commission was folded when President Jimmy Carter created that department. District and appellate courts rejected the company’s claims, but, remarkably, the Supreme Court granted certiorari on the question of Virginia’s authority for its 2018–2019 session.

12. Daniel McGroarty, principal, Carmot Strategic Group, Testimony on “EPA’s Bristol Bay Watershed Assessment: A Factual Review of a Hypothetical Scenario” before the Subcommittee on Oversight of the Committee on Science, Space, and Technology, U.S. House of Representatives, 113th Cong., 1st sess., August 1, 2013.

13. Paul Voosen, “Climate Scientists Open Up Their Black Boxes to Scrutiny,” Science 354, no. 6311 (2016): 401–2.

14. Ross McKitrick and John Christy, “A Test of the Tropical 200- to 300‐​hPa Warming Rate in Climate Models,” Earth and Space Science 5, no. 9 (2018): 529–36.

15. Gavin A. Schmidt et al., “Practice and Philosophy of Climate Model Tuning across Six US Modeling Centers,” Geoscientific Model Development 10, no. 9 (2017): 3207–23.

16. Schmidt et al., “Practice and Philosophy of Climate Model Tuning.”

17. Frédéric Hourdin et al., “The Art and Science of Model Tuning,” Bulletin of the American Meteorological Society 98, no. 3 (2017): 589–602.

18. John R. Christy, professor, University of Alabama in Huntsville, and director, Earth System Science Center, Testimony before the House Committee on Science, Space, and Technology, 115th Cong., 1st sess., March 29, 2017, Figure 2, Tropical Mid‐​Tropospheric Temperature Changes, Models vs. Observations, 1979–2016. Note: the one model that tracks the observations is the Russian model INM-CM4. The data are also summarized in tabular form in the American Meteorological Society’s “State of the Climate” report for 2016: Jessica Blunden and Derek S. Arndt, eds., “State of the Climate in 2016,” Special Supplement, Bulletin of the American Meteorological Society 98, no. 8 (2017). Christy’s testimony, cited above, is a devastating indictment of the distortions in climate science that are driving global policies; the author cannot recommend downloading and reading it strongly enough.

19. Daniele Fanelli, “Negative Results Are Disappearing from Most Disciplines and Countries,” Scientometrics 90, no. 3 (2012): 891–904.

20. Daniele Fanelli and John P. A. Ioannidis, “U.S. Studies May Overestimate Effect Sizes in Softer Research,” Proceedings of the National Academy of Sciences 110, no. 37 (2013): 15031–36.

21. If you don’t, your proposal may be defunded at annual review time.

22. See the op‐​ed by Randy Schekman, “How Journals Like Nature, Cell and Science Are Damaging Science,” Guardian, December 9, 2013. The following day, he was awarded the Nobel Prize in Physiology or Medicine.

23. World Wildlife Fund, “Financial Info” webpage.

24. Robert E. Davis et al., “Changing Heat‐​Related Mortality in the United States,” Environmental Health Perspectives 111, no. 14 (2003): 1712–18.

25. In May 2017, the Trump administration allowed Pebble to apply for a permit to mine, a process that will include its environmental impact statement. Then, on January 26, 2018, according to an official EPA news release, “EPA is suspending its process to withdraw those proposed restrictions, leaving them in place while the Agency receives more information on the potential mine’s impact on the region’s world‐​class fisheries and natural resources.” See U.S. Environmental Protection Agency, “EPA Administrator Scott Pruitt Suspends Withdrawal of Proposed Determination in Bristol Bay Watershed, Will Solicit Additional Comments,” news release, January 26, 2018.