The climate industry is a parasitic organism, like the EU and the UN, aimed at making trillions for its insiders while brutally raping the individual and the environment.
7/28/11, "
Climate Witchcraft and Post-Normal Science," American Thinker, Norman Rogers
"Certain important climate scientists are very eager to reorganize society. They proclaim, on weak evidence, that the Earth is doomed by global warming unless we follow a green plan to remake the economy and the social order. We have to give up cars for trolleys. Windmills will become ubiquitous. The most famous climate scientist, James Hansen, wants to put his opponents on trial for crimes against humanity.
Implicit in all this is the idea that a central committee of Dr. Strangeloves should rule the world. Instead of prince this and duke that, we will have doctor this and doctor that. These radical intellectuals secretly despise the present system of rule by the rabble, otherwise known as democracy. Some intellectuals think that they don't get attention and status commensurate with their importance. This is especially true in America, where the cleaning lady or plumber is inclined to treat them as equals. One way to be important is to proclaim a theory that something very bad is going to happen. If the theory has some scientific basis and is backed by other prominent scientists, the claims will be credible.
A lot of this doomsday science, disguised as environmental concern, has been going around during the last 50 years. Global warming is just the latest example of ideologically motivated catastrophe theory. James Delingpole's book, Watermelons, describes the phenomenon in amusing detail. Like watermelons, radical environmentalists are green on the outside and red on the inside.
If it weren't for the prophecies of doom, climate science would be an obscure academic niche. Global warming has made everyone in the field rich, at least in academic currency if not dollars. The wealth has spread to other academic niches that have become more important in light of connections to climate. Global warming is a huge bonanza for the do-good environmental organization industry. Organizations like the Sierra Club or the Environmental Defense Fund[ii] need a perpetual stream of impending environmental disasters. When the public becomes bored with an impending disaster that never materializes, a new impending disaster must be found.

Climate science has embraced computer climate models as the tool it uses to compute the magnitude of the warming effect of CO2. The climate models are riddled with problems. Kevin Trenberth, a noted climate scientist and a prominent promoter of global warming alarmism, said this about the models: "none of the climate states in the models correspond even remotely to the current observed climate." The effect of CO2 is measured by a theoretical number called climate sensitivity. There are more than 20 climate modeling groups around the world. These groups each spend millions on programmers and supercomputers, searching for the value of climate sensitivity. They all get different answers, differing by a ratio of more than two to one. This failure of consensus would normally be considered a sign that the approach is not working. But if climate science can't make predictions of doom, it will cease to be important and funding will collapse. The climate science establishment had to relax the normal rules of science for its own survival and for the sake of its post-normal-science political goals.
The global warming establishment devised a solution. They decided to take the average of the various disagreeing models and claim that the average is closer to the truth than any of the models incorporated in it. They call this a multi-model ensemble. The skeptic will ask: if averaging together the results from more modeling groups makes the result better, why not spend a few billion dollars more and establish another 20 or 50 modeling groups to zero in on the truth even more closely? To read the justifications for multi-model ensembles is to enter a reality distortion field.
The climate models make predictions that cannot be tested, because you would have to wait 50 or 100 years to see whether the predictions are correct. Instead, the models are evaluated and calibrated by simulating the observed climate of the 20th century. This is known as backtesting. The entirely unjustified assumption is that if the models can match the 20th-century climate, they must be working well and will be able to predict the future. The problem with backtesting is that models may fit the historical data for the wrong reasons. If a model is complicated, with enough adjustable parameters, it may be capable of fitting almost anything. Many people have devised stock market models that work well when tested against history. If such models could predict the future movement of markets or pick winning stocks, it would be far easier to make money in the stock market than it is.
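The backtesting trap is easy to reproduce in a few lines. The sketch below is a toy in Python with entirely synthetic numbers, standing in for no particular climate or market model: an over-parameterized fit aces the backtest and then fails out of sample.

```python
# A minimal sketch of the backtesting trap: a model with many adjustable
# parameters fits the historical record closely, then fails badly when
# asked to predict beyond it. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "historical record": a gentle linear trend plus noise.
years = np.arange(1900, 2000)
history = 0.005 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

# An over-parameterized model: a degree-15 polynomial (16 free knobs).
# x is rescaled toward [-1, 1] to keep the fit numerically well behaved.
x = (years - 1950) / 50.0
coeffs = np.polyfit(x, history, deg=15)

# In-sample (backtest) error looks impressively small...
fit = np.polyval(coeffs, x)
print("backtest RMS error:", np.sqrt(np.mean((fit - history) ** 2)))

# ...but the out-of-sample "forecast" for 2000-2020 runs off the rails,
# because the extra parameters fit the noise, not any underlying physics.
future_x = (np.arange(2000, 2021) - 1950) / 50.0
forecast = np.polyval(coeffs, future_x)
print("forecast anomaly for 2020:", forecast[-1])
```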
The climate models have dozens of adjustable parameters, and the inputs to the models, related to the physical drivers of climate, are highly uncertain. For example, one input is aerosols: reflective particles injected into the air by smokestacks and natural sources. These have an effect on climate, but the historical aerosol record is difficult to quantify. We don't know very accurately how much and what kind of aerosols were in the air year by year during the 20th century, and we don't know what their effect on the energy flows was. In order to model the 20th century, you must supply the model with a historical record of the effect of aerosols. Since this is poorly known, you might be tempted to fabricate the historical record so as to make the model fit the 20th century better.[iii] This is either a clever strategy or circular reasoning.
Climate scientists call this the inverse method of computing the effects of aerosols.[iv] Ocean heat storage provides another example of a necessary but poorly known aspect of climate models. Adjusting the internal model parameters related to this effect provides another lever for making a model fit the historical climate of the 20th century. The unfortunate result is that different climate models treat ocean heat storage quite differently, but the Earth has only one way of treating ocean heat storage.[v]

The Intergovernmental Panel on Climate Change, otherwise known as the IPCC, or perhaps as the Vatican of climate change, has an established procedure for making predictions using multi-model ensembles. Each of the modeling groups is instructed to fit or calibrate its model to the 20th century and then to run the model into the 21st century to get a prediction of the future. Each group is directed to use the inputs it deems appropriate for the 20th-century fitting.[vi] The modeling groups can thus independently adopt their own sets of assumptions about the reality of the 20th-century climate. It's like the parallel Earths, in parallel universes, often seen in science fiction. But there is only one Earth, and there are no parallel universes.
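A toy calculation shows why this freedom matters. In the hypothetical sketch below (every number is invented, and it is nothing like a real climate model), two "labs" whose climate sensitivities differ by a factor of two each tune their own aerosol history so that both match the same observed record exactly, and then diverge sharply when run into the future.

```python
# A toy energy-balance sketch of the "inverse method" trap: models with
# very different climate sensitivities all match the 20th century once
# each tunes its own aerosol cooling, then disagree about the future.
import numpy as np

years = np.arange(1900, 2001)
co2 = 0.02 * (years - 1900)    # hypothetical CO2 forcing, rising steadily
observed = 1.0 * co2           # the "observed" temperature record

def hindcast(sensitivity, aerosol_scale):
    # Each lab assumes aerosol cooling proportional to CO2 forcing, with
    # a scale factor of its own choosing (the "inverse method" knob).
    return sensitivity * (co2 - aerosol_scale * co2)

lab_a = hindcast(sensitivity=1.25, aerosol_scale=0.2)   # low sensitivity
lab_b = hindcast(sensitivity=2.50, aerosol_scale=0.6)   # high sensitivity

# Both backtests are perfect, because 1.25*(1-0.2) == 2.5*(1-0.6) == 1.
print("lab A max hindcast error:", np.max(np.abs(lab_a - observed)))
print("lab B max hindcast error:", np.max(np.abs(lab_b - observed)))

# Future scenario: CO2 forcing keeps rising while aerosols stay frozen
# at the year-2000 level. The tuned-away sensitivity gap now shows up.
future_co2 = 0.02 * (np.arange(2001, 2051) - 1900)
proj_a = 1.25 * (future_co2 - 0.2 * co2[-1])
proj_b = 2.50 * (future_co2 - 0.6 * co2[-1])
print("2050 warming, lab A:", proj_a[-1])   # about 3.25
print("2050 warming, lab B:", proj_b[-1])   # about 4.50
```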
The net result of these tricks is that fitting the models to the 20th century becomes an exercise in curve fitting, implemented by custom fudging with a different fudge recipe at each modeling laboratory. The result of this exercise in inventing historical data is illustrated by the figure below, from the 2007 IPCC report.
The ensemble mean fits the observed temperature history[vii] very well, even taking dips when volcanoes erupt and inject cooling aerosols. The only place where the fit fails is the early-20th-century warming from 1910 to 1940. The problem during that period is that there is nothing plausible to explain the early warming that is also consistent with the doctrine that CO2 is the only cause of the late-century warming. Most of the modelers assume that the early warming is due to a change in the sun's output, but they don't dare go too far with that, because in general they have to minimize the effect of the sun and maximize the effect of CO2 to avoid giving comfort to the skeptic school that thinks climate is controlled mostly by the sun.
Multiple runs of the same model are included in the graph. Slightly different starting conditions are used for runs of the same model, so the results of different runs are not identical; in fact, they exhibit considerable chaotic variation. If enough runs from the same model are averaged, the chaotic or random variations average out to a characteristic climate associated with that model. The interesting fact about the graph is that the 13 different models, averaged, give an excellent fit to the temperature history, even though we know that the models disagree sharply on the effect of the rapidly rising CO2 in the second half of the 20th century.[viii]
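The averaging-out of chaotic run-to-run variation can be demonstrated with any chaotic system. The sketch below uses the logistic map as a stand-in for a chaotic model; it has no connection to any real climate code.

```python
# A toy demonstration that chaotic variation averages out: individual
# runs from nearly identical starting points end up wildly different,
# but the ensemble mean is a stable, reproducible property of the model.
import numpy as np

rng = np.random.default_rng(1)

def run(x0, steps=200, r=3.9):
    # Iterate the chaotic logistic map x -> r*x*(1-x).
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# 1000 runs of the *same* model, perturbing only the initial condition.
finals = np.array([run(0.5 + 1e-8 * rng.normal()) for _ in range(1000)])

print("spread of individual runs: ", finals.std())          # large
print("mean of first 500 runs:    ", finals[:500].mean())   # these two
print("mean of the other 500 runs:", finals[500:].mean())   # agree closely
```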
The apparent good performance of the models in the graph is a consequence of stacking the deck by adjusting the assumptions about the Earth independently for each model. That adding more models to the mix makes the graph fit the observed climate better, as the IPCC claims, is an elementary result of curve-fitting theory. If you average together several different fits (e.g., from different models) and the errors in the fits are random and uncorrelated, the error of the average is reduced in proportion to the square root of the number of fits. This has nothing to do with climate; it is a mathematical and statistical result.
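That square-root behavior can be checked in a few lines. The sketch below is purely synthetic noise, with no climate content at all:

```python
# A minimal check of the sqrt(N) claim: averaging N fits whose errors
# are independent shrinks the error of the mean like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros(100)   # the "observed" record; all zeros for simplicity

for n in (1, 4, 16, 64):
    # n "models", each equal to the truth plus independent noise, RMS 1.0
    models = truth + rng.normal(0.0, 1.0, size=(n, truth.size))
    ensemble_mean = models.mean(axis=0)
    rms = np.sqrt(np.mean((ensemble_mean - truth) ** 2))
    print(f"N={n:2d}  ensemble-mean RMS error {rms:.3f}"
          f"  vs 1/sqrt(N) = {1 / np.sqrt(n):.3f}")
```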
Of course, all this is well known to climate scientists.
Why would the IPCC use such an unscientific scheme for predicting the future climate? A better scheme comes easily to mind: why not hold a contest to pick the best model? The conditions of the test against the 20th-century observed climate would be set strictly, so that the inputs are the same for all models and non-physical or physically inconsistent internal assumptions are prohibited. Although this scheme would hardly guarantee reliable predictions of the future climate, it would surely be sounder than the corrupt scheme currently used.
But wait a minute. If one laboratory out of 20 were picked as having the best model, what would the reaction of the other 19 laboratories be? After all, one can assume that the other 19 labs have 19 times the political influence of the winning lab. Wouldn't the other labs be deeply worried that their funding would be cut or diverted to the winner? Suppose the winning lab were an American lab. Might the European labs suspect cheating or bias? Suppose a French lab won. What would the Americans think? Would Congress support research at a French lab at the expense of the American labs? Obviously, a climate model shootout would break the unity of the climate science establishment, and is thus unthinkable.
Climate models are useful heuristic tools that help in understanding climate, and most of the work done in developing them is honest. But the models are not remotely good enough to make predictions about the future climate under the influence of CO2. The IPCC and its allies have created a bizarre scheme to force doomsday predictions out of the disagreeing models in order to pursue bureaucratic and political goals. The resultant predictions are looking very foolish in the face of 14 years of no general climate warming, and of no ocean warming since a reliable monitoring system was deployed in 2003.
President Eisenhower anticipated post-normal science in his 1961 farewell address, when he warned that public policy could become the captive of a scientific-technological elite. We are accustomed to various special interest groups cooking the books to promote their interests in Washington. We don't expect the science establishment to be cooking the science, but that is what is happening. The arrogance and irresponsibility exhibited by the science establishment are quite amazing. It will take a while for the public to adjust to the idea that organized science is just another special interest cooking the books.