Saturday, 15 July 2017

Climate modeling is not science. It's not even good modeling.

Climate models cannot model the climate

  • Models rely on untested assumptions, e.g. constant relative humidity with rising temperatures. This is a 120-year-old assumption that no climate modeler thinks worth testing. Why not?
  • The ground station data that models use is mostly incomplete, especially over the oceans, which cover 70% of Earth's surface.
  • Models omit many causative factors, such as the Sun (its various cycles, both long- and short-term), volcanoes, ...
  • Models do a poor job of describing ocean circulation and ocean heat emission (e.g. from El Niño). Oceans act as heat reservoirs and hold 1,000 × more heat than the atmosphere can, so oceans are crucial to any good model. Yet climate modelers understand the oceans badly.
  • Scientists have an incomplete understanding of weather and climate. For example: do clouds have a net warming or cooling effect? They cannot say for certain.
  • Models work at too coarse a resolution to be 'simulations', which they wrongly claim to be.
  • The climate is more complex than modelers make out. They can only run their models by grossly simplifying things.
  • It would take about a hundred million trillion years to run a computer model at something close to the correct resolution.

Leading modeling experts have consistently explained that climate models cannot be trusted. So anyone claiming climate models are accurate is denying both modeling best practice and science.

1. Leading expert modeler Prof. Christopher Essex tells why climate models are hardly better than hocus pocus: “Welcome To Wonderland”!

2. According to expert modelers Kesten Green and J. Scott Armstrong:

Scientific forecasting knowledge has been summarised in the form of principles by 40 leading forecasting researchers and 123 expert reviewers. The principles summarise the evidence on forecasting from 545 studies that in turn drew on many prior studies. Some of the forecasting principles, such as ‘provide full disclosure’ and ‘avoid biased data sources,’ are common to all scientific fields. The principles are readily available in the Principles of Forecasting handbook.


We then audited the IPCC forecasting procedures using the Forecasting Audit Software. Our audit found that the IPCC followed only 17 of the 89 relevant principles that we were able to code using the information provided in the 74-page IPCC chapter. Thus, the IPCC forecasting procedures violated 81% of relevant forecasting principles. It is hard to think of an occupation for which it would be acceptable for practitioners to violate evidence-based procedures to this extent. Consider what would happen if an engineer or medical practitioner, for example, failed to properly follow even a single evidence-based procedure.

- Kesten C. Green & J. Scott Armstrong, in Climate Change: The Facts.

Saturday, 8 July 2017

Where do alarming climate projections come from?

The answer, in a nutshell: mathematical trickery.

The IPCC equation for the feedback factor, used to calculate climate sensitivity, is given in AR4, WG1, page 631, footnote 6. It is:

Under these simplifying assumptions the amplification of the global warming from a feedback parameter λ (in W m⁻² °C⁻¹) with no other feedbacks operating is 1 ÷ (1 + λ ÷ λp), where λp is the ‘uniform temperature’ radiative cooling response (of value approximately –3.2 W m⁻² °C⁻¹; Bony et al., 2006). If n independent feedbacks operate, λ is replaced by (λ1 + λ2 + ... + λn).
Feedback factor, with the ±40% variance applied, and the resulting climate sensitivity (= 1 ÷ (1 − FF), taking roughly 1 °C of no-feedback warming):

FF[mid]:                 0.30  0.35  0.40  0.45  0.50  0.55  0.60  0.65  0.70
FF[low]  (−40%):         0.18  0.21  0.24  0.27  0.30  0.33  0.36  0.39  0.42
FF[high] (+40%):         0.42  0.49  0.56  0.63  0.70  0.77  0.84  0.91  0.98
Sensitivity[low] (°C):   1.2   1.3   1.3   1.4   1.4   1.5   1.6   1.6   1.7
Sensitivity[high] (°C):  1.7   2.0   2.3   2.7   3.3   4.3   6.3   11.1  50.0

Let's consider just how easily we can arrive at a high climate sensitivity value from what looks like a middling feedback factor. The IPCC give their modelers a feedback factor of 0.5 to use
(= λ ÷ λp above, which is a unitless number). Jessica Vial's team were tasked with coming up with (inventing?) this number, which they did. To this central estimate they add and subtract 40% (two standard deviations up or down), because they say they want to cover 95% of eventualities. This is shown in the table above: rows 2, 3, and 4 show the feedback factor with −40%, 0%, and +40% adjustments (labelled FF[low], FF[mid], FF[high]). With a feedback factor of 0.65 (only 0.15 more than their central estimate), the +40% figure for climate sensitivity = 11! That means the equation projects that a doubling of CO2 to 560 ppm from pre-industrial times will give an average 11 °C temperature increase at Earth's surface. Don't worry: it's a maths trick, it's not real. Unfortunately the likes of Angela Merkel, Ed Miliband, Jeremy Corbyn, and countless Tories seem to believe in magic, faeries, and impossible maths equations.
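A short script (my own sketch, not from the post) reproduces the trick. It assumes, as the post does, roughly 1 °C of no-feedback warming for a doubling of CO2, amplified by the gain 1 ÷ (1 − feedback factor):

```python
# Amplification of a ~1 degree C no-feedback warming by a feedback factor f,
# using the gain formula 1 / (1 - f) discussed above.

def climate_sensitivity(f, no_feedback_warming=1.0):
    """Warming after feedback amplification; requires f < 1."""
    return no_feedback_warming / (1.0 - f)

central = 0.5        # the IPCC's central feedback factor
high = 0.65 * 1.4    # 0.65 plus the +40% variance -> 0.91

print(climate_sensitivity(central))  # 2.0 degrees C
print(climate_sensitivity(high))     # ~11 degrees C -- the alarming headline number
```

Note how the gain blows up as f approaches 1: a modest tweak to the feedback factor produces an extreme projection, which is precisely the sensitivity the equation is balanced on.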

Christopher Monckton has a lot to say on this mathematical trickery here and here. I've yet to read chapter 3 of Bode's "Network Analysis and Feedback Amplifier Design" (1945), from which it looks like the climate modelers took their feedback ideas. But pray, don't blame Dr. Bode (RIP). The climate modelers performed a sleight of hand by not using the whole of the forcing in their equation. By taking only the difference in forcing, they created an equation balanced on the edge of a catastrophe. This has been the climate sensitivity equation since 1979; it predates the IPCC and is used in all five IPCC reports. I will elaborate more in another blog. For now, please watch Monckton's talk at the Heartland's 12th International Conference on Climate Change. Once I think I can explain it better, I'll blog it again. I want to show the difference between just using differences (as they do) and what they should do (putting all the forcing in).

How and why does this con work?

You may think the boy-who-cried-wolf story is 'true' of people, in the sense that the story chimes with us: that we disbelieve people we know are lying to us. It ain't so. When the liars pose an existential threat, when they make it a matter of the survival of humanity, then, sadly, we listen to them, again and again. That's why the climate feedback equation is the way it is. With just a bit of tweaking, it can threaten our very existence and guarantee climate alarmists an audience for their doom-mongering. It's not really about the climate for them. Don't be fooled. It's about putting the brakes on human technological progress: tying us down, enslaving us to our fears, so we won't be able to harm the environment.

Saturday, 17 June 2017

Electric cars are overhyped.

I read here about a revolutionary new battery which:

  • "would allow electric cars to be recharged instantly". That is not true.
  • The energy density of batteries is still about 1% that of gasoline. So the engine and fuel of an electric car still weigh a lot more, and the journey range is a lot less.
  • Modern electric cars only drive very well on a full charge. Once they lose a proportion of their charge they are much less responsive.
  • So there are four big issues with electric cars: (1) the long time taken to recharge, during which the car is useless; (2) short range; (3) lack of infrastructure; (4) the low energy density of batteries compared to liquid fuels like gasoline, which makes the engine-plus-fuel weight much higher, so lowering efficiency.
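A back-of-envelope sketch of the energy-density gap. The figures below are my own ballpark assumptions (roughly 12.9 kWh/kg for gasoline and 0.15 kWh/kg for a lithium-ion pack), not numbers from the article:

```python
# Rough energy-density comparison: lithium-ion battery pack vs gasoline.
# Both figures are ballpark assumptions, not measured values.

GASOLINE_KWH_PER_KG = 12.9   # specific energy of gasoline (approx.)
BATTERY_KWH_PER_KG = 0.15    # typical lithium-ion pack (approx.)

ratio = BATTERY_KWH_PER_KG / GASOLINE_KWH_PER_KG
print(f"Battery stores about {ratio:.1%} of gasoline's energy per kg")
# -> on the order of 1%, as the post says
```

(Electric drivetrains are far more efficient than petrol engines, which narrows the effective gap in practice, but the raw storage ratio is what the post refers to.)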

I discussed the prospects of electric cabs with one of the cab drivers who drives me on my daily journey to work. He thinks electric cars need to be a lot better to be usable as cabs. Meanwhile, the local council want every cabbie to have an electric cab within 5 years. My cabbie thinks the local council don't give a toss whether the tech works or not. I think they just want to be seen to be 'saving the planet'.

PS: The local council in question is St Albans in England. It's not a "socialist" council, nor is it Enviro-Stalinist. It is split between Tories and Lib Dems. The electric cab initiative is mostly Lib Dem - who are like a light-green Green Party.

"We are now consulting on the Council's proposals to introduce fully electric Hackney Carriages and Private Hire Vehicles to be licensed. The consultation will last for 12 weeks and will end on 15th June 2017; we hope to report the responses to the Licensing and Regulatory Committee on 18th July."

Sunday, 4 June 2017

How did the UN come to believe that 99.9% of the substances/activities they'd tested might pose a cancer risk?

The Campaign for Accuracy in Public Health Research recently wrote an article about how the UN's cancer agency, IARC, flat-out refuses to say that coffee is safe to drink.

For decades, the International Agency for Research on Cancer (IARC) warned coffee drinkers that their favorite beverage might cause cancer. Finally, the agency updated its assessment in June 2016 and downgraded coffee to Group 3 or “not classifiable as carcinogenic to humans.” While this decision is a step in the right direction, it raises new questions and concerns.

First, IARC did not categorize coffee as Group 4, “probably not carcinogenic to humans,” even though there is considerable evidence supporting the health benefits of coffee consumption, including protection against Parkinson disease, liver disease, type 2 diabetes and liver cancer. Second, IARC’s decision to classify coffee in Group 3 rather than Group 4 represents a pattern of ignoring scientific evidence that supports certainty and the safety of products and behaviors. In fact, IARC has examined almost 1,000 agents over the past 30 years, only once classifying a substance as Group 4. IARC has explained this by saying that to be downgraded to Group 4, science would have to “prove a negative,” a statement that is neither reasonable nor useful to the goal of providing meaningful information to the public. In the end, IARC’s treatment of coffee provides another example of the urgent need to reform both the Agency and its processes.

This blog is my attempt to explain how this peculiar state of affairs arose.

The idea that science should 'have to "prove a negative"' seems to me to come straight out of what's now called 'precautionary thinking'. It also defies the scientific method. How did they do that?

The IARC seem to have taken the precautionary principle (PP) and cubed it. The original PP said we should place a moratorium on technologies which might have the potential to cause widespread environmental change (foreseen or unforeseen), posing a potential existential threat to life. The PP was the environmental movement's alternative to cost-benefit analysis (CBA); a kind of 'radical' risk analysis. Their arguments against GMOs, nuclear power, atmospheric carbon dioxide, and recently nanotechnology, try to derive existential threats from otherwise benign technology. I sense the PP was only ever there to avoid CBA. Today enviros often call it 'precautionary thinking', with the implication that it's a way of looking at the world, rather than a principle to be applied in extremis (as the PP was supposed to be). I would not be surprised to find the IARC have never published or acknowledged a CBA of coffee. Please tell me I'm wrong.

Saturday, 13 May 2017

How big is the human CO2 contribution compared to Earth's CO2 budget?

Nicholas Schroeder May 13, 2017 at 10:10 am

Per IPCC AR5 Figure 6.1, prior to year 1750 CO2 represented about 1.26% of the total biospheric carbon balance (589/46,713). After mankind's contributions (67% fossil fuel and cement, 33% land use changes), atmospheric CO2 increased to about 1.77% of the total biospheric carbon balance (829/46,713). This represents a shift of 0.51% from all the collected stores (ocean outgassing, carbonates, carbohydrates, etc.), not just mankind, to the atmosphere. A 0.51% rearrangement of 46,713 Gt of stores and hundreds of Gt of annual fluxes doesn't impress me as measurable, let alone actionable, attributable, or significant.

And in other words:

Earth's carbon cycle contains 46,713 Gt (10^15 g) ± 850 Gt (± 1.8%) of stores and reservoirs, with a couple of hundred Gt/y of fluxes (± ??) flowing among those reservoirs. Mankind's gross contribution over 260 years was 555 Gt, or 1.2% (IPCC AR5 Fig 6.1). Mankind's net contribution to this bubbling, churning cauldron of carbon/carbon dioxide, 240 Gt or 0.53% (dry-labbed by the IPCC to make the numbers work), is 4 Gt/y ± 96% (IPCC AR5 Table 6.1). Seems relatively trivial to me. The IPCC et al. say natural variations can't explain the increase in CO2. With these tiny percentages and high levels of uncertainty, how would anybody even know? BTW, fossil fuel between 1750 and 2011 represented 0.34% of the biospheric carbon cycle.

555 ÷ 46,713 = 1.2%
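The percentages quoted above can be reproduced directly from the IPCC AR5 Figure 6.1 numbers cited in the comment:

```python
# Reproduce the percentages quoted from IPCC AR5 Fig. 6.1.
TOTAL_CARBON_GT = 46_713   # total biospheric carbon stores (Gt)

pre_1750_atmos = 589       # atmospheric carbon before 1750 (Gt)
modern_atmos = 829         # atmospheric carbon after mankind's additions (Gt)
human_gross = 555          # mankind's gross contribution over 260 years (Gt)
human_net = 240            # mankind's net contribution (Gt)

print(f"{pre_1750_atmos / TOTAL_CARBON_GT:.2%}")                   # 1.26%
print(f"{modern_atmos / TOTAL_CARBON_GT:.2%}")                     # 1.77%
print(f"{(modern_atmos - pre_1750_atmos) / TOTAL_CARBON_GT:.2%}")  # 0.51%
print(f"{human_gross / TOTAL_CARBON_GT:.1%}")                      # 1.2%
print(f"{human_net / TOTAL_CARBON_GT:.2%}")                        # ~0.5%
```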

Sunday, 12 March 2017

Why increasing CO2 cannot lead to catastrophic global warming

From the blog: Knowledge Drift; The Science of Human Error

The effectiveness of CO2 as a greenhouse gas (GHG) tails off logarithmically. On doubling CO2 from 280 ppm (the pre-industrial level) to 560 ppm, an extra 3.7 W/m² of radiative forcing is expected.
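The logarithmic tail-off can be illustrated with the standard approximation ΔF = 5.35 × ln(C/C₀) (the coefficient is the usual published one, not stated in the post); it gives the 3.7 W/m² quoted for a doubling:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing from CO2 relative to a baseline concentration,
    using the standard logarithmic approximation dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560))   # one doubling: ~3.7 W/m^2
print(co2_forcing(1120))  # two doublings: ~7.4 W/m^2 -- each doubling adds the same ~3.7
```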

That is expected to lead to about 1 °C of warming of the average global surface temperature. Note: more CO2 does not cause 'global warming' directly; it just slows down the rate of loss of atmospheric warmth. All other things being equal, that would cause warming. But all other things are not equal. There is a negative feedback operating: the Stefan–Boltzmann effect. As temperature rises, the rate at which black bodies emit heat increases according to the 4th power of temperature. So when the temperature increases, the Stefan–Boltzmann relation means that everything else (ground and oceans) emits more black-body heat. This extra heat is eventually radiated to space, so it is lost to the climate. This built-in negative feedback on temperature rise keeps Earth's temperature at a reasonable level: a temperature rise leads to faster emission of black-body (LWIR) heat.

The black-body formula (the Stefan–Boltzmann relation) used to calculate how much heat is dissipated to space is P = 5.67 × 10⁻⁸ × T⁴, where P is power in watts per square metre and T is temperature in kelvin. So the amount of LWIR emitted increases according to the fourth power of the temperature.

[Table: CO2 ppm; CO2 radiative forcing addition (W/m²); Stefan–Boltzmann emission P (W/m²); T (°C); net warming (W/m²).]
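The Stefan–Boltzmann brake can be checked numerically. This is my own sketch: it assumes Earth's effective radiating temperature is about 255 K (a standard textbook figure, not stated in the post):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission(t_kelvin):
    """Black-body emission P = sigma * T^4, in W/m^2."""
    return SIGMA * t_kelvin ** 4

t0 = 255.0       # Earth's effective radiating temperature (assumed)
forcing = 3.7    # W/m^2 from doubled CO2, per the post

# Extra emission per degree of warming: dP/dT = 4 * sigma * T^3
dP_dT = 4 * SIGMA * t0 ** 3
print(dP_dT)             # ~3.8 W/m^2 per degree C
print(forcing / dP_dT)   # ~1 degree C of no-feedback warming, as the post says
```

The second printed value shows why 3.7 W/m² of forcing translates into roughly 1 °C before any feedbacks are applied.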

Comment: multiple catastrophic errors

The errors climate alarmists seem to have made are manifold:

1. Eliding how the CO2 GHG effect tails off logarithmically.
2. Assuming the atmosphere is a heat sink. It is not: the oceans are Earth's heat sink. Earth's atmosphere has a tiny heat capacity compared to its oceans, which have over 1,000 times the heat capacity of the atmosphere.
3. Misusing this, they invented 'catastrophic warming' by putting all their extra heat into the atmosphere (which it isn't going to hold!). It was a convenient con, because a body with a small heat capacity can (in theory) be made to warm quite fast.
4. Logically, it would make more sense to put their extra heat into the oceans, because the amount of heat which could, in theory, warm the atmosphere by 10 °C can only warm the oceans by 0.01 °C. Fail. Put the heat in the oceans and catastrophic global warming is not 'catastrophic'.
5. Ignoring the basic physics of the Stefan–Boltzmann negative feedback.
6. Most climate models miss (forget, or never bothered to consider) many ocean oscillation effects. These are like smaller versions of El Niño, in the North and South Atlantic and Indian oceans. An area of the ocean collects warm water; heat is then transferred to the atmosphere by evaporative cooling, etc. So oceans heat the atmosphere, not CO2. CO2 just slows down the rate of cooling. Ocean oscillations give global warming records a bumpy or spiky appearance.
7. ... on to infinity. There will always be yet one more 'error' they're prepared to make to push their alarmist/Luddite/Malthusian political agenda.
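Points 2 to 4 can be illustrated with rough numbers. The masses and specific heats below are my own ballpark assumptions (atmosphere ≈ 5.1 × 10¹⁸ kg at cp ≈ 1005 J/kg·K, ocean ≈ 1.4 × 10²¹ kg at cp ≈ 3990 J/kg·K), not figures from the post:

```python
# Compare how far the same energy warms the atmosphere vs the oceans.
# Masses and specific heats are ballpark assumptions.

ATMOS_MASS_KG = 5.1e18
ATMOS_CP = 1005.0        # J / (kg K), dry air

OCEAN_MASS_KG = 1.4e21
OCEAN_CP = 3990.0        # J / (kg K), seawater

atmos_capacity = ATMOS_MASS_KG * ATMOS_CP   # J per degree C
ocean_capacity = OCEAN_MASS_KG * OCEAN_CP   # J per degree C

print(ocean_capacity / atmos_capacity)      # ~1,000x, as the post says

# Energy that would warm the atmosphere by 10 C...
energy = atmos_capacity * 10.0
# ...warms the oceans by only ~0.01 C
print(energy / ocean_capacity)
```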

Friday, 10 March 2017

Global circulation model hindcasting - real or fabricated?

Global circulation models - AKA climate models - claim to be legitimate because, they say, they can hindcast past atmospheric temperatures. I.e. they claim their model projections reproduce past climate: for example, the global cooling period from the early 1940s to mid-1970s. This was done by adding a special factor (aerosols) for this period, which they claim is no longer important today. Some people think this is just fabricated data to give the GCMs a gloss of legitimacy. Just about all GCMs run too hot: they forecast future temperatures too high.

This is another 'reblog' of a comment.

Allan M.R. MacRae January 9, 2017 at 5:47 am,

Ladies and Germs,

Have you looked at the model-hindcasting/fabricated-aerosol issue, as described below?

The climate models do not honestly hindcast the global cooling period from ~1940 to ~1975, because their authors fabricated false aerosol data to force hindcasting.

Therefore, the models cannot forecast anything, because they cannot hindcast, except through fraudulent inputs.


The climate models cited by the IPCC typically use climate sensitivity to atmospheric CO2 (ECS) values that are significantly greater than 1 °C, which must assume strong positive feedbacks for which there is NO evidence. If anything, feedbacks are negative and ECS is less than 1 °C. This is one key reason why the climate models cited by the IPCC greatly over-predict global warming.

I reject as false the climate modellers' claims that manmade aerosols caused the global cooling that occurred from ~1940 to ~1975. This aerosol data was apparently fabricated to force the climate models to hindcast that cooling, and is used to allow a greatly inflated model input value for ECS.

Some history on this fabricated aerosol data follows:

More from Douglas Hoyt in 2006: