
Emissions and the carbon cycle

In the climate discussion, the so-called “CO2 footprint” of living beings, especially humans and farm animals, is increasingly declared a problem, to the point of

  • discrediting the eating of meat,
  • slaughtering farm animals (e.g. in Ireland),
  • or even discouraging young people from having children.

This discussion is based on false premises: it pretends that exhaling CO2 has the same “climate-damaging” quality as burning coal or petroleum.
A closer analysis of the carbon cycle shows the difference.

The carbon cycle

All life on earth is made up of carbon compounds.
The beginning of the so-called food chain is plants, which use photosynthesis to produce mainly carbohydrates, and in some cases fats and oils, from CO2 in the atmosphere, thus storing both carbon and energy.

The further processing of these carbon compounds is divided into several branches, in each of which a conversion back into CO2 takes place:

  • the immediate energy consumption of the plant itself, the “plant respiration”,
  • the — mainly seasonal — decay of part or all of the plant, and humus formation,
  • the energy supply of animals and humans as food. Here, apart from the direct energy supply, a transformation into proteins and fats takes place, and partly also into lime (calcium carbonate).
  • Proteins and fats are passed along the food chain.
  • In the course of life, plants, animals and humans release some of the carbon absorbed from food through respiration as CO2, and in some cases also as methane.
  • With the decomposition of animals and humans, the remaining bound carbon is released again as CO2.
  • The lime formed binds CO2 for a long time; each eggshell, for example, binds about 5 g of CO2 for a very long time.

Abstractly speaking, all CO2 from all living things, whether bound or exhaled, ultimately comes from the atmosphere via photosynthesis. This is very nicely explained by the famous physicist Richard Feynman:

All living beings are temporary stores of CO2. The described mechanisms cause different half-lives of this storage.
Human interventions usually cause a prolongation of the storage and consequently a more sustainable use of CO2:

  • Mainly by conservation and thus stopping the decay processes. This refers not only to the preservation of food, but also to the long-term conservation of wood, as long as wood utilization is sustainable. In this way, building with wood binds CO2 for the long term.
  • Last year’s grain is usually stored and only processed into bread etc. about a year later. In the meantime, this year’s grain plants have already grown again. Thus, the metabolic emissions from humans and animals are already compensated before they take place. If the grain were to rot without being processed, it would have already decomposed into CO2 again last fall.
  • The rearing of farm animals also means CO2 storage, and not only in the form of the long-lived bones. However, the use of fossil energy in mechanized agriculture and fertilizers must be taken into account here.

Limitation – fertilization and mechanization of agriculture

Three factors mean that the production of food may still release more CO2 than in “free nature”, namely when processes are involved that use fossil fuels:

  • the use of chemically produced fertilizers,
  • the mechanization of agriculture,
  • the industrialization of food production.

Because of very different production processes, it is very misleading to speak of a product-specific carbon footprint.

To pick an important example, beef is usually given an extremely high “carbon footprint.” Beef that comes from cattle raised largely on pasture — fertilized without artificial fertilizers — has a negligible “carbon footprint,” contrary to what is disseminated in the usual tables. The same is true for wild animals killed in hunting.

An example that illustrates the double standard of the discussion is the production of biofuels. Their production uses fertilizers and mechanical equipment powered by fossil energy in much the same way as the rest of agriculture. Nevertheless, the fuels produced are considered sustainable and “CO2-free.”

Dependencies

The most important insight from biology and ecology is that it is not within our arbitrary power to remove individual elements of the sensitive ecology without doing great harm to the whole.
Typical examples of such harmful influences are:

  • Overgrazing, i.e., degradation of the landscape by eating away its plant basis of life. Examples of this are widely known. “Overgrazing” can also occur as a result of “well-intentioned” and supposedly positive interventions such as the “water quality improvement” in Lake Constance, with the result that there is no longer enough food for plants and animals in the water.
  • Less well known is “undergrazing,” particularly the failure to remove withered tumbleweeds in the vast semi-arid areas of the world. To address this problem, Alan Savory has introduced the concept of “Holistic Management” with great success. This concept includes, as a major component, the expansion of livestock farming. If plants are not further utilized by “larger” animals, they are processed by microorganisms and generally decompose again quickly, releasing the bound CO2; in some cases they are converted into humus. So nothing is gained for the CO2 concentration of the atmosphere if, for example, cattle or pigs are slaughtered to allegedly improve the CO2 balance. On the contrary, the animals prolong the life of the organic, carbon-binding matter.

Dependence of plant growth on CO2

Plants thrive better the higher the atmospheric CO2 concentration, especially C3 plants:

For plant growth, the increase in CO2 concentration over the last 40 years has been markedly favorable, and the world has become significantly greener, with the side effect of a sink effect, i.e., uptake of part of the additional anthropogenic CO2:

Below a concentration of about 800 ppm, C3 plants do not reach the same CO2 uptake as C4 plants. That is why many greenhouses are enriched with CO2.

Conclusions

Knowing these relationships, compelling conclusions emerge:

  1. Because of the primacy of photosynthesis and the dependence of all life on it, the totality of living things is a CO2 sink, so in the medium and long term the CO2 concentration can only decrease, never increase, because of the influence of living things.
    All living beings are CO2 stores, with different storage times.
  2. There are at least three forms of long-term CO2 binding, which lead to a decrease of the CO2 concentration:

    • calcification
    • humus formation
    • non-energy wood utilization

  3. The use of “technical aids” that consume fossil energy must be separated from the natural carbon cycle in the considerations. It is therefore not possible to say that a particular foodstuff has a fixed “CO2 footprint”. It depends solely on the production method and animal husbandry.
  4. A “fair” consideration must assume here, just as with electric vehicles, for example, that the technical aids of the future or the production of fertilizers are sustainable.

In addition, given that more than half of current anthropogenic emissions are reabsorbed over the course of the year, even a 45% reduction in current emissions leads to the “net zero” situation in which the atmospheric concentration no longer increases. Even if we change global emissions little (which is very likely given energy policy decisions in China and India), an equilibrium concentration of about 475 ppm will be reached before the end of this century, which is no cause for alarm.




5 Simple Climate Facts



1. Global CO2 emissions reached their maximum level in 2018 and are not expected to increase further

There was a massive drop in emissions in 2020 due to the Corona pandemic. But the maximum $CO_2$ emissions had already been reached in 2018 at 33.5 Gt, and 2019 and 2021 emissions are below that level as well:

Since 2003 there has been a clear downward trend in the relative growth of CO2 emissions (the analogue of economic growth), and between 2018 and 2019 the 0% line was reached, as described in this article:

The reason for this is that the growth in emissions in emerging economies now roughly balances the decline in emissions in industrialized countries. In addition, there was already a sharp bend in Chinese emission growth in 2010.

So the actual “business-as-usual” scenario is not the catastrophic scenario RCP8.5 with exponentially growing emissions that is still widely circulated in the media, but de facto a continuation of global CO2 emissions at the plateau reached since 2018. The partial target of the Paris climate agreement, “Countries must reach peak emissions as soon as possible“, has thus already been achieved for the world as a whole since 2018.

2. It is enough to halve emissions to avoid further growth of CO2 levels in the atmosphere

To maintain the current level of CO2 in the atmosphere, it would be sufficient to reduce emissions to half of today’s level (https://www.youtube.com/watch?v=JN3913OI7Fc&t=291s (in German)). The straightforward mathematical derivation and various scenarios (from “business as usual” to “zero carbon energy transition”) can be found in this article. Here is the result of the predicted CO2 concentration levels:


In the worst case, with future constant emissions, the CO2 concentration will be 500 ppm by 2100 and remain below the equilibrium concentration of 544 ppm, which is less than double the pre-industrial concentration. The essential point is that in no case will CO2 levels rise to climatically dangerous levels, but they would probably fall to dangerously low levels if the global energy transition were “successful”, because current peak grain harvests are 15% larger than they were 40 years ago due to increased CO2 levels.
Literally, the Paris climate agreement states in Article 4.1:
Countries must reach peak emissions as soon as possible “so as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century.”
This means that the balance between anthropogenic emissions and CO2 removals must be achieved in the second half of this century. The fact is that this balance is reached when total emissions are halved. The time target for this 50% goal lies between 2050 and 2100; these two limits correspond to the blue and the turquoise scenario. The Paris climate agreement therefore does not call for complete decarbonization at all, but allows for a smooth transition rather than the disruption implied by complete decarbonization.
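To make the halving argument explicit, here is a back-of-envelope sketch (my own restatement, under the assumption that the net uptake by oceans and biosphere stays at roughly its current rate in the near term). With $C$ the atmospheric $CO_2$ concentration, $E$ the anthropogenic emissions and $A$ the net uptake by sinks, currently about half of the emissions:

$ \frac{dC}{dt} = E - A, \qquad A \approx \tfrac{1}{2}E $

If emissions are cut to $E' = \tfrac{1}{2}E$ while the sinks keep absorbing at today’s rate, then

$ \frac{dC}{dt} = E' - A \approx \tfrac{1}{2}E - \tfrac{1}{2}E = 0, $

i.e. the concentration stops rising without complete decarbonization.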

3. According to radiative physics, climate sensitivity is only half a degree

The possible influence of $CO_2$ on global warming is that its absorption of thermal radiation causes that radiation to reach space in a weakened form. The physics of this process is radiative transfer. To actually measure this greenhouse effect, the infrared radiation emitted into space must be measured. The theoretically expected greenhouse effect is so tiny, at 0.2 $\frac{W}{m^2}$ per decade, that it is undetectable with current satellite technology, which has a measurement accuracy of about 10 $\frac{W}{m^2}$.
Therefore, one has no choice but to settle for mathematical models of the radiative transfer equation. However, this is not a valid proof for the effectiveness of this greenhouse effect in the real, much more complex atmosphere.
There is a widely accepted simulation program MODTRAN that can be used to simulate the emission of infrared radiation into space, and thus the $CO_2$ greenhouse effect, in a physically clean way. If I use this program to calculate the so-called CO2 sensitivity (the temperature increase when CO2 doubles from 280 to 560 ppm) under correct conditions, the result is a mere 1/2 °C:


The facts are discussed in this article. There I also describe, in order to illuminate the mindset of IPCC-affiliated scientists, what I consider their incorrect approach to sensitivity calculations using the MODTRAN simulation.

Accordingly, if all real-world conditions are correctly accounted for, the temperature increase from doubling $CO_2$ from 280 ppm to 560 ppm is just 1/2 °C, well below the Paris Climate Agreement targets.

4. The only detectable effect of CO2 increase is the greening of the Earth

The greenhouse effect is so far a theoretical hypothesis: its expected signal of less than 0.2 $\frac{W}{m^2}$ per decade is only a fraction of the measurement error of infrared satellite measurements (about 10 $\frac{W}{m^2}$) and therefore cannot yet be proven beyond doubt. Another welcome effect of increased $CO_2$ content, however, has been abundantly demonstrated: between 1982 and 2009 the greening of the Earth increased by 25-50%, 70% of which is attributable to the increase in CO2. Notably, parts of Earth’s drylands have also become greener, because plants have a more efficient water balance at higher $CO_2$ levels.

5. The increase in world mean temperature over the past 40 years has been caused by decreased cloud formation

It is a fact that the mean temperature of the world has increased considerably since 1970. If it is not only due to increased CO2 concentration, what could be the cause?

A simple calculation shows that 80% of the temperature increase over the last 40 years is due to the real and measurable effect of reduced cloud reflectivity, and at most 20% is due to the hypothetical and so far not definitively proven CO2 greenhouse effect:

The causes of reduced cloud formation may indeed be partly man-made, because the basic mechanism of heat regulation by evaporation through plants and the resulting clouds depends on the way humans farm and treat the natural landscape (see also this video (in German)). The most important man-made risk factors are:

All three factors contribute to the 5% decrease of average cloud cover since 1950 (https://taz.de/Wasser-und-Klimaschutz/!5774434/), which explains at least 80% of the temperature rise since then, as described above.
To stop the warming caused by reduced cloud formation, CO2 emission reductions by stopping use of fossil fuels are of no use. A refocus on solving the real problems instead of ideological fixation on CO2 is overdue.




Global Temperature predictions



Questioning the traditional approach

The key question of climate change is: how much does the $CO_2$ content of the atmosphere influence the global average temperature? And in particular, how sensitive is the temperature to changes in the $CO_2$ concentration?
We will investigate this by means of two data sets, the HadCRUT4 global temperature average data set, and the CMIP6 $CO_2$ content data set.
The correlation between these data is rather high, so it appears fairly obvious that rising $CO_2$ content causes rising temperatures.
With a linear model it appears easy to find out exactly how the temperature $T_i$ in year $i$ is predicted by the $CO_2$ content $C_i$ plus random (Gaussian) noise $\epsilon_i$. From theoretical considerations (radiative forcing) it is likely that the best fitting model uses $log(C_i)$:
$T_i = a + b\cdot log(C_i) + \epsilon_i$
The constants a and b are determined by a least squares fit (with the Python module OLS from package statsmodels.regression.linear_model):
a=-16.1, b=2.78
From this we can determine the sensitivity, which is defined as the temperature difference when $CO_2$ is doubled:
$\Delta T = b\cdot log(2) °C = 1.93 °C$
This is nearly 2 °C, a number close to the official estimates of the IPCC.
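For readers who want to reproduce this step, here is a minimal sketch of the described fit (illustrative only: the file name and column labels are my assumptions, not the original analysis script):

```python
# Minimal sketch of the fit T = a + b*log(C) + eps, as described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS

# Assumed input: annual HadCRUT4 temperature anomalies T (°C) and atmospheric
# CO2 concentrations C (ppm) on a common yearly index (hypothetical file name).
df = pd.read_csv("hadcrut4_co2_annual.csv")
T = df["T"].values
C = df["C"].values

X = sm.add_constant(np.log(C))        # regressors: constant and log(CO2)
fit = OLS(T, X).fit()
a, b = fit.params
print(f"a={a:.2f}, b={b:.2f}")        # text quotes a=-16.1, b=2.78

# Sensitivity = temperature difference for a doubling of CO2
print(f"sensitivity = {b * np.log(2):.2f} °C")
```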

What could be wrong with this? It appears to be very straightforward and logical.
We have not yet investigated the residuals of the least squares fit. Our model says that the residuals must be Gaussian noise, i.e. uncorrelated.
The statistical test for this is the Ljung-Box test. The Q-criterion of the fit is Q = 184 with p ≈ 0. This means that the residuals have significant correlations: there is structural information in the residuals that is not covered by the proposed linear model of the log($CO_2$) content. Looking at the diagram showing the fitted curve, we get a glimpse of why the statistical test failed:

We see 3 graphs:

  • The measured temperature anomalies (blue),
  • the smoothed temperature anomalies (orange),
  • the reconstruction of the temperature anomalies based on the model (green)

While the fit looks reasonable with respect to the noisy original data, it is obvious from the smoothed data that there must be other systematic reasons for temperature changes besides $CO_2$, causing temporary temperature declines such as during 1880-1910 or 1950-1976. Most surprisingly, from 1977-2000 the temperature rise is considerably larger than would be expected from the $CO_2$ increase alone.

The systematic model deviations, among them a 60-year cyclic pattern, can also be observed when we look at the residuals of the least squares fit:

Enhancing the model with a simple assumption

Considering the fact that the oceans, and to some degree the biosphere, are enormous heat stores which can take up and return heat, we enhance the temperature model with a memory term for the past. Without knowing the exact mechanism, this allows us to include the “natural variability” in the model. In simple terms it corresponds to the assumption that the temperature this year is similar to the temperature of last year. Mathematically this is modelled by an extended autoregressive process ARX(n), where the temperature in year $i$ is assumed to be the sum of

  • a linear function of the logarithm of the $CO_2$ content, $log(C_i)$, with offset $a$ and slope $b$,
  • a weighted sum of the temperature of previous years,
  • random (Gaussian) noise $\epsilon_i$

$ T_i = a + b\cdot log(C_i) + \sum_{k=1}^{n} c_k \cdot T_{i-k} +\epsilon_i $

In the most simple case ARX(1) we get

$ T_i = a + b\cdot log(C_i) + c_1\cdot T_{i-1} +\epsilon_i $

With the given data the parameters are estimated, again with the Python module OLS from package statsmodels.regression.linear_model:
$a=-7.33, b=1.27, c_1=0.56 $
The reconstruction of the training data set is much closer to the original data:

The residuals of the fit now look much more like a random process, which is confirmed by the Ljung-Box test with Q = 20.0 and p = 0.22.

By considering the natural variability, the sensitivity to $CO_2$ is reduced to
$\Delta T = b\cdot log(2) °C = 0.88 °C$
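A corresponding sketch of the ARX(1) fit and the Ljung-Box check (again with the assumed arrays T and C from the sketch above; the lag choice for the test is my own):

```python
# Sketch of the ARX(1) fit T_i = a + b*log(C_i) + c1*T_{i-1} + eps_i.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
from statsmodels.stats.diagnostic import acorr_ljungbox

# Regressors for year i: constant, log(C_i), and the lagged temperature T_{i-1}
X = sm.add_constant(np.column_stack([np.log(C[1:]), T[:-1]]))
y = T[1:]

fit = OLS(y, X).fit()
a, b, c1 = fit.params
print(f"a={a:.2f}, b={b:.2f}, c1={c1:.2f}")   # text quotes -7.33, 1.27, 0.56

# Ljung-Box test on the residuals: large p-values mean no remaining
# autocorrelation, i.e. the model captured the systematic structure.
print(acorr_ljungbox(fit.resid, lags=[20]))

# Sensitivity as defined in the text: temperature difference for a CO2 doubling
print(f"sensitivity = {b * np.log(2):.2f} °C")
```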

In another post we have applied the same type of model to the dependence of the atmospheric $CO_2$ content on the anthropogenic $CO_2$ emissions, and used it as a model for predictions of the future atmospheric $CO_2$ content. Three scenarios are investigated:

  • “Business as usual” re-defined from latest emission data as freezing global $CO_2$ emissions to the level of 2019 (which is what is actually happening)
  • 100% worldwide decarbonization by 2050
  • 50% worldwide decarbonization by 2100

The resulting atmospheric $CO_2$ has been calculated as follows:

Feeding these predicted $CO_2$ content time series into the temperature ARX(1) model, the following global temperature scenarios can be expected for the future:
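As an illustration of this last step, here is a minimal forward simulation of the fitted ARX(1) model for an arbitrary future $CO_2$ scenario (the parameter values are the ones quoted above; the scenario curve and starting anomaly are placeholders of my own, not the original scenario data):

```python
# Forward simulation of the ARX(1) temperature model for a given CO2 scenario.
import numpy as np

a, b, c1 = -7.33, 1.27, 0.56                 # parameters quoted in the text

def simulate_arx1(co2_scenario, t_start):
    """Iterate T_i = a + b*log(C_i) + c1*T_{i-1}, starting from anomaly t_start."""
    temps = [t_start]
    for c in co2_scenario:
        temps.append(a + b * np.log(c) + c1 * temps[-1])
    return np.array(temps[1:])

# Placeholder "business as usual"-like curve levelling off near 500 ppm
years = np.arange(2022, 2151)
co2_bau = 415 + 95 * (1 - np.exp(-(years - 2022) / 60.0))
print(simulate_arx1(co2_bau, t_start=0.8)[-1])   # anomaly at the end of the run
```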

Conclusions

The following conclusions are made under the assumption that there is in fact a strong dependence of the global temperature on the atmospheric $CO_2$ content. I am aware that this is contested, and I myself have argued elsewhere that the $CO_2$ sensitivity is as low as 0.5°C and that the influence of cloud albedo is much larger than that of $CO_2$. Nevertheless it is worth taking the mainstream assumptions seriously and looking at the outcome.

Under the “business as usual” scenario, i.e. constant $CO_2$ emissions at the 2019 level, we can expect a further temperature increase of approximately 0.5°C by 2150. This is 1.4°C above the pre-industrial level and therefore below the 1.5°C mark of the Paris climate agreement.
Much more likely and realistic is the “50% decarbonization by 2100” scenario, with a further 0.25°C increase, followed by a decrease to current temperature levels.

The politically advocated “100% decarbonization by 2050”, which is completely infeasible without the economic collapse of most industrial countries, would bring us back to the cold pre-industrial temperature levels, which is not desirable.




How much CO2 will remain in the atmosphere?



There are two parts of the climate discussion:

  • The sensitivity of the temperature w.r.t. the atmospheric $CO_2$ content
  • The amount of $CO_2$ in the atmosphere

While the $CO_2$ sensitivity dominates the scientific climate discussion, the political decisions are dominated by “carbon budget” criteria on the basis of numbers, which are hardly publicly discussed.
It has been claimed that more than 20% of the emitted $CO_2$ will remain in the atmosphere for more than 1000 years.

This article will investigate the functional relation between $CO_2$ emissions and the actual $CO_2$ content of the atmosphere.
After finding this relation, several future emission scenarios and their effect on the atmospheric $CO_2$ content are investigated.

Carbon dioxide emissions in the past

The starting point is the actual $CO_2$ emissions during the last 170 years:

It is very informative to look at the relative changes of this time series (here the mathematical derivation). This is the equivalent of economic growth for $CO_2$ emissions.

The largest increase in $CO_2$ emissions was between 1945 and 1980, the period of great growth in wealth and quality of life primarily in the industrialized countries, with the absolute peak of global emissions growth passed in 1970, interestingly 3 years before the first oil crisis. At the turn of the millennium, there was another increase in emissions, this time caused by the economic boom of the emerging economies. Since 2003, the growth of emissions has been steadily declining, and has de facto already fallen below the zero line, i.e. from now on, emissions are not expected to grow, despite the growth in China, India and other emerging and developing countries.
This is convincingly illustrated in the time-series graph of the Global Carbon Project:

Source: Global Carbon Project (Time Series)

The long-standing decline in emissions in industrialized countries is currently balancing out with the slowing rise in emerging economies China and India since 2010.
Accordingly, it is realistic to call constant $CO_2$ emissions from 2019 onward “business as usual”. While 2020 saw a Covid-19 driven decline in emissions, the rebound in 2021 is expected to remain 1.2% below the 2019 level.

CO2 content of the atmosphere with simple emission models

It is assumed that before 1850 the $CO_2$ level was approximately constant and that the measured $CO_2$ content is the sum of the pre-industrial constant level and a function of the $CO_2$ emissions. The aim of this chapter is to find a simple function, which explains the atmospheric content.

Three different models are tested:

  • The first model assumes that all $CO_2$ emissions remain in the atmosphere forever. This means that the additional $CO_2$ content, on top of the pre-industrial level, would be the cumulative sum of all $CO_2$ emissions.
  • The second model assumes an exponential decay of emitted $CO_2$ into the oceans or biosphere with a half-life of 70 years, i.e. half of all emitted $CO_2$ is absorbed after 70 years. This is achieved by a convolution with an exponential decay kernel with a time constant of $70/ln(2) \approx 100$ years.
  • The third model assumes an exponential decay of emitted $CO_2$ into the oceans or biosphere with a half-life of 35 years, i.e. half of all emitted $CO_2$ is absorbed after 35 years. This is achieved by a convolution with an exponential decay kernel with a time constant of $35/ln(2) \approx 50$ years.

In order to make the numbers comparable, the emissions, which are measured in Gt, have to be converted to ppm. This is done with the equivalence 3210 Gt $CO_2$ = 410 ppm.
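The three models are simple enough to be reproduced in a few lines; here is a minimal sketch (the emission series is a placeholder, the real analysis uses the annual emissions in Gt since 1850 shown above):

```python
# Sketch of the three emission-to-concentration models described above.
import numpy as np

GT_PER_PPM = 3210.0 / 410.0      # conversion used in the text: 3210 Gt CO2 = 410 ppm
C_PREINDUSTRIAL = 280.0          # ppm, assumed pre-industrial baseline

def concentration(emissions_gt, half_life_years=None):
    """Atmospheric CO2 (ppm) from annual emissions (Gt/yr).
    half_life_years=None keeps all emitted CO2 in the atmosphere forever."""
    e_ppm = np.asarray(emissions_gt) / GT_PER_PPM
    n = len(e_ppm)
    if half_life_years is None:
        added = np.cumsum(e_ppm)                  # model 1: cumulative sum
    else:
        tau = half_life_years / np.log(2)         # 70/ln2 ≈ 100 a, 35/ln2 ≈ 50 a
        kernel = np.exp(-np.arange(n) / tau)      # exponential decay kernel
        added = np.convolve(e_ppm, kernel)[:n]    # models 2 and 3: convolution
    return C_PREINDUSTRIAL + added

# Usage with a hypothetical constant-emission series of 37 Gt/yr over 80 years:
print(concentration(np.full(80, 37.0), half_life_years=35)[-1])
```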

The yellow graph shows the measured actual emissions from the diagram above, and the blue graph the measured actual $CO_2$ content.

The first “cumulative” model approximates the measured $CO_2$ content quite well from 1850 to 1910, but heavily overpredicts the $CO_2$ content after 1950. This falsifies the hypothesis that $CO_2$ stays in the atmosphere for “thousands of years”.
The second model, with a half-life of 70 years for emitted $CO_2$, also overshoots considerably after 1950; it approximates the period between 1925 and 1945. The third model, with a half-life of 35 years for emissions, fits the actual $CO_2$ content from 1975 until now.

This confirms what has recently been published in Nature: the rate of $CO_2$ absorption into the oceans increases with increasing atmospheric $CO_2$ content.

Source: https://www.nature.com/articles/s41467-020-18203-3

The same relation, in particular the increasing “carbon sink” of oceans and biosphere, is reported by the Global Carbon Project in this graphic:


Although we can expect a further increase of the $CO_2$ flux into the ocean in the future, we can therefore safely use the third model, with a half-life of 35 years, for conservative, i.e. non-optimistic, predictions.

Future scenarios

In order to evaluate policy decisions, I will apply this model to predict the future $CO_2$ content with 3 different emission scenarios:

  • The first scenario (red) I would like to call the “business as usual” scenario, in the sense that China already increases its $CO_2$ emissions only marginally and has committed itself to stop increasing $CO_2$ emissions after 2030. Today emissions are not growing any more. This scenario means that we keep global emissions at the 2019 level.
  • The second scenario (green) is the widely proclaimed decarbonisation by 2050.
  • The third scenario (blue) is a compromise proposal, reducing emissions to 50% of the 2019 value (37 Gt) by 2100. This scenario reflects the facts that fossil fuels are finite, and that research and development of sound new technologies takes time:

The consequences for the $CO_2$-content based on the simple model with 35 years half life time are these:

  • The first scenario (red) increases the $CO_2$ content, but not beyond 510 ppm in the distant future, which is less than double the pre-industrial amount. Depending on the sensitivity, this means a hypothetical $CO_2$-induced temperature increase of 0.16° to 0.8° from current temperatures, or 0.45° to 1.3° since pre-industrial times (see the worked conversion below this list).
  • The second scenario (green), worldwide fast decarbonisation, hardly increases the $CO_2$ content any more, and eventually reduces the atmospheric $CO_2$ content to pre-industrial levels.
    Do we really want this? This would mean starving all plants, which thrive best at $CO_2$ levels larger than 400 ppm. Not even the IPCC has ever formulated this as a desirable goal.
  • The compromise scenario (blue) will slightly raise $CO_2$ content but keep it below 460 ppm, and then gradually reduce it to the 1990 level. The atmospheric $CO_2$ levels will begin to fall after 2065.
Atmospheric $CO_2$ content prediction based on the simple model with a 35-year half-life.
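The temperature figures in the first scenario follow from the usual logarithmic conversion (a back-of-envelope formula, with the sensitivity $S$ per $CO_2$ doubling left as a free parameter):

$ \Delta T = S \cdot \frac{log(C_1/C_0)}{log(2)} $

For $C_1 = 510$ ppm and $C_0 = 415$ ppm the factor is about $0.30\,S$; relative to the pre-industrial $C_0 = 280$ ppm it is about $0.87\,S$, which is where the ranges quoted above come from for the different assumed sensitivities.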

A rigorous mathematical model, based on the single simple assumption that oceanic and biological $CO_2$ absorption is proportional to the $CO_2$ concentration, comes to essentially the same result, with the nice side effect of providing an error estimate for the prediction:

Conclusion

Not even the most pessimistic of the scenarios described above reaches a “catastrophic” $CO_2$ content in the atmosphere.
The complete decarbonisation scenario by 2050 can only be judged as utter nonsense. No one can wish to go back to pre-industrial $CO_2$ levels.
On the other hand, the limited fossil resources motivate replacing them in a feasible and humane way. This is reflected in the “compromise” scenario, which gradually reduces long-term emissions to the level of approximately the year 1990.




Temperature data tampering

Relevant data analysis crucially depends on the availability of reliable data. Historically it has been of utmost importance to have temperature data that are as precise as possible, because this is one of the essential predictors for the expected weather. Also the long term monitoring of climate and climate trends requires maximum quality temperature data.

What if people and institutions started messing with such data because the data, as they actually are, do not fit a given political agenda? This would invalidate, at least partially, the conclusions that we draw from these observations.

Unfortunately, exactly such deliberate tampering with temperature data actually happened. One of the milestones of these events is a paper by James Hansen, “GISS analysis of surface temperature change”. In this paper Hansen describes a number of adjustments that, in typically rare cases, need to be made to temperature data in order to make the temperature anomaly averaging consistent:

  • The most common adjustment is the correction for the urban heat island effect. Typically this is a consequence of urban growth: in the past the thermometer was outside a town in a green environment; with the growth of the town it is now surrounded by houses and subject to the urban heat effect, which raises the temperature. In order to make this consistent with the previous measurements, either the past temperatures must be raised or the future temperatures must be lowered. It is usually easier to adjust the past temperatures, pretending that the city was always as large as today. It is questionable whether such an adaptation is justified, or whether it would be wiser to track the urban heat effect explicitly and not change actual measurements of the past. The fact is that a change in past temperatures also changes the global mean temperature, which is not justified under any circumstances.
  • A second, understandable, situation for adapting temperature data occurs when a thermometer location is moved to a higher or lower altitude. For example, in the case of a downhill altitude change of 160 m this corresponds to an increase of the past data by about 1° C, with an assumed adiabatic lapse rate of -6°C/km (a worked example follows below this list). The physical meaning of this is the invariance of the potential temperature when the energy content of the whole system is not changed. In my judgement this is the only legitimate adaptation of temperature measurements, because it does not change the original true measurement, but maps it to another location where no measurement had been made previously.
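As a quick check of the quoted number (my own arithmetic, using the lapse rate stated above): a station moved 160 m downhill sits in air that is warmer by

$ \Delta T = 6\,\frac{°C}{km} \cdot 0.16\,km \approx 1\,°C, $

so the older, higher-altitude readings are raised by about this amount when they are mapped onto the new location.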

Both of these adaptations are justified in Hansen’s paper. It must be noted that the dominant case of urban heat islands would lead to an increase of past temperatures, or a decrease of current and future temperatures.

The time series of US mean temperatures have been published by Hansen on p. 47 of his paper (bottom left corner of the page):

It can clearly be seen that the 3 highest temperatures in the 20th century were in 1934, 1921, and 1931. Also the moving average has its peak clearly in the early 1930s, and a downward trend from the 30s to the end of the century.

When we look at today’s temperature data, which are available online from NOAA, we are surprised to see this:


Looking carefully at the diagram, one can observe that now the 1998 temperature is larger than the previously largest 1934 temperature, and quite a few of the later 20th century temperatures have been increased while reducing the earlier data. This is exactly the opposite of what one would expect from an urban heat island correction.

I was made aware of this problem, which in my understanding can only be interpreted as willful manipulation, by a video by Tony Heller.

Detailed data analysis by Prof. Friedrich Ewert

The fact that the NASA/NOAA temperature data have been manipulated has been carefully analyzed and evaluated by Prof. Friedrich Ewert. He found in a tedious analysis that many temperature data from before 2010 had been changed by 2012. In today’s data sets the original data from before 2010 can no longer be found. Prof. Ewert was able to make the comparisons because he had archived the earlier data sets.

The manipulations concern not only US temperature data, but also data from other countries. For 120 randomly selected stations, Ewert recorded the tens of thousands of individual data points given by NASA for each year before and after 2010. Printing out his data would result in a list 6 meters long. It can be seen that ten different methods were used to produce the climate warming. They are all documented in the study with examples. Six of the ten methods were applied most frequently:

  • A lowering of the annual mean values in the initial phase.
  • A reduction of individual higher values in the first warm phase.
  • An increase of individual values in the second warm phase.
  • A suppression of the second cooling phase beginning around 1995.
  • A shortening of the data series by the earlier decades.
  • For long-term series, the data series were even shortened by the early centuries.

The Climategate Emails

The leaked “Climategate Emails”, which became public in 2009, provide further evidence that deliberate temperature data tampering was not a conspiracy theory, but a real conspiracy between multiple institutions and persons from at least the US and Great Britain. It was carefully investigated by Stephen McIntyre and Ross McKitrick, who in 2009 debunked and uncovered the deception of Michael Mann’s “hockey stick” in their paper “Proxy inconsistency and other problems in millennial paleoclimate reconstructions”.

The most famous example of the deliberate temperature manipulation was expressed by Phil Jones of the British Climatic Research Unit (CRU) in an email to Michael Mann and .. Briffa:

“I’ve just completed Mike’s Nature trick of adding in the real temps to each
series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s
to hide the decline.”

Here is the diagram from the dossier by Stephen McIntyre and Ross McKitrick:




Carbon footprint of Photovoltaic Power Generation – a reality check



Solar energy is considered to emit no $CO_2$ and to be the true solution to the desire for a carbon-free energy supply. This directs the focus of attention to the “active” life of PV power generation, which appears to produce “free” energy, free of cost and free of $CO_2$.

This focus changes when you actually plan to install photovoltaic supply for e.g. a private home. This requires quite a lot of costly components, costly not only in terms of price, but also in terms of energy:

  • The solar panels and (thick, high-current) cables,
  • the inverter module,
  • backup batteries to at least bridge the day/night volatility

In particular, the solar panels and Li-Ion batteries require a lot of energy and other resources. Because they are needed in large quantities due to the low energy density and volatility of sunlight, their production costs must be taken into account for a complete calculation of the energy budget and carbon footprint.

We investigate the carbon footprint of a system that does not depend on fossil fuels. In 2013 Mariska de Wild-Scholten investigated this in the publication “Energy payback time and carbon footprint of commercial photovoltaic systems“, which will be used as the basis of this analysis.

Most assumptions for the calculations appear to be fair and realistic. Two of them I want to look at more closely:

  • The energy output is assumed to be 1275 kWh/a per installed $kW_p$ of module size,
  • the lifetime is assumed to be 30 years.

Taking the real energy output from Germany, based on the official statistics of the Fraunhofer Institute, the total volatile energy produced in 2020 from $54 GW_p$ of installed solar modules was $50 TWh$, which is an average real energy output of $926 kWh/a$ for each installed $kW_p$ of modules. The assumptions of the paper explicitly state that they are valid for southern Europe, so there is no intentional bias, but the numbers are simply not valid for Germany and other central European countries.

The lifetime assumption is also very optimistic. Taking into account that there may be damage to the modules due to thunderstorms, hail storms, or defects in the solar cells, it is more reasonable to assume a lifetime identical to the guaranteed product lifetime, which is typically 25 years.

Most PV modules are produced in China, so regarding the $CO_2$ emissions, the base number for the carbon footprint is

$ CF_{base}=80 \frac{g CO_2}{kWh} $

Taking into account the real solar energy delivery, and the more realistic, insurance validated life expectancy of 25 years, the real PV carbon footprint for Central Europe, represented by the statistics of Germany, is

$ CF_{Germany} = 80\cdot \frac{1275}{926}\cdot\frac{30}{25} \frac{g CO_2}{kWh} = 132 \frac{g CO_2}{kWh}$

Including short-term storage

Due to the volatile character of solar energy, the actual power generation is not yet the end of the story when the goal is energy production without fossil fuels. The first type of volatility is the day/night cycle and short-term weather variations. This type of volatility can be covered with a 1-7 day battery storage. A reasonably safe value is a storage equivalent to 3.5 days of energy consumption, which covers approximately 7 days, assuming half the consumption is during the solar-active daytime. With this scenario, chances are good to cover nearly the whole time range from March to October. This is more capacity than is typically bought in Germany today, but the political encouragement of storage is just beginning. In the US there are more fully autonomous “island” installations, which usually have even larger battery stores.
Therefore, as a rule of thumb, the battery capacity is assumed to be 1% of the expected yearly energy yield, which covers most requirements except the 3-4 winter months. This is documented by practical experience. For each installed module of 1 $kW_p$ the required capacity C is
$ C = 926 \cdot 0.01 kWh \approx 9 kWh $
The carbon footprint of Li-Ion batteries is approximately 75 kg $CO_2$ per kWh of storage capacity; it is assumed to be the same for EVs as for power-wall usage. The guaranteed lifetime is 10 years, although there are claims, not guaranteed, of a 20-year lifetime. In order to estimate conservatively, we assume 10 years of lifetime, considering the fact that the battery capacity cannot be used 100%: extending the life of a battery typically means reducing the active capacity by 30-50%. The calculations can be adapted for 15 or 20 years when new generations of batteries become available. The storage carbon footprint per installed module is therefore distributed over the total energy production of 10 years:
$CF_{Battery}= \frac{75\,kg \cdot 9}{10\cdot 926\,kWh} \approx 73 \frac{g}{kWh}$

Therefore a feasible standard installation for private homes, essentially covering the electric energy supply for the 8 months from March to October, implies a total carbon footprint of $205 \frac{g}{kWh}$.
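The chain of numbers above is easy to verify; here is a minimal script reproducing it (all inputs are the assumptions stated in the text, not independent data):

```python
# Reproduces the PV carbon-footprint arithmetic from the text.
CF_BASE = 80.0          # g CO2/kWh: module production, southern-Europe yield assumption
YIELD_ASSUMED = 1275.0  # kWh per kWp and year, assumption of the cited study
YIELD_GERMANY = 926.0   # kWh per kWp and year, from the Fraunhofer statistics (50 TWh / 54 GWp)
LIFE_ASSUMED = 30.0     # years, assumption of the cited study
LIFE_REAL = 25.0        # years, guaranteed product lifetime

# Scale the base footprint by the lower real yield and the shorter lifetime
cf_germany = CF_BASE * (YIELD_ASSUMED / YIELD_GERMANY) * (LIFE_ASSUMED / LIFE_REAL)

# Short-term (day/night) storage: ~1% of the yearly yield as capacity (rounded to 9 kWh),
# 75 kg CO2 per kWh of Li-ion capacity, 10-year battery lifetime
capacity_kwh = round(0.01 * YIELD_GERMANY)                    # ≈ 9 kWh per installed kWp
cf_battery = 75_000.0 * capacity_kwh / (10 * YIELD_GERMANY)   # g CO2 per kWh produced

print(f"CF module (Germany): {cf_germany:.0f} g/kWh")               # ≈ 132
print(f"CF battery:          {cf_battery:.0f} g/kWh")               # ≈ 73
print(f"CF total (8 months): {cf_germany + cf_battery:.0f} g/kWh")  # ≈ 205
```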

Long term storage – through the winter

There are several ways to estimate the carbon footprint effect of seasonal volatility, i.e. that in winter there is hardly any usable solar insolation whereas in summer there is a peak. Assuming that the yearly total solar energy generation corresponds to the yearly total consumption, it is obvious that the seasonal volatility, which leads to the deficit in winter, means a surplus during the summer months.

Following the analysis of Prof. Hans-Werner Sinn, the main storage problem is not the short-term storage but the long-term storage. Solving the problem with solar energy alone along the lines discussed above, we have to add at least 3 additional months of storage, as there is hardly any solar energy between mid November and mid February; this depends on latitude, and the statement is made for approximately 50 degrees (Germany). There is unanimous consent that this long-term storage is not possible by means of Li-Ion batteries, neither from a price perspective nor from a carbon footprint perspective.

Local longterm solution – solar energy only

The currently favoured approach to long-term storage is the so-called power-to-gas concept: the surplus electrical energy in the summer is converted to hydrogen by electrolysis. Because hydrogen is difficult to store and handle, it is further processed to methane (the calculations for the currently discussed alternatives, ammonia or methanol, are similar). Methane is identical to natural gas and can be easily stored (e.g. in liquid form as LNG) and reconverted to electricity with a gas power plant. This can be done at the scale of the community, city, county, or state, where both the electrolysers and the gas power plants are run.
The problem with this concept is that the effective storage efficiency is only 25%: in order to get 1 kWh in a winter month, you have to invest 4 kWh during the rest of the year. 1 kWh of average yearly consumption (9 direct months plus 3 winter months supplied with a factor of 4) therefore costs
$ \frac{9}{12}+\frac{3\cdot 4}{12} = 1.75\;kWh $
of volatile input solar energy. This increases the total carbon footprint to
$ CF_{total} = (132\cdot 1.75 + 73) \frac{g}{kWh} = 304 \frac{g}{kWh} $

This is nearly half of the carbon footprint of a traditional gas power station, and cannot be neglected.

If we consider only the winter months, the solar P2G process has 4 times the carbon footprint, i.e. $4\cdot 132 \frac{g}{kWh} = 528 \frac{g}{kWh}$. This is almost the same as that of a normal gas-fired power plant (436-549 $\frac{g}{kWh}$, see also here). Accordingly, it is irrelevant for the $CO_2$ balance whether the electricity needed in winter is generated from fossil natural gas or via solar-powered power-to-gas processes. The price of methane produced with power-to-gas, however, is about 10 times that of fossil natural gas.

If it were possible to store hydrogen directly without extremely high pressure or extreme cold, the storage efficiency could be increased to 50%, resulting in a $CO_2$ footprint of at least $264 \frac{g}{kWh}$; this is without taking into account the $CO_2$ generated for the storage itself (e.g. construction of the plant).

Large scale longterm solution – including wind energy

In his analysis Prof. Sinn took into account that solar energy is not the only “regenerative” source, wind being the other, and that the availability of wind is partially complementary to solar power. The result of his calculations was that the total storage requirement for smoothing the seasonal volatility (essentially the problem of winter) would be 11 TWh, based on the total electricity consumption in 2014 of 163 TWh, i.e. approximately 6.7% of that total. Because of large year-to-year variations, a safety margin requires at least 7 to 7.5%, provided one insists on a 100% fossil-free supply.


This would mean a minimum long-term energy overhead factor for power-to-gas storage of
$ \frac{93.3}{100}+\frac{4\cdot 6.7}{100} \approx 1.2 $

and a total carbon footprint of
$ CF_{total} = (132\cdot 1.2 + 73) \frac{g}{kWh} = 231 \frac{g}{kWh} $

The carbon footprint of wind power generation is not treated explicitly here, which means that it is implicitly assumed to be the same as that of solar PV generation. For the original question of private household electricity supply it plays a minor role; it is only relevant as a regenerative partial provider during winter time.

Consequences for Electric vehicles

In the current political understanding in the EU, electric vehicles are by definition considered to be carbon neutral. There are, however, serious discussions about the true carbon footprint of EVs compared to e.g. Diesel cars, emerging from a study by Prof. Hans-Werner Sinn et al. Their analysis, which concludes that the carbon footprint is higher than that of a comparable Diesel car, is based on the current electricity mix of the German grid. Based on their sources, the carbon footprint of a 75 kWh car battery is at least 73 $\frac{g CO_2}{km}$, possibly up to 98 $\frac{g CO_2}{km}$. Indeed, if $CO_2$ is an active greenhouse gas, it does not care about a political taboo.
The consumption is realistically considered to be 15 kWh per 100 km. Therefore the carbon footprint of battery plus solar-based consumption (ignoring the carbon footprint of building the car) is optimally
$CF_{EV} = (73 + \frac{15\cdot 231}{100})\frac{g CO_2}{km} = 108 \frac{g CO_2}{km} $

This is considerably higher than the 2020 EU limit of 95 $\frac{g CO_2}{km}$ and more than 50% above the 2025 limit of 70 $\frac{g CO_2}{km}$.

Why are there double standards in politics and among the authorities? For nature it is irrelevant in which way the $CO_2$ gets into the atmosphere. Politically it is more honest to release the $CO_2$ into one’s own airspace with a gasoline or diesel vehicle than to keep one’s own air allegedly “clean” by $CO_2$ colonialism, shifting the emissions to China or other “cheap countries” through the production of the necessary components.




Climate Sensitivity



The central question of the whole climate discussion revolves around a single issue: how does the climate, in particular the world average temperature, change if the $CO_2$ content of the atmosphere doubles? This is called the climate sensitivity, the control knob of all climate policy. Based on the different modeling assumptions of the Intergovernmental Panel on Climate Change, the IPCC, we are threatened with an average temperature increase of 2°-5° C by the end of the century. The political “optimal target” of the Paris climate agreement is a limit of 1.5° C.
The problem is that the resulting targets in terms of $CO_2$ avoidance are based on an assumed climate sensitivity of 2°-5° C for a doubling of $CO_2$.

Is this correct? Immense costs, the loss of industrial strength, the impoverishment caused by it, and not least our freedom depend on the correct answer to this question.

A simple climate model

We want to use the well-established MODTRAN simulation as a one-dimensional mini-climate model to answer the question of climate sensitivity. MODTRAN incorporates a well-accepted radiative transfer model. This simplification is legitimate in that the radiative equilibrium can in principle be calculated at any location on Earth, and if a consistent final result emerges under the various conditions, we consider it reliable. The program is publicly available, so everything is verifiable. At this point we limit ourselves to an example calculation with the standard atmosphere, which is considered a good global average.

To do this, we set up the MODTRAN program with the atmosphere as it was around 1850; in particular, the $CO_2$ content was 280 ppm at that time. All other air constituents remain at the preset “standard” average values. The atmosphere model is the so-called US Standard Atmosphere, as used in international aviation. As cloud model I chose the most common clouds, cumulus clouds between 660 m and 2700 m altitude. The water vapor content is then adjusted so that the outgoing infrared radiation just reaches the correct value of about 240 $\frac{W}{m^2}$ (corresponding to the average equilibrium insolation with an average albedo of 0.3). This is achieved with an average relative water vapor scale of 0.25. The assumed average surface temperature of the standard atmosphere is 15.2° C.

Simulation of the pre-industrial atmosphere

The dark blue spectrum shows the well-known $CO_2$ hole; the influence of water vapor at both the right and left tails is also clearly visible. As auxiliary lines, the curves that mark the ideal radiation behavior without greenhouse gases at temperatures from 220 K to 300 K are additionally plotted. This makes it possible to estimate the radiation temperature, and thus the energy, at any point in the spectrum.

As the next test scenario, let’s set today’s $CO_2$ concentration at 415 ppm. The 1850 curve will remain in the background as the blue reference curve, and the red curve from today is drawn as an overlay.

Simulating today’s atmosphere

It is noticeable that the curves are almost identical and indistinguishable to the naked eye. The red curve almost completely obscures the blue one. Only in the calculated values in the left box can you see a slight difference. The approximately 1 $\frac{W}{m^2}$ lower radiation is compensated by a temperature increase of the earth’s surface of 0.3°. This average increase of 0.3° represents the hypothetical effective greenhouse effect from the beginning of industrialization until today, on the basis of the radiative transfer equations of the widely accepted MODTRAN model.

Now, what if we assume a doubling of $CO_2$ content from 280 ppm to 560 ppm? Again, the new curve is superimposed on the original blue curve.

Simulating the atmosphere when CO2 levels are doubled

And again, to the naked eye, there is little difference – a few small blue peaks peek out at wavenumber 500, and the “$CO_2$ hole” has become marginally wider, reducing the IR emission by $1.92\frac{W}{m^2}$. To compensate for the reduced infrared radiation due to this minimal greenhouse effect, the ground temperature is increased by a total of 0.5°C. Thus, the climate sensitivity according to the MODTRAN simulation is pretty much half a degree. A model for this sensitivity is described in this article.
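As a rough plausibility check of this number (my own back-of-envelope estimate, not part of the MODTRAN output): linearizing the Stefan-Boltzmann law around the effective emission temperature $T_e \approx 255\,K$ gives a no-feedback response of

$ \Delta T \approx \frac{\Delta F}{4\sigma T_e^3} \approx \frac{1.92\,\frac{W}{m^2}}{3.8\,\frac{W}{m^2 K}} \approx 0.5\,K, $

which is in the same ballpark as the surface temperature adjustment found in the simulation.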

In order to validate this result with data of the climate zone most relevant to a potential global warming, a corresponding scenario with a tropical atmosphere (27° C ground temperature, water vapor scale 0.5, Cumulus clouds) leads to a $CO_2$ sensitivity of 0.67° C in the tropical zone:

Therefore, there is no reason for any alarmism. These values, ranging between 0.5° and 0.67° C, are far below the lowest assumptions of the Intergovernmental Panel on Climate Change.

Why does the IPCC reach different conclusions?

The natural question after these considerations is why the Intergovernmental Panel on Climate Change, which after all includes the best climate scientists, comes to so much more pessimistic conclusions.
A key problem here is that their climate models are extremely complex and claim to represent the full complexity of climate events. There are good reasons to believe that this is fundamentally impossible under current conditions, for example, because turbulent high-energy phenomena such as ocean currents or tropical storms are not adequately represented in these models. Similar models are used for weather forecasting, and these are already known to fail frequently for forecasts that extend beyond a few days.

One important reason to doubt the validity of “global circulation models,” or GCMs, is that they have consistently over-predicted past climate data in past forecasts.
On the left is the average temperature trend (red bar) 1993-2012 – 0.15°/10 years, on the right the same in the period 1998-2012 – 0.03°/10 years, and in addition the results of 110 different climate models. Almost all had estimated much higher temperatures.

Source: http://www.blc.arizona.edu/courses/schaffer/182h/climate/overestimated%20warming.pdf

Simulation of IPCC assumptions

With the MODTRAN simulation program, however, one can reproduce the thinking based on data published by authors close to the IPCC. This is done by first assuming an atmosphere entirely without water vapor and without the other greenhouse gases, and measuring the $CO_2$ sensitivity in such a hypothetical atmosphere.

With the MODTRAN simulation, this situation is achieved when everything in the standard atmosphere is set to 0 except for the $CO_2$ content, removing all clouds and water vapor.

This, of course, raises the hypothetical radiation to an unrealistically high value of 347 $\frac{W}{m^2}$. Clearly, the only deviation from the “ideal curve” is the well-known $CO_2$-hole.

When the $CO_2$ content is doubled and the ground temperature remains the same, the radiation now decreases by 3.77 $\frac{W}{m^2}$ due to the greenhouse effect.

This is pretty much the value of $CO_2$ conditional “radiative forcing” published by the Intergovernmental Panel on Climate Change. The reduced radiative forcing is compensated by temperature increase:

According to this, a temperature increase of 0.75° offsets the doubling of $CO_2$, which would be the sensitivity according to MODTRAN. However, many scientists arrive at an even higher sensitivity of about 1°.
But this sensitivity is called — in a way rightly — the “pure $CO_2$ sensitivity” by scientists close to the IPCC, because it does not yet take into account the influence of water vapor. But since water vapor is an even more potent greenhouse gas, and more water vapor is produced by the $CO_2$-induced temperature increase, in this way of thinking the $CO_2$ sensitivity is thereby effectively doubled. Thus it is possible to arrive at a sensitivity of 2°, which can then be arbitrarily increased by other catastrophic scenarios such as hypothetical melting of polar ice. They completely disregard cumulus cloud formation, which would also be enhanced by increasing the water vapor content and which would lead to a reduction of the incident energy, i.e. to a strong negative feedback. At best, the cloud issue is used by arguing that the very high cirrus clouds may lead to an enhancement of the greenhouse effect.

Explanation of the discrepancy and conclusions

The large discrepancy between the IPCC-published sensitivity of more than 2° C and the 0.5° C found by MODTRAN simulation requires a plausible explanation. Without entering the feedback discussion, which is of minor relevance in the case of a very small sensitivity, here are two reasons for the deviation between the “pure” $CO_2$ forcing of $3.77 \frac{W}{m^2}$ and the “cloud and water vapour” forcing of $1.92 \frac{W}{m^2}$:

  • As mentioned above, the pure $CO_2$ forcing of $3.77 \frac{W}{m^2}$ is based on an outgoing radiation of $347 \frac{W}{m^2}$, i.e. it amounts to about $1.1$% of it. Based on the real $240 \frac{W}{m^2}$, the same relative forcing of $1.1$% would result in about $2.6 \frac{W}{m^2}$.
  • The remaining difference between $2.6 \frac{W}{m^2}$ and $1.92 \frac{W}{m^2}$ can be explained by the fact that the presence of water vapour leads to a certain amount of competition in the emission of radiation in the upper troposphere, with the result of a reduced forcing: some radiation that would be held back by $CO_2$ alone is instead emitted by water vapour.
  • One reason why other investigations based on MODTRAN reach a higher sensitivity is that some of them calculate the spectra from a height of 20 km instead of 70 km. This cuts off a considerable amount of the stratospheric $CO_2$ emission, which has a constant cooling effect, thereby magnifying the warming effect of the remaining $CO_2$.

Therefore, separating $CO_2$, clouds, and water vapor when calculating the $CO_2$ sensitivity is unwarranted. All factors need to be considered simultaneously; this leads to the low sensitivity of 0.5°.




Cloud and Water climate feedback



One key question in the climate discussion is whether the atmospheric water vapor and the clouds provide a net positive or negative feedback: When the atmosphere gets warmer, will the “water content” magnify this or reduce it?

The issue is extremely complex, with the result that science is deeply divided about it: there are publications favoring positive feedback and others favoring negative feedback.

In order to have a chance of logical reasoning, I will try to reduce the problem to its core by eliminating as much as possible of the dynamics.

The first point to make is that there are two kinds of feedback:

  • The first feedback is the direct interaction between $CO_2$, water vapor, and clouds with respect to the IR radiative behaviour as well as the shortwave-relevant albedo. This will be the main focus of this contribution.
  • Usually only a secondary water vapor and cloud feedback is considered, based on an assumed temperature rise by $CO_2$ alone. The very strong negative feedback of (mainly tropical) storms is hardly ever discussed. All these are very complex and are not handled here. For the time being I refer to the work of Prof. Richard Lindzen.

The approach uses the basic atmospheric radiative model as implemented in the MODTRAN software, which is well known and widely accepted.

Three model cases are investigated:

  • Energetic equilibrium for an atmosphere without greenhouse gases
  • Energetic equilibrium and $CO_2$ climate sensitivity for an atmosphere without water vapor but all other greenhouse gases
  • Energetic equilibrium and $CO_2$ climate sensitivity for an atmosphere with water vapor and greenhouse gases, calibrated for the current albedo.

Atmosphere without greenhouse gases:

When there are no greenhouse gases, this implies that there is no water vapor in the atmosphere and therefore there are no clouds. From the earth’s energy budget (Fig 1) it follows that the average albedo a would be
$a=\frac{23}{23+161} =0.125$ instead of the current atmospheric value of appr. 0.3.
This thought experiment requires the “ceteris paribus” assumption, i.e. everything else is assumed to be the same as in our real world climate system.
I explicitly set aside the discussion about "freezing oceans" and the question of how it is possible to have no water vapor while the oceans are in fact creating it, because for the following considerations only the radiative behaviour of the atmosphere is relevant.

A lower average albedo of 0.125 instead of 0.3 implies that the solar insolation and therefore the energy flux equilibrium would be at

$340\frac{W}{m^2}\cdot (1-0.125) = 297.5 \frac{W}{m^2}$ instead of $240\frac{W}{m^2}$ as with $a=0.3$

and an average equilibrium temperature of 271 K = -2 °C.

Therefore the answer to the question "What is the net warming effect from the atmosphere, including all its processes, without changing anything else?" is 17 K and not 33 K as is usually communicated. The usual model assumption of a constant albedo under the condition of "no greenhouse gases" or even "no atmosphere" is deeply flawed and misleading, because it implicitly turns the cloud albedo into a constant without any justification. The contrary is the case: the change of cloud albedo turns out to be the dominant contribution to the atmospheric warming of the last 40 years.
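The albedo and flux numbers above can be checked with a few lines of arithmetic. The sketch below uses only the budget values quoted in the text (23 and 161 $\frac{W}{m^2}$ of reflected and absorbed surface SW flux, 340 $\frac{W}{m^2}$ average insolation) and a bare Stefan-Boltzmann estimate; the latter lands a couple of Kelvin below the 271 K quoted above, which presumably comes from the full radiative equilibrium calculation, but the qualitative conclusion is the same:

```python
# Sketch of the no-greenhouse-gas thought experiment, using only the
# energy-budget numbers quoted in the text. Pure Stefan-Boltzmann, no
# atmosphere: the resulting temperature is therefore only approximate and
# differs by ~2 K from the 271 K quoted above.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
S_AVG = 340.0            # globally averaged solar insolation, W/m^2

def equilibrium_temp(absorbed_flux: float) -> float:
    """Equilibrium temperature for a given absorbed flux (emissivity 1)."""
    return (absorbed_flux / SIGMA) ** 0.25

# Surface-only albedo: 23 W/m^2 reflected out of the 23 + 161 W/m^2 reaching the surface
a_surface = 23.0 / (23.0 + 161.0)          # ~0.125
a_current = 0.3                            # current value incl. clouds

for label, a in [("no clouds/water vapor", a_surface), ("current albedo", a_current)]:
    absorbed = S_AVG * (1.0 - a)
    print(f"{label:>22}: a = {a:.3f}, absorbed = {absorbed:6.1f} W/m^2, "
          f"T_eq = {equilibrium_temp(absorbed):5.1f} K")

# Absorbed flux: 297.5 vs. ~238 W/m^2 (i.e. the 297.5 vs. 240 of the text);
# T_eq: ~269 K vs. ~255 K, so the net atmospheric warming relative to 288 K
# comes out on the order of 17..19 K rather than the usual 33 K.
```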

Atmosphere with no water vapor, only CO2 and other GHG

This case investigates an atmosphere with no water vapor, containing only CO2 and other GHG.
Again there are neither clouds nor water vapor, leading again to the lower average albedo $a=0.125$ instead of 0.3. With this albedo the energy flux equilibrium would be at $297.5 \frac{W}{m^2}$ (instead of 240 as with $a=0.3$) and an average equilibrium temperature of 279.5 K = 6.5 °C.

The $CO_2$ sensitivity of doubling from 280 ppm to 560 ppm according to MODTRAN in an atmosphere without water vapor would be $3.2 \frac{W}{m^2}$, resulting in a temperature sensitivity of 0.75 °C. This is determined by adjusting the Temperature Offset in MODTRAN until the "Difference New-BG" becomes 0.
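To illustrate the procedure (not to replace the actual MODTRAN runs): "adjust the Temperature Offset until the Difference New-BG becomes 0" is in essence a one-dimensional root search. In the sketch below, modtran_olr is a hypothetical placeholder for a call to the radiative model with a given $CO_2$ concentration and ground temperature offset; it is not a real MODTRAN API, and the toy model at the end only serves to show that the search converges:

```python
# Illustration of the sensitivity procedure described above: find the ground
# temperature offset at which the outgoing flux with doubled CO2 matches the
# baseline flux again. `modtran_olr` is a HYPOTHETICAL placeholder for a run
# of the radiative model (CO2 in ppm, temperature offset in K), not a real API.
import math

def temperature_sensitivity(modtran_olr, co2_base=280.0, co2_new=560.0,
                            lo=0.0, hi=3.0, tol=1e-3):
    """Bisect on the temperature offset until 'Difference New-BG' vanishes."""
    target = modtran_olr(co2_base, 0.0)            # baseline outgoing flux
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        diff = modtran_olr(co2_new, mid) - target  # "Difference New-BG"
        if abs(diff) < tol:
            break
        if diff < 0.0:       # outgoing flux still too small -> warm further
            lo = mid
        else:
            hi = mid
    return mid

# Crude stand-in model (logarithmic CO2 forcing, linear Planck response of
# 3.76 W/m^2 per K), used only to demonstrate that the search converges:
def toy_olr(co2_ppm, t_offset_k, f2x=3.2, baseline=240.0, planck=3.76):
    return baseline - f2x * math.log(co2_ppm / 280.0, 2) + planck * t_offset_k

print(round(temperature_sensitivity(toy_olr), 2))   # ~0.85 K for this toy model
```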

Including cloud model and water vapor

This investigation cannot provide a perfect cloud model. The only purpose is to come to a qualitative conclusion as to whether cloud and water vapor feedback increases or reduces the $CO_2$ sensitivity. The "reality constraint" is that the cloud model should reproduce the average equilibrium IR flux of $240 \frac{W}{m^2}$.
With the "Cumulus Cloud" setting and a water vapor scale of 0.25 this can be achieved. The only other cloud model that achieves this is "Altostratus Cloud" with a water vapor scale of 0.2. With both configurations the $CO_2$ sensitivity of doubling $CO_2$ from 280 ppm to 560 ppm is $1.92 \frac{W}{m^2}$, considerably less than without water vapor.

The temperature sensitivity of doubling $CO_2$ from 280 ppm to 560 ppm in this case, which is the model closest to the real world, is reduced to 0.52 °C.

The change of $CO_2$ concentration between pre-industrial times (280 ppm) and now (415 ppm) therefore corresponds to an increase of the average global temperature of 0.29 °C.
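The 0.29 °C follows from the 0.52 °C per doubling via the usual logarithmic dependence of the $CO_2$ forcing on concentration, i.e. scaling with $\ln(415/280)/\ln 2$; a minimal check:

```python
import math

# Scale the per-doubling sensitivity to the actual concentration change,
# assuming the usual logarithmic dependence of CO2 forcing on concentration.
sensitivity_per_doubling = 0.52            # degC, MODTRAN result quoted above
c_preindustrial, c_now = 280.0, 415.0      # ppm

dT = sensitivity_per_doubling * math.log(c_now / c_preindustrial) / math.log(2.0)
print(f"{dT:.3f} degC")                    # 0.295 degC, i.e. the ~0.29 degC above
```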

This investigation has shown that the $CO_2$ sensitivity in the presence of water vapor and clouds is consistently smaller than without water content in the atmosphere. The consequence is that water vapor and clouds together reduce rather than enhance the greenhouse effect of $CO_2$, i.e. the total feedback of water vapor and clouds is negative, based on the MODTRAN radiative model.
The sensitivity of 0.52 °C is so small that there is no reason to worry about future increases of the $CO_2$ content of the atmosphere.




A trap for fools

When facing a climate of climate alarmism, the question is how to deal with it. The problem is that the subject is complex and difficult, and it would be nice to have some simple argument for winning and ending the discussion. The physics professor Denis Rancourt, emeritus of the University of Ottawa, in his article about global warming and the greenhouse effect pointed to some frequently used arguments of sceptics which are not correct and should never be used.

After initially stating that "Sceptics are correct that warming alarmism has not been justified from scientific principles or from empirical facts. Sceptics are correct that warming alarmism seems to be motivated by careerism and corporate/finance opportunism", he warns not to use the following incorrect arguments (see pp. 17-18), because they make the case of sceptics untrustworthy, which is the last thing needed when standing up against alarmism.

CO2 is only a trace gas

"CO2 is only a trace gas." Yes, but that is not relevant. What is relevant is CO2's contribution to the atmosphere's longwave absorption; it is a question of actual cross section, not absolute concentration. Satellite spectroscopic measurements show unambiguously that CO2 contributes 1/4 to 1/3 of all longwave absorption by the atmosphere (the rest being due to water vapor and clouds, depending on sky conditions) and that CO2 absorption is saturated in its main absorption band, meaning that infrared is completely absorbed within a short distance of about 25 m.
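The saturation statement can be illustrated with the Beer-Lambert law: in the centre of the main $CO_2$ band the absorption length is so short that practically all radiation is absorbed within a few tens of metres, so additional $CO_2$ mainly acts in the band wings and in the thin upper layers of the atmosphere. The sketch below is purely illustrative; the 25 m from the text is used as an assumed absorption length for the band centre:

```python
import math

# Beer-Lambert illustration of band-centre saturation: the transmitted fraction
# through a layer of thickness x is exp(-x / L), with L the absorption length.
# L = 25 m is taken from the text as an ASSUMED value for the centre of the
# main CO2 band at the current concentration.
L_CENTRE = 25.0   # m, assumed absorption length at ~400 ppm

def transmitted(x_m: float, absorption_length_m: float) -> float:
    return math.exp(-x_m / absorption_length_m)

for conc_factor in (1.0, 2.0):                 # current and doubled CO2
    L = L_CENTRE / conc_factor                 # absorption length ~ 1/concentration
    print(f"CO2 x{conc_factor:.0f}: transmission over 100 m = "
          f"{transmitted(100.0, L):.1e}")

# Already at today's concentration less than 2% gets through 100 m in the band
# centre; doubling CO2 changes this saturated part very little, which is why the
# remaining CO2 effect is attributed to the band wings and to high altitudes.
```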

Not a radiation balance effect

"It's not principally a radiation balance effect." Other effects such as pressure or the lapse rate are put forward as the explaining principles instead. Rancourt gives a simple answer: "Turn off the Sun and calculate Earth's temperature!" Energy from the sun arrives as shortwave radiation, and infrared radiation leaves the atmosphere into space. Radiation is the only form of energy transport through empty space, and a high percentage of the outgoing IR is emitted from the atmosphere itself. Only "greenhouse" gases like CO2 or water vapor are able to emit IR.

Violation of thermodynamics?

"Heating the surface by a greenhouse effect violates thermodynamics." This argument is often stated in the context of "re-radiation" or downwelling infrared radiation. While it would indeed be problematic to claim that heat flows from a cold part of the atmosphere to the warmer surface, which it does not, the expression "downwelling radiation" is merely shorthand for the fact that local temperatures adjust towards a steady state that balances the energy fluxes. The steady state is not a state of equal temperature but an atmosphere with a lapse rate, and wherever heat is introduced into the system, it adjusts towards the corresponding equilibrium state.

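A minimal numerical illustration of this point: both the surface and a colder atmospheric layer radiate, but the net radiative heat flow is always from the warmer surface to the colder layer; the downwelling term only reduces the net loss. The temperatures and the emissivity below are illustrative assumptions, not measured values:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

# Illustrative values (assumptions, not measurements): warm surface,
# colder radiating atmospheric layer with an assumed effective emissivity.
T_SURFACE = 288.0     # K
T_ATMOS   = 255.0     # K
EPS_ATMOS = 0.8       # assumed effective emissivity of the layer

up   = SIGMA * T_SURFACE**4                 # upwelling surface radiation
down = EPS_ATMOS * SIGMA * T_ATMOS**4       # "downwelling" radiation from the layer
net  = up - down                            # net radiative heat flow

print(f"up = {up:.0f} W/m^2, down = {down:.0f} W/m^2, net = {net:.0f} W/m^2")
# up ~ 390 W/m^2, down ~ 192 W/m^2, net ~ +198 W/m^2:
# heat flows from warm to cold; the downwelling term only reduces the net loss.
```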
There is no greenhouse effect

"There is no such thing as a greenhouse effect." It is true that the open atmosphere cannot honestly be compared with a greenhouse, whose main effect is to retain heat by suppressing convection. The atmospheric greenhouse effect instead depends

  • on the thickness of the atmosphere
  • on the lapse rate of the atmosphere
  • on the concentration of the different greenhouse gases such as CO2, especially at the top of the atmosphere

A planet's surface (and atmosphere) heats up even without any greenhouse gas present, but it heats up faster and reaches higher temperatures with greenhouse gases.
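A common back-of-the-envelope version of this list (a sketch under simplifying assumptions, not a full radiative calculation): the surface temperature is roughly the effective emission temperature plus lapse rate times mean emission height, and more greenhouse gas raises that emission height. The emission heights used below are illustrative values:

```python
# Back-of-the-envelope greenhouse picture (sketch only):
#   T_surface ~ T_emission + lapse_rate * emission_height
# The emission height values are illustrative assumptions, not measurements.

T_EMISSION = 255.0    # K, effective emission temperature for albedo 0.3
LAPSE_RATE = 6.5      # K/km, average tropospheric lapse rate

def surface_temperature(emission_height_km: float) -> float:
    return T_EMISSION + LAPSE_RATE * emission_height_km

print(surface_temperature(5.0))   # ~287.5 K, close to the observed ~288 K
print(surface_temperature(5.2))   # raising the emission height by 200 m adds ~1.3 K
```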

Good arguments?

This website was made to provide a solid foundation for understanding climate and for discussing climate-related topics. Here is a good start for understanding the greenhouse effect.




Recent albedo change and climate sensitivity



The climate discussion is overwhelmingly dominated by the influence of $CO_2$, which according to the radiation transport equations describing the outgoing longwave radiation (OLR) has a certain influence on the earth's radiation budget.
There are, however, more factors that need to be considered. The incoming energy flux, the SW insolation, is obviously of decisive importance. The actual amount of energy flux coming from the sun, the "solar constant", has proven to be extremely stable, but the global albedo, which determines how much of the solar flux enters the atmosphere, is an extremely important control parameter.

The dilemma about the earth's albedo has been that there is no "nice theory" for determining it from a simple (possibly human-induced) cause. On the contrary, it involves many currently poorly understood factors, some of which are under human control and others not:

  • Clouds of different types at different altitudes,
  • reflecting and scattering aerosols,
  • influences on cloud generation, such as cosmic rays and magnetic fields,
  • surface and atmospheric properties as a consequence of snow cover, urbanisation, agriculture, air pollution, etc.,
  • possible feedback effects of temperature via water vapor.

Lacking a comprehensive theory, the influence of the earth's albedo has for a long time been neglected or ignored in the mainstream discussion.
But there is another approach: satellite measurements have made it possible to actually measure the direct effect of albedo on the solar insolation. Recently a 30-year analysis has been published by J. Herman et al.: Net decrease in the Earth's cloud, aerosol, and surface 340 nm reflectivity.

This careful analysis shows a clear, significant trend in the overall reflectivity of the region representing 75% of the earth's total solar insolation (latitudes -60…60 degrees; near the poles the insolation is smaller and the albedo larger):

Measured earth's reflectivity from 1980 to 2010

The changed reflectivity has been converted into the corresponding changes of the incoming and reflected SW flux:

Energy budget of the incoming SW flux, with changes due to albedo decrease

This means that in the 30 years from 1980 to 2010 the solar insolation flux has increased by $2.33\frac{W}{m^2}$. This albedo-caused increase within 30 years is larger than the estimated forcing caused by the increase of $CO_2$ since the beginning of industrialisation.

Based on the energy balance between the absorbed SW insolation $S_a=S\cdot(1-a)$ and the outgoing LW radiation by means of the Stefan-Boltzmann law $$T=\sqrt[4]{\frac{S\cdot(1-a)}{4\cdot\sigma}},$$ differentiating the relation $T \propto S_a^{1/4}$ gives the temperature sensitivity to insolation changes (without feedback) $$\frac{\Delta T}{T} = 0.25\cdot\frac{\Delta S_a}{S_a} $$
$$\Delta T = 0.25\cdot \frac{2.33\cdot 255}{161.3+78} K = 0.62 K $$
During the same 30 years the estimated change of the OLR flux due to $CO_2$ is $0.6 \frac{W}{m^2}$, resulting in the temperature change $$ \Delta T = 0.25\cdot \frac{0.6\cdot 255}{161.3+78} K = 0.16 K $$
Due to the assumed constant lapse rate, the temperature change at the surface is the same as the calculated temperature change, which refers to a level somewhere in the mid troposphere.
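The two temperature changes above and the numbers used in the consequences listed below can be reproduced with a few lines; this is only the linearized Stefan-Boltzmann arithmetic of the formulas above, not an independent estimate:

```python
# Linearized Stefan-Boltzmann arithmetic of this section: no-feedback
# temperature responses to the albedo-driven SW increase and to the CO2-driven
# OLR change (1980-2010), and the ratio of observed to expected warming.
T_EFF = 255.0            # K, effective emission temperature
S_ABS = 161.3 + 78.0     # W/m^2, absorbed SW flux (surface + atmosphere)

def delta_t(delta_flux_w_m2: float) -> float:
    """dT = 0.25 * T * dS_a / S_a."""
    return 0.25 * T_EFF * delta_flux_w_m2 / S_ABS

dt_albedo = delta_t(2.33)        # albedo-driven insolation increase
dt_co2    = delta_t(0.6)         # CO2-driven OLR reduction
dt_total  = dt_albedo + dt_co2

print(f"albedo: {dt_albedo:.2f} K, CO2: {dt_co2:.2f} K, sum: {dt_total:.2f} K")
print(f"CO2 share of the total: {dt_co2 / dt_total:.0%}")

# Observed warming 1980-2010 (range quoted in the text): 0.48..0.55 K
for observed in (0.48, 0.55):
    print(f"observed {observed:.2f} K -> observed/expected = {observed / dt_total:.2f}")
# -> 0.62 K + 0.16 K = 0.78 K expected without feedback; the observed values
#    correspond to roughly 0.6..0.7 of that, pointing to a negative feedback.
```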
There are two important and simple consequences of these observations and calculations:

  • The influence of $CO_2$ accounts for 20% of the temperature change during the last 30 years, whereas the change of albedo accounts for 80% of the temperature change.
  • Both forcings together would have created an average temperature change of about 0.8 K. The actual change of the global temperature is significantly less: depending on the institution measuring it, it is 0.48..0.55 K, slightly more than half of the change required to restore the Stefan-Boltzmann balance. Therefore there must be a rather strong negative feedback, which reduces the effect of the additional SW flux (from albedo) and of the diminished LW flux (from $CO_2$) by appr. 50%. The most obvious and likely "feedback" is the buffering by heat absorption into, and evaporation from, the oceans, which cover about 70% of the earth's surface.
Global average satellite temperatures (UAH)
Global average surface temperature (HadCRUT4)

For the $CO_2$ sensitivity (the temperature change when doubling $CO_2$) this feedback means that at 600 ppm we can expect a temperature change of less than 0.5 K compared to pre-industrial times. This conclusion is drawn under the assumption that albedo changes are independent of $CO_2$.

Obviously the question arises what caused the decrease of the albedo. There is currently no clear answer to this. The size of the effect and the lack of a plausible mechanism appear to rule out the possibility that it is related to the global $CO_2$ level. It currently seems more likely that cloud generation is causally influenced by aerosols, solar activity, and cosmic rays. But factors of human influence through increasing surface absorption ("urban heat islands", desertification) are also challenges for further research and constructive activity.