Are the natural sinks at an End?

Articles are currently circulating in the media claiming that the natural CO2 sinks have “suddenly and unexpectedly” ceased to function, such as the article in the British newspaper The Guardian: “Trees and land absorbed almost no CO2 last year. Is nature’s carbon sink failing?“. The natural CO2 reservoirs are the biological world, consisting of all living organisms – plants, animals and humans – and the oceans, which store around 50 times the amount of CO2 contained in the atmosphere. It has been known and demonstrated for many decades that both the biological world and the oceans are strong CO2 sinks. Currently, more than half of all anthropogenic emissions are absorbed by these two major sink systems, as shown in Figure 1.

Figure 1: Emissions and natural sink systems, oceans and land life

What has happened that the sink effect is suddenly supposed to be diminishing? Even at first glance, the diagram reveals that the sink effect attributed to land plants is subject to extraordinarily strong fluctuations, much stronger than that of the oceans, for example. This alone should make us suspicious of claims about a “one-off” event within the past year.

A closer look at all the articles published on this topic quickly reveals that they all refer to a single publication. The “scientific” basis and trigger for the current discussion is apparently this article: “Low latency carbon budget analysis reveals a large decline of the land carbon sink in 2023“. 

In order to find an appropriate answer to this, it is necessary to take a closer look and use original data to examine how the changes in concentration develop. In the publications “Emissions and CO2 Concentration – An Evidence Based Approach” and “Improvements and Extension of the Linear Carbon Sink Model” I carefully analyzed the relationship between emissions, concentration increase and sink effect and developed a robust, simple model of the sink effect that not only reproduces the measurement data of the last 70 years very accurately, but also allows reliable forecasts. For example, the concentration data for the years 2000-2020 could be predicted with extremely high accuracy from the emissions and the model parameters determined before the year 2000. However, the most recent series of measurements used in the publications ended in December 2023 and annual averages were used, so the phenomena that are currently causing so much excitement are not yet taken into account.

Detailed analysis of the increase in concentration until August 2024

Since the details of the last two years matter here, the calculations of the publication mentioned are continued with monthly data up to August 2024 in order to get a clear picture of the details and to include the latest data. The starting point is the original Mauna Loa measurement data, shown in Figure 2.

Figure 2: Measured Mauna Loa CO2 concentration data

The monthly data is subject to seasonal fluctuations caused by the uneven distribution of land mass between the northern and southern hemispheres. Therefore, the first processing step is to remove the seasonal influences, i.e. all periodic changes with a period of 1 year. The result can also be seen in Figure 2 (orange color).
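As a minimal sketch of this deseasonalization step (assuming the publicly available NOAA monthly mean file; the file and column names are assumptions and may differ from your download), a centered 12-month moving average removes all strictly annual periodicity:

```python
# Minimal sketch: deseasonalize monthly Mauna Loa data with a centered
# 12-month moving average. Assumes the NOAA GML file "co2_mm_mlo.csv"
# with columns "year", "month", "average" (names may differ).
import pandas as pd

df = pd.read_csv("co2_mm_mlo.csv", comment="#")
df["date"] = pd.to_datetime(dict(year=df["year"], month=df["month"], day=15))
df = df.set_index("date")

# A centered 12-month mean averages out all periodic components
# with a period of one year, leaving the long-term course.
df["deseasonalized"] = df["average"].rolling(window=12, center=True).mean()
print(df[["average", "deseasonalized"]].tail())
```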

The global sea surface temperature is also subject to seasonal fluctuations, but to a much lesser extent, as shown in Figure 3.

Figure 3: Global sea surface temperature anomalies (HadSST4)

Formation and analysis of the monthly increase in concentration

The “raw” increase in concentration Gi is calculated by subtracting successive measuring points, where Ci is the deseasonalized concentration in month i:
Gi = Ci – Ci-1

Figure 4: Growth of the CO2 concentration, original (blue) and smoothed (orange).

It is easy to see that the monthly fluctuations in the concentration growth are considerable and that the particularly high peak at the end of 2023 is by no means a singular event; in particular, the positive peak is preceded by a much larger negative one, which has not been mentioned in the press. The much higher increase in 2015 was also ignored, although it would have made it easy to refute the bold hypothesis of a declining sink effect: the smoothed data (orange) show a clear declining trend in concentration growth after 2015.

After smoothing (orange), the actual trend is easier to recognize than in the raw, noisy differences. As these are monthly values, they must be multiplied by 12 to obtain the annual concentration growth. There is no doubt that the right-hand side of the diagram shows increased concentration growth since 2023, which is interpreted in the current discussion as a decrease in sink performance.

Interpretation of the increase in concentration growth as a result of natural emissions

To illustrate this, Figure 5 shows the sink capacity (green) as the difference between anthropogenic emissions (blue) and concentration growth (orange).
This calculation is a consequence of the continuity equation based on the conservation of mass, according to which the concentration growth Gi in month i results from the difference between the total emissions and all absorptions Ai, where the total emissions are the sum of the anthropogenic emissions Ei and the natural emissions Ni, i.e.
Gi = Ei + Ni – Ai
The effective monthly sink capacity Si is calculated as the difference between the monthly anthropogenic emissions Ei and the monthly concentration growth Gi, i.e.
Si = Ei – Gi
Following the continuity equation above, the effective sink capacity Si is therefore the difference between the absorptions Ai by oceans and plants and the natural emissions Ni:
Si = Ai – Ni
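A minimal sketch of this bookkeeping, assuming two aligned monthly pandas Series (the names are placeholders) and the common conversion factor of roughly 7.8 GtCO2 per ppm:

```python
# Continuity-equation bookkeeping:
#   G_i = C_i - C_{i-1}   (monthly concentration growth)
#   S_i = E_i - G_i       (effective sink capacity)
# `conc` is the deseasonalized monthly concentration in ppm,
# `emissions_gt` the monthly anthropogenic emissions in GtCO2.
GTCO2_PER_PPM = 7.8  # ~2.13 GtC per ppm of CO2, times 44/12

def effective_sink(conc, emissions_gt):
    growth = conc.diff()                         # G_i in ppm/month
    emissions_ppm = emissions_gt / GTCO2_PER_PPM
    return emissions_ppm - growth                # S_i = E_i - G_i
```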

Figure 5: Anthropogenic emissions (blue), CO2 concentration growth (orange) and sink effect (green)

It is easy to see that the effective sink capacity (smoothed over several months) does not fall below 0 even at the right-hand edge of Figure 5. It is, however, actually decreasing in 2023-2024. We must now remember that, according to the continuity equation, the “effective sink capacity” combines the absorptions (sinks in the narrower sense) and the natural emissions. The peaks in concentration growth could therefore also be caused by natural emissions. These are not mentioned at all in the publications that are currently sounding the alarm.

It is generally known, and a consequence of Henry’s Law, that the gas exchange between the sea and the atmosphere depends on the sea surface temperature. We therefore expect increased CO2 outgassing from the oceans as the temperature rises – somewhat exaggeratedly comparable to a bottle of sparkling water in an oven.
This consideration motivates the introduction of temperature as a model parameter in the description of the effective sink capacity. The details of the elaboration of this model extension can be found in the above-mentioned publication and in its simplified description.

Figure 6 shows what the addition of temperature means for modeling the increase in concentration.

Figure 6: Smoothed concentration growth (blue) and its modeling with temperature-independent (green) and temperature-dependent (orange) sink model

While the green curve represents the widely known linear sink model (see here (2024), here (2023), here (2019) and here (1997)), as described in the publication and in the simplified representation, the orange curve also takes into account the dependence on temperature according to the new publication. It turns out that the current “excessive” concentration growth is a natural consequence of the underlying rise in temperature. The sink performance – the linear dependence of the absorption on the CO2 concentration – has not changed at all.

It is therefore high time to establish temperature as a “normal” influencing factor for CO2 concentration changes in the public debate, instead of speculating wildly and without evidence about failing sinks.




Deconstructing the climate narrative

Introduction – How does the climate narrative work?

There is no doubt that there is a climate narrative propagated by scientists, the media and politicians according to which we are on a catastrophic course due to anthropogenic CO2 emissions, which can supposedly only be halted by reducing emissions to zero by 2050.

All those who contradict the narrative, even in subtle details, are ostracized and blacklisted, even if they are renowned scientists or Nobel Prize winners[1][2] – with devastating consequences for job applications, publications and funding proposals.

How is it possible to bring all the important universities, including the most renowned ones such as Harvard, MIT and Stanford, Oxford, Cambridge and Heidelberg, onto the same consensus line? How can the most famous journals such as Nature[3] and Science, as well as popular science journals such as Spektrum der Wissenschaft, only accept a narrow “tunnel of understanding” without obviously ruining their reputation?

In order for the narrative to have such a strong and universal impact, a solid scientific foundation is undoubtedly necessary, which cannot be disputed without embarrassment. Those who do so anyway are easily identified as “climate deniers” or “enemies of science”. 

On the other hand, the predictions and, in particular, the political consequences are so misanthropic and unrealistic that not only has a deep social divide emerged, but an increasing number of contemporaries, including many scientists, are questioning what kind of science it is that produces such absurd results.[4]

A careful analysis of the climate issue reveals a pattern that runs like a red thread through all its aspects. This pattern is illustrated below using four key areas of climate research.

The pattern that has emerged from many years of dealing with the topic is that there is always a correct observation or a valid law of nature at the core. In the next step, however, the results of this observation are extrapolated into the future without verification, exaggerated, or even distorted in their significance. Other relevant findings are omitted or their publication is suppressed.

The typical conclusion that can be drawn from the examples mentioned and many others is that each aspect of the climate threatens the most harmful outcome possible. The combination of several such components then leads to the catastrophic horror scenarios that we are confronted with on a daily basis. As the statements usually relate to the future, they are generally almost impossible to verify.

The entire argumentation chain of the climate narrative takes the following form:

  1. Anthropogenic emissions are growing – exponentially.
  2. Atmospheric concentration increases with emissions as long as emissions are not completely reduced to zero.
  3. The increase in the concentration of CO2 in the atmosphere leads to a – dramatic – increase in the average temperature.
  4. In addition, there are positive feedbacks when the temperature rises, and even tipping points beyond which reversal is no longer possible.
  5. Other explanations such as hours of sunshine or the associated cloud formation are ignored, downplayed or built into the system as a feedback effect.
  6. The overall effects are so catastrophic that they can be used to justify any number of totalitarian political measures aimed at reducing global emissions to zero.

A detailed examination of the subject leads to the conclusion that each of these points shows the pattern described above, namely that there is always a kernel of truth that is harmless in itself. The aim of this paper is to work out the true core and the exaggerations, false extrapolations or omissions of essential information.

1. Anthropogenic emissions are growing – exponentially?

Everyone knows the classic examples of exponential growth, e.g. the chessboard that is filled square by square with double the amount of rice. Exponential growth always leads to disaster. It is therefore important to examine the facts of emissions growth.

Figure 1 Relative growth in global emissions

Figure 1 shows the relative growth of global anthropogenic emissions over the last 80 years. To understand the diagram, recall that constant relative growth means exponential growth[7]: a savings account with 3% interest grows exponentially. Accordingly, we find exponential growth in emissions with a growth rate of around 4.5% between 1945 and 1975, a phase once known as the “economic miracle”. After that, emissions growth fell to 0 by 1980, during the period known as the “recession”, which resulted in changes of government in the USA and Germany.
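As a small illustration of this diagnostic (with made-up numbers, not the actual emissions data): a series whose relative growth stays constant is exponential, while relative growth falling to 0 means a plateau.

```python
# Relative growth r_i = (E_i - E_{i-1}) / E_{i-1}; a constant r over
# time is the signature of exponential growth. Numbers are illustrative.
def relative_growth(series):
    return [(e1 - e0) / e0 for e0, e1 in zip(series, series[1:])]

print(relative_growth([100, 104.5, 109.2, 114.1]))  # ~0.045 each: exponential
print(relative_growth([100, 103, 104, 104]))        # falling to 0: a plateau
```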

A further low point in the growth of emissions was associated with the collapse of communism around 1990, with a subsequent rise again, mainly in the emerging countries. Since 2003, there has been an intended reduction in emissions growth as a result of climate policy.

It should be noted that emissions growth has currently fallen to 0.

Recently, Zeke Hausfather found that total global anthropogenic emissions have been constant within the measurement accuracy since 2011[8], as shown in Figure 2.

As a result, current emission levels are not expected to be exceeded in the future[9].

Figure 2  Anthropogenic emissions have been constant since 2011

The longer-term extrapolation of the current planned future emissions, the so-called “Stated Policies” scenario (from 2021), expects constant global emissions until 2030 and a very slight reduction of 0.3% per year thereafter.

Figure 3: Stated Policies scenario of the IEA, almost constant emissions[11]

As a result, the two future scenarios most frequently used by the IPCC (RCP8.5 and RCP6.0) are far removed[12] from the emission scenarios that are actually possible. Nevertheless, the extreme scenario RCP8.5 is still the most frequently used in model calculations.[13]

The IPCC scenario RCP4.5 and the similar IEA scenario “Stated Policies” shown in Figure 3 (p. 33, Figure 1.4) are the most scientifically sound.[14]

This means that if the realistic emission scenarios are accepted, without questioning the IPCC’s statements about the climate in any other way, a maximum emission-related temperature increase of 2.5°C compared to pre-industrial levels remains.

2. Atmospheric CO2 concentration increases continuously — unless emissions are reduced to zero?

The question is how anthropogenic emissions affect the CO2 concentration in the atmosphere. It is known, and illustrated in Fig. 4 by the International Energy Agency, that by no means all the CO2 emitted remains in the atmosphere; a growing proportion of it is reabsorbed by the oceans and plants.

The statistical evaluation of anthropogenic emissions and the CO2 concentration, taking into account the conservation of mass and a linear model of the natural sinks oceans and biosphere, shows that every year just under 2% of the CO2 concentration exceeding the pre-industrial natural equilibrium level is absorbed by the oceans and the biosphere.

Figure 4: Sources (anthropogenic emissions and land use), sinks of CO2 (oceans and land sinks) and concentration growth in the atmosphere

This absorption[15][16] currently amounts to half of anthropogenic emissions, and the trend is increasing, as shown in Figure 5.

Figure 5: CO2 balance and linear sink model: anthropogenic emissions (blue), concentration growth (orange), natural sinks and their modeling (green)

The most likely global future scenario of the International Energy Agency – the extrapolation of current political regulations (Stated Policies Scenario, STEPS) shown in Fig. 3 – includes a gentle decrease (3% per decade) in global emissions to the 2005 level by the end of the century. These emission reductions are achievable through efficiency improvements and normal technical progress.

If we take this STEPS reference scenario as a basis, the linear sink model yields a further increase in concentration of 55 ppm to a plateau of 475 ppm, where the concentration then remains.
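A back-of-envelope check of this plateau with illustrative numbers (not the exact fitted parameters of the publication): in the linear sink model, the concentration stops rising as soon as emissions equal the absorption.

```python
# Linear sink model: absorption per year = a * (C - C0).
# At the plateau, emissions E equal absorption:
#   E = a * (C - C0)   =>   C_plateau = C0 + E / a
C0 = 280.0  # pre-industrial equilibrium concentration (ppm)
a = 0.02    # absorption rate, just under 2% of the excess per year
E = 3.9     # assumed late-century emissions in ppm/yr (~30 GtCO2/yr)
print(C0 + E / a)  # -> 475.0 ppm
```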

Figure 6  Measured and predicted CO2 concentration with 95% error bar

The essential point is that the CO2 concentration does not rise to climatically dangerous levels. Article 4.1 of the Paris Climate Agreement[17] states that countries must reach their maximum emissions as soon as possible “in order to achieve a balance between anthropogenic greenhouse gas emissions and removals by sinks in the second half of this century“. The Paris Climate Agreement therefore by no means calls for complete decarbonization.

This net-zero balance between emissions and absorption will be reached around 2080 simply by extrapolating today’s behavior, without radical climate measures.

Without going into the details of the so-called sensitivity calculation, the following can be simplified for the further temperature development:

Assuming that the CO2 concentration is fully responsible for the temperature development of the atmosphere: the CO2 concentration in 2020 was 410 ppm, i.e. (410-280) ppm = 130 ppm above the pre-industrial level, and the temperature was about 1°C higher than before industrialization. Based on the above forecast, we can expect the CO2 concentration to increase by a further (475-410) ppm = 65 ppm, just half of the previous increase. Consequently, even if we are convinced of the climate impact of CO2, we can expect an additional half of the previous temperature increase, i.e. 0.5°C. This means that by 2080 the temperature will be 1.5°C above pre-industrial levels, meeting the target of the Paris Climate Agreement even without radical emission reductions.

3. Atmospheric CO2 concentration causes a – dramatic? – rise in temperature

After the discussion about possible future CO2 quantities, the question arises as to their impact on the climate, i.e. the greenhouse effect of CO2 and its influence on the temperature of the earth’s surface and the atmosphere.  

The possible influence of CO2 on global warming arises because its absorption of thermal radiation attenuates the radiation that reaches outer space. The physics of this process is radiative transfer[18]. As the topic is fundamental to the entire climate debate, but demanding and difficult to understand, the complicated physical formulas are not used here.

In order to measure the greenhouse effect, the infrared radiation emitted into space must be measured. However, the expected greenhouse effect of 0.2 W/m2 per decade[19] is so tiny that it is not directly detectable with today’s satellite technology, which has a measurement accuracy of around 10 W/m2[20].

We therefore have no choice but to make do with mathematical models of the physical radiative transfer equation. However, this is not valid proof of the effectiveness of this CO2 greenhouse effect in the real, much more complex atmosphere.

There is a widely recognized simulation program, MODTRAN[21], with which the emission of infrared radiation into space, and thus also the CO2 greenhouse effect, can be physically correctly simulated:

Figure 7 shows that the MODTRAN reconstruction of the infrared spectrum is in excellent agreement with the infrared spectrum measured from space. We can thus justify the applicability of the simulation program and conclude that the simulation can also be used to describe hypothetical constellations with sufficient accuracy.

With this simulation program we want to check the most important statements regarding the greenhouse effect.

Figure 7: Comparison between measured infrared spectrum and infrared spectrum simulated with MODTRAN

To start in familiar territory, we first try to reproduce the commonly published “pure CO2 greenhouse effect” by allowing the solar radiation, which is not reduced by anything, to warm the earth and its infrared radiation into space to be attenuated solely by the CO2 concentration. The CO2 concentration is set to the pre-industrial level of 280 ppm.

We use the so-called standard atmosphere[22], which has proven itself for decades in calculations that are important for aviation, but remove all other trace gases, including water vapor. The other gases such as oxygen and nitrogen are assumed to be present, so nothing changes in the thermodynamics of the atmosphere. By slightly correcting the ground temperature to 13.5°C (the reference temperature is 15°C), the infrared radiation is set to 340 W/m2. This is just ¼ of the solar constant[23], so it corresponds exactly to the solar radiation distributed over the entire surface of the earth.

The “CO2 hole”, i.e. the reduced radiation in the CO2 band compared to the normal Planck spectrum[24] , is clearly visible in the spectrum.

Figure 8 Simulated IR spectrum: only pre-industrial CO2

What happens if the CO2 concentration doubles?

Figure 9: Radiative forcing for CO2 doubling (no albedo, no water vapor). Figure 9a: Temperature increase compensating the radiative forcing from Fig. 9.

Fig. 9 shows that doubling the CO2 concentration to 560 ppm reduces the heat flux of infrared radiation by 3.77 W/m2. This figure is used by the IPCC and almost all climate researchers to describe the CO2 forcing. In Fig. 9a, we change the ground temperature offset from -1.5°C to -0.7°C in order to restore the radiation of 340 W/m2. This warming of 0.8°C for a doubling of the CO2 concentration is referred to as the “climate sensitivity”. It is surprisingly low given the current reports of impending climate catastrophes.
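As a plausibility check (not part of the text’s MODTRAN runs): the widely cited logarithmic approximation of CO2 forcing from Myhre et al. (1998) gives a very similar number for a doubling.

```python
# Logarithmic approximation of CO2 radiative forcing (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  in W/m^2
import math
print(5.35 * math.log(560 / 280))  # ~3.71 W/m^2, close to MODTRAN's 3.77
```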

This is especially true considering that the settings of the simulation program used so far are completely at odds with the real Earth’s atmosphere:

  • No consideration of the albedo, the reflection of light,
  • No consideration of clouds and water vapor

We will now approach the real conditions step by step. The scenarios are summarized in Table 1:

Scenario                                             Albedo  Irradiation  CO2 before  Temperature  CO2 after  Forcing  Temp. increase
                                                             (W/m2)       (ppm)       (°C)         (ppm)      (W/m2)   for balance (°C)
--------------------------------------------------------------------------------------------------------------------------------------
Pre-industrial CO2 only, no clouds, no water vapor   0       340          280         13.7         560        -3.77    0.8
No greenhouse gases, no clouds (CO2 from 0-280 ppm)  0.125   297.5        0           -2           280        -27      7
CO2 only, albedo, no clouds, no water vapor          0.125   270          280         5            560        -3.2     0.7
Pre-industrial standard atmosphere                   0.3     240          280         15           560        -2       0.5
Pre-industrial standard atmosphere, today's CO2      0.3     240          280         15           420        -1.1     0.3

Table 1: MODTRAN scenarios under different conditions, see text.

The scenario in the first row of Table 1 is the “pure CO2” scenario just discussed.

In the second row, we go one step further back and also remove the CO2: a planet without greenhouse gases, without clouds, without water vapor. But the Earth’s surface reflects sunlight, so it has an albedo[25]. The albedo value of 0.125 corresponds to that of other rocky planets as well as of the ocean surface. Surprisingly, in this case the surface temperature is -2°C (and not -18°C, as is often claimed!), because without water vapor there is no cloud albedo. If the CO2 concentration is now increased to the pre-industrial level of 280 ppm, the infrared radiation is reduced by 27 W/m2. This large radiative forcing is offset by a temperature increase of 7°C.

We can see that there is a considerable greenhouse effect between the situation without any greenhouse gases and the pre-industrial state, with a warming of 7°C.

The third row takes this pre-industrial state, i.e. Earth’s albedo, 280 ppm CO2, no clouds and no water vapor, as the starting point for the next scenario. If the CO2 concentration is doubled, the radiative forcing is -3.2 W/m2, slightly less than in the first “pure CO2” scenario. As a result, the warming of 0.7°C required to restore radiative equilibrium is also slightly lower here.

After these preparations, the 4th row represents the pre-industrial standard atmosphere with clouds, water vapor and the real measured albedo of 0.3, with the ground temperature of 15°C corresponding to the standard atmosphere. There are now several ways to adjust cloud cover and water vapor in order to achieve an infrared radiation of 340 W/m2 · (1-a) = 240 W/m2, corresponding to the albedo a = 0.3. The exact choice of these parameters is not important for the result as long as the radiation is 240 W/m2.

In this scenario, doubling the CO2 concentration to 560 ppm causes a radiative forcing of -2 W/m2 and a compensating temperature increase, i.e. a sensitivity, of 0.5°C.

In addition to the scenario of a doubling of the CO2 concentration, it is of course also interesting to see what the greenhouse effect has achieved to date. The current CO2 concentration of 420 ppm is just in the middle between the pre-industrial 280 ppm and double that value.

In the 5th row of the table, the increase from 280 ppm to 420 ppm causes a radiative forcing of -1.1 W/m2 and a compensating temperature increase of 0.3°C. From this result it follows that the increase in CO2 concentration since the beginning of industrialization has been responsible for a global temperature increase of 0.3°C.

This is much less than the average temperature increase since the beginning of industrialization.  The question therefore arises as to how the “remaining” temperature increase can be explained.

There are several possibilities:

  • Positive feedback effects that intensify CO2-induced warming. This is the direction taken by the Intergovernmental Panel on Climate Change and the topic of the next chapter.
  • Other causes such as cloud albedo. This is the subject of chapter 5.
  • Random fluctuations. In view of the chaotic nature of weather events, chance is often invoked. This possibility remains open in the context of this paper.

4. Feedback leads to — catastrophic? — consequences

The maximum possible climate sensitivity according to the previous chapter, i.e. the temperature increase for a doubling of the CO2 concentration, is 0.8°C; under real conditions it is rather 0.5°C.

It was clear early on in climate research that such a low climate sensitivity could not seriously worry anyone in the world. In addition, the measured global warming is greater than predicted by the radiative transfer equation.

This is why feedbacks were brought into play; the most prominent publication in this context is by James Hansen et al. from 1984: “Climate Sensitivity: Analysis of Feedback Mechanisms”[26]. It was James Hansen who significantly influenced US climate policy with his appearance before the US Senate in 1988[27]. Prof. Levermann argued similarly at a hearing of the German Bundestag’s Environment Committee[28], claiming that the temperature would rise by 3°C due to feedback.

The high sensitivities for a doubling of the CO2 concentration published by the IPCC, between 1.5°C and 4.5°C, arise with the help of these feedback mechanisms.

In particular, the question arises as to how a small warming of 0.8°C can lead to a warming of 4.5°C through feedback without the system getting completely out of control?

By far the most important feedback in this context is the water vapor feedback.

How does water vapor feedback work?

The water vapor feedback consists of a 2-step process:

  • If the air temperature rises by 1°C, the air can absorb 6% more water vapor[29] .  It should be noted that this percentage is the maximum possible water vapor content. Whether this is actually achieved depends on whether sufficient water vapor is available.
  • The radiation transport of infrared radiation depends on the relative humidity:
    Additional humidity reduces the emitted infrared radiation as a result of absorption by the additional water vapor.
    Using the MODTRAN simulation program already mentioned, an increase in humidity by 6%, e.g. from 80% to 86%, is found to reduce the infrared radiation by 0.69 W/m2[30].

This reduced infrared radiation is a negative radiative forcing. The temperature increase compensating for this attenuation is the primary feedback gain g. It amounts to 0.19°C for an original temperature increase of 1°C, i.e. g = 0.19.

The total feedback f results as a geometric series[31] from the recursive application of the above mechanism – the additional temperature increase of 0.19°C in turn produces additional water vapor. This relationship is described by James Hansen in his 1984 paper[32]:

f = 1 + g + g^2 + g^3 + … = 1/(1-g)

With g=0.19, the feedback factor f = 1.23. 

Assuming a greenhouse effect from radiative transfer of 0.8°C, the maximum possible feedback results in a temperature increase of 0.8°C · 1.23 = 0.98°C ≈ 1°C; with the sensitivity of 0.5°C determined here, it is 0.5°C · 1.23 = 0.62°C.

Both values are lower than the lowest published sensitivity of 1.5°C of the models used by the IPCC.

The warming attributable to CO2 since the beginning of industrialization is therefore 0.3°C · 1.23 = 0.37°C even with feedback.

This proves that even the frequently invoked water vapor feedback does not lead to exorbitant and certainly not catastrophic global warming.

5. But it is warming? – Effects of clouds

Stopping at this point would leave anyone dealing with the climate issue with the obvious question: “But the earth is warming, and by more than would be possible according to the revised greenhouse effect including feedback?”

For this reason, the effects of actual cloud formation, which until recently have received little attention in the climate debate, are examined here[33] .

Investigation of changes in global cloud cover

Jay R Herman from NASA[34] has calculated and evaluated the average reflectivity of the Earth’s cloud cover with the help of satellite measurements over a period of more than 30 years:

Figure 10 Cloud reflectivity between 1979 and 2011

He identified a clear trend of decreasing cloud cover. From this he calculated how this change affects the relevant components of the global energy budget:

Figure 11: Change in the energy budget due to the change in cloud reflectivity

The result was that, due to the reduced cloud cover, solar radiation increased by 2.33 W/m2 in 33 years. That is 0.7 W/m2 of radiative forcing per decade. In contrast, the decrease in outgoing radiation due to the increase in CO2 concentration amounted to a maximum of 0.2 W/m2 per decade.[35]

According to this study, the influence of clouds on the climate, at 78%, is at least 3.5 times greater than that of CO2, which therefore has an influence of 22% at most.

Conclusion – there is no impending climate catastrophe

Let us summarize the stages of these observations on the deconstruction of the climate narrative once again:

  1. There is no exponential growth in CO2 emissions. This phase existed until 1975, but it is long gone, and global emissions have reached a plateau in the last 10 years.
  2. The CO2 concentration is still growing despite constant emissions, but its growth has already slowed and, assuming the most likely emissions scenario, will stop in the second half of the century.
  3. The physically plausible greenhouse effect of CO2 is much lower than is usually claimed; the sensitivity that can be justified under real atmospheric conditions is only 0.5°C.
  4. Estimating the maximum possible feedback effect of water vapor results in an upper limit of the feedback factor of about 1.25. This does not justify temperature increases of 3°C or more.
  5. There are plausible, simple explanations for the earth’s temperature development. The most important of these is that, as a result of various air pollution control measures (reduction of wood and coal combustion, catalytic converters in cars, etc.), aerosols in the atmosphere have decreased over the last 70 years, which has led to a reduction in cloud formation and therefore to an increase in solar radiation.

Footnotes

[1]https://www.eecg.utoronto.ca/~prall/climate/skeptic_authors_table.html
[2]https://climatlas.com/tropical/media_cloud_list.txt
[3]https://www.cfact.org/2019/08/16/journal-nature-communications-climate-blacklist/
[4]e.g. https://clintel.org/
[5]Raw data: https://ourworldindata.org/co2-emissions
[6]Relative growth: https://www.statisticshowto.com/relative-rate-of-change-definition-examples/#:~:text=Relative%20rates%20of%20change%20are,during%20that%20ten%2Dyear%20interval.
[7]https://www.mathebibel.de/exponentielles-wachstum
[8]https://www.carbonbrief.org/global-co2-emissions-have-been-flat-for-a-decade-new-data-reveals/
[9]https://www.carbonbrief.org/analysis-global-co2-emissions-could-peak-as-soon-as-2023-iea-data-reveals/
[10]https://www.carbonbrief.org/global-co2-emissions-have-been-flat-for-a-decade-new-data-reveals/
[11]https://www.iea.org/data-and-statistics/charts/co2-emissions-in-the-weo-2021-scenarios-2000-2050
[12]https://www.nature.com/articles/d41586-020-00177-3
[13]https://rogerpielkejr.substack.com/p/a-rapidly-closing-window-to-secure
[14]https://iea.blob.core.windows.net/assets/4ed140c1-c3f3-4fd9-acae-789a4e14a23c/WorldEnergyOutlook2021.pdf
[15]https://judithcurry.com/2023/03/24/emissions-and-co2-concentration-an-evidence-based-approach/
[16]https://www.mdpi.com/2073-4433/14/3/566
[17]https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX:22016A1019(01)
[18]http://web.archive.org/web/20210601091220/http:/www.physik.uni-regensburg.de/forschung/gebhardt/gebhardt_files/skripten/WS1213-WuK/Seminarvortrag.1.Strahlungsbilanz.pdf
[19]https://www.nature.com/articles/nature14240
[20]https://www.sciencedirect.com/science/article/pii/S0034425717304698
[21]https://climatemodels.uchicago.edu/modtran/
[22]https://www.dwd.de/DE/service/lexikon/Functions/glossar.html?lv3=102564&lv2=102248#:~:text=In%20der%20Standardatmosph%C3%A4re%20werden%20die,Luftdruck%20von%201013.25%20hPa%20vor.
[23]https://www.dwd.de/DE/service/lexikon/Functions/glossar.html?lv3=102520&lv2=102248#:~:text=Die%20Solarkonstante%20ist%20die%20Strahlungsleistung,diese%20Strahlungsleistung%20mit%20ihrem%20Querschnitt.
[24]https://de.wikipedia.org/wiki/Plancksches_Strahlungsgesetz
[25]https://wiki.bildungsserver.de/klimawandel/index.php/Albedo_(simple)
[26]https://pubs.giss.nasa.gov/docs/1984/1984_Hansen_ha07600n.pdf
[27]https://www.hsgac.senate.gov/wp-content/uploads/imo/media/doc/hansen.pdf
[28]https://www.youtube.com/watch?v=FVQjCLdnk3k&t=600s
[29]A value of 7% is usually given, but the 7% is only possible from an altitude of 8 km due to the reduced air pressure there.
[30]https://klima-fakten.net/?p=9287
[31]https://de.wikipedia.org/wiki/Geometrische_Reihe
[32]https://pubs.giss.nasa.gov/docs/1984/1984_Hansen_ha07600n.pdf
[33]The IPCC generally treats clouds only as potential feedback mechanisms.
[34]https://www.researchgate.net/publication/274768295_A_net_decrease_in_the_Earth%27s_cloud_aerosol_and_surface_340_nm_reflectivity_during_the_past_33_yr_1979-2011
[35]https://www.nature.com/articles/nature14240




How large is the Greenhouse Effect in Germany? — A Statistical Analysis


[latexpage]

High correlation as an indication of causality?

The argument that CO2 determines the mean global temperature is often illustrated, or even justified, with a diagram showing a strong correlation between CO2 concentration and mean global temperature – here, for example, the mean annual concentration measured at Mauna Loa and the annual global sea surface temperatures:

Although there are strong systematic deviations between 1900 and 1975 – 75 years after all – the correlation has been strong since 1975.
If we try to explain the German mean temperatures with the CO2 concentration data from Mauna Loa, available since 1959, we get a clear description of the trend in temperature development, but no explanation of the strong fluctuations:

The “model temperature” $\hat{T}_i$ estimated from the logarithmic CO2 concentration data $\ln(C_i)$ measured in year $i$ using the least squares method is given by
$\hat{T}_i = 7.5\cdot \ln(C_i) - 35.1$ (°C)

 If we add the annual hours of sunshine as a second explanatory variable, the fit improves somewhat, but we are still a long way from a complete explanation of the fluctuating temperatures. As expected, the trend is similarly well represented, and some of the fluctuations are also explained by the hours of sunshine, but not nearly as well as one would expect from a causal determinant:

With the extension by the hours of sunshine $S_i$, the model equation for the estimated temperature $\hat{T}_i$ becomes
$\hat{T}_i = 5.8\cdot \ln(C_i) + 0.002\cdot S_i - 28.5$ (°C)
The relative weight of the CO2 concentration has decreased slightly with an overall improvement in the statistical explanatory value of the data.

However, it looks as if the time interval of 1 year is far too long to correctly treat the effect of solar radiation on temperature. It is obvious that the seasonal variations are caused by solar radiation.
The effects of irradiation are not all instantaneous; storage effects must also be taken into account. This corresponds to our perception that summer heat is stored for 1-3 months, so that the warmest months occur only after the period of greatest solar radiation. We therefore need a model based on the energy flow that is fed with monthly measured values and provides for storage.

Energy conservation – improving the model

To improve understanding, we create a model with monthly data taking into account the physical processes (the months are counted with the index variable $i$ ):

  • Solar radiation supplies energy to the earth’s surface, which is assumed to be proportional to the number of hours of sunshine per month $S_i$,
  • assuming the greenhouse effect, energy is also supplied; a linear function of $\ln(C_i)$ is assumed for the monthly energy input (or prevented energy loss),
  • the top layer of the earth’s surface stores the energy and releases it again; the monthly release is assumed to be a linear function of the surface temperature $T_i$,
  • the monthly temperature change in Germany is assumed to be proportional to the energy change.

This results in the following balance equation for the model; the constant $d$ makes it possible to use arbitrary measurement units:
$\hat{T}_i - \hat{T}_{i-1} = a\cdot \hat{T}_{i-1} + b\cdot S_i + c\cdot \ln(C_i) + d$
On the left-hand side of the equation is the temperature change as a representative of the energy balance change, while the right-hand side represents the sum of the causes of this energy change.
To determine the coefficients $a,b,c,d$ using the least squares method, the measured temperature $T_i$ is used instead of the modeled temperature $\hat{T}_i$.
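A minimal sketch of this estimation step, with synthetic placeholder series standing in for the DWD temperature and sunshine data and the Mauna Loa concentration (replace them with the real monthly series):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data only; substitute the real DWD and Mauna Loa series.
idx = pd.date_range("1959-01", periods=780, freq="MS")
rng = np.random.default_rng(0)
sun = pd.Series(120 + 100 * np.sin(2 * np.pi * (idx.month - 3) / 12), index=idx)
co2 = pd.Series(np.linspace(316, 420, len(idx)), index=idx)
temp = pd.Series(9 + 9 * np.sin(2 * np.pi * (idx.month - 5) / 12)
                 + rng.normal(0, 2, len(idx)), index=idx)

# Fit  T_i - T_{i-1} = a*T_{i-1} + b*S_i + c*ln(C_i) + d  by least squares.
y = temp.diff().dropna()
X = sm.add_constant(pd.DataFrame({
    "a (T_prev)": temp.shift(1),
    "b (sun)": sun,
    "c (lnC)": np.log(co2),
}).loc[y.index])
print(sm.OLS(y, X).fit().summary())
```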

Here are the monthly temperature and sunshine hour data. It can be seen that the temperature data lags behind the sunshine hours data by around 1-2 months, but has a similar overall trend:

This fits with the assumption that we actually have a storage effect. The balance equation should therefore provide meaningful values. However, we need to take a closer look to evaluate the estimated result.

In this diagram, the values of the respective coefficients are shown in the first column and their standard errors in the second, followed by the so-called t-statistic and then by the probability that the assumption of a non-zero coefficient is incorrect, the so-called error probability. A coefficient is only significant if this probability is close to 0, which is the case if the t-statistic is greater than 3 or less than -3. Finally, the last two columns describe the so-called 95% confidence interval: the actual value lies within this interval with 95% probability.

     Coefficient  Std.Error   t-Value    P>|t|    [0.025     0.975]
--------------------------------------------------------------------
a        -0.4826     0.0142  -33.9049   0.0000    -0.5105    -0.4546
b         0.0492     0.0013   38.8127   0.0000     0.0467     0.0517
c         0.6857     0.9038    0.7587   0.4483    -1.0885     2.4598
d        -6.3719     5.3013   -1.2020   0.2297   -16.7782     4.0344

Here, the error probabilities of the coefficients $c$ and $d$ are so high, at 45% and 23% respectively, that we must conclude that both $c=0$ and $d=0$. $c$ measures the significance of the CO2 concentration for the temperature. This means that the CO2 concentration has had no statistically significant influence on temperature development in Germany for 64 years – the period of the largest anthropogenic emissions in history.
The fact that $d$ also assumes the value 0 is more due to chance, as this constant depends on the units of measurement of the CO2 concentration and the temperature.

As a result, the balance equation is adjusted:
$T_i - T_{i-1} = a\cdot T_{i-1} + b\cdot S_i + d$
 with the result:

     Coefficient  Std.Error   t-Value    P>|t|    [0.025     0.975]
--------------------------------------------------------------------
a        -0.4823     0.0142  -33.9056   0.0000    -0.5102    -0.4544
b         0.0493     0.0013   38.9661   0.0000     0.0468     0.0517
d        -2.3520     0.1659  -14.1788   0.0000    -2.6776    -2.0264

With $c=0$, the constant $d$ is now again highly significant. The other two coefficients $a$ and $b$ have hardly changed. They deserve a brief discussion:

The coefficient $a$ indicates which part of the energy, measured as temperature, is released again over the course of a month – almost half. This factor is independent of the zero point of the temperature scale; choosing K or anomalies instead of °C would give the same value. The value corresponds approximately to the subjective perception of how far the time of maximum summer temperature lags behind the maximum solar radiation.
The coefficient $b$ indicates the factor by which the hours of sunshine translate into monthly temperature changes.

The result is not just an abstract statistic, it can also be visualized by reconstructing the monthly temperature curve of the last 64 years with the help of the model described.

The reconstruction of the entire temperature curve is based on the time series of sunshine hours and a single temperature starting value $\hat{T}_{-1}=T_{-1}$, the temperature of the month preceding the investigated time series, in this case December 1958.
The reconstruction is carried out with the following recursion from the sunshine hours over the 768 months from January 1959 to December 2023:
$\hat{T}_i = \hat{T}_{i-1} + a\cdot \hat{T}_{i-1} + b\cdot S_i + d$ $(0\leq i < 768 ) $
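A minimal sketch of this recursion, using the coefficients from the table above (`sun` stands for the 768 monthly sunshine-hour values, `t_start` for December 1958):

```python
# Reconstruction T^_i = T^_{i-1} + a*T^_{i-1} + b*S_i + d, seeded with
# the December 1958 temperature; coefficients from the fit above.
def reconstruct(sun, t_start, a=-0.4823, b=0.0493, d=-2.3520):
    t = t_start
    out = []
    for s in sun:          # January 1959 ... December 2023
        t = t + a * t + b * s + d
        out.append(t)
    return out
```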
Here is the complete reconstruction of the temperature data in comparison with the original temperature data:

 The last 10 years are shown enlarged for a clearer presentation:

It is noticeable that the residual, i.e. the deviation of the reconstruction from the actual temperatures, appears symmetrical around 0 up to the end of the investigated period and shows no obvious systematic deviations. The measure of the error of the reconstruction is the standard deviation of the residual, which is 2.5°C. Since we are investigating a long period of 64 years, a fine analysis of the long-term trends of the original temperatures, the reconstruction and the residual can establish an upper limit for the possible influence of CO2.

Detailed analysis of the residue

If we determine the average slope of the three curves – original temperature data, reconstruction and residual – over the entire 64-year period by fitting a regression line, we obtain the following long-term values:

  • Original temperature data: 0.0027 °C/month = 0.032 °C/year
  • Reconstructed temperature data: 0.0024°C/month = 0.029 °C/year
  • Residual: 0.00028 °C/month = 0.0034 °C/year

Of the original temperature trend, 90% is explained by the number of hours of sunshine. This leaves only 10% of the trend for other causes. Until proven otherwise, we can therefore assume that the increase in CO2 concentration is responsible for at most these 10%, i.e. for at most 0.03°C per decade over the last 64 years. Statistically, however, the contribution of the CO2 concentration cannot even be considered significant. It should be borne in mind that this simple model neglects many influencing factors and inhomogeneities, so the CO2 concentration is not the only conceivable factor besides the hours of sunshine; this is why the CO2 influence is not considered statistically significant.

Extension – correction by approximation of the actual irradiation

So far, we have used the hours of sunshine as a representative of the actual energy flow. This is not entirely correct, because an hour of sunshine in winter means significantly less irradiated energy than in summer due to the much shallower angle of incidence.

The seasonal course of the weighting of the incoming energy flow has the form shown below; the hours of sunshine must be multiplied by this weighting to obtain the energy flow.
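The text does not spell out the weighting function; one plausible approximation (an assumption, not necessarily the author’s exact method) weights each month by the sine of the sun’s noon elevation at Germany’s mean latitude:

```python
# Hypothetical monthly weighting: sine of the solar elevation at noon,
# evaluated mid-month for ~51 deg N (Germany's mean latitude).
import math

LAT = math.radians(51.0)

def monthly_weight(month):  # month = 1..12
    day_of_year = 30.4 * (month - 0.5)
    # simple sinusoidal approximation of the solar declination
    decl = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 81) / 365)
    elevation = math.pi / 2 - LAT + decl  # noon elevation angle
    return max(math.sin(elevation), 0.0)

print([round(monthly_weight(m), 2) for m in range(1, 13)])
```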

With these monthly weightings, the model is again estimated from solar radiation and CO2. Again, the contribution of CO2 must be rejected for lack of significance. However, the reconstruction of the temperature from the irradiated energy flow is slightly better than the reconstruction above.

The standard deviation of the residual has been reduced to 2.1°C by correcting the hours of sunshine to the energy flow.

Possible generalization

Worldwide, the recording of sunshine hours is far less complete than that of temperature measurements. Therefore, the results for Germany cannot simply be reproduced worldwide.
However, satellites measure cloud cover and the reflection of solar radiation from clouds. This data leads to similar results, namely that the increase in CO2 concentration is responsible for at most 20% of the global average temperature increase. As the global average temperature increase is lower than that in Germany, this likewise leads to an upper limit of 0.03°C per decade for the consequences of the CO2-induced greenhouse effect.




How does the atmospheric Greenhouse Effect work?

Much has been written about the greenhouse effect and many comparisons have been made. However, much of this is misleading or even wrong.
The greenhouse effect is caused by the fact that, with increasing CO2, a slightly increasing proportion of infrared radiation is emitted from the upper, cold layers of the earth’s atmosphere (i.e. the stratosphere) into space.
The facts are complicated in detail, which is why it is so easy to scare people with exaggerations, distortions or lies. Here I would like to describe the basics of the atmospheric greenhouse effect, in which CO2 plays an important role, in a physically correct way and without formulas.

Viewed from space, the temperature balance of the Earth’s surface and atmosphere is determined by

  • irradiation of short-wave, largely visible sunlight and
  • radiation of long-wave, invisible infrared radiation.

If the energy content of the incoming radiation is equal to that of the outgoing radiation, there is equilibrium and the average temperature of the earth remains constant. Warming occurs whenever either the outgoing radiation decreases or the irradiation increases, until equilibrium is restored.

Infrared radiation is the only way the Earth can emit energy (heat) into space. It is therefore necessary to understand how the mechanisms of infrared radiation work.

The mechanisms of infrared radiation into space

There are only 2 ways in which the Earth can release energy into space:

  • The molecules of the earth’s surface or the sea surface emit infrared waves at ground temperature (average 15°C = 288 K).
  • The molecules of the so-called greenhouse gases, mainly water vapor and CO2 (to a much lesser extent methane and some other gases), emit infrared waves from the atmosphere at the temperature prevailing in their environment. The other gases in the atmosphere, such as oxygen or nitrogen, are unable to emit significant amounts of infrared radiation.
    CO2 differs from water vapor in that it is only active in a small wavelength range. On the other hand, the proportion of water vapor molecules in the atmosphere decreases very quickly above an altitude of 5 km, because the water vapor condenses back into clouds when it cools down and then rains out. We can see this for ourselves: in an airplane at an altitude of 10 km, we are always above the clouds, and there is virtually no water vapor above the clouds. CO2, however, is evenly mixed with the other gases, primarily oxygen and nitrogen, right up to the highest layers of the atmosphere.

CO2 and water vapor are therefore like two competing handball teams, one of which (the water vapor) is only allowed to run up to the halfway line and the other (CO2 ) can only move within a narrow longitudinal strip of the playing field. This narrow longitudinal strip becomes a little wider when the “CO2 team” gets more players (more CO2 ). The goal is the same for both teams (space) and stretches across the entire width of the pitch. As long as the ball is still far away from the goal, another player catches it rather than it entering the goal. This other player passes the ball back in a random direction. The closer the players are, the quicker the ball is caught and played back. The closer the ball gets to the goal, the further apart the players stand. This means that it is easier for the ball to get between the players and into the goal.

As long as there are other greenhouse gas molecules in the vicinity, the infrared radiation cannot reach outer space (the other molecules are too close together); it is captured again by the other molecules and re-emitted by them. Specifically, infrared radiation in the lower atmosphere only has a range of around 25 m before it is intercepted again by another greenhouse gas molecule, usually a water molecule or CO2. The thinner the greenhouse gases become with increasing altitude (fewer players), the more likely it is that the infrared radiation will reach space.

From this we can conclude that there are in principle 3 layers from which infrared radiation reaches space:

  • When the air is dry and without clouds, there is a part of the infrared called the “atmospheric window” that radiates directly from the ground into space (this is when there are no or very few water vapor players in the field),

  • between 2 and 8 km altitude, on average at 5 km altitude, is the upper edge of the clouds, from where the water vapor molecules of the clouds emit a large proportion of the infrared radiation into space at an average of 255 K = -18°C
  • the proportion of infrared radiation in the wavelength range around 15 micrometers (the narrow strip of the playing field) is transported by CO2 into the high cold layers of the stratosphere, from where it is emitted into space at around 220 K = -53°C.

This leads to a competitive situation as to whether a water molecule can radiate directly or whether its infrared radiation is still intercepted by a CO2 molecule and transmitted to the heights of the stratosphere.

The greenhouse effect

How does a growing CO2 concentration lead to reduced energy radiation into space and thus to warming?

It is important to know that the radiated energy decreases sharply with decreasing air temperature and that the temperature decreases with increasing altitude. If the CO2 concentration increases over time, the wavelength range in which the CO2 is “responsible” for radiation becomes a little wider (the narrow strip of the playing field). This means that a small part of the infrared radiation that would otherwise be emitted by the water vapor at 255 K is now emitted by the CO2 at 220 K, i.e. with significantly lower energy. As a consequence, this means that the energy of the total radiation is slightly reduced – the radiation from sunlight, which is assumed to be constant, therefore predominates and a warming effect occurs.

However, the effect is not as great as it is usually portrayed in the media:
Since the beginning of industrialization, the earth’s infrared radiation has decreased by just 2 watts/sqm, with a 50% increase in CO2 concentration from 280 ppm to 420 ppm. With an average radiation of 240 watts/sqm, that is[1] just under 1% in 170 years.
We now know the first way in which the balance mentioned at the beginning can be disturbed: by a change in the outgoing radiation. But so far only to a very small extent.

The effects of changes in irradiation are greater than the greenhouse effect

The second way of disturbing the balance is through changes in irradiation.
The fluctuations in irradiation caused by changing cloud cover are up to 100 times greater than the aforementioned 2 W/sqm attributable to the greenhouse effect (as owners of photovoltaic systems can confirm). Looking at Germany: according to the German Weather Service, the number of hours of sunshine has been increasing by 1.5% per decade for 70 years[2]. In other words, in less than 10 years this effect exceeds that of the greenhouse effect over 170 years. For a more precise numerical comparison, both measurement series must be available in the relevant period: over the last 40 years, the increase in hours of sunshine in Germany caused 6 times the warming of the greenhouse effect. The changes in solar radiation are therefore responsible for global warming to a far greater extent than the changes in CO2 concentration.

This describes and puts into perspective the generally known positive greenhouse effect. There is therefore no reason to use the greenhouse effect to justify fear and panic. And there is an urgent need for research, the media and politicians to look into the influence and causes of the increasing hours of sunshine. An initial, more detailed analysis of the data from the German Weather Service shows that the changes in hours of sunshine in Germany explain 90% of the monthly temperatures over the last 70 years and that the greenhouse effect has no statistically significant influence in Germany.

One important phenomenon is still missing: in the Antarctic, the increase in CO2 concentration leads to cooling, which is known as the negative greenhouse effect.

The negative greenhouse effect in the Antarctic

There is a peculiar effect when we look at the one area of the earth where the surface is at times even colder than the 220 K at which the infrared radiation of CO2 is emitted into space: in the Antarctic, where temperatures below -60°C (= 213 K) are not uncommon, we actually find a negative greenhouse effect, i.e. cooling as the CO2 concentration increases.
As the CO2 concentration increases, the proportion of infrared radiation emitted by the CO2 increases as usual. However, at 220 K, the CO2 layer is now warmer than the surface of the Antarctic below it, so more heat is radiated away from the CO2 in the atmosphere than from the Earth’s surface.
In other words: in the Antarctic, an increase in the CO2 concentration increases the heat dissipation into space, and it is therefore getting colder there, not warmer.

  1. Reason for the 240 W/sqm: https://www.zamg.ac.at/cms/de/klima/informationsportal-klimawandel/klimasystem/umsetzungen/energiebilanz-der-erde
  2. Calculation: 10 · 168 h / 72 years = 23 h/decade => (23 h/decade) / 1544 h = 1.5%/decade



Water Vapour Feedback


[latexpage]

In the climate debate, feedback through water vapor is used as an argument to amplify the climate effect of greenhouse gases – the sensitivity to a doubling of their concentration in the atmosphere, which according to the radiative transfer equation and general consensus is at most 0.8°C – by an alleged factor of 2-6. However, this is usually not quantified more precisely; mostly only formulas containing the final feedback are given.

Recently, David Coe, Walter Fabinski and Gerhard Wiegleb described and analyzed precisely this feedback in the publication “The Impact of CO2, H2O and Other ‘Greenhouse Gases’ on Equilibrium Earth Temperatures“. Based on their publication, this effect is derived below using partly the same and partly slightly different approaches. The results are almost identical.

All other effects that occur during the formation of water vapor, such as cloud formation, are ignored here.

The basic mechanism of water vapor feedback

The starting point is an increase in atmospheric temperature by ∆T0, regardless of the cause; typically, the greenhouse effect is assumed to be the primary cause. The argument is now that the warmed atmosphere can absorb more water vapor, i.e. the saturation vapor pressure (SVP) increases, and it is assumed that consequently the water vapor concentration ∆H2O also increases, as a linear function of the temperature change (which is so small that linearization is legitimate in any case):
$\Delta H_2O = j\cdot \Delta T_0 $
where $j$ is the proportionality constant for the water vapor concentration.
An increased water vapor concentration in turn causes a temperature increase due to the greenhouse effect of water vapor, which is linearly dependent on the water vapor concentration:
$\Delta T_1 = k\cdot \Delta H_2O $
In summary, the triggering temperature increase ∆T0 causes a subsequent increase in temperature ∆T1:
$\Delta T_1 = j\cdot k\cdot \Delta T_0 $
Since the feedback mechanism does not depend on the cause of the triggering temperature increase, the increase by ∆T1 naturally triggers another feedback cycle:
$\Delta T_2 = j\cdot k\cdot \Delta T_1 = (j\cdot k)^2\cdot \Delta T_0$
This is repeated recursively. The final temperature change is therefore a geometric series:
$\Delta T = \Delta T_0\sum_{n=0}^\infty(j\cdot k)^n = \Delta T_0\cdot \frac{1}{1-j\cdot k} $
If $j\cdot k\ge 1$, the series would diverge and the temperature would grow beyond all limits. It is therefore important to be clear about the magnitude of these two feedback factors.

For the determination of the first factor $j$, we can apply a simplified approach by accepting the statement commonly used in the mainstream literature that for each degree C of temperature increase the moisture content of the air may rise by up to 7%. In the German version of this post I did the explicit calculation and came to the result that the realistic maximum moisture rise is 6% per degree of temperature rise, which has hardly any effect on the final result.
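This order of magnitude can be checked with the Magnus approximation for the saturation vapor pressure; the following sketch uses common textbook coefficients and an illustrative reference temperature of 15 °C, neither of which is taken from the publication:

```python
import math

# Sketch: relative growth of the saturation vapor pressure per degree C,
# using the Magnus approximation with common textbook coefficients.
def svp_hpa(t_celsius):
    """Saturation vapor pressure over water in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

t = 15.0  # an illustrative near-surface temperature
rise = svp_hpa(t + 1.0) / svp_hpa(t) - 1.0
print(f"SVP increase per degree C at {t} C: {rise:.1%}")  # roughly 6-7%
```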

Dependence of the greenhouse effect on the change in relative humidity

Infrared radiation transport in the atmosphere is dependent on relative humidity. This is taken into account in the well-known and proven MODTRAN simulation program. With increasing humidity, the outgoing infrared radiation decreases due to the greenhouse effect of water vapor.

The decrease in radiation is linear between 60% and 100% relative humidity. Therefore, the increase in relative humidity from 80% to 86% is used to determine the decrease in radiant power and the temperature increase required to compensate it.

To do this, we set the parameters of the MODTRAN simulation to

  • the current CO2 concentration of 420 ppm,
  • a relative humidity of 80%,
  • and a cloud constellation that comes close to the average IR radiant power of 240 $\frac{W}{m^2}$.

The temperature offset is now increased until the reduced IR radiation of 0.7 $\frac{W}{m^2}$ is compensated. This is the case when the ground temperature is increased by 0.215 °C.

A 7% higher relative humidity therefore causes a greenhouse effect, which is offset by a temperature increase of 0.215°C. Extrapolated to a (theoretical) change of 100% humidity, this results in $k=3.07$°C/100%.

The final feedback factor and the total greenhouse effect

This means that a temperature increase of 1 degree causes, within one feedback cycle, an additional temperature increase of $k\cdot j = 0.215$ degrees.

The geometric series leads to an amplification factor $f$ of the pure CO$_2$ greenhouse effect by
$f=\frac{1}{1-0.215} = 1.27 $

This means that the water-vapor-amplified sensitivity $\Delta T$ for a doubling of the CO$_2$ concentration is no longer $\Delta T_0=0.8$ °C, but
$\Delta T = 1.27\cdot 0.8\;^{\circ}C = 1.02\;^{\circ}C \approx 1\;^{\circ}C$
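For transparency, the entire feedback chain condenses into a few lines of Python; this merely re-computes the geometric series above with the stated values of $j$ and $k$:

```python
# Sketch: the water vapor feedback as a geometric series.
j = 0.07      # humidity rise per degree C (the 7% literature value)
k = 3.07      # warming per 100% humidity change, degrees C (MODTRAN result above)
gain = j * k  # loop gain: 0.215 per feedback cycle

amp = 1.0 / (1.0 - gain)  # closed form of the sum over gain**n
dT0 = 0.8                 # unamplified CO2 doubling sensitivity, degrees C
print(amp)                # ~1.27
print(amp * dT0)          # ~1.02 degrees C

# Cross-check with the truncated series:
print(dT0 * sum(gain**n for n in range(50)))  # converges to the same value
```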

This result does not take into account the increase in temperature caused by the higher water vapor concentration.




The Extended Carbon Sink Model (work in progress)



Introduction – potential deficit of the simple linear carbon sink model

With the simple linear carbon sink model, the past relation between anthropogenic emissions and atmospheric CO2 concentration can be modelled excellently, in particular when using the high-quality emission and concentration data after 1950.
The model makes use of mass conservation applied to the CO2 data, where $C_i$ is the CO2 concentration in year $i$, $E_i$ are the anthropogenic emissions during year $i$, $N_i$ are all other CO2 emissions during year $i$ (mostly natural emissions), and $A_i$ are all absorptions during year $i$. We treat emissions caused by land use change as part of the natural emissions, which means that they are assumed to be constant. Since their measurement error is very large, this should be an acceptable assumption.
With the concentration growth $G_i$
$G_i = C_{i+1}-C_i $
we get from mass conservation the yearly balance
$E_i + N_i - A_i = G_i$
$E_i$ and $G_i$ are taken from known data sets (IEA and Mauna Loa), and we define the effective sink $S_i$ as
$S_i = A_i - N_i$
The atmospheric carbon balance therefore is
$E_i - G_i = S_i$
The effective sink is modelled as a linear function of the CO2 concentration by minimizing
$\sum_i (S_i - \hat{S}_i)^2$
w.r.t. $a$ and $n$, where
$\hat{S}_i = a\cdot C_i + n $
The equation can be rewritten as
$\hat{S}_i = a\cdot (C_i - C^0)$
where
$C^0 = -\frac{n}{a}$
is the average reference concentration represented by the oceans and the biosphere. The sink effect is proportional to the difference between the atmospheric concentration and this reference concentration. In the simple linear model the reference concentration is assumed to be constant, implying that these reservoirs are close to infinite. Up to now this is supported by the empirical data.
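A compact numerical sketch of this estimation (not the original code; it assumes yearly NumPy arrays `emissions` and `concentration` in ppm, already loaded from the IEA and Mauna Loa data sets):

```python
import numpy as np

def fit_linear_sink(emissions, concentration):
    """Least-squares fit of the linear sink model S_i = a*C_i + n."""
    G = np.diff(concentration)   # G_i = C_{i+1} - C_i
    S = emissions[:-1] - G       # effective sink S_i = E_i - G_i
    a, n = np.polyfit(concentration[:-1], S, 1)
    C0 = -n / a                  # equilibrium reference concentration
    return a, n, C0
```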
This procedure is visualized here:

This results in an excellent model reconstruction of the measured concentration data:

It is important to note that the small deviation since 2010 is an over-estimation of the actual measured data, which means that the estimated sink effect is under-estimated. Therefore we can safely say that currently we do not see the slightest trend towards a decline of the two large sink systems, the ocean sink and the land sink from photosynthesis.

Nevertheless it can be argued that in the future both sink systems may enter a state of saturation, i.e. lose the ability to absorb surplus carbon from the atmosphere. As a matter of fact, the architects of the Bern model and representatives of the IPCC claim that the capacity of the ocean is not larger than 5 times the capacity of the atmosphere, and that the future ability to take up extra CO2 will therefore rapidly decline. We do not see this claim justified by data, but before we can test it, we will adapt the model to make it capable of handling varying sink capacities.

Extending the model with a second finite accumulating box

In order to account for the finite size of both the ocean and the land sinks, we no longer treat these sink systems as infinite, but assume a second box besides the atmosphere with a concentration $C^0_i$, taking up all CO2 removed by both sink systems. The box is assumed to be $b$ times larger than the atmosphere; therefore, for a given sink-related change of atmospheric concentration ($-S_i$) we get an increase of concentration in the "sink box" of the same amount ($S_i$), but reduced by the factor $b$:
$ C^0_{i+1} = C^0_i + \frac{1}{b}\cdot S_i $
The important model assumption is that $C^0_i$ is the reference concentration, which determines future sink ability.
The initial value is the previously calculated equilibrium concentration $C^0$
$C^0_0 = C^0$
Therefore by evaluation of the recursion we get
$C^0_i = C^0 + \frac{1}{b}\sum_{j=1}^i S_j$
The main modelling equation is adapted to
$\hat{S}_i = a\cdot (C_i - C^0_i)$
or
$\hat{S}_i = a\cdot (C_i - \frac{1}{b}\sum_{j=1}^i S_j) + n$
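The recursion can be sketched in a few lines of Python; this is a schematic re-implementation, and the start values and parameters in the example call are illustrative assumptions, not the fitted values:

```python
import numpy as np

def simulate_extended(C_start, C0_start, emissions, a, b):
    """Forward simulation of the two-box model.
    C_start: initial atmospheric concentration (ppm)
    C0_start: initial reference concentration of the sink box (ppm)
    emissions: yearly anthropogenic emissions (ppm/year)
    a: sink coefficient, b: relative size of the sink box"""
    C, C0 = C_start, C0_start
    path = [C]
    for E in emissions:
        S = a * (C - C0)  # sink proportional to the concentration surplus
        C += E - S        # atmospheric balance: growth = emissions - sink
        C0 += S / b       # the sink box fills up, raising the reference
        path.append(C)
    return np.array(path)

# Illustrative example: constant emissions of 4.7 ppm/year for 180 years, b=5.
path = simulate_extended(420.0, 295.0, [4.7] * 180, a=0.02, b=5)
```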

Obviously the measurements must start at a time when the anthropogenic emissions are still close to 0. Therefore we begin with the measurements from 1850, being aware that the data before 1959 are much less reliable than the data since then. There are reasons to assume that before 1950 emissions induced by land use change play a stronger role than later. But there are also strong reasons to assume that the estimated IEA values for land use change are too large; in order to reach a reference value $C^0$ close to 280 ppm, an adequate weight for land use change emissions is 0.5.

Results for different scenarios

We will now evaluate the actual emission and concentration measurements for 3 different scenarios: b=5, b=10, and b=50.
The first scenario (b=5) is considered the worst case, yielding results similar to the Bern model.
The last scenario (b=50) corresponds to the "naive" view that the CO2 in the oceans is distributed evenly, making use of the full potential buffer capacity of the oceans.
The second scenario (b=10) lies in between.

Scenario b=5: Oceans and land sinks have 5 times the atmospheric capacity

The "effective concentration" used for estimating the model reduces the measured concentration by the weighted cumulative sum of the effective sinks with $b=5$. We see that before 1900 there is hardly any difference to the measured concentration:

First we reconstruct the original data from the model estimation:

Now we calculate the future scenarios:

Constant emissions after 2023

In order to understand the reduced sink factor, we first investigate the case where emissions remain constant after 2023. By the end of 2200 the CO2 concentration would be close to 600 ppm, with no tendency to flatten.

Emission reductions to reach equilibrium and keep permanently constant concentration

It is easy to see that under the given conditions of a small CO2 buffer, the concentration keeps increasing when emissions are constant. The interesting question is how the emission rate has to be reduced in order to reach a constant concentration.
From the model setup one would assume that the required yearly emission reduction is $\frac{a}{b} \approx 0.005$, and indeed, with a yearly emission reduction of 0.5% after 2023 we eventually reach a constant concentration and hold it. This means that emission rates have to be cut in half within 140 years, provided the pessimistic assumption $b=5$ turns out to be correct:

Fast reduction to 50% emissions, then keeping concentration constant

An interesting scenario is one that cuts emissions to half the current amount within a short time and then tries to keep the concentration close to the current level:

Scenario b=10: Oceans and land sinks have 10 times the atmospheric capacity

Assuming the capacity of the (ocean and plant) CO2 reservoir to be 10-fold results, as expected, in half the sink reduction.

It does not significantly change the quality of the model approximation to the actual CO2 concentration data:

Constant emissions after 2023

With constant emissions, the concentration now stays below 550 ppm by the end of 2200, but it is still growing.

Emission reductions to reach equilibrium and keep permanently constant concentration

The yearly emission reduction rate can now be lowered to 0.2% in order to compensate for the sink reduction:

Fast reduction to 50% emissions, then keeping concentration constant

This is easier to see in the scenario that swiftly reduces emissions to 50%: with a peak concentration below 440 ppm, the subsequent slow reduction of 0.2% p.a. keeps the concentration at about 415 ppm.

Scenario b=50: Oceans and land sinks have 50 times the atmospheric capacity

This scenario comes close to the original linear concentration model, which does not consider finite sink capacity.

Again, the reconstruction of the existing data shows no large deviation:

Constant emissions after 2023
Emission reductions to reach equilibrium and keep permanently constant concentration

We only need a yearly reduction of 0.05% to reach a permanently constant CO2 concentration of under 500 ppm:

Fast reduction to 50% emissions, then keeping concentration constant

This scenario hardly increases today's CO2 concentration and eventually approaches 400 ppm:

How to decide which model parameter b is correct?

It appears that with the measurement data available up to now it cannot be decided whether the sink reservoirs are finite, and if so, how limited they are.

The most sensitive detector based on simple, undisputed measurements appears to be the concentration growth. It can be measured from the actually measured data in the past,

but also in the modelled data at any time. When comparing the concentration growth under future constant emissions for the two cases b=5 and b=50, we get this result:

This implies that with the model b=5 the concentration growth will never fall below 0.8 ppm/year, whereas with the model b=50 the concentration growth decreases to approx. 0.1 ppm/year. But these large differences will only show up many years from now, apparently not before 2050.

Preliminary Conclusions

Due to the fact that the measurement data up to the current time can be reproduced well by both the Bern model and the simple linear sink model, it cannot yet be reliably decided from current data how large the effective size of the carbon sinks is. When emissions remain constant for a longer period of time, we expect to be able to perform a statistical test for the most likely value of the sink size factor b.

Nevertheless this extended sink model allows us to calculate the optimal rate of emission reduction for a given model assumption. Even in the worst case the required emission reduction is so small that short-term "zero emission" targets are not justified.

A related conclusion is the possibility of a re-calculation of the available CO2 budget. Given a target concentration $C_{target}$, the total budget is the amount of CO2 required to fill up both the atmosphere and the accumulating box to the target concentration.
Obviously the target concentration must be chosen in such a way that it is compatible with the environmental requirements.




A Computational Model for CO2-Dependence on Temperature in the Vostok Ice cores



The Vostok ice core provides a view of more than 400,000 years of climate history, with several cycles between ice ages and warm periods.

It has become clear that the CO2 data lag the temperature data by several centuries. One difficulty arises from the fact that CO2 is measured in the gas bubbles, whereas temperature is determined from a deuterium proxy in the ice. Therefore the age is determined differently for the two parameters: for CO2 there is a "gas age", whereas the temperature series is assigned an "ice age". There are estimates of how much older the "ice age" is in comparison to the "gas age", but these are uncertain, so we will have to tune the relation between the two time scales.

Preprocessing the Vostok data sets

In order to perform model-based computations with the two data sets, the original data must be converted into equidistantly sampled data sets. This is done by means of linear interpolation. The sampling interval is chosen as 100 years, which is approximately the sampling interval of the temperature data. Apart from this, the data sets must be reversed and the sign of the time axis set to negative values.
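A sketch of this preprocessing (assuming `age` and `value` are the raw Vostok columns, with the age given in years before present and increasing along the record):

```python
import numpy as np

def resample_vostok(age, value, step=100.0):
    """Reverse the series, switch to a negative time axis and resample
    on an equidistant grid by linear interpolation."""
    t = -np.asarray(age)[::-1]   # ascending, negative time axis
    v = np.asarray(value)[::-1]
    grid = np.arange(t[0], t[-1] + step, step)
    return grid, np.interp(grid, t, v)
```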
Here is the re-sampled temperature data set from -370000 years to -10000 years, overlaid on the original temperature data:

And here the corresponding CO2-data set:

The two data sets are now superimposed:

Data model

Given the very good predictive power of the temperature-dependent sink model for current emission, concentration, and temperature data (equation 2), we will use the same model based on the CO2 mass balance and a possible linear dependence of CO2 changes on concentration and temperature, but obviously without the anthropogenic emissions. Also, the time interval is no longer a single year but a century.

$G_i$ is the growth of the CO2 concentration $C_i$ during century $i$:

$G_i = C_{i+1}- C_i$

$T_i$ is the average temperature during century $i$. The model equation without anthropogenic emissions is:

$-G_i = x1\cdot C_i + x2\cdot T_i + const$

After estimating the 3 parameters x1, x2, and const from $G_i$, $C_i$, and $T_i$ by means of ordinary least squares, the modelled CO$_2$ data $\hat{C_i}$ are recursively reconstructed by means of the model, the first actual concentration value of the data sequence $C_0$, and the temperature data:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} - x1\cdot \hat{C_i} - x2\cdot T_i - const$

Results – reconstructed CO$_2$ data

The standard deviation of $\{\hat{C_i}-C_i\}$ measures the quality of the reconstruction. This standard deviation is minimized when the temperature data are shifted 1450..1500 years into the past:
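The shift search can be sketched as follows; this is a schematic re-implementation, and the pairing direction of the shift is an assumption that may need mirroring for a given data layout:

```python
import numpy as np

def reconstruction_std(C, T, lag):
    """Fit -G_i = x1*C_i + x2*T_i + const with T shifted by `lag` samples
    (100 years each) and return the residual std of the reconstruction."""
    Ts, Cs = T[lag:], C[:len(C) - lag]  # pair C with temperature moved to the past
    G = np.diff(Cs)
    X = np.column_stack([Cs[:-1], Ts[:-1], np.ones(len(G))])
    x1, x2, const = np.linalg.lstsq(X, -G, rcond=None)[0]
    Chat = [Cs[0]]
    for i in range(len(G)):
        Chat.append(Chat[-1] - x1 * Chat[-1] - x2 * Ts[i] - const)
    return float(np.std(np.array(Chat) - Cs))

# Example search over shifts of 500..3000 years:
# best_lag = min(range(5, 30), key=lambda k: reconstruction_std(C, T, k))
```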

Here are the corresponding estimated model parameters and the statistical quality measures from the Python OLS package:

The interpretation is that there is a carbon sink of 1.3% per century, and a natural emission increase of 0.18 ppm per century per degree of temperature increase.

Modelling the sinks ($-G_i$) results in this diagram:

And the main result, the reconstruction of the CO$_2$ data from the temperature-extended sink model, looks quite remarkable:

Equilibrium Relations

The equilibrium states are more meaningful than the incremental changes. The equilibrium is defined by the equality of CO2 sources and sinks, resulting in $G_i = 0$. This creates a linear relation between the CO2 concentration C and the temperature T:

$C = \frac{0.1799\cdot T + 3.8965}{0.0133}$ ppm

For the temperature anomaly $T=0$ we therefore get the CO2 concentration of

$C_{T=0}=\frac{3.8965}{0.0133}\;ppm = 293\;ppm$.
The difference to the modern data can be explained by the different temperature references. Both levels are remarkably close, considering the very different environmental conditions.

And the relative change is
$\frac{dC}{dT} = 13.5 \frac{ppm}{^\circ C} $

This is considerably different from the modern data, where we got $66.5 \frac{ppm}{^{\circ}C}$.
There is no immediate explanation for this deviation. We must, however, consider that the time scales differ by a factor of at least 100, so totally different mechanisms can be expected to be at work.




Temperature Dependent CO2 Sink model



In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account?

The model shows no long-term temperature trend, only a short-term cyclical dependence. A long-term trend of the temperature dependence over the last 70 years is not discernible even after careful analysis.
In the primary publication it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible. We revisit the sink model in order to find a way to consider temperature dependence adequately.

Original temperature-independent model

For setting up the equation for mass conservation of CO2 in the atmosphere (see equations 1, 2, 3 of the publication), we split the total yearly emissions into anthropogenic emissions $E_i$ in year $i$ and all other, predominantly natural emissions $N_i$. For simplification, the (more unknown than known) land-use-related emissions are included in the natural emissions.
The increase of CO2 in the atmosphere is
$G_i = C_{i+1} - C_i$,
where $C_i$ is atmospheric CO2 concentration at the beginning of year $i$.
With absorptions $A_i$ the mass balance becomes:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the natural emissions was modeled linearly with a constant absorption coefficient $a^0$ expressing the proportionality with concentration $C_i$ and a constant $n^0$ for the annual natural emissions
\begin{equation}E_i - G_i = a^0\cdot C_i - n^0\end{equation}

The estimated parameters are:
$a^0=0.0183$,
$n^0=5.2$ ppm

While the proportionality between absorption and concentration by means of an absorption constant $a^0$ is physically very well founded, the assumption of constant natural emissions appears arbitrary.
Effectively this assumed constant contains the sum of all emissions except the explicit anthropogenic ones and also all sinks that are balanced during the year.
Therefore it is enlightening to calculate the estimated natural emissions $\hat{N_i}$ from the measured data and the mass balance equation with the estimated absorption constant $a^0=0.0183$:
$\hat{N_i} = G_i - E_i + a^0\cdot C_i $

The mean value of $\hat{N_i}$ results in the constant model term $n^0$. A slight smoothing results in a cyclic curve. Roy Spencer has attributed these fluctuations to El Nino. A priori it cannot be said whether the fluctuations are attributable to the absorptions $A_i$ or to the natural emissions $N_i$. In any case, no long-term trend is seen.

The reconstruction $\hat{C_i}$ of the measured concentration data is done recursively from the model and the initial value taken from the original data:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i + n^0 - a^0\cdot \hat{C_i}$
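As a sketch, this recursion takes only a few lines of Python (with `E` the yearly anthropogenic emissions in ppm and `C_start` the first measured concentration):

```python
def reconstruct(C_start, E, a0=0.0183, n0=5.2):
    """Recursive reconstruction of the concentration from the linear model."""
    Chat = [C_start]
    for Ei in E:
        Chat.append(Chat[-1] + Ei + n0 - a0 * Chat[-1])
    return Chat
```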

Extending the model by Temperature

The sink model is now extended by a temperature term $T_i$:
\begin{equation}E_i - G_i = a\cdot C_i + b\cdot T_i + c\end{equation} These 3 regression parameters could be estimated directly, but then we would not know how the resulting numbers relate to the estimation without temperature dependence. Therefore we will motivate and build this model in an intuitive way.

The question arises why and how sources or sinks should depend on El Nino; this implies a temperature dependence. But why is the undeniable long-term temperature trend not visible in the model? Why is there no trend in the estimated natural emissions?
The answer lies in the fact that CO2 concentration and temperature are highly correlated, at least since 1960, i.e. during the time when the CO2 concentration has been measured with high quality:

Therefore any long-term temperature-dependent trend is attributed to the CO2 concentration when the model is based on concentration. This has been analysed in detail. We make no claim of causality between CO2 concentration and temperature, in either direction, but merely recognise their strong correlation. The optimal linear model of the temperature anomaly as a function of the CO2 concentration, based on the HadSST4 temperature data, is:
$T_i^C = d\cdot C_i + e$
with $d=0.0082 \frac{^{\circ} C}{ppm}$ and $e = -2.7$°C

The actual temperature $T_i$ is the sum of the modelled temperature $T_i^C$ and the residual temperature $T_i^R$.
Therefore the new model equation becomes
$E_i - G_i = a\cdot C_i + b\cdot (T_i^C + T_i^R) + c$
Replacing $T_i^C$ with its CO2-concentration proxy
$E_i - G_i = a\cdot C_i + b\cdot (d\cdot C_i + e + T_i^R) + c$
and re-arrangement leads to:
$E_i - G_i = (a + b\cdot d)\cdot C_i + b\cdot T_i^R + (c + b\cdot e)$.

Now the temperature part of the model depends only on zero-mean variations, i.e. it is free of any trend.
All temperature trend information is carried by the coefficient of $C_i$. This model corresponds to Roy Spencer's observation that much of the cyclic variability is explained by El Nino, which is closely related to the "residual temperature" $T_i^R$.
With $b=0$ we would have the temperature-independent model above, and the coefficient of $C_i$ and the constant term correspond to the known estimated parameters. Due to the fact that $T_i^R$ does not contain any trend, the inclusion of the temperature-dependent term does not change the other coefficients.

The estimated parameters of the last equation are:
$a + b\cdot d = 0.0183 = a^0$ ,
$b = -2.9\frac{ppm}{^{\circ}C}$,
$c + b\cdot e = -5.2 ppm = -n^0 $ .

The first and last parameters correspond to those of the temperature-independent model. But now, from the estimated coefficient $b$, we can evaluate the contribution of the temperature $T_i$ to the sinks and the natural emissions.

The final determined parameters are
$a = a^0 - b\cdot d = 0.0436$,
$b = -2.9 \frac{ppm}{^{\circ}C}$,
$c = -n^0 - b\cdot e = -13.6$ ppm

It is quite instructive how closely the yearly variations of the temperature match the variations of the measured sinks:

The smoothed residual is now mostly close to 0, the Pinatubo eruption (after 1990) being the most dominant signal not accounted for by the model. Curiously, in 2020 there is a reduced sink effect, most likely due to the higher average temperature, effectively compensating the reduced emissions during the Covid lockdowns.
The model reconstruction of the concentration is now extended by the temperature term:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i - a\cdot \hat{C_i} - b\cdot T_i - c$

This is confirmed by the reconstruction. It deviates only around 1990, due to the missing sink contribution from the Pinatubo eruption, but otherwise follows the shape of the concentration curve precisely. This is an indication that the concentration+temperature model is much better suited to model the CO2 concentration.
In order to compensate the deviations after 1990, the sink effect due to Pinatubo, $A_i^P$, must be considered. It is introduced as a negative emission signal into the recursive modelling equation:
$\hat{C_{i+1}} = \hat{C_i} + E_i - A_i^P - a\cdot \hat{C_i} - b\cdot T_i - c$
This reduces the deviations of the model from the measured concentration significantly:

Consequences of the temperature dependent model

The concentration-dependent absorption parameter is in fact more than twice as large as the total absorption parameter, and increasing temperature increases the natural emissions. As long as temperature is correlated with CO2 concentration, the two trends cancel each other, and the effective sink coefficient appears invariant w.r.t. temperature.

The extended model becomes relevant, when temperature and CO2 concentration diverge.

If temperature rises faster than the above CO2 proxy relation predicts, we can expect a reduced sink effect, while with temperatures below the expected value of the proxy the sink effect will increase.

As a first hint for further research we can estimate the temperature-dependent equilibrium concentration based on current measurements. It is given by (anthropogenic emissions and concentration growth are 0 by definition):
$a\cdot C + b\cdot T + c = 0$
$C = \frac{-b\cdot T - c}{a}$
For $T = 0$ °C (= 14 °C worldwide average temperature) we get the zero-emission equilibrium concentration
$C = \frac{-c}{a} = \frac{13.6}{0.0436}\;ppm = 312\;ppm$

The temperature sensitivity is the change of the equilibrium concentration per 1 °C of temperature change:
$\frac{\Delta C}{\Delta T} = \frac{-b}{a} = 66.5 \frac{ppm}{^{\circ}C}$
Considering that the temperature anomaly was approx. $-0.5$ °C in 1850, this corresponds very well with the assumed pre-industrial equilibrium concentration of 280 ppm.

A model for paleo climate?

An important consequence of the temperature-extended model concerns the understanding of paleo climate, as represented e.g. in the Vostok ice core data:

Without analysing the data in detail, the temperature dependence of the CO2 concentration gives us a tool for e.g. estimating the equilibrium CO2 concentration as a function of temperature. Stating the obvious: here the CO2 concentration is controlled by temperature and not the other way round, since the time lag between temperature changes and concentration changes is several centuries.

The Vostok data have been analysed with the same model of concentration and temperature dependent sinks and natural sources. Although the model parameters are substantially different due to the totally different time scale, the measured CO2 concentration is nicely reproduced by the model, driven entirely by temperature changes:




The inflection point of CO2 concentration



And rising and rising…?

At first glance, the atmospheric CO2 concentration is constantly rising, as shown by the annual mean values measured at Mauna Loa (ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt):

The central question is whether the concentration is growing faster and faster, i.e. whether more is added each year. If so, the curve would be curved upwards (concave up).

Or is the annual increase in concentration getting smaller and smaller? Then the curve would be curved downwards (concave down).

Or is there a transition, i.e. an inflection point in the mathematical sense? This could be recognized by the annual increase initially growing and then shrinking from a certain point in time.

At first glance, the overall curve appears to be curved upwards (concave up), which would mean that the annual increase in concentration grows with each year.

The answer is crucial for assessing how urgent measures to curb CO2 emissions are.

Closer examination of the measured annual increase

To get a more accurate impression, we calculate the — raw and slightly smoothed — annual increase in CO2 concentration:

This confirms that until 2016 there was a clear trend towards ever higher annual concentration increases, from just under 0.75 ppm/year in 1960 to over 2.5 ppm/year in 2016.

Since 2016, however, the annual increase has been declining, initially slightly, but significantly more strongly in 2020 and 2021. The corona-related decline in emissions certainly plays a role here, but this does not explain the decline that began in 2016.

There is therefore an unmistakable inflection point in the concentration curve in 2016, i.e. a trend reversal from increasing to decreasing concentration growth. Is there a satisfactory explanation for this? This is essential, because if we can foresee that the trend of decreasing concentration growth will continue, then it is foreseeable that the concentration will stop increasing at some point, and the goal of the Paris Climate Agreement, the balance between CO2 sources and CO2 sinks, can be achieved in the foreseeable future.

Explanation due to stagnating emissions

Writing for Carbon Brief, Zeke Hausfather in 2021 revised the values of global CO2 emissions over the last 20 years based on new findings, with the important result that global emissions have been constant for 10 years within the limits of measurement accuracy:

To assess the implications of this important finding, one needs to know the relationship between emissions and CO2 concentration.

From my own research on this in a publication and in a subsequent blog post, it follows that the increase in concentration results from the emissions and absorptions, which are proportional to the CO2 concentration.

This model has also been described and published in a similar form by others:

Trivially, it follows from the conservation of mass that the concentration $C_i$ at the end of the year $i$ results from the concentration of the previous year $C_{i-1}$, the natural emissions $N_i$, the anthropogenic emissions $E_i$ and the absorptions $A_i$:
\begin{equation}\label{mass_conservation}C_i = C_{i-1} + N_i + E_i - A_i \end{equation} This directly results in the effective absorption calculated from the emissions and the measured increase in concentration:
\begin{equation}\label{absorption_measurement}A_i - N_i = E_i - (C_i - C_{i-1}) \end{equation} Assuming constant annual natural emissions
$N_i = n$
and the linear model assumption, i.e. that the absorptions are proportional to the concentration of the previous year,
$A_i = a\cdot C_{i-1}$
the absorption model is obtained (these two assumptions are explained in detail in the publication above), where $n = a\cdot C_0$:
\begin{equation}\label{absorption_equ}A_i - N_i = a\cdot(C_{i-1} - C_0)\end{equation} with the result $a=0.02$ and $C_0 = 280$ ppm. In this calculation, emissions due to land use changes are not taken into account, which explains the numerical differences between this result and those of the cited publications. The omission of land use changes is justified by the fact that in this way the natural emissions lead to the pre-industrial equilibrium concentration of 280 ppm.

With this model, the known concentration between 2000 and 2020 is projected very accurately from the data between 1950-2000:

Growth rate of the modelled concentration

The growth rate of the modelled concentration $G^{model}_i$ is obtained by rearranging the model equation:
$G^{model}_i = E_i - a\cdot C_{i-1} + n$
This no longer shows the cyclical fluctuations caused by El Nino:

The global maximum remains, but the year of the maximum has moved from 2016 to 2013.
These El Nino-adjusted concentration changes confirm Zeke Hausfather’s statement that emissions have indeed been constant for 10 years.

Evolution of CO2 concentration at constant emissions

In order to understand the inflection point of the CO2 concentration, we calculate the predicted course under the assumption of constant emissions $E_i = E$, using equations (\ref{absorption_measurement}) and (\ref{absorption_equ}):
\begin{equation}\label{const_E_equ}C_i - C_{i-1} = E - a\cdot(C_{i-1} - C_0)\end{equation} The left-hand side is the increase in concentration. On the right-hand side, an amount that grows with the concentration $C_{i-1}$ is subtracted from the constant emissions $E$, which means that the increase in concentration becomes smaller as the concentration rises. (This is analogous to a bank account with a constant yearly deposit and a withdrawal proportional to the current balance.) As soon as the concentration reaches the value $\frac{E}{a} + C_0$, the equilibrium state is reached in which the concentration no longer increases, i.e. the often-cited "net zero" situation. With current emissions of 4.7 ppm/year, "net zero" would be reached at 515 ppm, while the "Stated Policies" emissions scenario of the International Energy Agency (IEA), which envisages a slight reduction in the future, reaches equilibrium at 475 ppm, as described in the publication above. According to the IEA's forecast data, this will probably be the case in 2080:
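The "net zero" level follows directly from this equation; a minimal numerical check with the values given above:

```python
# Equilibrium ("net zero") concentration for constant emissions: C_eq = E/a + C0
a, C0 = 0.02, 280.0  # fitted absorption coefficient and reference concentration
E = 4.7              # current emissions in ppm/year
print(E / a + C0)    # -> 515.0 ppm
```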

According to this, constant emissions are sufficient to explain a downward-curving (concave-down) course of the CO2 concentration, as we have seen since 2016. At the same time, this proves that CO2 absorption does indeed increase with increasing concentration.




Invariance of natural CO2 sources and sinks regarding the long-term temperature trend



In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account? This is because the model does not include any long-term temperature dependence, only a short-term cyclical dependence. A long-term trend in temperature dependence over the last 70 years is not discernible even after careful analysis.
In the underlying publication it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible, and why it would not be recognizable from the statistical analysis. This is discussed in this article.

Original temperature-independent model

The simplified form of CO2 mass conservation in the atmosphere (see equations 1, 2, 3 of the publication), with anthropogenic emissions $E_i$ in year $i$, the other, predominantly natural emissions $N_i$ (for simplification, the land use emissions are added to the natural emissions), the increase of CO2 in the atmosphere $G_i = C_{i+1} - C_i$ (where $C_i$ is the atmospheric CO2 concentration), and the absorptions $A_i$, is:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the other emissions was modeled linearly with a constant absorption coefficient $a$ and a constant $n$ for the annual natural emissions:
$A_i - N_i = a\cdot C_i + n$

While the absorption constant and the linear relationship between absorption and concentration are physically very well founded and proven, the assumption of constant natural emissions appears arbitrary. Therefore, instead of assuming a constant term $n$, it is enlightening to calculate the residual from the measured data and the estimated absorption constant $a$:
$N_i = G_i - E_i + a\cdot C_i $

The mean value of $N_i$ results in the constant model term $n$. A slight smoothing results in a periodic curve. Roy Spencer has attributed these fluctuations to El Nino, although it is not clear whether they are attributable to the absorptions $A_i$ or the natural emissions $N_i$. But no long-term trend is discernible. Therefore the question must be clarified why short-term temperature dependencies are present, while long-term global warming does not appear to have any correspondence in the model.

Temperature-dependent model

We now extend the model by additionally allowing a linear temperature dependence for both the absorptions $A_i$ and the other emissions $N_i$. Since our measurement data only provide their difference, we can represent the temperature dependence of this difference as a single linear function of the temperature $T_i$, i.e. $b\cdot T_i + d$: if both $A_i$ and $N_i$ depend linearly on temperature, the difference of the corresponding linear expressions is again linear. Accordingly, the extended model has this form:
$A_i - N_i = a\cdot C_i + n + b\cdot T_i + d$
In principle, $n$ and $d$ could be combined into a single constant. However, since $d$ depends on the temperature scale used, and $n$ on the unit of measurement of the CO2 concentration, we leave it at 2 constants.

CO2 concentration as a proxy for temperature

As already explained in section 2.3.2 of the publication, there is a high correlation between CO2 concentration and temperature. Where this correlation comes from, i.e. whether there is a causal relationship (and in which direction), is irrelevant for this study. However, here we do not establish the correlation between $T$ and $log(C)$, but between $T$ (temperature) and $C$ (the CO2 concentration without logarithm).

As a result, the temperature anomaly can be approximated from the concentration by the linear function
$T_i = e\cdot C_i + f$
with
$e=0.0083, f=-2.72$

Use of the CO2 proxy in the temperature-dependent equation

If we now experimentally insert the proxy function for the temperature into the temperature-dependent equation, we obtain the following equation:
$A_i - N_i = a\cdot C_i + n + b\cdot (e\cdot C_i + f) + d $
and
$A_i - N_i = (a+b\cdot e)\cdot C_i + (n+b\cdot f) + d $
The expression on the right-hand side now has the same form as the original equation, i.e.
$A_i - N_i = a'\cdot C_i + n' $
with
$ a' = a + b\cdot e $
$ n' = n + b\cdot f + d $

Conclusions

Therefore, with a linear dependence of temperature on CO2 concentration, temperature effects of sinks and sources cannot be distinguished from concentration effects; both are included in the "effective" absorption constant $a$ and the constant of natural emissions $n$. The simple source and sink model therefore already contains all linear temperature effects.
This explains the astonishing independence of the model from the global temperature increase of the last 50 years.
This correlation also suggests that the absorption behavior of atmospheric sinks will not change in the future.

However, if we want to know exactly how temperature affects the sources and sinks, other data sources must be used. Thanks to the correlation found, this knowledge is not necessary for forecasting future CO2 concentrations from anthropogenic emissions.