How large is the Greenhouse Effect in Germany? — A Statistical Analysis


[latexpage]

High correlation as an indication of causality?

The argument that CO2 determines the mean global temperature is often illustrated or even justified with this diagram, which shows a strong correlation between CO2 concentration and mean global temperature, here for example the mean annual concentration measured at Mauna Loa and the annual global sea surface temperatures:

Although there are strong systematic deviations between 1900 and 1975 – 75 years after all – the correlation has been strong since 1975.
If we try to explain the German mean temperatures with the CO2 concentration data from Mauna Loa, available since 1959, we get a clear description of the trend in temperature development, but no explanation of the strong fluctuations:

The “model temperature” $\hat{T}_i$ estimated from the logarithmic CO2 concentration data $\ln(C_i)$ measured in year $i$ using the least squares method is given by
$\hat{T}_i = 7.5\cdot \ln(C_i) - 35.1$ (°C)

 If we add the annual hours of sunshine as a second explanatory variable, the fit improves somewhat, but we are still a long way from a complete explanation of the fluctuating temperatures. As expected, the trend is similarly well represented, and some of the fluctuations are also explained by the hours of sunshine, but not nearly as well as one would expect from a causal determinant:

With the additional hours of sunshine $S_i$, the model equation for the estimated temperature $\hat{T}_i$ becomes
$\hat{T}_i = 5.8\cdot \ln(C_i) + 0.002\cdot S_i - 28.5$ (°C)
The relative weight of the CO2 concentration has decreased slightly with an overall improvement in the statistical explanatory value of the data.
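These fits are easy to reproduce; here is a minimal sketch with numpy and statsmodels, assuming yearly arrays temp (German mean temperature), co2 (Mauna Loa concentration) and sun (sunshine hours) have been loaded beforehand from the DWD and NOAA data sets:

    import numpy as np
    import statsmodels.api as sm

    # Assumed yearly input arrays of equal length, one value per year since 1959:
    # temp: German mean temperature (deg C), co2: Mauna Loa CO2 (ppm), sun: sunshine hours
    X1 = sm.add_constant(np.log(co2))                  # regressors [1, ln(C)]
    print(sm.OLS(temp, X1).fit().params)               # intercept and ln(C) coefficient

    X2 = sm.add_constant(np.column_stack([np.log(co2), sun]))
    print(sm.OLS(temp, X2).fit().params)               # intercept, ln(C) and sunshine coefficients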

However, it looks as if the time interval of 1 year is far too long to correctly treat the effect of solar radiation on temperature. It is obvious that the seasonal variations are undoubtedly caused by solar radiation.
The effects of irradiation are not all instantaneous; storage effects must also be taken into account. This corresponds to our perception that stored summer heat lasts for 1-3 months and that the warmest months, for example, occur only after the period of greatest solar radiation. We therefore need to create a model based on the energy flow that is fed with monthly measured values and that provides for storage.

Energy conservation – improving the model

To improve understanding, we create a model with monthly data taking into account the physical processes (the months are counted with the index variable $i$ ):

  • Solar radiation supplies energy to the earth’s surface, which is assumed to be proportional to the number of hours of sunshine per month $S_i$,

  • assuming the greenhouse effect, energy is also supplied; a linear function of $ln(C_i)$ is assumed for the monthly energy input (or prevented energy output),
  • the top layer of the earth’s surface stores the energy and releases it again; the monthly release is assumed to be a linear function of the surface temperature $T_i$,
  • the monthly temperature change in Germany is assumed to be proportional to the energy change.

This results in the following modeled balance equation; the constant $d$ makes it possible to use arbitrary measurement units:
$\hat{T}_i - \hat{T}_{i-1} = a\cdot \hat{T}_{i-1} + b\cdot S_i + c\cdot \ln(C_i) + d$
On the left-hand side of the equation is the temperature change as a representative of the energy balance change, while the right-hand side represents the sum of the causes of this energy change.
To determine the coefficients $a,b,c,d$ using the least squares method, the measured temperature $T_i$ is used instead of the modeled temperature $\hat{T}_i$.
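A minimal sketch of this least squares estimation, assuming monthly arrays T (temperature), S (sunshine hours) and C (CO2 concentration) since January 1959 are already loaded:

    import numpy as np
    import statsmodels.api as sm

    # Balance equation: T_i - T_{i-1} = a*T_{i-1} + b*S_i + c*ln(C_i) + d
    dT = T[1:] - T[:-1]                                # monthly temperature change
    X = np.column_stack([T[:-1], S[1:], np.log(C[1:])])
    X = sm.add_constant(X, prepend=False)              # constant d as last column
    fit = sm.OLS(dT, X).fit()
    print(fit.summary())                               # coefficients, std. errors, t-values, P>|t|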

Here are the monthly temperature and sunshine hour data. It can be seen that the temperature data lags behind the sunshine hours data by around 1-2 months, but has a similar overall trend:

This fits with the assumption that we actually have a storage effect. The balance equation should therefore provide meaningful values. However, we need to take a closer look to evaluate the estimated result.

In this diagram, the values of the respective coefficients are shown in the first column and their standard errors in the second, followed by the so-called t-statistic and by the so-called error probability, i.e. the probability that the coefficient is actually 0 and its estimate differs from 0 only by chance. A coefficient is therefore only significant if this probability is close to 0, which is the case if the t-statistic is greater than 3 or less than -3. Finally, the last two columns describe the so-called 95% confidence interval: with 95% probability, the true value of the coefficient lies within this interval.

     Coefficient   Std.Error    t-Value     P>|t|     [0.025      0.975]
--------------------------------------------------------------------------
a        -0.4826      0.0142   -33.9049    0.0000     -0.5105     -0.4546
b         0.0492      0.0013    38.8127    0.0000      0.0467      0.0517
c         0.6857      0.9038     0.7587    0.4483     -1.0885      2.4598
d        -6.3719      5.3013    -1.2020    0.2297    -16.7782      4.0344

Here, the error probabilities of the coefficients $c$ and $d$ are so high, at 45% and 23% respectively, that we must conclude that $c=0$ and $d=0$. $c$ measures the influence of the CO2 concentration on the temperature. This means that the CO2 concentration has had no statistically significant influence on temperature development in Germany for 64 years. However, this is the period of the largest anthropogenic emissions in history.
The fact that $d$ also assumes the value 0 is more of a coincidence, as this constant depends on the units of measurement of the CO2 concentration and the temperature.

As a result, the balance equation is adjusted:
$T_i - T_{i-1} = a\cdot T_{i-1} + b\cdot S_i + d$
 with the result:

     Coefficient   Std.Error    t-Value     P>|t|     [0.025      0.975]
--------------------------------------------------------------------------
a        -0.4823      0.0142   -33.9056    0.0000     -0.5102     -0.4544
b         0.0493      0.0013    38.9661    0.0000      0.0468      0.0517
d        -2.3520      0.1659   -14.1788    0.0000     -2.6776     -2.0264

With the term in $c$ dropped, the constant $d$ is now again highly significant. The other two coefficients $a$ and $b$ have hardly changed. They deserve a brief discussion:

The coefficient $a$ indicates which part of the energy, measured as temperature, is released again over the course of a month: almost half. This factor is independent of the zero point of the temperature scale; choosing K or anomalies instead of °C would result in the same value. The value also corresponds approximately to the everyday perception of how far the warmest time of summer lags behind the maximum solar radiation.
The coefficient $b$ indicates the factor by which the hours of sunshine translate into monthly temperature changes.

The result is not just an abstract statistic, it can also be visualized by reconstructing the monthly temperature curve of the last 64 years with the help of the model described.

The reconstruction of the entire temperature curve is based on the time series of sunshine hours and a single temperature starting value $\hat{T}_{-1}=T_{-1}$ , the temperature of the month preceding the beginning of the time series under investigation since 1959, in this case December 1958.
The reconstruction is carried out using this recursion from the sunshine hours over the 768 months from January 1959 to December 2023:
$\hat{T}_i = \hat{T}_{i-1} + a\cdot \hat{T}_{i-1} + b\cdot S_i + d$ $(0\leq i < 768 ) $
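A minimal sketch of this recursion in Python, with the coefficients of the adjusted model above (the sunshine array S and the starting value are assumed to be given):

    import numpy as np

    def reconstruct(T_start, S, a=-0.4823, b=0.0493, d=-2.3520):
        """Rebuild the monthly temperatures from sunshine hours alone.
        T_start is the measured temperature of December 1958."""
        T_hat = np.empty(len(S))
        prev = T_start
        for i, s in enumerate(S):
            prev = prev + a * prev + b * s + d   # T_i = T_{i-1} + a*T_{i-1} + b*S_i + d
            T_hat[i] = prev
        return T_hat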
Here is the complete reconstruction of the temperature data in comparison with the original temperature data:

 The last 10 years are shown enlarged for a clearer presentation:

It is noticeable that the residual, i.e. the deviation of the reconstruction from the actual temperatures, appears symmetrical around 0 up to the end of the investigated period and shows no obvious systematic deviations. The measure of the error of the reconstruction is the standard deviation of the residual, which is 2.5°C. Since we are investigating a long period of 64 years, a fine analysis of the long-term trends of the original temperatures, the reconstruction and the residual can reveal an upper limit of the possible influence of CO2.

Detailed analysis of the residual

If we determine the average slope of the three curves – original temperature data, reconstruction and residual – over the entire 64-year period by fitting a regression line, we obtain the following long-term values:

  • Original temperature data: 0.0027 °C/month = 0.032 °C/year
  • Reconstructed temperature data: 0.0024°C/month = 0.029 °C/year
  • Residual: 0.00028 °C/month = 0.0034 °C/year

Of the original temperature trend, 90% is explained by the number of hours of sunshine. This leaves only 10% of the trend for other causes. Until proven otherwise, we can therefore assume that the increase in CO2 concentration is responsible for at most these 10%, i.e. for at most 0.03°C per decade over the last 64 years. Statistically, however, the contribution of the CO2 concentration cannot be considered significant. It should be borne in mind that this simple model does not take into account many influencing factors and inhomogeneities, so the CO2 concentration is not the only possible factor besides the hours of sunshine; this is why the CO2 influence is not considered statistically significant.

Extension – correction by approximation of the actual irradiation

So far, we have used the hours of sunshine as a representative of the actual energy flow. This is not entirely correct, because an hour of sunshine in winter means significantly less irradiated energy than in summer due to the much shallower angle of incidence.

The seasonal course of the weighting of the incoming energy flow has this form. The hours of sunshine must be multiplied by this weighting to obtain the energy flow.

With these monthly weightings, the model is again estimated from solar radiation and CO2. Again, the contribution of CO2 must be rejected due to lack of significance. The reconstruction of the temperature from the irradiated energy flow, however, is slightly better than the reconstruction above.

The standard deviation of the residual has been reduced to 2.1°C by correcting the hours of sunshine to the energy flow.

Possible generalization

Worldwide, the recording of sunshine hours is far less complete than that of temperature measurements. Therefore, the results for Germany cannot simply be reproduced worldwide.
However, satellites are used to measure cloud cover and the reflection of solar radiation on clouds. These data lead to similar results, namely that the increase in CO2 concentration is responsible for at most 20% of the global average temperature increase. As the global increase is on average lower than the temperature increase in Germany, this also ultimately leads to an upper limit of 0.03°C per decade for the consequences of the CO2-induced greenhouse effect.




How does the atmospheric Greenhouse Effect work?

Much has been written about the greenhouse effect and many comparisons have been made. However, much of this is misleading or even wrong.
The greenhouse effect is caused by the fact that with increasing CO2 a slightly increasing proportion of infrared radiation is emitted from the upper, cold layers of the earth’s atmosphere (i.e. the stratosphere) into space.
The facts are complicated in detail, which is why it is so easy to scare people with exaggerations, distortions or lies. Here I would like to describe the basics of the atmospheric greenhouse effect, in which CO2 plays an important role, in a physically correct way and without formulas.

Viewed from space, the temperature balance of the Earth’s surface and atmosphere is determined by

  • irradiation of short-wave, largely visible sunlight, and
  • radiation of long-wave, invisible infrared radiation.

If the energy content of the incoming radiation is equal to the energy content of the outgoing radiation, there is an equilibrium and the average temperature of the earth remains constant. Warming always takes place when either the outgoing radiation decreases or the irradiation increases, until equilibrium is restored.

Infrared radiation is the only way the Earth can emit energy (heat) into space. It is therefore necessary to understand how the mechanisms of infrared radiation work.

The mechanisms of infrared radiation into space

There are only 2 ways in which the Earth can release energy into space:

  • The molecules of the earth’s surface or the sea surface emit infrared waves at ground temperature (average 15°C = 288 K).
  • The molecules of the so-called greenhouse gases, mainly water vapor and CO2 (to a much lesser extent methane and some other gases), emit infrared waves from the atmosphere at the temperature prevailing in their environment. The other gases in the atmosphere, such as oxygen or nitrogen, are unable to emit significant amounts of infrared radiation.
    CO2 differs from water vapor in that it is only active in a small wavelength range. On the other hand, the proportion of water vapor molecules in the atmosphere decreases very quickly above an altitude of 5 km, because the water vapor condenses into clouds when it cools down and then rains out. We can see this for ourselves: in an airplane at an altitude of 10 km, we are always above the clouds, and there is virtually no water vapor above the clouds. CO2, however, is evenly mixed with the other gases, primarily oxygen and nitrogen, right up to the highest layers of the atmosphere.

CO2 and water vapor are therefore like two competing handball teams, one of which (the water vapor) is only allowed to run up to the halfway line and the other (CO2) can only move within a narrow longitudinal strip of the playing field. This narrow longitudinal strip becomes a little wider when the “CO2 team” gets more players (more CO2). The goal is the same for both teams (space) and stretches across the entire width of the pitch. As long as the ball is still far away from the goal, another player catches it rather than it entering the goal. This other player passes the ball back in a random direction. The closer the players are, the quicker the ball is caught and played back. The closer the ball gets to the goal, the further apart the players stand. This means that it is easier for the ball to get between the players and into the goal.

As long as there are other greenhouse gas molecules in the vicinity, the infrared radiation cannot reach outer space (the other molecules are too close together); it is intercepted again by the other molecules and re-emitted by them. Specifically, the infrared radiation in the lower atmosphere only has a range of around 25 m until it is intercepted again by another greenhouse gas molecule, usually a water vapor molecule or a CO2 molecule. The thinner the greenhouse gases (fewer players) become with increasing altitude, the more likely it is that the infrared radiation will reach space.

From this we can conclude that there are in principle 3 layers from which infrared radiation reaches space:

  • When the air is dry and without clouds, there is a part of the infrared called the “atmospheric window” that radiates directly from the ground into space (this is when there are no or very few water vapor players in the field),

  • between 2 and 8 km altitude, on average at 5 km altitude, is the upper edge of the clouds, from where the water vapor molecules of the clouds emit a large proportion of the infrared radiation into space at an average of 255 K = -18°C
  • the proportion of infrared radiation in the wavelength range around 15 micrometers (the narrow strip of the playing field) is transported by CO2 into the high cold layers of the stratosphere, from where it is emitted into space at around 220 K = -53°C.

This leads to a competitive situation as to whether a water molecule can radiate directly or whether its infrared radiation is still intercepted by a CO2 molecule and transmitted to the heights of the stratosphere.

The greenhouse effect

How does a growing CO2 concentration lead to reduced energy radiation into space and thus to warming?

It is important to know that the radiated energy decreases sharply with decreasing air temperature and that the temperature decreases with increasing altitude. If the CO2 concentration increases over time, the wavelength range in which the CO2 is “responsible” for radiation becomes a little wider (the narrow strip of the playing field). This means that a small part of the infrared radiation that would otherwise be emitted by the water vapor at 255 K is now emitted by the CO2 at 220 K, i.e. with significantly lower energy. As a consequence, this means that the energy of the total radiation is slightly reduced – the radiation from sunlight, which is assumed to be constant, therefore predominates and a warming effect occurs.

However, the effect is not as great as it is usually portrayed in the media:
Since the beginning of industrialization, the earth’s infrared radiation has decreased by just 2 watts/sqm, with a 50% increase in CO2 concentration from 280 ppm to 420 ppm. With an average radiation of 240 watts/sqm, that is [1] only just under 1% in 170 years.
This is the first way in which the balance mentioned at the beginning can be disturbed, namely by a change in outgoing radiation. So far, however, only to a very small extent.

The effects of changes in irradiation are greater than the greenhouse effect

The second way of disturbing the balance is through changes in irradiation.
The fluctuations in irradiation caused by changing cloud cover are up to 100 times greater than the aforementioned 2 W/sqm attributable to the greenhouse effect (which owners of photovoltaic systems can confirm). Looking at Germany: according to the German Weather Service, the number of hours of sunshine in Germany has been increasing by 1.5% per decade for 70 years [2]. In other words, in less than 10 years the effect has been greater than that of the greenhouse effect in 170 years. For a more precise numerical comparison, both measurement series must be available in the relevant period: over the last 40 years, the increase in hours of sunshine in Germany caused 6 times as much warming as the greenhouse effect. The changes in solar radiation are therefore responsible for global warming to a far greater extent than the changes in CO2 concentration.

This describes and classifies the generally known positive greenhouse effect. There is therefore no reason to use the greenhouse effect to justify fear and panic. And there is an urgent need for research, the media and politicians to look into the influence and causes of the increasing hours of sunshine. An initial, more detailed analysis of the data from the German Weather Service shows that the changes in hours of sunshine in Germany explain 90% of the monthly temperatures over the last 70 years and that the greenhouse effect in Germany has no statistically significant influence.

One important phenomenon is still missing: in the Antarctic, the increase in CO2 concentration leads to cooling, which is known as the negative greenhouse effect.

The negative greenhouse effect in the Antarctic

There is a peculiar effect when we look at the one area of the earth where the earth’s surface is at times even colder than the 220 K at which the infrared radiation of CO2 is emitted into space: In the Antarctic, where temperatures below -60°C (=213 K) are not uncommon, we actually find a negative greenhouse effect.
In other words, cooling occurs there as the CO2 concentration increases.
As the CO2 concentration increases, the proportion of infrared radiation emitted by CO2 increases as usual. However, at 220 K, the CO2 layer is now warmer than the surface of the Antarctic. This means that more heat is dissipated into space from the CO2 in the atmosphere than from the Earth’s surface below. In other words: in the Antarctic, the increase in CO2 concentration increases the heat dissipation into space, and it is therefore getting colder there, not warmer.

  1. Reason for the 240 W/sqm: https://www.zamg.ac.at/cms/de/klima/informationsportal-klimawandel/klimasystem/umsetzungen/energiebilanz-der-erde
  2. Calculation: 10·168 h / 72 years ≈ 23 h/decade => (23 h/decade) / 1544 h ≈ 1.5%/decade



Water Vapour Feedback


[latexpage]

In the climate debate, the feedback through water vapor is used as an argument to amplify the climate effect of greenhouse gases – the sensitivity to a doubling of their concentration in the atmosphere, which according to the radiative transfer equation and general consensus is at most 0.8°C – by an alleged factor of 2-6. However, this is usually not quantified more precisely; mostly only formulas with the “final feedback” are given.

Recently, David Coe, Walter Fabinski and Gerhard Wiegleb described and analyzed precisely this feedback in the publication “The Impact of CO2, H2O and Other ‘Greenhouse Gases’ on Equilibrium Earth Temperatures“. Based on their publication, this effect is derived below using partly the same and partly slightly different approaches. The results are almost identical.

All other effects that occur during the formation of water vapor, such as cloud formation, are ignored here.

The basic mechanism of water vapor feedback

The starting point is an increase in atmospheric temperature by $\Delta T_0$, regardless of the cause; typically, the greenhouse effect is assumed to be the primary cause. The argument is now that the warmed atmosphere can hold more water vapor, i.e. the saturation vapor pressure (SVP) increases, and it is assumed that consequently the water vapor concentration also increases by $\Delta H_2O$, as a linear function of the temperature change (the temperature change is so small that linearization is legitimate in any case):
$\Delta H_2O = j\cdot \Delta T_0 $
where $j$ is the proportionality constant for the water vapor concentration.
An increased water vapor concentration in turn causes a temperature increase due to the greenhouse effect of water vapor, which is linearly dependent on the water vapor concentration:
$\Delta T_1 = k\cdot \Delta H_2O $
In summary, the triggering temperature increase ∆T0 causes a subsequent increase in temperature ∆T1:
$\Delta T_1 = j\cdot k\cdot \Delta T_0 $
Since the mechanism does not depend on the cause of the triggering temperature increase, the increase by $\Delta T_1$ naturally triggers another feedback cycle:
$\Delta T_2 = j\cdot k\cdot \Delta T_1 = (j\cdot k)^2\cdot \Delta T_0$
This is repeated recursively. The final temperature change is therefore a geometric series:
$\Delta T = \Delta T_0\sum_{n=0}^\infty(j\cdot k)^n = \Delta T_0\cdot \frac{1}{1-j\cdot k} $
If $j\cdot k\ge 1$, the series would diverge and the temperature would grow beyond all limits. It is therefore important to be clear about the magnitude of these two feedback factors.

For the determination of the first factor $j$, we can apply a simplified approach by accepting the statement commonly used in the mainstream literature that for each degree C of temperature increase the water vapor content of the air may rise by up to 7%. In the German version of this post I did the explicit calculation and came to the result that the realistic maximum rise is 6% per degree of temperature increase, which has hardly any effect on the final result.

Dependence of the greenhouse effect on the change in relative humidity

Infrared radiation transport in the atmosphere is dependent on relative humidity. This is taken into account in the well-known and proven MODTRAN simulation program. With increasing humidity, the outgoing infrared radiation decreases due to the greenhouse effect of water vapor.

The decrease in radiation is linear between 60% and 100% humidity. Therefore, the increase in relative humidity from 80% to 86% is considered to determine the decrease in radiant power and the temperature increase required for compensation.

To do this, we set the parameters of the MODTRAN simulation to

  • the current CO2 concentration of 420 ppm,
  • a relative humidity of 80%,
  • and a cloud constellation that comes close to the average IR radiant power of 240 $\frac{W}{m^2}$.

The temperature offset is now increased until the reduction in outgoing IR radiation of $0.7 \frac{W}{m^2}$ is compensated for by the higher temperature. This is the case when the ground temperature is increased by 0.215°C.

A 7% higher relative humidity therefore causes a greenhouse effect, which is offset by a temperature increase of 0.215°C. Extrapolated to a (theoretical) change of 100% humidity, this results in $k=3.07$°C/100%.

The final feedback factor and the total greenhouse effect

This means that a 1 degree higher temperature causes, within one feedback cycle, an additional temperature increase of $j\cdot k = 0.215$ °C.

The geometric series leads to an amplification factor $f$ of the pure CO$_2$ greenhouse effect by
$f=\frac{1}{1-0.215} = 1.27 $

This means that the sensitivity amplified by the water vapor feedback when doubling the CO$_2$ concentration $\Delta T$ is no longer $\Delta T_0=0.8$°C, but
$\Delta T = 1.27\cdot 0.8$ °C = 1.02°C $\approx$ 1°C
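The whole chain of numbers can be verified in a few lines; a minimal sketch using the values derived above:

    j = 0.07                 # relative humidity increase per deg C (upper bound from the literature)
    k = 3.07                 # deg C of warming compensating a 100 % humidity increase (MODTRAN)
    f = 1.0 / (1.0 - j * k)  # sum of the geometric series, valid because j*k < 1
    print(round(f, 2))               # 1.27
    print(round(f * 0.8, 2))         # 1.02 deg C amplified sensitivity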

This result does not take into account the increase in temperature caused by the higher water vapor concentration.




The Extended Carbon Sink Model (work in progress)


[latexpage]

Introduction – potential deficit of the simple linear carbon sink model

With the simple linear carbon sink model the past relation between anthropogenic emissions and atmospheric CO2 concentration can be excellently modelled, in particular when using the high quality emission and concentration data after 1950.
The model applies mass conservation to the CO2 data, where $C_i$ is the CO2 concentration in year $i$, $E_i$ are the anthropogenic emissions during year $i$, $N_i$ are all other CO2 emissions during year $i$ (mostly natural emissions), and $A_i$ are all absorptions during year $i$. We assume emissions caused by land use change to be part of the natural emissions, which means that they are assumed to be constant. Given that their measurement error is very large, this should be an acceptable assumption.
With the concentration growth $G_i$
$G_i = C_{i+1}-C_i $
we get from mass conservation the yearly balance
$E_i + N_i - A_i = G_i$
$E_i$ and $G_i$ are measured from known data sets (IEA and Mauna Loa), and we define the effective sink $S_i$ as
$S_i = A_i - N_i$
The atmospheric carbon balance therefore is
$E_i - G_i = S_i$
The effective sink is modelled as a linear function of the CO2 concentration by minimizing
$\sum_i (S_i - \hat{S}_i)^2$
w.r.t. $a$ and $n$, where
$\hat{S}_i = a\cdot C_i + n $
The equation can be re-written to
$\hat{S}_i = a\cdot (C_i - C^0)$
where
$C^0 = -\frac{n}{a}$
is the average reference concentration represented by the oceans and the biosphere. The sink effect is proportional to the difference between the atmospheric concentration and this reference concentration. In the simple linear model the reference concentration is assumed to be constant, implying that these reservoirs are close to infinite. Up to now this is supported by the empirical data.
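A minimal sketch of this estimation, assuming yearly arrays E (anthropogenic emissions converted to ppm/year) and C (concentration in ppm) are given:

    import numpy as np

    G = C[1:] - C[:-1]               # concentration growth G_i
    S = E[:-1] - G                   # effective sink S_i = E_i - G_i
    a, n = np.polyfit(C[:-1], S, 1)  # least-squares line S_hat = a*C + n
    C0 = -n / a                      # equilibrium reference concentration C^0
    print(a, C0)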
This procedure is visualized here:

This results in an excellent model reconstruction of the measured concentration data:

It is important to note that the small error since 2010 is an over-estimation of the actual measured data, which means that the estimated sink effect is under-estimated. Therefore we can safely say that currently we do not see the slightest trend of a possible decline of the two large sink systems, the ocean sink and the land sink from photosynthesis.

Nevertheless it can be argued that in the future both sink systems may enter a state of saturation, i.e. lose the ability to absorb surplus carbon from the atmosphere. As a matter of fact it is claimed by the architects of the Bern model and representatives of the IPCC that the capacity of the ocean is not larger than 5 times the capacity of the atmosphere, and that therefore the future ability to take up extra CO2 will rapidly decline. We do not see this claim justified by data, but before we can prove that the claim is not justified, we will adapt the model to make it capable of handling varying sink capacities.

Extending the model with a second finite accumulating box

In order to take care of the finite size of both the ocean and the land sinks, we no longer pretend that these sink systems are infinite, but assume a second box besides the atmosphere with a concentration $C^0_i$, taking up all CO2 from both sink systems. The box is assumed to be $b$ times larger than the atmosphere; therefore, for a given sink-related change of the atmospheric concentration ($-S_i$), we get an increase of concentration in the “sink box” of the same amount ($S_i$) but reduced by the factor $b$:
$ C^0_{i+1} = C^0_i + \frac{1}{b}\cdot S_i $
The important model assumption is that $C^0_i$ is the reference concentration, which determines future sink ability.
The initial value is the previously calculated equilibrium concentration $C^0$
$C^0_0 = C^0$
Therefore by evaluation of the recursion we get
$C^0_i = C^0 + \frac{1}{b}\sum_{j=1}^i S_j$
The main modelling equation is adapted to
$\hat{S}_i = a\cdot (C_i - C^0_i)$
or
$\hat{S}_i = a\cdot (C_i - \frac{1}{b}\sum_{j=1}^i S_j) + n$
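Since the effective sinks $S_i$ are measured quantities, the model remains linear for any fixed $b$ and can still be estimated by ordinary least squares. A sketch, with the same assumed arrays E and C as above (whether the cumulative sum includes the current year is a modelling choice of this sketch):

    import numpy as np

    def fit_extended(C, E, b):
        """Fit S_hat_i = a*(C_i - cumsum(S)_i / b) + n for a given box size b."""
        G = C[1:] - C[:-1]
        S = E[:-1] - G                       # measured effective sinks
        C_eff = C[:-1] - np.cumsum(S) / b    # "effective concentration" seen by the sinks
        a, n = np.polyfit(C_eff, S, 1)
        return a, n

    # Compare the three scenarios of the post:
    # for b in (5, 10, 50): print(b, fit_extended(C, E, b))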

Obviously the measurements must start at a time when the anthropogenic emissions are still close to 0. Therefore we begin with the measurements from 1850, being aware that the data before 1959 are much less reliable than the later ones. There are reasons to assume that before 1950 emissions induced by land use change played a stronger role than later. But there are also strong reasons to assume that the estimated IEA values for these emissions are too large; in order to reach a reference value $C^0$ close to 280 ppm, an adequate weight for the land use change emissions is 0.5.

Results for different scenarios

We will now evaluate the actual emission and concentration measurements for 3 different scenarios, for b=5, b=10, and b=50.
The first scenario (b=5) is considered to be the worst case scenario, rendering results similar to those of the Bern model.
The last scenario (b=50) corresponds to the “naive” view that the CO2 in the oceans is equally distributed, making use of the full potential buffer capacity of the oceans.
The second scenario (b=10) is somewhere in between.

Scenario b=5: Oceans and land sinks have 5 times the atmospheric capacity

The “effective concentration” used for estimating the model reduces the measured concentration by the weighted cumulative sum of the effective sinks with $b=5$. We see that before 1900 there is hardly any difference from the measured concentration:

First we reconstruct the original data from the model estimation:

Now we calculate the future scenarios:

Constant emissions after 2023

In order to understand the reduced sink factor, we first investigate the case where emissions remain constant after 2023. By the end of 2200 the CO2 concentration would be close to 600 ppm, with no tendency to flatten.

Emission reductions to reach equilibrium and keep permanently constant concentration

It is easy to see that under the given conditions of a small CO2 buffer, the concentration keeps increasing when emissions are constant. The interesting question is how the emission rate has to be reduced in order to reach a constant concentration.
From the model setup one would assume that the yearly emission reduction should be $\frac{a}{b} \approx 0.005$, and indeed, with a yearly emission reduction of 0.5% after 2023, we eventually reach a constant concentration and hold it. This means that emission rates have to be cut in half within 140 years – provided the pessimistic assumption $b=5$ turns out to be correct:
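A sketch of such a projection under the model assumptions above; C_start and E_start are the 2023 values, and a, n are taken from the fit for the chosen b (all assumed given):

    def project(C_start, E_start, a, n, b, years, cut=0.005):
        """Project the concentration with emissions shrinking by `cut` per year."""
        C, E, cumS = C_start, E_start, 0.0
        path = []
        for _ in range(years):
            S = a * (C - cumS / b) + n   # modelled effective sink
            C += E - S                   # mass balance: growth = emissions - sinks
            cumS += S                    # fill the accumulating box
            E *= 1.0 - cut               # yearly emission reduction
            path.append(C)
        return path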

Fast reduction to 50% emissions, then keeping concentration constant

An interesting scenario is the one which cuts emissions to half the current amount within a short time and then tries to keep the concentration close to the current level:

Scenario b=10: Oceans and land sinks have 10 times the atmospheric capacity

Assuming the capacity of the (ocean and plant) CO2 reservoir to be 10-fold results, as expected, in half the sink reduction.

It does not significantly change the quality of the model approximation to the actual CO2 concentration data:

Constant emissions after 2023

With constant emissions the concentration now stays below 550 ppm by the end of 2200, but it is still growing.

Emission reductions to reach equilibrium and keep permanently constant concentration

The yearly emission reduction rate can now be lowered to 0.2% in order to compensate for the sink reduction:

Fast reduction to 50% emissions, then keeping concentration constant

This is easier to see in the scenario which swiftly reduces emissions to 50%: with a peak concentration below 440 ppm, the further slow reduction of 0.2% p.a. keeps the concentration at about 415 ppm.

Scenario b=50: Oceans and land sinks have 50 times the atmospheric capacity

This scenario comes close to the original linear concentration model, which does not consider finite sink capacity.

Again, the reconstruction of the existing data shows no large deviation:

Constant emissions after 2023
Emission reductions to reach equilibrium and keep permanently constant concentration

We only need a yearly reduction of 0.05% to reach a permanently constant CO2 concentration of under 500 ppm:

Fast reduction to 50% emissions, then keeping concentration constant

This scenario hardly increases today’s CO2 concentration and eventually approaches 400 ppm:

How to decide which model parameter b is correct?

It appears that with the measurement data available up to now it cannot be decided whether the sink reservoirs are finite, and if so, how limited they are.

The most sensitive detector based on simple, undisputed measurements appears to be the concentration growth. It can be measured from the actually measured data in the past, but also in the modelled data at any time. When comparing the concentration growth under future constant emissions for the two cases b=5 and b=50, we get this result:

This implies that with the model b=5 the concentration growth will never fall below 0.8 ppm, whereas with the model b=50 the concentration growth decreases to approx. 0.1 ppm. But these large differences will only show up after many years, apparently not before 2050.

Preliminary Conclusions

Due to the fact that the measurement data up to the current time can be reproduced well by both the Bern model and the simple linear sink model, it cannot yet be reliably decided from current data how large the effective size of the carbon sinks is. When emissions remain constant for a longer period of time, we expect to be able to perform a statistical test for the most likely value of the sink size factor b.

Nevertheless this extended sink model allows us to calculate the optimal rate of emission reduction for a given model assumption. Even in the worst case the required emission reduction is so small that any short-term “zero emission” targets are not justified.

A related conclusion is the possibility of re-calculating the available CO2 budget. Given a target concentration C$_{target}$, the total budget is the amount of CO2 required to fill up both the atmosphere and the accumulating box to the target concentration. Obviously the target concentration must be chosen in such a way that it is compatible with the environmental requirements.




A Computational Model for the CO2 Dependence on Temperature in the Vostok Ice Cores


[latexpage]

The Vostok ice core provides a more than 400,000-year view into climate history, with several cycles between ice ages and warm periods.

It has become clear that the CO2 data lag the temperature data by several centuries. One difficulty arises from the fact that CO2 is measured in the gas bubbles, whereas temperature is determined from a deuterium proxy in the ice. Therefore the age of the two parameters is determined in different ways – for CO2 there is a “gas age”, whereas the temperature series is assigned an “ice age”. There are estimates of how much older the “ice age” is in comparison to the gas age, but there is uncertainty, so we will have to tune the relation between the two time scales.

Preprocessing the Vostok data sets

In order to perform model-based computations with the two data sets, the original data must be converted into uniformly sampled time series. This is done by means of linear interpolation. The sampling interval is chosen as 100 years, which is approximately the sampling interval of the temperature data. Apart from this, the data sets must be reversed, and the sign of the time axis must be set to negative values.
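A sketch of this preprocessing with numpy; the arrays t_temp/v_temp and t_co2/v_co2 are assumed to hold the raw sample ages (in years before present, ascending) and the corresponding values:

    import numpy as np

    grid = np.arange(-370_000, -10_000 + 100, 100)    # common 100-year grid, negative ages

    # negate the ages and reverse the arrays so that np.interp sees ascending x values
    temp_r = np.interp(grid, -t_temp[::-1], v_temp[::-1])
    co2_r = np.interp(grid, -t_co2[::-1], v_co2[::-1])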
Here is the re-sampled temperature data set from -370000 years to -10000 years overlayed over the original temperature data:

And here the corresponding CO2-data set:

The two data sets are now superimposed:

Data model

Due to the very good predictive value of the temperature dependent sink model for current emission, concentration, and temperature data (equation 2), we will use the same model based on the CO2 mass balance and a possible linear dependence of CO2 changes on concentration and temperature, but obviously without the anthropogenic emissions. Also, the time interval is no longer a single year but a century.

G$_i$ is the growth of the CO2 concentration C$_i$ during century $i$:

$G_i = C_{i+1}- C_i$

T$_i$ is the average temperature during century $i$. The model equation without anthropogenic emissions is:

$-G_i = x_1\cdot C_i + x_2\cdot T_i + const$

After estimating the 3 parameters $x_1$, $x_2$, and $const$ from G$_i$, C$_i$, and T$_i$ by means of ordinary least squares, the modelled CO$_2$ data $\hat{C_i}$ are recursively reconstructed by means of the model, the first actual concentration value of the data sequence $C_0$, and the temperature data:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} - x_1\cdot \hat{C_i} - x_2\cdot T_i - const$

Results – reconstructed CO$_2$ data

The standard deviation of $\{\hat{C_i}-C_i\}$ measures the quality of the reconstruction. This standard deviation becomes minimal when the temperature data are shifted 1450..1500 years into the past:
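The shift optimization can be sketched as follows; co2_r and temp_r are the resampled series from above, the shift s is counted in 100-year samples, and the sign convention of the shift is an assumption of this sketch:

    import numpy as np
    import statsmodels.api as sm

    def residual_std(C, T, s):
        """Pair C_i with the temperature s samples earlier, fit
        -G_i = x1*C_i + x2*T_i + const, reconstruct recursively and
        return the standard deviation of the reconstruction error."""
        Cs, Ts = (C[s:], T[:-s]) if s > 0 else (C, T)
        G = Cs[1:] - Cs[:-1]
        X = sm.add_constant(np.column_stack([Cs[:-1], Ts[:-1]]))
        const, x1, x2 = sm.OLS(-G, X).fit().params
        Ch = np.empty_like(Cs)
        Ch[0] = Cs[0]
        for i in range(len(Cs) - 1):
            Ch[i + 1] = Ch[i] - x1 * Ch[i] - x2 * Ts[i] - const
        return np.std(Ch - Cs)

    # Scan shifts of 0..2000 years and pick the one with the smallest error:
    # best = min(range(21), key=lambda s: residual_std(co2_r, temp_r, s))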

Here are the corresponding estimated model parameters and the statistical quality measures from the Python OLS package:

The interpretation is that there is a carbon sink of 1.3% per century, and an emission increase of 0.18 ppm per century per degree of temperature increase.

Modelling the sinks (-G$_i$) results in this diagram:

And the main result, the reconstruction of the CO$_2$ data from the temperature extended sink model, looks quite remarkable:

Equilibrium Relations

The equilibrium states are more meaningful than the incremental changes. The equilibrium is defined by the equality of CO2 sources and sinks, resulting in $G_i = 0$. This creates a linear relation between the CO2 concentration C and the temperature T:

$C = \frac{0.1799\cdot T + 3.8965}{0.0133}$ ppm

For the temperature anomaly $T=0$ we therefore get the CO2 concentration of

$C_{T=0}=\frac{3.8965}{0.0133}\ ppm = 293\ ppm$.
The difference between this and the modern value can be explained by different temperature references. Both levels are remarkably close, considering the very different environmental conditions.

And the relative change is
$\frac{dC}{dT} = 13.5 \frac{ppm}{^\circ C} $

This is considerably different from the modern data, where we got $66.5 \frac{ppm}{°C}$. There is no immediate explanation for this deviation. We must, however, consider that the time scales differ by a factor of at least 100, so we can expect totally different mechanisms to be at work.




Temperature Dependent CO2 Sink Model


[latexpage]

In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account?

The model shows no long-term temperature dependence trend, only a short-term cyclical dependence. A long-term trend in temperature dependence over the last 70 years is not discernible even after careful analysis.
In the primary publication, it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible. We re-visit the sink model in order to find a way to consider temperature dependence adequately.

Original temperature-independent model

For setting up the equation for mass conservation of CO2 in the atmosphere (see equations 1,2,3 of the publication), we split the total yearly emissions into anthropogenic emissions $E_i$ in year $i$, and all other, predominantly natural emissions $N_i$ . For simplification, the — more unknown than known — land use caused emissions are included in the natural emissions.
The increase of CO2 in the atmosphere is
$G_i = C_{i+1} - C_i$,
where $C_i$ is atmospheric CO2 concentration at the beginning of year $i$.
With absorptions $A_i$ the mass balance becomes:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the natural emissions was modeled linearly with a constant absorption coefficient $a^0$ expressing the proportionality with concentration $C_i$ and a constant $n^0$ for the annual natural emissions
\begin{equation}E_i - G_i = a^0\cdot C_i - n^0\end{equation}

The estimated parameters are:
$a^0=0.0183$,
$n^0=5.2$ ppm

While the proportionality between absorption and concentration by means of an absorption constant $a^0$ is physically very well founded, the assumption of constant natural emissions appears arbitrary.
Effectively this assumed constant contains the sum of all emissions except the explicit anthropogenic ones and also all sinks that are balanced during the year.
Therefore it is enlightening to calculate the estimated natural emissions $\hat{N_i}$ from the measured data and the mass balance equation with the estimated absorption constant $a^0=0.0183$:
$\hat{N_i} = G_i - E_i + a^0\cdot C_i$

The mean value of $\hat{N_i}$ results in the constant model term $n^0$. A slight smoothing reveals a cyclic curve. Roy Spencer has attributed these fluctuations to El Nino. A priori it cannot be said whether the fluctuations are attributable to the absorptions $A_i$ or to the natural emissions $N_i$. In any case no long-term trend is seen.

The reconstruction $\hat{C_i}$ of the measured concentration data is done recursively from the model and the initial value taken from the original data:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i + n^0 - a^0\cdot \hat{C_i}$
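A minimal sketch of this recursion, with the estimated parameters above (the yearly emission array E in ppm/year and the initial concentration are assumed given):

    import numpy as np

    def reconstruct(C0, E, a0=0.0183, n0=5.2):
        """C_hat_{i+1} = C_hat_i + E_i + n0 - a0*C_hat_i"""
        C_hat = np.empty(len(E) + 1)
        C_hat[0] = C0
        for i, e in enumerate(E):
            C_hat[i + 1] = C_hat[i] + e + n0 - a0 * C_hat[i]
        return C_hat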

Extending the model by Temperature

The sink model is now extended by a temperature term $T_i$:
\begin{equation}E_i - G_i = a\cdot C_i + b\cdot T_i + c\end{equation} These 3 regression parameters can be estimated directly, but we do not know how the resulting numbers relate to the estimation without temperature dependence. Therefore we will motivate and build this model in an intuitive way.

The question arises why and how sources or sinks should be dependent on El Nino; this implies a temperature dependence. But why can’t the undeniable long-term temperature trend be seen in the model? Why is there no trend in the estimated natural emissions?
The answer lies in the fact that CO2 concentration and temperature are highly correlated, at least since 1960, i.e. during the time when the CO2 concentration has been measured with high quality:

Therefore any long-term trend dependent on temperature would be attributed to the CO2 concentration when the model is based on concentration. This has been analysed in detail. We make no claim of causality between CO2 concentration and temperature, in either direction, but merely recognise their strong correlation. The optimal linear model of the temperature anomaly based on the CO2 concentration and the HadSST4 temperature data is:
$T_i^C = d\cdot C_i + e$
with $d=0.0082 \frac{^{\circ} C}{ppm}$ and $e = -2.7$°C

The actual temperature $T_i$ is the sum of the modelled temperature $T_i^C$ and the residual temperature $T_i^R$.
Therefore the new model equation becomes
$E_i - G_i = a\cdot C_i + b\cdot (T_i^C + T_i^R) + c$
Replacing $T_i^C$ with its CO2-concentration proxy
$E_i - G_i = a\cdot C_i + b\cdot (d\cdot C_i + e + T_i^R) + c$
and re-arrangement leads to:
$E_i - G_i = (a + b\cdot d)\cdot C_i + b\cdot T_i^R + (c + b\cdot e)$.
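In code, the proxy decomposition and the fit of the re-arranged equation might look like this; yearly arrays E, C and the HadSST4 anomaly T are assumed:

    import numpy as np
    import statsmodels.api as sm

    d, e = np.polyfit(C, T, 1)        # CO2 proxy for temperature: T^C = d*C + e
    TR = T - (d * C + e)              # zero-mean residual temperature T^R

    G = C[1:] - C[:-1]
    y = E[:-1] - G                    # left-hand side E_i - G_i
    X = sm.add_constant(np.column_stack([C[:-1], TR[:-1]]))
    const, abd, b = sm.OLS(y, X).fit().params   # (c + b*e), (a + b*d), b
    a = abd - b * d                   # recover the full concentration coefficient
    c = const - b * e
    print(a, b, c)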

Now the temperature part of the model depends only on zero mean variations, i.e. without trend.
All temperature trend information is covered by the coefficient of $C_i$. This model corresponds to Roy Spencer’s observation that much of the cyclic variability is explained by El Nino, which is closely related to the “residual temperature” $T_i^R$.
With $b=0$ we would have the temperature independent model above, and the coefficients of $C_i$ and the constant term correspond to the known estimated parameters. Due to the fact that $T_i^R$ does not contain any trend, the inclusion of the temperature dependent term does not change the other coefficients.

The estimated parameters of the last equation are:
$a + b\cdot d = 0.0183 = a^0$ ,
$b = -2.9\frac{ppm}{^{\circ}C}$,
$c + b\cdot e = -5.2 ppm = -n^0 $ .

The first and last parameters correspond to those of the temperature independent model. But now, from the estimated coefficient $b$, we can evaluate the contribution of the temperature $T_i$ to the sinks and the natural emissions.

The finally determined parameters are
$a = a^0 - b\cdot d = 0.0436$,
$b = -2.9 \frac{ppm}{^{\circ}C}$,
$c = -n^0 - b\cdot e = -13.6\ ppm$

It is quite instructive how closely the yearly variations of the temperature match the variations of the measured sinks:

The smoothed residual is now mostly close to 0, with the exception of the Pinatubo eruption (after 1990), which is the most dominant unaccounted-for signal after application of the model. Curiously, in 2020 there is a reduced sink effect, most likely due to the higher average temperature, effectively compensating the reduced emissions during the Covid lockdowns.
The model reconstruction of the concentration is now extended by the temperature term:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i - a\cdot \hat{C_i} - b\cdot T_i - c$

This is confirmed when looking at the reconstruction. The reconstruction only deviates around 1990, due to the missing sink contribution from the Pinatubo eruption, but otherwise follows the shape of the concentration curve precisely. This is an indication that the concentration+temperature model is much better suited to model the CO2 concentration.
In order to compensate for the deviations after 1990, the sink effect due to Pinatubo, $A_i^P$, must be considered. It is introduced as a negative emission signal into the recursive modelling equation:
$\hat{C_{i+1}} = \hat{C_i} + E_i - A_i^P - a\cdot \hat{C_i} - b\cdot T_i - c$
This reduces the deviations of the model from the measured concentration significantly:

Consequences of the temperature dependent model

The concentration dependent absorption parameter is in fact more than twice as large as the total absorption parameter, and increasing temperature increases the natural emissions. As long as the temperature is correlated with the CO2 concentration, the two trends cancel each other, and the effective sink coefficient appears invariant w.r.t. temperature.

The extended model becomes relevant when temperature and CO2 concentration diverge.

If the temperature rises faster than according to the above CO2 proxy relation, we can expect a reduced sink effect, while with temperatures below the expected value of the proxy the sink effect will increase.

As a first hint for further research we can estimate the temperature dependent equilibrium concentration based on current measurements. It is given by (anthropogenic emissions and concentration growth are 0 by definition):
$a\cdot C + b\cdot T + c = 0$
$C = \frac{-b\cdot T - c}{a}$
For $T = 0$° (= 14°C worldwide average temperature) we get the no-emissions equilibrium concentration
$C = \frac{-c}{a} = \frac{13.6}{0.0436}\ ppm = 312\ ppm$

The temperature sensitivity is the change of the equilibrium concentration for a 1° temperature change:
$\frac{\Delta C}{\Delta T} = \frac{-b}{a} = 66.5 \frac{ppm}{°C}$
Considering the fact that the temperature anomaly was approx. $T = -0.5$° in 1850, this corresponds very well with the assumed pre-industrial equilibrium concentration of 280 ppm.

A model for paleo climate?

An important consequence of the temperature enhanced model concerns the understanding of paleo climate, as represented e.g. in the Vostok ice core data:

Without analysing the data in detail, the temperature dependence of the CO2 concentration gives us a tool for, e.g., estimating the equilibrium CO2 concentration depending on temperature. Stating the obvious, it is clear that here the CO2 concentration is controlled by temperature and not the other way round – the time lag between temperature changes and concentration changes is several centuries.

The Vostok data have been analysed with the same model of concentration and temperature dependent sinks and natural sources. Although the model parameters are substantially different due to the totally different time scale, the measured CO2 concentration is nicely reproduced by the model, driven entirely by temperature changes:




The inflection point of CO2 concentration


[latexpage]

And rising and rising…?

At first glance, the atmospheric CO2 concentration is constantly rising, as shown by the annual mean values measured at Mauna Loa (ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt):

The central question that arises is whether the concentration is growing faster and faster, i.e. whether more is being added each year. If so, the curve would be convex, i.e. curved upwards.

Or is the annual increase in concentration getting smaller and smaller? Then the curve would be concave, i.e. curved downwards.

Or is there a transition, i.e. an inflection point in the mathematical sense? This could be recognized by the fact that the annual increase initially grows and then decreases from a certain point in time.

At first glance, the overall curve appears convex, which means that the annual increase in concentration appears to grow with each year.

The answer to this question is crucial for the question of how urgent measures to curb CO2 emissions are.

Closer examination with the measured annual increase

To get a more accurate impression, we calculate the — raw and slightly smoothed — annual increase in CO2 concentration:

This confirms that until 2016 there was a clear trend towards ever higher annual concentration increases, from just under 0.75 ppm/year in 1960 to over 2.5 ppm/year in 2016.

Since 2016, however, the annual increase has been declining, initially slightly, but significantly more strongly in 2020 and 2021. The corona-related decline in emissions certainly plays a role here, but it does not explain the decline that began in 2016.

There is therefore an undisputed inflection point in the concentration curve in 2016, i.e. a trend reversal from increasing concentration growth to decreasing concentration growth. Is there a satisfactory explanation for this? This is essential, because if we can foresee that the trend of decreasing concentration growth will continue, then it is foreseeable that the concentration will stop increasing at some point, and the goal of the Paris Climate Agreement, the balance between CO2 sources and CO2 sinks, can be achieved in the foreseeable future.

Explanation due to stagnating emissions

In 2021, as part of the Global Carbon Brief project, Zeke Hausfather revised the values of global CO2 emissions over the last 20 years based on new findings, with the important result that global emissions have been constant for 10 years within the limits of measurement accuracy:

To assess the implications of this important finding, one needs to know the relationship between emissions and CO2 concentration.

From my own research on this in a publication and in a subsequent blog post, it follows that the increase in concentration results from the emissions and absorptions, which are proportional to the CO2 concentration.

This model has also been described and published in a similar form by others:

Trivially, it follows from the conservation of mass that the concentration $C_i$ at the end of the year $i$ results from the concentration of the previous year $C_{i-1}$, the natural emissions $N_i$, the anthropogenic emissions $E_i$ and the absorptions $A_i$:
\begin{equation}\label{mass_conservation}C_i = C_{i-1} + N_i + E_i - A_i \end{equation} This directly results in the effective absorption calculated from the emissions and the measured increase in concentration:
\begin{equation}\label{absorption_measurement}A_i - N_i = E_i - (C_i - C_{i-1}) \end{equation} Assuming constant annual natural emissions
$N_i = n$
and the linear model assumption, i.e. that the absorptions are proportional to the concentration of the previous year,
$A_i = a\cdot C_{i-1}$
the absorption model is created (these two assumptions are explained in detail in the publication above), where $n = a\cdot C_0$ :
\begin{equation}\label{absorption_equ}A_i - N_i = a\cdot(C_{i-1} - C_0)\end{equation} with the result $a=0.02$ and $C_0 = 280\ ppm$. In this calculation, emissions due to land use changes are not taken into account. This explains the numerical differences between the result and those of the cited publications. The omission of land-use changes is justified by the fact that in this way natural emissions lead to the pre-industrial equilibrium concentration of 280 ppm.

With this model, the known concentration between 2000 and 2020 is projected very accurately from the data between 1950 and 2000:

Growth rate of the modelled concentration

The growth rate of the modelled concentration $G^{model}_i$ is obtained by rearranging the model equation:
$G^{model}_i = E_i - a\cdot C_{i-1} + n$
This no longer shows the cyclical fluctuations caused by El Nino:

The global maximum remains, but the year of the maximum has moved from 2016 to 2013.
These El Nino-adjusted concentration changes confirm Zeke Hausfather’s statement that emissions have indeed been constant for 10 years.

Evolution of CO2 concentration at constant emissions

In order to understand the inflection point of the CO2 concentration, we calculate the predicted course under the assumption of constant emissions $E_i = E$, using the equations (\ref{absorption_measurement}) and (\ref{absorption_equ}):
\begin{equation}\label{const_E_equ}C_i - C_{i-1} = E - a\cdot(C_{i-1} - C_0)\end{equation} The left-hand side describes the increase in concentration. On the right-hand side, an amount that increases with the concentration $C_{i-1}$ is subtracted from the constant emissions $E$, which means that the increase in concentration decreases as the concentration rises. This can be illustrated with a bank account into which a fixed amount is deposited every year while a fixed percentage of the balance is withdrawn. As soon as the concentration reaches the value $\frac{E}{a} + C_0$, the equilibrium state is reached in which the concentration no longer increases, i.e. the often-cited “net zero” situation. With current emissions of 4.7 ppm, “net zero” would be reached at 515 ppm, while the “Stated Policies” emissions scenario of the International Energy Agency (IEA), which envisages a slight reduction in the future, reaches equilibrium at 475 ppm, as described in the publication above. According to the IEA’s forecast data, this will probably be the case in 2080:
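The equilibrium value is a one-line check with the numbers from the post:

    a, C0 = 0.02, 280.0   # fitted absorption coefficient and reference concentration
    E = 4.7               # current constant emissions in ppm/year
    print(E / a + C0)     # "net zero" equilibrium concentration: 515.0 ppm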

According to this, constant emissions are sufficient to explain a concave course of the CO2 concentration, as we have observed since 2016. At the same time, this shows that CO2 absorption does indeed increase with increasing concentration.




Invariance of natural CO2 sources and sinks regarding the long-term temperature trend


[latexpage]

In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account? This is because the model does not include any long-term temperature dependence, only a short-term cyclical dependence. A long-term trend in temperature dependence over the last 70 years is not discernible even after careful analysis.
In the underlying publication, it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible, and why it is not recognizable from the statistical analysis. This is discussed in this article.

Original temperature-independent model

The simplified form of CO2 mass conservation in the atmosphere (see equations 1, 2, 3 of the publication) with anthropogenic emissions $E_i$ in year $i$, the other, predominantly natural emissions $N_i$ (for simplicity, land-use emissions are counted among the natural emissions), the increase of CO2 in the atmosphere $G_i = C_{i+1} - C_i$ (where $C_i$ is the atmospheric CO2 concentration), and the absorptions $A_i$ is:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the other emissions was modeled linearly, with a constant absorption coefficient $a$ and a constant $n$ for the annual natural emissions:
$A_i - N_i = a\cdot C_i + n$

While the absorption constant and the linear relationship between absorption and concentration are physically very well founded and proven, the assumption of constant natural emissions appears arbitrary. Therefore, instead of assuming a constant term $n$, it is enlightening to compute the residual from the measured data and the estimated absorption constant $a$:
$N_i = G_i - E_i + a\cdot C_i$

The mean value of $N_i$ yields the constant model term $n$. A slight smoothing reveals a periodic curve. Roy Spencer has attributed these fluctuations to El Nino, although it is not clear whether they belong to the absorptions $A_i$ or the natural emissions $N_i$. But no long-term trend is discernible. Therefore, the question must be clarified as to why short-term temperature dependencies are present, while long-term global warming appears to have no correspondence in the model.
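As a sketch, the residual can be computed directly from the measured series; `E` and `C` are assumed annual arrays of equal length (ppm/yr and ppm), and `a` a previously fitted absorption coefficient:

```python
import numpy as np

def natural_emission_residual(E, C, a):
    """Residual N_i = G_i - E_i + a*C_i, with growth G_i = C_{i+1} - C_i."""
    G = np.diff(C)                   # one fewer entry than C
    return G - E[:-1] + a * C[:-1]   # align E and C with the years of G

# The mean of the returned series corresponds to the constant model term n;
# light smoothing reveals the periodic (El Nino) component.
```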

Temperature-dependent model

We now extend the model by additionally allowing a linear temperature dependence for both the absorptions $A_i$ and the other emissions $N_i$. Since our measurement data only provide their difference, we can represent the temperature dependence of this difference as a single linear function of the temperature $T_i$, i.e. $b\cdot T_i + d$: if both $A_i$ and $N_i$ are linear in temperature, their difference is again a linear expression. Accordingly, the extended model has this form:
$A_i - N_i = a\cdot C_i + n + b\cdot T_i + d$
In principle, $n$ and $d$ could be combined into a single constant. However, since $d$ depends on the temperature scale used and $n$ on the unit of measurement of the CO2 concentration, we keep two separate constants.

CO2 concentration as a proxy for temperature

As already explained in the publication in section 2.3.2, there is a high correlation between CO2 concentration and temperature. Where this correlation comes from, i.e. whether there is a causal relationship (and in which direction) is irrelevant for this study. However, we are not establishing the correlation between $T$ and $log(C)$ here, but between $T$ (temperature) and $C$ (CO2 concentration without logarithm).

As a result, the temperature anomaly can be approximated from the concentration by the linear function
$T_i = e\cdot C_i + f$
with
$e = 0.0083$, $f = -2.72$.
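These proxy coefficients can be reproduced with an ordinary linear regression of the temperature anomaly on the concentration; a sketch, where `C` and `T` are assumed annual arrays of equal length:

```python
import numpy as np

def fit_temperature_proxy(C, T):
    """Least-squares fit of T = e*C + f."""
    e, f = np.polyfit(C, T, deg=1)   # degree-1 fit returns [slope, intercept]
    return e, f

# With annual concentration and temperature-anomaly data this yields
# values close to e = 0.0083 and f = -2.72, as quoted above.
```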

Use of the CO2 proxy in the temperature-dependent equation

If we now experimentally insert the proxy function for the temperature into the temperature-dependent equation, we obtain:
$A_i - N_i = a\cdot C_i + n + b\cdot (e\cdot C_i + f) + d$
and thus
$A_i - N_i = (a + b\cdot e)\cdot C_i + (n + b\cdot f + d)$
The expression on the right-hand side now has the same form as the original equation, i.e.
$A_i - N_i = a'\cdot C_i + n'$
with
$a' = a + b\cdot e$
$n' = n + b\cdot f + d$
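The rearrangement can be verified symbolically, for example with sympy (a minimal sketch):

```python
import sympy as sp

a, n, b, d, e, f, C = sp.symbols("a n b d e f C")

extended = a*C + n + b*(e*C + f) + d       # temperature-dependent model
collapsed = sp.collect(sp.expand(extended), C)
print(collapsed)                           # -> C*(a + b*e) + b*f + d + n
```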

Conclusions

Therefore, with a linear dependence of temperature on CO2 concentration, temperature effects of sinks and sources cannot be distinguished from concentration effects; both are contained in the “effective” absorption constant $a$ and in the constant of natural emissions $n$. The simple source-and-sink model therefore already contains all linear temperature effects.
This explains the astonishing independence of the model from the global temperature increase of the last 50 years.
This correlation also suggests that the absorption behavior of the sinks of atmospheric CO2 will not change in the future.

However, if we want to know exactly how temperature affects the sources and sinks individually, other data sources must be used. Thanks to the correlation found, this knowledge is not necessary for forecasting future CO2 concentrations from anthropogenic emissions.




Emissions and the carbon cycle

In the climate discussion, the so-called “CO2 footprint” of living beings, especially humans and farm animals, is increasingly declared a problem, to the point of:

  • discrediting the eating of meat,
  • slaughtering farm animals (e.g. in Ireland),
  • or even discouraging young people from having children.

This discussion is based on false premises. It pretends that exhaling CO2 has the same “climate-damaging” quality as burning coal or petroleum.
A closer analysis of the carbon cycle shows the difference.

The carbon cycle

All life on earth is made up of carbon compounds.
The beginning of the so-called food chain is plants, which use photosynthesis to produce mainly carbohydrates, and in some cases fats and oils, from CO2 in the atmosphere, thus storing both carbon and energy.

The further processing of these carbon compounds is divided into several branches, in each of which a conversion back into CO2 takes place:

  • the immediate energy consumption of the plant, the “plant respiration”,
  • the — mainly seasonal — decay of part or all of the plant, and humus formation,
  • the energy supply of animals and humans as food; here, apart from the direct energy supply, a transformation into proteins and fats takes place, and partly also into lime.
  • Proteins and fats are passed along the food chain.
  • In the course of life, plants, animals and humans release some of the carbon absorbed from food through respiration as CO2, and in some cases also as methane.
  • With the decomposition of animals and humans, the remaining carbon is released again as CO2.
  • The lime formed binds CO2 for a long time; for example, each eggshell binds 5 g of CO2 for a very long time.

Abstractly speaking, all CO2 from all living things, whether bound or exhaled, ultimately comes from the atmosphere via photosynthesis. This is very nicely explained by the famous physicist Richard Feynman:

All living beings are temporary stores of CO2. The mechanisms described cause different half-lives of this storage.
Human interventions usually prolong this storage and consequently make the use of CO2 more sustainable:

  • Mainly by conservation, i.e. stopping the decay processes. This refers not only to the preservation of food, but also to the long-term conservation of wood, as long as wood utilization is sustainable. In this way, building with wood binds CO2 for the long term.
  • Last year’s grain is usually stored and only processed into bread etc. about a year later. In the meantime, this year’s grain plants have already grown again. Thus, the metabolic emissions from humans and animals are already compensated before they take place. If the grain were to rot without being processed, it would have already decomposed into CO2 again last fall.
  • The rearing of farm animals also means CO2 storage, and not only in the form of the long-lived bones. However, the use of fossil energy in mechanized agriculture and fertilizers must be taken into account here.

Limitation – fertilization and mechanization of agriculture

Three factors mean that the production of food may still release more CO2 than in “free nature”, namely when processes are involved that use fossil fuels:

  • The use of chemically produced fertilizers
  • the mechanization of agriculture
  • the industrialization of food production.

Because of very different production processes, it is very misleading to speak of a product-specific carbon footprint.

To pick an important example, beef is usually given an extremely high “carbon footprint.” Beef that comes from cattle raised largely on pasture — fertilized without artificial fertilizers — has a negligible “carbon footprint,” contrary to what is disseminated in the usual tables. The same is true for wild animals killed in hunting.

An example that illustrates the double standard in the discussion is the production of biofuels. These use fertilizers and mechanical equipment powered by fossil energy in much the same way as the rest of agriculture, yet the fuels produced are considered sustainable and “CO2-free.”

Dependencies

The most important insight from biology and ecology is that it is not within our arbitrary power to remove individual elements of the sensitive ecosystem without doing great harm to the whole.
Typical examples of such harmful influences are:

  • Overgrazing, i.e., devastation caused by eating away the (plant) basis of life. Examples of this are widely known. “Overgrazing” can also occur as a result of “well-intentioned” and supposedly positive interventions such as the “water quality improvement” of Lake Constance, with the result that there is no longer enough food for plants and animals in the water.
  • Less well known is “undergrazing,” particularly the failure to remove withered tumbleweeds in the vast semi-arid areas of the world. To address this problem, Allan Savory has introduced the concept of “Holistic Management” with great success, a concept whose major components include the expansion of livestock production. If plants are not further utilized by “larger” animals, they are processed by microorganisms and generally decompose again quickly, releasing the bound CO2; in some cases they are converted into humus. So nothing is gained for the CO2 concentration of the atmosphere if, for example, cattle or pigs are slaughtered to allegedly improve the CO2 balance. On the contrary, the animals prolong the life of the organic, carbon-binding matter.

Dependence of plant growth on CO2

Plants thrive better the higher the atmospheric CO2 concentration, especially C3 plants:

For plant growth, the increase in CO2 concentration over the last 40 years has been markedly favorable, and the world has become significantly greener, with the side effect of an increased sink effect, i.e., uptake of part of the additional anthropogenic CO2:

Below a concentration of about 800 ppm, C3 plants do not reach the same CO2 uptake as C4 plants. That is why many greenhouses are enriched with CO2.

Conclusions

Knowing these relationships, compelling conclusions emerge:

  1. Because of the primacy of photosynthesis and the dependence of all life on it, the totality of living things is a CO2 sink; owing to the influence of living things, the CO2 concentration can therefore only decrease in the medium and long term, never increase.
    All living beings are CO2 stores, with different storage times.
  2. There are at least three forms of long-term CO2 binding, which lead to a decrease of the CO2 concentration:

    • Calcification
    • humus formation
    • non-energy wood utilization

  3. The use of “technical aids” that consume fossil energy must be considered separately from the natural carbon cycle. It is therefore not possible to say that a particular foodstuff has a fixed “CO2 footprint”; it depends solely on the production method and animal husbandry.
  4. A “fair” consideration must assume here, just as with electric vehicles, for example, that the technical aids of the future or the production of fertilizers are sustainable.

In addition, taking into account that more than half of current anthropogenic emissions are reabsorbed within a year, even a 45% reduction of current emissions leads to the “net zero” situation in which the atmospheric concentration no longer increases. Even if global emissions change little (which is very likely given the energy policy decisions in China and India), an equilibrium concentration of 475 ppm will be reached before the end of this century, which is no cause for alarm.




5 Simple Climate Facts


[latexpage]

1. Global CO2 emissions reached their maximum in 2018 and are not expected to increase further

There was a massive drop in emissions in 2020 due to the Corona pandemic. But the maximum of $CO_2$ emissions had already been reached in 2018 at 33.5 Gt, and the 2019 and 2021 emissions are also below that level:

Already since 2003, there has been a clear downward trend in the relative growth of CO2 emissions (analogous to economic growth rates); between 2018 and 2019 the 0% line was reached, as described in this article:

The reason for this is that the growth in emissions in emerging economies now roughly balances the decline in emissions in industrialized countries. In addition, China’s emissions growth already showed a sharp bend in 2010.

So the actual “business-as-usual” scenario is not the catastrophic scenario RCP8.5 with exponentially growing emissions that is still widely circulated in the media, but de facto a continuation of global CO2 emissions at the plateau reached in 2018. The partial target of the Paris climate agreement, “Countries must reach peak emissions as soon as possible”, has thus already been achieved for the world as a whole since 2018.

2. It is enough to halve emissions to avoid further growth of CO2 levels in the atmosphere

To maintain the current level of CO2 in the atmosphere, it would be sufficient to reduce emissions to half of today’s level (https://www.youtube.com/watch?v=JN3913OI7Fc&t=291s (in German)). The straightforward mathematical derivation and various scenarios (from “business as usual” to “zero-carbon energy transition”) can be found in this article. Here is the result of the predicted CO2 concentration levels:


In the worst case, with future constant emissions, the CO2 concentration will be 500 ppm by 2100 and remain below the equilibrium concentration of 544 ppm, which is less than double the pre-industrial concentration. The essential point is that in no case will CO2 levels rise to climatically dangerous levels; they would, however, probably fall to dangerously low levels if the global energy transition were “successful”, because current peak grain harvests are 15% larger than they were 40 years ago due to increased CO2 levels.
Literally, the Paris climate agreement states in Article 4.1:
Countries must reach peak emissions as soon as possible “so as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century.”
This means that the balance between anthropogenic emissions and CO2 removals must be achieved in the second half of this century. In fact, this balance is reached when total emissions are halved. The time target for reaching this 50% goal is between 2050 and 2100; these two limits correspond to the blue and the turquoise-green scenario. So the Paris climate agreement does not call for complete decarbonization at all, but allows for a smooth transition rather than the disruption implied by complete decarbonization.
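A worked check with the absorption model from the earlier sections ($a = 0.02$, $C_0 = 280$ ppm); taking the current concentration as roughly 415 ppm and current emissions as 4.7 ppm per year (both round figures, the concentration being an assumption for illustration), the concentration stops growing as soon as the reduced emissions $E'$ just balance the absorption:
$ E' = a\cdot(C - C_0) = 0.02\cdot(415 - 280) \approx 2.7 $ ppm per year $ \approx 0.57\cdot E $
A reduction of current emissions by slightly more than 40% therefore already halts the rise; halving them would even make the concentration fall slowly toward the new equilibrium.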

3. According to radiative physics, climate sensitivity is only half a degree

The possible influence of $CO_2$ on global warming consists in its absorption of thermal radiation, which causes that radiation to reach space in a weakened form. The physics of this process is radiative transfer. To actually measure this greenhouse effect, the infrared radiation emitted into space must be measured. The theoretically expected greenhouse effect is so tiny, at 0.2 $\frac{W}{m^2}$ per decade, that it is undetectable with current satellite technology, which has a measurement accuracy of about 10 $\frac{W}{m^2}$.
Therefore, one has no choice but to settle for mathematical models of the radiative transfer equation. However, such models are not a valid proof of the effectiveness of this greenhouse effect in the real, much more complex atmosphere.
There is a widely accepted simulation program, MODTRAN, that can be used to simulate the emission of infrared radiation into space, and thus the $CO_2$ greenhouse effect, in a physically rigorous way. If I use this program to calculate the so-called CO2 sensitivity (the temperature increase when CO2 doubles from 280 to 560 ppm) under correct conditions, the result is a mere 1/2 °C:


The facts are discussed in this article. In order to understand the mindset of IPCC-affiliated scientists, I also describe there their, in my opinion, incorrect approach to sensitivity calculations with the MODTRAN simulation.

Accordingly, if all real-world conditions are correctly accounted for, the temperature increase from doubling $CO_2$ from 280 ppm to 560 ppm is just 1/2 °C, well below the Paris Climate Agreement targets.

4. The only detectable effect of CO2 increase is the greening of the Earth

While the greenhouse effect is so far a theoretical hypothesis that cannot yet be proven beyond doubt, because its expected signal of less than 0.2 $\frac{W}{m^2}$ per decade is only a fraction of the measurement error of infrared satellite instruments (about 10 $\frac{W}{m^2}$), another welcome effect of increased $CO_2$ content has been abundantly demonstrated: between 1982 and 2009, the greening of the Earth increased by 25-50%, and 70% of this increase is attributable to CO2. Notably, parts of Earth’s drylands have also become greener, because plants have a more efficient water balance at higher $CO_2$ levels.

5. The increase in world mean temperature over the past 40 years has been caused by decreased cloud formation

It is a fact that the mean temperature of the world has increased considerably since 1970. If it is not only due to increased CO2 concentration, what could be the cause?

A simple calculation shows that 80% of the temperature increase over the last 40 years is due to the real and measurable effect of reduced cloud reflectivity, and at most 20% is due to the hypothetical and so far not definitively proven CO2 greenhouse effect:

The causes of reduced cloud formation may indeed be partly man-made, because the basic mechanism of heat regulation by evaporation through plants and the resulting clouds depends on the way humans farm and treat the natural landscape (see also this video (in German)). The most important man-made risk factors are:

All three factors contribute to the 5% decrease of average cloud cover since 1950 (https://taz.de/Wasser-und-Klimaschutz/!5774434/), which explains at least 80% of the temperature rise since then, as described above.
To stop the warming caused by reduced cloud formation, reducing CO2 emissions by abandoning fossil fuels is of no use. A refocus on solving the real problems instead of an ideological fixation on CO2 is overdue.