
Water Vapour Feedback


[latexpage]

In the climate debate, the argument of water vapor feedback is used to amplify the climate effect of greenhouse gases – the sensitivity to a doubling of their atmospheric concentration, which according to the radiative transfer equation and general consensus is at most 0.8°C – by an alleged factor of 2-6. However, this feedback is usually not quantified more precisely; usually only formulas with the “final feedback” are given.

Recently, David Coe, Walter Fabinski and Gerhard Wiegleb described and analyzed precisely this feedback in the publication “The Impact of CO2, H2O and Other ‘Greenhouse Gases’ on Equilibrium Earth Temperatures“. Based on their publication, this effect is derived below using partly the same and partly slightly different approaches. The results are almost identical.

All other effects that occur during the formation of water vapor, such as cloud formation, are ignored here.

The basic mechanism of water vapor feedback

The starting point is an increase in atmospheric temperature by ∆T0, regardless of the cause. Typically, the greenhouse effect is assumed to be the primary cause. The argument is now that the warmed atmosphere can hold more water vapor, i.e. the saturation vapor pressure (SVP) increases, and it is assumed that consequently the water vapor concentration also increases by ∆H2O, as a linear function of the temperature change. (The temperature change is so small that linearization is legitimate in any case):
$\Delta H_2O = j\cdot \Delta T_0 $
where $j$ is the proportionality constant for the water vapor concentration.
An increased water vapor concentration in turn causes a temperature increase due to the greenhouse effect of water vapor, which is linearly dependent on the water vapor concentration:
$\Delta T_1 = k\cdot \Delta H_2O $
In summary, the triggering temperature increase ∆T0 causes a subsequent increase in temperature ∆T1:
$\Delta T_1 = j\cdot k\cdot \Delta T_0 $
Since the prerequisite of the method is that the cause of the triggering temperature increase is insignificant, the increase by ∆T1 naturally also causes a feedback cycle again:
$\Delta T_2 = j\cdot k\cdot \Delta T_1 = (j\cdot k)^2\cdot \Delta T_0$
This is repeated recursively. The final temperature change is therefore a geometric series:
$\Delta T = \Delta T_0\sum_{n=0}^\infty(j\cdot k)^n = \Delta T_0\cdot \frac{1}{1-j\cdot k} $
If $j\cdot k\ge 1$, the series would diverge and the temperature would grow beyond all limits. It is therefore important to be clear about the magnitude of these two feedback factors.

For the determination of the first term $j$ we can apply a simplified approach by accepting the statement commonly used in the mainstream literature that for each degree C of temperature increase the relative humidity of the air may rise by up to 7%. In the German version of this post I did the explicit calculation and came to the result that a realistic maximum is a rise of 6% per degree of temperature increase, which hardly affects the final result.

Dependence of the greenhouse effect on the change in relative humidity

Infrared radiation transport in the atmosphere is dependent on relative humidity. This is taken into account in the well-known and proven MODTRAN simulation program. With increasing humidity, the outgoing infrared radiation decreases due to the greenhouse effect of water vapor.

The decrease in radiation is linear between 60% and 100% humidity. Therefore, the increase in relative humidity from 80% to 86% is considered to determine the decrease in radiant power and the temperature increase required for compensation.

To do this, we set the parameters of the MODTRAN simulation to

  • the current CO2 concentration of 420 ppm,
  • a relative humidity of 80%,
  • and a cloud constellation that comes close to the average IR radiant power of 240 $\frac{W}{m^2}$.

The temperature offset is now increased until the reduced IR radiation of 0.7 $\frac{W}{m^2}$ is compensated for by the temperature increase. This is the case when the ground temperature is increased by 0.215 °C.

A 7% higher relative humidity therefore causes a greenhouse effect, which is offset by a temperature increase of 0.215°C. Extrapolated to a (theoretical) change of 100% humidity, this results in $k=3.07$°C/100%.

The final feedback factor and the total greenhouse effect

This means that a temperature increase of 1 degree causes, in one feedback cycle, an additional temperature increase of $k\cdot j = 0.215$ °C.

The geometric series leads to an amplification factor $f$ of the pure CO$_2$ greenhouse effect by
$f=\frac{1}{1-0.215} = 1.27 $

This means that the sensitivity amplified by the water vapor feedback when doubling the CO$_2$ concentration $\Delta T$ is no longer $\Delta T_0=0.8$°C, but
$\Delta T = 1.27\cdot 0.8\ °C = 1.02\ °C \approx 1\ °C$
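As a quick numerical cross-check of the arithmetic, here is a minimal Python sketch (using the values $j\cdot k = 0.215$ and $\Delta T_0 = 0.8$°C from above) comparing the explicitly summed feedback series with the closed form of the geometric series:

# Water vapor feedback: summed geometric series vs. closed form.
# Values are taken from the text above; 100 cycles are more than enough.
jk = 0.215        # combined feedback factor j*k per feedback cycle
dT0 = 0.8         # triggering temperature increase in deg C

series_sum = dT0 * sum(jk**n for n in range(100))
closed_form = dT0 / (1.0 - jk)
print(series_sum, closed_form)   # both approx. 1.02 deg C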

This result does not take into account the increase in temperature caused by the higher water vapor concentration.




The Extended Carbon Sink Model (work in progress)


[latexpage]

Introduction – potential deficit of the simple linear carbon sink model

With the simple linear carbon sink model the past relation between anthropogenic emissions and atmospheric CO2 concentration can be excellently modelled, in particular when using the high quality emission and concentration data after 1950.
The model makes use of the mass conservation applied to the CO2-data, where $C_i$ is the CO2 concentration in year $i$, $E_i$ are the anthropogenic emissions during year $i$, $N_i$ are all other CO2 emissions during year $i$ (mostly natural emissions), and $A_i$ are all absorptions during year $i$. We assume emissions caused by land use change to be part of the natural emissions, which means that they are assumed to be constant. Due to the fact that their measurement error is very large, this should be an acceptable assumption.
With the concentration growth $G_i$
$G_i = C_{i+1}-C_i $
we get from mass conservation the yearly balance
$ E_i + N_i - A_i = G_i $
$E_i$ and $G_i$ are measured from known data sets (IEA and Mauna Loa), and we define the effective sink $S_i$ as
$S_i = A_i - N_i$
The atmospheric carbon balance therefore is
$E_i - G_i = S_i $
The effective sink is modelled as a linear function of the CO2 concentration by minimizing
$\sum_i (S_i - \hat{S}_i)^2$
w.r.t. $a$ and $n$, where
$\hat{S}_i = a\cdot C_i + n $
The equation can be re-written to
$\hat{S}_i = a\cdot (C_i - C^0)$
where
$C^0 = -\frac{n}{a}$
is the average reference concentration represented by the oceans and the biosphere. The sink effect is proportional to the difference between the atmospheric concentration and this reference concentration. In the simple linear model the reference concentration is assumed to be constant, implying that these reservoirs are close to infinite. Up to now this is supported by the empirical data.
This procedure is visualized here:

This results in an excellent model reconstruction of the measured concentration data:

It is important to note that the small error since 2010 is an over-estimation of the actually measured data, which means that the estimated sink effect is under-estimated. Therefore we can safely say that currently we do not see the slightest trend of a possible decline of the two large sink systems, the ocean sink and the land sink from photosynthesis.

Nevertheless it can be argued that in the future both sink systems may enter a state of saturation, i.e. lose the ability to absorb surplus carbon from the atmosphere. As a matter of fact, it is claimed by the architects of the Bern model and representatives of the IPCC that the capacity of the ocean is not larger than 5 times the capacity of the atmosphere, and that therefore the future ability to take up extra CO2 will rapidly decline. We do not see this claim justified by the data, but before we can prove that the claim is not justified, we will adapt the model to make it capable of calculating varying sink capacities.

Extending the model with a second finite accumulating box

In order to take care of the finite size of both the ocean and the land sinks, we no longer pretend that these sink systems are infinite, but assume a second box besides the atmosphere with a concentration $C^0_i$, which takes up all CO2 from both sink systems. The box is assumed to be $b$ times larger than the atmosphere; therefore, for a given sink-related change of the atmospheric concentration ($-S_i$) we get an increase of the concentration in the “sink box” of the same amount ($S_i$), but reduced by the factor $b$:
$ C^0_{i+1} = C^0_i + \frac{1}{b}\cdot S_i $
The important model assumption is that $C^0_i$ is the reference concentration, which determines future sink ability.
The initial value is the previously calculated equilibrium concentration $C^0$
$C^0_0 = C^0$
Therefore by evaluation of the recursion we get
$C^0_i = C^0 + \frac{1}{b}\sum_{j=0}^{i-1} S_j$
The main modelling equation is adapted to
$\hat{S}_i = a\cdot (C_i - C^0_i)$
or
$\hat{S}_i = a\cdot (C_i - \frac{1}{b}\sum_{j=0}^{i-1} S_j) + n $
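For illustration, here is a minimal Python sketch of how this extended model can be estimated; the arrays E (yearly emissions in ppm) and C (measured concentrations in ppm) are placeholders, not the actual data sets used here:

import numpy as np

# Fit the extended sink model with a finite accumulating box of relative size b.
# E and C are assumed to be numpy arrays on the same yearly time axis.
def fit_extended_model(E, C, b):
    G = np.diff(C)                                      # G_i = C_{i+1} - C_i
    S = E[:-1] - G                                      # effective sinks S_i = E_i - G_i
    cum_S = np.concatenate(([0.0], np.cumsum(S)[:-1]))  # sum of S_j before year i
    x = C[:-1] - cum_S / b                              # regressor C_i - (1/b)*sum S_j
    A = np.vstack([x, np.ones_like(x)]).T
    (a, n), *_ = np.linalg.lstsq(A, S, rcond=None)      # least squares for a and n
    return a, n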

Obviously the measurements must start at a time when the anthropogenic emissions are still close to 0. Therefore we begin with the measurements from 1850, being aware that the data before 1959 are much less reliable than the later ones. There are reasons to assume that before 1950 land use change induced emissions played a stronger role than later. But there are also strong reasons to assume that the estimated IEA values are too large, so in order to reach a reference value $C^0$ close to 280 ppm, an adequate weight for the land use change emissions is 0.5.

Results for different scenarios

We will now evaluate the actual emission and concentration measurements for 3 different scenarios, for b=5, b=10, and b=50.
The first scenario (b=5) is considered to be the worst case scenario, rendering similar results as the Bern model.
The last scenario (b=50) corresponds to the “naive” view that the CO2 in the oceans is equally distributed, making use of the full potential buffer capacity of the oceans.
The second scenario (b=10) is somewhere in between.

Scenario b=5: Oceans and land sinks have 5 times the atmospheric capacity

The “effective concentration” used for estimating the model reduces the measured concentration by the weighted cumulative sum of the effective sinks with $b=5$. We see that before 1900 there is hardly any difference to the measured concentration:

First we reconstruct the original data from the model estimation:

Now we calculate the future scenarios:

Constant emissions after 2023

In order to understand the reduced sink factor, we first investigate the case where emissions remain constant after 2023. By the end of 2200 CO2 concentration would be close to 600 ppm, with no tendency to flatten.

Emission reductions to reach equilibrium and keep permanently constant concentration

It is easy to see that under the given conditions of a small CO2 buffer, the concentration keeps increasing when emissions are constant. The interesting question is how the emission rate has to be reduced in order to reach a constant concentration.
From the model setup one would assume that the yearly emission reduction should be $\frac{a}{b} \approx 0.005$, and indeed, with a yearly emission reduction of 0.5% after 2023, we eventually reach a constant concentration and hold it. This means that emission rates have to be cut in half within 140 years – provided the pessimistic assumption $b=5$ turns out to be correct:

Fast reduction to 50% emissions, then keeping concentration constant

An interesting scenario is one which cuts emissions to half the current amount within a short time, and then tries to keep the concentration close to the current level:

Scenario b=10: Oceans and land sinks have 10 times atmospheric capacity

Assuming the capacity of the (ocean and plant) CO2 reservoir to be 10-fold results, as expected, in half the sink reduction.

It does not significantly change the quality of the model approximation to the actual CO2 concentration data:

Constant emissions after 2023

For constant emissions the concentration now stays below 550 ppm until the end of 2200, but it is still growing.

Emission reductions to reach equilibrium and keep permanently constant concentration

The yearly emission reduction rate can be lowered to 0.2% in order to compensate for the sink reduction:

Fast reduction to 50% emissions, then keeping concentration constant

This is easier to see for the scenario which swiftly reduces emissions to 50%. With a peak concentration below 440 ppm, the further slow reduction of 0.2% p.a. keeps the concentration at about 415 ppm.

Scenario b=50: Oceans and land sinks have 50 times the atmospheric capacity

This scenario comes close to the original linear concentration model, which does not consider finite sink capacity.

Again, the reconstruction of the existing data shows no large deviation:

Constant emissions after 2023
Emission reductions to reach equilibrium and keep permanently constant concentration

We only need a yearly reduction of 0.05% for reaching a permanently constant CO2 concentration of under 500 ppm:

Fast reduction to 50% emissions, then keeping concentration constant

This scenario hardly increases today’s CO2 concentration and eventually approaches 400 ppm:

How to decide which model parameter b is correct?

It appears that with the measurement data available up to now it cannot be decided whether the sink reservoirs are finite, and if so, how limited they are.

The most sensitive indicator obtainable from simple, undisputed measurements appears to be the concentration growth. It can be determined both from the actually measured data of the past,

and from the modelled data at any time. When comparing the concentration growth under future constant emissions for the two cases b=5 and b=50, we get this result:

This implies that with the model b=5 the concentration growth will never fall below 0.8 ppm per year, whereas with the model b=50 the concentration growth decreases to approx. 0.1 ppm per year. These large differences, however, will only show up after many years, apparently not before 2050.

Preliminary Conclusions

Due to the fact that the measurement data up to the current time can be reproduced well both by the Bern model and by the simple linear sink model, it cannot yet be reliably decided from current data how large the effective size of the carbon sinks is. When emissions remain constant for a longer period of time, we expect to be able to perform a statistical test for the most likely value of the sink size factor b.

Nevertheless this extended sink model allows us to calculate the optimal rate of emission reduction for a given model assumption. Even in the worst case the required emission reduction is so small that any short-term “zero emission” targets are not justified.

A related conclusion is the possibility of a re-calculation of the available CO2 budget. Given a target concentration C$_{target}$, the total budget is the amount of CO2 required to fill up both the atmosphere and the accumulating box up to the target concentration.
Obviously the target concentration must be chosen in such a way that it is compatible with the environmental requirements.
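As a hedged sketch of such a budget calculation (under the assumption that at the final equilibrium both the atmosphere and the accumulating box end up at the target concentration), the remaining budget in atmospheric ppm equivalents would be
$B \approx (C_{target} - C) + b\cdot(C_{target} - C^0)$,
where $C$ is today's atmospheric concentration and $C^0$ the current reference concentration of the box; multiplying by roughly 7.8 Gt CO2 per ppm converts this into a mass budget.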




A Computational Model for CO2-Dependence on Temperature in the Vostok Ice cores


[latexpage]

The Vostok Ice core provides a more than 400000 year view into the climate history with several cycles between ice ages and warm periods.

It has become clear that the CO2 data lag the temperature data by several centuries. One difficulty arises from the necessity that CO2 is measured in the gas bubbles whereas temperature is determined from a deuterium proxy in the ice. Therefore there is a different way of determining the age for the two parameters – for CO2 there is a “gas age”, whereas the temperature series is assigned an “ice age”. There are estimates of how much older the “ice age” is in comparison to the gas age. But there is uncertainty, so we will have to tune the relation between the two time scales.

Preprocessing the Vostok data sets

In order to perform model based computations with the two data sets, the original data must be converted into equidistantly sampled data sets. This is done by means of linear interpolation. The sampling interval is chosen as 100 years, which is approximately the sampling interval of the temperature data. Apart from this, the data sets must be reversed, and the sign of the time axis must be set to negative values.
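A minimal Python sketch of this preprocessing step (the arrays age and values are placeholders for the original Vostok columns, with age given in years before present):

import numpy as np

# Resample an irregularly sampled Vostok series onto a common 100-year grid
# by linear interpolation, after reversing the series and negating the time axis.
def resample(age, values, step=100.0):
    t = -age[::-1]                            # negative time axis, increasing order
    v = values[::-1]
    grid = np.arange(t[0], t[-1] + step, step)
    return grid, np.interp(grid, t, v)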
Here is the re-sampled temperature data set from -370000 years to -10000 years, overlaid on the original temperature data:

And here the corresponding CO2-data set:

The two data sets are now superimposed:

Data model

Due to the very good predictive value of the temperature dependent sink model for current emission, concentration, and temperature data (equation 2), we will use the same model based on CO2 mass balance and a possible linear dependence of CO2 changes on concentration and temperature, but obviously without the anthropogenic emissions. Also the time interval is no longer a single year, but a century.

G$_i$ is growth of CO2-concentration C$_i$ during century i:

$G_i = C_{i+1}- C_i$

T$_i$ is the average temperature during century i. The model equation without anthropogenic emissions is:

$ -G_i = x1\cdot C_i + x2\cdot T_i + const$

After estimating the 3 parameters x1, x2, and const from G$_i$, C$_i$, and T$_i$ by means of ordinary least squares, the modelled CO$_2$ data $\hat{C_i}$ are recursively reconstructed by means of the model, the first actual concentration value of the data sequence $C_0$, and the temperature data:
$\hat{C_0} = C_0$
$ \hat{C_{i+1}} = \hat{C_i} - x1\cdot \hat{C_i} - x2\cdot T_i - const$
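A minimal Python sketch of this estimation and reconstruction, using the OLS package mentioned below (C and T are the resampled CO2 and temperature series on the common century grid; placeholders, not the actual data):

import numpy as np
import statsmodels.api as sm

# Estimate -G_i = x1*C_i + x2*T_i + const and reconstruct the CO2 series recursively.
def fit_and_reconstruct(C, T):
    G = np.diff(C)                                   # centennial concentration growth
    X = sm.add_constant(np.column_stack([C[:-1], T[:-1]]))
    res = sm.OLS(-G, X).fit()
    const, x1, x2 = res.params                       # constant comes first
    C_hat = np.empty_like(C, dtype=float)
    C_hat[0] = C[0]
    for i in range(len(C) - 1):                      # recursive reconstruction
        C_hat[i + 1] = C_hat[i] - x1 * C_hat[i] - x2 * T[i] - const
    return res, C_hat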

Results – reconstructed CO$_2$ data

The standard deviation of $\{\hat{C_i}-C_i\}$ measures the quality of the reconstruction. This standard deviation becomes minimal when the temperature data are shifted 1450 to 1500 years into the past:

Here are the corresponding estimated model parameters and the statistical quality measures from the Python OLS package:

The interpretation is that there is a carbon sink of 1.3% per century, and an additional emission of 0.18 ppm per century per degree of temperature increase.

Modelling the sinks (-G$_i$) results in this diagram:

And the main result, the reconstruction of the CO$_2$ data from the temperature extended sink model, looks quite remarkable:

Equilibrium Relations

The equilibrium states are more meaningful than the incremental changes. The equilibrium is defined by the equality of CO2 sources and sinks, resulting in $G_i = 0$. This creates a linear relation between the CO2 concentration C and the temperature T:

$C = \frac{0.1799\cdot T + 3.8965}{0.0133}$ ppm

For the temperature anomaly $T=0$ we therefore get the CO2 concentration of

$C_{T=0}=\frac{3.8965}{0.0133} ppm = 293 ppm$.
The difference between this and the modern value can be explained by different temperature references. Both levels are remarkably close, considering the very different environmental conditions.

And the relative change is
$\frac{dC}{dT} = 13.5 \frac{ppm}{^\circ C} $

This is considerably different from the modern data, where we got $ 66.5 \frac{ppm}{°C}$.
There is no immediate explanation for this deviation. We need, however, to consider the fact that the time scales differ by a factor of at least 100, if not more. Therefore we can expect totally different mechanisms at work.




Temperature Dependent CO2 Sink model


[latexpage]

In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account?

The model shows no long-term temperature dependence trend, only a short-term cyclical dependence. A long-term trend in temperature dependence over the last 70 years is not discernible even after careful analysis.
In the primary publication, it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible. We re-visit the sink model in order to find a way to consider temperature dependence adequately.

Original temperature-independent model

For setting up the equation for mass conservation of CO2 in the atmosphere (see equations 1,2,3 of the publication), we split the total yearly emissions into anthropogenic emissions $E_i$ in year $i$, and all other, predominantly natural emissions $N_i$ . For simplification, the — more unknown than known — land use caused emissions are included in the natural emissions.
The increase of CO2 in the atmosphere is
$G_i = C_{i+1} – C_i$,
where $C_i$ is atmospheric CO2 concentration at the beginning of year $i$.
With absorptions $A_i$ the mass balance becomes:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the natural emissions was modeled linearly with a constant absorption coefficient $a^0$ expressing the proportionality with concentration $C_i$ and a constant $n^0$ for the annual natural emissions
\begin{equation}E_i - G_i = a^0\cdot C_i - n^0\end{equation}

The estimated parameters are:
$a^0=0.0183$,
$n^0=5.2$ ppm

While the proportionality between absorption and concentration by means of an absorption constant $a^0$ is physically very well founded, the assumption of constant natural emissions appears arbitrary.
Effectively this assumed constant contains the sum of all emissions except the explicit anthropogenic ones and also all sinks that are balanced during the year.
Therefore it is enlightening to calculate the estimated natural emissions $\hat{N_i}$ from the measured data and the mass balance equation with the estimated absorption constant $a^0=0.0183$:
$\hat{N_i} = G_i - E_i + a^0\cdot C_i $

The mean value of $\hat{N_i}$ results in the constant model term $n^0$. A slight smoothing results in a cyclic curve. Roy Spencer has attributed these fluctuations to El Nino. By definition a priori it cannot be said whether the fluctuations are attributable to the absorptions $A_i$ or to the natural emissions $N_i$. In any case no long-term trend is seen.

The reconstruction $\hat{C_i}$ of the measured concentration data is done recursively from the model and the initial value taken from the original data:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i + n^0 - a^0\cdot \hat{C_i}$
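A minimal Python sketch of this recursion, with the parameters $a^0=0.0183$ and $n^0=5.2$ ppm from above (the emission series E, in ppm per year, is a placeholder for the actual data):

import numpy as np

# Recursive reconstruction of the concentration with the temperature-independent model.
def reconstruct(C0, E, a0=0.0183, n0=5.2):
    C_hat = [C0]
    for E_i in E:
        C_hat.append(C_hat[-1] + E_i + n0 - a0 * C_hat[-1])
    return np.array(C_hat)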

Extending the model by Temperature

The sink model is now extended by a temperature term $T_i$:
\begin{equation}E_i - G_i = a\cdot C_i + b\cdot T_i + c\end{equation} These 3 regression parameters can be estimated directly, but we do not know how the resulting numbers relate to the estimation without temperature dependence. Therefore we will motivate and build this model in an intuitive way.

The question arises why and how sources or sinks should depend on El Nino. This implies a temperature dependence. But why can’t the undeniable long term temperature trend be seen in the model? Why is there no trend in the estimated natural emissions?
The answer is in the fact that CO2 concentration and temperature are highly correlated, at least since 1960, i.e. during the time when CO2 concentration was measured with high quality:

Therefore any long-term trend dependent on temperature would be attributed to the CO2 concentration when the model is based on concentration. This has been analysed in detail. We make no claim of causality between CO2 concentration and temperature, in either direction, but just recognise their strong correlation. The optimal linear model of the temperature anomaly as a function of CO2 concentration, based on the HadSST4 temperature data, is:
$T_i^C = d\cdot C_i + e$
with $d=0.0082 \frac{^{\circ} C}{ppm}$ and $e = -2.7$°C

The actual temperature $T_i$ is the sum of the modelled temperature $T_i^C$ and the residual temperature $T_i^R$.
Therefore the new model equation becomes
$E_i - G_i = a\cdot C_i + b\cdot (T_i^C + T_i^R) + c$
Replacing $T_i^C$ with its CO2-concentration proxy
$E_i - G_i = a\cdot C_i + b\cdot (d\cdot C_i + e + T_i^R) + c$
and re-arrangement leads to:
$E_i - G_i = (a + b\cdot d)\cdot C_i + b\cdot T_i^R + (c + b\cdot e)$.

Now the temperature part of the model depends only on zero-mean variations, i.e. variations without a trend.
All temperature trend information is covered by the coefficients of $C_i$. This model corresponds to Roy Spencer’s observation that much of the cyclic variability is explained by El Nino, which is closely related to the “residual temperature” $T_i^R$.
With $b=0$ we would have the temperature independent model above, and the coefficients of $C_i$ and the constant term correspond to the known estimated parameters. Due to the fact that $T_i^R$ does not contain any trend, the inclusion of the temperature dependent term does not change the other coefficients.

The estimated parameters of the last equation are:
$a + b\cdot d = 0.0183 = a^0$,
$b = -2.9\frac{ppm}{^{\circ}C}$,
$c + b\cdot e = -5.2\ ppm = -n^0$.

The first and last parameters correspond to those of the temperature independent model. But now, from the estimated $b$ coefficient, we can evaluate the contribution of the temperature $T_i$ to the sinks and the natural emissions.

The final determined parameters are
$a = a^0 - b\cdot d = 0.0436$,
$b = -2.9 \frac{ppm}{^{\circ}C}$,
$c = -n^0 - b\cdot e = -13.6\ ppm$
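Since the three regression parameters can also be estimated directly (as noted above), here is a minimal Python sketch of that direct fit; E, C and T are placeholder arrays on the same yearly grid, not the actual data sets:

import numpy as np
import statsmodels.api as sm

# Direct estimation of E_i - G_i = a*C_i + b*T_i + c.
def fit_extended(E, C, T):
    G = np.diff(C)
    y = E[:-1] - G                                   # measured effective sinks
    X = sm.add_constant(np.column_stack([C[:-1], T[:-1]]))
    res = sm.OLS(y, X).fit()
    c, a, b = res.params                             # constant comes first
    return a, b, c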

It is quite instructive how closely the yearly variations of the temperature match the variations of the measured sinks:

The smoothed residual is now mostly close to 0, with the exception of the Pinatubo eruption (after 1990) being the most dominant non-accounted signal after application of the model. Curiously in 2020 there is a reduced sink effect, most likely due to higher average temperature, effectively compensating the reduced emissions due to Covid lockdowns.
The model reconstruction of the concentration is now extended by the temperature term:
$\hat{C_0} = C_0$
$\hat{C_{i+1}} = \hat{C_i} + E_i - a\cdot \hat{C_i} - b\cdot T_i - c$

This is confirmed when looking at the reconstruction. The reconstruction deviates only around 1990, due to the missing sink contribution from the Pinatubo eruption, but otherwise follows the shape of the concentration curve precisely. This is an indication that the concentration+temperature model is much better suited to model the CO2 concentration.
In order to compensate for the deviations after 1990, the sink effect due to Pinatubo, $A_i^P$, must be considered. It is introduced as a negative emission signal into the recursive modelling equation:
$\hat{C_{i+1}} = \hat{C_i} + E_i - A_i^P - a\cdot \hat{C_i} - b\cdot T_i - c$
This reduces the deviations of the model from the measured concentration significantly:

Consequences of the temperature dependent model

The concentration dependent absorption parameter is in fact more than twice as large as the total absorption parameter, and increasing temperature increases the natural emissions. As long as the temperature is correlated with the CO2 concentration, the two trends cancel each other, and the effective sink coefficient appears invariant w.r.t. temperature.

The extended model becomes relevant, when temperature and CO2 concentration diverge.

If temperature rises faster than according to the above CO2 proxy relation, then we can expect a reduced sink effect, while with temperatures below the expectation value of the proxy the sink effect will increase.

As a first hint for further research we can estimate the temperature equilibrium concentration based on current measurements. This is given by (anthropogenic emissions and concentration growth at 0 by definition):
$a\cdot C + b\cdot T + c = 0$
$C = \frac{-b\cdot T - c}{a}$
For $T = 0$°C (corresponding to a worldwide average temperature of 14°C) we get the no-emissions equilibrium concentration
$C = \frac{-c}{a} = \frac{13.6}{0.0436}\ ppm = 312\ ppm$

The temperature sensitivity is the change of the equilibrium concentration per 1°C of temperature change:
$\frac{\Delta C}{\Delta T} = \frac{-b}{a} = 66.5 \frac{ppm}{°C}$
Considering the fact that the temperature anomaly was approx. T = -0.5°C in 1850, this corresponds very well with the assumed pre-industrial equilibrium concentration of 280 ppm.
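As a quick arithmetic check with the numbers above: $C_{1850} \approx 312\ ppm - 0.5\ °C \cdot 66.5 \frac{ppm}{°C} \approx 279\ ppm$, indeed close to 280 ppm.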

A model for paleo climate?

An important consequence of the temperature enhanced model concerns the understanding of paleo climate, which is e.g. represented in the Vostok ice core data:

Without analysing the data in detail, the temperature dependence of the CO2 concentration gives us a tool for e.g. estimating the equilibrium CO2 concentration as a function of temperature. Stating the obvious: since the time lag between temperature changes and concentration changes is several centuries, it is clear that the CO2 concentration is controlled by temperature and not the other way round.

The Vostok data have been analysed with the same model of concentration and temperature dependent sinks and natural sources. Although the model parameters are substantially different due to the totally different time scale, the measured CO2 concentration is nicely reproduced by the model, driven entirely by temperature changes:




The inflection point of CO2 concentration


[latexpage]

And rising and rising…?

At first glance, the atmospheric CO2 concentration is constantly rising, as shown by the annual mean values measured at Mauna Loa (ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt):

The central question that arises is whether the concentration is growing faster and faster, i.e. whether more is being added each year. If so, the curve would be convex, i.e. curved upwards.

Or is the annual increase in concentration getting smaller and smaller? Then the curve would be concave, i.e. curved downwards.

Or is there a transition, i.e. an inflection point in the mathematical sense? This could be recognized by the fact that the annual increase initially increases and then decreases from a certain point in time.

At first glance, the overall curve appears convex, which means that the annual increase in concentration appears to grow with each year.

The answer to this question is crucial for the question of how urgent measures to curb CO2 emissions are.

Closer examination with the measured annual increase

To get a more accurate impression, we calculate the — raw and slightly smoothed — annual increase in CO2 concentration:

This confirms that until 2016 there was a clear trend towards ever higher annual concentration increases, from just under 0.75 ppm/year in 1960 to over 2.5 ppm/year in 2016.

Since 2016, however, the annual increase has been declining, initially slightly, but significantly more strongly in 2020 and 2021. The corona-related decline in emissions certainly plays a role here, but this does not explain the decline that began in 2016.

There is therefore an undisputed inflection point in the concentration curve in 2016, i.e. a trend reversal from increasing to decreasing concentration growth. Is there a satisfactory explanation for this? This is essential: if we can foresee that the trend of decreasing concentration growth will continue, then it is foreseeable that the concentration will stop increasing at some point, and the goal of the Paris Climate Agreement, the balance between CO2 sources and CO2 sinks, can be achieved in the foreseeable future.

Explanation due to stagnating emissions

As part of the Global Carbon Brief project, Zeke Hausfather in 2021 revised the values of global CO2 emissions over the last 20 years based on new findings, with the important result that global emissions have been constant for 10 years within the limits of measurement accuracy:

To assess the implications of this important finding, one needs to know the relationship between emissions and CO2 concentration.

From my own research on this in a publication and in a subsequent blog post, it follows that the increase in concentration results from the emissions and absorptions, which are proportional to the CO2 concentration.

This model has also been described and published in a similar form by others:

Trivially, it follows from the conservation of mass that the concentration $C_i$ at the end of the year $i$ results from the concentration of the previous year $C_{i-1}$, the natural emissions $N_i$, the anthropogenic emissions $E_i$ and the absorptions $A_i$:
\begin{equation}\label{mass_conservation}C_i = C_{i-1} + N_i + E_i - A_i \end{equation} This directly results in the effective absorption calculated from emissions and the measured increase in concentration:
\begin{equation}\label{absorption_measurement}A_i - N_i = E_i - (C_i - C_{i-1}) \end{equation} Assuming constant annual natural emissions
$N_i = n$
and the linear model assumption, i.e. that the absorptions are proportional to the concentration of the previous year,
$A_i = a\cdot C_{i-1}$
the absorption model is created (these two assumptions are explained in detail in the publication above), where $n = a\cdot C_0$ :
\begin{equation}\label{absorption_equ}A_i - N_i = a\cdot(C_{i-1} - C_0)\end{equation} with the result $a=0.02$ and $C_0 = 280\ ppm$. In this calculation, emissions due to land use changes are not taken into account. This explains the numerical differences between the result and those of the cited publications. The omission of land-use changes is justified by the fact that in this way natural emissions lead to the pre-industrial equilibrium concentration of 280 ppm.

With this model, the known concentration between 2000 and 2020 is projected very accurately from the data between 1950-2000:

Growth rate of the modelled concentration

The growth rate of the modelled concentration $G^{model}_i$ is obtained by converting the model equation:
$G^{model}_i = E_i - a\cdot C_{i-1} + n$
This no longer shows the cyclical fluctuations caused by El Nino:

The global maximum remains, but the year of the maximum has moved from 2016 to 2013.
These El Nino-adjusted concentration changes confirm Zeke Hausfather’s statement that emissions have indeed been constant for 10 years.

Evolution of CO2 concentration at constant emissions

In order to understand the inflection point of the CO2 concentration, we want to calculate the predicted course with the assumption of constant emissions $E_i = E$ and the equations (\ref{absorption_measurement}) and (\ref{absorption_equ}):
\begin{equation}\label{const_E_equ}C_i - C_{i-1} = E - a\cdot(C_{i-1} - C_0)\end{equation} The left-hand side describes the increase in concentration. On the right-hand side, an amount that increases with increasing concentration $C_{i-1}$ is subtracted from the constant emissions $E$, which means that the increase in concentration decreases with increasing concentration. This can be illustrated with a bank account into which a constant amount is deposited every year while a fixed percentage of the balance is withdrawn. As soon as the concentration reaches the value $\frac{E}{a} + C_0$, the equilibrium state is reached in which the concentration no longer increases, i.e. the often used “net zero” situation. With current emissions of 4.7 ppm, “net zero” would be at 515 ppm, while the “Stated Policies” emissions scenario of the International Energy Agency (IEA), which envisages a slight reduction in the future, reaches equilibrium at 475 ppm, as described in the publication above. According to the IEA’s forecast data, this will probably be the case in 2080:
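A minimal Python sketch of the recursion in equation (\ref{const_E_equ}), with $a=0.02$ and $C_0=280$ ppm as above; the starting concentration of 420 ppm and the emission level of 4.7 ppm per year are taken from the text:

import numpy as np

# Forward simulation of C_i - C_{i-1} = E - a*(C_{i-1} - C_0) for constant emissions E.
def simulate_constant_emissions(C_start=420.0, E=4.7, a=0.02, C0=280.0, years=200):
    C = [C_start]
    for _ in range(years):
        C.append(C[-1] + E - a * (C[-1] - C0))
    return np.array(C)

# The series approaches the equilibrium E/a + C0 = 4.7/0.02 + 280 = 515 ppm.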

According to this, constant emissions are a sufficient explanation for a concave course of the CO2 concentration, as we have seen since 2016. At the same time, this proves that CO2 absorption does indeed increase with increasing concentration.




Invariance of natural CO2 sources and sinks regarding the long-term temperature trend


[latexpage]

In the simple model of CO2 sinks and natural emissions published in this blog and elsewhere, the question repeatedly arose in the discussion: How is the — obvious — temperature dependence of natural CO2 sources, for example the outgassing oceans, or sinks such as photosynthesis, taken into account? This is because the model does not include any long-term temperature dependence, only a short-term cyclical dependence. A long-term trend in temperature dependence over the last 70 years is not discernible even after careful analysis.
In the underlying publication, it was ruled out that the absorption coefficient could be temperature-dependent (Section 2.5.3). However, it remained unclear whether a direct temperature dependence of the sources or sinks is possible. And why this is not recognizable from the statistical analysis. This is discussed in this article.

Original temperature-independent model

The simplified form of CO2 mass conservation in the atmosphere (see equations 1, 2, 3 of the publication), with anthropogenic emissions $E_i$ in year $i$, the other, predominantly natural emissions $N_i$ (for simplification, the land use emissions are added to the natural emissions), the increase of CO2 in the atmosphere $G_i = C_{i+1} - C_i$ (where $C_i$ is the atmospheric CO2 concentration) and the absorptions $A_i$, is:
$E_i - G_i = A_i - N_i$
The difference between the absorptions and the other emissions was modeled linearly with a constant absorption coefficient $a$ and a constant $n$ for the annual natural emissions:
$A_i - N_i = a\cdot C_i + n$

While the absorption constant and the linear relationship between absorption and concentration are physically very well founded and proven, the assumption of constant natural emissions appears arbitrary. Therefore, instead of assuming a constant term $n$, it is enlightening to calculate the residual natural emissions from the measured data and the estimated absorption constant $a$:
$N_i = G_i - E_i + a\cdot C_i $

The mean value of $N_i$ results in the constant model term $n$. A slight smoothing results in a periodic curve. Roy Spencer has attributed these fluctuations to El Nino, although it is not clear whether the fluctuations are attributable to the absorptions $A_i$ or the natural emissions $N_i$. But no long-term trend is discernible. Therefore, the question must be clarified as to why short-term temperature dependencies are present, while long-term global warming does not appear to have any correspondence in the model.

Temperature-dependent model

We now extend the model by additionally allowing a linear temperature dependence for both the absorptions $A_i$ and the other emissions $N_i$. Since our measurement data only provide their difference, we can represent the temperature dependence of this difference by a single linear function of the temperature $T_i$, i.e. $b\cdot T_i + d$. Assuming that both $A_i$ and $N_i$ are temperature-dependent, the difference between the corresponding linear expressions is again a linear expression. Accordingly, the extended model has this form:
$A_i - N_i = a\cdot C_i + n + b\cdot T_i + d$
In principle, $n$ and $d$ could be combined into a single constant. However, since $d$ depends on the temperature scale used, and $n$ on the unit of measurement of the CO2 concentration, we leave it at 2 constants.

CO2 concentration as a proxy for temperature

As already explained in the publication in section 2.3.2, there is a high correlation between CO2 concentration and temperature. Where this correlation comes from, i.e. whether there is a causal relationship (and in which direction) is irrelevant for this study. However, we are not establishing the correlation between $T$ and $log(C)$ here, but between $T$ (temperature) and $C$ (CO2 concentration without logarithm).

As a result, the temperature anomaly can be approximated from the concentration using the linear function
$T_i = e\cdot C_i + f$
with
$e=0.0083, f=-2.72 $

Use of the CO2 proxy in the temperature-dependent equation

If we now experimentally insert the proxy function for the temperature into the temperature-dependent equation, we obtain the following equation:
$A_i - N_i = a\cdot C_i + n + b\cdot (e\cdot C_i + f) + d $
and
$A_i - N_i = (a+b\cdot e)\cdot C_i + (n+b\cdot f) + d $
The expression on the right-hand side now has the same form as the original equation, i.e.
$A_i - N_i = a'\cdot C_i + n' $
with
$ a' = a + b\cdot e $
$ n' = n + b\cdot f + d $
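This can also be checked numerically with a small synthetic example; all parameter values below are made up for illustration only, except the proxy coefficients $e$ and $f$ taken from above:

import numpy as np

# If T is (nearly) a linear function of C, a regression of the sinks on C alone
# recovers the combined slope a + b*e, hiding the temperature dependence.
rng = np.random.default_rng(0)
C = np.linspace(315, 420, 60)                     # ppm, a rising concentration
a, b, n, d = 0.02, -2.9, -5.0, 0.0                # assumed "true" model parameters
e, f = 0.0083, -2.72                              # proxy T = e*C + f (from the text)
T = e * C + f + 0.1 * rng.standard_normal(60)     # temperature = proxy + small residual
sinks = a * C + n + b * T + d                     # A_i - N_i from the extended model

slope, intercept = np.polyfit(C, sinks, 1)
print(slope, a + b * e)                           # both close to a + b*e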

Conclusions

Therefore, with a linear dependence of temperature on CO2 concentration, temperature effects of sinks and sources cannot be distinguished from concentration effects; both are included in the “effective” absorption constant $a'$ and the constant of natural emissions $n'$. Therefore, the simple source and sink model contains all linear temperature effects.
This explains the astonishing independence of the model from the global temperature increase of the last 50 years.
This correlation also suggests that the absorption behavior of atmospheric sinks will not change in the future.

However, if we want to know exactly how the temperature will affect the sources and sinks, other data sources must be used. This knowledge is not necessary for forecasting future CO2 concentrations from anthropogenic emissions due to the correlation found.




Emissions and the carbon cycle

In the climate discussion, the so-called “CO2 footprint” of living beings, especially humans and farm animals, is increasingly declared a problem, to the point of

  • discrediting the eating of meat,
  • slaughtering farm animals (e.g. in Ireland),
  • or even discouraging young people from having children.

This discussion is based on false premises: it pretends that exhaling CO2 has the same “climate-damaging” quality as burning coal or petroleum.
A closer analysis of the carbon cycle shows the difference.

The carbon cycle

All life on earth is made up of carbon compounds.
The beginning of the so-called food chain is plants, which use photosynthesis to produce mainly carbohydrates, and in some cases fats and oils, from CO2 in the atmosphere, thus storing both carbon and energy.

The further processing of these carbon compounds is divided into several branches, where again a conversion into CO2 takes place:
  • the immediate energy consumption of the plant, the “plant respiration”,
  • the — mainly seasonal — decay of part or all of the plant, and humus formation,
  • the energy supply of animals and humans as food. Here, apart from the direct energy supply, a transformation into proteins and fats takes place, partly also into lime.
  • Proteins and fats are passed along the food chain.
  • In the course of life, plants, animals and humans release some of the carbon absorbed from food through respiration as CO2, and in some cases also as methane.
  • With the decomposition of animals and humans, the remaining carbon is released again as CO2.
  • The formed lime binds the CO2 for a long time. E.g. each eggshell binds 5g CO2 for a very long time.

Abstractly speaking, all CO2 from all living things, whether bound or exhaled, ultimately comes from the atmosphere via photosynthesis. This is very nicely explained by the famous physicist Richard Feynman:

All living beings are temporary stores of CO2. The described mechanisms cause different half-lives of this storage.
Human interventions usually cause a prolongation of the storage and consequently a more sustainable use of CO2:

  • Mainly by conservation and thus stopping the decay processes. This refers not only to the preservation of food, but also to the long-term conservation of wood, as long as wood utilization is sustainable. In this way, building with wood binds CO2 for a long time.
  • Last year’s grain is usually stored and only processed into bread etc. about a year later. In the meantime, this year’s grain plants have already grown again. Thus, the metabolic emissions from humans and animals are already compensated before they take place. If the grain were to rot without being processed, it would have already decomposed into CO2 again last fall.
  • The rearing of farm animals also means CO2 storage, and not only in the form of the long-lived bones. However, the use of fossil energy in mechanized agriculture and fertilizers must be taken into account here.

Limitation – fertilization and mechanization of agriculture

3 factors mean that the production of food may still release more CO2 than in “free nature”, namely when processes are involved that use fossil fuels:

  • The use of chemically produced fertilizers
  • the mechanization of agriculture
  • the industrialization of food production.

Because of very different production processes, it is very misleading to speak of a product-specific carbon footprint.

To pick an important example, beef is usually given an extremely high “carbon footprint.” Beef that comes from cattle raised largely on pasture — fertilized without artificial fertilizers — has a negligible “carbon footprint,” contrary to what is disseminated in the usual tables. The same is true for wild animals killed in hunting.

An example that illustrates the double standard of the discussion is the production of bio-fuels. Their production uses fertilizers and mechanical equipment powered by fossil energy in much the same way as the rest of agriculture. However, the fuels produced are considered sustainable and “CO2-free.”

Dependencies

The most important insight from biology and ecology is that it is not within our arbitrary power to remove individual elements of the sensitive ecology without doing great harm to the whole.
Typical examples of such harmful influences are:

  • Overgrazing, i.e. desolation caused by eating away the (plant) basis of life. Examples of this are widely known. “Overgrazing” can also occur as a result of “well-intentioned” and supposedly positive interventions such as the “water quality improvement” in Lake Constance, with the result that there is no longer enough food for plants and animals in the water.
  • Less well known is “undergrazing,” particularly the failure to remove withered tumbleweeds in the vast semi-arid areas of the world. To address this problem, Alan Savory has introduced the concept of “Holistic Management” with great success. This concept includes the expansion of livestock production as a major component. If plants are not further utilized by “larger” animals, they are processed by microorganisms and generally decompose again quickly, releasing the bound CO2; in some cases they are converted into humus. So nothing is gained for the CO2 concentration of the atmosphere if e.g. cattle or pigs are slaughtered to allegedly improve the CO2 balance. On the contrary, the animals prolong the life of the organic carbon-binding matter.

Dependence of plant growth on CO2

Plants thrive better the higher the atmospheric CO2 concentration, especially C3 plants:

For plant growth, the increase in CO2 concentration over the last 40 years has been markedly favorable, and the world has become significantly greener, with the side effect of sink effect, i.e., uptake of the additional anthropogenic CO2:

Below a concentration of about 800 ppm, C3 plants do not reach the same CO2 uptake as C4 plants. That is why the air in many greenhouses is enriched with CO2.

Conclusions

Knowing these relationships, compelling conclusions emerge:

  1. Because of the primacy of photosynthesis and the dependence of all life on it, the totality of living things is a CO2 sink; in the medium and long term, the influence of living things can therefore only decrease the CO2 concentration, never increase it.
    All living beings are CO2-storages, with different storage times.
  2. There are at least 3 forms of long-term CO2-binding, which lead to a decrease of the CO2-concentration:

    • Calcification
    • humus formation
    • non-energy wood utilization

  3. The use of “technical aids” that consume fossil energy must be separated from the natural carbon cycle in the considerations. It is therefore not possible to say that a particular foodstuff has a fixed “CO2 footprint”. It depends solely on the production method and animal husbandry.
  4. A “fair” consideration must assume here, just as with electric vehicles, for example, that the technical aids of the future or the production of fertilizers are sustainable.

In addition, taking into account the knowledge that more than half of current anthropogenic emissions are reabsorbed over the course of the year, even a 45% reduction in current emissions leads to the “net zero” situation where atmospheric concentrations no longer increase. Even if we make little change in global emissions (which is very likely given energy policy decisions in China and India), an equilibrium concentration of 475 ppm will be reached before the end of this century, which is no cause for alarm.




5 Simple Climate Facts


[latexpage]

1. Global CO2 emissions have reached their maximum level in 2018 and are not expected to increase further

There was a massive drop in emissions in 2020 due to Corona. But the maximum $CO_2$ emissions had already been reached in 2018 at 33.5 Gt, and 2019 and 2021 emissions are below that level as well:

Already since 2003 there has been a clear downward trend in the relative growth of CO2 emissions (analogous to economic growth rates); between 2018 and 2019 the 0% line was reached, as described in this article:

The reason for this is that the growth in emissions in emerging economies now roughly balances the decline in emissions in industrialized countries. Also, there has already been a sharp bend in the emission growth in China in 2010.

So the actual “business-as-usual” scenario is not the catastrophic scenario RCP8.5 with exponentially growing emissions that is still widely circulated in the media, but de facto a continuation of global CO2 emissions at the plateau reached since 2018. The partial target of the Paris climate agreement, “Countries must reach peak emissions as soon as possible“, has thus already been achieved for the world as a whole since 2018.

2. It is enough to halve emissions to avoid further growth of CO2 levels in the atmosphere

To maintain the current level of CO2 in the atmosphere, it would be sufficient to reduce emissions to half of today’s level (https://www.youtube.com/watch?v=JN3913OI7Fc&t=291s (in German)). The straightforward mathematical derivation and various scenarios (from “business as usual” to “zero carbon energy transition”) can be found in this article. Here is the result of the predicted CO2 concentration levels:


In the worst case, with future constant emissions, the CO2 concentration will be 500 ppm by 2100 and remain below the equilibrium concentration of 544 ppm, which is below double the pre-industrial concentration. The essential point is that in no case will CO2 levels rise to climatically dangerously high levels, but they would probably fall to dangerously low levels if the global energy transition were “successful”, because current peak grain harvests are 15% larger than they were 40 years ago due to increased CO2 levels.
Literally, the Paris climate agreement states in Article 4.1:
Countries must reach peak emissions as soon as possible “so as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century.”
This means that the balance between anthropogenic emissions and CO2 removals must be achieved in the 2nd half of this century. The fact is that this balance will be reached when total emissions are halved. The time target to reach this 50% goal is between 2050 and 2100; these two limits correspond to the blue and the turquoise green scenario. So the Paris climate agreement does not call for complete decarbonization at all, but allows for a smooth transition rather than the disruption implied by complete decarbonization.

3. According to radiative physics, climate sensitivity is only half a degree

The possible influence of $CO_2$ on global warming is that its absorption of thermal radiation causes that radiation to reach space in a weakened form. The physics of this process is radiative transfer. To actually measure this greenhouse effect, the infrared radiation emitted into space must be measured. The theoretically expected greenhouse effect is so tiny, at 0.2 $\frac{W}{m^2}$ per decade, that it is undetectable with current satellite technology, which has a measurement accuracy of about 10 $\frac{W}{m^2}$.
Therefore, one has no choice but to settle for mathematical models of the radiative transfer equation. However, this is not a valid proof for the effectiveness of this greenhouse effect in the real, much more complex atmosphere.
There is a widely accepted simulation program, MODTRAN, that can be used to simulate the emission of infrared radiation into space, and thus the $CO_2$ greenhouse effect, in a physically sound way. If I use this program to calculate the so-called CO2 sensitivity (the temperature increase when CO2 doubles from 280 to 560 ppm) under correct conditions, the result is a mere 1/2 °C:


The facts are discussed in this article. In order to understand the mindset of IPCC-affiliated scientists, I also describe there what is, in my opinion, their incorrect approach to sensitivity calculations using the MODTRAN simulation.

Accordingly, if all real-world conditions are correctly accounted for, the temperature increase from doubling $CO_2$ from 280 ppm to 560 ppm is just 1/2 °C, well below the Paris Climate Agreement targets.

4. The only detectable effect of CO2 increase is the greening of the Earth

While the greenhouse effect is so far a theoretical hypothesis which, because of its small magnitude (less than 0.2 $\frac{W}{m^2}$ in 10 years, only a fraction of the measurement error of infrared satellite measurements of about 10 $\frac{W}{m^2}$), cannot yet be proven beyond doubt, another welcome effect of the increased $CO_2$ content has been abundantly demonstrated: between 1982 and 2009 the greening of the Earth has increased by 25-50%, 70% of which is due to the increase in CO2. Notably, parts of Earth’s drylands have also become greener because plants have a more efficient water balance at higher $CO_2$ levels.

5. The increase in world mean temperature over the past 40 years has been caused by decreased cloud formation

It is a fact that the mean temperature of the world has increased considerably since 1970. If it is not only due to increased CO2 concentration, what could be the cause?

A simple calculation shows that 80% of the temperature increase over the last 40 years is due to the real and measurable effect of reduced cloud reflectivity, and at most 20% is due to the hypothetical and so far not definitively proven CO2 greenhouse effect:

The causes  of reduced cloud formation may indeed be partly man-made, because the basic mechanism of heat regulation by evaporation through plants and the resulting clouds depends on the way humans farm and treat the natural landscape (see also this video (in German)). The most important man-made risk factors are

All 3 factors contribute to the 5% decrease of average cloud cover since 1950 ( https://taz.de/Wasser-und-Klimaschutz/!5774434/ ), which explains at least 80% of the temperature rise since then as described above.
To stop the warming caused by reduced cloud formation, CO2 emission reductions by stopping use of fossil fuels are of no use. A refocus on solving the real problems instead of ideological fixation on CO2 is overdue.




Global Temperature predictions


[latexpage]

Questioning the traditional approach

The key question of climate change is: how much does the $CO_2$ content of the atmosphere influence the global average temperature? And in particular, how sensitive is the temperature to changes in $CO_2$ concentration?
We will investigate this by means of two data sets, the HadCRUT4 global temperature average data set, and the CMIP6 $CO_2$ content data set.
The correlation between these data is rather high, so it appears to be fairly obvious, that rising $CO_2$ content causes rising temperatures.
With a linear model it appears easy to find out how exactly the temperature $T_i$ in year $i$ is predicted by the $CO_2$ content $C_i$ and random (Gaussian) noise $\epsilon_i$. From theoretical considerations (radiative forcing) it is likely that the best fitting model uses $log(C_i)$:
$T_i = a + b\cdot log(C_i) + \epsilon_i$
The constants a and b are determined by a least squares fit (with the Python module OLS from package statsmodels.regression.linear_model):
a=-16.1, b=2.78
From this we can determine the sensitivity, which is defined as the temperature difference when $CO_2$ is doubled:
$\Delta(T) = b\cdot log (2) °C = 1.93 °C $
This is nearly 2 °C, a number close to the official estimates of the IPCC.

What is wrong with this? It appears to be very straightforward and logical.
We have not yet investigated the residuals of the least squares fit. Our model says that the residuals must be Gaussian noise, i.e. uncorrelated.
The statistical test to measure this is the Ljung-Box test. Looking at the Q-criterion of the fit, it is Q = 184 with p=0. This means that the residuals have significant correlations; there is structural information in the residuals which has not been covered by the proposed linear model of the log($CO_2$) content. Looking at the diagram which shows the fitted curve, we get a glimpse of why the statistical test failed:

We see 3 graphs:

  • The measured temperature anomalies (blue),
  • the smoothed temperature anomalies (orange),
  • the reconstruction of the temperature anomalies based on the model (green)

While the fit looks reasonable with respect to the noisy original data, it is obvious from the smoothed data that there must be other systematic drivers of temperature change besides $CO_2$, causing temporary temperature declines such as those during 1880-1910 and 1950-1976. Most surprisingly, from 1977-2000 the temperature rise is considerably larger than would be expected from the $CO_2$ increase under the model.

The systematic model deviations, among them a 60-year cyclic pattern, can also be observed when we look at the residuals of the least-squares fit:

Enhancing the model with a simple assumption

Considering the fact that the oceans, and to some degree the biosphere, are enormous heat stores which can take up and return heat, we enhance the temperature model with a memory term for the past. Not knowing the exact mechanism, we can in this way include the “natural variability” in the model. In simple terms this corresponds to the assumption: the temperature this year is similar to the temperature of last year. Mathematically this is modelled by an extended autoregressive process ARX(n), where the temperature in year $i$ is assumed to be the sum of

  • a linear function of the logarithm of the $CO_2$ content, $log(C_i)$, with offset $a$ and slope $b$,
  • a weighted sum of the temperature of previous years,
  • random (Gaussian) noise $\epsilon_i$

$ T_i = a + b\cdot log(C_i) + \sum_{k=1}^{n} c_k \cdot T_{i-k} +\epsilon_i $

In the simplest case, ARX(1), we get

$ T_i = a + b\cdot log(C_i) + c_1\cdot T_{i-1} +\epsilon_i $

With the given data the parameters are estimated, again with the Python module OLS from package statsmodels.regression.linear_model:
$a=-7.33, b=1.27, c_1=0.56 $
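As a minimal sketch (reusing the illustrative temp and co2 arrays from the sketch above), the ARX(1) fit is again an ordinary least-squares problem, with the previous year's temperature added as a regressor:

```python
import numpy as np
import statsmodels.api as sm

# ARX(1): T_i = a + b*log(C_i) + c1*T_{i-1} + eps_i
y = temp[1:]                                             # T_i
X = sm.add_constant(np.column_stack([np.log(co2[1:]),    # log(C_i)
                                     temp[:-1]]))        # T_{i-1}
fit_arx = sm.OLS(y, X).fit()
a, b, c1 = fit_arx.params                  # roughly -7.33, 1.27, 0.56 as quoted above
print(fit_arx.params)
```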
The reconstruction of the training data set is much closer to the original data:

The residuals of the fit now look much more like a random process, which is confirmed by the Ljung-Box test with Q = 20.0 and p = 0.22.
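The Ljung-Box numbers quoted here and for the first model can be reproduced with statsmodels; the choice of 20 lags in this sketch is an assumption:

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Residuals of the plain log(CO2) fit and of the ARX(1) fit from the sketches above;
# recent statsmodels versions return a DataFrame with columns lb_stat (Q) and lb_pvalue (p).
print(acorr_ljungbox(fit.resid, lags=[20]))      # strongly correlated residuals
print(acorr_ljungbox(fit_arx.resid, lags=[20]))  # consistent with white noise
```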

By considering the natural variability, the sensitivity to $CO_2$ is reduced to
$\Delta(T) = b\cdot log (2) °C = 0.88 °C $

In another post we have applied the same type of model to the dependence of the atmospheric $CO_2$ content on the anthropogenic $CO_2$ emissions, and used this as a model for predictions of future atmospheric $CO_2$ content. Three scenarios are investigated:

  • “Business as usual”, redefined from the latest emission data as freezing global $CO_2$ emissions at the 2019 level (which is what is actually happening)
  • 100% worldwide decarbonization by 2050
  • 50% worldwide decarbonization by 2100

The resulting atmospheric $CO_2$ has been calculated as follows:

Feeding these predicted $CO_2$ content time series into the temperature ARX(1) model, the following global temperature scenarios can be expected for the future:
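Mechanically, these projections are just the fitted ARX(1) recursion applied year by year to a scenario $CO_2$ series; a minimal sketch, in which the scenario array and the starting anomaly are illustrative assumptions:

```python
import numpy as np

def simulate_arx1(co2_future, t_start, a=-7.33, b=1.27, c1=0.56):
    """Expected temperature path of the fitted ARX(1) model for a given
    future CO2 scenario (ppm); the noise term is omitted, so this is the
    mean trajectory, not a single realization."""
    temps = [t_start]
    for c in co2_future:
        temps.append(a + b * np.log(c) + c1 * temps[-1])
    return np.array(temps[1:])

# Example: a hypothetical scenario that holds CO2 constant at 415 ppm for 130 years
print(simulate_arx1(np.full(130, 415.0), t_start=0.8))
```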

Conclusions

The following conclusions are made under the assumption that there is in fact a strong dependence of the global temperature on the atmospheric $CO_2$ content. I am aware that this is contested, and I myself have argued elsewhere that the $CO_2$ sensitivity is as low as 0.5 °C and that the influence of cloud albedo is much larger than that of $CO_2$. Nevertheless it is worth taking the mainstream assumptions seriously and looking at the outcome.

Under the “business as usual” scenario, i.e. constant $CO_2$ emissions at the 2019 level, we can expect a further temperature increase of approximately 0.5 °C by 2150. This is 1.4 °C above the pre-industrial level and therefore below the 1.5 °C mark of the Paris climate agreement.
Much more likely and realistic is the “50% decarbonization by 2100” scenario, with a further 0.25 °C increase, followed by a decrease to current temperature levels.

The politically advocated “100% decarbonization by 2050” is not only completely infeasible without the economic collapse of most industrial countries; it would also bring us back to the cold pre-industrial temperature levels, which is not desirable.




How much CO2 will remain in the atmosphere?


[latexpage]

There are two parts of the climate discussion:

  • The sensitivity of the temperature w.r.t. the atmospheric $CO_2$ content
  • The amount of $CO_2$ in the atmosphere

While the $CO_2$ sensitivity dominates the scientific climate discussion, political decisions are dominated by “carbon budget” criteria based on numbers that are hardly discussed publicly.
It has been claimed that more than 20% of the emitted $CO_2$ will remain in the atmosphere for more than 1000 years.

This article will investigate the functional relation between $CO_2$ emissions and the actual $CO_2$ content in the atmosphere.
After finding this relation, several future emission scenarios and their effect on the atmospheric $CO_2$ content are investigated.

Carbon dioxide emissions in the past

The starting point is the actual $CO_2$ emissions during the last 170 years:

It is very informative to look at the relative changes of this time series (here the mathematical derivation). This is the equivalent of economic growth for $CO_2$ emissions.
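In code, the relative change is simply the year-on-year difference divided by the previous year's value; a minimal sketch, in which the emissions array and file name are illustrative assumptions:

```python
import numpy as np

# Assumed input: annual global CO2 emissions in Gt, one value per year
emissions = np.loadtxt("co2_emissions_gt.txt")

rel_change = np.diff(emissions) / emissions[:-1]   # fractional year-on-year growth
print(rel_change * 100)                            # in percent, analogous to economic growth rates
```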

The largest increase in $CO_2$ emissions was between 1945 and 1980, the period of great growth in wealth and quality of life primarily in the industrialized countries, with the absolute peak of global emissions growth passed in 1970, interestingly 3 years before the first oil crisis. At the turn of the millennium, there was another increase in emissions, this time caused by the economic boom of the emerging economies. Since 2003, the growth of emissions has been steadily declining, and has de facto already fallen below the zero line, i.e. from now on, emissions are not expected to grow, despite the growth in China, India and other emerging and developing countries.
This is convincingly illustrated in the time-series graph of the Global Carbon Project:

Source: Global Carbon Project (Time Series)

Since 2010, the long-standing decline in emissions in the industrialized countries has roughly balanced the slowing rise in the emerging economies China and India.
Accordingly, it is realistic to call constant $CO_2$ emissions from 2019 onward “business as usual”. While 2020 saw a Covid-19-driven decline in emissions, the 2021 rebound is expected to remain 1.2% below the 2019 level.

CO2 content of the atmosphere with simple emission models

It is assumed that before 1850 the $CO_2$ level was approximately constant and that the measured $CO_2$ content is the sum of the pre-industrial constant level and a function of the $CO_2$ emissions. The aim of this chapter is to find a simple function, which explains the atmospheric content.

Three different models are tested:

  • The first model assumes that all $CO_2$ emissions remain in the atmosphere forever. This means that the additional $CO_2$ content, on top of the pre-industrial level, would be the cumulative sum of all $CO_2$ emissions.
  • The second model assumes an exponential decay of emitted $CO_2$ into the oceans or biosphere with a half-life of 70 years, i.e. half of all emitted $CO_2$ is absorbed after 70 years. This is achieved by a convolution with an exponential decay kernel with time constant $70/ln(2) \approx 100$ years.
  • The third model assumes an exponential decay of emitted $CO_2$ into the oceans or biosphere with a half-life of 35 years, i.e. half of all emitted $CO_2$ is absorbed after 35 years. This is achieved by a convolution with an exponential decay kernel with time constant $35/ln(2) \approx 50$ years. A code sketch of all three models follows below.

In order to make the numbers comparable, the emissions, which are measured in Gt, have to be converted to ppm. This is done with the equivalence 3210 Gt $CO_2$ = 410 ppm.
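A minimal sketch of the three models listed above, using the stated Gt-to-ppm equivalence; the emissions array and file name are illustrative assumptions, and the pre-industrial level is taken as 280 ppm:

```python
import numpy as np

GT_PER_PPM = 3210.0 / 410.0                 # Gt CO2 per ppm, from the equivalence above

def co2_content(emissions_gt, half_life=None, c0=280.0):
    """Pre-industrial level c0 plus the contribution of past emissions:
    either accumulated forever (half_life=None, model 1) or convolved with
    an exponential decay kernel (models 2 and 3)."""
    e_ppm = np.asarray(emissions_gt) / GT_PER_PPM
    if half_life is None:
        return c0 + np.cumsum(e_ppm)
    tau = half_life / np.log(2)             # time constant of the decay kernel
    kernel = np.exp(-np.arange(len(e_ppm)) / tau)
    return c0 + np.convolve(e_ppm, kernel)[:len(e_ppm)]

emissions_gt = np.loadtxt("co2_emissions_gt.txt")   # illustrative file name
model1 = co2_content(emissions_gt)                  # everything stays in the atmosphere
model2 = co2_content(emissions_gt, half_life=70)    # 70-year half-life
model3 = co2_content(emissions_gt, half_life=35)    # 35-year half-life
```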

The yellow graph shows the measured actual emissions from the diagram above, and the blue graph the measured actual $CO_2$ content.

The first “cumulative” model approximates the measured $CO_2$ content quite well from 1850 to 1910, but heavily overpredicts the $CO_2$ content after 1950. This falsifies the hypothesis that $CO_2$ stays in the atmosphere for “thousands of years”.
The second model, with a 70-year half-life of emitted $CO_2$, also overshoots considerably after 1950; it approximates the period between 1925 and 1945. The third model, with a 35-year half-life for emissions, fits the actual $CO_2$ content from 1975 until now.

This confirms what has very recently been published in Nature: the rate of $CO_2$ absorption into the oceans increases with increasing atmospheric $CO_2$ content.

Source: https://www.nature.com/articles/s41467-020-18203-3

The same relation, in particular the increasing “carbon sink” of oceans and biosphere, is reported by the Global Carbon Project in this graphic:

$CO_2$ sources and sinks (Global Carbon Project)

Although we can expect a further increase of the $CO_2$ flux into the ocean in the future, we can safely use the third model, with a half-life of 35 years, for conservative, i.e. non-optimistic, predictions.

Future scenarios

In order to evaluate policy decisions, I will apply this model to predict the future $CO_2$ content under three different emission scenarios:

  • The first scenario (red) I’d like to call the “business as usual” scenario, in the sense that China is already increasing its $CO_2$ emissions only marginally and has committed itself to stop increasing them after 2030; today global emissions are not growing any more. This scenario means that we keep global emissions at the 2019 level.
  • The second scenario (green) is the widely proclaimed decarbonisation by 2050
  • The third scenario (blue) is a compromise proposal, reducing emissions to 50% of the 2019 value (37 Gt) by 2100. This scenario reflects the facts that fossil fuels are finite, and that research and development of sound new technologies takes time:

The consequences for the $CO_2$-content based on the simple model with 35 years half life time are these:

  • The first scenario (red) increases the $CO_2$ content, but not beyond 510 ppm in the distant future, which is less than double the pre-industrial amount. Depending on the sensitivity, this means a hypothetical $CO_2$-induced temperature increase of 0.16° to 0.8° from current temperatures, or 0.45° to 1.3° since pre-industrial times.
  • The second scenario, rapid worldwide decarbonisation (green), hardly increases the $CO_2$ content any further, and eventually reduces the atmospheric $CO_2$ content to pre-industrial levels.
    Do we really want this? It would mean depriving all plants, which thrive best at $CO_2$ levels above 400 ppm. Not even the IPCC has ever formulated this as a desirable goal.
  • The compromise scenario (blue) will slightly raise $CO_2$ content but keep it below 460 ppm, and then gradually reduce it to the 1990 level. The atmospheric $CO_2$ levels will begin to fall after 2065.
Atmospheric $CO_2$ content prediction based on the simple model with a 35-year half-life.

A rigorous mathematical model, based on the single simple assumption that oceanic and biological $CO_2$ absorption is proportional to the $CO_2$ concentration, comes to essentially the same result, with the nice side effect of providing an error estimate of the prediction:
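The underlying idea can be written as a single linear balance equation. The following is only a sketch of one possible yearly discretization; the assumption that absorption acts on the excess over the pre-industrial level, and the reuse of the 35-year half-life from above, are mine for illustration:

```python
import numpy as np

def co2_linear_absorption(emissions_ppm, c0=280.0, half_life=35.0):
    """Yearly integration of dC/dt = e(t) - k*(C - c0): emissions add to the
    atmospheric concentration, absorption removes a fixed fraction of the
    excess over the pre-industrial level c0 each year."""
    k = np.log(2) / half_life          # absorption rate matching the assumed half-life
    conc = [c0]
    for e in emissions_ppm:
        conc.append(conc[-1] + e - k * (conc[-1] - c0))
    return np.array(conc[1:])
```

For constant emissions $e$ the excess concentration converges to $e/k$, which is why the “business as usual” scenario levels off at a finite value instead of growing without bound.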

Conclusion

Not even the most pessimistic of the scenarios described above reaches a “catastrophic” $CO_2$ content in the atmosphere.
The scenario of complete decarbonisation by 2050 can only be judged as utter nonsense. No one can wish to go back to pre-industrial $CO_2$ levels.
On the other hand, the limited fossil resources are a motivation to replace them in a feasible and humane way. This is reflected in the “compromise” scenario, which gradually reduces long-term emissions to approximately the level of the year 1990.