The current climate discussion is determined by one central parameter: the atmospheric CO2 content. This, in turn, is significantly influenced by human emissions, which are partially offset by the growing uptake of CO2 from the air into the oceans and, as fertilization, into the biosphere. The exact relationship, however, is far from clear. The Intergovernmental Panel on Climate Change assumes, simplistically speaking, that about half of all emissions are swallowed back up by the oceans and the biosphere, while the rest remains in the atmosphere for all eternity. This statement is irritating, because if there is an absorption mechanism, it must apply to all atmospheric CO2 and not just to the small portion that we call “human emissions” (the molecules carry no labels saying where they came from).
Looking for reasonable approaches, there are two papers that lead to a viable solution, one by Ari Halparin (2015) and one by Roy Spencer (2019). Both point to a simple model based on these assumptions:
- A natural equilibrium CO2 concentration in the atmosphere, resulting from the natural absorption and emission of CO2
- The decay of the surplus above this equilibrium, generated by human emissions, at a rate proportional to the concentration
Despite the apparent numerical differences between the two publications, careful error propagation will demonstrate that both results are statistically compatible with each other and with the result of this article.
A simple, robust model
Therefore, we create a simple global “accountant” model of the CO2 balance, with C_i as the CO2 content of the atmosphere in year i, E_i as the global CO2 emissions of human origin in year i, N_i as the global natural emissions in year i, and A_i as the absorption of CO2 in year i into the oceans and biosphere:

C_i = C_{i−1} + E_i + N_i − A_i
As a consequence of mass conservation, this is trivially correct — like a bank account with deposits and withdrawals — when all quantities are measured with the same unit (GtC/a, Gt CO2/a, ppm). For the representations here the unit ppm is used.
There are only two simple model assumptions:
- The absorption is assumed to be proportional to the concentration of the previous year:

A_i = a · C_{i−1}
The physical justification for this assumption is the fact that the partial pressure of CO2, which is relevant for absorption processes, increases with concentration. It is also known that C3 plants, which represent the majority of all plants, have a linear absorption characteristic. Not knowing the precise dependency, a linear relation is in any case a good initial assumption, justified as long as the receiving reservoir is not saturated with CO2.
Assuming that there are different absorption constants for oceanic (a_o) and biological (a_b) absorption, under the linearity assumption they can be added into a single constant a:

A_i = (a_o + a_b) · C_{i−1} = a · C_{i−1}
This is a radical simplification of the box-diffusion model by Oeschger et al. (1975), referred to in the article “Predicting Future Atmospheric Carbon Dioxide Levels” by Siegenthaler and Oeschger (1978). Instead of assuming separate boxes for the mixed layer and for the biosphere, we assume a one-dimensional diffusion process between atmosphere and ocean resp. biosphere with a single diffusion constant, making no assumptions about the properties of a possible mixed layer or about the mechanism of absorption in the biosphere.
The advantage of such a simple model is that we do not have to make any speculative assumptions about the many model parameters, some of which are quite arbitrary (e.g. thickness of mixed layer), but restrict the whole model to a single absorption parameter.
More than 40 years after their article was written, there is now the additional argument that Oeschger’s complicated model, which explained the CO2 content including bomb-test data pretty well, failed badly at predicting future CO2 levels. Their prediction of the additional CO2 content in 2020 beyond the pre-industrial level is 145% larger (590 ppm = 295 ppm + 295 ppm) than the actual level (415 ppm = 295 ppm + 120 ppm). It remains to be analyzed whether the failure is due to overestimated emissions based on an unrealistic assumption of exponential growth, or whether it is caused by the model itself.
- Without emissions of human origin, i.e. with E_i = 0, we assume that natural emissions and absorptions are balanced (N_i = A_i), resulting in a constant equilibrium concentration C_0. This implies that pre-industrial global natural emissions are assumed to be constant: N_i = a · C_0. This relation makes a falsifiable statement about the magnitude of natural emissions.
As we know, there are causes for changes in the natural emissions, e.g. volcanoes, ocean cycles, and changes of land use. We will see from the measured data how significant these influences are, and whether the model needs to be adapted. For the time being, we assume no changes within the investigated time range 1959-2019.
The full final equation is therefore

C_i = C_{i−1} + E_i − a · (C_{i−1} − C_0)
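As a minimal sketch, the yearly update C_i = C_{i−1} + E_i − a · (C_{i−1} − C_0) can be written as a one-line Python function; the parameter values a = 0.018/yr and C_0 = 283 ppm are the ones fitted from the data in this article, and the 415 ppm / 4.65 ppm/yr inputs in the example are the article’s approximate current values:

```python
# One step of the model: C_i = C_{i-1} + E_i - a * (C_{i-1} - C_0).
# All quantities are in ppm resp. ppm/yr.
def step(c_prev, e, a=0.018, c0=283.0):
    """Advance the atmospheric CO2 concentration by one year.

    e is the human emission in ppm/yr; the natural emission a*c0 and
    the absorption a*c_prev are folded into the relaxation term.
    """
    return c_prev + e - a * (c_prev - c0)

# One year at roughly the current state (415 ppm, 4.65 ppm/yr emissions):
print(step(415.0, 4.65))   # ~417.3 ppm: net growth of ~2.3 ppm
```

Note that at the equilibrium concentration C_0 with zero emissions the step returns its input unchanged, which is exactly the balance assumption above.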
This study uses only the measured data of CO2 content as well as those of CO2 emissions. Starting from the Mauna Loa measurements of CO2 content, monthly data from 1959 to 2020 are available. From these, annual averages were determined. The series of measurements of annual emissions covers the years 1750-2019.
The absorption constant a and the natural equilibrium concentration C_0 in a given time interval are obtained by least-squares estimation of this linear equation (dependent variable C_i − C_{i−1} − E_i, independent variable C_{i−1}), using the Python module OLS:

C_i − C_{i−1} − E_i = a · C_0 − a · C_{i−1}
It can also be seen as an autoregressive process where the coefficient for the external process (E_i) is forced to be 1 due to the mass conservation constraint.
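The estimation can be sketched on synthetic data with known parameters; the emission ramp, starting value, and noise level below are illustrative stand-ins, not the real Mauna Loa and emission series, and the dependency-free `numpy.polyfit` gives the same least-squares point estimates as the statsmodels OLS module used in the article:

```python
import numpy as np

# Simulate a concentration series with known parameters, then recover
# them by the linear regression described above.
a_true, c0_true = 0.018, 283.0                  # 1/yr, ppm
n = 61                                          # years 1959-2019
emissions = np.linspace(1.0, 4.6, n)            # human emissions, ppm/yr (illustrative)
rng = np.random.default_rng(0)

conc = np.empty(n)
conc[0] = 316.0                                 # approx. 1959 value, ppm
for i in range(1, n):
    conc[i] = conc[i-1] + emissions[i] - a_true * (conc[i-1] - c0_true)
conc += rng.normal(0.0, 0.1, n)                 # measurement noise

# Regression: C_i - C_{i-1} - E_i = a*C_0 - a*C_{i-1}
y = conc[1:] - conc[:-1] - emissions[1:]
slope, intercept = np.polyfit(conc[:-1], y, 1)  # slope = -a, intercept = a*C_0
a_est = -slope
c0_est = intercept / a_est
print(f"a = {a_est:.4f} /yr, C0 = {c0_est:.0f} ppm")
```

Even with only 0.1 ppm of noise the recovered C_0 scatters by tens of ppm, which illustrates the sensitivity of C_0 noted below.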
Using data from 1959-2019 only, this results in C_0 = 283 ppm, which is remarkably close to the assumed concentration before industrialization. This result appears to give this simple model compelling credibility. But C_0 is very sensitive to changes of the measured data, resulting in a very large confidence interval.
The mean absorption constant for the entire period 1959-2019 is a = 0.018 per year.
This means that on average during the last 60 years, 1.8% of the difference between the current CO2 content of the atmosphere and 283 ppm was absorbed by the oceans and biosphere every year, i.e., currently

A = 0.018 · (415 ppm − 283 ppm) ≈ 2.4 ppm per year
This formula provides us with a simple way to calculate the equilibrium CO2 content C_eq for a given emission amount E, i.e., when the uptake is equal to the emission (a · (C_eq − C_0) = E):

C_eq = C_0 + E / a
For the current total human-caused emissions of 4.65 ppm/year, this consequently gives an equilibrium CO2 content of 544 ppm.
With total emissions halved to 2.33 ppm/year, the equilibrium content would be 412 ppm, which is approximately the current value.
When hypothetically stopping all emissions, half of the difference from the equilibrium level (283 ppm) has decayed after the half-life of ln 2 / a ≈ 39 years.
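These equilibrium and half-life figures follow directly from the fitted parameters; a quick check (the small deviation from the quoted 544 ppm presumably comes from rounding a to 0.018):

```python
import math

a, c0 = 0.018, 283.0        # fitted absorption constant (1/yr) and equilibrium (ppm)

def equilibrium(e):
    """Concentration (ppm) at which absorption a*(C - C0) equals emission e (ppm/yr)."""
    return c0 + e / a

print(equilibrium(4.65))    # ~541 ppm for current emissions (article: 544 ppm)
print(equilibrium(4.65/2))  # ~412 ppm for halved emissions
print(math.log(2) / a)      # surplus half-life, ~39 years
```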
Modeling and forecasting
This model can be used, on the one hand, to reproduce the measured CO2 content recursively from the emissions, but also to determine it for future forecasts based on emission scenarios. Here is the reconstruction of the Mauna Loa CO2 data series
- From the emissions data series,
- the initial 1959 CO2 content value
- and the two determined model parameters a and C_0:
With the above model equation the whole curve is recursively computed, without using any of the measured CO2-content data (except the initial value). The result is an excellent reconstruction with small, apparently periodic variations:
These discrepancies of up to 1 ppm between the measured CO2 content and the reconstruction apparently have a period of about 20 years. They are a subject for further study, in which it must be determined whether they are changes in the uptake of CO2 or temperature-related changes in ocean emissions, perhaps due to ocean cycles. In fact, Roy Spencer relates some of the elevations to volcanic events and the cyclic pattern to the El Niño index.
For projections of future emission scenarios, these small deviations, which are symmetric with respect to 0, do not play a significant role.
To evaluate policy decisions, I will apply this model to predict future CO2 levels with 5 different scenarios:
- I would like to call the first scenario (red) the “business-as-usual” scenario, in the sense that China is committed to stopping the rise in emissions after 2030, with very slow growth since 2010. Global emissions have already stopped rising since 2018, and all developed countries have falling emissions. This scenario assumes global emissions remain at the current maximum of 37 Gt CO2/yr = 4.6 ppm. Total budget to 2100: 2997 Gt CO2, then 37 Gt CO2/yr
- The second scenario (green) is the widely proclaimed decarbonization by 2050.
This glosses over the fact that a full global replacement of existing fossil fuel sources would require installing the equivalent of one major nuclear power plant of 1.5 GW capacity every day.
Total budget by 2050: 555 Gt CO2, 0 Gt/year thereafter.
- Three other scenarios (blue, turquoise, purple) aim to reduce emissions to 50% of today’s level, roughly the 1990 level. These scenarios reflect the facts that fossil fuels are finite, that research and development of new reliable technologies take time, and that the reduced emissions will not further increase the CO2 concentration beyond today’s level. The 3 scenarios differ in the target date by which this reduction should be achieved: either by 2100 (blue, corresponding to a 0.9% emission reduction per annum) or by 2050 (turquoise, corresponding to a 2.3% emission reduction per annum); the scenario of immediate reduction to 50% (purple) is only hypothetical.
Total budget scenario blue by 2100: 2248 Gt CO2, then 18.5 Gt CO2/year; scenario turquoise by 2050: 860 Gt CO2, then 18.5 Gt CO2/year
Scenario purple: 18.5 Gt CO2/year, starting now
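The forward runs behind these scenario curves are just the same recursion applied to assumed emission paths. A minimal sketch for the two constant-emission cases (red and an immediate halving, as in purple), using the fitted parameters, shows the expected convergence toward C_0 + E/a:

```python
# Project the CO2 concentration (ppm) forward under a fixed emission
# path (ppm/yr), with the fitted a = 0.018/yr and C_0 = 283 ppm.
a, c0 = 0.018, 283.0

def project(c_start, emissions):
    c = c_start
    for e in emissions:
        c = c + e - a * (c - c0)
    return c

c_red = project(415.0, [4.6] * 200)      # business as usual, 200 years
c_purple = project(415.0, [2.3] * 200)   # immediate halving, 200 years
print(round(c_red), round(c_purple))     # each near its equilibrium c0 + e/a
```

After 200 years the red path has nearly reached its equilibrium of roughly 539 ppm, while the halved-emission path stays essentially frozen near today’s value, matching the scenario descriptions below.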
The consequences for the CO2 content are as follows:
- The first scenario (red) increases CO2 levels, but it would take until after 2200 to reach the equilibrium state of 544 ppm. This is less than a doubling from pre-industrial times. Depending on CO2 sensitivity (0.5°…2°), this implies a hypothetical temperature increase of 0.1° to 0.6° over current temperatures, or 0.4° to 1.4° since pre-industrial times. In any case, this remains below the optimistic target of 1.5° of the Paris climate agreement.
- The second scenario, global rapid decarbonization (green), barely increases CO2 levels and eventually reduces atmospheric CO2 to pre-industrial levels.
Do we really want that? That would mean food deprivation for all plants that thrive best at CO2 levels greater than 400 ppm. Not even the Intergovernmental Panel on Climate Change (IPCC) has ever stated such a reduction as a goal.
- The realistic reduction scenario (blue, until 2100) raises the values over the next 50 years to a maximum of 460 ppm and then gradually lowers them after 2055 to today’s level.
- The faster reduction scenario (turquoise, until 2050) raises the values over the next 20 years to a maximum of 430 ppm and then gradually lowers them to today’s level.
- The hypothetical reduction scenario (purple, immediate) freezes the CO2 content at the current value.
In no case are catastrophic effects to be expected as described in some of the IPCC horror scenarios (CO2 levels above 1200 ppm).
The so-called energy transition scenario is literally a “back to the Stone Age” scenario that should be rejected not only for economic but also for ecological reasons. Plant life thrives much better with the higher CO2 content of over 400 ppm than with the CO2 deficiency of pre-industrial times.
To arrive at a viable target for policy, two goals need to be defined:
- What is the ideal CO2 level to aim for? Due to the fact that food production, especially of C3 crops, is dependent on the CO2 content, I think a lower level than today’s is problematic (http://klima-fakten.net/?page_id=690). This would mean that in the long term we need to reduce emissions at most to half of today’s level, possibly less.
- What is the maximum CO2 level we are willing to accept temporarily, weighing all the consequences of the resulting decisions? The “fast” reduction to 50% by 2050 would mean a barely noticeable increase to 430 ppm, while the variant of reduction to 50% by 2100, which is better adapted to the prosperity of mankind, would lead to an increase to 460 ppm that is, in my opinion, justifiable. The corresponding emission reduction of 0.9% per annum appears to be easily achievable and is compatible with existing technology. Both scenarios meet the conditions of the Paris climate agreement.