Context/Earth

[[ Check out my Wordpress blog Context/Earth for environmental and energy topics tied together in a semantic web framework ]]


Wednesday, December 7, 2011

The basic algorithm

A statement was made by a climate change skeptic:
"The existence of fossil fuels is due to the greater plant productivity in a warm world with lots more CO2."

That is what is so bizarre and surreal about the skeptical AGW discussions. They tend to devolve into either elliptical contradictions or else reveal self-consistent and circular truths.
  1. We have low-levels of CO2 now
  2. Levels of CO2 seem to be rising.
  3. This will lead to a warmer climate due to the GHG effect
  4. Wait a second! Heat and extra CO2 may be good after all.
  5. That will promote more plant growth
  6. It also might explain why we have lots of fossil fuels.
  7. Millions of years for the plants to get buried by sedimentation and tectonic activity
  8. That's how much of the CO2 was sequestered, lowering atmospheric concentration
  9. But now we are digging it all up
  10. And burning it, releasing CO2 back into the air
  11. Go to step #2

So now we have to believe in the theory of CO2-assisted climate change. The algorithmic (or is that AlGoreRythmic?) steps have already been laid out. The problem for the skeptics is that they can't argue their way out of this basic trickbox.

Sunday, November 13, 2011

Multiscale Variance Analysis and Ornstein-Uhlenbeck of Temperature Series

Can we get behavioral information from something as seemingly random and chaotic as the Vostok ice core temperature data? Yes, as it turns out, performing a stochastic analysis of any time series can yield some very fundamental behaviors of the underlying process. The premise is that we can model the historical natural variability that occurs over hundreds of thousands of years by stochastic temperature movements.

I took the approach of modeling a bounded random walk from two different perspectives. In Part 1, I take the classic approach of applying the fundamental random walk hopping formulation.

Part 1 : Ornstein-Uhlenbeck Random Walk

Figure 1 below shows the Vostok data collected at roughly 20 to 600 year intervals and above that a simulated time series of a random walk process that has a similar variance.

Fig. 1: Ornstein-Uhlenbeck Random Walk process (top green) emulates
Vostok temperature variations (below blue)

The basic model I assume is that some type of random walk (red noise) statistics are a result of the earth’s climate inhabiting a very shallow energy well in its quiescent state. See Figure 2 below. The well is broad, on the order of 10 degrees C, and relatively shallow. Within this potential well, the climate hovers in a metastable state whereby small positive feedbacks of CO2, water vapor, or albedo can push it in either direction. The small amount of perturbation needed to move the global temperature reflects this shallowness.

Fig. 2 : Schematic of random walk behavior of a metastable system occupying a shallow quasi-equilibrium

The model of a correlated Markovian random walk, whereby the perturbation can go in either a positive or negative direction each year, matches this shallow energy-well picture remarkably well. The random walk can't grow without bound, as soft temperature constraints do exist, mainly from the negative feedback of the Stefan-Boltzmann law.

To model these soft constraints, we put a proportional drag on the random walk so that the walk produces a noticeable asymmetry on large excursions from the quiescent point, Tq (I don’t call it an equilibrium because it is really a quasi-equilibrium). This generates a reversion to the mean in the Markov random walk hopping direction, which is most commonly associated with the general Ornstein-Uhlenbeck process.

The way a random walk works is that the excursion grows in proportion to the square root of time, so if we know how far it has spread at one time scale, we can interpolate how far it spreads over a fraction of that time; see Figure 3 below.

$$ \Delta T = \sqrt{Dt} $$

Fig. 3 : Multiscale Variance applied to Monte Carlo simulated random walk.
The classic Square Root random walk is shown for reference.


The agreement with the Vostok data is illustrated by Figure 4 below. The multiscale variance essentially takes all possible pairs of points and plots the root-mean-square temperature difference as a function of the time interval, L.

Fig. 4 : Multiscale variance of Vostok data
The Ornstein-Uhlenbeck kernel solution has a simple formula for the variance:

$$Temperature(L) = F(L) = \sqrt{\frac{h}{2 \sigma}(1-e^{-2 \sigma L})}$$

where h is the hop rate and σ is the drag factor. We assume the mean is zero for convenience.
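
As a quick reference, the closed-form curve can be tabulated directly; the awk sketch below (in the same style as the Monte Carlo listing further down) does this, with h and σ left as placeholder values to be replaced by the fitted parameters:

BEGIN {  ## Tabulate the closed-form O-U multiscale variance F(L)
   h = 0.0008        ## placeholder hop rate
   sigma = 0.00002   ## placeholder drag factor
   for(L = 10; L < 200000; L *= 1.5)
     print int(L) "\t" sqrt(h/(2*sigma) * (1 - exp(-2*sigma*L)))
}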

The very simple code for generating a Monte Carlo simulation and the multiscale variance is shown in the awk snippet below. The BEGIN portion sets up the random walk parameters and then performs a random flip to determine whether the temperature will move up or down. The movements labeled A and C in Figure 2 correspond to random numbers less than 1/2 (positive hops) and the movements labeled B and D correspond to random numbers greater than 1/2 (negative hops). This nearly canonical form captures reversion to the mean as long as the mean is set to zero.

The hopping rate, h, corresponds to a diffusion coefficient of 0.0008 degrees^2/year and the drag parameter is set to 0.0012. The drag causes a reversion to the mean over long times: the hop is proportionally reduced according to how large the excursion is away from the mean.
BEGIN {  ## Set up the Monte Carlo O-U model
   N = 330000
   hop = 0.02828
   drag = 0.0012
   T[0] = 0

   srand()
   for(i=1; i<N; i++) {
     if(rand()<0.5) {
       T[i] = T[i-1] + hop*(1-T[i-1]*drag)
     } else {
       T[i] = T[i-1] - hop*(1+T[i-1]*drag)
     }
   }
}

{} ## No stream input

END {  ## Do the multiscale variance
   L = N/2
   while(L > 1) {
     Sum = 0
     for(i=1; i<=N/2; i++) {
        Val = T[i] - T[i+int(L)]
        Sum += Val * Val
     }
     print int(L) "\t" sqrt(Sum/(N/2))
     L = 0.9*L
   }
}
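
As a usage note, the script reads no real input, so it can be run against an empty stream, e.g. awk -f ou-multiscale.awk /dev/null > variance.dat (the file names here are placeholders); the empty main rule exists only to hand control over to the END block.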

The END portion calculates the statistics for the time series T(i). The multiscale variance is essentially described by the following correlation measure: the squared differences of all pairs of points separated by the interval L are averaged, and the result is reported as a square root.

$$ F(L) = [\frac{2}{N}\sum_{j = 1}^{N/2} ( Y_j - Y_{j+L})^2]^{\frac{1}{2}} \quad \forall \quad L \lt N/2 $$

The O-U curve is shown in Figure 4 by the red curve. The long time-scale fluctuations in the variance-saturated region are artifacts based on the relative shortness of the time series. If a longer time series is correlated then these fluctuations will damp out and match the O-U asymptote.

The results from 8 sequential Monte Carlo runs of several hundred thousand years are illustrated in Figure 5 below.  Note that on occasion the long term fluctuations coincide with the Vostok data. I am not suggesting that some external forcing factors aren’t triggering some of the movements, but it is interesting how much of the correlation structure shows up at the long time ranges just from random walk fluctuations.


Fig. 5 : Multiscale Variance for Vostok data and
various sampled Ornstein-Uhlenbeck random walk processes

In summary, the Vostok data illustrates the characteristic power-law correlation at shorter times and then flattens out at longer times, starting at about 10,000 years. This behavior results from the climate constraints of the maximum and minimum temperature bounds, set by the negative feedback of Stefan-Boltzmann at the high end and various factors at the low end. So to model the historical variations at a place like Vostok, we start with a random walk and modify it with an Ornstein-Uhlenbeck drag factor, so that as the temperature walks closer to the rails, the random walk slows down. By the same token, we give the diffusion coefficient a bit of a positive-feedback push when the temperature moves away from the rail.

The question is whether the same random walk extends to the very short time periods ranging from years to a few centuries. Vostok doesn’t give this resolution, as is seen from Figure 4, but other data sets do. If we can somehow extend this set of data to the full temporal dynamic range, this would provide a fully stochastic model for local climatic variation.

Figure 6 below shows a Greenland core data set which provides better temporal resolution than the Vostok set.
http://www.ncdc.noaa.gov/paleo/metadata/noaa-icecore-2490.html
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/greenland/summit/ngrip/ngrip2006d15n-ch4-t.txt

Fig. 6 : Full Greenland time series generated at one year intervals.

Yet this data comes with the caveat that the shorter time intervals are "reconstructed". You can see this in the smoother profile at shorter time scales shown in Figure 7.

Fig. 7 : At shorter scale note the spline averaging which smooths out the data.

This smoothing reconstruction is a significant hurdle in extracting the actual random walk characteristics at short time scales. The green curve in Figure 8 below illustrates the multiscale variance applied to the set of 50,000 data points. The upper blue curve is a Monte Carlo run of the excursions assuming O-U diffusion at a resolution of 1 year, and the lower red curve is the multiscale variance applied after processing with a 250-point moving average filter.

Fig. 8 : Multiscale Variance of Ornstein-Uhlenbeck process
does not match Greenland data (blue line)
unless a moving average filter of 250 points is applied (red line).

Higher resolution temperature time series are available from various European cities. The strip chart Figure 9 below shows a time series from Vienna starting from the 18th century.

Fig. 9 : Vienna Temperature time series

The multiscale variance shown in Figure 10 suggests that the data is uncorrelated and consists of white noise.

Fig. 10 : Multiscale Variance on data shows flat variance due to uncorrelated white noise.
Rise at the end due to AGW at current duration.

Sensitivity

This description in no way counters the GHG theory; rather, it demonstrates how sensitive the earth’s climate is to small perturbations. The fact that the temperature in places such as Vostok follows this red-noise O-U model so well indicates that it should also be hypersensitive to artificial perturbations such as the one caused by adding large amounts of relatively inert CO2 to the atmosphere. The good agreement with an Ornstein-Uhlenbeck process is proof that the climate is sensitive to perturbations, and the CO2 we have introduced is the added forcing-function stimulus that the earth is responding to. For the moment the climate is moving in the direction of the stimulus, and the red noise is along for the ride, unfortunately obscuring the observation at the short time scales we are operating under.

Figure 11 below shows a couple of shorter-duration simulations using the Vostok parameters. Note that the natural fluctuations can occasionally obscure any warming that may occur. I explored this in a previous post where I calculated cumulative exceedance probabilities.

Fig. 11 : Simulated time series showing the scale of the red noise

Extrapolating the Vostok data, over an arbitrary 100-year time span a temperature increase of 0.75 degrees or more will occur with only a 3% probability. In other words, the increase we are in right now is a low-probability occurrence. It is not implausible; the odds are just low.
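
A hedged sketch of how an exceedance estimate of this kind can be generated from the same Monte Carlo series is shown below; the 0.75-degree threshold and the 100-year window come from the text, but the probability that comes out depends on the short-time variance of the particular parameter set, so it will not necessarily reproduce the 3% figure extrapolated from the Vostok data.

BEGIN {  ## Fraction of 100-year spans showing a temperature rise of 0.75 deg or more
   N = 330000; hop = 0.02828; drag = 0.0012
   T[0] = 0
   srand()
   for(i=1; i<N; i++) {
     if(rand() < 0.5) T[i] = T[i-1] + hop*(1 - T[i-1]*drag)
     else             T[i] = T[i-1] - hop*(1 + T[i-1]*drag)
   }
   count = 0
   for(i=0; i<N-100; i++)
     if(T[i+100] - T[i] >= 0.75) count++
   print "P(rise >= 0.75 C over 100 yr) =", count/(N-100)
}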

Alternatively, one could estimate the fraction of the increase due to potential variability with some confidence interval attached to it. This will generate an estimate of uncertainty for the forced fraction versus the natural variability fraction.

The model of an Ornstein-Uhlenbeck random walk can include a drift term to represent a forcing function. For a long-term data series we keep this small so that it only reaches a maximum value after a specified point. In Figure 12 below, a drift term of 0.0002 degrees/year causes an early variance increase after 5000 years. This may indeed have been a natural forcing function in the historical record, as it creates an inflection point that matches the data better than the non-forced Ornstein-Uhlenbeck process in Figure 4.

Fig. 12 : Adding a small biased positive forcing function to the hop rate creates a strong early variance
as the temperature shifts to warmer temperatures sooner
Fig. 13 : Another simulation run which matches the early increase in variance seen in the data

The code below adds the small bias parameter, force, to the diffusion term, hop; this acts as a small drift that increases the variance over a long time period.
BEGIN {  ## Set up the Monte Carlo O-U model
   N = 330000
   hop = 0.02828
   drag = 0.0012
   force = 0.00015
   T[0] = 0.0

   srand()
   for(i=1; i<N; i++) {
     if(rand()<0.5) {
       T[i] = T[i-1] + (hop+force)*(1-T[i-1]*drag)
     } else {
       T[i] = T[i-1] - (hop-force)*(1+T[i-1]*drag)
     }
     if(i%100 == 0) print T[i]
   }
}

The final caveat is that these historical time series are all localized data proxies or local temperature measurements. They likely show larger excursions than the globally averaged temperature. Alas, a global average is hard to generate from paleoclimate proxies or early human records.


Part 2 : Semi-Markov Model

The first part made the assumption that a random walk can generate the long-term temperature time-series data. The random walk generates a power-law exponent of 1/2 in the multiscale variance, and the Ornstein-Uhlenbeck master equation provides a reversion-to-the-mean mechanism. One has to look at the model results cross-eyed to see halfway good agreement, so the validity of this approach for matching the data is ambiguous at best.

Because the temperature excursions don't span orders of magnitude in dynamic range, the power-law may be obscured by other factors. For example, the Vostok data shows a large relative amount of noise over the recent short-duration intervals, while the Greenland data appears heavily filtered.

That raises the question of whether an alternate stochastic mechanism is operational. If we consider the Greenland data as given, then we need a model that generates smooth and slow natural temperature variations at a short time scale, and not the erratic movements of Brownian motion that the O-U supplies.

One way to potentially proceed is to devise a doubly stochastic model. Along the time dimension, the overall motions are longer term, with the hopping proceeding in one direction until the temperature reverses itself. This is very analogous to a semi-Markov model as described by D.R. Cox and often seen in phenomena such as the random telegraph signal, as described in The Oil ConunDRUM.

A semi-Markov process supplies a very disordered notion of periodicity to the physical behavior. It essentially occupies the space between a more-or-less periodic waveform and the erratic motion of a pure random walk.

The math behind a semi-Markov process is very straightforward. On the temporal dimension, we apply a probability distribution for the run length, t. Here p(t) is related to the autocorrelation describing the long-range order, and a typical PDF is a damped exponential:
$$p(t) = \alpha e^{-\alpha t}$$
To calculate the variance of the temperature excursion, T, we use the standard definition of the second moment:

$$\sigma_T\,^2 = \int_0^\infty T(t)^2 \cdot p(t) dt$$
The simplification we apply assumes that the short-term temperature excursion is
$$T(t) = \sqrt{D \cdot t}$$
Then to calculate a multiscale variance, we take the square root to convert it to the units of temperature,
$$ var(T) = \sqrt{\sigma_T\,^2}$$
we supply the parameterizations to the integration, and keep a running tally up to the multiscale time of interest, L.
$$var(T(L)) = \sqrt{\int_0^L \alpha e^{-\alpha t} \, Dt \, dt} = \sqrt{D (1-e^{-\alpha L} (\alpha L+1))/\alpha}$$
That is all there is to it. We have a short, concise formulation with exactly two fitting parameters: one for shifting the time scale
$$ \tau = \frac{1}{\alpha}$$
and the diffusion parameter, D, scaling the temperature excursion.

To test this model, we can do both an analytical fit and Monte Carlo simulation experiments.

We start with the Greenland data as that shows a wide dynamic range in temperature excursions, see Figure 14 below.

Fig. 14 : Greenland multiscale variance with semi-Markov model


The thought was that the Greenland temperature data was reconstructed, with the background white noise removed at the finest resolution, so this model reveals the underlying movement of the temperature.
$$D = 0.25 \,\, \mathsf{degrees^2/year}$$
$$ \alpha = 0.005 \,\, \mathsf{year^{-1}}$$
A quick check for asymptotic variance is
$$ var_\infty = \sqrt{D/\alpha} = 7 ^\circ C $$
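
A minimal awk sketch that tabulates the closed-form var(T(L)) with these Greenland parameters; at large L it flattens toward the 7 degree asymptote computed above:

BEGIN {  ## Tabulate the semi-Markov multiscale variance for the Greenland fit
   D = 0.25        ## degrees^2/year
   alpha = 0.005   ## 1/year
   for(L = 1; L < 100000; L *= 1.5)
     print int(L) "\t" sqrt(D * (1 - exp(-alpha*L)*(alpha*L + 1)) / alpha)
}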


Fig. 15 : Simulation of Greenland data assuming Semi-Markov with first-order lag bounds.

The simulation in Figure 15 uses the code shown below. Instead of square-root excursions, this uses first-order lag movements, which show generally similar behavior to the analytical expression.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 100000

int main(void) {  // Set up the Monte Carlo semi-Markov model
   double char_time = 100.0;
   double hop = 0.02;
   double drag = 0.03;
   int flip = 1;
   double *T = malloc(2*N*sizeof(double));   // extra headroom for the final run
   int n = 1;
   int R, L, i;
   double Sum, Val, rnd;

   T[0] = 0.0;
   srand ( time(NULL) );

   while(n < N) {
     flip = -flip;                            // reverse direction at the start of each run
     rnd = (rand() + 1.0)/(RAND_MAX + 1.0);   // uniform on (0,1], avoids log(0)
     R = -(int)(char_time * log(rnd));        // exponentially distributed run length
     for(i=n; i<=n+R; i++)
        if(flip<0) {
          T[i] = T[i-1] + hop*(1-T[i-1]*drag);   // first-order lag movement upward
        } else {
          T[i] = T[i-1] - hop*(1+T[i-1]*drag);   // first-order lag movement downward
        }
     n += R;
   }
   // Do the multiscale variance
   L = N/2;
   while(L > 1) {
     Sum = 0.0;
     for(i=1; i<N/2; i++) {
        Val = T[i] - T[i+L];
        Sum += Val * Val;
     }
     printf("%i\t%g\n", L, sqrt(Sum/(N/2)));
     L = 0.1*L;
   }
   free(T);
   return 0;
}
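
As a usage note, the listing above compiles as plain C with the array allocated via malloc; link with the math library, e.g. cc semimarkov.c -lm (the file name is mine). It prints the same two-column L versus variance format as the awk version.
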
Greenland shows large temperature volatility when compared against the ice core proxy measurements of the Antarctic. The semi-Markov model, when applied to Vostok, works well in explaining the data (see Figure 16), with the caveat that a uniform level of white noise is added to the variance profile. The white noise is uncorrelated, so it adds the same background level independent of the time scale, i.e. it shows up as a flat variance. Whether this is due to instrumental uncertainty or natural variation, we can't tell.

Fig. 16 : Semi-Markov model applied to Vostok data


Assuming a white noise level of 0.9 degrees, the fitted parameters are:
$$D = 0.25 \,\, \mathsf{degrees^2/century}$$
$$ \alpha = 0.018 \,\, \mathsf{century^{-1}}$$
The check for asymptotic variance is
$$ var_\infty = \sqrt{D/\alpha} = 3.7 ^\circ C $$

The remarkable result is that the Vostok temperature shows similar model agreement to Greenland, but the semi-Markov changes are on the order of hundreds of centuries, instead of hundreds of years for Greenland.


Another Antarctic proxy location is EPICA, which generally agrees with Vostok but has a longer temperature record. Figure 17 and Figure 18 also demonstrate the effect of small-sample simulations and how that smooths out with over-sampling.

Fig. 17 : EPICA data with model and simulation

Fig. 18 : EPICA data with model and oversampling simulation


Part 3 : Epilog

The semi-Markov model described in Part 2 shows better agreement than the model of a random walk with Ornstein-Uhlenbeck reversion described in Part 1. This better agreement is explained by the longer-term order likely implicit in natural climate change. In other words, the herky-jerky movements of a random walk are less well suited as a process model than longer-term stochastic trends that periodically reverse. The random-walk character still exists, but it likely carries a strong bias depending on whether the historical record is warming or cooling.

If we can get more of these proxy records for other locations around the world, we should be able to produce better models for natural temperature variability, and thus estimate exceedance probabilities for temperature excursions within well-defined time scales. With that we can estimate whether current temperature trends are out of the ordinary in comparison to historical records.


I found more recent data for Greenland which indicates a likely shift in climate fluctuations starting approximately 10,000 years ago.


Fig. 19 :  From http://www.gps.caltech.edu/~jess/ "The figure below shows the record of oxygen isotope variation, a proxy for air temperature, at the Greenland Summit over the past 110,000 years.  The last 10,000 years, the Holocene, is marked by relative climatic stability when compared to the preceding glacial period where there are large and very fast transitions between cold and warm times."

The data in Figure 19 above and Figure 20 below should be compared to the Greenland data described earlier, which ranged from 30,000 to 80,000 years ago.

Fig. 20 : Multiscale variance of Greenland data over the last 10,000 years, not converted to temperature.
Note the flat white noise until several thousand years ago, when a noticeable uptick starts.

The recent data definitely shows a more stable, warmer regime than the erratic downward fluctuations of the preceding glacial period. So the distinction is between recent times and times older than approximately 5,000 years ago.


See the post and comments on Climate Etc. regarding climate sensitivity. The following chart (Figure 21) illustrates possible mechanisms for a climate well. Notice the potential for an asymmetry in the potential barriers on the cool versus warm side.

Fig. 21: First Order Climate Well

The following Figure 22 is processed from the 30,000 to 80,000 year range of the Greenland ice core data. The histogram of cooling and warming rates is tabulated (a) across the extremes, (b) on the cool side below average, and (c) on the warm side above average. Note the fast regime for warming on the warm side.

Fig. 22 : Detection of Skewed Well.
On the high side of paleo temperatures, a fast temperature rate change is observed.
A range of these rates is equally likely, indicating a fast transition between warm and extreme.

Compare that to Figure 23 over the last 10,000 years (the Holocene era) in Greenland. The split is that rates appear to accelerate as a region transitions from cold to warm, and vice versa. I think we can't make too much out of this effect, as the recent Holocene-era data in Greenland is very noisy and often appears to jump between temperature extremes in consecutive data points. As described earlier, this is characteristic of white noise (and perhaps even aliasing), since no long-range order is observed. More recent high-resolution research has been published [1], but the raw data is not yet available. The claim is that the climate in Greenland during the Holocene appears highly sensitive to variations in environmental factors, which I find not too surprising.

Fig. 23 : Greenland last 10,000 years.
Note the slope emphasis on the warm side versus the cold side of temperature regime.

[1] Kobashi, T., K. Kawamura, J. P. Severinghaus, J.‐M. Barnola, T. Nakaegawa, B. M. Vinther, S. J. Johnsen, and J. E. Box (2011), High variability of Greenland surface temperature over the past 4000 years estimated from trapped air in an ice core, Geophys. Res. Lett., 38, L21501, doi:10.1029/2011GL049444.




The following figures show an interesting view of the random walk character of the Vostok CO2 and temperature data. A linear, non-sigmoid shape indicates that the sampled data occupies a uniform space between the endpoints. The raw CO2 shows this linearity (Figure 24), while only the log of the temperature has the same linearity (Figure 25). They both have similar correlation coefficients, so it is almost safe to assume that the two statistically track one another with this relation:

$$ ln(T-T_0) = k [CO2] + constant$$
or
$$ T = T_0 + c e^{k [CO2]}$$

This is odd because it is the opposite of the generally agreed upon dependence of the GHG forcing, whereby the log of the atmospheric CO2 concentration generates a linear temperature increase.
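
For reference, a rank histogram of the kind shown in Figure 24 and Figure 25 is straightforward to generate. The sketch below uses gawk (for the built-in asort) and reads one numeric column; which column to read, and whether to take the log of it first, are choices that depend on the data file and are not specified here.

{ v[NR] = $2 }              ## placeholder column; e.g. CO2 or temperature excursion
END {
   n = asort(v)             ## gawk built-in: sort the sampled values in place
   for(i = 1; i <= n; i++)
      print i/n "\t" v[i]   ## normalized rank vs value; a straight line => uniform occupancy
}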

Fig. 24: Rank histogram of CO2 excursions shows extreme linearity over the range, indicating an ergodic mapping of a doubly stochastic random walk. The curl at the endpoints indicates the Ornstein-Uhlenbeck bounds.

Fig. 25: Rank histogram of log temperature excursions shows extreme linearity over the range.
The curl at the endpoints indicates the Ornstein-Uhlenbeck bounds.
Fig. 26: Regular histogram for CO2. Regression line is flat but uniformity
is not as apparent because of noise due to the binning size.

Thursday, October 13, 2011

Vostok Ice Cores

As the partial pressure of CO2 in sea water goes like
$$ c = c_0 * e^{-E/kT}$$
and the climate sensitivity is
$$ T = \alpha * ln(c/c_1) $$
where c is the mean concentration of CO2, it seems that one could estimate a quasi-equilibrium for the planet’s temperature. Even though they look nasty, these two equations reduce to a quadratic, and one real non-negative value of T will drop out if the coefficients are reasonable.
$$ T = \alpha \, ln(c_0/c_1) - \frac{\alpha E}{k T} $$
For CO2 in fresh water, the activation energy is about 0.23 electron volts. From "Global sea–air CO2 flux based on climatological surface ocean pCO2, and seasonal biological and temperature effects" by Taro Takahashi:
The pCO2 in surface ocean waters doubles for every 16 C temperature increase
(d ln pCO2/dT = 0.0423 per deg C).
This gives 0.354 eV.

We don’t know what c1 or c0 are, but we can use estimates of climate sensitivity for α (between 1.5/ln(2) and 4.5/ln(2)).

When solving the quadratic, the two exponential coefficients can be combined as
$$ ln(w)=ln(c_0/c_1)$$
and the quasi-equilibrium temperature is then approximated by this expansion of the quadratic equation:
$$ T = \alpha \, ln(w) - \frac{E}{k \, ln(w)} $$
What the term “w” represents is the ratio of CO2 in bulk to the amount that can act on the climate sensitivity as a GHG.
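
For completeness, the quadratic being expanded follows from multiplying the quasi-equilibrium relation through by T:
$$ T^2 - \alpha \, ln(w) \, T + \frac{\alpha E}{k} = 0 $$
and its larger root
$$ T = \frac{\alpha \, ln(w)}{2} (1 + \sqrt{1 - \frac{4 E}{\alpha k \, ln^2(w)}}) \approx \alpha \, ln(w) - \frac{E}{k \, ln(w)} $$
reduces to the expression above when 4E/(α k ln²(w)) is small.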

As a graphical solution of the quadratic, consider the following figure. The positive feedback of warming is given by the shallow-sloped violet curve, while the climate sensitivity is given by the strongly exponentially increasing curve. Where the two curves intersect, not enough outgassed CO2 is being produced for the asymptotically saturating GHG effect to act on any further, so the positive feedback essentially has "hit a rail" due to the diminishing return of GHG heat retention.


We can use the Vostok ice core data to map out the rail-to-rail variations. The red curves are rails for the temperature response of CO2 outgassing, given +/- 5% of a nominal coefficient, using the activation energy of 0.354 eV. The green curves are rails for climate sensitivity curves for small variations in α.



This may be an interesting way to look at the problem in the absence of CO2 forcing. The points outside of the slanted parallelogram box are possibly hysteresis terms caused by latencies in either CO2 sequestering or heat retention. On the upper rail, the concentration drops below the expected value, while as the temperature drops to the lower rail, the concentration remains high for a while.





The cross-correlation of Vostok CO2 with Temperature:

Temperature : ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/deutnat.txt
CO2 core : ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/co2nat.txt

The CO2 data is in approximately 1500-year intervals while the temperature data is sampled more finely. The data is ordered backwards from the current date, so the small lead that CO2 shows in the above graph is actually a small lag when the direction of time is considered.

  

The top chart shows the direction of the CO2:Temperature movements. There is lots of noise, but a lagged chart shows hints of Lissajous figures, which are somewhat noticeable as CCW rotations for a lag. On a temperature increase, more of the CO2 is low than high, as you can see it occupying the bottom half of the top curve.

The middle chart shows where both CO2 and T are heading in the same direction. The lower half is more sparsely populated because temperature shoots up more sharply than it cools down.

The bottom chart shows where the CO2 and temperature are out of phase. Again T leads CO2, based on the number of points you see on the high edge versus the low edge. The Lissajous CCW rotations are more obvious as well.

Bottom line is that temperature will likely lead CO2, because I can’t think of any paleo events that would spontaneously create 10 to 100 PPM of CO2 quickly, yet temperature forcings likely occur. Once set in motion, the huge adjustment time of CO2 and the positive-feedback outgassing from the oceans will allow it to hit the climate sensitivity rail at the top.

So what is the big deal? We don’t have a historical forcing of CO2 to compare with, yet we have one today that is 100 PPM.

That, people, is a significant event, and whether it is important or not, we can rely on the models to help.




This is what the changes in temperature look like over different intervals.



The changes follow the MaxEnt estimator of a double-sided damped exponential. A 0.2 degree C change per decade (2 degrees C per century) is very rare, as you can see from the cumulative.

The curve that runs through the cumulative distribution function (CDF) data is a maximum entropy estimate. The following constraint generated the double-sided exponential, or Laplace, probability density function (PDF) shown below the cumulative:
$$\int_{I}{|x| p(x)\ dx}=w$$
which when variationally optimized gives
$$p(x)={\beta\over 2}e^{-\beta|x|},\ x\ \in I=(-\infty,\infty)$$
where I fit it to:
$$\beta = 1/0.27$$
which gives a half-width of about +/- 0.27 degrees C.
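
One convenience of the Laplace fit is that the exceedance probability comes out in closed form; for a two-sided threshold x0,
$$ P(|x| \gt x_0) = 2 \int_{x_0}^{\infty} \frac{\beta}{2} e^{-\beta x} dx = e^{-\beta x_0} $$
so the rarity of a change of any given size can be read directly from the fitted β.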

The Berkeley Earth temperature study shows this kind of dispersion in the spatially separated stations.






Another way to look at the Vostok data is as a random up and down walk of temperature changes. These will occasionally reach high and low excursions corresponding to the interglacial extremes. The following is a Monte Carlo simulation of steps corresponding to 0.0004 deg^2/year.




The trend goes as:
$$ \Delta T \sim \sqrt{Dt}$$
Under maximum entropy this retains its shape:


This can be mapped out with the actual data via a Detrended Fluctuation Analysis.
$$ F(L) = [\frac{1}{L}\sum_{j = 1}^L ( Y_j - aj - b)^2]^{\frac{1}{2}} $$
No trend exists in this data, so the a coefficient was set to 0. This essentially takes all pairs of points, similar to an autocorrelation function, but it shows the Fickian spread of the random walk excursions as opposed to the probability of maintaining the same value.


The intervals are a century apart. Clearly it shows random walk behavior, as the square root fit goes through the data until it hits the long-range correlations.

Monday, October 10, 2011

Sea temperature correlation

This is a continuation from the Thermal Diffusion and Missing Heat post.

If one takes several months' worth of data from a location, the sea surface temperature (SST) and subsurface time series look like this over a few days.



After a cross-correlation function is applied from the surface down to 75 meters, one sees an obvious yet small lag from that level as it mixes with the lower layers.



The lag is longer the farther the reach extends downward. If one assumes a typical James Hansen diffusion coefficient of 1.5 cm^2/s, the diffusion is actually very slow over long distances. A Fickian diffusion would only displace 100 meters after 3 years at that rate. So the effect is fairly subtle, and detecting it requires some sophisticated data processing.
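
As a rough check on that displacement figure, using the same square-root-of-Dt scaling as in the earlier random walk discussion (the order-one prefactor depends on convention):
$$ \Delta x \approx \sqrt{D t} = \sqrt{1.5 \,\, \mathsf{cm^2/s} \times 9.5 \times 10^7 \,\, \mathsf{s}} \approx 1.2 \times 10^4 \,\, \mathsf{cm} \approx 120 \,\, \mathsf{m} $$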

Bottom-line is that the surface water is heated and it does mix with the deeper waters. Otherwise, one would not see the obvious thermal mixing as is obtained from the TAO sites.


Some other correlations using R are shown below. These show longer-range correlations. Note the strong correlation oscillations, which indicate that temperature changes happen in unison and in a coherent fashion, given a lag that is less apparent at this scale. Note that the lag is in the opposite direction due to the specification that R uses for defining an ACF (x(t+k)*y(t) instead of x(t)*y(t+k)).
 

Thermal mixing with an effective diffusivity is occurring. The only premise required is that an energy imbalance exists and that it will continue as long as CO2 remains above the historical atmospheric average. This imbalance shows up to a large extent in the ocean waters and is reflected as a temporal lag in global warming.




Notes: 
Have to be careful about missing data in certain sites. Any time you see a discontinuity in a correlation function, bad data (values with -9.999 typically) is usually responsible.

Wednesday, October 5, 2011

Temperature Induced CO2 Release Adds to the Problem

As a variable amount of CO2 gets released by decadal global temperature changes, it makes sense that any excess amount would have to follow the same behavior as excess CO2 due to fossil fuel emissions.

From a previous post (Sensitivity of Global Temperature), I was able to detect the differential CO2 sensitivity to global temperature variations. The correlation of temperature anomaly against d[CO2] is very strong with zero lag and a ratio of about 1 PPM change in CO2 per degree temperature change detected per month.

Now, this does not seem like much of a problem, as naively a 1 degree change over a long time span should only add one PPM during the interval. However, two special considerations are involved here. First, the measure being detected is a differential rate of CO2 production, and we all know that sustained rates can accumulate into a significant quantity of a substance over time. Secondly, the atmospheric CO2 has a significant adjustment time, and the excess isn't immediately reincorporated into sequestering sites. To check this, consider that a slow linear rate of 0.01 degree change per year, when accumulated over 100 years, will lead to a 50 PPM accumulation if the excess CO2 is not removed from the system. This is a simple integration, where f(T(t)) is the integration function:
$$ [CO2] = f_{co_2}(T(t)) = \int^{100}_0 0.01 t\, dt = \frac{1}{2} 0.01 * 100^2 = 50 $$
The sanity check on this is if you consider that a temperature anomaly of 1 degree change held over 100 years would release 100 PPM into the atmosphere. This is simply a result of Henry's Law applied to the ocean. The ocean has a large heat capacity and so will continue outgassing CO2 at a constant partial-pressure rate as long as the temperature has not reached the new thermal equilibrium. (The CO2 doesn't want to stay in an opened Coke can, and it really doesn't want to stay there when it gets warmed up)

So, if we apply the impulse response we derived earlier (Derivation of MaxEnt Diffusion) to this problem, with a characteristic time that matches the IPCC model for Bern CC/TAR, standard:


As another sanity check, the convolution of this with a slow 1 degree change over the course of 100 years will lead to at least a 23 PPM CO2 increase.


Again, this occurs because we are far from any kind of equilibrium, with the ocean releasing the CO2 and the atmosphere retaining what has been released. The slow diffusion into the deep sequestering stores is just too gradual while the biotic carbon cycle is doing just that, cycling the carbon back and forth.

So now we are ready to redo the model of CO2 response to fossil-fuel emissions (Fat-Tail Impulse Response of CO2) with the extra positive feedback term due to temperature changes. This is not too hard as we just need to get temperature data that goes back far enough (the HADCRUT3 series goes back to 1850). So when we do the full combined convolution, we add in the integrated CO2 rate term f(T), which adds in the correction as the earth warms.

$$ [CO2] = FF(t) \otimes R(t) + f_{co_2}(T(t)) \otimes R(t) $$
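
A minimal sketch of how this discrete convolution can be carried out is shown below (awk again). The three-column input layout, the yearly spacing, and the exponential placeholder for R(t) are my own assumptions, standing in for the fat-tail impulse response of the earlier post:

## Input: column 1 = year, column 2 = FF(t) fossil-fuel source, column 3 = f(T(t)) source
{ yr[NR] = $1; FF[NR] = $2; fT[NR] = $3 }
END {
   for(t = 1; t <= NR; t++) {
      co2 = 290                        ## baseline PPM
      for(s = 1; s <= t; s++)          ## (FF + fT) convolved with R, since convolution distributes
         co2 += (FF[s] + fT[s]) * R(t-s)
      print yr[t] "\t" co2
   }
}
function R(tau) { return exp(-tau/100.0) }   ## placeholder kernel, not the Bern/fat-tail response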

When we compute the full convolution, the result looks like the following curve (baseline 290 PPM):


The extra CO2 addition is almost 20 PPM, just as we had predicted from the sanity check. The other interesting data feature is that it nearly recreates the cusp around the year 1940. The previous response curve did not pick that up, because the cusp is entirely caused by the positive-feedback warming during that time period. The effect is not strong but is discernible.


We will continue to watch how this plays out. What is worth looking into is the catastrophic increase of CO2 that will occur as long as the temperature stays elevated and the oceans haven't equilibrated yet.

Friday, September 30, 2011

TOTAL Discovery Data

From Figure 1 of Laherrere's recent discovery data, he suggests a crude oil URR of 2200 GB.

The oil company Total S.A. also has an accounting of yearly discoveries. I overlaid their data with Laherrere's data in the figure below:


Total must do their backdating a bit differently, because their data is consistently above Laherrere's for the majority of the years. As of about 2005, their cumulative is at 2310 GB while Laherrere's is at 1930 GB.

This leads to the following asymptotic graph for cumulative oil according to the Dispersive Discovery model. In this case I assigned a URR of 2800 GB to the model, with a model value of 2450 GB as of 2005. In other words, the Total discovery data may hit an asymptote of 2800 GB, which may be a bit generous:


This is really for comparative purposes as I next plotted what Laherrere's discovery data looks like against the same model.

You can see that Laherrere's data likely won't hit that asymptote.

Discovery data for crude oil is hard to come by. Perhaps Laherrere is using 2P probabilities and Total is applying possible reserve growth so that it is >2P? Or perhaps Total is using barrels of oil equivalent (BOE) to inflate the numbers (which Shell oil does)? In the greater scheme of things, does this really matter?

The following chart is the Shock Model applied to the Total discovery data, whereby I tried to follow the historical crude oil (not All Liquids) production rates by varying the extraction rate until the year 2000, then I kept the extraction rate constant at about 3.1% of reserves. This is lower than the currently accepted 4% to 5% extraction rate from reserves.


If Total does use BOE maybe this should actually fit an All Liquids curve, in which case the extraction rates would need to get increased to match the higher levels of All Liquids production.

Bottom line is that the peak plateau might extend for a couple of years and we will have a fatter decline tail if we believe the Total numbers. If it is an All Liquids discovery model, then it is a wash. As if everyone didn't know this by now, peak oil is not about the cumulative, it is about the extractive flow rates, and this is a good example of that.

In general, the URR is incrementally getting pushed up with time. Laherrere had used 2000 GB for a crude oil URR for some time (see this curve from 2005) and now likely because of the deep water oil it is at 2200 GB.

As for going through the trouble of evaluating the Gulf Of Mexico data, that is just noise on the overall curve IMO. It's getting to the point that we have enough historical data that global predictions for crude oil are really starting to stabilize. And the Dispersive Discovery model will anticipate any future discoveries. The longer we wait, the closer all the estimates will start to converge, and any new data will have little effect on the projected asymptotic cumulative.

The following is a curve that takes the Total S.A. discovery data and extrapolates the future discoveries with a dispersive discovery model. The final discovery URR is 2700 billion barrels, which is quite a bit higher than the one Laherrere plots. It is higher because I am making the assumption that Total S.A. is including backdated NGPL and other liquids along with the crude oil, which means I had to fit against production data that also includes these liquids.



To model the perturbations in production levels, which is necessary to accumulate the reserves properly, I used the Oil Shock Model. In the inset, you can see the changes in extraction rate that occurred over the years. The extraction rate is essentially the same as the Production/Reserve ratio. Notice that the extraction rate was steady until the 1960's, at which point it ramped up. It started to level off and drop during the 1970's oil crisis and didn't really start to rise again until the 1990's. I am extrapolating the extraction rate from today to match the peak extraction rate of the 1960's by the year 2050.
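
The reserve accounting behind that extraction-rate statement can be sketched in a few lines (awk). The two-column input and the extraction-rate schedule e(y) are placeholders for the backdated Total S.A. discovery series and the fitted rates, and the fallow/build/maturation lags of the full Oil Shock Model are omitted:

## Input: column 1 = year, column 2 = backdated yearly discovery (GB)
{ yr[NR] = $1; disc[NR] = $2 }
END {
   reserve = 0
   for(t = 1; t <= NR; t++) {
      reserve += disc[t]               ## discoveries add to reserves
      prod = e(yr[t]) * reserve        ## production = extraction rate x reserves (P/R ratio)
      reserve -= prod
      print yr[t] "\t" prod "\t" reserve
   }
}
function e(y) { return (y < 2000) ? 0.025 : 0.031 }   ## placeholder schedule; ~3.1% after 2000 per the text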

This is largely a descriptive working model: it essentially takes the data that Total S.A. is providing and reflects it against what we are seeing in the production numbers. The current plateau could be extended if we try to extract even faster (as in Rockman's words, "PO is about oil flow rate") or if we start including other types of fuels in the mix. The latter will happen if the EIA and IEA add biofuels and other sources to the yearly production.

The bottom line is that it is hard to come up with any scenario, based on the data that Total and IHS supply, that can extend this plateau the way Total suggests it will, with a peak out to 2020. That is what is maddening about this whole business, and you wonder why drivel such as what Yergin continues to pump out gets published.