Sunday, April 11, 2010

Sea-level rise


After the hockey-stick battle, it seems that the issue of sea-level rise has all the odds to become the next matter for a constructive and polite debate. Nature Reports has two commentaries on projections of future sea-level rise, describing how different authors (Rahmstorf on one side, Lowe and Gregory on the other) cook their rice with different recipes.


It seems to me that the two are not quite in agreement. These two commentaries illustrate that much is still unknown about future sea-level rise and that it is not politically incorrect to think that projections of 2 meters of sea-level rise by 2100 are unlikely. On this matter there is no consensus.

Future global sea-level rise will be caused mainly by three contributions: thermal expansion of the ocean, in-situ melting of glaciers and ice sheets, and dynamical flow of ice into the ocean, where it subsequently melts. Global climate models can in principle simulate the first contribution, and regional climate models could estimate the second, but there are currently no reliable models to estimate the third. Rather than waiting for better models, some researchers have taken alternative routes to estimate future global sea-level rise by semi-empirical methods. The much-cited paper by Pfeffer et al. (2008) is based on past analogues and on dynamical constraints on how much ice could disappear from Greenland and West Antarctica, and concludes that a sea-level rise of 2 meters by 2100 would only be possible under extremely accelerated conditions.
Vermeer and Rahmstorf (2009) and Grinsted et al. (2009) take a different approach, assuming that a statistical relationship exists between the rate of sea-level rise and temperature (VR) or radiative forcing (G), and that this relationship can be extrapolated into the future using the temperature rise simulated by global models. Lowe and Gregory argue that the assumptions on which this empirical approach is based cannot really be justified, and that the relationship between global sea-level rise and temperature is much more complex than a linear model can describe.
I am not an expert on global sea-level rise, but I have some experience with statistical modelling in the climate context, and here I would like to explain why I find the method of Vermeer and Rahmstorf unreliable.

VR propose a statistical model that links the rate of change of sea level H to two variables: the global temperature T and the rate of change of global temperature:

dH/dt = a (T - T0) + b dT/dt        (1)

This model is calibrated with observations from the last 120 years, i.e. the values of the parameters a, b and T0 are those that best fit the observations. For this, VR use not the annual mean values of T or H but 30-year means of T, dT/dt and dH/dt. This means that the last 120 years provide a sample with just a few independent data points, something between 4 and 8.
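To make this calibration step concrete, here is a minimal sketch in Python. The annual series are invented stand-ins for the real observations and the variable names are mine, not VR's; only the structure of the fit and the effect of the smoothing matter:

import numpy as np

# Invented annual series standing in for the real observations (1890-2009);
# only the structure of the calculation matters here, not the numbers
years = np.arange(1890, 2010)
T = 0.006 * (years - 1890) + 0.1 * np.random.randn(years.size)  # temperature anomaly (K)
H = 0.17 * (years - 1890) + 0.5 * np.random.randn(years.size)   # sea level (cm)

# 30-year running means, as in VR: only about 120/30 = 4 truly independent values remain
w = 30
smooth = lambda x: np.convolve(x, np.ones(w) / w, mode='valid')
Ts, Hs = smooth(T), smooth(H)
dTdt, dHdt = np.gradient(Ts), np.gradient(Hs)   # rates of change per year

# Least-squares fit of dH/dt = a*(T - T0) + b*dT/dt; the intercept absorbs -a*T0,
# so three parameters (a, b, T0) are estimated from the smoothed series
X = np.column_stack([Ts, dTdt, np.ones_like(Ts)])
(a, b, c), *_ = np.linalg.lstsq(X, dHdt, rcond=None)
T0 = -c / a
print(a, b, T0)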

This analysis is basically a regression analysis with at most 8 independent data points and 3 parameters to be fitted. This is surely a very small sample size, and VR are aware of it. A possible way around this problem is to use a long simulation with a climate model as a virtual reality in which to test whether or not the statistical model provides reasonable results. A climate model is not a perfect representation of reality, but it can be regarded as a parallel reality, complex enough to pose a sufficient test for the statistical method: if the method fails in this parallel reality, it will probably fail in the real world. VR took data from our 1000-year-long simulation with ECHO-G (yes, we provided those data :-) and could thus test their statistical model. Apparently the test was successful, and they were able to estimate values of the parameters a and b; b was estimated to be about 2.5 cm/K (+- 0.5 cm/K) for the ECHO-G data (see figure). This value seems physically reasonable: when the global temperature rises more rapidly (or more slowly), sea level should also rise more rapidly (or more slowly).
Test of the statistical model (1) in the virtual reality of a climate-model simulation. The grey line shows the reconstruction using the original model of Rahmstorf (2007) instead of model (1). The upper panel displays the rate of sea-level change; the lower panel displays sea-level height.

Now the statistical model has to be applied to the real temperature and sea-level data. The first surprise is that the estimation of b with the real data of the last 120 years yields a negative number (!). This is physically quite strange, as it would indicate that sea level rises less rapidly when temperature rises more rapidly. VR (apparently) solve this problem by introducing a time lag into the statistical model, which now becomes:

dH/dt = a (T(t - tau) - T0) + b dT(t - tau)/dt        (2)

where tau is the lag, estimated to be about 13 years.
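In terms of the sketch above, the only change is that the temperature regressors are shifted by tau years before the fit; a hypothetical continuation, reusing the smoothed series Ts, dTdt and dHdt:

tau = 13   # lag in years, roughly the value reported by VR

# dH/dt at time t is now regressed on T and dT/dt at time t - tau,
# which costs the first tau points of the (already short) series
X_lag = np.column_stack([Ts[:-tau], dTdt[:-tau], np.ones(Ts.size - tau)])
(a, b, c), *_ = np.linalg.lstsq(X_lag, dHdt[tau:], rcond=None)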

There is some physical reasoning in the paper to justify this time lag, assuming that ocean heat needs some time to reach the polar regions, where it is then used up to melt ice or to accelerate the ice flow in Greenland's outlet glaciers.

I have two objections here. The first is that if the parameter b now describes a different physical process (not thermal expansion but accelerated ice flow, or a mixture of both), why should its value be similar? The amount of sea-level rise produced by 1 W/m2 used to heat sea water would be very different from the amount produced when the same 1 W/m2 is used to accelerate a Greenland glacier. My second, more serious, concern is that the modified statistical model (2) is no longer the original statistical model (1), and therefore the test of model (1) in the parallel reality of the climate model cannot be carried over to support model (2). We are left with a statistical model with 4 tunable parameters to be estimated from a sample of about 8 independent data points. I think this is just not possible in practice.

Actually, whereas the VR paper gives uncertainty ranges for the parameters a and b in equation (1), no such ranges are indicated for these parameters in equation (2). This means that, for a statistical method meant to estimate future sea-level rise, we have no estimate of the uncertainties of its parameters, a very weird situation in my opinion.

We can see why the estimation of b is impossible from the evidence that VR present in their paper. To estimate the parameter b, VR calculate the correlation between the sea-level rise reconstructed by their model and the observed sea-level rise for a large number of possible values of b, and pick the value that yields the highest correlation. I find this method of estimation quite curious (as far as I know it is not a standard method), and it does not allow for an easy estimation of the uncertainty in the parameter b. Actually, over the range of values of b considered by VR, the correlation between reconstruction and observations ranges between 0.88 and 0.99, all values calculated from time series with at most 8 independent samples. For instance, the correlation obtained for b = -4.5 cm/K and tau = 0 years is r = 0.98, whereas the correlation obtained for b = 4.5 cm/K and tau = 13 years is r = 0.97. All values of b between -4.5 and 4.5 display correlations between r = 0.98 and r = 0.97. VR claim that they can identify a maximum in the correlation along this range of values of b and thus pin down the value of b (see figure below).
From their own analysis my conclusion is that it is not possible to estimate b with such a small sample size. The uncertainty in the estimated correlation of two series of sample size 8, when the true correlation is 0.98, is already larger than the range shown in this figure, and therefore b could attain any value between -5 cm/K and 5 cm/K.
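This is easy to check with the standard Fisher z-transform, which gives the sampling uncertainty of a correlation estimated from n independent points. A minimal sketch, where n = 8 is the effective sample size argued for above, not a number taken from the paper:

import numpy as np

def corr_ci(r, n, z=1.96):
    # Approximate 95% confidence interval for a correlation r estimated from
    # n independent points, via the Fisher z-transform (s.e. = 1/sqrt(n-3))
    zr = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(zr - z * se), np.tanh(zr + z * se)

print(corr_ci(0.98, 8))   # roughly (0.89, 1.00)
print(corr_ci(0.97, 8))   # roughly (0.84, 0.99): statistically indistinguishable from 0.98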

Correlation between the reconstructed and observed rate of sea-level change dH/dt, as a function of the two parameters in model (2). The maximum of the correlation is used to pin down the values of b and of the time lag tau. The four panels show different options for binning the samples.

51 comments:

richardtol said...

Thanks Eduardo

There are more problems.

Why is the rate of sea-level rise a function of the level of temperature? If this were an error-correction model (which would be appropriate), the level of the sea would also be on the right-hand side.

Before satellites, it was quite hard to measure the level of the sea, as you could only do so relative to land, and you do not really know what the land is doing vertically. This problem gets worse the further back you go into the past. Ditto for temperature. So you need to correct for measurement error, errors in variables, and heteroskedasticity. This would blow away any confidence you might have had in your estimates.

AnonyMoose said...

So what happens when data after 2000 is run through the rice maker?

wflamme said...

Eduardo,

your objection pretty much nails it down, I think.

wflamme said...

"Why is the rate of sea level rise a function of the level of temperature?"

Richard, the basic idea is that there exists a global mean temperature that would not cause net melting, and that the power available for melting is about proportional to the deviation from that temperature.

P Gosselin said...

I have yet to find a single scientist who is willing to put money down on 6 mm / year for the next 10 years. Not one.
Models are one thing; reality seems not to be cooperating. Still waiting for signs of acceleration.

richardtol said...

@wflamme
That's a sensible specification, but then the equation would be different.

If the equilibrium sea level depends on the equilibrium temperature, both levels should show up in the equation.

As formulated by Vermeer and Rahmstorf (I double checked, Eduardo did not make a typo), there is no equilibrium relationship between sea level and temperature.

The same mistake is made in Rahmstorf's Science paper, by the way.

Werner Krauss said...

okay, Eduardo, I am considering buying your house on Mallorca. I am not afraid of rising temperatures, but I don't like wet feet inside the house. But after reading your post - deal done!

wflamme said...

Richard,

I think the VR2009 paper sums up the modelling assumptions rather well - yes, it's a coarse model.

But I don't see the point of your objection. The left-hand side of the equation is not sea level but the rate of change of sea level, and thus proportional to the rate of change of volume.
This rate is assumed to equal the sum of:

a) the melting rate, assumed to be proportional to the deviation from the no-melt temperature, thus ~ (T - T0)

b) the short-term thermal expansion rate of the well-mixed ocean layer, assumed to be proportional to
(vol(T(t+dt)) - vol(T(t)))/dt,
thus ~ dT/dt

Leigh Jackson said...

IPCC AR4 projected a sea-level rise of 0.26-0.59 metres by the 2090s for its highest-emissions scenario, excluding ice-melt effects, saying that "understanding of these effects is too limited ... to provide a best estimate or an upper bound for sea level rise" in the twenty-first century.

Regarding the two papers in Nature Reports and your own remarks, Eduardo, would you go so far as to say that the science now suggests an upper limit for sea-level rise of around 2 m by 2100, with even that now appearing quite unlikely?

eduardo said...

@ 9
Leigh,

my opinion is that the 2-meter limit for 2100 is considered unlikely by everyone. The differences are rather about whether the most likely value is 50, 80 or 150 cm.
My interpretation of this discussion is that empirical methods cannot give us new insights (although, to be fair, I haven't looked in detail into Grinsted et al.). Estimates of ice-sheet stability will, for now, have to be based on expert opinion.
The present rate of global sea-level rise from satellites is about 3 mm/year. At the beginning of the 20th century, estimates based on a few tide-gauge records show something like 1.2 mm/year. To reach 80 cm or more by 2100, this rate would have to accelerate soon to 10 mm/year. It is really difficult to see an acceleration in the satellite data, which are the only global data. Actually, the 20 years of satellite data indicate rather a tiny deceleration of sea-level rise.

richardtol said...

@wflamme
In the model, dS = a(T-T0).

Physically, that means that, if the temperature stabilizes at any level that is above the initial equilibrium temperature, sea level will rise forever. The ocean will swallow the sun.

Statistically, that means that they have an omitted variable (S) on the right hand side that is correlated with their parameter of interest (a). Therefore, the estimators are inefficient (bad enough in a small sample) and their estimate of the effect of temperature on sea level rise is biased.

Leigh Jackson said...

eduardo 10
If the science equally supports a 50 cm rise and a 150 cm rise as the upper limit, there is much we still need to learn, but we have advanced beyond IPCC AR4.

eduardo said...

I see that some of our authors changed the title from 'sea-level rice' to 'sea-level rise'. Thanks, but the original title was a pun on the text that follows :-)

eduardo said...

@ 12
Leigh,

I have always found the expression 'science tells us...' dangerous, in this case even more so. I think science cannot say anything definitive here other than that we don't know much. IPCC AR4 said that. In my opinion nothing has changed, but I may be too strict. Actually, I think the IPCC reports should only include what I would call 'established science': papers published at least 3 years before the start of the actual writing, or, at the very least, the most recent papers treated with explicit caution.

richardtol said...

@wflamme
The solution is in the appendix of Vermeer and Rahmstorf. The equation in the paper is not the one that is used (sic).

There is a second regression with sea level. This is estimated conditional on the regression shown (which is still biased). Conditional estimation introduces more statistical problems, but at least it's physically correct.

Georg said...

Thank you Eduardo for this analysis.
One question, one remark.

1) You see a contradiction between the degrees of freedom and the number of parameters to be fitted. I was just wondering whether the tau = 13 yr is not itself controlled by the number of degrees of freedom. It's a sort of typical response time of the system (so 8*13 is about the number of available years). And if so, does that change anything in your overfitting argument?

2)"always found the expression 'science tells us..' dangerous. In this case even more. I think the science cannot say anything definitive in this case other than we dont know much. IPCC 4 said that. "

That is way too pessimistic for my taste. I also wouldn't agree that the IPCC says anything like "we don't know much" concerning sea level.

Unknown said...

Thanks, Eduardo, for posting this. The approach shown in the VR paper is another nice example of inadequate basic statistical skills among some leading climate researchers, which jeopardizes the credibility of parts of climate research.

As Richard pointed out, the structure of the equation makes it clear that the model can only be a crude approximation, otherwise the Earth "could swallow the Sun". (Nice expression, thanks Richard!)

If this is so, how can you fit an equation that can only hold very narrowly around T0? There is no test of how far from T0 the measurement points (H,T) can be taken for fitting the parameters (a,b). The shorter the time series, the larger the statistical errors. The longer the time series (ECHO-G for 1000 years?), the larger the systematic errors from overstretching the model.

The parameter 'b' is an interesting species, by the way. Whereas parameter 'a' represents some sort of glacier responsiveness to rising temperature levels, a faster increase in global average temperature brings the model from "climate" to "weather" time scales. In my view, it is fully sensible to find a negative 'b', since a faster increase in temperature would not necessarily accelerate ice dynamics, due to ice and ocean inertia. A negative 'b' would hence be a correction for an exaggerated 'a', and could thus nicely cover up a too-alarmist glacier response to rising temperatures.

I am afraid that I can't take this type of "physics" seriously! However, there is a response in my blood pressure...

Georg said...

@Bjoern
"If this is so, how can you fit an equation that can only hold very narrowly around T0 ? There is no test how far from T0 measurement points (H,T) can be taken for fitting parameters (a,b). The shorter the time series, the larger statistical errors."
Not really a problem (at least compared to the overfitting) as far as I can see. The statistical model was tested on a millenium simulation with quite some variability.
One could allways argue that any extrapolation is invalid since there are unknwon and huge threshholds in the system.

richardtol said...

@wflamme, bjorn
I'm taking back what I took back.

There is a second equation in the appendix, but that has S0 rather than S.

The Rahmstorf model is plain nonsense from a physical perspective.

My statistical objections are unaltered.

richardtol said...

@Georg
Any model is an approximation. That is no excuse for nonsense.

Georg said...

@Richard
Bjoern said (at least as far as I understand it) that the model is nonsense because it extrapolates away from T0. I can't see at least this problem (the target is the year 2100, with something between 50-150 cm). This is not completely out of range of the direct observations (30 cm over 100-something years), and it was tested on longer simulated time series.
So this particular point doesn't seem a problem to me. But there is apparently another problem (Edu's posting).

richardtol said...

@Georg
A model as misspecified as this one has problems not only with extrapolation but also with estimation.

In this case, a physically and statistically superior specification is attainable at minimal cost. There is no excuse. The referees and editors should be deeply ashamed.

Unknown said...

Come on Georg, you are a physicist if I am not mistaken.

The linear model used by VR is the first-order Taylor expansion of a probably more complex, non-linear response. There is no issue at all with using a crude (linear) approximation; one only needs some professional caution when extrapolating and interpreting the model.

There is no sophistication whatsoever in this type of knowledge. I would not have passed my Vordiplom (~bachelor's) in physics without knowing this. And yet I find people such as Mann and now Rahmstorf with a "Prof." in front of their name who seem to have forgotten their elementary first-year mathematical training. It is a real shock to me. A shame for science as a whole and for the peer-review process. End of rant...

Georg Hoffmann said...

@Bjoern
"of a probably more complex and non-linear response"

I guess one needs a bit more than "probably" here.
Again, I think Eduardo has a serious point here, but I cannot see yours.
Why shouldn't sea-level rise be a relatively linear function of temperature and temperature change (at least within a certain temperature range)?
The extrapolation in itself is not the problem (and I thought at least this was your point; if that is not true, please specify).

wflamme said...

Eduardo,

regarding the paper's SI, how could b become negative?

Thermal SSL-expansion is so yesterday.

Unknown said...

@Georg #24

I fully agree that in a narrow temperature range around a hypothetical 'T0', sea-level rise as a function of temperature or time or whatever must be linear.

Why is the VR model unreliable for deriving physical conclusions? Richard has already said why; it is very simple, and here are my top ten reasons:

The amount of ice that can melt is finite. There must hence be a leveling off in the increase of H when T >> T0. (The word 'probably' was a polite way of saying 'dead sure'.) What exactly is "T >> T0"? As I said, nobody knows, and there is no test in the method applied by VR to find the (systematic) extrapolation error when the response is less than linear. Moreover, the linear approximation is an upper bound on the expected response. Furthermore, the term dT/dt does not seem to make much physical sense and could again lead to an overstating of 'a'.

Discussing a linear model and its limitations is a typical first-year exercise, so I wonder why we need to spend so many words here.

BTW, this discussion is very analogous to the problem I have with the MBH method of deriving temperatures from tree rings. To this day, Mann still seems not to have grasped that his linear, autoregressive method is structurally unable to identify temperatures outside the temperature range he used for fitting the parameters of his model.

I would expect any professional scientist to know, first, how to distinguish between nature, a model, and the mathematical equations describing it, and then to be very careful when using equations for testing and extrapolating models.

Leigh Jackson said...

eduardo 14
My perspective as a layman is that if the scientific consensus is that the debate is about whether the sea will rise, at most, 50 cm more or less than 1 metre over the next nine decades, then that is more helpful to know than IPCC AR4's blank space on the question.

richardtol said...

@Georg
Suppose
S = f(T)
then the first-order Taylor expansion is
S = S0 + a (T - T0)
and taking the first difference gives
dS = a dT

Rahmstorf's dS = a T is just wrong, even as an approximation.

Georg said...

@Richard

I haven't understood your argument until now. It seems trivial to me, but you may correct me.

Rahmstorf's formula is actually

dS = a dT

with dT = T - T0, since these are small temperature changes around the expansion point.

Georg said...

@Bjoern

"The amount of ice that can melt is finite. There must hence be a leveling off in the increase in H when T >> T0. (The word 'probably' was a polite way to say 'dead sure'.) What exactly now is "T >> T0"? As I said, nobody knows and there is no test in the method applied by VR to find the (systematic) extrapolation error when the response is less than linear."

Actually yes, this is shown in the paper (Figure 3 which is the figure above). The model start to fail at a time horizon of about 500 years (and the corresponding T-T0 variations).
The improvement due to dT/dt seems quite impressing to me and certainly worth to explore.

@Eduardo
Is this right that only equation 1 is used in VR09?
Eq 2 is just a trial to give some meaning to the negative b, right?

eduardo said...

@ 25
Wflamme,
b turns negative when the observations are used to fit statistical model (1). They argue that this is an artifact, because dH/dt is actually responding to a lagged T and dT/dt. If one uses what they claim is the 'correct' lag, then b is positive.

My concern is that there is no way of determining the lag, and therefore b, from the observations. Their goodness of fit, the correlation with the observations, varies only in the second decimal place when the lag is changed.


@ 30
Björn,
my reading of the paper is that they use eq. (2) for the sea-level projections. The lag, however, would not make a big difference in the long term: just wait 12 years longer to get the same (T-T0) and dT/dt. The value of b would make a larger difference: b = 2.5 or b = 4.5; I guess they took b = 4.5 cm/K.


@ all
I also have problems interpreting both models physically, even more so model (2) with the spooky lag. I did not get into that point, though.

My understanding is that dH/dt is mostly controlled by the net heat flux: if the net heat flux into the ocean is positive, the water warms and expands. If the expansion coefficient were independent of temperature, salinity and pressure, this would be an exact relationship. If the net flux is used to melt land ice, the ocean mass also increases. However, the amount of sea-level rise created by a heat flux into the ocean is very different from the sea-level rise caused by melting ice with the same amount of heat. This means that the partition of the net heat flux (ocean or land ice) can cause large changes in the parameters. And this partition will certainly change from the present into the future. This is one of the points Lowe and Gregory made.

My interpretation of the statistical models is that dH/dt represents the tendency of sea level towards an equilibrium with a putative temperature T0. At this temperature, sea level would not change; if T is constant but different from T0, ice keeps melting. The second term, dT/dt, would be a proxy for the net heat flux into the ocean (positive heat flux -> water warms).
I think the two terms partially overlap physically: the restoring movement of sea level towards equilibrium with T0 must itself be driven by a net heat flux.

All in all, I would perhaps consider similar models to play with, always conditional on a large sample size that would let me really test different models on an independent data set. In no way would I consider these results as anything like a serious projection.

Georg Hoffmann said...

@Eduardo
"my reading of the paper is that they use eq (2) for the sea-level projections."

I don't think that's correct. Their paragraph "Projections of Future Sea Level" (page 21530) starts like this:

"After Eq. 2 (that is, your eq. 1) has passed a 3-fold test with simulated and observed sea-level data, we will apply it to the 21st century by using ..."

I thought your critique of introducing the tau was that it adds another parameter to a statistical model with too few data (and degrees of freedom, respectively).

If they in fact used eq. 1 (I mean your 1 and their 2), does that change the statistical reasoning and your critique?

eduardo said...

@ 32
Georg,

I think you can answer that yourself: which value of b would you use? The one estimated from the climate-model data would not include the contribution from ice melting and ice-sheet instability, which is what they are trying to estimate. So it must be the one fitted from observations. But which one? You have to choose a pair (lag, b), so we are back in the previous situation. I think they chose b = 4.5 cm/K because the fit is r = 0.987. A value of b = 0 and lag 0 would yield r = 0.95; a value of b = -4.5 cm/K and lag 1 would yield r = 0.99. VR claim that the determination of this maximum is robust using at most 8 samples. However, in the main paper they also say that the value of b and the lag cannot be determined simultaneously. So the value of b is just arbitrary.

richardtol said...

@Georg
dS = S(t) - S(t-1)

It is the discrete-time equivalent of dS/dt.

dS is something entirely different from S(t) - S(0).

Georg Hoffmann said...

@Richard

I think Eduardo's interpretation is correct.
T-T0 is in fact a small term, and S does not depend on absolute temperatures but on small deviations from T0.
But if you think that Science and PNAS editors and reviewers should be "ashamed", why not write a commentary on your Taylor-expansion refutation?

Georg Hoffmann said...

@Eduardo

I wouldn't change the statistical model, and I would take the formulation without any lag. The model is already at the border of overfitting, so it's certainly safer not to introduce more parameters.

I checked their prediction program (called sealevel_predict.m in their supplementary material). This is what they did:

" % This is the aT + bdT/dt expression relative to 1951-1980, where lambda = b/a:
reftemp = mean(magicc_temp(116:156,model,sc,cc))- mean(gisstemp(1:41)); % this aligns all model temps for 1880-1920
magicc_dtemp = magicc_temp(:,model,sc,cc)-reftemp + lambda * rateofipcc(:);
"

So I am pretty sure they in fact took your eq. 1 for the prediction (and therefore a negative b).

One question, two remarks:

1) Is there a statistical problem when using just three parameters? Obviously they did some testing on that, but I have no experience of what that actually means.

2) My understanding/reading of the section with the "new b" and a lag tau has changed a bit. It seems to me that they are just looking for a possible explanation for the b value, without claiming better statistics or a clearly superior model. It's just fishing for a reasonable explanation for the negative b. Within the framework of such extremely simplified models that still seems acceptable to me. However, it should be clear that into each of the parameters a, b, tau hundreds of physical processes are squeezed, with many known/unknown thresholds.

3) I agree with your conclusion "In no way would I consider these results as anything like a serious projection", but I am not sure Rahmstorf would disagree entirely with this. If this paper were the only information we had pointing to a sea-level rise of 50-150 cm, I am pretty sure nobody would care about it. But it is not.

Leigh Jackson said...

Georg 36
"If this paper is the only information we have pointing to sea level rise between 50-150cm I am pretty sure nobody would care about. But it is not."

Could you give some examples, please?

richardtol said...

@Georg H
Rahmstorf applies his model to a period of 1100 years. These approximations matter.

As it stands, my paper would have a simple message: "Rahmstorf is a fool." The paper would be rejected because this is not a new finding.

I'll see whether I can be more constructive. As it stands, I have identified a number of errors that one would not tolerate in an undergrad paper. I have yet to find a statistically sound estimate of the impact of temperature on sea level.

eduardo said...

@ 36

Georg,

would you agree with the following calculation?
With the parameters a = 0.56 cm/(yr K) and b = -4.5 cm/K, we just need to let the global temperature increase exponentially from the equilibrium level T0 with a time constant of 0.12 yr^-1, about a tripling every 8 years. At that rate the temperature term a(T-T0) exactly cancels the derivative term b dT/dt, so that the rate of sea-level change is zero.
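A quick numerical check of this algebra, as a sketch with the parameter values quoted above:

import numpy as np

a, b = 0.56, -4.5            # cm/(yr K) and cm/K, as discussed above
k = a / abs(b)               # ~0.124 per year: the rate at which the two terms cancel

t = np.linspace(0.0, 100.0, 1001)   # years
T = np.exp(k * t)                   # T - T0, roughly tripling every 8-9 years
dTdt = k * T                        # its time derivative

dHdt = a * T + b * dTdt             # model (1): a*T + b*k*T = (a - a)*T = 0
print(np.max(np.abs(dHdt)))         # effectively zero (floating-point noise only)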

Georg Hoffmann said...

@Eduardo

Yes, of course, I agree. 0.12 °C per year is many times the expected trend and apparently far away from the region where the model does something reasonable.
You have more experience with this kind of statistical model, but isn't it quite normal that you can find parameter intervals where the model produces crap?

@Leigh Jackson
Besides the papers published in the last 4 years or so (you can easily google them), for me the key argument is the paleo data.

The Eemian had a sea level about 4-6 meters higher; the change was very fast and probably fuelled by Greenland and West Antarctica.
The deglaciation from the last glacial maximum involved sea-level changes of the order of >1 meter per century. I cannot see why this can't happen today with Greenland. The LGM was about 5 °C cooler than today; a middle-of-the-road scenario gives about +2-3 °C by the end of the century.

Furthermore, Greenland sits, even under the pre-industrial climate, in a place where it shouldn't be. This means the GIS is maintained by its own height and albedo in a place where it wouldn't grow back once it's gone. It is therefore unstable.

wflamme said...

A time lag for thermal expansion simply doesn't make sense physically, even for a coarse model. The same applies to a negative value for thermal expansion. AFAIR, the second term should be around 1.5%/K * thickness_mixed_layer.

One should fit that and then focus on improving the first term (if at all), IMO.

corinna said...

#40 Georg

The orbital forcing conditions were very different during the Eemian: the summer radiation at 65 N (which is the key variable for an ice age) was about 60-70 W/m2 higher than it is today. This is the kind of signal needed to melt a significant amount of land ice on Greenland. The anthropogenic warming does not come anywhere near such a regional signal.

Anonymous said...

Very entertaining thread, indeed!
For me, working in engineering, it is quite obvious that you don't fit data to a non-physical function if you are planning to extrapolate.
If you are going to extrapolate, you are infinitely better off with a physical model. If that is not possible, the second choice would be to fit the data to a model that at least behaves nicely. Your model must give reasonable results for all reasonable inputs. Never use a model with such unphysical behaviour in the extremes.

But hey, it's climate science.

Jonas B1

gregor said...

FYI: R. Pielke Sr. is also criticizing the VR 2009 paper: http://pielkeclimatesci.wordpress.com/2010/04/13/continued-misconception-of-the-concept-of-heating-in-the-pipeline-in-the-paper-vermeera-and-rahmstorf-2009-titled-global-sea-level-linked-to-global-temperature/

eduardo said...

Thanks, Georg, I hadn't seen it.

In theory a lag could exist between the flux and dH/dt, since thermal expansion also depends on pressure and T, and temperature changes would need some time to spread into the deep ocean. In the climate model, however, the relationship between the flux and dH/dt is indeed simultaneous.

wflamme said...

Eduardo,

IMO it hardly matters how the heat energy is distributed, if one essentially assumes an expansion proportional to temperature ... that is, whether, say, the top 100 m warm by 2 degrees or the top 200 m by 1 degree should make no significant difference.

I had a look here:
http://ocp.ldeo.columbia.edu/climatekidscorner/whale_dir.shtml

The temperature profile on the left and the density profile on the right show no obvious anomalies; the densest, deepest water is also the coldest. That is, no matter which water is warmed, the net effect would always be an expansion, because there is no water sitting below its density-anomaly point, for which warming would mean contraction.

eduardo said...

@ 46
'IMO it hardly matters how the heat energy is distributed, if one essentially assumes an expansion proportional to temperature ...'

In both cases the water would expand, but the magnitude may differ, since the expansion coefficient depends on temperature, pressure and salinity. For instance, at 20 C it is 5 times larger than at 0 C. So, in theory, it makes a small difference whether the whole heat input remains concentrated in the upper warm layers or penetrates into the colder layers.

wflamme said...

Eduardo (#46)

OK, but first of all we are talking about a lag: the heat is introduced at the surface and will in any case cause an expansion there. For us to observe no net expansion at first, water somewhere must (through added heat) contract correspondingly. Which water is that supposed to be?
The expansion can be smaller if mostly cold water is warmed, but it cannot be zero, let alone negative.

Second, even if such anomalous water that contracts on warming were found, how does the heat get from the surface to precisely this water without the surface water warming correspondingly first?

Third, what further magic ensures that the two effects, expansion and contraction, always cancel each other so exactly that at first one believes one observes no change at all?

Fourth, the expression dT(t-tau)/dt represents nothing other than a dead-time element (a proportional element with a delayed response). After a temperature step one therefore observes nothing for a while, and then the corresponding volume change.
The anomalous water thus suddenly gives up its heat again and transfers it back to the upper layers, where it causes the desired (but delayed) expansion. Even more magic.

Fifth, this back-transfer of energy then takes place in such a way that the warmed upper layers somehow do not warm after all, since after the temperature step the surface temperature remains unchanged.
So a great deal more magic.

IMO a broken approach should be scrapped, not repaired to death.

Unknown said...

@Georg #30

"The improvement due to dT/dt seems quite impressing to me and certainly worth to explore."

Try fitting the data to one of these models, which are just as arbitrary as the VR one:

dH/dt = a log(T/T0) + b arctan(dT/dt)

or

dH/dt = a (T - T0)^2 + b dT/dt

You may find an even better fit, but would it tell you anything about the quality of the model?

A model does not necessarily get better because 5 data points can be fitted well to it.
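For what it is worth, here is a sketch of this point in Python, using scipy's curve_fit and five invented 'observations'; both the VR form and the arbitrary alternative above fit a handful of smoothed, nearly collinear points almost equally well:

import numpy as np
from scipy.optimize import curve_fit

# Five invented smoothed "observations": T (K), dT/dt (K/yr), dH/dt (cm/yr)
T    = np.array([0.05, 0.15, 0.30, 0.45, 0.60])
dTdt = np.array([0.002, 0.004, 0.006, 0.009, 0.012])
dHdt = np.array([0.10, 0.14, 0.20, 0.27, 0.33])

def vr(X, a, b, T0):      # the VR form: a*(T - T0) + b*dT/dt
    return a * (X[0] - T0) + b * X[1]

def exotic(X, a, b, T0):  # one of the equally arbitrary alternatives above
    return a * np.log(X[0] / T0) + b * np.arctan(X[1])

for model in (vr, exotic):
    p, _ = curve_fit(model, (T, dTdt), dHdt, p0=[0.4, 5.0, 0.02],
                     bounds=([-100, -100, 1e-3], [100, 100, 1.0]))
    r = np.corrcoef(model((T, dTdt), *p), dHdt)[0, 1]
    print(model.__name__, round(r, 3))   # both correlations come out near 1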

eduardo said...

@ 48

wflamme,

yes, I fully agree. Perhaps the crux lies in the word 'lag'. What I mean is that the response need not be simultaneous, but rather distributed in time with a certain decay time. But I also do not see why the influence of T or dT/dt should become noticeable only after a time shift. In this case one should also not forget that the data have been smoothed with a roughly 30-year low-pass filter, which makes it even more problematic to speak of a 12-year lag.

wflamme said...

Eduardo (#50),

in this case thermal expansion could be modelled as a low-pass filter (PT1): a thermal diffusion resistance between the surface at temperature T and the heat capacity of the mixed layer at its mean temperature T_ml.

In addition, with an inertial model like this there is no obvious need for pre-smoothing the data. Let's see how well it fits (I hope I find the time soon).
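For the record, a minimal sketch of this PT1 idea in Python; the time constant, expansion coefficient and mixed-layer depth are illustrative placeholders of my own, not fitted values:

import numpy as np

def pt1_steric_rate(T_surface, dt=1.0, tau=13.0, alpha=2.0e-4, h_ml=100.0):
    # First-order lag (PT1): the mixed-layer temperature T_ml relaxes toward the
    # surface temperature with time constant tau (yr); the steric contribution is
    # alpha (1/K) times the layer thickness h_ml (m) times dT_ml/dt, in cm/yr
    T_ml = np.zeros_like(T_surface)
    for i in range(1, T_surface.size):
        T_ml[i] = T_ml[i-1] + dt / tau * (T_surface[i-1] - T_ml[i-1])
    return alpha * h_ml * np.gradient(T_ml, dt) * 100.0   # m/yr -> cm/yr

# Step response: +1 K at t = 0 gives an expansion rate that decays smoothly,
# i.e. a distributed response rather than a hard 13-year dead time
rate = pt1_steric_rate(np.ones(100))
print(rate[:5])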