In view of the interest in scepticism, I thought that the following text on models, written as a sort of personal introduction for the Post Normal Science workshop (more info here and here) held in Hamburg last year, could be interesting for some readers of Klimazwiebel. It is a bit too long and there is no guarantee that the reader will be satisfied at the end. In a similar tone, but much better written, is the article by Sibylle Anderl in the FAZ.
My view of science, rightly or wrongly, is strongly influenced by my university background. Physics students are confronted from the very beginning with the concepts of theory and models. I would argue that these two concepts are not really separate, and I will use both words as synonyms here - some models are theoretical or fundamental, like Newton's theory of gravitation, and other models are practical or numerical implementations that follow through on the main ideas expressed in theoretical models. A more important point is, however, that models and reality are deemed clearly separate, and physics actually makes use of quite different models that aim to describe different aspects of the same purported 'reality'. This is a very common situation in quantum physics, in which subatomic particles - electrons, protons, etc. - are handled either as particles or as waves, depending on the experimental situation. Many examples of this sort of dichotomy can be mentioned: the nucleus can be described as a drop of 'nuclear liquid' or as a set of neutrons and protons moving in a shell structure similar to that of electrons in an atom; phase transitions are brought about by the average influence of a whole solid body or by just the neighbouring atoms, and so on. It is not unusual that in exams the student is asked to explain a phenomenon within a certain model. In the mind of a progressing physics student, the concept of reality loses value very rapidly, and it is very seldom referred to, if at all. This Orwellian doublespeak does not seem to cause dramatic clashes, at least for most of us. A theory is just a tool to reduce the wealth of experimental observations to a common framework, or to make predictions about the outcome of not yet available experiments - arguably, prediction and reduction are two sides of the same coin. A model is certainly not 'reality', and it does not even attempt to map reality one-to-one. The concept of existence (reality) is not central to physical models.
I think it is important to keep in mind the limitations of this 'irrational' concept of science, or of scientific activity. Basically, scientific activity consists of designing models (theories) that condense observations, and of testing them against other observations. Predictions are not useful per se, but only as a tool to benchmark models. This utilitarian concept of models, i.e. quite detached from the concept of reality, is underlined by the fact that very often the building bricks used in those theories cannot be found in the real world. For example, Newton's model of gravitation was formulated by defining the functional form of the force between two point masses separated by a given distance. Obviously, nobody had seen at that time, or later, 'a point mass'. Models in modern physics are much more alien to daily experience. Climate models too, and models of fluid motion in general, incorporate concepts that have only a limited range of validity, and thus they cannot be thought of as 'reality'. One familiar concept is density. Density has a meaning only at (our) macroscopic scales and increasingly loses its connection to (our) reality at atomic scales, where it becomes rather equivalent to the loose concept of the density of a forest. It seems therefore clear that models cannot attempt to map 'reality', inasmuch as 'reality' is not a well-defined concept either.
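For concreteness, the functional form in question fits in a single line, with G the gravitational constant, m1 and m2 the two point masses, and r the distance between them:

$$ F = G\,\frac{m_1 m_2}{r^2} $$

Every quantity in this formula refers to an idealisation, and the equation is no less useful for that.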
I deem the sort of useful, down-to-earth prediction surely important, and based on complex science, but a fundamentally different activity from that of model building and testing. Perhaps this is the reason for much of the controversy surrounding post-normal science. It could also be related to the eternal squabbles between the two dominant schools of statistical thought, the frequentist and the Bayesian. One of the most important aims of the frequentist school is precisely hypothesis testing, which we could interpret here as model testing. Frequentists try to estimate to what extent some preconceived hypotheses or models are compatible with observations, and to what extent the agreement between models and observations could possibly be due to chance. Models and hypotheses are thus not proven by the statistical analysis; they are only disproved or deemed incompatible with experiments. This is exactly the viewpoint of classical science.
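As a minimal sketch of this frequentist logic - with invented numbers and a crude white-noise stand-in for internal variability, not real data or any actual climate model - one could ask how often chance alone would produce a trend as large as the one observed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed 50-year temperature trend (deg C per decade).
# All numbers here are invented for illustration only.
observed_trend = 0.18

# Null hypothesis: the trend is produced by internal variability alone,
# crudely approximated by white noise with an assumed amplitude.
n_sim, n_years = 10_000, 50
years = np.arange(n_years)
noise = rng.normal(0.0, 0.1, size=(n_sim, n_years))

# Least-squares trend of each synthetic series, converted to per decade.
slopes = np.polyfit(years, noise.T, 1)[0] * 10

# How often does chance alone yield a trend at least as large as observed?
p_value = np.mean(np.abs(slopes) >= observed_trend)
print(f"p-value under the null: {p_value:.4f}")
```

With these made-up numbers the observed trend falls far outside the null distribution, so the 'internal variability only' hypothesis is rejected. Note that nothing here proves any model right; the test can only rule one out.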
This testing viewpoint lies at the centre of the attribution of anthropogenic climate change: models that disregard the anthropogenic forcings are incompatible with observations, whereas models that do include those forcings are compatible with observations (actually less incompatible, as explained later). The concept of attribution is distinct from that of a useful or accurate prediction of the future climate. This difference stems not only from the uncertainty in the possible future history of anthropogenic emissions, which is of course a crucial external condition for climate prediction but does not form part of climate science. The difference between both sorts of activities is neatly illustrated by considering that the IPCC takes into account some 20 climate models, all of them claiming to describe the same 'aspects of an underlying reality' and each of them providing different predictions for the future climate. They are thus competing models. The classical scientific activity would be directed at separating the wheat from the chaff until, hopefully, only one of these models remains. Even more strictly, classical scientific activity would aim at testing all climate models against present observations with the unhidden purpose of proving them wrong. This would not be very difficult, because we already know that all climate models are wrong, in the sense that no single one of them can reproduce all available observations, even taking the observational uncertainty into account.
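To make the attribution argument concrete, here is an equally hedged sketch: two invented ensembles, one without and one with anthropogenic forcing, are checked for compatibility with a single invented observation (none of these numbers come from actual model output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical model ensembles of late-20th-century warming (deg C).
natural_only = rng.normal(0.1, 0.15, size=200)   # anthropogenic forcing excluded
all_forcings = rng.normal(0.7, 0.15, size=200)   # anthropogenic forcing included
observed = 0.75

def tail_fraction(ensemble, obs):
    """Fraction of ensemble members at least as far from the ensemble
    mean as the observation: a crude measure of (in)compatibility."""
    dev = np.abs(ensemble - ensemble.mean())
    return np.mean(dev >= abs(obs - ensemble.mean()))

print(f"natural-only ensemble: {tail_fraction(natural_only, observed):.3f}")
print(f"all-forcings ensemble: {tail_fraction(all_forcings, observed):.3f}")
```

The natural-only ensemble turns out to be incompatible with the observation, while the all-forcings ensemble is compatible - or, more carefully, less incompatible. Surviving such a test is not the same as being right.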
However, climate prediction - and indeed economic and many other sorts of prediction - has a different goal, namely to use as efficiently as possible all the tools we have at hand (models, observations, experience, insights, etc.) to deliver the most 'reasonable' future evolution of a given system. This is the world of Bayesian methods. Now all models are used, since all models are more or less equally wrong, and they are what we have anyway. All observations are used, since they likewise represent the maximum amount of information and insight we may have about the system. Predictions are not good or bad per se, and they may even change - and do change - when new information (new data, new models) becomes available. This does not invalidate the methods used in former predictions; the predictions are just updated. Predictions are more or less efficient, or more or less reasonable. This is in stark contrast to classical science, and much closer to my understanding of what post-normal science is.
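A minimal sketch of this Bayesian attitude, again with invented numbers (three hypothetical models, an assumed Gaussian observational error), shows how all models are kept and merely reweighted by the observations:

```python
import numpy as np

# Three hypothetical competing models and their misfit against past
# observations. Every number here is invented for illustration.
model_predictions = np.array([1.5, 2.0, 3.2])  # e.g. future warming (deg C)
misfit_vs_obs = np.array([0.4, 0.2, 0.6])      # error against past observations
sigma = 0.3                                     # assumed observational error

# Prior: all models start out equally credible; none is discarded.
prior = np.ones(3) / 3

# Gaussian likelihood of each model given the observations.
likelihood = np.exp(-0.5 * (misfit_vs_obs / sigma) ** 2)

# Bayes' rule: models are reweighted, not rejected.
posterior = prior * likelihood
posterior /= posterior.sum()

print("posterior weights:", np.round(posterior, 3))
print("weighted prediction:", round(float(posterior @ model_predictions), 3))
```

When new observations arrive, today's posterior simply becomes tomorrow's prior and the weights are updated; the earlier prediction is not invalidated, just superseded.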
As an Orwellian doublespeaker by education, I do not feel especially uneasy when confronted with this situation, as long as one knows on which court one is playing the game.