On Being a Denialist Part 7

Taking things with a grain of salt

One of The Git’s friends is very much enamoured of Thermageddon, but refuses to discuss the issue on the grounds that The Git knows far more about climate than he does. He has, however, made alarmist claims in the area where his expertise exceeds The Git’s: he is a marine biologist, and that is an area of biology The Git had not studied at all closely until quite recently. The specific claim made by alarmist marine biologists is, to quote the Wiki-bloody-pedia, “there is evidence of ongoing ocean acidification caused by carbon [dioxide] emissions”. According to the Oxford English Dictionary (and several other authoritative dictionaries), acidification means to make acidic. Chemists, marine or otherwise, measure acidity and alkalinity on what is called the pH scale, where 7 is neutral, greater than 7 is basic (alkaline) and less than 7 is acidic.

Contrary to alarmist marine biologists’ claims that the oceans are becoming increasingly acidic, the oceans are decidedly alkaline, varying between ~pH 8 and pH 9. So where does the claim that anthropogenic carbon [dioxide] emissions have made the oceans acidic (that is, caused the pH to fall below 7) come from?

According to Mark Z. Jacobson, whose paper Studying ocean acidification with conservative, stable numerical schemes for nonequilibrium air-ocean exchange and ocean equilibrium chemistry appeared in the Journal of Geophysical Research, “surface ocean pH is estimated to have dropped from near 8.25 to near 8.14 between 1751 and 2004, it is forecasted to decrease to near 7.85 in 2100 under the SRES A1B emission scenario, for a factor of 2.5 increase in H+ in 2100 relative to 1751.” Note the dates and the pH quoted to two decimal places. Note also that none of these pH values is less than 7; the oceans are basic!
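Jacobson’s “factor of 2.5” follows directly from the logarithmic definition of pH, and the arithmetic is easy to check. A minimal sketch in Python (the pH values are the ones quoted above, not new data):

```python
import math

# pH is defined as -log10 of the hydrogen ion activity, so the ratio of
# H+ between two pH values is simply 10 ** (pH_old - pH_new).
def h_ratio(ph_old: float, ph_new: float) -> float:
    return 10 ** (ph_old - ph_new)

print(round(h_ratio(8.25, 8.14), 2))  # 1751 -> 2004: ~1.29, a 29% increase in H+
print(round(h_ratio(8.25, 7.85), 2))  # 1751 -> 2100 (SRES A1B): ~2.51, the "factor of 2.5"
# Note that every one of these pH values remains comfortably above 7.
```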

The measurement of relative acidity and alkalinity using pH was introduced by Danish chemist Søren Peder Lauritz Sørensen at the Carlsberg Laboratory in 1909. The scale was later revised to the modern pH in 1924 to accommodate definitions and measurements in terms of electrochemical cells. Thus the pH of anything as variable as seawater prior to 1909 cannot be known by definition!

It gets worse. Measuring the pH of strongly ionic solutions such as seawater is difficult, and techniques for making accurate estimates of seawater pH weren’t developed until the 1980s and 1990s. There are three main methods in use and the results they give do not agree with each other; they vary by as much as the purported pH change. A reasonably inexpensive and sufficiently accurate technique using the Honeywell DuraFET pH sensor only emerged over the last ~15 years. So the claimed decrease in ocean pH over more than 250 years is based at best on 30 years of actual data; more likely 15 years.

The alarmist marine biologists making these claims are trading on the general public’s ignorance of the well-accepted definition of what is acidic and what is alkaline (basic). They are also trading on ignorance within the general scientific community of the difficulties involved in measuring seawater pH.

Because of a variety of problems inherent in electrometric pH measurements, including electrode drift, electromagnetic interference and problems with the reference electrode, the precision of these pH measurements is relatively poor. On average, we obtain a precision of ±0.02 pH units on replicate samples. The accuracy of our pH measurements are difficult to evaluate directly because we have no seawater standard for pH measurements. The accuracy is therefore dependent primarily on the accuracy of the seawater buffers that are used for electrode calibration. In order to improve the precision of our time-series pH measurement data, we are currently evaluating the spectrophotometric methods for pH measurements described by Byrne et al. (198_). Although these measurements are currently being made on a regular basis, the methodological details are not finalized and are not described here. [Emphasis The Git’s]

The pH change over 250 years is claimed to be 0.11. The following diagram shows the pH changes on a daily and seasonal basis in what RA Horne’s Marine Chemistry (1969) calls a “shallow Texas bay”. In summertime, the pH is ~8.2 at 6 am and ~8.9 at 6 pm, a daily range of ~0.7, considerably greater than the change claimed for the entire 250 year period. Similarly, the pH at 6 pm in winter is ~8.4, some 0.5 lower than at the same hour in summertime.

[Diagram: diurnal and seasonal pH variation in a shallow Texas bay]

There is a multitude of problems facing the would-be measurer of seawater pH. If you measure the pH of a sample in the dark, the result is different from the result with the lights turned on. Filtering the seawater to remove living and dead organic matter alters the pH, as does changing the temperature: pure water is pH 7.0 at 25°C and about 6.55 at 50°C. Seawater pH falls rapidly with increasing depth near the surface. Taking these factors into account, along with the varied techniques in use, it would appear an impossible task to gather sufficient real-world data to estimate the pH change over decades, let alone the centuries of the claimed change. There appears to be no standardised method for reconciling the differences inherent in measuring seawater pH. It seems to be computer models substituting for real data (again).

Putting all that aside, the real interest here is the purported effect of pH change on marine organisms.

Effects of Ocean Acidification on Marine Species & Ecosystems

Emphasis in the quotes is The Git’s.

Oceana [sic] acidification may cause many negative effects on a variety of marine species and ecosystems, which would have rippling consequences throughout the entire ocean. One of the most devastating impacts of rising ocean acidity could be the collapse of food webs.

Marine animals interact in complex food webs that may be disrupted by ocean acidification due to losses in key species that will have trouble creating calcium carbonate shells in acidified waters. Some species of calcifying plankton that are threatened by ocean acidification form the base of marine food chains and are important sources of prey to many larger organisms.

Note the repeated use of the weasel-words in this scare-literature: may, could, are expected, as well as the continual use of the word acid to refer to seawater that is nowhere acidic, but everywhere alkaline. Experiments conducted to demonstrate shell-loss appear to use hydrochloric acid (HCl) to decrease the pH of seawater, rather than the demon carbon [dioxide]. While HCl reliably alters seawater pH, carbon [dioxide] participates in a process known as buffering when it dissociates into bicarbonate and carbonate ions in seawater. Buffered solutions resist pH change.
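To make the buffering point concrete, here is a highly simplified sketch using the Henderson-Hasselbalch relation for the CO2/bicarbonate pair. It ignores carbonate ion, borate and changes in alkalinity, and the pK1 and concentrations are rough illustrative assumptions rather than measured seawater values, so treat it as showing nothing more than the logarithmic damping a buffer provides:

```python
import math

# Simplified Henderson-Hasselbalch relation for the CO2/bicarbonate pair:
#   pH = pK1 + log10([HCO3-] / [CO2*])
# The values below are illustrative assumptions, not measured seawater chemistry.
PK1 = 5.9        # apparent first dissociation constant, seawater-like
HCO3 = 1.8e-3    # bicarbonate, mol/kg
CO2 = 1.2e-5     # dissolved CO2, mol/kg

def ph(hco3: float, co2: float, pk1: float = PK1) -> float:
    return pk1 + math.log10(hco3 / co2)

print(round(ph(HCO3, CO2), 2))       # ~8.08 for the assumed ratio
print(round(ph(HCO3, CO2 * 2), 2))   # doubling dissolved CO2 at fixed HCO3- moves pH by log10(2) ~ 0.30
print(round(ph(HCO3, CO2 * 10), 2))  # even a tenfold increase moves pH by only 1 unit
```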

Here, Jennifer Marohasy shows a couple of photographs of “active underwater fumaroles pumping out virtually pure CO2. The sea grass is extraordinarily lush and healthy and there is very healthy coral reef a few metres away.”

Some marine biologists have claimed that these photographs are deceptive in that they show organisms already adapted to the high levels of CO2 around these fumaroles. While this is true, and the adult organisms are sessile, their juvenile stages are not: they are free-swimming and thus also adapted to conditions in the wider ocean. Clearly they are quite capable of coping with a wide range of conditions, something that would come as no surprise to anyone even vaguely acquainted with Earth’s wide range of temperatures and CO2 levels over past millennia.

Quoting again from Effects of Ocean Acidification on Marine Species & Ecosystems:

Tiny swimming sea snails called pteropods are considered the ‘potato chips of the sea’ as they serve as a critical part of the arctic marine food web, ultimately feeding whales and other top predators. Pteropod shells are expected to dissolve in acidity levels predicted by the end of this century and may not be able to survive. Population crashes or changes in the distribution of pteropods would have serious implications for some of the most abundant marine ecosystems.

Other important calcifying species have been witnessed to have troubles in acidified waters.

Sea urchins are important grazers and can help to protect coral reefs from encroaching algae. Young sea urchins have been observed to grow slower and have thinner, smaller, misshapen protective shells when raised in acidified conditions, like those expected to exist by the year 2100.  Slower growth rates and deformed shells may leave urchins more vulnerable to predators and decrease their ability to survive. Furthermore, under acidified conditions the sperm of some sea urchins swim more slowly, this reduces their chances of finding and fertilizing an egg, forming an embryo and developing into sea urchin larvae.

Dr J Floor Anthoni writes:

As far as measuring the effect of raised CO2 levels on marine animals, the situation is complicated because CO2 rapidly becomes toxic, with symptoms of depression of physiological functions, depressed metabolic rate + activity + growth, followed by a collapse in circulation. Remember that free CO2 amounts to only 1% of the total CO2 ‘bonded’ to the water and that it takes some time for equilibrium with the other CO2 species to happen. It is thus too easy to overdose the free CO2 by increasing the CO2 in the air above. In other words, it is nearly impossible to mimic the natural situation truthfully in an experiment. [Emphasis in the original]

All of this reminds me of a made-up scare by marine biologists back in the 1970s. Supposedly, the Crown of Thorns starfish (Acanthaster planci) was going to completely devastate the Great Barrier Reef within a decade or so. Forty years on, no such devastation has occurred, although limited areas of the reef have seen coral denuded by them. It turns out that marine biologists who actually dive in the coralline waters observe the beneficial effects of the Crown of Thorns starfish on coral. Coral reefs in tropical waters are severely damaged by intense cyclones (called hurricanes in the Northern Hemisphere) and the Crown of Thorns starfish perform their good works during the recovery phase. Not all corals grow at the same rate and without the starfish, the faster growing branching corals would predominate over the slower growing corals. The starfish, by feeding preferentially on the branching corals, enable the slower growing corals to thrive.

Quoting again from Effects of Ocean Acidification on Marine Species & Ecosystems:

Squid are the fastest invertebrates in the oceans and require high levels of oxygen for their high-energy swimming. Increasingly acidic oceans interfere with the acidity of a squid’s blood and consequently the amount of oxygen that it can carry. Squid are important prey for many marine mammals, including beaked and sperm whales. Squid fisheries are also the most lucrative fishery in California accounting for 25 million dollars in revenues in 2008.

This is perhaps the weirdest claim of the lot. Animal blood, like seawater, is highly buffered, and an individual’s blood pH has no direct relationship to the pH of the local environment; rather, it is under the control of the respiratory system. While it’s certainly important for animals to maintain optimum blood pH, no animal, marine or terrestrial, that The Git knows of exposes its blood to pH influences external to its skin. The Git knows of women who wash their hair in vinegar (an acid) and those who wash their hair with soap (an alkali). Neither, to the best of his knowledge, suffers from acidosis or alkalosis as a consequence.

When pressed for an explanation of the falsehood that the oceans are acidic, those responsible respond that acidification simply means lowering pH. There is, however, no authoritative source for that usage, and when pressed for one the Thermageddonists, as usual, cannot produce it. When The Git was studying chemistry in 1969, shifting pH towards the neutral point (7) was called neutralisation; never was it called acidification.

 


On Being a Denialist Part 6

One of The Git’s favouritest commenters at Watts Up With That? is Dr Robert G Brown, a Professor at Duke University. His comments are well-written and very enlightening. One such comment, made recently, has been promoted by Anthony Watts to a head post.

Is the climate computable?

phlogiston: I do realise that over the Antarctic land mass albedo from surface snow is anomalously higher than that from cloud, since the snow presents such a pure white surface. However this is probably not the case for sea ice whose surface is more irregular and cracked with patches of dark sea in between.

The trouble is that water vapor is literally a two-edged sword. As vapor, it is the strongest greenhouse gas in the atmosphere by (IIRC) around an order of magnitude, so increasing water vapor can and does measurably increase the GHE — a lot, when considering dry air versus saturated air. In arid deserts, temperatures skyrocket during the day and plummet at night because of the absence of a water vapor driven GHE — CO2 alone isn’t nearly enough to keep upward facing surfaces from rapidly losing their heat due to radiation. In very humid tropical climates, the nights are consistently warm because of the GHE.

However, water vapor is also the mediating agent for two major cooling mechanisms. One is the bulk transport of latent heat — sunlight and LWIR hit the sea surface and cause rapid evaporation of surface molecules of water. Wind blows over the ocean surface, stripping off water molecules as it goes. This evaporated water has a huge heat content relative to liquid water — the latent heat of vaporization. As the warm water vapor is carried aloft by convection, it carries the heat along with it. It also cools as it rides the adiabatic lapse rate upward, and further cools by radiating its heat content away (some of which returns to the Earth as GHE back radiation).

Eventually the partial pressure of water vapor in the moist air becomes saturated relative to the temperature and the dew point is reached, making it comparatively probable that the water vapor will recondense into water. In order to do so, though, several things have to be “just right”. The water vapor has to be able to lose the latent heat of vaporization that it picked up at the water surface when it evaporated. The future water droplets have to be able to nucleate — which is a lot more likely to occur when there are ionic aerosols in the atmosphere as water (a polar molecule) is attracted to bare charge of either sign. More here.
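As an aside on the dew point mentioned near the end of Brown’s comment, here is a minimal sketch of how dew point falls out of temperature and relative humidity using the Magnus approximation; the coefficients are commonly quoted empirical values and are an assumption of this sketch, not anything taken from the comment itself:

```python
import math

# Magnus approximation for dew point from air temperature (°C) and
# relative humidity (%); A and B are commonly used empirical constants.
A, B = 17.27, 237.7

def dew_point(temp_c: float, rh_percent: float) -> float:
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

print(round(dew_point(30.0, 80.0), 1))  # humid tropical air: dew point ~26 °C
print(round(dew_point(30.0, 10.0), 1))  # dry desert air: dew point ~ -5 °C
```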

Thought for the Day

The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself. – Bertrand Russell

On Being a Denialist Part 5

Apart from a bunch of apparently deranged sceptics (The Dragon Slayers), everyone involved in the science of The Great Climate Debate accepts the general principle that carbon dioxide in the atmosphere absorbs photons (energy) emitted by the Earth’s surface and re-emits them, some of which inevitably return to the Earth’s surface. Despite the protestations of the Slayers, photons have no idea which direction they are to travel in (bar a God who cares about such things). This leads to the Earth’s surface being warmer than it would be if there were no carbon dioxide in the atmosphere. CAGWers have a nasty habit of claiming that all sceptics of imminent Thermageddon refuse to accept this well-tested property of carbon dioxide. For those unfamiliar with atmospheric physics, Science of Doom published an educational (and quite technical) series of posts some time ago if you feel the need to come up to speed with the underpinnings.

As The Git pointed out in an earlier post, most accept the calculation of MODTRAN5 that a doubling of CO2 level in the atmosphere leads to ~1°C of temperature increase. However, this is ceteris paribus (all other things being equal). And where the real world is concerned, things are only very rarely ceteris paribus. An increase in CO2 inevitably leads to an increase in the amount of plant food available to plants.
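For readers who would like to see roughly where the ~1°C figure comes from without running MODTRAN5, here is a back-of-the-envelope sketch using the commonly cited logarithmic forcing approximation and an approximate no-feedback (Planck) response. Both coefficients are assumptions of the sketch, standard published round numbers rather than anything from The Git’s earlier post, and the ceteris paribus caveat applies in full:

```python
import math

# Approximate radiative forcing for a change in CO2 concentration (logarithmic form):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
# The no-feedback (Planck) response is roughly 0.3 K per W/m^2.
PLANCK_RESPONSE = 0.3  # K per (W/m^2), an approximate assumed value

def forcing(c_ppm: float, c0_ppm: float) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)

dF = forcing(560.0, 280.0)             # a doubling of CO2
print(round(dF, 2))                    # ~3.71 W/m^2
print(round(dF * PLANCK_RESPONSE, 2))  # ~1.1 °C, ceteris paribus
```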


What is This Thing Called Climate?

One of the oddest things about the debate (that’s supposedly over) regarding climate is how very little of the discussion is actually about climate. Let’s start by defining what is generally meant by the term. From the OED:

3. a. Condition (of a region or country) in relation to prevailing atmospheric phenomena, as temperature, dryness or humidity, wind, clearness or dullness of sky, etc., esp. as these affect human, animal, or vegetable life.

Originally, climate meant a region, inclination or slope. “The meaning passed in Greek through the senses of ‘slope of ground, e.g. of a mountain range’, the supposed ‘slope or inclination of the earth and sky from the equator to the poles’, ‘the zone or region of the earth occupying a particular elevation on this slope, i.e. lying in the same parallel of latitude’, ‘a clime’, in which sense it was adopted in late Latin.” Aristotle identified three climates: the frigid zone to the North, the torrid zone to the South, and the temperate zone between, so named as that’s where all the civilised people lived. He also believed that there was a corresponding temperate and frigid zone south of the torrid zone, but that we would never be able to confirm this since, by extrapolation, life could not exist in the torrid zone and so no one could cross it to find out. Ah, the perils of extrapolation from what is known to what is not yet known!

Moving forward in time a couple of millennia, the Russian-born German climatologist Wladimir Köppen devised the most widely used climate classification system. He first published his system in 1884 and it underwent several modifications, by Köppen himself and by his collaborator, Rudolf Geiger. Quoting from the wiki-bloody-pedia:

The system is based on the concept that native vegetation is the best expression of climate. Thus, climate zone boundaries have been selected with vegetation distribution in mind. It combines average annual and monthly temperatures and precipitation, and the seasonality of precipitation.


On Being a Denialist Part 4

There is a concept in psychology called Magical Thinking, which may be characterised as the perception of causal relationships where science can find none. An example is the cargo cults of the Pacific islands after World War 2. The natives of the region had observed during that war that aircraft delivered many useful goods: food, tools, clothing and so on. After the armed forces left the region, the aircraft stopped arriving and so did the goods they had previously delivered. The natives built symbolic aircraft out of local materials in order to bring about a resumption of the goods, which they called cargo. Of course the cargo never arrived, because it wasn’t the shape of the aircraft that caused the cargo to arrive during WW2.

In current CAGW climatology there is a similar magical association between two distinctly different physical concepts: temperature and energy. The latter can take many forms, so the preferred physical measure of energy in many circumstances is enthalpy, the “measure of the total energy of a thermodynamic system. It includes the system’s internal energy and thermodynamic potential (a state function), as well as its volume and pressure (the energy required to “make room for it” by displacing its environment, which is an extensive quantity). The unit of measurement for enthalpy in the International System of Units (SI) is the joule [J].”

To illustrate, sea level air at 25°C and 10% relative humidity (RH) has enthalpy of 30.4 kJ/kg. Sea level air at 25°C and 90% RH has enthalpy of 71.8 kJ/kg, ~2.4 times the energy despite the same dry bulb temperature. So, when climatologists (regardless of any associated beliefs) talk of a “Global Average Surface Temperature Anomaly” (GASTA) they refer to the averaging of temperatures regardless of the enthalpy (energy content) of the air that is sampled. This would be fine if it were temperatures that arrived at the Earth from the Sun, but it’s not. What arrives from the Sun is energy: electromagnetic energy that is mainly in the form of visible light, but includes ultraviolet and infra red photons too.
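The enthalpy figures above can be reproduced, near enough, with a standard psychrometric formula. A minimal sketch follows; the constants are textbook values and sea-level pressure is assumed, so the small differences from the figures quoted above come down to the choice of constants:

```python
import math

P_TOTAL = 101.325  # total air pressure in kPa (sea level, assumed)

def sat_vapour_pressure(t_c: float) -> float:
    """Saturation vapour pressure of water in kPa (Magnus-type approximation)."""
    return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

def moist_air_enthalpy(t_c: float, rh_percent: float) -> float:
    """Specific enthalpy of moist air in kJ per kg of dry air."""
    pv = sat_vapour_pressure(t_c) * rh_percent / 100.0
    w = 0.622 * pv / (P_TOTAL - pv)              # humidity ratio, kg water per kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

print(round(moist_air_enthalpy(25.0, 10.0), 1))  # ~30 kJ/kg
print(round(moist_air_enthalpy(25.0, 90.0), 1))  # ~71 kJ/kg: same temperature, ~2.4 times the energy
```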

Let’s first look at what temperature is and what averaging temperatures might mean. Put simply, temperature is a measure of the rate of vibration of molecules in a system in thermal equilibrium; being in thermal equilibrium means that the molecules are all, on average, vibrating at the same rate. If you have a metal rod with one end in a fire and the other in a bowl of ice water, then heat energy is flowing from the hot end to the cold end. There is no thermal equilibrium and hence no defined temperature. Similarly, the Earth is not in thermal equilibrium and hence has no defined temperature. If there were such a definition, you might expect to find it at The National Institute of Standards and Technology (NIST).

Toward a Global Microwave Standard

Much of what is known about decadal climate change – and much of what appears on the evening weather forecast as well – comes from satellite-based remote sensing of microwave radiation at different levels in the Earth’s atmosphere. Microwave measurements are generally reported as the apparent temperature of the object being monitored.  Yet, at present, there is no accepted brightness-temperature (radiance) standard for microwaves that can be used for authoritative calibration of microwave sensors, for resolving discrepancies between readings from different satellites, or for comparing one program’s results with another’s.

Weather and climate uses for microwave remote sensing measurements require that the observed temperature be accurate within 1 kelvin or less. But existing measurements cannot be made with that accuracy or reliability. “Right now,” says David Walker, Project Leader for Microwave Remote Sensing in PML’s Electromagnetics Division, “new data coming from nominally identical instruments can differ by as much as a couple of kelvin.”

Well, there’s a good reason to accept the calculations of GISS and CRU then. They tell us that they calculate GASTA to an accuracy of a tenth of a kelvin (Celsius degree). But try as he might, The Git cannot find on NIST’s website, in Oke’s Boundary Layer Climates, or anywhere else a Standard Definition of either Earth’s temperature or GASTA. There might be a very good reason for this — several good reasons in fact.

Consider what we are doing when we calculate the average length (arithmetic mean) of a bunch of sticks. Let’s say we have three sticks of lengths 2, 4 and 5 metres. We can lay them end to end, measure that length (11 metres), divide by three (the number of sticks) and discover the average length to be ~3.67 metres.

Now let’s try the same with temperatures. Let’s say we have three beakers of water, one at 0°C, another at 10°C and the third at 20°C. Mix the contents of the beakers together, wait for thermal equilibrium and measure the result. Do this several times. At no time is there ever a temperature of 30°C (the sum of the three temperatures), unlike the stick example where there is a well-defined 11 metres of stick. Worse, depending on the ratio of liquid water to ice in the 0°C sample, the resulting temperature might well be an entirely different value from 10°C, the average of the three temperatures. Indeed, repeat the experiment a sufficient number of times to sufficient accuracy and you might well deduce that there is an infinite number of possible average temperatures.
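A minimal sketch of the beaker experiment, assuming equal masses in each beaker and standard values for the specific heat of water and the latent heat of fusion of ice, shows how the ice fraction in the 0°C beaker moves the equilibrium temperature about:

```python
C_WATER = 4.186   # specific heat of liquid water, kJ/(kg*K)
L_FUSION = 334.0  # latent heat of fusion of ice, kJ/kg

def mix(temps_c, ice_fraction_in_first=0.0, mass_each=1.0):
    """Equilibrium temperature of equal masses of water mixed adiabatically.
    The first beaker is at 0 °C and may contain ice, given as a mass fraction.
    Assumes the final mixture is entirely liquid (all the ice melts)."""
    sensible = sum(t * mass_each * C_WATER for t in temps_c)
    melt = ice_fraction_in_first * mass_each * L_FUSION
    return (sensible - melt) / (len(temps_c) * mass_each * C_WATER)

print(round(mix([0.0, 10.0, 20.0]), 1))                             # all liquid: 10.0 °C
print(round(mix([0.0, 10.0, 20.0], ice_fraction_in_first=0.1), 1))  # 10% ice: ~7.3 °C
print(round(mix([0.0, 10.0, 20.0], ice_fraction_in_first=0.3), 1))  # 30% ice: ~2.0 °C
```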

So, what exactly is happening in the second example compared to the first? Length and ever so many other physical quantities are extensive; extensive quantities can legitimately be added together and hence averaged. Temperature and ever so many other physical quantities are intensive, and attempting to average them is therefore said to be physically undefined.

The Global Average Surface Temperature Anomaly calculation is just such a physically undefined operation as described above. First, the individual average temperature at a recording station is calculated. At some stations this is the sum of the hourly temperatures divided by 24 (the number of hours in a day). At others the minimum and maximum temperatures are added together and the sum divided by two. This second average is the mid-range, which in the sticks example above is 3.5 metres, clearly not the same value as either the arithmetic mean (~3.67 m) or the median (4 m).
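The difference between these averages is easy to see with the three sticks; a minimal sketch using nothing beyond Python’s standard library:

```python
import statistics

sticks = [2, 4, 5]  # lengths in metres

mean = statistics.mean(sticks)                # ~3.67 m: the arithmetic mean
median = statistics.median(sticks)            # 4 m: the middle value
mid_range = (min(sticks) + max(sticks)) / 2   # 3.5 m: the (max + min) / 2 figure used at many stations

print(round(mean, 2), median, mid_range)
```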

These average temperatures of the sampled air (and sea surface water in the case of HadCRUT) are then further averaged together and compared with the average over a chosen base period. The current average is differenced from the base-period average to generate the Global Average Surface Temperature Anomaly, which serves as a proxy for enthalpy.

While this might make sense to climatologists, to The Git’s philosophical mind it more closely resembles numerology than any kind of description of a well-defined physical reality. Cargo cultism. Magical thinking.

And here endeth this lesson.

Response to Either way, 22’s Comment on Denialism Part 1

Either way, 22 blogs:

Honestly, I have no idea why The Pompous Git was in my RSS feed: must have written something of interest to me or been linked from a site I read. Two weeks ago a title that popped up from the category piqued my curiosity: I expected satire, but instead found what looks like a real, honest-to-god attempt to propagate climate change denial, in an unbearably pretentious style — but hey, judging by the blog’s title, it’s supposed to be the author’s signature — with a lot of conceited mockery forced in. Ironically, it only served to underline his confusion about the very basics of the theory he wanted to discredit, not to mention his problems grappling with secondary-school level physics.

The post is not unique as far as denialists’ posts go, but being more elaborate, makes for an excellent example of all their typical failures of “motivated reasoning”: ignoring 99% of available data to concentrate on a few select graphs and papers (which of course “prove” their point), a total failure to cross-check and/or verify anything (including definitions the author himself links to), a long succession of logical fallacies, and last but not least, classic symptoms of the Dunning-Kruger syndrome: He Knows Better, Because He Once Read a Climate Handbook. Tremble, mere climate scientists.

Before we get stuck into Either way, 22‘s/Anonymous Coward’s [delete whichever is inapplicable] comment, it’s worth pointing out the completely false claim here that The Git presented himself as a climate science expert (suffering “the classic symptoms of the Dunning-Kruger syndrome”). Rather, he pointed out that he took for granted that a classic undergraduate text (Oke’s Boundary Layer Climates) was a fair representation of climate science. He also pointed out that another popular undergraduate text, The Changing Earth: Exploring Geology and Evolution by Monroe and Wicander, had nothing substantive on CAGW either. He also referred to the AGU publication The Oceans and Rapid Climate Change: Past, Present, and Future, Volume 126, and mentioned the influence of Hubert Lamb and Gordon Manley, who both wrote books and papers in The Git’s collection. This is manifestly not “He Knows Better, Because He Once Read a Climate Handbook”. The Git referred to The Received View, that is, what is actually taught to undergraduates. It should be fairly obvious that The Git is not responsible for the existence of that Received View. Indeed, he is perplexed that acceptance of The Received View earns the pejorative label climate denialist (as if anyone ever denied the existence of climate!). On with the farce:

On Being a Denialist Part 2

In the last post, The Git looked at the claimed relationship between CO2 emissions and the Global Average Surface Temperature Anomaly (GASTA). He showed that, despite a claimed scientific consensus, there is no obvious relationship unless you admit backwards causation in time. Nevertheless, it appears that GASTA has increased over the 160 years since the end of the Little Ice Age. The question then arises as to what the effect of this temperature rise has been. A second question arises as to what the effect of mitigation strategies has been.

First, we need to look at the temperature rise in context:

[Chart: Global Temperature Anomalies. Global temperature trend from 1880 to present, compared to a base period of 1951-1980. Global temperatures continue to rise, with the decade from 2000 to 2009 as the warmest on record. Source: NASA/Earth Observatory/Robert Simmon]

You will note that the claimed increase in temperature is far from uniform: 1880 to 1910 is flat, 1910 to 1940 has substantially the same slope as 1970 and after, and 1940 to 1970 shows a small decline. There is an apparent periodicity of ~30 years. The first period of temperature rise is claimed to be entirely natural, there being no substantial increase in anthropogenic CO2 emissions during that period. The second rise is said to be unaffected by natural causes and entirely due to anthropogenic CO2. To be fair, not everyone agrees that it’s entirely due to anthropogenic CO2, even on the CAGW side of the debate. However, if we allow that, then the claim of consensus needs to be discarded.

The Git argues that to a very great extent it doesn’t matter whether the rise is entirely, partially, or not at all due to anthropogenic CO2, since the variation is so small: it is very much less than changes that have already occurred earlier in the Holocene. The Eemian, the interglacial period prior to the Holocene, began about 130,000 years ago and ended about 114,000 years ago. For virtually its entire duration, Earth’s temperature is estimated to have been about the same as during the Holocene Optimum. Hippopotami, elephants, rhinoceroses and hyenas lived as far north as the rivers Rhine and Thames. It’s also worth noting that these same species inhabited what is now called the Sahara Desert during the Holocene Optimum, as did the humans who hunted there. It would seem that temperatures 2–3°C higher than present were very good for life on Earth.

While we have yet to experience that condition, we are told that the temperature rise that has already occurred during the 20th century was deleterious. First though, The Git has clear recollections of the period ~1960–1965 in UKLand. In the latter years, we experienced what was called The Big Freeze. The Git recalls a midwife freezing to death on her bicycle while on the way to deliver a baby. He experienced frostbite on his ears. While travelling to school each day, he saw small birds frozen to death in the trees and shrubs. The motion of the swans on the lake in the park kept a gradually smaller and smaller area ice-free until one day they too succumbed to that dreadful winter of 1962–3. The implication of the chart above is that this was “climate normal”.
