The ‘adjustment’ methods used to show ‘global warming’

Science Matters features a monthly update on ocean SST averages from HadSST3 (latest is Oceans Cool Off Previous Three Years). Graeme added this comment:
I came across this today (link here). Can you comment on why your studies seem to show the reverse? Regards, Graeme Weber

While thinking about a concise, yet complete response, I put together this post. This is how I see it, to the best of my knowledge (emphasis added):

The question could be paraphrased in these words: Why are there differences between various graphs that report changes in global temperatures?

The short answer is: The differences arise both from what is measured and how the measurements are processed.

For example, consider HadSST3 as one example and GISTEMP as another. All climate temperature products divide the Earth's surface into grid cells for analysis.

This is necessary because a global average can be biased by some regions being much more heavily sampled, e.g., North America or the North Atlantic.
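The effect of gridding can be seen in a small sketch. The station readings below are hypothetical, and the scheme (10° latitude bands weighted by the cosine of latitude as an area proxy) is a much-simplified stand-in for what real gridded products do, but it shows how binning first keeps a densely sampled region from dominating the global average:

```python
import math

# Hypothetical station anomalies (latitude, °C): one region is
# heavily sampled (four mid-latitude stations), the tropics have one.
readings = [(45.0, 0.9), (46.0, 1.0), (47.0, 1.1), (48.0, 1.0), (0.0, 0.2)]

# Naive average: dominated by the densely sampled region.
naive = sum(t for _, t in readings) / len(readings)

# Grid-cell average: bin stations into 10-degree latitude bands,
# average within each band, then weight each band by cos(latitude)
# at its centre (a rough proxy for the band's surface area).
cells = {}
for lat, t in readings:
    band = int(lat // 10) * 10
    cells.setdefault(band, []).append(t)

num = den = 0.0
for band, temps in cells.items():
    w = math.cos(math.radians(band + 5))  # band-centre weight
    num += w * (sum(temps) / len(temps))
    den += w
gridded = num / den
```

Here `naive` comes out well above `gridded`, because the four clustered stations count once each in the plain mean but collapse into a single cell after gridding.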

HadSST3 takes in measurements only from cells containing ocean, while GISTEMP uses data files from NOAA GHCN v3 (meteorological stations), ERSST v5 (ocean areas), and SCAR (Antarctic stations).

Beyond this, HadSST3 is properly termed a temperature data product, while GISTEMP is a temperature reconstruction product. The distinction comes down to how each product team deals with missing data.

HadSST3 calculates averages each month from grid cells with sufficient samples of observations and excludes cells with inadequate samples for the month.

GISTEMP estimates temperature values for cells lacking data by referring to cells that are sufficiently observed. The estimates are a best guess as to what temperatures would have been recorded had fully functional sensors been operating there.

This process is called interpolation, resulting in a product combining observations with estimates, i.e., an admixture of data and guesses.
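The contrast between the two approaches can be sketched on a toy grid. The anomaly field below is invented, and the nearest-neighbour infill is a drastic simplification of what GISTEMP actually does, but it illustrates the difference between excluding an unobserved cell and estimating it:

```python
# Hypothetical one-month anomaly field (degrees C) on a tiny grid;
# None marks a cell with no observations that month.
grid = [[0.5, 0.6, None],
        [0.4, 0.7, 0.6]]

# Exclusion (HadSST3-style): average only the observed cells.
obs = [t for row in grid for t in row if t is not None]
excluded_mean = sum(obs) / len(obs)

# Interpolation (GISTEMP-style, greatly simplified): infill each
# empty cell with the mean of its observed neighbours, then average
# over every cell, observed and estimated alike.
def neighbours(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
            v = grid[rr][cc]
            if v is not None:
                yield v

filled = [row[:] for row in grid]
for r, row in enumerate(grid):
    for c, v in enumerate(row):
        if v is None:
            near = list(neighbours(r, c))
            filled[r][c] = sum(near) / len(near)

n_cells = sum(len(row) for row in filled)
infilled_mean = sum(t for row in filled for t in row) / n_cells
```

The two monthly means differ even on this toy grid; the infilled product's value depends on a guess for the empty cell, which is the point at issue.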

I rely on HadSST3 because I know their results are based on observational data.

I am doubtful of GISTEMP results because many studies, including some of my own, show that interpolation produces strange and unconvincing results, which come to light when you look at changes in the local records themselves.

One disturbing thing is that GISTEMP keeps changing the past, always in the direction of adding warming. What you see today differs from what you saw yesterday, and who knows what you'll see tomorrow.

Roger Andrews does a thorough job analyzing the effects of adjustments upon Surface Air Temperature (SAT) datasets. His article on Energy Matters is Adjusting Measurements to Match the Models – Part 1: Surface Air Temperatures.

Another thing is that temperature patterns are altered so that places that show cooling trends on their own are converted to warming after processing.

Figure 3: Warming vs. cooling at 86 South American stations before and after BEST homogeneity adjustments.

This shows results from BEST, another reconstruction product, demonstrating how an entire continent can be presented differently by means of processing.

Then there is the problem that more and more places show estimates rather than observations. Years ago, Dr. McKitrick noticed that the declining number of reporting stations coincided with rising reported global mean temperatures late last century.

Below is his graph showing the correlation between Global Mean Temperature (Average T) and the number of stations included in the global database.

Source: Ross McKitrick, U of Guelph

Currently, it is clear that a great many places are estimated, and it is even the case that active station records are ignored in favor of estimates.

Source: Real Climate Science

For these reasons, I am skeptical of these land+ocean temperature reconstructions. HadSST3 deals with the ocean in a reasonable way, without inventing data.

When it comes to land surface stations, it is much more reasonable to compute the rate of change (i.e., the trend slope) for each station and average the slopes as an indication of regional, national or global temperature change.

This form of Temperature Trend Analysis deals with missing data in the most direct way: unobserved months at a station simply fall on the trendline fitted to the months that are observed at that station; no infilling, no homogenization.
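The per-station approach can be sketched in a few lines. The two station series below are hypothetical, and an ordinary least-squares fit stands in for whatever trend estimator a given study uses; the point is that a missing month contributes nothing to the fit rather than being infilled:

```python
# Hypothetical monthly anomalies (degrees C) for two stations;
# None marks a missing month.
stations = {
    "A": [0.1, 0.2, None, 0.4, 0.5, 0.6],   # warming station
    "B": [0.3, 0.25, 0.2, None, 0.1, 0.05], # cooling station
}

def slope(series):
    """Ordinary least-squares slope per month, skipping missing months.

    Missing months are dropped from the fit, so they implicitly sit on
    the trendline of the observed months; nothing is infilled."""
    pts = [(i, t) for i, t in enumerate(series) if t is not None]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# One slope per station, then a plain average of slopes as the
# regional trend indicator.
slopes = {name: slope(s) for name, s in stations.items()}
regional_trend = sum(slopes.values()) / len(slopes)
```

Note that each station's trend is computed entirely from its own record, so one station's gaps never alter another station's result; only the final averaging step combines them.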

Several of my studies using this approach are on this blog under the category Temperature Trend Analysis. A guide to these resources is at Climate Compilation Part I Temperatures.

The method of analysis is demonstrated in the post Temperature Data Review Project – My Submission, which also confirms the problems noted above.

A peer-reviewed example of this way of analyzing climate temperature change is the paper Arctic temperature trends from the early nineteenth century to the present, by W. A. van Wijngaarden, Theoretical & Applied Climatology (2015), here.

Is the globe warming or cooling?

Despite the difficulties depicting temperature changes noted above, we do observe periods of warming and cooling at different times and places. Interpreting those fluctuations is a matter of context.

For example, consider GISTEMP's estimated global warming in the context of the American experience of temperature changes during a typical year.

Read more at Science Matters
