
Page 1: Part 1 Baseline Comparisons

Original Graph Set

This is the first set of studies we made after re-rating the stations using the updated Leroy (2010) system. We compare compliant (Class 1&2) stations with non-compliant (Class 3, 4, 5) stations, using different classes as baseline comparisons. We also ran the data without baselines, both for comparison and to ensure that the results were not a methodological artifact.

Page 2

Part 1: Baseline Comparisons

How do stations compare with nearby stations of a different class?

This uses a different approach than our (more recent) “nine regions” method, but we wanted to be certain that, however we addressed the problem, the basic results would be the same.

Page 3

Methodology

• We cut the US up into 26 grid boxes.

• We then compare well sited and poorly sited stations within each grid box, using each Class as a separate touchstone.

• This gives us four baselines of comparison: Class 1&2, Class 3, Class 4, and Class 5.

Note: We combine Class 1 and Class 2 and treat them as a single class, both to ensure a robust sample and because neither Class 1 nor Class 2 stations are temperature-biased according to Leroy (2010); they are therefore equivalent for our purposes.

Page 4

[Figure: map of the contiguous US divided into 26 grid boxes, labeled A1, B2, …, J1.]

Grid Boxes
1.) Comparisons are made within each box to establish a baseline.
2.) All boxes are averaged for gridded results.
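The two-step procedure above can be sketched as follows. This is a minimal illustration, not the project's actual code: the station records, trend values, and the helper name `gridded_difference` are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical per-station records: (grid_box, leroy_class, trend_degC_per_decade).
# The trend numbers are invented for illustration, not actual station data.
stations = [
    ("A1", 1, 0.22), ("A1", 4, 0.31), ("A1", 5, 0.35),
    ("B2", 2, 0.18), ("B2", 3, 0.27), ("B2", 4, 0.30),
]

def mean(xs):
    return sum(xs) / len(xs)

def gridded_difference(stations, baseline_class=4):
    """Step 1: within each grid box, compare the mean trend of compliant
    (Class 1&2) stations against stations of the baseline class.
    Step 2: average the per-box differences across all boxes."""
    boxes = defaultdict(lambda: {"compliant": [], "baseline": []})
    for box, cls, trend in stations:
        if cls in (1, 2):
            boxes[box]["compliant"].append(trend)
        elif cls == baseline_class:
            boxes[box]["baseline"].append(trend)
    diffs = []
    for groups in boxes.values():
        # A box contributes only if it contains both groups.
        if groups["compliant"] and groups["baseline"]:
            diffs.append(mean(groups["compliant"]) - mean(groups["baseline"]))
    return mean(diffs)
```

Comparing within boxes first, then averaging, keeps a well sited station in one region from being compared against a poorly sited station far away.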

Page 5

Class 4 stations are the most numerous (36%), so this comparison is the most robust. Compliant (Class 1&2) stations show a trend 0.095°C/decade lower than non-compliant (Class 3, 4, 5) stations.

Page 6

Class 3 stations are the second most numerous (33%). Compliant (Class 1&2) stations show a trend 0.102°C/decade lower than non-compliant (Class 3, 4, 5) stations.

Page 7

Class 1&2 stations comprise only 20% of the total number. Compliant (Class 1&2) stations show a trend 0.082°C/decade lower than non-compliant (Class 3, 4, 5) stations.

Page 8

Class 5 stations comprise only 12% of the total number, so the results cannot be considered robust; yet the same pattern emerges. Compliant (Class 1&2) stations show a trend 0.076°C/decade lower than non-compliant (Class 3, 4, 5) stations.

Page 9

Part 2: Equipment

This is a look at how the different equipment affects the data.

• CRS: Cotton Region Shelters (a/k/a “Stevenson Screens”)

• MMTS (Maximum/Minimum Temperature System)

• ASOS (Automated Surface Observing Systems)

Page 10

This compares different equipment. Note that the modern MMTS shows a significantly lower trend than the obsolete CRS and the notoriously unreliable ASOS. Yet rather than adjusting CRS and ASOS trends downward to match MMTS, MMTS trends are adjusted upward to conform with the older, less reliable equipment.

Page 11

CRS equipment shows a higher overall trend than MMTS, and somewhat less difference between compliant and non-compliant stations (0.064). Part of this is due to poor distribution of stations and is addressed by gridding (see next slide).

Page 12

After gridding and baselining to Class 4, CRS equipment shows a difference between compliant and non-compliant stations of 0.073.

Page 13

Modern MMTS equipment shows a much larger difference between compliant and non-compliant stations (0.173).

(ASOS comparisons cannot be made, as there are too few ASOS stations for a robust internal comparison. They tend to be better sited, almost exclusively at airports, yet their trends are higher owing to an equipment failure issue (the HO-83) and other factors unique to airport settings.)

Page 14

After gridding and baselining to Class 4, MMTS equipment shows a slightly smaller, yet still very large, difference between compliant and non-compliant stations (0.164).

Page 15

Part 3: Urban vs. Rural

This section confirms that urbanization increases not only the readings but also the trends.

In addition, urbanization is found to dampen, though not eliminate, the differences between compliant (Class 1&2) and non-compliant (Class 3, 4, 5) stations. Rural stations show the greatest disparity.

This is significant because 10% of the rated sites are urban and 25% are semi-urban, a far greater proportion than these areas' share of the actual CONUS surface area. To that extent, the trends are exaggerated.
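As a rough illustration of why this over-representation matters, one can reweight per-category trends by land-area shares instead of site shares. Only the 10% urban / 25% semi-urban site shares come from the text; every other number below is a hypothetical placeholder.

```python
# Hypothetical per-category trends (degC/decade); not actual project results.
trends = {"urban": 0.40, "semi_urban": 0.30, "rural": 0.20}

# Site shares: the 10% urban and 25% semi-urban figures are from the text.
site_share = {"urban": 0.10, "semi_urban": 0.25, "rural": 0.65}

# Assumed (illustrative) shares of CONUS surface area.
area_share = {"urban": 0.03, "semi_urban": 0.10, "rural": 0.87}

def weighted_average(trends, weights):
    """Average the category trends under a given weighting scheme."""
    return sum(trends[k] * weights[k] for k in trends)

site_weighted = weighted_average(trends, site_share)  # over-represents urban
area_weighted = weighted_average(trends, area_share)  # weights by land area
# With these numbers the site-weighted average exceeds the area-weighted
# one, which is the sense in which the network trend is "exaggerated".
```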

Page 16

We now turn our attention to urban vs. rural trends. Urban trends are much higher than rural (by 0.099), with semi-urban trends squarely in between, at 0.056 higher than rural.

Page 17

The difference between compliant and non-compliant rural stations is much greater (0.095) than for urban stations.

Page 18

The difference between compliant and non-compliant semi-urban stations is also much greater (0.114) than for urban stations.

Page 19

The difference between compliant and non-compliant urban stations is much smaller (0.037), as urban waste heat overwhelms nominally compliant and non-compliant stations alike. Class 4 stations in urban areas show the same tendencies as Class 5 stations in rural areas.

Page 20

This chart demonstrates the large effect of urban areas on (otherwise) compliant stations. It also tells us how NOAA deals with this by way of adjustment: namely, non-urban trends appear to be adjusted upward to match urban trends, rather than urban trends being adjusted downward to match rural trends.

Page 21

Non-compliant (Class 3,4,5) stations show somewhat less urban-rural difference than the compliant (Class 1,2) stations.

Page 22

Part 4: Gridded, but with no Baseline

We now examine the data without any baseline.

Page 23

Without a baseline, the data shows a trend 0.077 lower for compliant (Class 1&2) stations than for non-compliant (Class 3, 4, 5) stations. This is consistent with our overall findings.
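The trends compared throughout are expressed in °C per decade. A minimal sketch of computing such a trend, assuming an ordinary least-squares fit (the fitting method and the temperature series below are assumptions for illustration, not the project's actual procedure or data):

```python
def decadal_trend(years, temps):
    """Ordinary least-squares slope of a temperature series,
    scaled from degC/year to degC/decade."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
    slope /= sum((x - mean_x) ** 2 for x in years)
    return slope * 10.0  # per-year slope -> per-decade

# Invented series warming at exactly 0.02 degC/year, i.e. 0.2 degC/decade.
years = list(range(1979, 2009))
temps = [0.02 * (y - 1979) for y in years]
```

A difference such as the 0.077 above is then just the gap between two such fitted slopes, one for each station group.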

Page 24

And this shows how NOAA adjusts for the differences: not by adjusting the non-compliant stations downward to match the compliant stations, but by adjusting the compliant stations upward to match those that are out of compliance.

Page 25

Finally, we showcase the best equipment, with urban and semi-urban stations excluded (a handful of rural airports are included, however). This data is not gridded or baselined, but is a simple national average. The warming effects of poor siting are obvious, as are the effects of the NOAA adjustment procedure. Compliant trends are fully 0.190 higher after NOAA adjustment.

Page 26

Baseline Comparisons

It is, of course, important to provide a simple, ungridded nationwide average of all well sited stations and poorly sited stations, and indeed we provide those figures.

But a nationwide average can be skewed by poor station distribution, if well (or poorly) sited stations are concentrated in some areas but not in others. Furthermore, it is not very revealing to compare a well sited station in Northern Virginia with a poorly sited station in Arizona. One would want to compare well sited (Class 1&2) stations with nearby poorly sited (Class 3, 4, 5) stations, and vice versa.

Therefore, gridding and baselining are desirable.