For example, people won't remember all the times you got the weather exactly right, but they will remember the few times you got it wrong. People are horrible "natural statisticians" in that regard.
Spot on. I completely forgot about selective memory. <--- itself selective memory lol
Another good example of selective interpretation occurs when I teach basic weather classes at the local community college. At lunch break I bring the students outside into the parking lot and have them estimate the speed of the wind they feel on their hands and faces. After collecting and recording their estimates, I objectively measure the wind speed with a handheld anemometer, and almost always the human estimates are double or triple the objective measurement.
To wit: What we think is happening is often different from what is actually happening.
Also, the GIGO principle (Garbage In, Garbage Out) still applies when it comes to hurricane prediction, for example. From what I understand, the hurricane models are pretty good, but their accuracy is limited by the paucity of measurement data that can be gathered over the affected area. Do I have that right?
Yes. The numerical models that simulate and forecast the real atmosphere (such as hurricane-specific models) are strongly dependent upon the quantity and quality of the weather observations used for initialization. Small errors that creep into any model's initialization can seriously degrade the quality of the objective numerical forecast after only a few iterations of the forecast equations.
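To make that concrete, here is a minimal sketch of how a tiny initialization error grows, using the classic Lorenz-63 toy system as a stand-in for a real forecast model (an assumption for illustration only; actual hurricane models are vastly more complex, but the principle of sensitive dependence on initial conditions is the same):

```python
# Toy illustration of initial-condition sensitivity (NOT a real NWP model).
# Two runs start from initializations that differ by a tiny "observation error"
# and are stepped forward; their separation grows with forecast length.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

truth = np.array([1.0, 1.0, 1.0])                 # "perfect" initialization
perturbed = truth + np.array([1e-6, 0.0, 0.0])    # tiny initialization error

for step in range(1, 2001):
    truth = lorenz_step(truth)
    perturbed = lorenz_step(perturbed)
    if step % 500 == 0:
        separation = np.linalg.norm(truth - perturbed)
        print(f"step {step:4d}: separation = {separation:.6f}")
```

Run it and the separation climbs from one part in a million to roughly the size of the whole attractor, which is the toy-model analogue of a forecast losing skill after a few days.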
To try to get around that, modelers now run what are called "ensemble" numerical forecasts. Small errors are deliberately introduced into the model initialization and the model is run a number of times to see whether the effect of the errors compounds or decays. The several model runs are then plotted against each other, which indicates the degree of reliability of the model's solutions. Sometimes the run solutions are closely grouped despite the deliberate introduction of errors, and sometimes they are not. Closely grouped model solutions are likely to be more reliable and are given more weight in the forecast process. That's what I meant about the "confidence factor."
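Here is the same idea in miniature, again using the Lorenz-63 toy system rather than any real forecast model (member count, perturbation size, and lead times are made-up values for illustration). Each ensemble "member" starts from the same analysis plus a small random perturbation, and the spread among members at a given lead time plays the role of the confidence factor:

```python
# Toy ensemble forecast: perturb the initialization, run many members,
# and measure how tightly the solutions are grouped at each lead time.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run_forecast(initial, steps):
    state = initial.copy()
    for _ in range(steps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(0)
analysis = np.array([1.0, 1.0, 1.0])      # best-guess initialization
n_members = 20

for lead_steps in (200, 800, 1600):       # short, medium, long lead times
    members = [
        run_forecast(analysis + rng.normal(scale=1e-3, size=3), lead_steps)
        for _ in range(n_members)
    ]
    spread = np.std(members, axis=0).mean()   # small spread = closely grouped
    print(f"lead {lead_steps:4d} steps: ensemble spread = {spread:.4f}")
```

Small spread at a given lead time corresponds to closely grouped solutions and higher forecast confidence; large spread means the members have diverged and the forecast deserves less weight.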
Example of ensemble 500 millibar geopotential height forecasts:
In fact (and forgive me for making a hasty generalization), it seems that weathermen are among the most vocal skeptics on the subject of GW. Do you know that to be true, or am I just imagining things?
Climate science and day-to-day weather forecasting are quite different sciences from each other. Even some experienced meteorologists lose sight of that.
I suspect your observation about weather forecaster skepticism may be true. Here's why: Meteorologists work daily with the numerical models I described above, and they soon become suspicious of the models' ability to objectively forecast weather more than 4-5 days in advance. And for good reason. The problem is that those same meteorologists tend to project their model skepticism onto climate forecast models, even though climate models are quite different from the day-to-day numerical weather simulations used for forecasting. Their reasoning is: if operational numerical models have trouble forecasting weather more than 5 days in advance, how can anyone hope to forecast conditions decades in advance? The error, of course, is that climate forecast models use vastly different physics packages than daily forecast models do.
Hope that makes sense.