There have been a few negative comments recently about the quality of BoM weather forecasts. To some extent these are answered by an entry on the BoM website. There is also at least one related page on long-term forecasts. These show that there are - unsurprisingly - occasions where the forecasts are incorrect. The short term commentary also makes it explicit that the analysis covers the overall outcome for about 500 sites (and any one station - perhaps one of particular interest to the complainant - could have an unusually high share of the errors for some reason).
So I thought I would do some quantitative analysis of BoM performance, using forecasts for Mallacoota as the test case. In the interest of transparency I should note that I start from a position that the BoM does a pretty good job overall: they will stuff up occasionally but nobody is perfect. (I recall a senior forecaster being asked in a public forum how often he got things wrong. His response was that he hoped he was now sufficiently experienced to make only one really egregious error per year.)
There are a number of issues to look at in setting up such an analysis.
- When to do the analysis? If the weather is 'normal' - a stable pattern with close to long term averages across a wide area - forecasting is relatively easy. Visually-challenged Frederic could do it. What is needed is a period with possibly unusual statistics (eg a forecast heat wave or heavy rainfall) so one can see whether the forecast is met.
- How long ahead does one look? I recall a comment some years ago by a meteorologist that anyone forecasting more than 6 days ahead was living a fantasy. I suspect that with advances in technology and knowledge that might be a few days longer now, and macro-forecasts about climate seem to stretch for a year at least.
- What variables to look at? Temperatures - high, low, mean or range? Rainfall - amount, rate or duration? Wind - run, gust or direction?
- The actual source of the 'forecast'. Is it part of an official forecast by BoM or a media release by BoM? Is it from mainstream media claiming to quote BoM? Is it from an international organisation analysing data? (For example, at one time official forecasters in Norway, using European satellites, seemed to be doing extremely well. That state of affairs didn't last.) Or is it apparently just an individual expressing their opinion? (Note that some posts under individual bylines may be by very good meteorologists expressing an opinion without the imprimatur of BoM or any other organisation.)
My initial (hopefully not "cunning") plan is to undertake some short term analyses of a week (or thereabouts) of forecasts around what looks like a period of 'interesting' weather. It is of course important to note that any one event is basically irrelevant: the key question is how the forecasts stack up over a fair number of events.
I may also accompany those short term events with some commentary about longer range forecasts.
It is likely that this post will grow as analyses are undertaken, so possibly bookmark the post.
Rain event of 6 April and nearby dates
After a very dry few weeks the Bureau forecast showed 10-45mm of rain for 6 April 2024. I think they first made this forecast on 31 March but I didn't start my analysis until 3 April. I don't think that greatly affects the outcome, but I will try to pick my target a bit further in advance in future.
As a general point, it is crucial to interpret the numbers correctly. By "10-45mm" they mean that there is a 75% chance of 10mm or more and a 25% chance of 45mm or more. I don't know the exact distribution the BoM uses but on that day the chances of 0mm or 100mm would both be very low.
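To make that concrete, here is a minimal sketch (Python) of one way to turn those two numbers into a full picture. The gamma distribution is an assumption on my part - a common choice for rainfall amounts, but I don't know what the BoM actually uses - and the point is simply that both tails come out small, as claimed.

```python
# A minimal sketch, assuming a gamma distribution (a common choice for
# rainfall, but not necessarily what BoM uses). The forecast "10-45mm"
# means 10mm is the 25th percentile and 45mm the 75th percentile.
from scipy import stats, optimize

q25, q75 = 10.0, 45.0  # mm, from the forecast

def quantile_ratio_gap(shape):
    # For a gamma distribution the ratio of two quantiles depends only
    # on the shape parameter, so we can solve for the shape first.
    return stats.gamma.ppf(0.75, shape) / stats.gamma.ppf(0.25, shape) - q75 / q25

shape = optimize.brentq(quantile_ratio_gap, 0.1, 100.0)
scale = q25 / stats.gamma.ppf(0.25, shape)
dist = stats.gamma(shape, scale=scale)

print(f"P(less than 1mm)   = {dist.cdf(1.0):.3f}")
print(f"P(more than 100mm) = {dist.sf(100.0):.3f}")
```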
We now get to a tricky question: how to analyse the data when we have 2 estimates per day for a rolling 7 day period. From a user's point of view the first forecast given might be considered the most important, as it enables longer range planning. For example a forecast of 10-45mm of rain a week in advance might persuade someone not to arrange a long bushwalk or a BBQ on that day. If however they were very flexible in timing they might decide to wait until the 5th and see the forecast then (in this case it had risen to 10-60mm of rain).
The initial approach I have used is to compare the actual fall for the day - recorded at the BoM site near the airport, in both cases for the calendar day - with the latest forecast for the day in question. That is the forecast made at around 1600 on the previous day. The outcome of this comparison is shown in this chart.
In my opinion the BoM forecasts here have performed very well. On the "big day" the actual fall is midway between the 75% and 25% probabilities. On no day was the actual fall above the 25% probability forecast and on no day when rain was recorded was the 25% probability 0mm.
I have also added in the initial forecasts (from 3 April, when I started recording these data). The initial forecast was lower for the 6th but higher for the later days. However the changes made were marginal and I still consider the BoM did a good job.
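For anyone wanting to run the same tally, a sketch of the logic (Python; the rows below are illustrative placeholders, not my recorded data):

```python
# A sketch of the verification tally. Each row holds the latest
# (previous-afternoon) forecast band and the observed calendar-day fall.
# The numbers are placeholders, not my recorded observations.
days = [
    # (date, lower_mm: 75% chance of at least, upper_mm: 25% chance, actual_mm)
    ("2024-04-05",  1,  8,  4.2),
    ("2024-04-06", 10, 45, 27.0),
    ("2024-04-07",  2, 15,  6.4),
]

within_band = sum(1 for _, lo, hi, actual in days if lo <= actual <= hi)
above_upper = sum(1 for _, lo, hi, actual in days if actual > hi)
missed_rain = sum(1 for _, lo, hi, actual in days if actual > 0 and hi == 0)

print(f"{within_band} of {len(days)} days fell within the 75%-25% band")
print(f"{above_upper} days exceeded the 25% bound; {missed_rain} rain days had a 0mm upper forecast")
```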
Rainfall 21 April to 1 May
I commented above that I would choose periods of interesting weather. For this period the interest came from an article on the ABC website with a somewhat hysterical headline about the East Coast going to get heavy rainfall. On reading the full article it appears the potential problem is, as is often the case, more with the ABC's headline writers than with their analyst (or the BoM meteorologists). It does appear - especially looking out the window at 0700hrs on 21 April and seeing 8/8ths cloud cover - that there is a fair chance the BoM may have been somewhat conservative in their forecasts for the period. So that is interesting.
Long range forecasting
The BoM website has two elements relevant to this topic. The one which I suspect gets the most media attention is Climate Drivers, since that covers El Nino/La Nina, beloved of the gutter press. However there is also a Long range forecast page which is possibly more relevant to a consideration of the performance of BoM forecasting. The information available in the two areas is rather different, as discussed below.
Climate Drivers
Background
This is a huge topic which encapsulates - or at least attempts to cover - the inter-relationships between the atmosphere and the oceans. It is international in scope, so if a problem occurs it may be unfair to blame it on BoM. Skip the following italicised stuff if you wish.
By way of background, in 1990 I attended a meeting of the US Statistical Society, covering Environmental Statistics. One of the speakers was Andrew Solow from the Woods Hole Oceanographic Institute, who commented that analysis of models of the atmospheric elements of climate was relatively easy with modern (now 30 years ago!!) computers. He went on to say that when feedback between oceans and the atmosphere was introduced to models, even the Cray supercomputers slowed right down due to the complexity of the calculations. Some history from the (US) National Weather Service might be helpful: It wasn’t until the late 1960s that Jacob Bjerknes and others realized that the changes in the ocean and the atmosphere were connected and the hybrid term “ENSO” was born. It wasn’t until the 1980s or later that the terms La Niña and Neutral gained prominence.
The name El Nino is much older, borrowed from Peruvian fishermen who noted changes in the weather pattern around Christmas - the time of the boy child (in Spanish "El Nino"). La Nina (the girl child) is taken as the opposite: I wonder why they didn't use "La Vieja" - the old woman - which would give a difference in two dimensions.
Another interesting event at the Conference was a participant questioning a presenter about changes in technology. He claimed that a simple change in the batch of paint (same producer, same specs) had caused a significant difference in the data from a range of mercury thermometers. That being the case, how can you compare historic data from a screened mercury thermometer with digital data from a satellite? I can't remember the presenter's answer, but I was interested to see the copious notes that Senator Al Gore, sitting next to me, wrote about this exchange. I suspect someone from the US Weather Service was going to have a bad day when Al got back to Washington.
It is important to note that El Nino is basically measured in the Pacific Ocean, which is what (most directly) affects the USA. For Australia the Indian and Southern Oceans are also important, so the task is even more difficult. These are assessed through the Indian Ocean Dipole (IOD) and the Southern Annular Mode (SAM). Of course the IOD and the SAM will link to ENSO: there are no - physical - walls in the sky!
Strategy for Analysis
What does all of this mean for Australia? Typically, for SE Australia, which is what we are concerned about, El Nino means dry weather and La Nina means wet weather. However I suspect that either or both of those can be modified by the IOD or SAM - I have a memory that we were supposed to have an El Nino summer in 2023-24, but SAM went feral and boosted the rainfall.
My plan (and this is a cunning plan, thank you Baldrick) is to go back to the start of 2023 and examine the ratings given from then on for ENSO, IOD and SAM, and compare them with the rainfall we received in the following period (probably assessing rainfall as "difference to median").
That is going to take a while.
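When I do get to it, the mechanics will be something like the sketch below (Python). The file names, columns and phase labels are hypothetical placeholders for whatever I end up recording from the BoM pages.

```python
# A sketch of the planned driver-vs-rainfall comparison. The CSV names,
# columns and phase labels are hypothetical placeholders.
import pandas as pd

drivers = pd.read_csv("driver_ratings.csv")  # columns: month, enso, iod, sam
rain = pd.read_csv("mallacoota_rain.csv")    # columns: month, rainfall_mm, median_mm

df = drivers.merge(rain, on="month")
df["diff_to_median"] = df["rainfall_mm"] - df["median_mm"]

# Average departure from median rainfall under each ENSO rating,
# with a count so sparse categories are obvious.
print(df.groupby("enso")["diff_to_median"].agg(["count", "mean"]))
```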
Long Range Forecasts
I shall for the time being keep to rainfall forecasts, but may include material on temperatures later.
For rainfall, going to that tab in the long range forecast section gives this page.
There is a great deal of choice in what to analyse in that page, with 8 variables and 4 observations for 4 forecast periods from which to choose. (In some cases there are further choices within the variables.)
Before getting to my monitoring I clicked on the variable "past accuracy" since the overall thrust of this post is looking at the accuracy of BoM forecasts. That generated this screen:
I am assuming that the chart shows the % of forecasts that are accurate (which is - from the BoM post on accuracy of current forecasts - the actual rainfall being between the 25% and 50% probabilities). Clear on that?
As an aside, I have questioned BoM why they use 50% as their upper bound here while the daily forecasts use an upper bound of 75%. Mallacoota falls within the 55-65% range (but the next higher range is not far to our North) for the 1st and 3rd weeks but drops for the 2nd and 4th weeks. (I am interpreting the charts to refer to the range of dates stated in the image title rather than a more generic 1, 2, ... weeks from the current date.)
My core interest for my monthly rainfall reports for Mallacoota is how the current rainfall compares with the median, so that is the principal variable I will choose. For times, the graphic is a little strange as it is possible to select by the headers or by clicking on the images. I have decided to select the 1 month ranges, which gives me maps for May and June. By zooming the map to focus on the SE Corner I can see the range for Mallacoota quite clearly.
For the June image I have cropped to only show the local(ish) area.
This is not particularly good news for the ski industry.
Taking the statistic as the midpoint of the range given by BoM, my two indicators for the assessment are the chances of above median rainfall for May (42.5%) and June (57.5%). I have also looked at the "probability of at least ..." chart and set the probability to 50%. That suggests we have a 50% probability of at least 50-100mm in May and June (and, judging by the proximity of the border to the next range below, probably at the bottom end of that range).
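Once the months are complete, scoring these indicators is simple arithmetic. A sketch (Python) using a Brier score - my choice of metric, not anything BoM publishes - with placeholder outcomes, since neither month has finished at the time of writing:

```python
# A sketch of scoring the monthly indicators: the midpoint of the BoM
# probability range versus the observed above/below-median outcome.
# The outcomes below are placeholders, not real results.
indicators = {
    "May 2024":  (40 + 45) / 2 / 100,  # 42.5% as in the post (assuming a 40-45% band)
    "June 2024": (55 + 60) / 2 / 100,  # 57.5% (assuming a 55-60% band)
}
outcomes = {"May 2024": 0, "June 2024": 1}  # 1 = above median (placeholders)

for month, p in indicators.items():
    o = outcomes[month]
    brier = (p - o) ** 2  # 0 is perfect; a flat 50% forecast always scores 0.25
    print(f"{month}: forecast {p:.1%}, outcome {o}, Brier score {brier:.3f}")
```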