I've posted this before. If you can't see the problem, that's OK.

Providing a range-less 20% chance of zero must be balanced by an 80% chance of non-zero, which means it's going to rain somewhere, with a certainty of 80%. Feel free to try to math the rain out of that logic.

At that specific location, there's a 20% chance it will rain (Rain = 0.2mm or more). 80% chance it will be dry.

With a reminder of what the possible rainfall amounts on Bureau-produced forecasts mean (I know I'm rehashing a bit here, but others may not know or may not have noticed in the past):

At a specific location (e.g. Happytown), the 'Possible Rainfall':

1 to 3mm = 50% chance of 1mm or more, 25% chance of 3mm or more

20 to 25mm = 50% chance of 20mm or more, 25% chance of 25mm or more

2 to 15mm = 50% chance of 2mm or more, 25% chance of 15mm or more

5 to 35mm = 50% chance of 5mm or more, 25% chance of 35mm or more
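The pattern in the examples above is mechanical, so it can be sketched as a tiny helper. This is just an illustration of the stated convention (the function name and parsing are my own invention, not anything the Bureau publishes):

```python
# Hypothetical helper illustrating the stated convention: in a
# "Possible Rainfall" range "A to Bmm", A is the amount with a 50%
# chance of being reached or exceeded, and B the amount with a 25%
# chance. Parsing assumes the "A to Bmm" format shown above.
def interpret_possible_rainfall(range_text):
    """Turn e.g. '1 to 3mm' into the two exceedance statements."""
    low, high = range_text.replace("mm", "").split(" to ")
    return (f"50% chance of {low}mm or more, "
            f"25% chance of {high}mm or more")

print(interpret_possible_rainfall("1 to 3mm"))
# 50% chance of 1mm or more, 25% chance of 3mm or more
```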

On storm days there is a tendency to see possible rainfall amounts having larger ranges (like the last two examples), reflecting the localised nature of convective precipitation.

If just "0mm" is used in the rainfall range, it indicates there is less than a 25% chance of 0.2mm or more; otherwise 0-0.2mm would be used (which would indicate a 25% chance of 0.2mm or more), or some other range, e.g. 0-0.4mm or 0-1mm. In other words, the chance of rain is very low, and that location should strongly expect no rain and (in this case) mostly sunny conditions. With that in mind, and because we can't have "negative rainfall", if there is a zero present in the Possible Rainfall section:

0mm = A 50% chance *or less* of getting more than 0mm, *less than* 25% chance of 0.2mm or more (otherwise 0.2mm would be shown like in the next example)

0 to 0.2mm = A 50% chance *or less* of getting more than 0mm, 25% chance of 0.2mm or more

0 to 5mm = A 50% chance *or less* of getting more than 0mm, 25% chance of 5mm or more

If there is higher than a 50% chance of getting more than 0mm, because it exceeds the 50% probability threshold, then you may instead expect to see something like:

0.2 to 1mm = 50% chance of 0.2mm or more, 25% chance of 1mm or more

0.4 to 4mm = 50% chance of 0.4mm or more, 25% chance of 4mm or more
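One way to picture where such a range could come from (this is a sketch of the 50%/25% exceedance idea only, not the Bureau's actual method) is to read the thresholds off a set of equally likely rainfall scenarios, e.g. ensemble members for one location:

```python
# A sketch (not the Bureau's actual algorithm) of reading the
# 50% and 25% exceedance amounts off equally likely scenarios.
def possible_rainfall_range(amounts_mm):
    """Lower bound: amount exceeded in ~50% of scenarios (median).
    Upper bound: amount exceeded in ~25% of scenarios (75th pct).
    Crude index-based percentiles, fine for a sketch."""
    s = sorted(amounts_mm)
    n = len(s)
    return s[n // 2], s[(3 * n) // 4]

members = [0, 0, 0.4, 1, 2, 2, 3, 5, 8, 20]  # made-up ensemble (mm)
lo, hi = possible_rainfall_range(members)
print(f"{lo} to {hi}mm")  # 2 to 5mm
```

Note how a few heavy members stretch the upper bound a long way above the median, which matches the wider ranges seen on storm days.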

I should note Weatherzone's Opticast seems to use a different approach to determining the expected rain amount, with less flexibility in its rainfall ranges, so don't apply the Bureau's 50%/25% probabilities to its possible rainfall (on WZ there is the option to change the source on most town forecasts, and hovering over the info with your mouse will identify its source). Some places on WZ forecasts also get their rain amounts from BOM OCF, which also seems to use a different approach.

However, the pic above appears to show the BOM rainfall amounts you get from the normal Bureau-produced forecasts (which have guidelines available on what the thresholds are), just using WZ graphics instead, so the probability ranges are applicable in this case.

~~~~

Probability forecasts in general:

Enhancing Weather Information with Probability Forecasts -

https://www.ametsoc.org/ams/index.cfm/ab...lity-forecasts/

A probability forecast includes a numerical expression of uncertainty about the quantity or event being forecast. Ideally, all elements (temperature, wind, precipitation, etc.) of a weather forecast would include information that accurately quantifies the inherent uncertainty. Surveys have consistently indicated that users desire information about uncertainty or confidence of weather forecasts. The widespread dissemination and effective communication of forecast uncertainty information is likely to yield substantial economic and social benefits, because users can make decisions that explicitly account for this uncertainty.

Uncertainty can be expressed in ways other than probabilistic terms, such as odds or frequencies. But studies by social scientists have indicated repeatedly that expressing uncertainty in qualitative terms, such as “likely,” creates unnecessary ambiguity, with one user interpreting the same term as reflecting a higher probability than would another user.

One highly desirable property of any probability forecast is that it be “reliable” (or “well-calibrated”). For instance, over the long term, precipitation should occur on approximately 20% of the occasions for which the forecast probability is 20%.

The definition of the event being forecast must be clearly understood in order for probability forecasts to be communicated effectively and acted upon appropriately. For example, for a forecast of 30% probability of precipitation for Boston tomorrow, a person may be unsure as to whether that means: (a) it will rain over 30% of the Boston area tomorrow; (b) it will rain for 30% of the time tomorrow somewhere in Boston; (c) there is a 30% probability it will rain somewhere in Boston tomorrow; or (d) at any given location in the Boston area, there is a 30% probability that it will rain tomorrow. The definition of a precipitation event used by the NWS is measurable precipitation within the stated time period at any point in the area for which the forecast is valid (i.e., (d) is the correct answer).

Probability -

http://research.metoffice.gov.uk/research/nwp/ensemble/probability.html

Use of probabilities can sometimes cause some confusion, and many people are more familiar with Odds which are commonly used for betting. The two are very closely related. For example, a probability of 10% means 10 times out of 100, or a 1 in 10 chance. Thus for every 10 occasions the event will not occur on 9 occasions and will only occur once. The Odds are therefore 9:1 against. Working in the opposite direction, if the Odds are 4:1 against an event occurring, then this means that it will not happen 4 times as often as it happens. So it will occur on 1 occasion in 5. Turning 1 in 5 into a percentage gives 20%.
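The odds/probability relationship described there reduces to two one-line conversions, sketched below (the function names are my own):

```python
# Converting between "odds against" and probability, as described
# in the Met Office excerpt above.
def odds_against_to_probability(against, for_):
    """Odds of 9:1 against -> happens 1 time in 10 -> 0.1."""
    return for_ / (against + for_)

def probability_to_odds_against(p):
    """p = 0.2 -> 1 occasion in 5 -> odds of 4:1 against."""
    return (1 - p) / p

print(odds_against_to_probability(9, 1))   # 0.1
print(probability_to_odds_against(0.2))    # 4.0
```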

Is the forecast right or wrong?:

Probabilistic Forecasting -

http://www.nssl.noaa.gov/users/brooks/public_html/prob/Probability.html
http://research.metoffice.gov.uk/research/nwp/ensemble/probability.html

It is important to remember that the reason for issuing probability forecasts is that it is often impossible to give a categorical yes/no forecast with complete accuracy. A probability forecast instead describes how likely an event is on a particular occasion. Thus it is reasonable to ask whether a probability forecast can be wrong. For example, if a probability is given as 10% and the event occurs, then is this right, or wrong? One might think that it is wrong because the probability was low but the event did occur, but this is the wrong interpretation. Of all the times that a 10% probability is issued, the event should happen 1 time in 10. Thus we can never say whether a single probability forecast is right or wrong. We can only measure how good our probability forecasts are by looking at a large set of forecasts. Then we can group all the 10% forecasts together and check that the event occurred on 1 in 10 of these occasions; similarly for the 70% forecasts, it should occur on 7 in 10, etc. Results from verifying a large number of forecasts can be plotted in a Reliability Diagram.
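The grouping-and-checking procedure described there (the basis of a Reliability Diagram) is easy to sketch. This is a minimal illustration, with a made-up toy log:

```python
# A minimal sketch of reliability verification: group many
# (issued probability, did the event occur?) pairs by the issued
# probability and compare observed frequency against it.
from collections import defaultdict

def reliability_table(forecasts):
    """forecasts: iterable of (issued_probability, event_occurred)."""
    buckets = defaultdict(lambda: [0, 0])   # prob -> [hits, total]
    for prob, occurred in forecasts:
        buckets[prob][0] += int(occurred)
        buckets[prob][1] += 1
    return {p: hits / total for p, (hits, total) in buckets.items()}

# Toy log (far too small to judge a real forecaster, of course).
log = [(0.1, False), (0.1, False), (0.1, True), (0.1, False),
       (0.7, True), (0.7, True), (0.7, False), (0.7, True)]
print(reliability_table(log))   # {0.1: 0.25, 0.7: 0.75}
```

For a well-calibrated forecaster, the observed frequencies would converge towards the issued probabilities as the log grows.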

There is one rather trivial exception to the general rule that probability forecasts cannot be wrong. The only time a single forecast can be wrong is if the issued probability is either 0% or 100%, which is equivalent to going back to a categorical forecast, and getting it wrong!

Deterministic vs Probabilistic Forecasting -

http://tornado.sfsu.edu/geosciences/classes/m698/Determinism/determinism.html

Probabilistic forecasting is a technique for weather forecasting that relies on different methods to establish a probability for an event's occurrence or magnitude. This differs substantially from giving definite information on the occurrence or magnitude (or not) of the same event, the technique used in deterministic forecasting. Both techniques try to predict events, but information on the uncertainty of the prediction is only present in the probabilistic forecast.

"...Think about how you do a forecast. The internal conversation you carry on with yourself as you look at weather maps is virtually always involves probabilistic concepts. It is quite natural to have uncertainty about what's going to happen. And uncertainty compounds itself. You find yourself saying things like "If that front moves here by such-and-such a time, and if the moisture of a certain value comes to be near that front, then an event of a certain character is more likely than if it those conditions don't occur." This brings up the notion of conditional probability. A conditional probability is defined as the probability of one event, given that some other event has occurred. We might think of the probability of measureable rain (the standard PoP), given that the surface dewpoint reaches 55F, or whatever...."
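The conditional probability mentioned in that quote can be estimated from a log of past days. A small sketch, using an entirely hypothetical log and the quote's dewpoint example:

```python
# Estimating a conditional probability, P(rain | dewpoint >= 55F),
# from a hypothetical log of (dewpoint_f, rained) days.
def conditional_rain_probability(days, dewpoint_threshold=55):
    """Fraction of days meeting the condition on which it rained."""
    qualifying = [rained for dp, rained in days if dp >= dewpoint_threshold]
    return sum(qualifying) / len(qualifying) if qualifying else None

days = [(60, True), (58, False), (50, False), (62, True), (45, False)]
print(conditional_rain_probability(days))  # 2 rain days of 3 qualifying
```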

Perhaps it's in human nature to be uncomfortable with not being able to blame someone or something for a perceived fault (possibly a kind of coping mechanism, given emotions often get involved when it comes to weather?). With the previous kind of forecast, it was easier to take out one's frustrations about missing the rain or storms and blame the Bureau for what one may believe to be an incorrect forecast. Assessing the performance of probability forecasts requires a shift in thinking: you have to judge the errors over the long term.

At home over a long period of time (e.g. a year or more), you could take note of how many times it rains (meaning 0.2mm or more; without a gauge of your own, roughly when the concrete/pavement becomes totally wet from raindrops would be a good enough comparison) at each percentage level. I believe forecasts are done down to a 6km x 6km grid in NSW & QLD (3km x 3km in VIC & TAS). 6km x 6km is about the dimensions of a large country town (e.g. Warwick, Armidale), or, if in a city/urban area, your suburb plus at least the next ring of suburbs (maybe including the second ring if you're nearer the inner city or your urban area has small suburbs). So if the rain missed your house but affected an adjacent suburb/neighbourhood, you'd have to make a judgment call based on the radar, nearby obs (if available), or the thickness of the rain curtain as to whether 0.2mm or more fell.

What you'd hope to find over the long term is that it only rains about 10% of the time when a 10% chance of rain is given, about 50% of the time when a 50% chance is given, about 90% of the time when a 90% chance is given, and so on. It should never rain when a 0% chance of rain is forecast, and it should always rain when a 100% chance is forecast (with a reminder that rain = 0.2mm or more).

Reasons why the Bureau changed to probability forecasts (you can see some of the reasons echoed above):

http://www.smh.com.au/environment/weathe...006-10qzv2.html
http://media.bom.gov.au/social/blog/440/...ecast-language/

Interpreting the Bureau's chance of rain forecasts:

http://media.bom.gov.au/social/blog/209/right-as-rain-how-to-interpret-the-daily-rainfall-forecast/

It would seem unlikely that they will go back to the previous style of forecast. They found that the "Patchy rain", "Scattered showers and thunderstorms" and "Isolated showers and the chance of thunderstorms" style forecasts were too confusing and not being understood by most of the general public. They changed to probability forecasts with the aim of making them simpler and less confusing, and to satisfy the public's demands of 'how likely is it to rain' and 'how heavy will the rain be'.

It does seem, though, that it doesn't matter what style of forecast they use; most people won't be happy with the Bureau until forecast accuracy is basically 100%. The Bureau also gets a harder time from people over perceived incorrect forecasts because many see it as a faceless entity. People are more forgiving if a perceived forecast error comes from somewhere like Higgins, other local FB weather pages, or other knowledgeable weather-folk elsewhere on the net, because often through interaction you build a personal connection with them (aside from the humanisation, the bans and the threat of a ban in some of these places probably play a part in reducing the criticism too).