RWC – Round 1 Review

If you recall, last week we published the predictions for round 1 of the RWC from our (fairly basic) rugby model. Let’s look at how things went…

RWC Pool Round 1 Results

I’ve decided to group the predictions into three classes: green for those where the model picked the right winner and the predicted margin was within 7 points (i.e. a converted try) of the actual margin; blue for those where the right winner was picked, but the predicted margin was out by more than 7 points; and red for those where the model picked the wrong winner (irrespective of margin). There is no great science behind this, but it provides us with an OK indicator of model accuracy without getting bogged down in too many numbers. Given that the model itself is fairly approximate, it’s about right that the measurement of its effectiveness is fairly approximate too.
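For the curious, here is a minimal sketch of that green/blue/red rule in Python. The function name, the signed-margin convention (positive means the first-listed team wins), and the choice to count a draw as a wrong pick are all my own assumptions for illustration, not part of the model itself:

```python
def classify(predicted_margin: int, actual_margin: int) -> str:
    """Grade one prediction under the green/blue/red scheme.

    Margins are signed from the first-listed team's perspective:
    positive means that team wins by that many points.
    """
    # Red: the model backed the wrong winner, irrespective of margin.
    # (Assumption: a predicted or actual draw also lands here.)
    if predicted_margin * actual_margin <= 0:
        return "red"
    # Green: right winner, and margin within a converted try (7 points).
    if abs(predicted_margin - actual_margin) <= 7:
        return "green"
    # Blue: right winner, but margin out by more than 7 points.
    return "blue"

# Illustrative numbers only: a right-winner call that misses the
# margin by 9 points (as in Ireland v. Canada) comes out blue.
print(classify(predicted_margin=34, actual_margin=43))  # -> "blue"
```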

Apart from one glaring miss, the model did not do too badly. Half of the games were rated green, and we even got one game bang on (USA v. Samoa). The model did not make the mistake that many journalist pundits made (and continue to make) of under-rating Argentina – it got that one pretty much right (and it would have looked even better had the ABs not bombed so many tries…).

Of the two games rated blue, Ireland v. Canada was always going to be difficult because of the large gulf between the teams’ ratings, and hence the size of the final score – any model will struggle to call a blow-out to within 7 points. Even so, the model was out by only 9 points – pretty close to green.

The two games the model called wrongly were the two surprises of the round – Tonga v. Georgia and the biggie, Japan v. South Africa. The model was not alone in getting the Japan game wrong – and I include the South African management in that.

So, a pretty good start. Round 2 starts tonight – I will update the model to take account of the round 1 games, and make some more predictions!
