Rugby World Cup – Round 2 Review and Round 3 Picks

Just a quick update this time, as I am travelling. To recap round 2:

RWC Pool Round 2 Results

Again, not too bad. The model really only got one result badly wrong – Japan vs. Scotland, where it had picked a Japan win (albeit a narrow one). This highlights one of the downsides of this type of analysis – we do not have much data to go on for some teams, so a single result can have a big impact. Remember, we are only using results from the past 2 seasons – going further back in time gives us more data to work with, but runs the risk of that data being out of date and no longer reflecting a team's current strength. We also picked the England–Wales game incorrectly, although that was always going to be close. The key difference (or so the model thought) should have been the home advantage to England – as it turned out, it may have been a disadvantage on the day…

Round 3 ratings and picks

The model has updated the ratings based on the results from round 2 (remember, RWC games count double):

Ratings as at 29 Sep 2015

The big changes are largely among the minnows, who shift up or down depending on whether they lost by more or less than the model predicted in round 2. Scotland have improved by 4 points, although the model does not think this will be enough to save them against South Africa (see below). Japan drop by 9 points – again, this is down to the limited number of games the ratings are based on, which adds a certain amount of volatility. Interestingly, Argentina continue to improve, and are now rated just shy of Australia, England and Wales.
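For the curious, here is a minimal sketch of the kind of rating update described above, assuming a simple margin-based scheme: the predicted margin is the rating difference plus a home-advantage bonus, and each team's rating then moves in proportion to how far the actual result differed from that prediction. The K factor, the HOME_ADVANTAGE value and the function names are illustrative assumptions, not the model's actual parameters – the only detail taken from the post itself is that World Cup games count double.

K = 0.1              # how strongly one result moves the ratings (assumed value)
HOME_ADVANTAGE = 3   # points added to the home team's predicted margin (assumed value)
RWC_WEIGHT = 2       # World Cup games count double

def predicted_margin(rating_home, rating_away, neutral_venue=False):
    # Predicted winning margin (in points) for the nominally 'home' team
    bonus = 0 if neutral_venue else HOME_ADVANTAGE
    return rating_home - rating_away + bonus

def update_ratings(rating_home, rating_away, actual_margin, neutral_venue=False, world_cup=True):
    # Shift both ratings by the (weighted) error between actual and predicted margin,
    # so a minnow that loses by less than expected still gains rating points
    error = actual_margin - predicted_margin(rating_home, rating_away, neutral_venue)
    shift = K * (RWC_WEIGHT if world_cup else 1) * error
    return rating_home + shift, rating_away - shift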

So, for the upcoming Round 3 games, the model’s picks are:

RWC Pool Round 3 Predictions

According to the model, the key games to watch are Samoa vs. Japan and England vs. Australia. Again, the points difference for England comes from the home advantage rating, suggesting a very close match is in store. Wales vs. Fiji may also be closer than the model suggests – it does not, of course, take any account of injuries affecting a team's strength.
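To illustrate the home-advantage point, here is a purely hypothetical example using the predicted_margin sketch above. The ratings below are made up, not the model's actual figures – they simply show how a small home bonus can tip the pick between two closely rated sides.

eng, aus = 95.0, 96.0                                   # hypothetical ratings only
print(predicted_margin(eng, aus))                       # 2.0 – England favoured, purely via home advantage
print(predicted_margin(eng, aus, neutral_venue=True))   # -1.0 – Australia by a point on neutral ground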

Let’s see next week how things have gone!

RWC – Round 1 Review

If you recall, last week we published the predictions for round 1 of the RWC from our (fairly basic) rugby model. Let's look at how things went…

RWC Pool Round 1 Results

I’ve decided to group the predictions into 3 classes: green for games where the model picked the right winner and the predicted margin was within 7 points (i.e. a converted try) of the actual margin; blue for games where the right winner was picked but the predicted margin was out by more than 7 points; and red for games where the model picked the wrong winner (irrespective of margin). There is no great science behind this, but it gives us a reasonable indicator of model accuracy without getting bogged down in too many numbers. Given the model itself is fairly approximate, it seems only right that the measurement of its effectiveness is fairly approximate too.
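As a concrete illustration, here is a short sketch of that grading rule, assuming margins are expressed as home score minus away score for both the prediction and the result. The function name is illustrative – the post itself only defines the three colour bands.

def grade_prediction(predicted_margin, actual_margin, tolerance=7):
    # Green and blue both require the right winner to be picked; green also needs
    # the predicted margin to be within 'tolerance' points of the actual margin
    right_winner = (predicted_margin > 0) == (actual_margin > 0)
    if not right_winner:
        return "red"       # wrong winner, irrespective of margin
    if abs(predicted_margin - actual_margin) <= tolerance:
        return "green"     # right winner, margin within a converted try
    return "blue"          # right winner, but margin out by more than 7 points

print(grade_prediction(10, 3))    # 'green' – right winner, margin out by exactly 7
print(grade_prediction(5, -12))   # 'red' – wrong winner picked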

Apart from one glaring exception, the model did not do too badly. Half of the games were rated green, and we even got one game bang on (USA v. Samoa). The model also avoided the mistake many pundits made (and continue to make) of under-rating Argentina – it got that one pretty much right (and it would have been even better had the ABs not bombed so many tries…).

Of the two rated blue, the Ireland v. Canada game was difficult because of the large gulf between the teams’ ratings and the final score – any model will struggle to get a blow-out to within 7 points. Even so, the model was only out by 9 points – pretty close to green.

The two games the model got wrong were the two surprises of the round – Tonga v. Georgia and the biggie, Japan v. South Africa. The model was hardly alone in failing to predict the Japan result – and I include the South African management in that.

So, a pretty good start. Round 2 kicks off tonight – I will update the model to take account of the round 1 games, and make some more predictions!