Round 4 (and thus all pool games) is done and dusted, so let’s see how the model went:
So, Round 4 was definitely the model’s best – the winners of all games were predicted correctly, and 9 out of 12 were within 7 points! Note that I have counted the Canada vs. Romania prediction as a success: although a draw was forecast, the end result was within 3 points. Disagree if you will, but the model agrees wholeheartedly with my decision…
Some interesting results that I would like to highlight: The NZ vs. Tonga game ended up pretty much as predicted – or at least, bang-on according to the form book. Perhaps that is a counter to those who have been critical of NZ’s performance? Ireland did better than expected against France, although whether that is because Ireland are playing better than their rating or France are playing worse is the question (and one that the end-of-Round-4 ratings answer – see below). Similarly for South Africa vs. USA – a big victory, but perhaps again due to an over-rating of the USA rather than an under-rating of SA. And Scotland struggled much more than the model predicted to overcome Samoa. This highlights one of the weaknesses of the model: performance is variable, for every team. Just because a team has a rating of X does not mean that they cannot over- or under-perform in any given game. And in this game Samoa played pretty well (and really should have won).
Betting with the model
As an exercise, I have simulated a betting strategy using the model predictions. For each game, a simulated 10 GBP bet was placed on the predicted winner with a margin of either “12 or under” (if the predicted margin was 12 or under) or “13 and over” (if it was 13 or over). For those games between top teams and minnows where these odds were not offered (as the victory margin was expected to be much higher), the 20-point spread bet that included the predicted margin was made (so, for example, if the prediction was a 52-point margin to Team A, then the “Team A by 41-60 points” spread bet was chosen).
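The bet-selection rule above can be sketched as a small function. This is my own illustrative code, not the actual simulation – the function name, the `spread_only` flag, and the exact bet labels are assumptions:

```python
def choose_bet(winner: str, margin: int, spread_only: bool = False) -> str:
    """Return the simulated 10 GBP bet for a predicted winner and margin.

    spread_only marks games (top team vs. minnow) where the bookmaker
    only offered 20-point spread bets rather than the 12-point split.
    """
    if spread_only:
        # Pick the 20-point band containing the prediction,
        # e.g. a predicted 52-point margin falls in the 41-60 band.
        lower = ((margin - 1) // 20) * 20 + 1
        return f"{winner} by {lower}-{lower + 19} points"
    if margin <= 12:
        return f"{winner} by 12 or under"
    return f"{winner} by 13 and over"

print(choose_bet("Team A", 52, spread_only=True))  # → Team A by 41-60 points
print(choose_bet("Team B", 10))                    # → Team B by 12 or under
```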
And, a bit to my surprise I must say, the model did fantastically well. The net position of the simulated account ended up just under 300 GBP over the pool phase, and was remarkably not particularly volatile. Ah, the power of algorithms…
Ratings: End of Pool Play
Based on the results of the last round of pool games, I have re-run the model to update the ratings:
Not much change here over the ratings at the conclusion of round 3, perhaps indicating we are reaching a bit of stability and have a pretty good idea (based on results) of current rating levels. Of the teams in the Quarter Finals, France and South Africa are the big movers of the week, down 2 and up 2 respectively. So I guess this answers the question I raised above, of whether Ireland performed better than expected or France worse (well, in the model’s opinion anyway). Interestingly, Australia’s win over Wales was not of sufficient size to move the ratings of either team.
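The post does not spell out the update rule behind these re-runs, but ratings systems of this kind typically nudge each team’s rating by a fraction of the gap between the actual and expected margin. A minimal sketch, assuming an Elo-style update where ratings are on a points scale (the `k` factor and numbers are mine, not the model’s):

```python
def update_ratings(r_a: float, r_b: float, actual_margin: float,
                   k: float = 0.05) -> tuple[float, float]:
    """Hypothetical Elo-style update from one game's result.

    r_a, r_b: current ratings (points scale); actual_margin is
    A's winning margin (negative if A lost). k is a made-up damping
    factor, not the real model's parameter.
    """
    expected_margin = r_a - r_b              # expected points margin for A
    delta = k * (actual_margin - expected_margin)
    return r_a + delta, r_b - delta          # zero-sum adjustment

# A rated 5 points better than B, but wins by 15: both ratings shift.
a, b = update_ratings(25.0, 20.0, 15.0)
print(a, b)  # → 25.5 19.5
```

A small `k` explains why, as noted above, a win of roughly the expected size (like Australia over Wales) barely moves either rating.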
Ratings: Big Movers
I have somewhat arbitrarily defined a “big mover” ratings-wise to be any team that has moved more than 6 ratings points from its rating at the start of the RWC at any point during pool play. There are 9 big movers under that definition, as we can see here:
Aside from Argentina and Australia, all of the big movers are the smaller nations. This should not be a surprise – since the smaller nations do not play the bigger nations all that often, we have less data with which to calculate their ratings, and their overall levels are more difficult to predict in advance of the competition. So you see the impact of Japan’s victory over South Africa, which after the pool rounds looks to have been an extraordinary performance (or an extraordinary game plan) rather than an indication of significant improvement by Japan. Georgia’s jump after round 1 looks largely due to an over-rating of Tonga, who – along with Samoa – have performed much worse than expected in the RWC.
And Australia and Argentina are the clear form teams of the tournament, if we define “form” as performing better than was expected at the start of the comp.
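The “big mover” definition above amounts to a simple filter over each team’s ratings history. A sketch with made-up numbers (the ratings below are illustrative, not the actual tournament values):

```python
def big_movers(history: dict[str, list[float]], threshold: float = 6) -> list[str]:
    """Teams whose rating moved more than `threshold` points from their
    RWC-start rating at any point during pool play.

    history maps team -> [start rating, after R1, after R2, ...].
    """
    movers = []
    for team, ratings in history.items():
        start = ratings[0]
        if any(abs(r - start) > threshold for r in ratings[1:]):
            movers.append(team)
    return movers

# Illustrative data only – not the real ratings.
history = {
    "Japan":   [10, 18, 16, 15, 14],
    "Georgia": [8, 15, 14, 13, 12],
    "Wales":   [22, 22, 23, 22, 23],
}
print(big_movers(history))  # → ['Japan', 'Georgia']
```

Note the filter checks every round, not just the final rating – so Japan still qualify even though their post-upset jump later partly unwound.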
Ratings: The Quarter Finalists
Finally, I thought I would extract the ratings and how they have developed over pool play, for each of the 8 quarter finalists:
Interestingly, all 8 teams are rated either the same as or better than their rating at the start of the tournament. Individual performance variation adds a little bit of noise, with the ratings easily adjusting by one or two points after a big win or unexpected loss. Apart from Australia and Argentina, however, the overall movements are small, and we can essentially say that the other 6 teams are performing pretty much as expected given results over the past couple of years. Including NZ.
In terms of ratings, then, we have NZ on top, followed by Australia, with Ireland and South Africa equal third, ahead of Wales, Argentina, France and Scotland. In a couple of days we will see what this means for the Quarter Finals predictions.