The Power Rankings are an analysis of results among high-rated players at quality events over the previous 52 weeks. It is a simple “who did you beat and who beat you” algorithm. This is different from the Pro Tour Points, which determine which players qualify for the Tour Championship.
Previous – Ranking in the previous week.
Best – Best ranking over the past 12 months.
Worst – Worst ranking over the past 12 months.
Adj Wins – Adjusted wins over Quality Players in the past 12 months. The adjustment comes from the Aging Event Multiplier, under which older events gradually lose value.
Adj Losses – Adjusted losses over Quality Players in the past 12 months, adjusted the same way.
Adj Win % – Overall winning percentage against Quality Players. This is the stat used to rank the players.
Events – Number of Qualified Events played.
ACR – Average Competitor Rating – A measure of each player’s relative strength of schedule. It is not currently incorporated into the final rankings, but we track it in case schedules become more disparate over time.
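The article does not spell out the ACR formula, so here is a minimal sketch under one assumption: ACR is the simple mean of the ratings of every Quality Player (rated 999 or above) a player faced across their events. The `acr` helper, the event data, and the 999 threshold placement are illustrative, not the official implementation.

```python
# Hypothetical sketch of ACR (Average Competitor Rating).
# Assumption: ACR = mean rating of all Quality Player opponents
# (rated 999+) faced across a player's Qualified Events.

QUALITY_THRESHOLD = 999

def acr(events, player):
    """events: list of dicts mapping player name -> rating."""
    ratings = [
        r
        for field in events
        for name, r in field.items()
        if name != player and r >= QUALITY_THRESHOLD
    ]
    return sum(ratings) / len(ratings) if ratings else 0.0

events = [
    {"You": 1005, "A": 1030, "B": 1012, "C": 990},  # C is below 999, ignored
    {"You": 1005, "A": 1030, "D": 1000},
]
print(acr(events, "You"))  # prints 1018.0
```

A higher ACR would indicate that a player routinely faces stronger fields, which is why it is worth tracking even before it feeds into the rankings.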
The 2016 disc golf season is upon us, and it is time to rank the top pro players. Since 2006, we have been developing algorithms that rank the top disc golfers in the world. There is much to consider: regression, iteration, strength of schedule, qualifying events, qualified players, dropped events, and aging events.
Defining the system, and how it will grow over time, is critical to providing an unbiased Power Ranking. We are proud to put forward the first ranking system that is wholly unbiased with respect to players, events, and PDGA ratings, and our mapped-out growth plan will ensure it stays that way.
How Players Qualify
There are three requirements for a player to qualify for the rankings.
How qualifications will change over time
Note: We include an Aging Event Multiplier so that the oldest events lose value over time; by the time they drop out of the 52-week window, their weight has gently fallen to zero. This avoids awkward jumps or drops in the rankings when older events age out, while simultaneously weighting the most recent Qualified Events more heavily.
Note 2: For every 10 events a player plays, we will drop their worst event for the purposes of the rankings.
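The exact decay curve of the Aging Event Multiplier is not published, so the sketch below assumes a simple linear ramp from full weight for this week’s events down to zero at the 52-week cutoff, which matches the stated goal of events gently reaching zero before they drop out. The linear shape, the `adjusted_record` helper, and the use of raw win percentage to pick the “worst” dropped event are all assumptions for illustration.

```python
# A minimal sketch of the Aging Event Multiplier and the drop-worst rule.
# ASSUMPTIONS: linear decay over the 52-week window; "worst event" is the
# event with the lowest raw win percentage vs Quality Players.

WINDOW_WEEKS = 52

def age_multiplier(weeks_old):
    """Linear decay: 1.0 for a brand-new event, 0.0 at the 52-week edge."""
    return max(0.0, (WINDOW_WEEKS - weeks_old) / WINDOW_WEEKS)

def adjusted_record(events):
    """events: list of (weeks_old, wins, losses) vs Quality Players.
    Per the drop rule, one worst event is removed for every 10 played."""
    drops = len(events) // 10
    kept = sorted(events, key=lambda e: e[1] / max(1, e[1] + e[2]))[drops:]
    adj_wins = sum(age_multiplier(a) * w for a, w, _ in kept)
    adj_losses = sum(age_multiplier(a) * l for a, _, l in kept)
    return adj_wins, adj_losses

# A 26-week-old event counts at half weight under the linear assumption:
print(adjusted_record([(0, 16, 4), (26, 9, 0)]))  # prints (20.5, 4.0)
```

Whatever the real curve looks like, the key property is the same: an event’s contribution shrinks smoothly to zero instead of vanishing all at once.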
Now that we have our players and events defined, we get down to brass tacks. A straight-up “I beat you, so I’m better than you” mentality is what we are going for. We only care about wins and losses against Quality Players. For example, if you play in an event with 21 Quality Players (yourself included) and finish 5th among them, your record at the event is 16 wins and 4 losses. Note: Players rated below 999 are not counted in the win/loss columns.
So, looking at the last 52 weeks of events, Paul McBeth has beaten 586 Quality Players and lost to 6, an insane winning percentage of .990 that easily makes him the number one ranked player in the world. Will Schusterick’s winning percentage is .890, and Sockibomb’s is .860. Players are ranked entirely on their win/loss percentage against other Quality Players.
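The arithmetic above is simple enough to sketch in a few lines. The helper names here (`event_record`, `win_pct`) are ours for illustration; the numbers are the ones from the article.

```python
# The "who did you beat and who beat you" arithmetic described above.

def event_record(quality_players, finish_among_quality):
    """Finishing Nth among K Quality Players (yourself included)
    means you beat K - N of them and lost to N - 1."""
    wins = quality_players - finish_among_quality
    losses = finish_among_quality - 1
    return wins, losses

def win_pct(wins, losses):
    """Winning percentage, rounded to three places as in the rankings."""
    return round(wins / (wins + losses), 3)

print(event_record(21, 5))  # 5th of 21 Quality Players -> (16, 4)
print(win_pct(586, 6))      # McBeth's 52-week mark -> 0.99 (reported as .990)
```

Note that the event itself carries no weight of its own here: a finish only matters through the wins and losses it produces against Quality Players.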
Other ranking systems put false weight on various events. We believe that what makes a Quality Event, for the purposes of ranking players, is simply the number of Quality Players in the field. If an event has 20 Quality Players and you win it, you are 19 & 0. If an event has 10 Quality Players and you win it, you are 9 & 0. Rather than applying an arbitrary event multiplier, which bakes in a human’s natural bias, we rely solely on the number of Quality Players at the event. More Quality Players means the event counts for more in the rankings. It just makes sense.
To put this into context, the Aussie Open, a PDGA Major, is not included in our Power Rankings. The event had only six players rated 999 or above. In the PDGA ranking system, McBeth gets big points for this win because it is a Major; we feel the win did not have enough competition to be counted.
Similarly, in 2015, ten EuroTour events were included in the DGWT Rankings. Several of these events had zero Quality Players, and the largest field had nine. We will not include these events until their fields mature. Including them would skew the rankings toward players who compete in Europe, since they would earn points for top finishes despite facing little to no Quality competition.
There are many different ways to rank players in individual sports. The PDGA and DGWT use systems that artificially weight “big events” and award points for finishes at them. The points earned are independent of the number of Quality Players at the events, and some of these events have none at all. The PDGA also adds a PDGA Rating factor, which is biased toward players who compete in premier events. These systems have inherent flaws due to the event and rating bias that naturally infiltrates them.
From our perspective, the players that attend an event define how great it is. An arbitrary label (Major, NT, A, B, C) cannot accurately define the quality of the players at an event. Additionally, a player’s performance against other Quality Players is the best measure of how they should be ranked.
There have been many iterations of our rankings, and they have coalesced into this system, which may morph even more over the coming decade. While our sport is still young and the top tier of events is still being defined, this system, with its fluid benchmarks and qualifications, is the ideal ranking system moving forward. Let us know your thoughts.
Visit the Rankings page every Tuesday for the updated Pro Tour Power Rankings.