A year ago last week, Major League Soccer and Audi of America announced a new soccer technology with the hope that it would help bring the game of soccer in the United States to the next level. The two parties held high hopes for their Audi Player Index (API), believing that the “new form of soccer intelligence...will engage soccer enthusiasts in a new way.”
The idea behind the API is brilliant, and in theory this technological advancement would not only keep existing fans more engaged with MLS but also help bring in new ones. But theory and reality often do not take the same path, and one year after the API's introduction, there are more questions surrounding it than answers.
Is the API as exceptional as MLS claims? Is it as ambiguous as many fans believe? Can the API even be trusted as an accurate and reliable source of information? With all these questions surrounding the soccer assessment tool, I decided to research the Audi Player Index myself.
I started the research by finding what MLS and Audi each say about the API, and a quick search gave the impression of an innovative technology. Toronto FC's Sebastian Giovinco was the best player in MLS in 2016, according to the API. Following him were the likes of David Villa, Ignacio Piatti, and Bradley Wright-Phillips. These four players are some of the best in MLS, so it makes sense that they would also sit atop the API.
As for the individual teams in 2016, the teams that made the playoffs had a better API than teams below the red line by an average of 235. Toronto FC had the second highest API, after reaching the MLS Cup Final, and FC Dallas was ranked eighth based on API after winning the MLS Supporters Shield.
So, all in all, the Audi Player Index passes the quick eye test. But I was still left with many questions, the biggest being: what exactly does it measure? And this is where things start to get tricky.
If you go to the MLS API website, you will find 24 different actions that are measured and how many points each one gains or loses. The site claims, “This chart (pictured below) highlights only a portion of the components that are utilized to compute the Audi Player Index.” In fact, the actions that MLS and Audi give us amount to a meager 27% of all the player actions. With just 27% disclosed, the listed point values are nearly meaningless. Sure, a key pass from a forward is worth 25 points, but what is that worth compared to the areas that are not listed?
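If the chart's point values really cover only 27% of the actions, a quick back-of-the-envelope calculation (my own, based only on those two published figures) hints at how much stays hidden:

```python
# Back-of-the-envelope: if the 24 published actions are only 27%
# of the full set, roughly how many actions does the index track?
published_actions = 24
published_fraction = 0.27  # per the MLS API site

total_actions = round(published_actions / published_fraction)
hidden_actions = total_actions - published_actions

print(f"Estimated total actions tracked: {total_actions}")         # ~89
print(f"Actions with undisclosed point values: {hidden_actions}")  # ~65
```

In other words, for every action with a published point value, there are roughly two and a half whose values we can only guess at.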
Which leads me to the next point. Kevin Shank of American Soccer Analysis also researched the API, and what he found is that it is deceiving, flawed, and questionable at best. In 2016, there were eight occasions when a player did not appear in an MLS match but was still given an API score. Players' scores also differ between the MLS site and the MLS Match Center. Errors like these make the overall team scores faulty as well. So, while FC Dallas came in eighth overall in 2016, it is possible they could have been much higher (or even lower?). These are just three examples of the API score being inaccurate, and the question must be asked: how many other errors has this complex algorithm made?
I still have not answered the question of what the API measures, and simply put, I do not know. Because MLS and Audi will not release the algorithm, it is impossible to know everything that goes into it. However, Shank put together his own database from the MLS Match Center and compared it to what MLS released. Since he could not get any help from the API's makers, he had to dig into the site's HTML and CSS to break the algorithm down himself. What he ultimately found is that the API is subjective and illogical.
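The kind of consistency check Shank ran can be sketched roughly like this; the player labels and every number below are invented for illustration, since the real data is not public:

```python
# Sketch of a consistency check: rebuild scores from Match Center
# data and compare them to the published API scores.
# Every name and number here is made up for illustration only.

published_api = {"player_a": 612, "player_b": 580, "player_c": 147}
rebuilt_from_match_center = {"player_a": 612, "player_b": 544, "player_c": 0}

# player_c never appeared in a match yet holds a published score --
# the kind of discrepancy described above.
discrepancies = {
    name: {"published": published_api[name],
           "rebuilt": rebuilt_from_match_center[name]}
    for name in published_api
    if published_api[name] != rebuilt_from_match_center[name]
}

print(discrepancies)  # flags player_b and player_c
```

Even a crude comparison like this surfaces mismatches; with the real algorithm withheld, this is about as far as an outside analyst can get.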
His research found that some "positive actions like successful passes were awarded a high negative score and negative actions like conceding goals were awarded positive scores, making it apparent that the data source is flawed."
In addition, the same player action can be awarded different point values. MLS claims that the score varies based on how well or poorly the action was completed. Translated into everyday speech, this means Antonio Nocerino and Cristian Higuita could both make a key pass to Cyle Larin that leads to a corner, but Nocerino could earn more points because he looked better making the pass. To make matters worse, if Carlos Rivas were to make that same exact key pass, he would score fewer points than both Higuita and Nocerino because he plays forward while the other two are midfielders.
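One way to picture the scheme described above is as a base value multiplied by a positional weight and a subjective quality rating. The sketch below is entirely hypothetical; the actual weights, and even the structure of the real formula, are unknown:

```python
# Hypothetical sketch of position- and quality-weighted scoring as
# the article describes it. All numbers are invented for
# illustration; MLS and Audi have not published the real values.

BASE_POINTS = {"key_pass": 25}  # base value for the action

# Positional multipliers: the same key pass is worth less to a
# forward than to a midfielder, per the Rivas example.
POSITION_WEIGHT = {"midfielder": 1.0, "forward": 0.8}

def score_action(action: str, position: str, quality: float) -> float:
    """Points = base value * positional weight * execution quality.

    `quality` is a subjective 0.0-1.0 rating of how well the
    action was performed (the 'looked better' factor).
    """
    return BASE_POINTS[action] * POSITION_WEIGHT[position] * quality

# Nocerino's key pass, rated higher than Higuita's identical pass:
print(score_action("key_pass", "midfielder", 1.0))  # 25.0
print(score_action("key_pass", "midfielder", 0.8))  # 20.0
# Rivas makes the same pass but plays forward:
print(score_action("key_pass", "forward", 1.0))     # 20.0
```

The trouble, of course, is that the `quality` term is in the eye of whoever rates the play, which is exactly why the results feel subjective.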
So, the system is obviously flawed, but it should not be scrapped altogether, mainly because the players who scored the most points were generally the best on the field, and the top teams by API score largely matched the final standings.
After that, it gets pretty sketchy. There are far too many variances and inaccuracies for it to be counted on. Higuita and Tommy McNamara, who were each on the field for about 15 minutes on Sunday, both received a score of -34. Meanwhile, Sean Okoli, who played for six minutes, received a score of 269. From watching the game, all three players had the same general effect on the outcome.
When looking at the API, it is important to remember that it is not a perfect science. In fact, I am more inclined to call it an art than a science. It is flawed, misleading, and has many questions surrounding it. But there is an answer that will solve many of these issues, and only MLS and Audi can give it to us. They need to give us the algorithm. They need to tell us what and how they are measuring all of the player actions that receive a score. If, and only if, this happens can the API become a useful statistic.