Thursday, October 22, 2015
Visualizing champion difficulty in Jupyter
I've been playing around some more with League of Legends analysis and Jupyter. This time I scraped champion information from League of Graphs and calculated which champions are the easiest and hardest to play, and which champions are best in the early and late game. Unfortunately, Blogger does not play nicely with Jupyter notebooks, so I have posted the notebook on nbviewer if you are interested in taking a look!
Friday, October 16, 2015
Playing in random forests in League of Legends
I've wanted to learn more about machine learning, in particular Python's scikit-learn module. I'm an avid League of Legends player (summoner names lemmingo and Umiy), and Riot Games provides a thorough API for querying game data, so I decided to explore machine learning using League of Legends (LoL). Specifically, I wanted to see if I could predict the eventual winner of a game long before the game finishes. In this post, I'm going to explore some features of the LoL dataset, describe which features are important in predicting the winner, show how the predictions change over time, and investigate how winnable surrendered games are. If you are interested in the details of the analysis, see the GitHub page for this project, or this Jupyter notebook.
For those who don't know what League of Legends is, it's an online competitive game where teams of five players try to destroy each other's base. Some key things players do during the game are: 1. get gold; 2. destroy towers; 3. kill other players; and 4. kill boss monsters called dragons. Games usually last 25-40 minutes, and each game has a clear winner (no ties). Teams can surrender after twenty minutes if the game is a rout.
Loading the data
The details of how I loaded the data are in the Jupyter notebook, but I will give a summary here. To get a list of players, I first queried the API for the list of featured games; from those featured games I extracted 50 player names. Then I created a list of every ranked game those players were in during 2015, approximately 30,000 games in total. These should all be high skill-level games. I then queried the API for the first 6,000 of these games. This took a few hours because the API is limited to ~0.8 requests per second. A JSON with all of the data for 6,000 games takes up ~1 GB (because the JSON is text and contains a lot of redundant information). Finally, I extracted features from the JSON. Now that my scripts are more established, I can query games, extract their features, and save them directly to a (smaller) CSV.
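For reference, the query-and-save loop looks roughly like the sketch below. It assumes the 2015-era v2.2 match endpoint, a personal API key, and hypothetical match IDs; the extracted fields are just illustrative, not the full feature set in the notebook.

```python
import time
import requests
import pandas as pd

API_KEY = "your-riot-api-key"  # assumption: a personal developer key
# 2015-era match endpoint (v2.2); adjust the region and API version for your key
MATCH_URL = "https://na.api.pvp.net/api/lol/na/v2.2/match/{}"

def fetch_match(match_id):
    """Fetch the raw JSON for one match, staying under the rate limit."""
    resp = requests.get(MATCH_URL.format(match_id),
                        params={"api_key": API_KEY, "includeTimeline": "true"})
    resp.raise_for_status()
    time.sleep(1.25)  # ~0.8 requests per second
    return resp.json()

def extract_features(match):
    """Pull a few summary fields out of the match JSON (field names are illustrative)."""
    return {"matchId": match["matchId"],
            "duration": match["matchDuration"],    # game length in seconds
            "blue_win": match["teams"][0]["winner"]}

match_ids = [2034758953, 2034761234]  # hypothetical match IDs
rows = [extract_features(fetch_match(mid)) for mid in match_ids]
pd.DataFrame(rows).to_csv("matches.csv", index=False)
```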
Exploring the features
As with any good data science project, the first thing I did was explore the features. I started by plotting a histogram of the game lengths:
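A plot like this only takes a few lines, assuming the features were saved to a CSV with a game-duration column in seconds (the file and column names here are guesses, not the exact ones from my scripts):

```python
import pandas as pd
import matplotlib.pyplot as plt

games = pd.read_csv("matches.csv")    # CSV produced by the feature-extraction step
minutes = games["duration"] / 60.0    # assumed column: game length in seconds

plt.hist(minutes, bins=40)
plt.xlabel("Game length (minutes)")
plt.ylabel("Number of games")
plt.show()
```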
As Riot says, the average game lasts between 25 and 35 minutes. Few games last less than 20 minutes, but there is a large spike of games ending right after 20 minutes, due to surrenders. There are other spikes just after 30 and 40 minutes, which I don't have a clear explanation for. Perhaps large team fights break out around then.
The second thing I plotted was the distribution of the gold difference between the teams at 20 minutes.
This is about as normal a distribution as one could hope for, with a standard deviation of 6,300. Rather than plot each feature individually, we can plot how pairs of features are related to each other:
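Something like the seaborn sketch below can generate this kind of pair plot; the column names are assumptions about how the 20-minute difference features are labeled in the CSV:

```python
import pandas as pd
import seaborn as sns

games = pd.read_csv("matches.csv")
# Assumed column names for the team differences measured at 20 minutes
diff_cols = ["gold_diff_20", "kill_diff_20", "tower_diff_20", "dragon_diff_20"]
sns.pairplot(games[diff_cols].dropna())
```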
The features shown here are the gold difference between the teams, the kill difference, the difference in the number of towers destroyed, and the difference in dragons killed. Apologies for the ugly axes.
As you might expect, there is a large degree of collinearity between gold difference, kill difference, tower difference, and dragon difference. This would be a problem for linear regression, but I am going to use random forests, which are less sensitive to collinearity. If I were interested purely in prediction accuracy and generality, I would perform PCA on these features, but for now I'll just use them raw.
Feature selection
I was curious about which features are most predictive of the eventual game winner. One of the cool things about random forests is that they can estimate feature importance, based on how much splitting on each feature improves the trees' predictions. I created a random forest classifier and used it to predict the winner of the game at five-minute intervals. Then, for each timepoint, I extracted the feature importances and plotted a heatmap:
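In outline, the per-timepoint forests look something like this; the feature list and column-naming scheme are illustrative rather than the exact columns in the notebook, while `feature_importances_` is the real scikit-learn attribute:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

games = pd.read_csv("matches.csv")
features = ["gold_diff", "kill_diff", "tower_diff", "dragon_diff",
            "baron_diff", "inhib_diff"]              # illustrative feature names
timepoints = [5, 10, 15, 20, 25, 30, 35, 40]         # minutes into the game

importances = []
for t in timepoints:
    cols = ["{}_{}".format(f, t) for f in features]  # e.g. "gold_diff_20"
    subset = games.dropna(subset=cols)               # only games that lasted this long
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(subset[cols], subset["blue_win"])
    importances.append(forest.feature_importances_)

# Heatmap: one row per feature, one column per timepoint
plt.imshow(np.array(importances).T, interpolation="nearest", aspect="auto")
plt.yticks(range(len(features)), features)
plt.xticks(range(len(timepoints)), timepoints)
plt.xlabel("Minutes into the game")
plt.colorbar(label="Feature importance")
plt.show()
```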
[Heatmap of feature importance at each five-minute timepoint]
As one might expect, gold difference is the most predictive feature at every time point. Next most important are the kill and tower differences. Surprisingly, dragon difference is relatively unimportant. This is probably because killing dragons yields only a small advantage. If I were to put on my amateur designer hat, I'd recommend Riot decrease the number of dragons required to get the double dragon buff. Another note is that first blood, first dragon, and first tower are mostly uninformative.
To get a better look at how the feature importances change with time, let's take slices of the feature importance at 20 and 35 minutes.
There are some significant changes in importance between these two time points. Notably, barons and inhibitors gain importance at 35 minutes; since the importances sum to one, the rest of the features lose importance to compensate. Once again, dragons are relatively unimportant, even at the late timepoints. Inhibitors may be ever so slightly more predictive than barons.
Accuracy over time
Given the above feature importance data, I reduced the number of features in the classifier to avoid overfitting, keeping only the difference features along with the baron and inhibitor features. With these features, I then created a small random forest at each time point and performed cross-validated predictions of the eventual winner of the game.
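That loop is sketched below, again with assumed column names. (In 2015-era scikit-learn, `cross_val_score` lived in `sklearn.cross_validation`; in current versions it is in `sklearn.model_selection`.)

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

games = pd.read_csv("matches.csv")
features = ["gold_diff", "kill_diff", "tower_diff", "baron_diff", "inhib_diff"]

accuracy = {}
for t in [5, 10, 15, 20, 25, 30, 35, 40]:               # minutes into the game
    cols = ["{}_{}".format(f, t) for f in features]
    subset = games.dropna(subset=cols)                   # games still going at time t
    forest = RandomForestClassifier(n_estimators=20)     # a small forest per timepoint
    accuracy[t] = cross_val_score(forest, subset[cols],
                                  subset["blue_win"], cv=5).mean()
print(accuracy)
```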
At the beginning of the game, the classifier isn't successful because there isn't enough information to work with. As the game goes on, the classifier improves, peaking at ~80% accuracy between 20 and 35 minutes. However, the accuracy drops for the longest games. One possible reason is that there are fewer games that last 40 minutes (fewer than 1,000), which means there is less data to train the classifier. Another possibility is that games that last a long time are fairly close, and close games are harder to predict.
How winnable are surrendered games?
As one last fun thing, I wanted to investigate the games that were surrendered early. I separated the games into "early surrenders" and "close" games. To explore the surrendered games, I plotted the gold difference at twenty minutes. The gold difference is bimodal, with one team normally having a large gold lead:
I then used a random forest to predict the winner of these games, and it did so with 99% accuracy. To see whether these surrendered games were really unwinnable, I trained a random forest on the close games, and then used that forest to predict the surrendered games. Here, it was only 94% accurate. My interpretation is that around 6% of surrendered games are winnable!
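The train-on-close, test-on-surrendered comparison looks roughly like this; the 20-minute column names and the duration cutoff used to define "early surrenders" are assumptions for the sketch:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

games = pd.read_csv("matches.csv")
cols = ["gold_diff_20", "kill_diff_20", "tower_diff_20"]        # assumed 20-minute features
labeled = games.dropna(subset=cols + ["blue_win", "duration"])

# Hypothetical cutoff: games ending shortly after the 20-minute surrender vote
surrendered = labeled[labeled["duration"] < 22 * 60]
close = labeled[labeled["duration"] >= 22 * 60]

forest = RandomForestClassifier(n_estimators=100)
forest.fit(close[cols], close["blue_win"])
print("Accuracy on surrendered games:",
      forest.score(surrendered[cols], surrendered["blue_win"]))
```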
I have lots of ideas for how to expand on these initial results. I would like to train the model on low ELO games, and see how predictable those are in comparison to high ELO games. Riot has an API for professional games as well, and it would be fun to see if pro games are more (or less) predictable than amateur games. I could scrape games from different regions and compare them. One long term goal is to create a live predictor for LCS games as they happen. If you have any ideas, let me know in the comments!