Video Game Recommendation Engine – This is how we do it
This was a data science panel, and we kicked it off with a video game recommendation engine. I had Stephen fill out a survey prior to the panel, and from his results I built a recommendation model with the goal of selecting games he had not played (he’s played a lot of games, so not an easy task) and would rate above average.
How are we going to build this recommendation? Through propensity scoring!
A propensity score is the estimated probability that a data point will exhibit the predicted outcome.
- One of our panelists completed a survey and ranked the video games he has played
- Their responses were linked to our ancillary data (critics score, user score, and genres)
- Our model outputs a score between 0 and 1; the closer to 1, the more likely the panelist would enjoy the game.
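The pipeline above can be sketched with a logistic regression, a common way to estimate propensity scores. Everything below is illustrative: the feature columns (critic score, user score, an action-adventure flag) and the ratings are made-up stand-ins for the real survey and ancillary data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per game the panelist rated.
# Features (assumed): critic score, user score, action-adventure flag.
X = np.array([
    [96, 9.1, 1],
    [70, 6.0, 0],
    [88, 8.5, 1],
    [55, 7.9, 0],
    [91, 6.5, 0],
    [80, 8.8, 1],
])
# Target: 1 if the panelist rated the game above his own average.
y = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Score unplayed games: predict_proba yields a value between 0 and 1;
# the closer to 1, the more likely the panelist would enjoy the game.
unplayed = np.array([[85, 9.0, 1], [90, 5.5, 0]])
scores = model.predict_proba(unplayed)[:, 1]
```

For a panelist who values user score over critic score, the action-adventure title with the 9.0 user score scores higher than the critically praised game with the 5.5.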
Video Game Recommendation Engine – The Output
For this panelist, the survey told us this about their gaming preferences:
They value the User Score more than the Critic Score.
Their preferred genre is Action Adventure.
Their preferred platform is the PS2.
Video Game Debate: Overview
On the screen will be a video game, with some profiling data.
Panelists will debate the impact, perceived value, and replay value of the featured game.
Crowd will decide who made the better argument.
This is the meat of the panel. The screen also showed the IGN review headline and rating, and Stephen and I took turns arguing whether each game deserved its rating.
Stephen went first and argued that GoldenEye does not deserve such a high rating; his key point was the replay value. I attempted to argue for its value at the time of release. The crowd sided with Stephen.
Pokémon Gold & Silver
I went first this round and argued for the rating; this was a very pro-Pokémon crowd. Stephen brought up good points on where he thinks the series should go, and adding another region is not the answer. The crowd sided with me.
Ultimate Marvel vs. Capcom 3
Stephen chose to argue for this game, and I wanted to throw a curveball into this debate. It would have been too obvious if we chose Marvel vs. Capcom 2. I argued that it wasn’t even the best in the series; the best in the series is actually X-Men vs. Street Fighter.
Halo Combat Evolved
Stephen was on team Halo for this one. I love Halo as well, but the crowd did not. That was a shock to us, but maybe Halo doesn’t have replay value? Or maybe everyone is getting tired of the series.
Battle Dome: Overview
Two games go in… only one comes out
Panelists will argue for a game; they cannot both argue for the same game
The crowd decides who had the best argument
This was a fun and challenging section of our panel. I won’t go into detail on this section, but I do want to try something out. As a test to see who is interacting with my page by reading the data stories, I have a special giveaway.
Here are the rules: you must have an Instagram account, and you must be following my Instagram account, @pancake_analytics.
To enter, read through the Battle Dome section, screenshot your favorite match-up, and post it to Instagram.
In the post, tag @pancake_analytics and caption it with “Who do you have in this Battle Dome match-up?”.
This giveaway ends on December 31st, 2019, and the winner will receive a GameStop gift card from me to use on your next video game purchase in the new year!
Here’s the disclaimer I have to post:
Per Instagram rules, we must mention this is in no way sponsored, administered, or associated with Instagram, Inc. By entering, entrants confirm they are 13+ years of age, release Instagram from responsibility, and agree to Instagram’s terms of use. Good luck!
Here’s the battle dome match-ups:
I want to personally thank everyone who attended the panel in Tampa at the Tampa Bay Comic Convention. I look forward to meeting again in 2020.
This Panel was held on:
Friday, August 2, 2019 at 7:30 PM – 8:30 PM
During the Tampa Bay Comic Convention 2019, held at the Tampa Convention Center.
The Panelists were:
Tom Ferrara (@pancake_analytics), Kalyn Hundley (@kehundley08), Andy Polak (@polak_andy)
I want to take a quick moment to discuss the panelists. I love bringing as many different points of view as possible to these data science panels; without that variety, it’s more of a lecture and less of a discussion. This mix of panelists gave the audience the data science view, the tech industry view, and the biological sciences view. The best part is that Smash Bros. brought us all together.
Changing the Tier Conversation
One of the main objectives of this panel was to get a discussion going on tier selection in Smash: how we can ground tier selection in data science, and how we can validate our findings against one of the best players in the game.
K-means clustering uncovers trends within our Smash Bros. data, helping us understand the relational similarities and differences on key in-game attributes.
The more clusters the clearer our picture becomes and the deeper we can understand the pros and cons of each main selection.
A brief overview of a k-means cluster:
- Standardize your variables
- Analyze your elbow curve
- Validate your clusters
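As a sketch, the three steps above might look like this in scikit-learn. The character attributes here are randomly generated stand-ins, not real frame data, and the column meanings (run speed, air speed, weight) are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical character attributes: run speed, air speed, weight (made up).
rng = np.random.default_rng(42)
attributes = rng.normal(size=(70, 3)) * [0.4, 0.3, 25] + [1.6, 1.0, 100]

# 1. Standardize the variables so weight doesn't dominate the distance metric.
scaled = StandardScaler().fit_transform(attributes)

# 2. Analyze the elbow curve: plot inertia for k = 1..9 and look for the bend.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(scaled).inertia_
            for k in range(1, 10)]

# 3. Validate the clusters: fit the chosen k (five, as in the panel) and
# inspect each cluster's average attributes for a coherent story.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)
labels = kmeans.labels_
```

The elbow curve is where judgment enters: inertia always drops as k grows, so you pick the point where adding another cluster stops buying much.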
Treat each game release as a new product launch or a change in the market.
You would then re-score your data to understand the current market, letting you track how characters migrate between clusters and how the meta-game has changed.
We end up with five unique clusters:
- This group is the slowest by run speed and the lightest by weight.
- Jack of All Trades: middle of the pack on every attribute, with no distinct trend.
- Like the Jack of All Trades group, but faster.
- Fast in aerial attacks and the heaviest of the characters.
- This group is the fastest and the lightest.
A propensity model is a statistical scorecard that is used to predict the behavior of your customer or prospect base. Propensity models are often used to identify those most likely to respond to an offer, or to focus retention activity on those most likely to churn.
So who should be your main? In this segment I rely on industry knowledge as well (ZeRo’s tiers as the dependent variable). I’ll build a propensity score with the following independent variables:
- Change in air acceleration
- Base air acceleration
- Base speed in the air
- Base Run Speed
- Character Weight
- Ultimate Smash Bros. Cluster
- Wii-U Smash Bros. Cluster
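A hedged sketch of how those variables could feed the model: the frame below uses invented values, made-up cluster labels, and reduces ZeRo's tier placement to a 0/1 `top_tier` flag. The two cluster assignments are categorical, so they are one-hot encoded before the regression.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical frame: one row per character (all values invented).
df = pd.DataFrame({
    "air_accel_delta": [0.01, 0.05, 0.02, 0.07, 0.03, 0.06],
    "air_accel_base":  [0.02, 0.01, 0.03, 0.01, 0.02, 0.01],
    "air_speed":       [1.1, 0.9, 1.2, 1.0, 0.8, 1.3],
    "run_speed":       [1.6, 2.1, 1.5, 1.9, 1.3, 2.0],
    "weight":          [98, 85, 104, 90, 113, 88],
    "cluster_ultimate": ["fast", "jack", "fast", "air", "jack", "fast"],
    "cluster_wii_u":    ["jack", "jack", "fast", "air", "jack", "fast"],
    # Dependent variable: 1 if the character sits in ZeRo's top tiers.
    "top_tier":        [1, 0, 1, 1, 0, 1],
})

# One-hot encode the two cluster columns so they sit beside the numerics.
X = pd.get_dummies(df.drop(columns="top_tier"),
                   columns=["cluster_ultimate", "cluster_wii_u"])
y = df["top_tier"]

model = LogisticRegression(max_iter=1000).fit(X, y)
df["propensity"] = model.predict_proba(X)[:, 1]
```

Including both the Ultimate and Wii U cluster labels lets the model learn whether a character's archetype, not just raw stats, lines up with top-tier placement.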
What makes these three stand above the crowd?
They are middle ground on weight and fast air accelerators.
What are the differences between the three?
Wario has a slow run speed.
Palutena is the lightest.
Yoshi is the middle ground of this group.
The Curious Case of Ganondorf
Ganondorf has more in common with Jigglypuff than he does with Bowser.
The reason is that he’s quicker and adapts better in aerial attacks and falling than Bowser can.
On the flip side, I can also say that in Super Smash Bros. Ultimate, Bowser more accurately represents how he’s portrayed in the Super Mario franchise.
Game Time: Name that segment: Overview
I personally feel one of the best ways to reinforce learning is through a game. For this panel I decided to reinforce the k-means segmentation by having volunteers guess the segment that the 3 characters on the screen fall into.
Here was the overview:
On the screen will be 3 characters
All 3 characters belong to the same segment
Volunteers will do their best to convince the panel of which segment the characters fall into:
- Jack of All Trades
- Air Tanks
Participating volunteers receive a fabulous prize.
For this particular game the prize was an amiibo of their choice that works with Smash Ultimate for the Nintendo Switch.
I want to personally thank everyone who attended the panel in Tampa at the Tampa Bay Comic Convention. I look forward to meeting again in 2020.
How crazy would it be if I told you Howard the Duck and Old Man Logan are closer to each other in skill sets than they are to any other Marvel characters? Or how about Thor and Dr. Octopus are lookalikes as well? Let’s answer these questions together by wrangling some readily available data.
If I’ve learned anything from my career in data science it’s this: 80% of the work is data gathering and ETL, and 20% is analysis.
Nothing holds truer to this statement than finding data on Marvel characters’ skill sets on a normalized scale. In this data story I’ll be using data from Marvel Contest of Champions (power index levels, health, and attack) and the Marvel Battle Royale (a Twitter fan poll of the greatest superheroes).
A few more variables I’ll need to calculate around the results of the Marvel Battle Royale Twitter Fan Poll:
- Total votes per round
- Average total votes
- A flag for whether each Marvel character received more than the average total votes
This flag I’ll use as my dependent variable and my independent variables will be the Marvel Contest of Champions statistics.
What will this do? This will predict the likelihood a Marvel Character would receive higher than the average total votes in the Marvel Battle Royale.
Once this is calculated I’ll receive an output of coefficients, which I can apply to the rest of the Marvel characters who weren’t in the Marvel Battle Royale to create a propensity score.
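Put together, the steps read roughly like this. The stats and vote counts below are invented for illustration; only the flow mirrors the description above: flag above-average vote-getters, fit a logistic regression, then score characters who weren't in the poll.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical Contest of Champions stats: power index, health, attack.
poll_stats = np.array([
    [4500, 30000, 2600],
    [3900, 26000, 2200],
    [5200, 34000, 2900],
    [3600, 24000, 2100],
    [4800, 31000, 2700],
    [4100, 27000, 2300],
])
# Total Battle Royale votes per character (made-up numbers).
votes = np.array([1200, 800, 2100, 600, 1500, 900])

# Dependent variable: flag characters above the average total votes.
above_avg = (votes > votes.mean()).astype(int)

model = LogisticRegression(max_iter=1000).fit(poll_stats, above_avg)

# Apply the fitted coefficients to characters who were NOT in the poll,
# producing a propensity score for each.
unpolled_stats = np.array([[5000, 32000, 2800], [3700, 25000, 2150]])
propensity = model.predict_proba(unpolled_stats)[:, 1]
```

The stronger stat line lands the higher score, which is exactly what lets us rank characters the Twitter poll never saw.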
Now let’s backtrack a little and see why I’m going with a propensity model as opposed to grouping by opinion (i.e., putting all the top attackers in the same category).
The top 3 characters based on Attack are Rocket Raccoon, Spider-man (Symbiote), and Blade.
In the above histogram, if you look all the way to the far right you’ll notice they are the data points on their own little island.
Well, what if I just grouped everyone by Health? This data visualization looks more promising, but most likely there would be overlap on the other attributes, and you wouldn’t be able to implement this successfully.
The power index could be suitable by definition, but from the top 3 selected on power index I can tell this rating isn’t an index in the sense I would typically use one (time-series forecasting). It looks more like the Pokémon GO Combat Power system: a measure of a character’s full potential.
One use of a propensity score is to create similar groups, based on the likelihood of performing a behavior.
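One lightweight way to read "similar groups" out of propensity scores is nearest-score matching: pair each character with whoever has the closest score. The scores below are invented; only the idea of Doctor Octopus surfacing next to Thor comes from this data story.

```python
# A minimal sketch of finding "lookalikes" by propensity score: for each
# character, find the other character whose score is nearest.
# All scores below are invented for illustration.
scores = {
    "Doctor Octopus": 0.81,
    "Thor (Ragnarok)": 0.80,
    "Medusa": 0.55,
    "Gwenpool": 0.54,
    "Thanos": 0.97,
}

def nearest_match(name, scores):
    # Smallest absolute score gap among all other characters.
    return min((n for n in scores if n != name),
               key=lambda n: abs(scores[n] - scores[name]))

match = nearest_match("Doctor Octopus", scores)  # → "Thor (Ragnarok)"
```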
In this case, Doctor Octopus and Thor (Ragnarok) are statistically the same in the Marvel Contest of Champions skill set. For those of you who want to go down an interesting rabbit hole, you can find YouTube videos on why Doctor Octopus should be in a demi-god tier.
This propensity score approach literally put Doctor Octopus in the same tier as a demi-god!
Medusa by power index alone would be close to Thanos but factoring all skill sets, she is statistically closer to Gwenpool, Cable, and Nightcrawler than she is to the Mad Titan.
Now for the crazy but statistically significant section. Howard the Duck (I’m hoping he gets a show on Disney+) and Old Man Logan are a propensity score match.
An example like this is where many begin to argue in data science: when does subject matter expertise come into play? We can argue significance forever, on any topic, but we can all agree that every Marvel Champion has value if played correctly.
I want you to remember, Clark…In all the years to come… in your most private moments… I want you to remember my hand at your throat… I want you to remember the one man who beat you.
Chilling quote, isn’t it? That was said by Batman to Superman in The Dark Knight Returns, a comic book miniseries written and drawn by Frank Miller.
One of the greatest debates in comic book lore, and a fun discussion to have, is pitting two superheroes against each other… Who wins and why? The data story below introduces a data science approach to answering this debate. To have fun with it… I’ve thrown characters from the video game Injustice 2 into a Superhero Throw Down Tournament.
Before we dive into the tournament and the results of the throw down, I’d like to touch on the approach: Propensity modeling.
Propensity modeling has been around since 1983 and is a statistical approach to measuring uplift (think return on investment). The goal is to measure the uplift of similar or matched groups.
The heart of this approach lies within two machine learning tasks (segmentation and probability estimation).
Why propensity modeling for this exercise? I wanted to rank my superheroes for the bracket using statistics (i.e., Batman is not getting a number one seed).
35 characters were segmented on strength, ability, defense, and health. For the propensity score I gathered ranking information from crowd-sourced websites and surveys, which let me assign an intangible skill score. The reasoning was that I wanted the medium of comics to do the majority of the work for me: comics are stories, and the narrative drives the inner core of a character. The higher a character ranks on a fan-sourced website, the better written and more timeless I assume they are.
The next step was to take the mean of the intangible skill score and flag those characters above the average (this flag is the dependent variable for the logistic regression that calculates the propensity score).
What was thrown into the propensity model? The skill sets gathered from the Injustice game; the assumption here is that a character with Superman’s skill set would be written much differently than, say, Catwoman.
Now it’s time for our throw down.
The top four characters by propensity score were:
To determine a winner in the throw-downs, characters were put up against each other in 11 categories.
Round 1 Takeaways:
Our number one seed Cyborg nearly lost to Atrocitus. The result was 6-2-5, read as six wins, two ties, and five losses.
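A record like 6-2-5 is just a category-by-category tally. Here's a minimal sketch with invented category values; the real categories and their scores weren't published in this write-up.

```python
# Compare two characters across the categories and tally wins, ties,
# and losses for the first character. Values below are invented.
def throw_down(a, b):
    wins = sum(x > y for x, y in zip(a, b))
    ties = sum(x == y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    return wins, ties, losses

# Eleven hypothetical category scores per character.
cyborg    = [7, 9, 6, 8, 5, 7, 6, 9, 4, 8, 7]
atrocitus = [6, 9, 7, 7, 5, 8, 7, 8, 6, 9, 6]

record = throw_down(cyborg, atrocitus)  # → (4, 2, 5)
```

With made-up numbers this one comes out 4-2-5: a narrow loss, showing how close a "nearly lost" match-up can be when tallied category by category.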
There were no upsets in the first round of play. A few characters did not win a single category in their match-ups:
Harley Quinn (vs. Captain Cold)
Green Arrow (vs. Batman)
Black Manta (vs. Black Canary)
These three characters were ill-equipped to take on their opponent; it is possible they would have advanced given a different opponent.
Round 2 Takeaways:
Cyborg (our number one seed) defeated Captain Cold by a larger margin (+3 winning categories) than in the previous match-up against Atrocitus, but with one fewer category win.
We begin to see upsets in Round 2:
Robin defeated Black Adam by 1 winning category. Wonder Woman defeated Firestorm by 4 winning categories. Batman defeated Supergirl by 3 winning categories.
By propensity scores these were upsets, but from a comic book debate standpoint you could argue for them; e.g., given enough time to prepare, Batman could defeat Supergirl.
Round 3 Takeaways:
Cyborg falls to Superman, losing by 4 categories. This was the biggest fight Superman had been given in the tournament to date (in both previous rounds he had 9 winning categories).
The upsets keep coming in:
Robin sneaks in a win again by 1 winning category (over Brainiac). Wonder Woman defeats the top seed in her region of the bracket (Aquaman) by 4 winning categories. Batman defeated Green Lantern by 3 winning categories.
Final 4 Takeaways:
Robin’s Cinderella story comes to an end at the hands of Superman (winning in 9 categories). Robin did fare better than the previous opponents who gave Superman 9 category wins… Robin won in 2 categories.
Batman was able to upset Wonder Woman by 2 winning categories. We’re set for a championship round, the original who-wins… Batman versus Superman!
Our winner is…
Superman defeats Batman, but not in a landslide. Batman lost by two categories, yet he was able to win in 5. Previously, the highest number of winning categories against Superman was 3.
What did we learn from diving into the DC data? Comic book writing and fan perception go a long way in determining who wins a throw-down debate. If we use propensity modeling, we can create a more even playing field and limit the number of unfair battles.