DC Comics, K-Means Clustering, Logistic Regression, Propensity Modeling

Recipe 011: DC Super Hero Throw Down: Propensity Modeling

FerraraTom

“I want you to remember, Clark… In all the years to come… in your most private moments… I want you to remember my hand at your throat… I want you to remember the one man who beat you.”

Chilling quote, isn’t it? That was said by Batman to Superman in The Dark Knight Returns, a comic book miniseries written and drawn by Frank Miller.

One of the greatest debates in comic book lore, and a fun discussion to have, is pitting two superheroes against each other… Who wins and why? The data story below will introduce a data science approach to answering this debate. To have fun with it… I’ve thrown characters from the video game Injustice 2 into a Superhero Throw Down Tournament.


[Image: 012_pic]

[Image: 010_pic]

Before we dive into the tournament and the results of the throw down, I’d like to touch on the approach: Propensity modeling.

Propensity modeling has been around since 1983 (introduced by Rosenbaum and Rubin) and is a statistical approach to measuring uplift (think return on investment). The goal is to measure the uplift between similar or matched groups.

The heart of this approach lies in two machine learning techniques: segmentation (here, k-means clustering) and probability estimation (here, logistic regression).

Why propensity modeling for this exercise? I wanted to rank my superheroes for the bracket using statistics (i.e. Batman is not getting a number one seed).

35 characters were segmented on strength, ability, defense, and health. For the propensity score I gathered ranking information from crowd-sourced websites and surveys, and used it to give each character an intangible skill score. The reasoning was that I wanted the medium of comics to do the majority of the work for me. Comics are stories, and the narrative drives the inner core of a character. The higher a character ranks on a fan-sourced website, the more I’m assuming they are written well and are timeless.
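To make the segmentation step concrete, here is a minimal SAS sketch; the dataset name heroes and its columns are assumptions, and the four-cluster setting is arbitrary:

/* Hypothetical sketch: standardize the four attributes, then k-means cluster. */
proc standard data=heroes mean=0 std=1 out=heroes_std;
   var strength ability defense health;
run;

/* maxclusters=4 is an assumed setting, not the post's actual choice */
proc fastclus data=heroes_std maxclusters=4 out=hero_clusters;
   var strength ability defense health;
run;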

The next step was to take the mean of the intangible skill score and flag those characters above the average (this flag will be my dependent variable for the logistic regression that calculates the propensity score).
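As a sketch of that flagging step (again assuming a heroes dataset with an intangible_skill column):

/* Hypothetical sketch: flag characters whose intangible skill score
   sits above the overall mean; the flag is the dependent variable. */
proc means data=heroes noprint;
   var intangible_skill;
   output out=skill_mean mean=avg_skill;
run;

data heroes_flagged;
   if _n_ = 1 then set skill_mean(keep=avg_skill);
   set heroes;
   above_avg_skill = (intangible_skill > avg_skill);
run;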

What was thrown into the propensity model? The skill sets gathered from the Injustice game; the assumption here is that a character with Superman’s skill set would be written much differently than, say, Catwoman.
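A minimal sketch of the propensity score step, assuming the Injustice skill set lives in hypothetical columns strength, ability, defense, and health:

/* Hypothetical sketch: logistic regression of the above-average flag on the
   Injustice skill set; the predicted probability is the propensity score. */
proc logistic data=heroes_flagged;
   model above_avg_skill(event='1') = strength ability defense health;
   output out=heroes_scored p=propensity_score;
run;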

[Image: 011_pic]


Now it’s time for our throw down.

[Image: 001_pic]

The top four characters by propensity score were:

Cyborg

Supergirl

Aquaman

Black Adam

To determine a winner in the throw-downs, characters were put up against each other across 11 categories.
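For illustration, here is a hedged sketch of how a head-to-head record like the ones below could be tallied; the a1-a11 and b1-b11 category columns are assumptions:

/* Hypothetical sketch: tally category wins, ties, and losses for each match. */
data matchups_scored;
   set matchups;                 /* one row per match, assumed layout */
   array a{11} a1-a11;           /* character A's 11 category scores  */
   array b{11} b1-b11;           /* character B's 11 category scores  */
   length record $ 12;
   wins = 0; ties = 0; losses = 0;
   do i = 1 to 11;
      if a{i} > b{i} then wins + 1;
      else if a{i} = b{i} then ties + 1;
      else losses + 1;
   end;
   record = catx('-', wins, ties, losses);   /* e.g. "6-2-5" */
   drop i;
run;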


Round 1 Takeaways:

[Image: 002_pic]

Our number one seed Cyborg nearly lost to Atrocitus. The result was 6-2-5; that’s read as six wins, two ties, and five losses.

There were no upsets in the first round of play.  A few characters did not win a single category in their match-ups:

Harley Quinn (vs. Captain Cold)

Green Arrow (vs. Batman)

Black Manta (vs. Black Canary)

These three characters were ill-equipped to take on their opponents; it is possible they would have advanced given a different opponent.

[Image: 003_pic]


Round 2 Takeaways:

[Image: 004_pic]

Cyborg (our number one seed) defeated Captain Cold by a larger margin (+3 winning categories) than in his previous match-up against Atrocitus, but with one fewer win.

We begin to see upsets in Round 2:

Robin defeated Black Adam by 1 winning category. Wonder Woman defeated Firestorm by 4 winning categories. Batman defeated Supergirl by 3 winning categories.

By propensity score these were upsets, but from a comic book debate standpoint you could argue for them, e.g. given enough time to prepare, Batman could defeat Supergirl.

[Image: 005_pic]


Round 3 Takeaways:

[Image: 006_pic]

Cyborg falls to Superman, losing by 4 categories. This was the biggest fight Superman had been given in the tournament to date (in both previous rounds he had 9 winning categories).

The upsets keep coming in:

Robin sneaks in a win again by 1 winning category (over Brainiac). Wonder Woman defeats the top seed in her region of the bracket (Aquaman) by 4 winning categories. Batman defeats Green Lantern by 3 winning categories.

[Image: 007_pic]


Final 4 Takeaways:

[Image: 008_pic]

Robin’s Cinderella story comes to an end at the hands of Superman (who won in 9 categories). Robin did fare better than the previous opponents who gave up 9 category wins to Superman… Robin won in 2 categories.

Batman was able to upset Wonder Woman by 2 winning categories. We’re set for a championship round and the original “who wins” debate… Batman versus Superman!

[Image: batman-vs-superman-movie]


Our winner is…

[Image: 009_pic]

Superman defeats Batman, but not in a landslide. Batman lost by two categories, yet was able to win in 5 categories. Previously, the highest number of winning categories against Superman was 3.


What did we learn from diving into the DC data? Comic book writing and fan perception go a long way in determining who wins a throw down debate. If we use propensity modeling, we can have a more even playing field and limit the number of unfair battles.



K-Means Clustering, NBA2k

Recipe: 004 A Data Driven Approach During the NBA Pace and Space Era

FerraraTom

The format of this post will be slightly different from previous recipes. Think of this as a Yelp review: I’ll be sharing the paper I presented at the SESUG 2018 SAS Conference. This will be wordier than usual, but I will start with the recipe card per usual and then we’ll dive deep into the paper. At the end of this post you’ll have a belly full of a new approach to building an NBA team, one that can be applied to one of my favorite game modes in the 2K series… Franchise mode.


[Image: 001, recipe card]

[Image: 002, recipe card]


SESUG Paper 234-2018: Data Driven Approach in the NBA Pace and Space Era

ABSTRACT

Whether you’re an NBA executive, a Fantasy Basketball owner, or a casual fan, you can’t help but enter the conversation of who is a top tier player. Who are currently the best players in the NBA? How do you compare a nuts-and-glue defensive player to a high volume scorer? The answer to all of these questions lies within segmenting basketball performance data.

OVERVIEW

A k-means cluster is a commonly used unsupervised machine learning approach to grouping data. I will apply this method to human performance. This case study will focus on NBA basketball individual performance data. The goal at the end of this case study will be to apply a k-means cluster to identify similar players to use in team construction.

INTRODUCTION 

My childhood was spent in Brooklyn, New York. I’m a die-hard New York Knicks fan. My formative years were spent watching my favorite team get handled by arguably the greatest basketball player of all time, Michael Jordan. At several moments throughout my life, and to this day, the thought crosses my mind: if only we had that player on our team. Over time I have come to terms with the fact that we would never have Michael Jordan or a player of his caliber, but wouldn’t it be interesting if an NBA team could find complementary parts or look-a-like players? This is why I’m writing a paper about finding these look-a-likes, these diamonds in the rough, or as the current term goes, “unicorns”. Let’s begin this journey together in search of a cluster of basketball unicorns.

WATCHING THE GAME TAPE

What do high level performers have in common? In most cases you’ll find they study their sport, study their own game performance, study their opponents, and study the performance of other athletes they strive to be like. The data analyst equivalent of watching game tape is to gather as many independent and dependent variables as possible to perform an analysis. For the NBA data used in this k-means cluster analysis, I took the approach of asking what contributes to success in winning a game. Outscoring your opponent was a no-brainer starting point, but I needed to dig deeper. In how many ways, and by what methods, can you outscore an opponent? The avid basketball fan would agree that how a player scores a basket (i.e. field goal vs. behind the three point line) determines how they fit into an offensive scheme and defines their game plan. Beyond scoring there are other equally important contributors to basketball performance. This is where I began to think of how many hustle and defensive metrics I could gather (i.e. rebounds, assists, steals, blocks, etc.). Could I normalize all of these metrics to get a baseline on player efficiency and, more importantly, effectively identify an individual player’s role in a team’s overall performance? To normalize my metrics I made the decision to produce my raw data on a per minute level; this way I wouldn’t show bias toward high usage players or low usage players. To identify how a player fits into an offensive scheme and their scoring tendencies, I calculated at an individual level what percentage of points scored comes from each method of scoring (i.e. free throws, three pointers made, two point field goals). Once I went through all of my data analyst game tape, I was ready to hold practice and cluster.
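To ground that feature-building step, a minimal SAS sketch; the raw column names (pts, minutes, ft_made, threes_made, and so on) are assumptions, while the derived names match the variables used later in the paper:

/* Hypothetical sketch: per-minute rates and the scoring-mix percentages. */
data nba_features;
   set nba_raw;                              /* assumed season totals */
   ttl_pts_per_m = pts / minutes;
   reb_per_min   = reb / minutes;
   asst_per_min  = asst / minutes;
   stl_per_min   = stl / minutes;
   blk_per_min   = blk / minutes;
   perc_pts_ft   = ft_made / pts;            /* 1 point per free throw */
   perc_pts_3pts = (3 * threes_made) / pts;
   perc_pts_2pts = 1 - perc_pts_ft - perc_pts_3pts;
run;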

HOLDING PRACTICE

Practice makes perfect, but everything in moderation (i.e. the New York Knicks of the 1990’s overworked themselves during practice and would lose steam in long games). Just as I wouldn’t want to over-fit a model on sample data, I won’t get too complicated with my approach to standardizing my variables. Utilizing proc standard, I’ll standardize my clustering variables to have a mean of 0 and a standard deviation of 1. After standardizing the variables I’ll run the data analyst version of a zone defense (proc fastclus, with a macro to create cluster solutions with a maximum of 1 through 9 clusters). I don’t anticipate using a 9 cluster solution once I run the game plan and evaluate my game time results. Ideally I want to keep the number of clusters to a small, manageable number while still showing a striking difference between the groups. To evaluate how many clusters to carry into a final solution, I’ll extract the r-square values from each cluster solution and then merge them to plot an elbow curve. Using proc gplot to create my elbow curve, I’ll want to observe where the line begins to bend (creating an elbow). Finally, before we’re kicked off the court for another team’s practice, I’ll use proc anova to validate my clusters. As a validation metric I’ll use the variable ttl_pts_per_m; this should help identify the difference between a team’s “go-to” option and a player who is more of a complementary piece at best.
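Here is a hedged sketch of that practice plan; the variable list is abbreviated, and the outstat= route to the r-square values is one assumed way to build the elbow data:

/* Hypothetical sketch: standardize, then run proc fastclus for k = 1 to 9. */
proc standard data=nba_features mean=0 std=1 out=nba_std;
   var perc_pts_ft perc_pts_2pts perc_pts_3pts reb_per_min asst_per_min
       stl_per_min blk_per_min fg_att_per_m ft_att_per_min to_per_min;
run;

%macro run_clusters(maxk);
   %do k = 1 %to &maxk;
      proc fastclus data=nba_std maxclusters=&k outstat=stats&k noprint;
         var perc_pts_ft perc_pts_2pts perc_pts_3pts reb_per_min asst_per_min
             stl_per_min blk_per_min fg_att_per_m ft_att_per_min to_per_min;
      run;
   %end;
%mend run_clusters;
%run_clusters(9);

/* Keep the overall r-square row from each outstat data set, then plot. */
data rsq_all;
   set stats1-stats9 indsname=src;
   if _type_ = 'RSQ';
   nclusters = input(compress(src, , 'kd'), 8.);  /* digits of the name */
run;

proc gplot data=rsq_all;
   plot over_all * nclusters;   /* the elbow curve */
run;
quit;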

RUNNING GAME PLAN AND GAME TIME RESULTS

A k-means cluster analysis was conducted to identify underlying subgroups of National Basketball Association athletes based on the similarity of their responses on a set of variables that represent characteristics that could have an impact on 2016-17 regular season performance and play type. The clustering variables were the following quantitative measures:

perc_pts_ft (percentage of points scored from free throws)

perc_pts_2pts (percentage of points scored from 2 point field goals)

perc_pts_3pts (percentage of points scored from 3 point field goals)

‘3pts_made_per_m’N (3 point field goals made per minute)

reb_per_min (rebounds per minute)

asst_per_min (assists per minute)

stl_per_min (steals per minute)

blk_per_min (blocks per minute)

fg_att_per_m (field goals attempted per minute)

ft_att_per_min (free throws attempted per minute)

fg_made_per_m (field goals made per minute)

ft_made_per_m (free throws made per minute)

to_per_min (turnovers per minute)

All clustering variables were standardized to have a mean of 0 and a standard deviation of 1. Data was randomly split into a training set that included 70% of the observations (N=341) and a test set that included 30% of the observations (N=145). A series of k-means cluster analyses was conducted on the training data specifying k=1-9 clusters, using Euclidean distance. The variance in the clustering variables accounted for by the clusters (r-square) was plotted for each of the nine cluster solutions in an elbow curve (see Figure 1 below) to provide guidance in choosing the number of clusters to interpret.

[Figure 1: Elbow curve of r-square values for the 1-9 cluster solutions]
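The 70/30 split described above could be produced with proc surveyselect; a sketch, with an arbitrary seed:

/* Hypothetical sketch: 70/30 train-test split. */
proc surveyselect data=nba_std out=nba_split samprate=0.7 outall seed=2018;
run;

data train test;
   set nba_split;
   if selected then output train;
   else output test;
run;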

Canonical discriminant analysis was used to reduce the clustering variables down to a few canonical variables that accounted for most of the variance in the clustering variables. A scatter-plot of the first two canonical variables by cluster (Figure 2 shown below) indicated that the observations in cluster 3 are the most densely packed, with relatively low within-cluster variance, and did not overlap very much with the other clusters. Cluster 1’s observations had greater spread, suggesting higher within-cluster variance. Observations in cluster 2 have relatively low within-cluster variance, but a few observations overlap with the other clusters.

[Figure 2: Scatter-plot of the first two canonical variables by cluster]
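A sketch of how a plot like Figure 2 could be produced, assuming the cluster assignments live in a dataset clustered with a cluster variable (the variable list is abbreviated as before):

/* Hypothetical sketch: canonical discriminant analysis, then the Figure 2 plot. */
proc candisc data=clustered out=canout ncan=2;
   class cluster;
   var perc_pts_ft perc_pts_2pts perc_pts_3pts reb_per_min asst_per_min
       stl_per_min blk_per_min fg_att_per_m ft_att_per_min to_per_min;
run;

proc gplot data=canout;
   plot can2 * can1 = cluster;   /* first two canonical variables by cluster */
run;
quit;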

The means on the clustering variables showed that athletes in each cluster have uniquely different playing styles.

Cluster 1:

These athletes have high values for percentage of points from free throws, moderate on percentage points from 3 point field goals and low on percentage of points from 2 point field goals. These athletes attempt more field goals per minute, free throws per minute, make more 3 point field goals per minute and have the highest value for assists per minute; these athletes are focal points of a team’s offensive strategy.

Athletes in this cluster: Kevin Durant, Anthony Davis, Stephen Curry

Cluster 2:

The athletes have extremely high values for percentage of points from 2 point field goals, moderate on percentage points from free throws, and extremely low values for percentage of points from 3 point field goals. These athletes rarely make perimeter shots and have low values for assists.

Athletes in this cluster: Rudy Gobert, Hassan Whiteside, Myles Turner

Cluster 3:

The athletes have high values for percentage of points from 3 point field goals, and low values for percentage of points from 2 point field goals and free throws. These athletes stay on the perimeter (high values for 3 point field goals made) but are a secondary option at best, as observed by low field goal attempts per minute.

Athletes in this cluster: Otto Porter, Klay Thompson, Al Horford

In order to externally validate the clusters, an Analysis of Variance (ANOVA) was conducted to test for significant differences between the clusters on total points scored per minute (ttl_pts_per_m). A Tukey test was used for post hoc comparisons between the clusters. The results indicated significant differences between the clusters on ttl_pts_per_m (F(2, 340)=86.67, p<.0001). The Tukey post hoc comparisons showed significant differences between clusters on ttl_pts_per_m, with the exception that clusters 2 and 3 were not significantly different from each other. Athletes in cluster 1 had the highest ttl_pts_per_m (mean=.541, sd=0.141), and cluster 3 had the lowest (mean=.341, sd=0.096).
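That validation step maps to a short proc anova run; a sketch, assuming the same clustered dataset:

/* Hypothetical sketch: validate clusters on total points scored per minute. */
proc anova data=clustered;
   class cluster;
   model ttl_pts_per_m = cluster;
   means cluster / tukey;   /* Tukey post hoc comparisons */
run;
quit;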

CONCLUSION

Using a k-means cluster is a data driven approach to grouping basketball player performance. This method can be used in constructing a team when a salary budget is constrained. The elephant in the room is that this is essentially human behavior, so the validation step using proc anova is critical. The approach I’ve applied to the NBA data is an unsupervised machine learning approach.



https://www.nba2k.com/

http://www.sesug.org/SESUG2018/index.php



K-Means Clustering, Pokemon Go

Recipe: 001 Pokémon Go K-Means Clustering Segmentation

 


FerraraTom

Here’s a treat for all the Pokémon Go players out there. You’ll find below a recipe for a cluster analysis intended to guide you in building the most cost-effective team of Pokémon. The goal of this recipe is to segment Kanto (Gen 1) Pokémon that can be found in the wild, with an emphasis on return on investment (ROI), or in this case Candy cost investment and Gym Battle return. Hope you enjoy! – Tom
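In the same spirit as the other recipes, a minimal SAS sketch of the segmentation; the kanto dataset and its columns (candy cost plus battle stats) are assumptions:

/* Hypothetical sketch: cluster Gen 1 Pokemon on candy cost vs. battle return. */
proc standard data=kanto mean=0 std=1 out=kanto_std;
   var candy_cost max_cp attack defense stamina;
run;

/* maxclusters=4 is an assumed setting */
proc fastclus data=kanto_std maxclusters=4 out=kanto_clusters;
   var candy_cost max_cp attack defense stamina;
run;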

[Images 001-008: Pokémon Go k-means clustering recipe cards]

Good Old Fashioned Pancake Recipe:

https://www.allrecipes.com/recipe/21014/good-old-fashioned-pancakes/

Cool Pokémon Pancake Art:

Support our theme for this week’s analytics:

https://www.pokemongo.com/en-us/

https://www.pokemon.com/us/

Follow this blog on Instagram: Pancake_analytics