K-Means Clustering, Logistic Regression, Nintendo, Propensity Modeling, Regression Modeling, Super Mario

TBCC 2019 Smash Brothers, Segmentation & Strategy: Panel Recap



This Panel was held on:

Friday, August 2, 2019 at 7:30 PM – 8:30 PM

During the Tampa Bay Comic Convention 2019, held at the Tampa Convention Center.

The Panelists were:

Tom Ferrara (@pancake_analytics), Kalyn Hundley (@kehundley08), Andy Polak (@polak_andy)


I want to take a quick moment to discuss the panelists.  I love bringing as many different points of view as possible to these data science panels.  Without that variety it's more of a lecture and less of a discussion.  This mix of panelists gave the audience the data science view, the tech industry view, and the biological sciences view.  The best part about this is that Smash Brothers brought us all together.


Changing the Tier Conversation


One of the main objectives of this panel was getting a discussion going on tier selection in Smash: how we can ground tier selection in data science, and how we can validate our findings against one of the best players in the game.

A k-means cluster uncovers trends within our Smash Brothers data, surfacing the similarities and differences between characters on key in-game attributes.

The more clusters we use, the clearer our picture becomes and the better we can understand the pros and cons of each main selection.



A brief overview of a k-means cluster (a minimal code sketch follows the list):

  • Standardize your variables
  • Analyze your elbow curve
  • Validate your clusters
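To make those three steps concrete, here is a minimal Python sketch using scikit-learn. The file name smash_attributes.csv and the column names are placeholders for whatever character-attribute data you have on hand; the panel's actual models were not built with this exact code.

```python
import pandas as pd
from scipy.stats import f_oneway
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical character-attribute file; column names are placeholders.
df = pd.read_csv("smash_attributes.csv")
features = ["run_speed", "weight", "air_accel_base", "air_accel_change"]

# 1. Standardize your variables so weight and acceleration share a scale.
X = StandardScaler().fit_transform(df[features])

# 2. Analyze your elbow curve: fit k = 1..9 and watch where the
#    within-cluster sum of squares stops dropping sharply.
for k in range(1, 10):
    print(k, KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_)

# 3. Validate your clusters: fit the chosen solution (five segments here)
#    and confirm a key attribute differs significantly across segments.
df["segment"] = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(X)
print(f_oneway(*[g["run_speed"].values for _, g in df.groupby("segment")]))
```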

Treat each game release as a new product launch or a change in the market.

You would re-score your data to understand the current market, and you're able to see how characters migrate between segments and how the meta-game has changed.



We end up with five unique clusters:

Floaters:

This group is the slowest by run speed and lightest by weight.

Jack Of All Trades:

They are the middle group on everything; there is no distinct trend.

Dashers:

Like the Jack of All Trades group but faster.

Air Tanks:

Fast in aerial attacks and the heaviest of the characters.

Speedsters:

This group is the fastest and the lightest.



A propensity model is a statistical scorecard that is used to predict the behavior of your customer or prospect base.  Propensity models are often used to identify those most likely to respond to an offer, or to focus retention activity on those most likely to churn.

So who should be your main?  In this segment I rely on industry knowledge as well (ZeRo's tiers serve as the dependent variable).  I'll build a propensity score with the following independent variables (a quick sketch of the model follows the list):

  • Change in air acceleration
  • Base air acceleration
  • Base speed in the air
  • Base Run Speed
  • Character Weight
  • Ultimate Smash Bros. Cluster
  • Wii-U Smash Bros. Cluster
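The post's tags name logistic regression, so here is a hedged sketch of what that propensity scorecard could look like in Python with scikit-learn. The file name, the column names, and the top_tier_zero flag (1 if ZeRo ranks the character top tier, 0 otherwise) are illustrative stand-ins, not the actual data behind the panel.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical frame: one row per character, with ZeRo's top-tier call
# as the dependent variable (1 = top tier, 0 = not).
df = pd.read_csv("smash_with_clusters.csv")
X = pd.get_dummies(
    df[["air_accel_change", "air_accel_base", "air_speed_base",
        "run_speed", "weight", "cluster_ultimate", "cluster_wiiu"]],
    columns=["cluster_ultimate", "cluster_wiiu"],  # cluster labels are categorical
)
y = df["top_tier_zero"]

# The propensity scorecard: the predicted probability is each character's
# propensity to be ranked as a top-tier pick.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
df["propensity"] = model.predict_proba(X)[:, 1]
print(df.sort_values("propensity", ascending=False)[["character", "propensity"]].head(3))
```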

[Image: the three highest-scoring mains: Wario, Palutena, and Yoshi]


What makes these three stand above the crowd?

They are middle ground on weight and fast air accelerators.

What are the differences between the three?

Wario has a slow run speed.

Palutena is the lightest.

Yoshi is the middle ground of this group.


The Curious Case of Ganondorf


Ganondorf has more in common with Jigglypuff than he does with Bowser.

The reason is that he's quicker and adapts better in aerial attacks and while falling than Bowser does.

On the flip side, I can also say that in Super Smash Bros. Ultimate, Bowser more accurately represents how he's portrayed in the Super Mario franchise.


Game Time: Name that segment: Overview


I personally feel one of the best ways to reinforce learning is through a game.  For this panel I decided to reinforce the k-means segmentation by having volunteers guess which segment the three characters on the screen fall into.

Here was the overview:

  • 5 volunteers
  • 3 characters on the screen
  • All 3 characters belong to the same segment

Volunteers will do their best to convince the panel which segment the characters fall into:

  • Floaters
  • Jack of All Trades
  • Dashers
  • Air Tanks
  • Speedsters

For participating, volunteers receive a fabulous prize.

For this particular game the prize was an amiibo of their choice that works with Smash Ultimate for the Nintendo Switch.


I want to personally thank everyone who attended the panel at the Tampa Bay Comic Convention in Tampa.  I look forward to meeting again in 2020.



Nintendo, Propensity Modeling, Super Mario

Recipe 014: Smash Brothers Main Selection


In this recipe I'd like you to chow down on a Smash Brothers analytical approach to selecting your main character.  The approach I'm going to introduce puts an emphasis on what makes a character unique.



 

[Figure: how the picture of your main sharpens as more segments are applied]

Before I start diving into the Smash Brothers data, let's discuss the k-means clustering approach.  A k-means helps paint a clear picture of our data; in this case specifically, it will group Smash Brothers characters by their attributes to create a picture of who your main should be.  Our characters will be assigned to segments

(tiers… everyone loves to put tiers around Smash characters, but they're based solely on opinion and player preference)

based on trends in our data and how close a character is to each group.

Take the above picture: without applying this approach we are in the top left quadrant, where we only have a faint idea of who our main should be.  As we apply more segments and surface more trends in the data, we eventually end up in the bottom left quadrant: a clear picture of who our main should be.

Now, I keep mentioning trends in our data.  How do we find trends in data where the attributes are, on the surface, completely skewed and non-normalized?  Take, for instance, a character's weight: as a whole number it will always be larger than a character's acceleration rate in the air (used for aerial attacks).

We can surface these trends by standardizing our variables, setting every variable to have a mean of zero.  In doing so, the analysis focuses strictly on the trends in our data and we can have a pretty interesting discussion, e.g., Yoshi is more similar to Kirby than he is to Pac-Man.
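Here is a tiny sketch of that standardization step, with made-up numbers chosen only to illustrate the scale problem (they are not real character stats).

```python
import numpy as np

# Toy, made-up values: raw weights live near 100 while raw air
# acceleration lives near 0.05, so un-standardized distances would be
# dominated entirely by weight.
weight    = np.array([104.0, 79.0, 95.0, 113.0])
air_accel = np.array([0.04, 0.08, 0.05, 0.03])

# Standardize each variable: subtract its mean and divide by its standard
# deviation, so both end up centered at zero with comparable spread.
z = lambda v: (v - v.mean()) / v.std()
scaled = np.column_stack([z(weight), z(air_accel)])

# Now a distance between two characters weighs both attributes fairly,
# which is what lets us say one character trends with another.
print(np.linalg.norm(scaled[0] - scaled[1]))
```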


 

[Image: Super Smash Bros. Ultimate mural]

 

In preparation for this data story I came across the following article on Business Insider: "These are the 11 best 'Super Smash Bros. Ultimate' characters, according to the world's number-one ranked player".

Here’s an excerpt from the article:

[Image: excerpt from the Business Insider article]

And here is ZeRo being named the best overall player:

[Image: ZeRo being named the best overall player]

This triggered a thought.  I haven't done this on the Pancakes Analytics page yet, but typically you would bring a k-means cluster into production and re-score your segments on an agreed-upon cadence.  In this case I'll treat the release of a new game as the cadence.

I'll run a k-means clustering on the character attributes in the Wii-U version, and then a k-means clustering on the same character attributes for the Switch version.

While going through this process I'll only include characters who were in both games and whose data is clean, i.e., every character has a weight and has available acceleration data.  Sorry, Inkling, you're not in this segmentation.
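As a rough sketch of that re-scoring cadence, the snippet below clusters the two releases separately and lines the results up by character. The file names and column names are placeholders, and this is not the exact code behind the figures below.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_release(df, features, k=5, seed=42):
    """Standardize one release's attributes and assign k-means segments."""
    X = StandardScaler().fit_transform(df[features])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)

# Hypothetical files, one row per character per release.
features = ["run_speed", "weight", "air_accel_base", "air_accel_change", "air_speed"]
wiiu = pd.read_csv("smash_wiiu.csv").dropna(subset=features)
ult = pd.read_csv("smash_ultimate.csv").dropna(subset=features)

# Keep only characters that appear in both games with clean data
# (sorry, Inkling), so the two cadences are comparable.
shared = set(wiiu["character"]) & set(ult["character"])
wiiu = wiiu[wiiu["character"].isin(shared)].copy()
ult = ult[ult["character"].isin(shared)].copy()

wiiu["segment"] = cluster_release(wiiu, features)
ult["segment"] = cluster_release(ult, features)

# Line the two cadences up to see which characters changed segments.
migration = wiiu[["character", "segment"]].merge(
    ult[["character", "segment"]], on="character", suffixes=("_wiiu", "_ultimate"))
print(migration[migration["segment_wiiu"] != migration["segment_ultimate"]])
```

One caveat: raw k-means labels are not aligned between two independent runs, so in practice you would map each numeric label to its named segment (Floaters, Air Tanks, and so on) before calling anything a migration.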

[Figure: Wii-U segmentation]

[Figure: Ultimate segmentation]

Above are both segmentation cadences; characters will be split into these segment tiers:

  • Floaters (Far right circle)
  • Jack of all Trades (Smack in the middle)
  • Dashers (Faster than your Jack of all Trades segment but not fast enough to be elite in that attribute)
  • Air Tanks (The bottom left circle)
  • Speedsters  (Top left circle)

These aren't ranked by which tier is the best, but we can make some assumptions.  In the Jack of All Trades segment, you most likely won't win matches often, but you'll be competitive.

Smash Brothers is a unique fighting game in that characters have a weight to them.  Being lightweight does have its advantages, but the learning curve of playing as a Speedster might be too high-risk, high-reward for you.

The Floaters: if you select someone with a weight advantage in this group, you're likely to win your match, but you have to master the move set (your smash move).

Air Tanks are a no-brainer, I think, for any skill set.  If you want a high likelihood of lasting until time runs out, be an Air Tank (this won't guarantee a win; that really depends on your competition).


 

[Figure: Ganondorf's jump from the Air Tanks to the Floaters between releases]

I'm hoping this visual stood out to you, the reader: Ganondorf made a large leap from the Air Tanks to the Floaters.  This doesn't only speak to Ganondorf; it also tells you something about Bowser.

When I present this to clients and to those wanting to learn about a particular dataset, this is how it translates:

Ganondorf has more in common with Jigglypuff than he does with Bowser.  The reason is that he's quicker and adapts better in aerial attacks and while falling than Bowser does.

On the flip side, I can also say that in Super Smash Bros. Ultimate, Bowser more accurately represents how he's portrayed in the Super Mario franchise.

Neither of these characters was "nerfed", only re-calibrated, so there's a distinct difference between the two.

What do you do with this information?  If your main is a Floater, Ganondorf would be a good transitional character if you were looking to play as a character with more weight.  Or say you always play as an Air Tank because you assume anyone who mains Kirby shouldn't be playing Smash Bros.; then Ganondorf is a good transitional main for when you eventually give in and select Kirby, "by accident".



 

Below are the segments and a brief overview of the characters within each segment:

[Figure: the Floaters segment]

This segment has high variability, and you can see this from the oblong shape of the circle.  Ganondorf and Jigglypuff are driving this shape: although they are in the same segment and are more similar to each other than to the other segments, they are the furthest apart within this segment.

Now hold up… wait a second.  Didn't I just try to prove a point about how similar they are?  Yes, but in relation to who is more similar to Ganondorf: Jigglypuff or Bowser.  If instead I posed the question of who is more similar to Ganondorf, Jigglypuff or Kirby… that answer is Kirby.

This group, on average, is the slowest by run speed and the lightest by weight… they Float.


 

[Figure: the Jack of All Trades segment]

This segment is the medium of everything.  There's no uniquely distinct trend in their data.  Playing as Pikachu versus Mega Man would have some gameplay differences, but statistically speaking you are starting with the same underlying stats.

If you're new to the series, this is a good group to start with… they're a Jack of All Trades.

 


 

[Figure: the Dashers segment]

The Dasher segment is very similar to the Jack of All Trades segment, only slightly faster.  Playing in this group, you could potentially do more harm than good if you're selecting it because you want to stay middle ground.  You could… Dash yourself off the arena.


 

[Figure: the Air Tanks segment]

Air Tanks are fast in aerial attacks… and the heaviest?  I anticipate this group will be re-calibrated by the next release.  In other words… Bowser has no business being as effective as he is in the air at his weight; normally these two variables don't correlate.  I guess all the time spent battling a plumber who can flip and jump is finally paying off.


 

[Figure: the Speedsters segment]

This is your high-risk, high-reward group.  Characters in this segment are the fastest and the lightest.  I personally am awful playing as Sonic; he's too fast for my playing level, but a seasoned player could probably mop the floor with him.



So who should be your main?  In this segment I rely on industry knowledge as well (ZeRo's tiers serve as the dependent variable).  I'll build a propensity score with the following independent variables:

  • Change in air acceleration
  • Base air acceleration
  • Base speed in the air
  • Base Run Speed
  • Character Weight
  • Ultimate Smash Bros. Cluster
  • Wii-U Smash Bros. Cluster

[Figure: propensity model output]

The output gives me the likelihood that ZeRo would rank the character as a top-tier character (a short sketch of how these influencers can be surfaced follows the lists below).  The highest influencers on predictability were:

  • Change in air acceleration
  • Run speed

The lowest influencers were:

  • Base air acceleration
  • Ultimate Smash Bros. Cluster (this highlights the bias towards the Wii-U stats influencing ZeRo's rankings)
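As a hedged sketch, assuming a fitted scikit-learn pipeline like the one earlier in this post, the influence of each standardized input can be read off the absolute size of its coefficient; the ranking above comes from the actual model, not from this snippet.

```python
import pandas as pd

# Assumes `model` and `X` from the earlier propensity sketch. Because the
# inputs were standardized, the absolute coefficient size is a rough
# measure of each variable's influence on the score.
logit = model.named_steps["logisticregression"]
influence = pd.Series(logit.coef_[0], index=X.columns).abs().sort_values(ascending=False)
print(influence)  # largest values are the strongest influencers, smallest the weakest
```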

Drum roll please….

[Images: the three recommended mains: Wario, Palutena, and Yoshi]

Your main should be one of the above three.  This is the data-driven solution to selecting your main.

Really looking forward to the comments section on this one 🙂





K-Means Clustering, NBA2k

Recipe 004: A Data-Driven Approach During the NBA Pace and Space Era


The format of this post will be slightly different from previous recipes.  Think of this as a Yelp review: I'll be sharing the paper I presented during the SESUG 2018 SAS Conference.  This will be wordier than usual, but I will start with the recipe card per usual and then we'll dive deep into the paper.  At the end of this post you'll have a full belly and a new approach to building an NBA team, one that can be applied to one of my favorite game modes in the 2K series… Franchise mode.


[Images: recipe card]


SESUG Paper 234-2018 Data Driven Approach in the NBA Pace and Space Era

ABSTRACT

Whether you're an NBA executive, a Fantasy Basketball owner, or a casual fan, you can't help but begin the conversation of who is a top-tier player. Currently, who are the best players in the NBA? How do you compare a nuts-and-glue defensive player to a high-volume scorer? The answer to all of these questions lies in segmenting basketball performance data.

OVERVIEW

A k-means cluster is a commonly used unsupervised machine learning approach to grouping data. I will apply this method to human performance. This case study will focus on NBA individual performance data. The goal at the end of this case study will be to apply a k-means cluster to identify similar players to use in team construction.

INTRODUCTION 

My childhood was spent in Brooklyn, New York. I'm a die-hard New York Knicks fan. My formative years were spent watching my favorite team get handled by arguably the greatest basketball player of all time, Michael Jordan. At several moments throughout my life, and to this day, it crosses my mind: if only we had that player on our team. Over time I have come to terms with the fact that we would never have Michael Jordan or a player of his caliber, but wouldn't it be interesting if an NBA team could find complementary parts or look-alike players? This is why I'm writing a paper about finding these look-alikes, these diamonds in the rough, or, as the current term goes, "unicorns". Let's begin this journey together in search of a cluster of basketball unicorns.

WATCHING THE GAME TAPE

What do high-level performers have in common? In most cases you'll find they study their sport, study their own game performance, study their opponents, and study the performance of other athletes they strive to be like. The data analyst equivalent of watching game tape would be to gather as many independent and dependent variables as possible to perform an analysis. For the NBA data used in this k-means cluster analysis, I took the approach of asking what contributes to success in winning a game. Outscoring your opponent was a no-brainer starting point, but I needed to dig deeper. In how many ways, and by what methods, can you outscore an opponent? The avid basketball fan would agree that how a player scores a basket (i.e., a field goal versus a shot from behind the three-point line) determines how they fit into an offensive scheme and defines their game plan. Beyond scoring there are other, equally important contributors to basketball performance. This is where I began to think about how many hustle and defensive metrics I could gather (i.e., rebounds, assists, steals, blocks, etc.). Could I normalize all of these metrics to get a baseline on player efficiency and, more importantly, effectively identify an individual player's role in a team's overall performance? To normalize my metrics I made the decision to produce my raw data on a per-minute level; this way I wouldn't show bias toward high-usage or low-usage players. To identify how a player fits into an offensive scheme and their scoring tendencies, I calculated at an individual level what percent of points scored comes from each method of scoring (i.e., free throw percentage, three-pointers made, two-point field goals). Once I went through all of my data analyst game tape, I was ready to hold practice and cluster.

HOLDING PRACTICE

Practice makes perfect, but everything in moderation (i.e., the New York Knicks of the 1990s overworked themselves during practice and would lose steam in long games). Just as I wouldn't want to over-fit a model on sample data, I won't get too complicated with my approach to standardizing my variables. Utilizing proc standard, I'll standardize my clustering variables to have a mean of 0 and a standard deviation of 1. After standardizing the variables I'll run the data analyst version of a zone defense (proc fastclus, with a macro to create solutions with max clusters from 1 through 9). I don't anticipate using a 9-cluster solution once I run the game plan and evaluate my game-time results. Ideally I want to keep the number of clusters to a small, manageable number while still showing a striking difference between the groups. To evaluate how many clusters to carry into a final solution, I'll extract the r-square values from each cluster solution and then merge them to plot an elbow curve. Using proc gplot to create my elbow curve, I'll want to observe where the line begins to curve (creating an elbow). Finally, before we're kicked off the court for another team's practice, I'll use proc anova to validate my clusters. As a validation metric I'll use the variable "ttl_pts_per_m"; this should help identify the difference between a team's "go-to" option and a player who is more of a complementary piece at best.
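The paper's workflow above is SAS (proc standard, proc fastclus, proc gplot, proc anova). For readers without SAS, here is a rough Python sketch of the same practice plan; the file name and column list are placeholders, and this is not the code used in the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-minute player stats; column names are placeholders.
nba = pd.read_csv("nba_2016_17_per_minute.csv")
features = ["perc_pts_ft", "perc_pts_2pts", "perc_pts_3pts", "reb_per_min",
            "asst_per_min", "stl_per_min", "blk_per_min", "fg_att_per_m",
            "ft_att_per_min", "fg_made_per_m", "ft_made_per_m", "to_per_min"]

# proc standard equivalent: mean 0, standard deviation 1.
X = StandardScaler().fit_transform(nba[features])

# proc fastclus macro equivalent: fit k = 1..9 and plot an elbow curve
# (within-cluster variance here plays the role of the r-square trace).
inertia = [KMeans(n_clusters=k, n_init=10, random_state=1).fit(X).inertia_
           for k in range(1, 10)]
plt.plot(range(1, 10), inertia, marker="o")
plt.xlabel("number of clusters")
plt.ylabel("within-cluster variance")
plt.show()

# Fit the chosen solution (three clusters, as in the paper) and keep the
# assignment for the validation step described later.
nba["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
```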

RUNNING GAME PLAN AND GAME TIME RESULTS

A k-means cluster analysis was conducted to identify underlying subgroups of National Basketball Association athletes based on their similarity of responses on 11 variables that represent characteristics that could have an impact on 2016-17 regular season performance and play type. Clustering variables included quantitative variables measuring:

  • perc_pts_ft (percentage of points scored from free throws)
  • perc_pts_2pts (percentage of points scored from 2-point field goals)
  • perc_pts_3pts (percentage of points scored from 3-point field goals)
  • '3pts_made_per_m'N (3-point field goals made per minute)
  • reb_per_min (rebounds per minute)
  • asst_per_min (assists per minute)
  • stl_per_min (steals per minute)
  • blk_per_min (blocks per minute)
  • fg_att_per_m (field goals attempted per minute)
  • ft_att_per_min (free throws attempted per minute)
  • fg_made_per_m (field goals made per minute)
  • ft_made_per_m (free throws made per minute)
  • to_per_min (turnovers per minute)

All clustering variables were standardized to have a mean of 0 and a standard deviation of 1. Data was randomly split into a training set that included 70% of the observations (N=341) and a test set that included 30% of the observations (N=145). A series of k-means cluster analyses were conducted on the training data specifying k=1-9 clusters, using Euclidean distance. The variance in the clustering variables accounted for by the clusters (r-square) was plotted for each of the nine cluster solutions in an elbow curve (see Figure 1 below) to provide guidance for choosing the number of clusters to interpret.

[Figure 1: Elbow curve of r-square values for the nine cluster solutions]

Canonical discriminant analysis was used to reduce the 11 clustering variables down to a few variables that accounted for most of the variance in the clustering variables. A scatter plot of the first two canonical variables by cluster (Figure 2, shown below) indicated that the observations in cluster 3 are the most densely packed, with relatively low within-cluster variance, and did not overlap very much with the other clusters. Cluster 1's observations had greater spread, suggesting higher within-cluster variance. Observations in cluster 2 have relatively low within-cluster variance, but a few observations overlap with other clusters.

[Figure 2: Scatter plot of the first two canonical variables by cluster]

The means on the clustering variables showed that athletes in each cluster have uniquely different playing styles.

Cluster 1:

These athletes have high values for percentage of points from free throws, moderate values for percentage of points from 3-point field goals, and low values for percentage of points from 2-point field goals. These athletes attempt more field goals and free throws per minute, make more 3-point field goals per minute, and have the highest values for assists per minute; these athletes are focal points of a team's offensive strategy.

Athletes in this cluster: Kevin Durant, Anthony Davis, Stephen Curry

Cluster 2:

The athletes have extremely high values for percentage of points from 2-point field goals, moderate values for percentage of points from free throws, and extremely low values for percentage of points from 3-point field goals. These athletes rarely make perimeter shots and have low values for assists.

Athletes in this cluster: Rudy Gobert, Hassan Whiteside, Myles Turner

Cluster 3:

The athletes have high values for percentage of points from 3-point field goals, and low values for percentage of points from 2-point field goals and free throws. These athletes stay on the perimeter (high values for 3-point field goals made) but are a secondary option at best, as observed by low field goal attempts per minute.

Athletes in this cluster: Otto Porter, Klay Thompson, Al Horford

In order to externally validate the clusters, an Analysis of Variance (ANOVA) was conducted to test for significant differences between the clusters on total points scored per minute (ttl_pts_per_m). A Tukey test was used for post hoc comparisons between the clusters. The results indicated significant differences between the clusters on ttl_pts_per_m (F(2, 340)=86.67, p<.0001). The Tukey post hoc comparisons showed significant differences between clusters on ttl_pts_per_m, with the exception that clusters 2 and 3 were not significantly different from each other. Athletes in cluster 1 had the highest ttl_pts_per_m (mean=.541, sd=0.141), and cluster 3 had the lowest ttl_pts_per_m (mean=.341, sd=0.096).
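For completeness, here is a hedged Python sketch of that validation step, assuming the nba frame with the cluster assignment from the earlier sketch; the F statistic and p-values quoted above come from the paper's SAS run, not from this snippet.

```python
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumes `nba` carries the cluster assignment and total points per minute.
groups = [g["ttl_pts_per_m"].values for _, g in nba.groupby("cluster")]
print(f_oneway(*groups))                                        # overall ANOVA F-test
print(pairwise_tukeyhsd(nba["ttl_pts_per_m"], nba["cluster"]))  # Tukey post hoc pairs
```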

CONCLUSION

Using a k-means cluster is a data-driven approach to grouping basketball player performance. This method can be used in constructing a team when the salary budget is constrained. The elephant in the room is that this is essentially human behavior; therefore, the validation step using proc anova is critical. The approach I've applied to the NBA data is an unsupervised machine learning approach, guided by that validation.





https://www.nba2k.com/

http://www.sesug.org/SESUG2018/index.php

