--- title: "Methods Overview" author: "Evgeni Chasnovski" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Methods Overview} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` `comperank` provides tools for computing ranking and rating based on competition results. It is tightly connected to its data infrastructure package [comperes](https://github.com/echasnovski/comperes). Basic knowledge about creating [valid competition results](https://github.com/echasnovski/comperes#store-and-convert) and [Head-to-Head expressions](https://github.com/echasnovski/comperes#head-to-head) with `comperes` is needed in order to efficiently use `comperank`. Understanding of __competition__ is quite general: it is a set of __games__ (abstract event) in which __players__ (abstract entity) gain some abstract __scores__ (typically numeric). The most natural example is sport results, however not the only one. Product rating can also be considered as a competition between products as "players". Here a "game" is a customer that reviews a set of products by rating them with numerical "score" (stars, points, etc.). __Rating__ is a list (in the ordinary sense) of numerical values, one for each player, or the numerical value itself. Its interpretation depends on rating method: either bigger value indicates better player performance or otherwise. __Ranking__ is a rank-ordered list (in the ordinary sense) of players: rank 1 indicates player with best performance. `comperank` leverages the [tidyverse](https://www.tidyverse.org/) ecosystem of R packages. Among other things, it means that the main output format is [tibble](http://tibble.tidyverse.org/). ## Overview `comperank` gets inspiration from the book ["Who's #1"](https://www.amazon.com/Whos-1-Science-Rating-Ranking/dp/069116231X) by Langville and Meyer. It provides functionality for the following rating algorithms: - Algorithms with __fixed Head-to-Head structure__: - Simplified Massey method with `rate_massey()` and `rank_massey()`. - Simplified Colley method with `rate_colley()` and `rank_colley()`. - Algorithms with __variable Head-to-Head structure__: - Keener method with `rate_keener()` and `rank_keener()`. - Markov method with `rate_markov()` and `rank_markov()`. - Offense-Defense method with `rate_od()` and `rank_od()`. - Algorithms with __iterative nature__: - General Iterative ratings with `rate_iterative()`, `rank_iterative()`, and `add_iterative_ratings()`. - Elo ratings with `rate_elo()`, `rank_elo()`, and `add_elo_ratings()`. As you can see, there are three sets of functions: - `rate_*()`. Its output is a tibble with columns `player` (player identifier) and at least one `rating_*` (rating value). Names of rating columns depend on rating method. - `rank_*()`. Its default output is similar to previous one, but with `ranking_*` instead of rating columns. It runs `rate_*()` and does ranking with correct direction. One can use option `keep_rating = TRUE` to keep rating columns in the output. - `add_*_ratings()`. These functions are present only for algorithms with iterative nature and competition results with games only between two players. They return tibble with row corresponding to a game and extra columns indicating ratings of players before and after the game. This vignette provides examples of basic usage of these functions. To learn more about algorithms behind them, see corresponding help pages. 
For this vignette we will need the following packages:

```{r library}
library(rlang)

# This also loads comperes package
suppressPackageStartupMessages(library(comperank))
```

## Structure of competition results

All functions in `comperank` expect competition results in one of the formats from the `comperes` package: either __long__ or __wide__ format.

__Long format__ is the most abstract way of presenting competition results. Basically, it is a data frame (or tibble) with columns `game` (game identifier), `player` (player identifier) and `score`, where _each row represents the score of a particular player in a particular game_. One game can consist of a __variable__ number of players, which makes this format more flexible. Inside a game all players are treated equally.

Programmatically, long format is represented with the `longcr` S3 class, which should be created with the `as_longcr()` function from `comperes`.

For examples we will use the `ncaa2005` data set from the `comperes` package, which is already of `longcr` class. It contains competition results of an isolated group of Atlantic Coast Conference teams provided in the book __"Who's #1"__:

```{r ncaa2005-long}
ncaa2005
```

__Wide format__ is a more convenient way to store results with a __fixed__ number of players in a game. _Each row represents the scores of all players in a particular game_. Data should be organized in pairs of columns "player"-"score". The identifier of a pair should go after the respective keyword and consist only of digits. For example: `player1`, `score1`, `player2`, `score2`. Order doesn't matter. Column `game` is optional.

Programmatically, wide format is represented with the `widecr` S3 class, which should be created with the `as_widecr()` function from `comperes`:

```{r ncaa2005-wide}
comperes::as_widecr(ncaa2005)
```

__All `comperank` functions expect either a data frame with long format structure, a `longcr` object, or a `widecr` object__.

## Algorithms with fixed Head-to-Head structure

__Massey__ and __Colley__ methods were initially designed for competitions where:

- Games are held only between two players.
- It is assumed that the score is numeric and that higher values indicate better player performance in a game.

### Massey method

The idea of the Massey method is that the difference in ratings should be proportional to the score difference in direct confrontations. A bigger value indicates better player performance.

```{r massey}
rate_massey(ncaa2005)

rank_massey(ncaa2005)

rank_massey(ncaa2005, keep_rating = TRUE)
```

### Colley method

The idea of the Colley method is that ratings should be proportional to the share of a player's won games. A bigger value indicates better player performance.

```{r colley}
rank_colley(ncaa2005, keep_rating = TRUE)
```

## Algorithms with variable Head-to-Head structure

All algorithms with variable Head-to-Head structure depend on the user supplying a custom Head-to-Head expression for computing the quality of direct confrontations between all pairs of players of interest.

Computation of Head-to-Head values is done with functionality of the `comperes` package. Programmatically, it is implemented as a summary of players' matchups - mini-"games" in `widecr` format between a pair of players. In other words, for every directed pair of players (order matters, including a "pair" of a player with him/herself):

- A data frame of matchups is computed in wide format, i.e. with columns `game`, `player1`, `score1`, `player2`, `score2`.
- This data frame is summarised with a Head-to-Head expression supplied in [dplyr](http://dplyr.tidyverse.org/) fashion.
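These matchups can be inspected directly, which helps when designing a custom Head-to-Head expression. Below is a small illustration (assuming the current `comperes` API with `get_matchups()`); the manual `dplyr` summary simply mirrors what a Head-to-Head expression does per pair of players:

```{r matchups}
# All directed matchups (including self matchups) that Head-to-Head
# expressions summarise
comperes::get_matchups(ncaa2005)

# Summarising matchups per directed pair of players by hand mirrors what a
# Head-to-Head expression like `mean(score1)` does under the hood
dplyr::summarise(
  dplyr::group_by(comperes::get_matchups(ncaa2005), player1, player2),
  mean_score1 = mean(score1)
)
```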
For more robust usage, `comperes` provides `h2h_funs` - a list of the most common Head-to-Head [expressions](http://rlang.r-lib.org/reference/quotation.html), which are designed to be used with [rlang](http://rlang.r-lib.org/)'s [unquoting](http://rlang.r-lib.org/reference/quasiquotation.html) mechanism. All `comperank` functions are designed to be used smoothly with it.

Examples of computing Head-to-Head values for more clarity:

```{r h2h-examples}
# Examples of h2h_funs elements
names(h2h_funs)

h2h_funs[1:3]

# Computing Head-to-Head values with unquoting
comperes::h2h_long(ncaa2005, !!! h2h_funs)

comperes::h2h_mat(ncaa2005, !!! h2h_funs["mean_score"])

# Computing Head-to-Head values manually
comperes::h2h_mat(ncaa2005, mean(score1))

# To account for self play use `if-else`
comperes::h2h_mat(ncaa2005, if (player1[1] == player2[1]) 0 else mean(score1))
```

All functions for methods with variable Head-to-Head structure are designed with this rule in mind: __the bigger the Head-to-Head value, the better player1 performed against player2__.

### Keener method

The Keener method is based on the idea of "relative strength" - the strength of a player relative to the strength of the players he/she has played against. It is computed from the provided Head-to-Head values with some flexible algorithmic adjustments that make the method more robust. A bigger value indicates better player performance.

```{r keener}
rank_keener(ncaa2005, !!! h2h_funs["mean_score"], keep_rating = TRUE)
```

### Markov method

The main idea of the Markov method is that players "vote" for other players' performance. Voting is done with Head-to-Head values: the bigger the value, the more "votes" player2 gives to player1. For example, if the Head-to-Head value is "number of wins", then player2 "votes" for player1 proportionally to the number of times player1 won in a matchup with player2.

__Beware__: Head-to-Head values for self plays need careful consideration.

The actual "voting" is done in [Markov chain](https://en.wikipedia.org/wiki/Markov_chain) fashion: Head-to-Head values are organized into a stochastic matrix whose vector of stationary probabilities is declared to be the output ratings. A bigger value indicates better player performance.

As stochastic matrices can be averaged (with weights), this is the only method capable of directly averaging ratings across different Head-to-Head expressions.

```{r markov}
rank_markov(ncaa2005, !!! h2h_funs["num_wins"], keep_rating = TRUE)

rank_markov(
  ncaa2005,
  !!! h2h_funs[c("num_wins", "mean_score_diff_pos")],
  weights = c(0.2, 0.8),
  keep_rating = TRUE
)
```

### Offense-Defense method

The idea of the Offense-Defense (OD) method is to account for different abilities of players by combining different ratings:

- A player who can achieve a _high_ Head-to-Head value (even against a player with a strong defense) is said to have a __strong offense__, which results in a _high_ offensive rating.
- A player who can force opponents into achieving a _low_ Head-to-Head value (even if they have a strong offense) is said to have a __strong defense__, which results in a _low_ defensive rating.

Offensive and defensive ratings describe different skills of players. In order to fully rate players, OD ratings are computed: offensive ratings divided by defensive ones. The bigger the OD rating, the better the player's performance.
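All three ratings can be seen side by side with `rate_od()`. This is a small illustration; the rating column names `rating_off`, `rating_def` and `rating_od` are assumed here - see `?rate_od` for the authoritative description:

```{r od-rate}
od_ratings <- rate_od(
  ncaa2005,
  if (player1[1] == player2[1]) 0 else mean(score1)
)
od_ratings

# The OD rating should equal the offensive rating divided by the defensive one
# (column names are assumed; see `?rate_od`)
all.equal(od_ratings$rating_od, od_ratings$rating_off / od_ratings$rating_def)
```

The corresponding ranking function `rank_od()` keeps these ratings when called with `keep_rating = TRUE`: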
```{r offense-defense}
rank_od(
  ncaa2005,
  if (player1[1] == player2[1]) 0 else mean(score1),
  keep_rating = TRUE
)
```

## Algorithms with iterative nature

Rating methods with iterative nature assume that games occur in a particular order. All players have some initial ratings, which are updated after every game in the order the games appear. Although it is possible to consider games with more than two players, `comperank` only supports competition results with all games between two players.

### Iterative ratings

Iterative ratings represent the general approach to ratings with iterative nature. It needs a custom rating function and initial player ratings to perform the iterative computation.

The rating function should accept four arguments: `rating1` (scalar rating of the first player before the game), `score1` (his/her score), and `rating2` and `score2` for the data about the second player's performance. It should return a numeric vector of length 2 whose elements represent the ratings of the first and second player after the game.

All functions assume that the order in which games were played is identical to the order of values in column `game` (if present) or is defined by the row order.

Arguably, the most useful function is `add_iterative_ratings()`, which adds to the `widecr` format of competition results information about players' ratings before and after every game. `rate_iterative()` and `rank_iterative()` return ratings after the last game.

```{r iterative}
# Adds 1 to winner's rating and subtracts 1 from loser's rating
test_rate_fun <- function(rating1, score1, rating2, score2) {
  c(rating1, rating2) + ((score1 >= score2) * 2 - 1) * c(1, -1)
}

add_iterative_ratings(ncaa2005, test_rate_fun)

# Revert the order of games
ncaa2005_rev <- ncaa2005
ncaa2005_rev$game <- 11 - ncaa2005_rev$game

add_iterative_ratings(ncaa2005_rev, test_rate_fun)

# Ratings after the last game
rank_iterative(ncaa2005, test_rate_fun, keep_rating = TRUE)
```

### Elo method

The Elo method is, basically, an iterative rating method with a fixed [Elo](https://en.wikipedia.org/wiki/Elo_rating_system) rating function. The general idea is that the winner's rating increase should be bigger the stronger the opponent is relative to the winner. In other words, a win over a better player should lead to a bigger rating increase, while a win over a considerably weaker player shouldn't affect the rating much.

```{r elo}
add_elo_ratings(ncaa2005)

add_elo_ratings(ncaa2005_rev)

rank_elo(ncaa2005, keep_rating = TRUE)

rank_elo(ncaa2005_rev, keep_rating = TRUE)
```
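To illustrate how the Elo method fits into the general iterative framework, one can write an Elo-style update by hand and pass it to `add_iterative_ratings()`. This is a sketch of the classic Elo formula with `K = 30` and the 400-point scale; it is not guaranteed to match the exact defaults of `add_elo_ratings()`:

```{r elo-by-hand}
# A hand-written Elo-style rating function for the iterative interface.
# K = 30 and the 400-point scale are classic Elo constants assumed here;
# they may differ from the defaults used by `add_elo_ratings()`.
elo_like_fun <- function(rating1, score1, rating2, score2) {
  K <- 30
  # Expected result of player1 given current ratings
  expected1 <- 1 / (1 + 10^((rating2 - rating1) / 400))
  # Actual result from player1's point of view: 1 win, 0.5 tie, 0 loss
  result1 <- (score1 > score2) + 0.5 * (score1 == score2)
  delta <- K * (result1 - expected1)
  c(rating1 + delta, rating2 - delta)
}

add_iterative_ratings(ncaa2005, elo_like_fun)
```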