Money is the root of all evil. Our evil endeavors can only succeed if we are well funded. Moreover, our funds should come from an untraceable or laundered source. While cryptocurrency is a viable payment method that seems reasonably untraceable, let us not stake our funds on a somewhat volatile market. Instead let us use gold, but not real gold. Let us use the closest thing to a “digital gold standard”: World of Warcraft gold.
In World of Warcraft, you are able to farm gold and sell it for real fiat currency online. However, farming is tedious, honest, and honorable work. For a supervillain, this seems both time-consuming and resource-intensive. Luckily, there is a less-than-righteous way for us to earn a lot of gold very quickly: gambling. The game is called Deathrolling, and it involves two players taking turns generating a random number, with the first person to “roll” a 1 losing.
More specifically, the game works as follows:
- Both players agree ahead of time how much gold to wager
- Player 1 starts by generating a random number between 1 and 10x the wager amount using /roll <max>
- Player 2 rolls for a number between 1 and the number that Player 1 rolled.
- The players take turns rolling with the new maximum being the result of the previous roll until one player rolls a 1.
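To make the rules concrete, here is a single-game trace (a sketch of our own; Python's randint stands in for the in-game /roll, and the 100-gold wager is just an example):

```python
from random import randint

wager = 100                # gold wagered by each player (example value)
current_max = 10 * wager   # Player 1's first roll is /roll 1000
player = 1

while True:
    result = randint(1, current_max)   # stand-in for /roll <max>
    print(f"Player {player} rolls {result} (1-{current_max})")
    if result == 1:
        print(f"Player {player} rolled a 1 and loses {wager} gold")
        break
    current_max = result               # the result becomes the next maximum
    player = 2 if player == 1 else 1   # players alternate turns
```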
Now, we’re not fools. We know that in other gambling games there are hidden advantages. In fact, many games have hidden advantages and statistical patterns that we as humans may erroneously chalk up to “luck”. If we are going to be Deathrolling for gold, we want to make sure that we are not going to lose our fortune and become destitute. So we propose an investigation into the chances of winning a Deathroll and some of the factors we can manipulate to our advantage.
Specifically, we want to know whether there is an advantage to being the first player to roll, and whether there is a wager amount that is the most “profitable”, with the greatest return for risk. We will take an intuitionist approach throughout this investigation: no formal proofs, just simulated results.
Hunch #1: The probability of losing on any given roll (\(r_n\)) is determined by the previous roll such that \(p(l) =\frac{1}{r_{n-1}}\).
On every roll, there is a chance that we lose instantly by rolling a 1. The chance of instantly losing is related to the maximum of the roll. If the roll maximum is 2 we have a 50% chance of losing, at 10 a 10% chance, at 100 a 1% chance and so on.
We can check our hunch pretty easily using Python, numpy, and matplotlib with the following:
roll_max = np.arange(2,50)
p_loss = np.array([1/r for r in roll_max])
fig = plt.figure()
plt.plot(roll_max, p_loss)
plt.xlabel("Roll maximum")
plt.ylabel("Probability of losing")
Which results in what we expected:
Note: With all of these examples, we have the following imports:
import numpy as np
import matplotlib.pyplot as plt
from random import randint as roll
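We can also spot-check the hunch empirically (our own addition, not part of the original script): simulate a large number of single rolls at a few maxima and compare the observed frequency of rolling a 1 with \(\frac{1}{r}\).

```python
from random import randint as roll

trials = 200_000
for r_max in (2, 10, 100):
    ones = sum(1 for _ in range(trials) if roll(1, r_max) == 1)
    observed = ones / trials
    print(f"max={r_max:>3}: observed {observed:.4f}, expected {1 / r_max:.4f}")
```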
Hunch #2: The average number of rolls per game (\(\bar{n}\)) is \(\log_2(r_0)\), where \(r_0\) is the starting maximum number (10x the gold wagered).
This one is based much more on intuition. For every roll we have an equal likelihood of rolling above or below the middle. So for every successive roll, we expect to roughly halve the current maximum. This is because we assume that for a given maximum number (\(r\)) the average roll (\(\bar{r}\)) is \(\frac{r}{2}\) (strictly \(\frac{r+1}{2}\), but close enough for large \(r\)).
Whenever we encounter successive halving such as with a binary search, the number of iterations is \(\log_2{(n)}\). Therefore, we get the feeling that the number of rolls in a game would, on average, be roughly \(\log_2{(r_0)}\) where \(r_0\) is the initial maximum.
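The halving assumption itself is easy to sanity-check (again a sketch of our own): sample many uniform rolls at a fixed maximum and compare the sample mean against \(\frac{r}{2}\).

```python
from random import randint as roll

r = 1000
samples = 100_000
mean_roll = sum(roll(1, r) for _ in range(samples)) / samples
print(f"mean of {samples} rolls with max {r}: {mean_roll:.1f} (r/2 = {r / 2})")
```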
To check our hunch, let us simulate a whole bunch of games. Our setup is as follows:
def deathroll(r_max):
    p1_turn = True
    num_rolls = 0
    winner = 0  # 1 or 2 for p1 or p2
    current_max = r_max
    last_roll = -1
    while True:
        last_roll = roll(1, current_max)
        num_rolls += 1
        if last_roll == 1:
            break
        else:
            current_max = last_roll
            p1_turn = not p1_turn
    # the player who rolled the 1 loses
    if p1_turn:
        winner = 2
    else:
        winner = 1
    return (num_rolls, winner)
With a little helper function to get a bunch of game data quickly:
def get_game_data(num_games, r_max):
    num_rolls = np.zeros(num_games)
    p1_wins = np.zeros(num_games)
    p2_wins = np.zeros(num_games)
    for i in range(num_games):
        data = deathroll(r_max)
        num_rolls[i] = data[0]
        if data[1] == 1:
            p1_wins[i] = 1
        else:
            p2_wins[i] = 1
    return num_rolls, p1_wins, p2_wins
Now we can have a look at the average number of rolls for a given starting number. For each starting number, we will simulate 1000 games and note the average number of rolls for all starting numbers from 2 to 10 000.
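The driver loop for this experiment is not listed above; a self-contained sketch of it (our own reconstruction, shrunk to \(r_0 \le 500\) and 300 games per starting number so it runs in seconds) could look like:

```python
import numpy as np
from random import randint as roll

def deathroll_rolls(r_max):
    """Play one game and return how many rolls it took."""
    num_rolls = 0
    current_max = r_max
    while True:
        current_max = roll(1, current_max)
        num_rolls += 1
        if current_max == 1:
            return num_rolls

num_games = 300                      # the full run used 1000 games per number
starting_r_max = np.arange(2, 501)   # the full run went up to 10 000
avg_num_rolls = np.array([
    np.mean([deathroll_rolls(r) for _ in range(num_games)])
    for r in starting_r_max
])
print(f"average rolls for r_0 = 500: {avg_num_rolls[-1]:.2f}")
```

Plotting avg_num_rolls against starting_r_max with matplotlib, as in the first snippet, produces the curve discussed here.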
What we get is a plot that looks logarithmic, maybe even \(\log_2{(n)}\):
At a glance we cannot be sure that this confirms our hunch, so let us add a sanity check by plotting \(\log_2{(r_0)}\) as well (and \(\log_e{(r_0)}\)1, because why not?):
Well, it turns out our hunch was not totally correct. This is perfectly fine: with this result, we can update our hunch. The true answer lies somewhere between \(\log_2{(r_0)}\) and \(\log_e{(r_0)}\).
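One way to pin the updated hunch down a little further (our own sketch, still no formal proof) is to fit \(\bar{n} \approx a\ln(r_0) + b\) to simulated averages. If the fitted slope \(a\) comes out near 1 with a positive offset \(b\), that would explain why the raw curve sits between the two pure logarithms.

```python
import numpy as np
from random import randint as roll

def deathroll_rolls(r_max):
    """Play one game and return how many rolls it took."""
    num_rolls = 0
    current_max = r_max
    while True:
        current_max = roll(1, current_max)
        num_rolls += 1
        if current_max == 1:
            return num_rolls

num_games = 2000
r0_values = np.array([10, 30, 100, 300, 1000, 3000])
avg_rolls = np.array([
    np.mean([deathroll_rolls(r) for _ in range(num_games)])
    for r in r0_values
])
a, b = np.polyfit(np.log(r0_values), avg_rolls, 1)  # least-squares fit
print(f"fit: n_bar ~ {a:.2f} * ln(r_0) + {b:.2f}")
```

In our runs the slope lands close to 1, i.e. the curve behaves like a natural logarithm shifted up by a constant, which is exactly the kind of curve that would sit between \(\log_e{(r_0)}\) and \(\log_2{(r_0)}\) over this range.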
Hunch #3: It is better to go second as your opponent will be more likely to roll a 1.
Luckily, this hunch is a lot easier to check. All we do is look at the win rate for each player over 10 000 individual games per starting number.
num_games = 10000
starting_r_max = np.arange(2, 1000)
avg_num_rolls = np.zeros(len(starting_r_max))
p1_win_rate = np.zeros(len(starting_r_max))
p2_win_rate = np.zeros(len(starting_r_max))
for i in range(len(starting_r_max)):
    num_rolls, p1_wins, p2_wins = get_game_data(num_games, starting_r_max[i])
    avg_num_rolls[i] = np.mean(num_rolls)
    p1_win_rate[i] = np.count_nonzero(p1_wins)/num_games
    p2_win_rate[i] = np.count_nonzero(p2_wins)/num_games
print("P1 win rate: ", np.mean(p1_win_rate))
print("P2 win rate: ", np.mean(p2_win_rate))
Finally, if we take the mean win rate for each player over every starting number, it looks like there is a slight advantage to going second, even if a very small one.
P1 win rate: 0.4993275551102204
P2 win rate: 0.5006724448897796
Now there is a little issue here since we are averaging averages. So let us take a look at the plot of our simulation:
As we can see, there are two little tails at the beginning of our plot, so we try a little “zoom-and-enhance” by restricting the maximum wager (starting number) to 30 to see what is going on. This time, however, we will push the number of games up to 100k per starting number.
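The zoomed-in run is not listed either; a self-contained sketch (our own, using 20 000 games per starting number rather than the full 100k, purely for speed) might be:

```python
import numpy as np
from random import randint as roll

def deathroll_winner(r_max):
    """Play one game and return the winning player (1 or 2)."""
    p1_turn = True
    current_max = r_max
    while True:
        result = roll(1, current_max)
        if result == 1:
            return 2 if p1_turn else 1   # whoever rolls the 1 loses
        current_max = result
        p1_turn = not p1_turn

num_games = 20_000
starting_r_max = np.arange(2, 31)   # wagers restricted so r_0 <= 30
p1_win_rate = np.array([
    sum(deathroll_winner(r) == 1 for _ in range(num_games)) / num_games
    for r in starting_r_max
])
print("P1 win rate:", np.mean(p1_win_rate))
print("P2 win rate:", np.mean(1 - p1_win_rate))
```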
In our plot there is definitely a convergence happening, which, if we look back at our first hunch, makes sense: the chance for Player 1 to roll a 1 is much higher at lower numbers. It is also interesting to note that the win rates changed when we restricted the starting number.
P1 win rate: 0.48281142857142856
P2 win rate: 0.5171885714285714
So what does this all mean? Well, it seems to us that it is generally a better idea to go second, and to keep our initial wagers low, thereby giving us the greatest chance of success. There is a problem, though. All of this still seems a bit too honorable. We are merely using a bit of statistical knowledge to our advantage.
Instead, if we are to be really nefarious about this, we should find some way to perform RNG manipulation, modify the rules to our advantage, or run other cons. Either way, this was a fun, if informal, investigation into a game within a game. For better funding opportunities, it might be a good idea to look into cryptocurrencies, or into developing our own gambling platform.
- The natural logarithm, where \(e \approx 2.71828\) ↩︎