Suppose we are selectors for a cricket team who wish to make data-driven decisions about the success or otherwise of our players’ strategies, using performance records that we trust and believe to be comparable.

If we follow Bayes’ procedure to calculate the probability that the “real” average runs scored, b, by our batsman as player B (batting as “Bazball”) is greater than that, a, of the same batsman as player A (batting as “Anodyne”), given data-set averages $\overline{x}$ and $\overline{y}$ respectively, with n and m data points, then, setting $s=n\overline{x}+\frac{1}{2}$ and $t=m\overline{y}+\frac{1}{2}$, we find that:

$$\mathrm{Prob}(b>a)=\frac{m^{t}}{\Gamma(s)\,\Gamma(t)}\int_{0}^{\infty}a^{t-1}e^{-ma}\,\Gamma(s,na)\,da$$

where we have assumed a Poisson distribution to model the number of runs scored per innings under each strategy, and the Jeffreys uninformative prior distribution for this model. Here $\Gamma(s,na)$ is the upper incomplete gamma function.

For example, if there are $m=6$ innings at $\overline{y}=30.0$ for A, and $n=8$ innings at $\overline{x}=32.0$ for B, then the probability above evaluates to $0.7407$, so the odds are almost 3:1 on that the strategy as B has a greater average than that as A. If we have a rule that the odds on a strategy, after at least 5 innings, must be better than 4:1 on, then our decision in this case would be to ask the player to continue to bat as A, pending more information. If we only required 2.5:1, then we would be inclined to try strategy B for this player.
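As a cross-check, the probability above can be evaluated numerically, for instance with SciPy; this is a sketch under my own choices of integration range and library calls, not something from the text:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma as gamma_dist

# Gamma posteriors from the Poisson likelihood with the Jeffreys prior:
#   a | data ~ Gamma(shape=t, rate=m),  b | data ~ Gamma(shape=s, rate=n)
m, ybar = 6, 30.0          # player A: 6 innings averaging 30.0 runs
n, xbar = 8, 32.0          # player B: 8 innings averaging 32.0 runs
s = n * xbar + 0.5
t = m * ybar + 0.5

# Prob(b > a) = integral of f_A(a) * P(b > a | a) da; the survival function
# sf gives P(b > a | a) = Gamma(s, na) / Gamma(s).  Both posteriors put
# essentially all their mass inside (15, 50) runs, so we integrate there.
prob, _ = quad(lambda a: gamma_dist.pdf(a, t, scale=1/m)
                         * gamma_dist.sf(a, s, scale=1/n), 15, 50)
print(round(prob, 4))      # the text quotes 0.7407
```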

Another angle on this is to look at the average number of balls ‘survived’ by the player in an innings under the two strategies. That is, I am interested in the number of balls faced at the point the player is out. To model this, I start from the Pascal distribution with an uninformative (uniform) prior, since this discrete distribution models the number of independent trials required to produce $r$ failures. I have $r=1$ for this case, as cricket is unforgiving! The trials end on the $x$th ball, when the player fails, i.e. is out. This simplifies things:

$$f(x)=P(X=x)=\binom{x-1}{r-1}p^{r}(1-p)^{x-r}=p(1-p)^{x-1}$$

where $p$ is the probability of getting out on any given ball and $X$ is the random quantity realised as $x$ in a given trial (innings): the number of balls received up to and including getting out. The mean of $x$ is $\frac{1}{p}$, which makes sense: since $p\le 1$, the mean number of balls faced in an innings is greater than or equal to one. Our prior distribution (density) for the unknown value of $p$ is taken to be $B(1,1)$. The Beta family is conjugate with respect to the Pascal (negative binomial) distribution, in such a way that for prior $B(\alpha ,\beta )$ the posterior is also a Beta density, with parameters $\alpha +nr$ and $\beta +n\overline{x}-nr$, where $\overline{x}$ is the mean of the data $\mathbf{x}$ and there were $n$ trials (innings). Since $r=1$, the posterior probability density is:
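The conjugate update is easy to verify numerically. The innings figures below are hypothetical, chosen only to illustrate the check against a brute-force grid posterior:

```python
import numpy as np
from scipy.stats import beta as beta_dist

# Hypothetical innings data: balls faced up to and including dismissal
# (r = 1 failure per innings, geometric likelihood)
x = np.array([12, 30, 7, 45, 6])          # n = 5 innings, mean 20
n, xbar = len(x), x.mean()

# Conjugate update: B(1, 1) prior -> B(1 + n, 1 + n(xbar - 1)) posterior
post = beta_dist(1 + n, 1 + n * (xbar - 1))

# Brute force: geometric likelihood times flat prior, normalised on a grid
p = np.linspace(1e-6, 1 - 1e-6, 200_001)
dp = p[1] - p[0]
like = p**n * (1 - p)**(n * (xbar - 1))   # product of p(1-p)^(x_i - 1) terms
grid = like / (like.sum() * dp)
print(np.max(np.abs(grid - post.pdf(p))))  # agreement to numerical precision
```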

$$f(p\mid \mathbf{x})={B}^{-1}{p}^{n}(1-p{)}^{s}$$

where $s=n(\overline{x}-1)$ and $B=B(1+n,1+s)$. This means that if we compare two sets of innings, $n$ and $m$, with average balls faced until getting out of $\overline{x}\ge 1$ and $\overline{y}\ge 1$ respectively, then, relabelling the respective unknowns, the probability that B’s chance of getting out on any given ball exceeds A’s is:

$$\text{Prob}(\beta >\alpha )=\frac{1}{{C}_{n}{C}_{m}}{\int}_{0}^{1}d\alpha {\int}_{\alpha}^{1}{\alpha}^{m}(1-\alpha {)}^{t}{\beta}^{n}(1-\beta {)}^{s}d\beta $$

where $t=m(\overline{y}-1)$ and ${C}_{n}=B(1+n,1+s)$ and ${C}_{m}=B(1+m,1+t)$ are constants calculated from the data, which normalise the integrals. From this probability we can deduce the odds that one strategy is longer-lasting than the other, ignoring run rates this time.
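A sketch of how this double integral might be evaluated numerically: the inner integral over $\beta$ from $\alpha$ to 1 is just the Beta survival function, leaving a single quadrature. The function name is mine, and the example figures are the balls-faced data used later in the text:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

def prob_beta_greater(n, xbar, m, ybar):
    """Prob(beta > alpha) for Beta posteriors B(1+n, 1+s) and B(1+m, 1+t)."""
    s = n * (xbar - 1)
    t = m * (ybar - 1)
    f_alpha = beta_dist(1 + m, 1 + t)
    f_beta = beta_dist(1 + n, 1 + s)
    # Inner integral over beta on (alpha, 1) is the Beta survival function sf
    prob, _ = quad(lambda a: f_alpha.pdf(a) * f_beta.sf(a), 0, 1)
    return prob

# A: 25 balls on average over 4 innings; B: 20 balls over 5 innings
print(round(prob_beta_greater(5, 20.0, 4, 25.0), 3))   # the text quotes 0.626
```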

We can simplify further by replacing the discrete geometric distribution with the continuous, and computationally slightly easier, exponential distribution $f(x)=p{e}^{-px}$. The resulting probability, corresponding to and in good agreement with the equation above, is:

$$\mathrm{Prob}(\beta>\alpha)=\frac{t^{m+1}}{\Gamma(n+1)\,\Gamma(m+1)}\int_{0}^{1}\alpha^{m}e^{-\alpha t}\,\Gamma(n+1,s\alpha,s)\,d\alpha$$

where $\alpha$ and $\beta$ are the real probabilities of getting out on any given delivery, given by the inverses of the respective real average numbers of balls faced up to the point of failure, i.e. getting out, and $s=n\overline{x}$ and $t=m\overline{y}$ are now the total numbers of balls faced under each strategy. The $\Gamma(a,b,c)$ in the integrand is the generalised incomplete Gamma function, arising from the first integration over all cases in the joint distribution where $\beta>\alpha$. Note that the integral this time runs from zero to one, since the variables $\alpha$ and $\beta$ are probabilities.

One can thus compare a set of innings in A and B modes, this time ignoring runs scored and focusing on how long the innings lasted, again perhaps with a rule for deciding which strategy is optimal and how to apply it to make the judgement.

If in strategy A the player stays at the crease for an average of $\overline{y}=25$ balls over $m=4$ innings, and in strategy B for $\overline{x}=20$ balls over $n=5$ innings, I find that the probability that the unknown parameter $\beta$ (the probability of getting out next ball in strategy B) is higher than the unknown parameter $\alpha$ (the same for strategy A) is $0.623$ in the exponential-distribution calculation. The Pascal calculation gives $0.626$, a difference of under $0.5\%$.
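The exponential-model figure can be reproduced with a short numerical sketch. One assumption of mine to flag: I read $s$ and $t$ in the exponential formula as the totals of balls faced, $s=n\overline{x}$ and $t=m\overline{y}$, which is what the exponential likelihood $p^{n}e^{-pn\overline{x}}$ implies:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma as gamma_dist

# Exponential likelihood with a uniform prior gives Gamma-shaped posteriors
# for the dismissal rates alpha and beta.
m, ybar = 4, 25.0          # strategy A: 4 innings, 25 balls on average
n, xbar = 5, 20.0          # strategy B: 5 innings, 20 balls on average
s, t = n * xbar, m * ybar  # assumed relabelling: total balls faced

f_alpha = gamma_dist(m + 1, scale=1/t)
f_beta = gamma_dist(n + 1, scale=1/s)

# Gamma(n+1, s*alpha, s) / Gamma(n+1) is beta's posterior mass on (alpha, 1),
# i.e. the difference of two survival-function values
prob, _ = quad(lambda a: f_alpha.pdf(a) * (f_beta.sf(a) - f_beta.sf(1.0)), 0, 1)
print(round(prob, 3))      # the text quotes 0.623
```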

The applications of this kind of Bayesian joint-probability A/B comparison, yielding simple odds for or against, are myriad: sporting strategy is only the tip of the iceberg, and similar comparisons abound in business and governmental strategy.