Very nice, much faster than simulating it. I guess if you're using an informative prior, instead of adding 1 to the number of successes and failures, you add the corresponding parameters of your beta prior?
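Yes, that's beta-binomial conjugacy: with a Beta(α₀, β₀) prior, the posterior after s successes and f failures is Beta(α₀ + s, β₀ + f), and the uniform Beta(1, 1) prior recovers the add-one rule. A one-line sketch (the function name is just illustrative):

```python
def posterior_params(successes, failures, prior_alpha=1.0, prior_beta=1.0):
    # The beta prior is conjugate to the binomial likelihood, so the
    # posterior is Beta(prior_alpha + successes, prior_beta + failures).
    # The default Beta(1, 1) (uniform) prior gives the add-one rule.
    return prior_alpha + successes, prior_beta + failures
```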
A pretty good shortcut (if you don't have a log-beta function, for example) is to approximate both A and B with the normal distribution. Then their difference is also normal, so you can just check the probability that a normal random variable is greater than zero.
Specifically, a beta distribution has mean μ = α/(α+β) and variance σ^2 = αβ/((α+β)^2(α+β+1)). Using these as the parameters of each normal approximation, the difference D = A - B is approximately N(μ_A-μ_B, σ_A^2+σ_B^2), and the probability that B beats A is just the CDF of D evaluated at 0.
In Python:
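Here is a sketch of that computation; the function names are my own, and I've hand-rolled the normal CDF with math.erf to avoid a scipy dependency:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_b_beats_a(alpha_a, beta_a, alpha_b, beta_b):
    """P(B > A) under the normal approximation to two beta posteriors."""
    # Mean and variance of each beta distribution
    mu_a = alpha_a / (alpha_a + beta_a)
    mu_b = alpha_b / (alpha_b + beta_b)
    var_a = alpha_a * beta_a / ((alpha_a + beta_a) ** 2 * (alpha_a + beta_a + 1))
    var_b = alpha_b * beta_b / ((alpha_b + beta_b) ** 2 * (alpha_b + beta_b + 1))
    # D = A - B is approximately N(mu_a - mu_b, var_a + var_b),
    # so P(B beats A) = P(D < 0) = CDF of D at 0
    return normal_cdf(0.0, mu_a - mu_b, sqrt(var_a + var_b))
```

With equal evidence for both variants this returns 0.5, and it approaches the exact answer as the counts grow (the approximation is poorest for small counts or rates near 0 or 1).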