This function is very helpful for estimating the probability of a binary event when the number of trials \( n \) is small.
I found this blog post by Nassim Taleb. I actually first learned about the idea from the MOOCs on his YouTube channel, and only later found the post; you can check out more there.
In this post I want to apply this method to various problems I found on Twitter, including the problem discussed in Nassim's blog.
- Dice Problem, twitter link
You toss a dice, and it lands “6.” You repeat the toss. It lands “6” again. How many repeats of “6” are required for you to declare that the dice is biased?
- Probability of a flat tire per mile, twitter link
This cyclist friend of mine who usually asks me to post inquiries on X* (he is shy) cycled 12,000 miles w/o a flat tire (tubeless 32mm Cont GP). Assume a memoryless process (i.e. no wear & tear effect, as he replaces the tires): what is the probability of a flat?
Given \( n \) trials and \( m \) observed successes, the Bayesian upper bound for the probability \( p \) (with a uniform prior) at 50% confidence is:
$$ p = \text{InverseBetaRegularized}(0.5, m+1, n-m) $$
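InverseBetaRegularized is Mathematica's name for the quantile function of the Beta distribution. For integer \( m \) and \( n \) it can be evaluated with nothing beyond the binomial CDF, using the identity \( I_p(m+1,\, n-m) = P(\text{Binomial}(n, p) > m) \). A minimal sketch in Python (the bisection helper and its name are my own, not from Taleb's post):

```python
import math

def p_at_confidence(n, m, confidence=0.5):
    """Solve InverseBetaRegularized(confidence, m+1, n-m) for p by bisection.

    Uses the identity I_p(m+1, n-m) = P(Binomial(n, p) > m)
                                    = 1 - P(Binomial(n, p) <= m).
    """
    def posterior_cdf(p):
        # Regularized incomplete beta I_p(m+1, n-m) for integer m, n
        below = sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                    for j in range(m + 1))
        return 1.0 - below

    lo, hi = 0.0, 1.0
    for _ in range(100):            # 100 halvings: far below float precision
        mid = (lo + hi) / 2
        if posterior_cdf(mid) < confidence:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(p_at_confidence(12000, 0))    # ~ 5.776e-5
```

Bisection works here because \( I_p \) is strictly increasing in \( p \), so any root-finder would do; the explicit loop just keeps the sketch dependency-free.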
For \( n = 12000 \), \( m = 0 \):
$$ p = \text{InverseBetaRegularized}(0.5, 1, 12000) \approx 0.00005776 $$
Probability of a flat per mile (50% confidence):
$$ p \approx 0.0058\% \text{ per mile} $$
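For \( m = 0 \) the inverse has a closed form, since the Beta(1, n) CDF is \( 1 - (1-p)^n \). A quick check of the numbers above (the miles-per-flat reciprocal at the end is my own reading, assuming a geometric waiting time, and is not stated in the problem):

```python
n = 12_000                        # miles ridden with zero flats
# Beta(1, n) has CDF 1 - (1 - p)**n, so the 50%-confidence point is
# p = 1 - (1 - 0.5)**(1 / n) = 1 - 2**(-1 / n)  ~  ln(2) / n  for large n
p = 1 - 2 ** (-1 / n)
print(f"p per mile    : {p:.8f}")      # 0.00005776
print(f"miles per flat: {1 / p:.0f}")  # roughly 17,300
```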