
Section 10.3 Bernoulli Trials

Suppose we have a jar with \(7\) marbles, four of which are red and three of which are blue. A marble is drawn at random, and we record whether it is red or blue. The probability \(p\) of getting a red marble is \(4/7\), and the probability of getting a blue one is \(1-p=3/7\).

Now suppose the marble is put back in the jar, the marbles in the jar are stirred, and the experiment is repeated. Then the probability of getting a red marble on the second trial is again \(4/7\), and this pattern holds regardless of the number of times the experiment is repeated.
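This "draw with replacement" setup is easy to check empirically. The following Python sketch (the jar contents match the example; the trial count and seed are our choices) simulates repeated draws and confirms that the fraction of red marbles drawn approaches \(4/7\approx 0.5714\):

```python
import random

random.seed(42)

# A jar of 4 red and 3 blue marbles; each draw is made with replacement,
# so every trial has the same probability 4/7 of producing a red marble.
jar = ["red"] * 4 + ["blue"] * 3
trials = 100_000
reds = sum(1 for _ in range(trials) if random.choice(jar) == "red")
print(reds / trials)  # close to 4/7 ≈ 0.5714
```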

It is customary to call this situation a series of Bernoulli trials. More formally, we have an experiment with only two outcomes: success and failure. The probability of success is \(p\) and the probability of failure is \(1-p\). Most importantly, when the experiment is repeated, the probability of success on any individual trial is exactly \(p\).

We fix a positive integer \(n\) and consider the case that the experiment is repeated \(n\) times. The outcomes are then the strings of length \(n\) over the two-letter alphabet \(\{S,F\}\), for success and failure, respectively. If \(x\) is a string with \(i\) successes and \(n-i\) failures, then \(P(x)=\binom{n}{i}p^i(1-p)^{n-i}\). Of course, in applications, success and failure may be replaced by: heads/tails, up/down, good/bad, forwards/backwards, red/blue, etc.
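The formula \(P(x)=\binom{n}{i}p^i(1-p)^{n-i}\) can be written directly with Python's `math.comb`. A minimal sketch (the function name `bernoulli_prob` is ours, not the text's); as a sanity check, the probabilities over all success counts \(i=0,1,\dotsc,n\) sum to \(1\), since \((p+(1-p))^n=1\) by the binomial theorem:

```python
from math import comb

def bernoulli_prob(n, i, p):
    """Probability of exactly i successes in n independent
    Bernoulli trials, each succeeding with probability p."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Sanity check with the marble example: n = 10 draws, p = 4/7.
n, p = 10, 4 / 7
total = sum(bernoulli_prob(n, i, p) for i in range(n + 1))
print(total)  # 1.0, up to floating-point rounding
```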

Example 10.12

When a die is rolled, let's say that we have a success if the result is a two or a five. Then the probability \(p\) of success is \(2/6=1/3\) and the probability of failure is \(2/3\). If the die is rolled ten times in succession, then the probability that we get exactly four successes is \(\binom{10}{4}(1/3)^4(2/3)^{6}\).
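Evaluating this expression numerically takes one line; a quick sketch of the computation (exact value \(13440/59049\)):

```python
from math import comb

# Probability of exactly 4 successes in 10 rolls, with success
# probability 1/3 on each roll: C(10,4) * (1/3)^4 * (2/3)^6.
prob = comb(10, 4) * (1 / 3)**4 * (2 / 3)**6
print(prob)  # 13440/59049 ≈ 0.2276
```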

Example 10.13

A fair coin is tossed \(100\) times and the outcome (heads or tails) is recorded. Then the probability of getting heads \(40\) times and tails the other \(60\) times is \begin{equation*} \binom{100}{40}\left(\frac{1}{2}\right)^{40}\left(\frac{1}{2}\right)^{60} =\frac{\binom{100}{40}}{2^{100}}. \end{equation*}
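Since both the binomial coefficient and \(2^{100}\) are enormous integers, it is natural to compute this probability exactly with Python's `fractions.Fraction` rather than floating point; a sketch:

```python
from fractions import Fraction
from math import comb

# Exact probability of exactly 40 heads in 100 fair tosses:
# C(100,40) / 2^100, kept as an exact rational number.
exact = Fraction(comb(100, 40), 2**100)
print(float(exact))  # ≈ 0.0108
```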

Discussion 10.14

Bob says that if a fair coin is tossed \(100\) times, it is fairly likely that you will get exactly \(50\) heads and \(50\) tails. Dave is not so certain this is right. Carlos fires up his computer, and in a few seconds he reports that the probability of getting exactly \(50\) heads when a fair coin is tossed \(100\) times is \begin{equation*} \frac{12611418068195524166851562157}{158456325028528675187087900672}, \end{equation*} which is \(0.079589\), to six decimal places. In other words, not very likely at all. Xing is doing a modestly more complicated calculation, and he reports that there is a \(99\%\) chance that the number of heads is at least \(20\) and at most \(80\). Carlos adds that when \(n\) is very large, it is increasingly certain that the number of heads in \(n\) tosses will be close to \(n/2\). Dave asks, "What do you mean by close, and what do you mean by very large?"
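Both of these calculations are easy to reproduce exactly. The sketch below computes Carlos's fraction (Python's `Fraction` automatically reduces it to lowest terms, with denominator \(2^{97}\)) and then sums the binomial probabilities for \(20\le k\le 80\) to verify Xing's claim:

```python
from fractions import Fraction
from math import comb

n = 100

# Carlos: exact probability of exactly 50 heads in 100 fair tosses.
p50 = Fraction(comb(n, 50), 2**n)
print(p50)         # the fraction above, in lowest terms
print(float(p50))  # ≈ 0.079589

# Xing: probability that the number of heads is between 20 and 80 inclusive.
p_middle = Fraction(sum(comb(n, k) for k in range(20, 81)), 2**n)
print(float(p_middle))  # well above 0.99
```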