Section 10.6 Probability Spaces with Infinitely Many Outcomes

To this point, we have focused entirely on probability spaces \((S,P)\) with \(S\) a finite set. More generally, probability spaces can be defined where \(S\) is an infinite set. When \(S\) is countably infinite, we can still define \(P\) on the individual members of \(S\), and now \(\sum_{x\in S} P(x)\) is an infinite series which converges absolutely (since all terms are non-negative) to \(1\). When \(S\) is uncountable, \(P\) cannot be defined on the individual members of \(S\). Instead, the probability function is defined on a family of subsets of \(S\). Given our emphasis on finite sets and combinatorics, we will discuss the first case briefly and refer students to texts that focus on general concepts from probability and statistics for the second.
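As a small illustrative sketch (not an example from the text), take the countably infinite sample space \(S=\{1,2,3,\dotsc\}\) with \(P(n)=(1/2)^n\). The series \(\sum_{n\ge 1}(1/2)^n\) converges to \(1\), which we can check numerically by watching the partial sums:

```python
# Hypothetical probability function on the countably infinite
# set S = {1, 2, 3, ...}: P(n) = (1/2)**n.
def P(n):
    return 0.5 ** n

# The infinite series sum_{n>=1} P(n) converges to 1;
# a partial sum over the first 59 outcomes is already
# within 2**(-59) of that limit.
partial = sum(P(n) for n in range(1, 60))
print(partial)  # very close to 1
```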


Consider the following game. Nancy rolls a single die. She wins if she rolls a six. If she rolls any other number, she then rolls again and again until the first time that one of the following two situations occurs: (1) she rolls a six, which now results in a loss, or (2) she rolls the same number as she got on her first roll, which results in a win. As an example, here are some sequences of rolls that this game might take:

  1. \((4,2,3,5,1,1,1,4)\). Nancy wins!

  2. \((6)\). Nancy wins!

  3. \((5,2,3,2,1,6)\). Nancy loses. Ouch.

So what is the probability that Nancy will win this game?

Nancy can win with a six on the first roll, which has probability \(1/6\). Otherwise, she might win on roll \(n\) for some \(n\ge 2\). To accomplish this, she must roll a number other than six on the first roll (probability \(5/6\)); avoid a win/loss decision on each of rolls \(2\) through \(n-1\) (probability \(4/6\) each, since four of the six faces are neither a six nor her first number); and then roll the matching number on roll \(n\) (probability \(1/6\)). Summing the resulting geometric series, the probability of a win is: \begin{equation*} \frac{1}{6}+\sum_{n\ge 2}\frac{5}{6}\left(\frac{4}{6}\right)^{n-2}\frac{1}{6} = \frac{1}{6}+\frac{5}{36}\cdot\frac{1}{1-2/3} = \frac{1}{6}+\frac{5}{12} = \frac{7}{12}. \end{equation*}
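The computation above can be double-checked in two independent ways: by summing the geometric series exactly with rational arithmetic, and by simulating many plays of the game. This is a sketch, not part of the text; the function name `play` is ours.

```python
import random
from fractions import Fraction

# Exact value: win on the first roll (1/6), or survive it (5/6),
# then sum the geometric series with ratio 4/6 and final factor 1/6:
# 1/6 + (5/6)(1/6) / (1 - 4/6).
exact = Fraction(1, 6) + Fraction(5, 6) * Fraction(1, 6) / (1 - Fraction(4, 6))
print(exact)  # 7/12

def play(rng):
    """Simulate one game; return True if Nancy wins."""
    first = rng.randint(1, 6)
    if first == 6:
        return True          # a six on the first roll wins
    while True:
        roll = rng.randint(1, 6)
        if roll == 6:
            return False     # a six now loses
        if roll == first:
            return True      # matching the first roll wins

rng = random.Random(1)
trials = 200_000
wins = sum(play(rng) for _ in range(trials))
print(wins / trials)  # close to 7/12, i.e. about 0.5833
```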


You might suspect that something slightly more general is lurking in the background of the preceding example, and indeed there is. Suppose we have two disjoint events \(A\) and \(B\) in a probability space \((S,P)\) with \(P(A)+P(B) < 1\). Then suppose we make repeated samples from this space, each sample independent of all previous ones. Call it a win if event \(A\) holds and a loss if event \(B\) holds. Otherwise, it's a tie and we sample again. A win on sample \(n+1\) requires \(n\) consecutive ties followed by event \(A\), so the probability of a win is: \begin{equation*} P(A)+P(A)\sum_{n\ge 1}(1-P(A)-P(B))^n=\frac{P(A)}{P(A)+P(B)}. \end{equation*}