
Section 10.5 Central Tendency

Consider the following two situations:

  • Situation 1. A small town decides to hold a lottery to raise funds for charitable purposes. A total of \(10,001\) tickets are sold, and the tickets are labeled with numbers from the set \(\{0,1,2,\dots,10,000\}\text{.}\) At a public ceremony, duplicate tickets are placed in a big box, and the mayor draws the winning ticket from the box. Just to heighten the suspense as to who has actually won the prize, the mayor reports that the winning number is at least \(7,500\text{.}\) The citizens ooh and aah, and they can't wait to see who among them will be the final winner.

  • Situation 2. Behind a curtain, a fair coin is tossed \(10,000\) times, and the number of heads is recorded by an observer, who is reputed to be honest and impartial. Again, the outcome is an integer in the set \(\{0,1,2,\dots,10,000\}\text{.}\) The observer then emerges from behind the curtain and announces that the number of heads is at least \(7,500\text{.}\) There is a pause and then someone says “What? Are you out of your mind?”

So we have two probability spaces, both with sample space \(S=\{0,1,2,\dots,10,000\}\text{.}\) For each, we have a random variable \(X\text{,}\) the winning ticket number in the first situation, and the number of heads in the second. In each case, the expected value, \(E(X)\text{,}\) of the random variable \(X\) is \(5,000\text{.}\) In the first case, we are not all that surprised at an outcome far from the expected value, while in the second, it seems intuitively clear that this is an extraordinary occurrence. The mathematical concept here is referred to as central tendency, and it helps us to understand just how likely a random variable is to stray from its expected value.

For starters, we have the following elementary result.

Theorem 10.20 Markov's Inequality

Let \(X\) be a random variable in a probability space \((S,P)\text{,}\) and suppose that \(X\) takes only non-negative values. Then for every \(t>0\text{,}\) \begin{equation*} \prob(X\ge t)\le\frac{E(X)}{t}. \end{equation*}

Proof

Since every value of \(X\) is non-negative, discarding the outcomes where \(X\) is less than \(t\) can only decrease the sum defining the expectation: \begin{equation*} E(X)=\sum_{r} r\cdot\prob(X=r)\ge\sum_{r\ge t} r\cdot\prob(X=r)\ge t\sum_{r\ge t}\prob(X=r)=t\cdot\prob(X\ge t). \end{equation*} Dividing both sides by \(t\) completes the proof.

To make Markov's Inequality more concrete: on the basis of this trivial result, the probability that the winning ticket number (or the number of heads) is at least \(7,500\) is at most \(5000/7500=2/3\text{.}\) So there is nothing alarming here in either case. Since we still feel that the two situations are quite different, a more subtle measure is required.

Subsection 10.5.1 Variance and Standard Deviation

Again, let \((S,P)\) be a probability space and let \(X\) be a random variable. The quantity \(E((X-E(X))^2)\) is called the variance of \(X\) and is denoted \(\var(X)\text{.}\) Evidently, the variance of \(X\) is a non-negative number. The standard deviation of \(X\text{,}\) denoted \(\sigma_X\text{,}\) is then defined as the quantity \(\sqrt{\var(X)}\text{,}\) i.e., \(\sigma_X^2=\var(X)\text{.}\)

Example 10.21

For the spinner shown at the beginning of the chapter, let \(X(i)=i^2\) when the pointer stops in region \(i\text{.}\) We have already noted that the expectation \(E(X)\) of the random variable \(X\) is \(109/8\text{.}\) It follows that the variance \(\var(X)\) is: \begin{align*} \var(X) =& \Bigl(1^2-\frac{109}{8}\Bigr)^2\frac{1}{8}+\Bigl(2^2-\frac{109}{8}\Bigr)^2\frac{1}{4}+ \Bigl(3^2-\frac{109}{8}\Bigr)^2\frac{1}{8}+\Bigl(4^2-\frac{109}{8}\Bigr)^2\frac{1}{8}\\ &+\Bigl(5^2-\frac{109}{8}\Bigr)^2\frac{3}{8}\\ =& \frac{101^2\cdot 1+77^2\cdot 2+37^2\cdot 1+19^2\cdot 1+91^2\cdot 3}{512}\\ =& \frac{48632}{512} \end{align*} (Here we used \((i^2-\frac{109}{8})^2=(8i^2-109)^2/64\) and wrote each probability as a multiple of \(\frac{1}{8}\text{.}\)) It follows that the standard deviation \(\sigma_X\) of \(X\) is then \(\sqrt{48632/512}\approx 9.746\text{.}\)
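As a quick sanity check, here is a small Python sketch (not part of the original text; the region probabilities are read off from the calculation above) that redoes this computation with exact rational arithmetic:

```python
from fractions import Fraction
from math import sqrt

# Region i has probability probs[i]; the random variable is X(i) = i**2.
probs = {1: Fraction(1, 8), 2: Fraction(1, 4), 3: Fraction(1, 8),
         4: Fraction(1, 8), 5: Fraction(3, 8)}

mu = sum(i**2 * p for i, p in probs.items())             # E(X)
var = sum((i**2 - mu)**2 * p for i, p in probs.items())  # E((X - E(X))**2)

print(mu)         # 109/8
print(var)        # 6079/64, i.e., 48632/512 in lowest terms
print(sqrt(var))  # approximately 9.746
```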

Example 10.22

Suppose that \(0<p<1\) and consider a series of \(n\) Bernoulli trials with the probability of success being \(p\text{,}\) and let \(X\) count the number of successes. We have already noted that \(E(X)=np\text{.}\) Now we claim the variance of \(X\) is given by: \begin{equation*} \var(X)=\sum_{i=0}^n (i-np)^2\binom{n}{i}p^i(1-p)^{n-i} = np(1-p) \end{equation*} There are several ways to establish this claim. One way is to proceed directly from the definition, using the same method we used previously to obtain the expectation, although now the second derivative is needed as well. Here is a second approach, one that capitalizes on the fact that separate trials in a Bernoulli series are independent.

Let \(\mathcal{F}=\{X_1,X_2,\dots,X_n\}\) be a family of random variables in a probability space \((S,P)\text{.}\) We say the family \(\mathcal{F}\) is independent if for each \(i\) and \(j\) with \(1\le i<j\le n\text{,}\) and for each pair \(a,b\) of real numbers, the following two events are independent: \(\{x\in S: X_i(x)\le a\}\) and \(\{x\in S:X_j(x)\le b\}\text{.}\) When the family is independent, it is straightforward to verify that \begin{equation*} \var(X_1+X_2+\dots+X_n)=\var(X_1)+\var(X_2)+\dots+\var(X_n). \end{equation*}
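In particular, this observation yields the claim in Example 10.22 via the standard indicator-variable computation, which we spell out here for completeness. Write \(X=X_1+X_2+\dots+X_n\text{,}\) where \(X_i\) is \(1\) if trial \(i\) is a success and \(0\) otherwise. Then \(E(X_i)=p\text{,}\) and directly from the definition of variance, \begin{equation*} \var(X_i)=(1-p)^2\,p+(0-p)^2\,(1-p)=p(1-p)\bigl[(1-p)+p\bigr]=p(1-p). \end{equation*} Since the trials are independent, summing over the \(n\) trials gives \(\var(X)=np(1-p)\text{.}\)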

With the aid of this observation, computing the variance of the random variable \(X\) which counts the number of successes becomes trivial, as the calculation above shows. But in fact, the entire treatment we have outlined here is just a small part of a more complex subject which can be treated more elegantly and ultimately much more compactly, provided you first develop additional background material on families of random variables. For this we refer you to suitable probability and statistics texts, such as those given in our references.

Variance and standard deviation are quite useful tools in discussions of just how likely a random variable is to be near its expected value. This is reflected in the following theorem.

Theorem 10.24 Chebyshev's Inequality

Let \(X\) be a random variable in a probability space \((S,P)\text{,}\) with expectation \(\mu=E(X)\) and standard deviation \(\sigma_X>0\text{.}\) Then for every \(k>0\text{,}\) \begin{equation*} \prob(|X-\mu|\ge k\sigma_X)\le\frac{1}{k^2}. \end{equation*} Equivalently, the probability that \(X\) lies within \(k\) standard deviations of its expected value is at least \(1-1/k^2\text{.}\)

Proof

Apply Markov's Inequality to the non-negative random variable \(Y=(X-\mu)^2\text{,}\) whose expectation is \(\var(X)=\sigma_X^2\text{.}\) Since \(|X-\mu|\ge k\sigma_X\) holds exactly when \(Y\ge k^2\sigma_X^2\text{,}\) we have \begin{equation*} \prob(|X-\mu|\ge k\sigma_X)=\prob(Y\ge k^2\sigma_X^2)\le\frac{E(Y)}{k^2\sigma_X^2}=\frac{1}{k^2}. \end{equation*}
Example 10.25

Here's an example of how Chebyshev's Inequality can be applied. Consider \(n\) tosses of a fair coin with \(X\) counting the number of heads. As noted before, \(\mu=E(X)=n/2\) and \(\var(X)=n/4\text{,}\) so \(\sigma_X=\sqrt{n}/2\text{.}\) When \(n=10,000\text{,}\) we have \(\mu=5,000\) and \(\sigma_X=50\text{.}\) Setting \(k=50\) so that \(k\sigma_X=2,500\text{,}\) we see that the probability that \(X\) is within \(2,500\) of the expected value of \(5,000\) is at least \(1-1/k^2=0.9996\text{.}\) In particular, the probability that the number of heads is at least \(7,500\) is at most \(1/2500=0.0004\text{.}\) So it seems very unlikely indeed that the number of heads is at least \(7,500\text{.}\)
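These numbers are easy to verify; here is a small Python sketch added for illustration:

```python
from math import sqrt

n = 10_000
mu = n / 2           # E(X) = n/2 for a fair coin
sigma = sqrt(n) / 2  # standard deviation, sqrt(n/4)
k = 2_500 / sigma    # how many standard deviations 2,500 is

print(mu, sigma, k)  # 5000.0 50.0 50.0
print(1 - 1 / k**2)  # 0.9996, the Chebyshev lower bound
print(1 / k**2)      # 0.0004, the bound on the tail
```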

Going back to lottery tickets, if we make the reasonable assumption that all ticket numbers are equally likely, then the probability that the winning number is at least \(7,500\) is exactly \(2501/10001\text{,}\) which is very close to \(1/4\text{.}\)

Example 10.26

In the case of Bernoulli trials, we can use basic properties of binomial coefficients to make even more accurate estimates. Clearly, in the case of coin tossing, the probability that the number of heads in \(10,000\) tosses is at least \(7,500\) is given by \begin{equation*} \sum_{i = 7,500}^{10,000} \binom{10,000}{i}/2^{10,000} \end{equation*} Now a computer algebra system can carry out this calculation exactly, and you are encouraged to check it out just to see how truly small this quantity actually is.
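For instance, the following Python sketch (one way to do the check; any computer algebra system would serve equally well) computes the sum exactly with integer arithmetic. The probability itself is far too small to represent as an ordinary floating-point number, so the code reports its base-10 logarithm instead:

```python
from math import comb, log10

# Count the outcomes with at least 7,500 heads in 10,000 tosses.
tail = sum(comb(10_000, i) for i in range(7_500, 10_001))

# The probability is tail / 2**10000, which underflows a float,
# so report log10 of the probability instead.
log_prob = log10(tail) - 10_000 * log10(2)
print(f"log10(probability) = {log_prob:.1f}")
```

Running it reveals a probability with more than five hundred leading zeros after the decimal point, in stark contrast to the lottery's roughly \(1/4\text{.}\)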