\(\newcommand{\set}[1]{\{1,2,\dotsc,#1\,\}} \newcommand{\ints}{\mathbb{Z}} \newcommand{\posints}{\mathbb{N}} \newcommand{\rats}{\mathbb{Q}} \newcommand{\reals}{\mathbb{R}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\twospace}{\mathbb{R}^2} \newcommand{\threepace}{\mathbb{R}^3} \newcommand{\dspace}{\mathbb{R}^d} \newcommand{\nni}{\mathbb{N}_0} \newcommand{\nonnegints}{\mathbb{N}_0} \newcommand{\dom}{\operatorname{dom}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\prob}{\operatorname{prob}} \newcommand{\Prob}{\operatorname{Prob}} \newcommand{\height}{\operatorname{height}} \newcommand{\width}{\operatorname{width}} \newcommand{\length}{\operatorname{length}} \newcommand{\crit}{\operatorname{crit}} \newcommand{\inc}{\operatorname{inc}} \newcommand{\HP}{\mathbf{H_P}} \newcommand{\HCP}{\mathbf{H^c_P}} \newcommand{\GP}{\mathbf{G_P}} \newcommand{\GQ}{\mathbf{G_Q}} \newcommand{\AG}{\mathbf{A_G}} \newcommand{\GCP}{\mathbf{G^c_P}} \newcommand{\PXP}{\mathbf{P}=(X,P)} \newcommand{\QYQ}{\mathbf{Q}=(Y,Q)} \newcommand{\GVE}{\mathbf{G}=(V,E)} \newcommand{\HWF}{\mathbf{H}=(W,F)} \newcommand{\bfC}{\mathbf{C}} \newcommand{\bfG}{\mathbf{G}} \newcommand{\bfH}{\mathbf{H}} \newcommand{\bfF}{\mathbf{F}} \newcommand{\bfI}{\mathbf{I}} \newcommand{\bfK}{\mathbf{K}} \newcommand{\bfP}{\mathbf{P}} \newcommand{\bfQ}{\mathbf{Q}} \newcommand{\bfR}{\mathbf{R}} \newcommand{\bfS}{\mathbf{S}} \newcommand{\bfT}{\mathbf{T}} \newcommand{\bfNP}{\mathbf{NP}} \newcommand{\bftwo}{\mathbf{2}} \newcommand{\cgA}{\mathcal{A}} \newcommand{\cgB}{\mathcal{B}} \newcommand{\cgC}{\mathcal{C}} \newcommand{\cgD}{\mathcal{D}} \newcommand{\cgE}{\mathcal{E}} \newcommand{\cgF}{\mathcal{F}} \newcommand{\cgG}{\mathcal{G}} \newcommand{\cgM}{\mathcal{M}} \newcommand{\cgN}{\mathcal{N}} \newcommand{\cgP}{\mathcal{P}} \newcommand{\cgR}{\mathcal{R}} \newcommand{\cgS}{\mathcal{S}} \newcommand{\bfn}{\mathbf{n}} \newcommand{\bfm}{\mathbf{m}} \newcommand{\bfk}{\mathbf{k}} \newcommand{\bfs}{\mathbf{s}} 
\newcommand{\bijection}{\xrightarrow[\text{onto}]{\text{$1$--$1$}}} \newcommand{\injection}{\xrightarrow[]{\text{$1$--$1$}}} \newcommand{\surjection}{\xrightarrow[\text{onto}]{}} \newcommand{\nin}{\not\in} \newcommand{\prufer}{\mbox{prüfer}} \DeclareMathOperator{\fix}{fix} \DeclareMathOperator{\stab}{stab} \DeclareMathOperator{\var}{var} \newcommand{\inv}{^{-1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 11.4 Applying Probability to Ramsey Theory

The following theorem, due to P. Erdős, is a true classic, and it is presented here in a manner faithful to how it was first published. As we shall see later, it was subsequently recast—but that's putting the cart before the horse.


Now let's take a second look at the proof of Theorem 11.4. We consider a probability space \((S,P)\) where the outcomes are graphs with vertex set \(\{1,2,\dots,t\}\text{.}\) For each \(i\) and \(j\) with \(1\le i \lt j\le t\text{,}\) edge \(ij\) is present in the graph with probability \(1/2\text{.}\) Furthermore, the events for distinct pairs are independent.
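This random graph model is easy to sample directly. The following sketch (the function name is ours, not from the text) draws one outcome from the probability space \((S,P)\text{:}\) each of the \(\binom{t}{2}\) possible edges is included independently with probability \(1/2\text{.}\)

```python
import random

def random_graph(t, p=0.5, seed=None):
    """Sample one outcome of the space (S, P): a graph on vertices
    1..t in which each pair ij is an edge independently with
    probability p (here p = 1/2)."""
    rng = random.Random(seed)
    return {frozenset((i, j))
            for i in range(1, t + 1)
            for j in range(i + 1, t + 1)
            if rng.random() < p}

# One sample on 5 vertices; each of the C(5,2) = 10 possible
# edges appears with probability 1/2.
edges = random_graph(5, seed=42)
```

Representing each edge as a `frozenset` makes the pair \(\{i,j\}\) unordered, matching the convention \(1\le i \lt j\le t\) in the text.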

Let \(X_1\) denote the random variable which counts the number of \(n\)-element subsets of \(\{1,2,\dots,t\}\) for which all \(\binom{n}{2}\) pairs are edges in the graph. Similarly, let \(X_2\) count the number of \(n\)-element subsets for which none of the pairs is an edge, i.e., the \(n\)-element independent sets. Then set \(X=X_1+X_2\text{.}\)
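For a single sampled graph, the values of \(X_1\) and \(X_2\) can be computed by brute force over all \(n\)-element subsets. This is only practical for small \(t\text{,}\) but it makes the definitions concrete (the function name is ours):

```python
from itertools import combinations

def count_mono_sets(edges, t, n):
    """Return (X1, X2) for a graph on vertices 1..t given as a set of
    frozenset edges: X1 counts n-element cliques (all pairs are edges),
    X2 counts n-element independent sets (no pair is an edge)."""
    x1 = x2 = 0
    for S in combinations(range(1, t + 1), n):
        pairs = [frozenset(p) for p in combinations(S, 2)]
        present = sum(p in edges for p in pairs)
        if present == len(pairs):
            x1 += 1   # all C(n,2) pairs are edges: a clique
        elif present == 0:
            x2 += 1   # no pair is an edge: an independent set
    return x1, x2

# Sanity check on K_4 (every pair an edge): each of the C(4,3) = 4
# three-element subsets is a clique, and none is independent.
k4 = {frozenset(p) for p in combinations(range(1, 5), 2)}
```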

By linearity of expectation, \(E(X)=E(X_1)+E(X_2)\text{,}\) while \begin{equation*} E(X_1)=E(X_2) = \binom{t}{n} \frac{1}{2^{\binom{n}{2}}}. \end{equation*} If \(E(X)\lt 1\text{,}\) then, since \(X\) takes nonnegative integer values, some outcome must have \(X=0\text{;}\) that is, there must exist a graph with vertex set \(\{1,2,\dots,t\}\) containing neither a \(K_n\) nor an \(I_n\text{.}\) The question of how large \(t\) can be while maintaining \(E(X)\lt 1\) leads to exactly the same calculation we had before.
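This first-moment calculation is easy to carry out numerically. The following sketch (function names are ours) evaluates \(E(X) = 2\binom{t}{n}/2^{\binom{n}{2}}\) and finds the largest \(t\) for which \(E(X)\lt 1\text{,}\) which by the argument above gives \(R(n,n) \gt t\text{:}\)

```python
from math import comb

def expected_mono(t, n):
    """E(X) = 2 * C(t, n) / 2^C(n, 2): the expected number of
    monochromatic n-sets (cliques plus independent sets) in a
    uniformly random graph on t vertices."""
    return 2 * comb(t, n) / 2 ** comb(n, 2)

def lower_bound(n):
    """Largest t (for n >= 3) with E(X) < 1; the first-moment
    argument then yields R(n, n) > t."""
    t = n
    while expected_mono(t + 1, n) < 1:
        t += 1
    return t
```

For instance, `lower_bound(5)` reports that a graph on \(11\) vertices with no \(K_5\) and no \(I_5\) must exist. For small \(n\) this bound is weaker than known exact values, but it grows like \(2^{n/2}\text{,}\) which is the exponential behavior the text refers to.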

After more than fifty years and the efforts of many very bright researchers, only marginal improvements have been made on the bounds on \(R(n,n)\) from Theorem 11.2 and Theorem 11.4. In particular, no one can settle whether there is some constant \(c\lt 2\) and an integer \(n_0\) so that \(R(n,n)\lt 2^{cn}\) when \(n>n_0\text{.}\) Similarly, no one has been able to answer whether there is some constant \(d>1/2\) and an integer \(n_1\) so that \(R(n,n)>2^{dn}\) when \(n>n_1\text{.}\) We would certainly give you an \(A\) for this course if you managed to do either.


Carlos said that he had been trying to prove a good lower bound on \(R(n,n)\) using only constructive methods, i.e., with no random techniques allowed, but he was having problems. Everything he tried seemed only to show that \(R(n,n)\ge n^c\) where \(c\) is a constant, which seems very weak compared to the exponential bound that the probabilistic method gives so easily. Usually Alice was not very sympathetic to the complaints of others, and certainly not to those of Carlos, who always seemed to be out front. But this time, Alice said to Carlos, in a manner that all could hear, “Maybe you shouldn't be so hard on yourself. I read an article on the web saying that nobody has been able to show that there is a constant \(c>1\) and an integer \(n_0\) so that \(R(n,n)>c^n\) when \(n>n_0\text{,}\) provided that only constructive methods are allowed. And maybe, just maybe, being unable to do something that lots of other famous people also seem unable to do is not so bad.” Bob saw a new side of Alice, and this too wasn't all bad.