\(\newcommand{\set}[1]{\{1,2,\dotsc,#1\,\}} \newcommand{\ints}{\mathbb{Z}} \newcommand{\posints}{\mathbb{N}} \newcommand{\rats}{\mathbb{Q}} \newcommand{\reals}{\mathbb{R}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\twospace}{\mathbb{R}^2} \newcommand{\threepace}{\mathbb{R}^3} \newcommand{\dspace}{\mathbb{R}^d} \newcommand{\nni}{\mathbb{N}_0} \newcommand{\nonnegints}{\mathbb{N}_0} \newcommand{\dom}{\operatorname{dom}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\prob}{\operatorname{prob}} \newcommand{\Prob}{\operatorname{Prob}} \newcommand{\height}{\operatorname{height}} \newcommand{\width}{\operatorname{width}} \newcommand{\length}{\operatorname{length}} \newcommand{\crit}{\operatorname{crit}} \newcommand{\inc}{\operatorname{inc}} \newcommand{\HP}{\mathbf{H_P}} \newcommand{\HCP}{\mathbf{H^c_P}} \newcommand{\GP}{\mathbf{G_P}} \newcommand{\GQ}{\mathbf{G_Q}} \newcommand{\AG}{\mathbf{A_G}} \newcommand{\GCP}{\mathbf{G^c_P}} \newcommand{\PXP}{\mathbf{P}=(X,P)} \newcommand{\QYQ}{\mathbf{Q}=(Y,Q)} \newcommand{\GVE}{\mathbf{G}=(V,E)} \newcommand{\HWF}{\mathbf{H}=(W,F)} \newcommand{\bfC}{\mathbf{C}} \newcommand{\bfG}{\mathbf{G}} \newcommand{\bfH}{\mathbf{H}} \newcommand{\bfF}{\mathbf{F}} \newcommand{\bfI}{\mathbf{I}} \newcommand{\bfK}{\mathbf{K}} \newcommand{\bfP}{\mathbf{P}} \newcommand{\bfQ}{\mathbf{Q}} \newcommand{\bfR}{\mathbf{R}} \newcommand{\bfS}{\mathbf{S}} \newcommand{\bfT}{\mathbf{T}} \newcommand{\bfNP}{\mathbf{NP}} \newcommand{\bftwo}{\mathbf{2}} \newcommand{\cgA}{\mathcal{A}} \newcommand{\cgB}{\mathcal{B}} \newcommand{\cgC}{\mathcal{C}} \newcommand{\cgD}{\mathcal{D}} \newcommand{\cgE}{\mathcal{E}} \newcommand{\cgF}{\mathcal{F}} \newcommand{\cgG}{\mathcal{G}} \newcommand{\cgM}{\mathcal{M}} \newcommand{\cgN}{\mathcal{N}} \newcommand{\cgP}{\mathcal{P}} \newcommand{\cgR}{\mathcal{R}} \newcommand{\cgS}{\mathcal{S}} \newcommand{\bfn}{\mathbf{n}} \newcommand{\bfm}{\mathbf{m}} \newcommand{\bfk}{\mathbf{k}} \newcommand{\bfs}{\mathbf{s}} 
\newcommand{\bijection}{\xrightarrow[\text{onto}]{\text{$1$--$1$}}} \newcommand{\injection}{\xrightarrow[]{\text{$1$--$1$}}} \newcommand{\surjection}{\xrightarrow[\text{onto}]{}} \newcommand{\nin}{\not\in} \newcommand{\prufer}{\mbox{prüfer}} \DeclareMathOperator{\fix}{fix} \DeclareMathOperator{\stab}{stab} \DeclareMathOperator{\var}{var} \newcommand{\inv}{^{-1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 16.8 Applying the Local Lemma

The list of applications of the Local Lemma has been growing steadily, as has interest in how the lemma can be applied algorithmically, i.e., in a constructive setting. But here we present one of the early applications to Ramsey theory—estimating the Ramsey number \(R(3,n)\text{.}\) Recall that we have the basic inequality \(R(3,n)\le \binom{n+1}{2}\) from Theorem 11.2, and it is natural to turn to the probabilistic method to look for good lower bounds. But a few minutes' thought shows that there are challenges to this approach.

First, let's try a direct computation. Suppose we try a random graph on \(t\) vertices with edge probability \(p\text{.}\) We would want no triangles, which suggests setting \(t^3p^3=1\text{,}\) i.e., \(p=1/t\text{.}\) Then we would want no independent sets of size \(n\text{,}\) which would require \(t^ne^{-pn^2}=1\text{,}\) i.e., \(n\ln t=pn^2\text{.}\) With \(p=1/t\text{,}\) this forces \(t\ln t=n\text{,}\) so we can't even make \(t\) larger than \(n\text{.}\) That's not helpful.
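A quick numeric illustration (not part of the argument): with \(p=1/t\text{,}\) the independent-set condition reduces to \(t\ln t\le n\text{.}\) The short sketch below (the function name is ours) finds the largest admissible \(t\) for a few values of \(n\text{:}\)

```python
import math

# With edge probability p = 1/t, avoiding independent n-sets requires
# n*ln(t) <= p*n^2, which with p = 1/t reduces to t*ln(t) <= n.
# Find the largest t satisfying that constraint.
def max_t_naive(n):
    t = 2
    while t * math.log(t) <= n:
        t += 1
    return t - 1

for n in (100, 1000, 10000):
    print(n, max_t_naive(n))   # in each case the admissible t is far below n
```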

We can do a bit better by allowing some triangles and then removing one point from each, as was done in the proof of Theorem 11.7. Along these lines, we would set \(t^3p^3=t\text{,}\) i.e., \(p=t^{-2/3}\text{.}\) The calculation now yields the lower bound \(R(3,n)\ge n^{3/2}/\ln^{3/2} n\text{,}\) so even the exponent of \(n\) is different from the upper bound.
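With \(p=t^{-2/3}\text{,}\) the independent-set condition \(\ln t\le pn\) becomes \(t^{2/3}\ln t\le n\text{,}\) i.e., \(t\le(n/\ln t)^{3/2}\text{.}\) A sketch of solving this numerically (the function name is ours, and multiplicative constants are ignored as in the text):

```python
import math

# Deletion-method heuristic: p = t^{-2/3}, so requiring ln t <= p*n gives
# t = (n / ln t)^{3/2} at the boundary.  Solve by fixed-point iteration.
def solve_deletion_t(n, iters=60):
    t = float(n)
    for _ in range(iters):
        t = (n / math.log(t)) ** 1.5
    return t

n = 10 ** 6
t = solve_deletion_t(n)
print(t, t / (n / math.log(n)) ** 1.5)   # t tracks (n/ln n)^{3/2}
```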

So which one is right, or is the answer somewhere in between? In a classic 1961 paper, Erdős used a very clever application of the probabilistic method to show the existence of a graph from which a good lower bound could be extracted. His technique yielded the lower bound \(R(3,n)\ge n^2/\ln^2 n\text{,}\) so the exponent \(2\) on \(n\) in the upper bound is correct.

Here we will use the Lovász Local Lemma to obtain this same lower bound in a much more direct manner. We consider a random graph on \(t\) vertices with edge probability \(p\text{.}\) For each \(3\)-element subset \(S\text{,}\) we have the event \(E_S\) which is true when \(S\) forms a triangle. For each \(n\)-element set \(T\text{,}\) we have the event \(E_T\) which is true when \(T\) is an independent set. In the discussion to follow, we abuse notation slightly and refer to events \(E_S\) and \(E_T\) as just \(S\) and \(T\text{,}\) respectively. Note that the probability of \(S\) is \(p^3\) for each \(3\)-element set \(S\text{,}\) while the probability of \(T\) is \(q=(1-p)^{C(n,2)}\sim e^{-pn^2/2}\) for each \(n\)-element set \(T\text{.}\)
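The approximation \(q\sim e^{-pn^2/2}\) is easy to check numerically; a minimal sketch, with \(n\) and \(p\) chosen only for illustration:

```python
import math

# Pr[T independent] = (1-p)^C(n,2); since C(n,2) = n(n-1)/2 ~ n^2/2 and
# ln(1-p) ~ -p for small p, this is close to exp(-p*n^2/2).
n, p = 200, 0.001
q_exact = (1 - p) ** (n * (n - 1) // 2)
q_approx = math.exp(-p * n * n / 2)
print(q_exact, q_approx)   # the logarithms agree to within about 0.1
```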

When we apply the Local Lemma, we will set \(x=x(S)\) to be \(e^2p^3\) for each \(3\)-element set \(S\text{,}\) and we will set \(y=y(T)=q^{1/2}\sim e^{-pn^2/4}\text{.}\) It will be clear in a moment where these values come from.

Furthermore, the neighborhood of an event consists of all sets in the family which have two or more elements in common with the given set. So the neighborhood of a \(3\)-element set \(S\) consists of \(3(t-3)\) other \(3\)-element sets and \(C(t-3,n-3)+3C(t-3,n-2)\) sets of size \(n\text{.}\) Similarly, the neighborhood of an \(n\)-element set \(T\) consists of \(C(n,3)+ (t-n)C(n,2)\) sets of size \(3\) and \(\sum_{i=2}^{n-1}C(n,i) C(t-n,n-i)\) other sets of size \(n\text{.}\) So the basic inequalities we need to satisfy are:
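These four neighborhood counts can be verified by brute-force enumeration on a small universe; the sketch below uses \(t=10\) and \(n=5\text{,}\) chosen arbitrarily:

```python
from itertools import combinations
from math import comb

# Two events are neighbors when their vertex sets share at least two
# elements (i.e., a common edge).  Check each count against its formula.
t, n = 10, 5
S = {0, 1, 2}            # a fixed 3-element set
T = set(range(n))        # a fixed n-element set

near_S_3 = sum(1 for A in combinations(range(t), 3)
               if set(A) != S and len(S & set(A)) >= 2)
assert near_S_3 == 3 * (t - 3)

near_S_n = sum(1 for B in combinations(range(t), n)
               if len(S & set(B)) >= 2)
assert near_S_n == comb(t - 3, n - 3) + 3 * comb(t - 3, n - 2)

near_T_3 = sum(1 for A in combinations(range(t), 3)
               if len(T & set(A)) >= 2)
assert near_T_3 == comb(n, 3) + (t - n) * comb(n, 2)

near_T_n = sum(1 for B in combinations(range(t), n)
               if set(B) != T and len(T & set(B)) >= 2)
assert near_T_n == sum(comb(n, i) * comb(t - n, n - i) for i in range(2, n))

print(near_S_3, near_S_n, near_T_3, near_T_n)
```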

\begin{align*} p^3 \le \amp x(1-x)^{3(t-3)}(1-y)^{C(t-3,n-3)+3C(t-3,n-2)}\\ q \le \amp y(1-x)^{C(n,3)+(t-n)C(n,2)}(1-y)^{\sum_{i=2}^{n-1}C(n,i)C(t-n,n-i)} \end{align*}

Next, we assume that \(n^{3/2}\lt t\lt n^2\) and then make the usual approximations, ignoring smaller order terms and multiplicative constants, to see that these inequalities can be considered in the following simplified form:

\begin{align*} p^3 \le \amp x(1-x)^{t}(1-y)^{t^n}\\ q \le \amp y(1-x)^{tn^2}(1-y)^{t^n} \end{align*}

A moment's reflection makes it clear that we want to keep the terms involving \((1-y)\) relatively large, i.e., at least \(1/e\text{.}\) This will certainly be true if we keep \(t^n\le 1/y\text{.}\) This is equivalent to \(n\ln t\le pn^2\text{,}\) or \(\ln t\le pn\text{.}\)

Similarly, we want to keep the term \((1-x)^{t}\) relatively large, so we keep \(t\le 1/x\text{,}\) i.e., \(t\le 1/p^3\text{.}\) On the other hand, we only need to keep the term \((1-x)^{tn^2}\sim e^{-xtn^2}\) at least as large as \(y\text{.}\) This is equivalent to keeping \(xt\le p\text{,}\) and since \(x\sim p^3\text{,}\) this can be rewritten as \(t^{1/2}\le p^{-1}\text{.}\)
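The choices made in the next paragraph sit exactly on the boundary of these constraints. A tiny numeric check (the value of \(t\) is arbitrary) confirms that with \(p=t^{-1/2}\) and \(n\) chosen so that \(\ln t=pn\text{,}\) all the conditions hold:

```python
import math

# Boundary choices: p = t^{-1/2} and ln t = p*n.  Check that t <= 1/p^3,
# that t^{1/2} and 1/p coincide, and that ln t = p*n holds as set up.
t = 1e9                       # arbitrary illustrative value
p = t ** -0.5
n = math.log(t) / p           # forces ln t = p*n
assert t <= p ** -3
assert math.isclose(t ** 0.5, 1 / p)
assert math.isclose(math.log(t), p * n)
print(p, n)
```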

Now we have our marching orders: to make \(t\) as large as possible, we set \(\ln t=pn\) and \(p^{-1}=t^{1/2}\text{.}\) After substituting, we get \(t= n^2/\ln^2 t\text{,}\) and since \(\ln t\sim\ln n\) (at least within the kind of approximations we are using), we get the desired result \(t=n^2/\ln^2 n\text{.}\)
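The implicit equation \(t=n^2/\ln^2 t\) can be solved by fixed-point iteration; a minimal sketch (the function name is ours) showing that the solution grows like \(n^2/\ln^2 n\) up to a constant factor:

```python
import math

# Solve t = n^2 / (ln t)^2 by fixed-point iteration; the contraction
# factor is about 2/ln(t), so convergence is fast.
def solve_t(n, iters=60):
    t = float(n)
    for _ in range(iters):
        t = n * n / math.log(t) ** 2
    return t

n = 10 ** 6
t = solve_t(n)
print(t, t / (n * n / math.log(n) ** 2))   # ratio is a modest constant
```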