
Section 16.4 The Stable Matching Theorem

Now we present a lighthearted optimization problem with a quite clever solution, called the Stable Matching Theorem. There are \(n\) eligible males \(b_1\), \(b_2,\dots,b_n\) and \(n\) eligible females \(g_1\), \(g_2,\dots,g_n\). (The theorem dates back many years, hence the heteronormative statement.) We will arrange \(n\) marriages, each involving one male and one female. In the process, we will try to make everyone happy—or at least we will try to keep things stable.

Each female linearly orders the males in the order of her preference, i.e., for each \(i=1,2,\dots,n\), there is a permutation \(\sigma_i\) of \([n]\) so that if \(g_i\) prefers \(b_j\) to \(b_k\), then \(\sigma_i(j)> \sigma_i(k)\). Different females may have quite different preference orders. Also, each male linearly orders the females in order of his preference, i.e., for each \(i=1,2,\dots,n\), there is a permutation \(\tau_i\) of \([n]\) so that if \(b_i\) prefers \(g_j\) to \(g_k\), then \(\tau_i(j)>\tau_i(k)\).
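As a concrete illustration, a preference order can be stored exactly as one of these rank permutations; the particular names and values below are ours, not from the text. A minimal sketch:

```python
# Hypothetical example with n = 3: sigma_1(j) records how highly female g_1
# ranks male b_j, with larger values meaning stronger preference, as in the
# text. Here g_1's order is b_2 > b_3 > b_1, so sigma_1(1) = 1,
# sigma_1(2) = 3, sigma_1(3) = 2.
sigma_1 = {1: 1, 2: 3, 3: 2}

def prefers(sigma, j, k):
    """True when the owner of rank permutation sigma prefers b_j to b_k."""
    return sigma[j] > sigma[k]
```

With this encoding, \(g_1\) prefers \(b_2\) to \(b_3\) precisely because \(\sigma_1(2) > \sigma_1(3)\).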

A \(1\)–\(1\) matching of the \(n\) males to the \(n\) females is stable if there do not exist two males \(b\) and \(b'\) and two females \(g\) and \(g'\) so that

  1. \(b\) is matched to \(g\);
  2. \(b'\) is matched to \(g'\);
  3. \(b\) prefers \(g'\) to \(g\); and
  4. \(g'\) prefers \(b\) to \(b'\).

The idea is that given these preferences, \(b\) and \(g'\) may be mutually inclined to dissolve their current relationships and initiate a dalliance with one another.
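The definition above can be checked mechanically: a matching is unstable exactly when some pair prefer each other to their assigned partners. Here is a sketch of such a test, using hypothetical rank tables in the spirit of the permutations \(\sigma_i\) and \(\tau_i\) (higher value means stronger preference); the function name and data layout are ours.

```python
def is_stable(match, male_rank, female_rank):
    """Return True when the 1-1 matching has no blocking pair.

    match: dict mapping each male to his matched female.
    male_rank[b][g], female_rank[g][b]: larger values mean stronger
    preference, mirroring the permutations tau_i and sigma_i in the text.
    """
    partner_of = {g: b for b, g in match.items()}  # invert the matching
    for b, g in match.items():
        for g2, b2 in partner_of.items():
            if g2 == g:
                continue
            # b and g2 block the matching if each prefers the other
            # to their assigned partner (conditions 1-4 above).
            if (male_rank[b][g2] > male_rank[b][g]
                    and female_rank[g2][b] > female_rank[g2][b2]):
                return False
    return True
```

The double loop is \(O(n^2)\), which is acceptable here since a blocking pair can involve any male paired with any female other than his own.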

So the question is whether, regardless of their respective preferences, we can always generate a stable matching. The answer is yes, and there is a quite clever argument. In fact, it is one that yields a quite efficient algorithm. At Stage 1, all males go knock on the front door of the female who is at the top of their list. It may happen that some females have more than one caller while others have none. However, if a female has one or more males at her door, she reaches out and grabs the one among the group whom she prefers most by the collar and tells the others, if there are any, to go away. Any male rejected at this step proceeds to the front door of the female who is second on his list. Again, a female with one or more suitors at her door chooses the best among them and sends the others away. This process continues until eventually each female is holding onto exactly one male.
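The door-knocking procedure above (the proposal/rejection scheme) can be sketched directly in code. This is an illustrative implementation, not taken from the text; the function name and data layout are ours, and preference lists are given in decreasing order of desirability.

```python
from collections import deque

def propose_and_reject(male_prefs, female_rank):
    """Run the proposal algorithm described in the text.

    male_prefs[b]: list of females in b's decreasing order of preference.
    female_rank[g][b]: larger values mean g prefers b more.
    Returns a dict mapping each male to the female who ends up holding him.
    """
    next_choice = {b: 0 for b in male_prefs}   # next door each male will try
    holds = {}                                 # female -> male she is holding
    free = deque(male_prefs)                   # males not currently held
    while free:
        b = free.popleft()
        g = male_prefs[b][next_choice[b]]      # best female he has not tried
        next_choice[b] += 1
        if g not in holds:
            holds[g] = b                       # her first caller: she holds him
        elif female_rank[g][b] > female_rank[g][holds[g]]:
            free.append(holds[g])              # she trades up; old suitor leaves
            holds[g] = b
        else:
            free.append(b)                     # she sends b away
    return {b: g for g, b in holds.items()}
```

Each male moves down his list and never revisits a door, so the loop runs at most \(n^2\) times before every female holds exactly one suitor.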

It is interesting to note that each female's prospects improve over time, i.e., once she has a suitor, things only get better. Conversely, each male's prospects deteriorate over time. Regardless, we assert that the resulting matching is stable. To see this, suppose that it is unstable and choose males \(b\) and \(b'\), females \(g\) and \(g'\) so that \(b\) is matched to \(g\), \(b'\) is matched to \(g'\), \(b\) prefers \(g'\) to \(g\), and \(g'\) prefers \(b\) to \(b'\). The algorithm requires that male \(b\) start at the top of his list and work his way down. Since he eventually lands on \(g\)'s doorstep, and he prefers \(g'\) to \(g\), at some earlier stage he was actually at \(g'\)'s door, and she sent him away. This means that at that exact moment, she had a male in hand whom she prefers to \(b\). Since her holdings only improve with time, when the matching is finalized, female \(g'\) has a mate she prefers to \(b\)—contradicting our assumption that \(g'\) prefers \(b\) to her mate \(b'\).