\(\newcommand{\set}[1]{\{1,2,\dotsc,#1\,\}} \newcommand{\ints}{\mathbb{Z}} \newcommand{\posints}{\mathbb{N}} \newcommand{\rats}{\mathbb{Q}} \newcommand{\reals}{\mathbb{R}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\twospace}{\mathbb{R}^2} \newcommand{\threepace}{\mathbb{R}^3} \newcommand{\dspace}{\mathbb{R}^d} \newcommand{\nni}{\mathbb{N}_0} \newcommand{\nonnegints}{\mathbb{N}_0} \newcommand{\dom}{\operatorname{dom}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\prob}{\operatorname{prob}} \newcommand{\Prob}{\operatorname{Prob}} \newcommand{\height}{\operatorname{height}} \newcommand{\width}{\operatorname{width}} \newcommand{\length}{\operatorname{length}} \newcommand{\crit}{\operatorname{crit}} \newcommand{\inc}{\operatorname{inc}} \newcommand{\HP}{\mathbf{H_P}} \newcommand{\HCP}{\mathbf{H^c_P}} \newcommand{\GP}{\mathbf{G_P}} \newcommand{\GQ}{\mathbf{G_Q}} \newcommand{\AG}{\mathbf{A_G}} \newcommand{\GCP}{\mathbf{G^c_P}} \newcommand{\PXP}{\mathbf{P}=(X,P)} \newcommand{\QYQ}{\mathbf{Q}=(Y,Q)} \newcommand{\GVE}{\mathbf{G}=(V,E)} \newcommand{\HWF}{\mathbf{H}=(W,F)} \newcommand{\bfC}{\mathbf{C}} \newcommand{\bfG}{\mathbf{G}} \newcommand{\bfH}{\mathbf{H}} \newcommand{\bfF}{\mathbf{F}} \newcommand{\bfI}{\mathbf{I}} \newcommand{\bfK}{\mathbf{K}} \newcommand{\bfP}{\mathbf{P}} \newcommand{\bfQ}{\mathbf{Q}} \newcommand{\bfR}{\mathbf{R}} \newcommand{\bfS}{\mathbf{S}} \newcommand{\bfT}{\mathbf{T}} \newcommand{\bfNP}{\mathbf{NP}} \newcommand{\bftwo}{\mathbf{2}} \newcommand{\cgA}{\mathcal{A}} \newcommand{\cgB}{\mathcal{B}} \newcommand{\cgC}{\mathcal{C}} \newcommand{\cgD}{\mathcal{D}} \newcommand{\cgE}{\mathcal{E}} \newcommand{\cgF}{\mathcal{F}} \newcommand{\cgG}{\mathcal{G}} \newcommand{\cgM}{\mathcal{M}} \newcommand{\cgN}{\mathcal{N}} \newcommand{\cgP}{\mathcal{P}} \newcommand{\cgR}{\mathcal{R}} \newcommand{\cgS}{\mathcal{S}} \newcommand{\bfn}{\mathbf{n}} \newcommand{\bfm}{\mathbf{m}} \newcommand{\bfk}{\mathbf{k}} \newcommand{\bfs}{\mathbf{s}} 
\newcommand{\bijection}{\xrightarrow[\text{onto}]{\text{$1$--$1$}}} \newcommand{\injection}{\xrightarrow[]{\text{$1$--$1$}}} \newcommand{\surjection}{\xrightarrow[\text{onto}]{}} \newcommand{\nin}{\not\in} \newcommand{\prufer}{\mbox{prüfer}} \DeclareMathOperator{\fix}{fix} \DeclareMathOperator{\stab}{stab} \DeclareMathOperator{\var}{var} \newcommand{\inv}{^{-1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 12.1 Minimum Weight Spanning Trees

In this section, we consider pairs \((\bfG,w)\) where \(\GVE\) is a connected graph and \(w\colon E\rightarrow\nonnegints\text{.}\) For each edge \(e\in E\text{,}\) the quantity \(w(e)\) is called the weight of \(e\text{.}\) Given a set \(S\) of edges, we define the weight of \(S\text{,}\) denoted \(w(S)\text{,}\) by setting \(w(S)=\sum_{e\in S} w(e)\text{.}\) In particular, the weight of a spanning tree \(T\) is just the sum of the weights of the edges in \(T\text{.}\)
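This definition translates directly into code. Here is a minimal Python sketch; the dictionary representation of \(w\) is just one convenient choice, and the sample weights are taken from the edges \(ck\text{,}\) \(ag\text{,}\) and \(fg\) of Figure 12.1.

```python
# Edge weights stored as a dict mapping frozenset edges to weights; the
# sample values come from Figure 12.1.
w = {frozenset("ck"): 23, frozenset("ag"): 25, frozenset("fg"): 26}

def weight(S):
    """The weight w(S) of a set S of edges: the sum of its edge weights."""
    return sum(w[e] for e in S)

print(weight({frozenset("ck"), frozenset("ag")}))  # 48
```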

Weighted graphs arise in many contexts. One of the most natural is when the weights on the edges are distances or costs. For example, consider the weighted graph in Figure 12.1. Suppose the vertices represent nodes of a network and the edges represent the ability to establish direct physical connections between those nodes. The weights associated with the edges represent the cost (let's say in thousands of dollars) of building those connections. The company establishing the network among the nodes only cares that there is a way to get data between each pair of nodes. Any additional links would create redundancy, which they are not interested in at this time. A spanning tree of the graph ensures that each node can communicate with each of the others and has no redundancy, since removing any edge disconnects it. Thus, to minimize the cost of building the network, we want to find a minimum weight (or cost) spanning tree.

Figure 12.1 A weighted graph

With this motivation, this section considers the following problem:

Problem 12.2

Find a minimum weight spanning tree \(\bfT\) of \(\bfG\text{.}\)

To solve this problem, we will develop two efficient graph algorithms, each having certain computational advantages and disadvantages. Before developing the algorithms, we need to establish some preliminaries about spanning trees and forests.

Subsection 12.1.1 Preliminaries

The following proposition about the number of components in a spanning forest of a graph \(\bfG\) has an easy inductive proof. You are asked to provide it in the exercises.

The following proposition establishes a way to take a spanning tree of a graph, remove an edge from it, and add an edge of the graph that is not in the spanning tree to create a new spanning tree. Effectively, the process exchanges two edges to form the new spanning tree, so we call this the exchange principle.
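To make the exchange principle concrete, here is a small Python sketch (the set-of-frozensets representation and the helper names are our own illustrative choices, not from the text). Adding an edge \(e\) of the graph that is not in the spanning tree closes a unique cycle, and deleting any other edge of that cycle yields a new spanning tree.

```python
def exchange(tree, e):
    """Exchange sketch: tree is a set of frozenset edges forming a spanning
    tree, and e is a graph edge not in the tree.  Adding e closes a unique
    cycle; removing any other edge on that cycle gives a new spanning tree.
    Here we remove the cycle edge incident with one endpoint of e."""
    u, v = tuple(e)
    # Build an adjacency list for the tree edges.
    adj = {}
    for f in tree:
        a, b = tuple(f)
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    def path(x, target, seen):
        # Depth-first search for the unique tree path from x to target.
        if x == target:
            return [x]
        seen.add(x)
        for y in adj[x]:
            if y not in seen:
                p = path(y, target, seen)
                if p:
                    return [x] + p
        return None

    p = path(u, v, set())
    f = frozenset({p[0], p[1]})      # an edge on the cycle closed by e
    return (tree - {f}) | {e}
```

For instance, exchanging the non-tree edge \(ad\) into the path \(a\)–\(b\)–\(c\)–\(d\) removes one edge of the resulting cycle and leaves a spanning tree of the same size.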

For both of the algorithms we develop, the argument to show that the algorithm is optimal rests on the following technical lemma. To avoid trivialities, we assume \(n\ge3\text{.}\)

Discussion 12.7

Although Bob's combinatorial intuition has improved over the course, he doesn't quite understand why we need special algorithms to find minimum weight spanning trees. He figures there can't be that many spanning trees, so he wants to just write them down. Alice groans as she senses that Bob must have been absent when the material from Section 5.6 was discussed. In that section, we learned that a graph on \(n\) vertices can have as many as \(n^{n-2}\) spanning trees (or, horrors, the instructor may have left it off the syllabus). Regardless, this exhaustive approach is already unusable when \(n = 20\text{.}\) Dave mumbles something about being greedy and just adding the lightest edges one-by-one while never adding an edge that would make a cycle. Zori remembers a strategy like this working for finding the height of a poset, but she's worried about the nightmare situation that we learned about with using FirstFit to color graphs. Alice agrees that greedy algorithms have an inconsistent track record but suggests that Lemma 12.6 may be enough to get one to succeed here.

Subsection 12.1.2 Kruskal's Algorithm

In this section, we develop one of the best known algorithms for finding a minimum weight spanning tree. It is known as Kruskal's Algorithm, although some prefer the descriptive label Avoid Cycles because of the way it builds the spanning tree.

To start Kruskal's algorithm, we sort the edges according to weight. To be more precise, let \(m\) denote the number of edges in \(\GVE\text{.}\) Then label the edges as \(e_1,e_2,e_3,\dots,e_m\) so that \(w(e_1)\le w(e_2)\le \dots \le w(e_m)\text{.}\) Any of the many available efficient sorting algorithms can be used to do this step.

Once the edges are sorted, Kruskal's algorithm proceeds to an initialization step and then inductively builds the spanning tree \(\bfT=(V,S)\text{:}\)
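The initialization and inductive steps can be sketched in Python as follows. This is an illustrative sketch, not the book's formal pseudocode; it tracks the components of the growing spanning forest with a simple union-find structure, which is one standard way to "avoid cycles."

```python
def kruskal(n, edges):
    """Kruskal's algorithm sketch.  edges is a list of (weight, u, v)
    tuples with vertices numbered 0..n-1; returns the edges of a minimum
    weight spanning tree."""
    parent = list(range(n))          # union-find: each vertex its own component

    def find(x):
        # Find the component root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []                        # initialization: S is the empty set
    for wgt, u, v in sorted(edges):  # examine edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                 # endpoints in different components, so
            parent[ru] = rv          # adding uv cannot create a cycle
            tree.append((wgt, u, v))
        if len(tree) == n - 1:       # a spanning tree has n - 1 edges
            break
    return tree
```

Note that `sorted(edges)` orders the tuples by weight first, which performs the sorting step described above.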

The correctness of Kruskal's Algorithm follows from an inductive argument. First, the set \(S\) is initialized as the empty set, so there is certainly a minimum weight spanning tree containing all the edges in \(S\text{.}\) Now suppose that for some \(i\) with \(0\le i \lt n-1\text{,}\) \(|S|=i\) and there is a minimum weight spanning tree containing all the edges in \(S\text{.}\) Let \(\bfF\) be the spanning forest determined by the edges in \(S\text{,}\) and let \(C_1, C_2,\dots,C_s\) be the components of \(\bfF\text{.}\) For each \(k=1,2,\dots,s\text{,}\) let \(f_k\) be a minimum weight edge with one endpoint in \(C_k\) and the other not in \(C_k\text{.}\) Then the edge \(e\) added to \(S\) by Kruskal's Algorithm is just the edge of \(\{f_1,f_2,\dots,f_s\}\) having minimum weight. Applying Lemma 12.6 and the inductive hypothesis, we know that there will still be a minimum weight spanning tree of \(\bfG\) containing all the edges of \(S\cup\{e\}\text{.}\)

Example 12.9 Kruskal's Algorithm

Let's see what Kruskal's algorithm does on the weighted graph in Figure 12.1. It first sorts all of the edges by weight. We won't reproduce the list here, since we won't need all of it. The edge of least weight is \(ck\text{,}\) which has weight \(23\text{.}\) It continues adding the edge of least weight, adding \(ag\text{,}\) \(fg\text{,}\) \(fi\text{,}\) \(fj\text{,}\) and \(bj\text{.}\) However, after doing this, the edge of lowest weight is \(fb\text{,}\) which has weight \(38\text{.}\) This edge cannot be added, as doing so would make \(fjb\) a cycle. Thus, the algorithm bypasses it and adds \(bc\text{.}\) Edge \(ai\) is next inspected, but it, too, would create a cycle and is eliminated from consideration. Then \(em\) is added, followed by \(dl\text{.}\) There are now two edges of weight \(56\) to be considered: \(al\) and \(dj\text{.}\) Our sorting algorithm has somehow decided one of them should appear first, so let's say it's \(dj\text{.}\) After adding \(dj\text{,}\) we cannot add \(al\text{,}\) as \(agfjdl\) would form a cycle. Edge \(dk\) is next considered, but it would also form a cycle. However, \(ek\) can be added. Edges \(km\) and \(dm\) are then bypassed. Finally, edge \(ch\) is added as the twelfth and final edge for this \(13\)-vertex spanning tree. The full list of edges added (in order) is shown below. The total weight of this spanning tree is \(504\text{.}\)

c k 23
a g 25
f g 26
f i 29
f j 30
b j 34
b c 39
e m 49
d l 55
d j 56
e k 59
c h 79
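As a quick sanity check on the arithmetic, summing the twelve listed weights confirms the stated total:

```python
# Weights of the twelve edges Kruskal's algorithm adds in Example 12.9.
kruskal_weights = [23, 25, 26, 29, 30, 34, 39, 49, 55, 56, 59, 79]
print(sum(kruskal_weights))  # 504
```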

Subsection 12.1.3 Prim's Algorithm

We now develop Prim's Algorithm for finding a minimum weight spanning tree. This algorithm is also known by a more descriptive label: Build Tree. We begin by choosing a root vertex \(r\text{.}\) Again, the algorithm proceeds with an initialization step followed by a series of inductive steps.
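A common way to realize this Build Tree strategy is with a priority queue of candidate edges. The following Python sketch is an illustration under that design choice, not the book's formal algorithm: starting from the root, it repeatedly takes the lightest edge leaving the current tree.

```python
import heapq

def prim(graph, root):
    """Prim's algorithm sketch.  graph maps each vertex to a list of
    (weight, neighbor) pairs; returns the edges of a minimum weight
    spanning tree grown outward from root."""
    in_tree = {root}
    # Heap of candidate edges (weight, u, v) with u inside the tree.
    heap = [(wgt, root, v) for wgt, v in graph[root]]
    heapq.heapify(heap)
    tree = []
    while heap and len(in_tree) < len(graph):
        wgt, u, v = heapq.heappop(heap)   # lightest candidate edge...
        if v in in_tree:
            continue                      # ...skipped if both ends are in the tree
        in_tree.add(v)
        tree.append((wgt, u, v))
        for w2, x in graph[v]:            # new candidates leaving the tree
            if x not in in_tree:
                heapq.heappush(heap, (w2, v, x))
    return tree
```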

The correctness of Prim's algorithm follows immediately from Lemma 12.6.

Example 12.11 Prim's Algorithm

Let's see what Prim's algorithm does on the weighted graph in Figure 12.1. We start with vertex \(a\) as the root vertex. The lightest edge connecting \(a\) (the only vertex in the tree so far) to the rest of the graph is \(ag\text{.}\) Next, \(fg\) is added. This is followed by \(fi\text{,}\) \(fj\text{,}\) \(bj\text{,}\) and \(bc\text{.}\) Next, the algorithm identifies \(ck\) as the lightest edge connecting \(\{a,g,i,f,j,b,c\}\) to the remaining vertices. Notice that this is considerably later than Kruskal's algorithm finds the same edge. The algorithm then determines that \(al\) and \(jd\text{,}\) both of weight \(56\text{,}\) are the lightest edges connecting vertices in the tree to the other vertices. It picks arbitrarily, so let's say it takes \(al\text{.}\) It next finds \(dl\text{,}\) then \(ek\text{,}\) and then \(em\text{.}\) The final edge added is \(ch\text{.}\) The full list of edges added (in order) is shown below. The total weight of this spanning tree is \(504\text{.}\) This is (not surprisingly) the same weight we obtained using Kruskal's algorithm. However, notice that the spanning tree found is different, as this one contains \(al\) instead of \(dj\text{.}\) This is not an issue, of course, since in both cases an arbitrary choice between two edges of equal weight was made.

a g 25
f g 26
f i 29
f j 30
b j 34
b c 39
c k 23
a l 56
d l 55
e k 59
e m 49
c h 79
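Summing the listed weights again confirms the total, and comparing with the list from Example 12.9 shows the two algorithms produce the same multiset of edge weights even though the trees differ as edge sets:

```python
# Edge weights selected by Prim's algorithm (Example 12.11) and by
# Kruskal's algorithm (Example 12.9), in the order each algorithm adds them.
prim_weights = [25, 26, 29, 30, 34, 39, 23, 56, 55, 59, 49, 79]
kruskal_weights = [23, 25, 26, 29, 30, 34, 39, 49, 55, 56, 59, 79]
print(sum(prim_weights))                           # 504
print(sorted(prim_weights) == sorted(kruskal_weights))  # True
```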

Subsection 12.1.4 Comments on Efficiency

An implementation of Kruskal's algorithm seems to require that the edges be sorted. If the graph has \(n\) vertices and \(m\) edges, this requires \(O(m\log m)\) operations just for the sort. But once the sort is done, the process takes only \(n-1\) steps—provided you keep track of the components as the spanning forest expands. Regardless, it is easy to see that at most \(O(n^2\log n)\) operations are required.

On the other hand, an implementation of Prim's algorithm requires the program to conveniently keep track of the edges incident with each vertex and always be able to identify the edge with least weight among subsets of these edges. In computer science, the data structure that enables this task to be carried out is called a heap.
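For instance, Python's standard `heapq` module provides exactly this capability: building the heap takes linear time, and each insertion or extraction of the lightest candidate edge takes logarithmic time. The edge names below are borrowed from the examples above for illustration.

```python
import heapq

# Candidate edges as (weight, u, v) tuples; tuples compare by weight first.
candidates = [(56, "a", "l"), (79, "c", "h"), (23, "c", "k")]
heapq.heapify(candidates)                   # build the heap in O(m) time
print(heapq.heappop(candidates))            # (23, 'c', 'k'): the lightest edge
heapq.heappush(candidates, (30, "f", "j"))  # insert in O(log m) time
print(heapq.heappop(candidates))            # (30, 'f', 'j')
```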