Induction gives a new way to prove results about natural numbers and discrete structures like games, puzzles, and graphs. All of the standard rules of proofwriting still apply, but inductive proofs come with a few new concerns and caveats of their own. This guide details some useful pointers and techniques for proofs by induction.
The Inductive Proof Template
There are many different ways to structure an inductive proof. When you're just getting started with inductive proofs, we recommend structuring your inductive proofs along the following lines. As we progress through the quarter and you start getting more comfortable with writing inductive proofs, we'll start to rely on this structure less and less.
-
Define some predicate $P(n)$ that you'll prove by induction. When writing an inductive proof, you'll be proving that some predicate is true for $0$ and that if that predicate holds for $k\text,$ it also holds for $k + 1\text.$ (Or, more generally, that the predicate holds for some number of base cases and some step size.) To make explicit what predicate that is, begin your proof by spelling out what predicate you'll be proving something about by induction. We've typically denoted this predicate $P(n)$. If you're having trouble with this, don't worry! There's a section later on in this handout describing what this $P(n)$ thing is all about.
-
State that the proof is by induction. After you've written out your predicate $P(n)$ in step (1), say that you're going to prove, by induction, that it's true for all the numbers you care about. If you're going to prove $P(n)$ is true for all natural numbers, say that. If you're going to prove that $P(n)$ is true for all even natural numbers greater than five, make that clear. This gives the reader a heads-up about how the induction will proceed.
-
State and prove your base case. All inductive proofs require some kind of base case, so it's probably best to start off by proving it. You've defined what $P(n)$ is in step (1), and now you need to prove $P(0)$ (or perhaps $P(1)$, or perhaps multiple base cases; see below). We recommend explicitly writing out what $P(0)$ actually states before you try to prove it. This will communicate to your reader what you're going to prove and helps make it clearer what you need to show. For example, you might write “We will prove $P(0)$, which states that …” or “For our base case, we'll prove $P(0)$, namely, that …”
-
State and prove the inductive step. The inductive step in a proof by induction is to show that for all choices of $k$, if $P(k)$ is true, then $P(k+1)$ is true. Typically, you'd prove this by assuming $P(k)$ and then proving $P(k+1)$. We recommend specifically writing out both what the assumption $P(k)$ means and what you're going to prove when you show $P(k+1)$. As with step (3), this makes it clearer to the reader what they're going to see next and helps you confirm that you know what you're assuming and what you're proving.
-
Conclude the proof. This usually isn't too much work – you can just say something like “completing the induction” and add the little boxy $\qed$ symbol. You're done!
A Guide to $P(n)$
An important step in starting an inductive proof is choosing some predicate $P(n)$ to prove via mathematical induction. This step can be one of the more confusing parts of a proof by induction, and in this section we'll explore exactly what $P(n)$ is, what it means, and how to choose it.
Formally speaking, induction works in the following way. Given some predicate $P(n)$, an inductive proof
- proves $P(0)$ is true as a base case;
- proves that if $P(k)$ is true, then $P(k+1)$ must be true as well; and
- concludes that $P(n)$ is true for any natural number $n$.
At a nuts-and-bolts level, induction is a tool for proving that some predicate $P(n)$ holds for any natural number $n$. When writing an inductive proof, you'll want to choose $P(n)$ so that you can prove your overall result by showing that $P(n)$ is true for every natural number $n$.
As an example, suppose that you want to prove this result from Problem Set Five:
For any $n \in \naturals$, $a_n$ is both square and triangular.
This is a universal statement – for any natural number $n$, the number $a_n$ is both square and triangular. To prove this using mathematical induction, we'd need to pick some predicate $P(n)$ so that if $P(n)$ is true for every natural number $n$, the original statement we want to prove is true. One possible choice of $P(n)$ could be this one:
P(n) is the statement “$a_n$ is both square and triangular.”
Let's look at this statement. By itself, it just says we're defining some new statement called $P(n)$, where $P(n)$ is shorthand for “$a_n$ is both square and triangular.” If you plug in any value of $n$, you get back some new statement that might be true and might be false. For example, $P(4)$ is the statement “$a_4$ is both square and triangular.” The statement $P(1)$ is that $a_1$ is both square and triangular, and the statement $P(10)$ is that $a_{10}$ is both square and triangular.
Let's suppose that we do a proof by induction and show that $P(n)$ is true for every possible choice of natural number $n$. What would that mean? Well, it would mean that
For any natural number $n$, the statement $P(n)$ is true.
Expanding out the definition of $P(n)$ tells us that
For any natural number $n$, the number $a_n$ is both square and triangular.
This is the statement that we wanted to prove. Therefore, if we can prove that $P(n)$ is true for all choices of natural number $n$, we'll have a proof of our overall statement.
As a quick note: the notation $P(n)$ might seem confusing – it looks like it's some function of the number $n$. However, $P(n)$ isn't a mathematical quantity. Instead, it's a predicate – a logical assertion saying something about the number $n$. Consequently, be careful not to treat $P(n)$ as a number. Check the follow-up “Induction Proofwriting Checklist” for more on this.
Directionality in Induction
In the inductive step of a proof, you need to prove this statement:
If $P(k)$ is true, then $P(k+1)$ is true.
Typically, in an inductive proof, you'd start off by assuming that $P(k)$ is true, then proceed to show that $P(k+1)$ must also be true. In practice, it can be easy to inadvertently get this backwards. Here's an incorrect proof that the sum of the first $n$ powers of two is $2^n - 1$. (Note that the result it proves is true, but the proof itself has a logical error that we'll discuss in a moment.)
(Incorrect!) Proof: Let $P(n)$ be the statement “the sum of the first $n$ powers of two is $2^n - 1$.” We will prove by induction that $P(n)$ holds for all $n \in \naturals$, from which the theorem follows.
For the base case, we prove $P(0)$, that the sum of the first zero powers of two is $2^0 - 1$. Since the sum of zero numbers is $0$ and $2^0 - 1 = 0$, this result is true.
For the inductive step, assume that $P(k)$ is true for some arbitrary $k \in \naturals$, meaning that
\[2^0 + 2^1 + \dots + 2^{k-1} = 2^k - 1 \text.\]We will prove that $P(k+1)$ is true, meaning that the sum of the first $k+1$ powers of two is $2^{k+1} - 1$. To see this, note that
\[\begin{array}{lcl} 2^0 + 2^1 + \dots + 2^{k-1} + 2^k & = & 2^{k+1} - 1 \\ 2^0 + 2^1 + \dots + 2^{k-1} + 2^k & = & 2(2^k) - 1 \\ 2^0 + 2^1 + \dots + 2^{k-1} + 2^k & = & 2^k + 2^k - 1 \\ 2^0 + 2^1 + \dots + 2^{k-1} & = & 2^k - 1 \end{array}\]We've arrived at our inductive hypothesis, which we know is true. Therefore, $P(k+1)$ is true, completing the induction. $\qed$
This proof is, unfortunately, incorrect, but it might not immediately be clear why. The setup of the proof is fine, as is the proof of the base case. However, things break down when we get to the inductive step. Take a look at this part:
We will prove that $P(k+1)$ is true, meaning that the sum of the first $k+1$ powers of two is $2^{k+1} - 1$. To see this, note that
\[\begin{array}{lcl} 2^0 + 2^1 + \dots + 2^{k-1} + 2^k & = & 2^{k+1} - 1 \end{array}\]Here, the proof says that it's going to prove that the sum of the first $k+1$ powers of two is $2^{k+1} - 1$, so the proof should try to establish why this result is true. However, the very first thing it does is write out an equality asserting that the sum of the first $k+1$ powers of two is $2^{k+1} - 1$. This should give you pause: if the proof says it's going to prove something, why is it stating it as if it were a mathematical fact?
If you keep reading through the proof, you'll see that the proof works by manipulating this equality and ultimately arriving at the fact that $2^0 + 2^1 + \dots + 2^{k-1} = 2^k - 1$, the inductive hypothesis. The proof then claims that this statement is by assumption true, so the proof is complete.
Take a step back from this proof and think about what it actually ends up doing. It begins by assuming that the sum of the first $k+1$ powers of two is $2^{k+1} - 1$, then proceeds from there to prove that under this assumption, the sum of the first $k$ powers of two is $2^k - 1$. In other words, the proof has gone backwards; it assumed $P(k+1)$ is true and used that to prove $P(k)$ is true. Even though the mathematical reasoning that's written is correct, the proof is establishing the wrong result and is therefore incorrect.
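As an aside, the identity itself really is true, and it's easy to check numerically. Here's a quick Python spot check (the helper name is ours) that confirms the formula for small values of $n$; of course, no amount of checking replaces a correctly-directed induction:

```python
# Spot check: the sum of the first n powers of two equals 2^n - 1.
# This only checks small cases -- it's a sanity check, not a proof.
def sum_of_first_powers(n):
    """Return 2^0 + 2^1 + ... + 2^(n-1); the empty sum (n = 0) is 0."""
    return sum(2**i for i in range(n))

for n in range(25):
    assert sum_of_first_powers(n) == 2**n - 1
```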
Choosing and Proving Base Cases
Inductive proofs need base cases, and choosing the right base case can be a bit tricky. For example, think back to our initial inductive proof: that the sum of the first $n$ powers of two is $2^n - 1$. In that proof, we chose as our base case $n=0$. This might seem weird, since that means that we're reasoning about a sum of zero numbers. Similarly, in the counterfeit coin example, we chose our base case to be the setup where we have no weighings on the scale and need to find the counterfeit coin. Wouldn't it have made more sense to talk about cases where we have one weighing, not zero?
There are four main reasons we chose to use these base cases. First, we chose our base cases to make sure that the theorems we proved were true for every natural number $n$, not just some of them. For example, in the theorem about sums of powers of two, our goal was to prove the theorem true for every natural number, including zero. Similarly, in the counterfeit coin problem, we wanted to show that for any number of weighings $n$, including zero, you can find the counterfeit among $3^n$ coins. Had we chosen our base case for these proofs to be one instead of zero, our proofs would have omitted a case and would not have satisfactorily proven the overall result.
Second, we chose these extremely simple base cases because they're as simplified as they can possibly be. At some level, each inductive proof works by somehow using the assumption that the theorem is true in a simpler case (say, that it's true for $k$) to build up a proof that the theorem is true for a more complex case (for example, that it's true for $k+1$). The reason the inductive proof ultimately works is that we know that it's true for the absolute simplest case, the base case. Philosophically, it's the job of the inductive step to try to simplify, and it's the job of the base case to show that when no more simplification is possible, the result must be true. Therefore, we strongly recommend making your base case something that is so simple that there's no possible way to simplify any further. After all, you can't make fewer than $0$ weighings, or sum up fewer than $0$ numbers.
Third, choosing the absolute simplest possible base case forces you to think about the extreme cases of the result. There are times when picking $0$ as a base case doesn't actually work because the result isn't true in that case. For example, it is not true that a square can be subdivided into $0$ squares. By trying to search for the simplest possible base case in an inductive proof, you can sometimes discover boundary cases that might not have been evident from an intuitive understanding of the theorem. Conversely, you'll sometimes discover cases where the result does hold, even though that isn't intuitively evident!
Finally, in some cases, choosing the absolutely simplest possible base case can actually make your proof shorter and simpler. For example, suppose that we changed the base case in the proof of the counterfeit coin problem to be the case where there are three coins and one weighing. In that case, the base case would have to explain how to split the coins into three piles, how to weigh those piles against one another, and how to determine which coin from the group was counterfeit. This would lengthen the proof without adding much, since that exact same line of reasoning appears in the proof of the inductive step. In fact, that leads to a good general piece of advice: if you find yourself using similar reasoning in your base case and your inductive step, chances are you can make your base case even simpler!
Induction or Complete Induction?
One of the trickier skills when first learning induction is learning when to use standard induction and when to use complete induction. This typically depends on the route that your proof takes.
At some point in your proof, you'll need to use the assumption that the theorem is true for some smaller number to establish that the theorem is true for a larger number. The difference between induction and complete induction is how much smaller “some smaller number” is. Suppose that as part of your inductive step, you're trying to prove that the result is true for $k+1$. If you can always prove this purely by assuming that the result is true for $k$, then you can just use normal induction. On the other hand, if you need to assume that the result is true for some number between $0$ and $k$, inclusive, but you're not sure in advance which of those numbers it's going to be, you should use complete induction to make sure you have those cases covered.
It might be easier to see when this second case might arise by looking at our lecture example of complete induction, where we proved that a $1 \times n$ chocolate bar can be eaten in $2^{n-1}$ ways for any $n \ge 1$. In the inductive step, we began with a chocolate bar of size $1 \times (k+1)$, then considered all possible first bites we could make. The size of the chocolate bar that remains after the first bite could be any number from $0$ to $k$, inclusive, depending on whether we break off $1, 2, 3, \dots, \text{ or } k+1$ pieces. By using complete induction, we can ensure that no matter how many squares are left in that chocolate bar, we know how many ways we can eat the rest of the smaller bar. If we had just used regular induction, we'd only know what happens in the case where we had a chocolate bar of size $1 \times k$ remaining. Do you see why?
When deciding between standard induction and complete induction in your proof, try using this general heuristic:
- If you can determine in advance the size of the smaller problem, use standard induction.
- If you can't determine the size in advance, use complete induction.
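To see the heuristic in action on the chocolate bar example, here's a short Python sketch (the function name is ours) that counts the ways to eat a $1 \times n$ bar the same way complete induction reasons about it: the count for each size is built from the counts for all smaller sizes.

```python
# Count the ways to eat a 1 x n chocolate bar, where each bite breaks off
# some positive number of squares. After the first bite, the remaining bar
# can be ANY size from 0 to n-1, so the count for size n depends on the
# counts for every smaller size -- just as complete induction assumes.
def ways_to_eat(n):
    counts = [1]  # one way to finish an empty bar: stop eating
    for size in range(1, n + 1):
        counts.append(sum(counts))  # sum over remaining sizes 0 .. size-1
    return counts[n]

# The lecture's claim: 2^(n-1) ways for every n >= 1.
for n in range(1, 16):
    assert ways_to_eat(n) == 2**(n - 1)
```

Notice that the line `sum(counts)` uses every smaller count, not just the one for `size - 1`; that is precisely why regular induction isn't enough here.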
An interesting detail here is that complete induction in a sense “supersedes” regular induction. In both regular and complete induction, you assume that $P(k)$ is true, but only in complete induction do you get the rest of the assumptions along with it. This gives rise to a reasonable question: why would you ever use standard induction?
Technically speaking, you never need to use standard induction. You can always use complete induction whenever you'd use standard induction. From a stylistic perspective, though, it's not considered good form to use complete induction unless you actually need those extra assumptions. The type of induction you choose should mirror the structure of your argument. If you always prove that the result is true for $k+1$ based purely on the assumption that the result is true for $k$, there's no reason to clutter the proof by assuming that the result is also true for $0, 1, 2, 3,$ etc. Think of it this way: there are all sorts of true statements that you might want to mention in a proof, but you tend to only include the ones that are relevant. If $P(0), P(1), \dots, \text{ and } P(k)$ are all relevant assumptions, include them. If you just need $P(k)$, leave the others out.
Up, Down, or Neither?
As a closing remark to this discussion, I thought I'd point out one particular issue with induction that frequently trips people up.
As an example, let's look at the presents problem from Problem Set Five. In this problem, you're asked to prove, by induction, that any group of $n$ emcees, each of whom has gifts for at most three others, can be housed in seven hotels. There are many ways that you can set this problem up inductively. One technique would be to choose $P(n)$ as the statement “any group of $n$ emcees, each of whom has gifts for at most three others, can be housed in seven hotels.” If you're going about proving this result by induction, at some point you'll need to prove the inductive step, which says the following:
If $P(k)$ is true, then $P(k+1)$ is true.
Plugging in $P(n)$ tells us that the statement we wish to prove is the following:
If for any group of $k$ emcees, each of whom has gifts for at most three others, there's a way to house them in seven hotels, then for any group of $k+1$ emcees, each of whom has gifts for at most three others, there's a way to house them in seven hotels.
One of the most common mistakes we see people make is to try to prove this statement using reasoning along the following lines:
⚠ Incorrect ⚠ Line of Reasoning: For the inductive step, assume for some arbitrary natural number $k \ge 1$ that any group of $k$ emcees, each of whom has gifts for at most three others, can be housed in seven hotels. We will prove that any group of $k+1$ emcees, each of whom has gifts for at most three others, can be housed in seven hotels.
To do so, begin by taking any group $M$ of $k$ emcees, each of whom has gifts for at most three others. Add in another emcee $e$ to form a group $M'$ of $k+1$ emcees, each of whom has gifts for at most three others.
The above setup may look correct, but it's actually on the wrong track.
In a proof by induction, the idea is to use the fact that a result is true for some number $k$ to prove that it's true for the number $k+1$. Therefore, it can be tempting to prove a result like the one above by saying “let's start with a group of $k$ emcees, then add in another emcee to get $k+1$ emcees.” After all, we know something about a group of $k$ emcees and want to show something about a group of $k+1$ emcees, so it seems only reasonable that we'd start with $k$ emcees and add one more.
However, think back to the logical structure of what we're trying to prove. We're trying to show that
If for any group of $k$ emcees, each of whom has gifts for at most three others, those emcees can be housed in seven hotels, then for any group of $k+1$ emcees, each of whom has gifts for at most three others, those emcees can be housed in seven hotels.
In other words, we are assuming a universally-quantified statement and proving a universally-quantified statement. So now let's think back to the Guide to Proofs on Discrete Structures. How do you assume a universally-quantified statement, and how do you prove a universally-quantified statement? As you've seen, to assume a universally-quantified statement, you initially do nothing. You just sit back and wait for an opportunity to present itself. Here, that means sitting tight until we find a group of $k$ emcees. To prove a universally-quantified statement, we make an arbitrary choice of object, then try to prove it has some property. In this case, that means picking an arbitrary group of $k+1$ emcees, then trying to find a way to house those emcees.
This means that the actual strategy of this proof will look quite different from the one above. We'll begin with a group of $k+1$ emcees, then try to find a way to house them in seven hotels. And since we know that any group of $k$ emcees can always be housed, our approach will probably be something to the effect of finding an emcee to remove from our bigger group, rather than finding an emcee to add to a smaller group.
This can seem backwards the first time you see it – why are we starting off with a group of $k+1$ emcees when we only know something about a group of $k$ emcees? However, if you carefully unpack the statement we're trying to prove, you'll see that all we're doing here is proceeding the way that we normally would if we were trying to prove a universal statement.
To summarize – before you write out a proof by induction, write out explicitly what the inductive step will be. It's going to be a big implication, meaning that you'll assume the antecedent and prove the consequent. Make sure that you take the same steps when assuming the antecedent and trying to prove the consequent that you would when proving any general implication. That might mean that you need to start with an object of size $k+1$ and then massage it down to an object of size $k$, or it might mean that you need to start with an object of size $k$ and grow it up into an object of size $k+1$. Or it might mean neither of those! You'll know which case it is based on the structure of the implication you're trying to prove. When in doubt, write it out!
Exercises
Here are some (optional, not collected or graded) exercises you can work through to practice the concepts from this guide.
-
Suppose you want to prove the following statement by induction:
For all natural numbers $n\text,$ the sum of the first $n$ natural numbers is $\frac{n(n-1)}{2}\text.$
Give a choice of $P(n)$ that you could use in your induction proof.
We need a predicate $P(n)$ where, if $P(n)$ is true for all natural numbers $n\text,$ then the statement would be proved. By pulling off the universal quantifier from the statement and leaving behind just what's said to be true about $n\text,$ we get the following:
$P(n)$ is the statement "the sum of the first $n$ natural numbers is $\frac{n(n-1)}{2}\text.$"
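As a quick sanity check (not a substitute for the induction), we can verify the formula numerically for small $n$; note that with natural numbers starting at $0$, the first $n$ of them are $0, 1, \dots, n-1$. A minimal Python sketch:

```python
# Spot check: 0 + 1 + ... + (n-1) == n(n-1)/2 for small n.
def sum_of_first_naturals(n):
    """Sum of the first n natural numbers, i.e. 0 through n - 1."""
    return sum(range(n))

for n in range(100):
    assert sum_of_first_naturals(n) == n * (n - 1) // 2
```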
-
Suppose you want to prove the following statement by induction:
For all odd natural numbers $n\text,$ there is a tournament with $n$ players where every player is a champion.
Give a choice of $P(n)$ that you could use in your induction proof.
Here's one possible choice:
$P(n)$ is the statement "there is a tournament with $n$ players where every player is a champion."
Notice that the bit about odd numbers has disappeared here. That's okay! We don't need to put that anywhere in $P(n)$ itself. Instead, the induction proof could be structured so that it takes steps of size two with a base case of 1.
Another option would be this one:
$P(n)$ is the statement "there is a tournament with $2n+1$ players where every player is a champion."
Here, we explicitly convert each natural number $n$ to the odd natural number $2n+1\text.$ If we prove this using standard induction (base case of 0, steps of size 1), then we would prove something about every natural number.
-
Prove that for every natural number $n \ge 1,$ we have $\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}\text.$
Our first step will be to choose our predicate $P(n)\text.$ The statement of the theorem is universally-quantified – for any natural number $n \ge 1$, some equality holds. To prove this using mathematical induction, we'd need to pick some predicate $P(n)$ so that if $P(n)$ is true for every natural number $n$, the original statement we want to prove is true. One possible choice of $P(n)$ could be this one:
$P(n)$ is the statement “$\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}\text.$”
Let's look at this statement. By itself, it just says we're defining some new statement called $P(n)$, where $P(n)$ is shorthand for “$\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}\text.$” If you plug in any value of $n$, you get back some new statement that might be true and might be false. For example, $P(4)$ says that $\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \frac{1}{3 \cdot 4} + \frac{1}{4 \cdot 5} = \frac{4}{5}\text,$ which (as we saw above) is true. The statement $P(2)$ says that $\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} = \frac{2}{3}$ (you can check that this is true) and the statement $P(10)$ is that $\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{10 \cdot 11} = \frac{10}{11}\text.$ (Feel free to check that last one if you'd like!)
Now that we have our predicate $P(n)$ picked out, let's see if we can prove that $P(n)$ is indeed true for all natural numbers $n \ge 1\text.$ To do so, we need to figure out our base case and our inductive step. We want to prove that the equality holds for all $n \ge 1\text,$ so it makes sense to pick $1$ as our base case. That means that we'll need to prove $P(1)\text.$
So what is $P(1)\text?$ In general, $P(n)$ says that $\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}\text,$ where there are $n$ terms in the sum. Thus if we're talking about $P(1)\text,$ we'd have just one term in our sum, which would be $\frac{1}{1 \cdot 2}\text.$ The right-hand side of our equation is $\frac{1}{2}\text,$ and that's pretty clearly equal to $\frac{1}{1 \cdot 2}\text.$ As is often the case in base cases, there isn't that much to do here.
Next, we need to think about our inductive step. The general shape of our inductive step is to assume $P(k)$ is true for some natural number $k$ (and, specifically, a natural number $k$ that's greater than or equal to one, since one is our base case), then prove $P(k+1)$ is true. This means that we'll assume $P(k)$ is true, meaning we know the following:
\[\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{k(k+1)} = \frac{k}{k+1}\text.\]We need to prove $P(k+1)\text,$ which is the following:
\[\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{k(k+1)} + \frac{1}{(k+1)(k + 2)} = \frac{k+1}{k+2}\text.\]Now, how might we go about doing this? If we just had to prove $P(k+1)$ in isolation, we'd be left with the unenviable task of trying to collapse all those terms together. But we're not starting in isolation: we know that $P(k)$ is true, so we have a nice formula for the sum of the first $k$ terms on the left-hand side. So let's start with the left-hand side of the equation and simplify it a bit:
\[\begin{aligned} \frac{1}{1 \cdot 2} + \dots + \frac{1}{k(k+1)} + \frac{1}{(k+1)(k + 2)} & = \left( \frac{1}{1 \cdot 2} + \dots + \frac{1}{k(k+1)} \right) + \frac{1}{(k+1)(k + 2)} \\ &= \frac{k}{k+1} + \frac{1}{(k+1)(k+2)}\text. \end{aligned}\]Wow, that's a lot nicer! We only have two terms in our sum, and that makes things easier to work with. Now we just do some standard algebra:
\[\begin{aligned} \frac{k}{k+1} + \frac{1}{(k+1)(k + 2)} &= \frac{k(k+2)}{(k+1)(k+2)} + \frac{1}{(k+1)(k+2)} \\ &= \frac{k^2 + 2k}{(k+1)(k+2)} + \frac{1}{(k+1)(k+2)} \\ &= \frac{k^2 + 2k + 1}{(k+1)(k+2)} \\ &= \frac{(k+1)^2}{(k+1)(k+2)} \\ &= \frac{k+1}{k+2}\text. \end{aligned}\]And hey! That's what we were supposed to get back. Isn't it cool how the math works out here?
We've just sketched out a line of reasoning that works for our base case and for our inductive step. All that's left to do now is to put the two together:
Proof: Let $P(n)$ be the statement "$\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)} = \frac{n}{n+1}\text.$" We will prove by induction that $P(n)$ holds for all natural numbers $n \ge 1\text,$ from which the theorem follows.
As our base case, we prove $P(1)\text,$ that $\frac{1}{1 \cdot 2} = \frac{1}{2}\text.$ The left and right sides of this equation are each $\frac{1}{2}\text,$ and so they're equal.
For our inductive step, pick some $k \in \naturals$ where $k \ge 1$ and assume $P(k)$ is true, meaning that
\[\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{k(k+1)} = \frac{k}{k+1}\text. \qquad (1)\]We need to prove $P(k+1)\text,$ which means we need to prove
\[\frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{k(k+1)} + \frac{1}{(k+1)(k+2)} = \frac{k+1}{k+2}\text.\]To see this, note that
\[\begin{aligned} \frac{1}{1 \cdot 2} + \dots + \frac{1}{k(k+1)} + \frac{1}{(k+1)(k + 2)} & = \left( \frac{1}{1 \cdot 2} + \dots + \frac{1}{k(k+1)} \right) + \frac{1}{(k+1)(k + 2)} \\ &= \frac{k}{k+1} + \frac{1}{(k+1)(k+2)} \text{ (By (1))} \\ &= \frac{k(k+2)}{(k+1)(k+2)} + \frac{1}{(k+1)(k+2)} \\ &= \frac{k^2 + 2k}{(k+1)(k+2)} + \frac{1}{(k+1)(k+2)} \\ &= \frac{k^2 + 2k + 1}{(k+1)(k+2)} \\ &= \frac{(k+1)^2}{(k+1)(k+2)} \\ &= \frac{k+1}{k+2}\text. \end{aligned}\]Thus $P(k+1)$ holds, completing the induction. $\qed$
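With the proof done, it's still reassuring to confirm the identity numerically. Here's a sketch using Python's exact rational arithmetic, so no floating-point error can sneak in (the function name is ours):

```python
from fractions import Fraction

# Spot check: 1/(1*2) + 1/(2*3) + ... + 1/(n(n+1)) == n/(n+1).
# Exact arithmetic via Fraction, checked for small n only -- the
# induction above is what covers every n >= 1.
def telescoping_sum(n):
    return sum(Fraction(1, i * (i + 1)) for i in range(1, n + 1))

for n in range(1, 40):
    assert telescoping_sum(n) == Fraction(n, n + 1)
```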
-
Consider the following recurrence relation:
\[a_0 = 1 \quad a_1 = 3 \quad a_2 = 7 \quad a_{n+3} = a_n + a_{n+1}.\]So, for example, $a_3 = 4\text,$ $a_4 = 10\text,$ and $a_5 = 11\text.$
Prove that $a_0 + a_1 + \dots + a_n = a_{n+5} - 10$ for all $n \in \mathbb{N}\text.$
Note that the sum $a_0 + a_1 + \dots + a_n$ uses the terms $a_0$ up through and including $a_n$. For example, when we take $n = 3$, we're looking at the sum $a_0 + a_1 + a_2 + a_3$.
I personally find this proof beautiful - it involves expanding and contracting the recurrence in different ways.
Proof: Let $P(n)$ be the statement "$a_0 + \dots + a_n = a_{n+5} - 10$." We will prove by induction on $n$ that $P(n)$ holds for all $n \in \naturals$, from which the theorem follows.
As our base case, we prove $P(0)$, that $a_0 = a_{5} - 10$. To see this, note that $a_0 = 1$ and, as we saw above, $a_5 - 10 = 11 - 10 = 1$. This means that $a_0 = a_5 - 10$, as required.
For our inductive step, pick some $k \in \naturals$ and assume that $P(k)$ is true, meaning that
\[a_0 + \dots + a_k = a_{k+5} - 10 \text.\]We need to show that $P(k+1)$ is true, meaning that
\[a_0 + \dots + a_k + a_{k+1} = a_{k+6} - 10 \text.\]To see this, note that
\[\begin{aligned} a_0 + \dots + a_k + a_{k+1} &= (a_0 + \dots + a_k) + a_{k+1} \\ &= (a_{k+5} - 10) + a_{k+1} && \text{(by our IH)} \\ &= a_{k+1} + a_{k+5} - 10 \\ &= a_{k+1} + a_{k+2} + a_{k+3} - 10 \\ &= a_{k+3} + a_{k+4} - 10 \\ &= a_{k+6} - 10 \text. \end{aligned}\]Thus $a_0 + \dots + a_{k+1} = a_{k+6} - 10$, so $P(k+1)$ holds, completing the induction. $\qed$
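Both the recurrence and the identity are easy to spot-check with a few lines of Python (a sketch; the helper name is ours):

```python
# Generate terms of a_0 = 1, a_1 = 3, a_2 = 7, a_{n+3} = a_n + a_{n+1},
# then spot-check that a_0 + ... + a_n == a_{n+5} - 10 for small n.
def first_terms(count):
    a = [1, 3, 7]
    while len(a) < count:
        a.append(a[-3] + a[-2])  # a_{n+3} = a_n + a_{n+1}
    return a

a = first_terms(40)
assert (a[3], a[4], a[5]) == (4, 10, 11)  # the example values above

for n in range(30):
    assert sum(a[:n + 1]) == a[n + 5] - 10
```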
-
Contract rummy is a card game for any number of players in which players are dealt a hand of cards and, through several iterations of drawing and discarding cards, need to accumulate sets and sequences. A set (at least, in this context) is a collection of three cards of the same value, and a sequence is a collection of four cards of the same suit that are in ascending order. The game proceeds in multiple rounds in which players need to accumulate a different number of sets and sequences. The rounds are as follows:
- Two sets (six cards)
- One set, one sequence (seven cards)
- Two sequences (eight cards)
- Three sets (nine cards)
- Two sets and a sequence (ten cards)
- One set and two sequences (eleven cards)
- Three sequences (twelve cards)
Notice that from one round to the next, the number of cards required increases by one. It's interesting that this is always possible, since each total must be built entirely out of groups of three cards and groups of four cards.
Prove, by induction, that any natural number greater than or equal to six can be written as $3m + 4n$ for some natural numbers $m$ and $n$.
We know of two ways to prove this. The first of these uses induction with steps of size three, which we find to be the easiest way to do this.
Proof 1: Let $P(n)$ be the statement "there exist natural numbers $x$ and $y$ where $n = 3x + 4y\text{."}$ We will prove by induction that $P(n)$ holds for all natural numbers $n \ge 6\text,$ from which the theorem follows.
As our base cases, we prove $P(6)\text,$ $P(7)\text,$ and $P(8)\text.$ To see this, note that
\[6 = 3 \cdot \mathbf{2} + 4 \cdot \mathbf{0} \qquad 7 = 3 \cdot \mathbf{1} + 4 \cdot \mathbf{1} \qquad 8 = 3 \cdot \mathbf{0} + 4 \cdot \mathbf{2}\]For our inductive step, pick some natural number $k \ge 6$ and assume that $P(k)$ is true, meaning that there are natural numbers $x$ and $y$ for which $k = 3x + 4y\text.$ We will prove $P(k+3)\text,$ that there are natural numbers $x'$ and $y'$ where $3x' + 4y' = k+3\text.$ To do so, pick $x' = x + 1$ and $y' = y\text.$ Then we see that
\[\begin{aligned} 3x' + 4y' &= 3(x + 1) + 4y \\ &= 3x + 3 + 4y \\ &= (3x + 4y) + 3 \\ &= k + 3\text.\end{aligned}\]Thus $P(k+3)$ holds, completing the induction. $\qed$
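The construction in Proof 1 doubles as an algorithm: hard-code representations for 6, 7, and 8, then add one more group of three for each step of size three. Here's a minimal sketch in Python (the function name is ours, not part of the problem):

```python
def three_four_rep(n):
    """Return (x, y) with n == 3*x + 4*y, mirroring Proof 1:
    base cases for 6, 7, and 8, then step down by three and
    account for the gap with one extra group of three."""
    base = {6: (2, 0), 7: (1, 1), 8: (0, 2)}
    if n in base:
        return base[n]
    x, y = three_four_rep(n - 3)  # inductive hypothesis on n - 3
    return (x + 1, y)             # one more set of three cards
```

For example, `three_four_rep(11)` returns `(1, 2)`, matching the "one set and two sequences" round.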
However, it's also possible to do this with steps of size one, though it's a bit harder. Here's what that looks like:
Proof 2: Let $P(n)$ be the statement "there exist natural numbers $x$ and $y$ where $n = 3x + 4y\text{."}$ We will prove by induction that $P(n)$ holds for all natural numbers $n \ge 6\text,$ from which the theorem follows.
As our base case, we prove $P(6)\text,$ that there exist natural numbers $x$ and $y$ where $6 = 3x + 4y\text.$ Picking $x = 2$ and $y = 0$ gets the job done, so $P(6)$ is true.
For our inductive step, pick some natural number $k \ge 6$ and assume that $P(k)$ is true, meaning that there are natural numbers $x$ and $y$ where $3x + 4y = k\text.$ We need to prove $P(k+1)\text,$ that there are natural numbers $x'$ and $y'$ where $3x' + 4y' = k+1\text.$ To do so, we consider two cases:
-
Case 1: $x \gt 0\text.$ This means that $x - 1 \ge 0\text,$ so $x - 1 \in \naturals\text.$ Then pick $x' = x - 1$ and $y' = y + 1$ and we see that
\[\begin{aligned} 3x' + 4y' &= 3(x - 1) + 4(y + 1) \\ &= 3x - 3 + 4y + 4 \\ &= (3x + 4y) + (4 - 3) \\ &= k + 1\text, \end{aligned}\]as required.
-
Case 2: $x = 0\text.$ Then since $3x + 4y = k$ and $k \ge 6\text,$ we know that $y \ge 2\text,$ so $y - 2 \ge 0$ and so $y - 2 \in \naturals\text.$ Let $x' = x + 3$ and $y' = y - 2\text.$ We then see that
\[\begin{aligned} 3x' + 4y' &= 3(x + 3) + 4(y - 2) \\ &= 3x + 9 + 4y - 8 \\ &= (3x + 4y) + 9 - 8 \\ &= k + 1\text, \end{aligned}\]as required.
In either case, we see that there are $x', y' \in \naturals$ where $3x' + 4y' = k+1\text,$ so $P(k+1)$ holds, completing the induction. $\qed$
-
For any natural number $n\text,$ we define the set $V_n$ as follows:
\[V_n = \set{V_k \suchthat k \in \naturals \land k \lt n }\text.\]Next, we define the magnitude of a set $S\text,$ denoted $M(S)\text,$ to be one plus the sum of the magnitudes of all its elements. (The magnitude of a non-set is defined to be zero.)
Prove by complete induction that $M(V_n) = 2^n$ for all $n \in \naturals\text.$
Before diving into the proof, we should take a few minutes to play around with the relevant definitions, as there's a lot to unpack here.
Let's begin by exploring what these sets $V_n$ are. It never hurts to start with some simple cases, so let's look at $V_0\text,$ which is defined to be the set $V_0 = \set{ V_k \suchthat k \in \naturals \land k \lt 0 }\text.$ There are no natural numbers $k$ where $k \lt 0\text,$ so this is the empty set: $V_0 = \emptyset\text.$
Next, let's look at $V_1 = \set{ V_k \suchthat k \in \naturals \land k \lt 1 }\text.$ This is the set of all $V_k$ where $k$ is a natural number less than one. The only such natural number is 0, so this is the set $V_1 = \set{V_0} = \set{\emptyset}\text.$
What's $V_2\text?$ Formally, it's $\set{ V_k \suchthat k \in \naturals \land k \lt 2 }\text.$ This means that $V_2 = \set{ V_0, V_1 }\text,$ which we can simplify as $V_2 = \set{\emptyset, \set{\emptyset}}\text.$
What about $V_3\text?$ That would be $\set{ V_k \suchthat k \in \naturals \land k \lt 3}\text,$ which is the set $\set{ V_0, V_1, V_2 }\text.$ We can expand that out to be $\set{\emptyset, \set{\emptyset}, \set{\emptyset, \set{\emptyset}}}$ if we'd like, but this is starting to get hard to write.
More generally, it seems like $V_n = \set{V_0, V_1, V_2, \dots, V_{n-1}}\text.$ Expanding this out fully would be doable but tedious. Once $n$ gets to any reasonable size (say, $n = 5\text{),}$ you'd probably need a computer program to output what $V_n$ is. In fact, I happen to have written such a program; here's $V_5\text:$
\[\set{\emptyset,\set{\emptyset},\set{\emptyset,\set{\emptyset}},\set{\emptyset,\set{\emptyset},\set{\emptyset,\set{\emptyset}}},\set{\emptyset,\set{\emptyset},\set{\emptyset,\set{\emptyset}},\set{\emptyset,\set{\emptyset},\set{\emptyset,\set{\emptyset}}}}}\]It's a lot easier if we instead think of $V_5$ as $\set{V_0, V_1, V_2, V_3, V_4}\text,$ since that's a lot easier to work with.
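For the curious, here's a sketch of what such a program might look like (this is our reconstruction, not necessarily the program used above), representing each $V_n$ as a Python `frozenset` so that sets can contain sets:

```python
def V(n):
    """Build V_n = { V_k : k in N, k < n } as a frozenset."""
    return frozenset(V(k) for k in range(n))

def show(s):
    """Render a nested frozenset in the brace notation used above,
    listing smaller elements first for readability."""
    inner = sorted((show(x) for x in s), key=lambda t: (len(t), t))
    return "{" + ",".join(inner) + "}"
```

For instance, `show(V(2))` produces the string `{{},{{}}}`, matching the expansion of $V_2$ above.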
Next, on to magnitudes. We're given that the magnitude of a set is one plus the sum of the magnitudes of its elements. That definition is recursive - and potentially infinitely so, since nothing here looks like a base case! But that's okay; let's play around and see what we find.
We're interested in showing $M(V_n) = 2^n\text,$ so let's look at some simple values of $n\text.$ For example, what is $M(V_0)\text?$ We know that $V_0 = \emptyset\text,$ so we need to figure out what $M(\emptyset)$ is. Well, $M(\emptyset)$ is defined as "one plus the sum of the magnitudes of all the elements of $\emptyset\text,$" and $\emptyset$ has no elements, so the sum of its (zero) elements is the empty sum, 0. This means that $M(\emptyset)$ is just 1. And hey, what do you know, $1$ happens to be $2^0\text.$ This is promising.
What about $M(V_1)\text?$ We said that $V_1 = \set{\emptyset}\text,$ so $M(V_1) = M(\set{\emptyset})\text.$ This comes out to "one plus the sum of the magnitudes of the elements of $\set{\emptyset}\text{."}$ The only element of $\set{\emptyset}$ is $\emptyset\text,$ which has magnitude one. Thus $M(\set{\emptyset}) = 1 + 1 = 2 = 2^1\text.$
How about $M(V_2)\text?$ We know from above that $V_2 = \set{\emptyset, \set{\emptyset}}\text,$ but it might be easiest to think about this by writing this out as $V_2 = \set{V_0, V_1}\text.$ If we do that, it's a bit easier to see that $M(V_2) = M(V_0) + M(V_1) + 1\text.$ And since we already know that $M(V_0) = 1$ and $M(V_1) = 2\text,$ this means that $M(V_2) = 1 + 2 + 1 = 4 = 2^2\text,$ so the pattern seems to continue here.
More generally, what about $M(V_n)$ for some arbitrary $n \in \naturals\text?$ As we saw above, we know that $V_n = \set{V_0, V_1, \dots, V_{n-1}}\text,$ so $M(V_n) = M(V_0) + M(V_1) + \dots + M(V_{n-1}) + 1\text.$ In other words, to know what $M(V_n)$ is, we need to know something about $M(V_0)\text,$ $M(V_1)\text,$ $\dots\text,$ and $M(V_{n-1})\text.$ That should immediately make the word "induction" flash past our eyes. But more strongly, it should make the phrase "complete induction" jump to mind. That's because, if we find ourselves in a situation where we need to know something about all natural numbers smaller than us, we can use complete induction to assume we know what happens in those smaller cases.
If we use complete induction here and pick up where we left off with this line of reasoning, we get the following proof:
Proof: Let $P(n)$ be the statement "$M(V_n) = 2^n\text.$" We will prove by induction that $P(n)$ holds for all $n \in \naturals\text,$ from which the theorem follows.
As our base case, we prove $P(0)\text,$ that $M(V_0) = 2^0\text.$ We note that $V_0 = \emptyset$ (there are no choices of $k \in \naturals$ where $k \lt 0\text{),}$ so $M(V_0) = M(\emptyset)\text.$ Furthermore, since $\emptyset$ has no elements, we see that $M(V_0) = 1 = 2^0\text,$ as required.
For our inductive step, pick some $k \in \naturals$ and assume that $P(0)\text,$ $P(1)\text,$ $\dots\text,$ and $P(k)$ are all true, meaning that $M(V_r) = 2^r$ for all $0 \le r \le k\text.$ We need to prove $P(k+1)\text,$ meaning that $M(V_{k+1}) = 2^{k+1}\text.$
First, notice that $V_{k+1}$ has $k+1$ elements, namely, $V_0\text,$ $V_1\text,$ $\dots,$ and $V_k\text.$ This means that
\[\begin{aligned} M(V_{k+1}) & = M(V_0) + \dots + M(V_k) + 1 \\ & = 2^0 + 2^1 + \dots + 2^k + 1 && \text{(By our IH)} \\ & = (2^{k+1} - 1) + 1 && \text{(As proved in lecture)} \\ & = 2^{k+1}\text. \end{aligned}\]Thus $P(k+1)$ holds, completing the induction. $\qed$
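The identity $M(V_n) = 2^n$ is also easy to sanity-check by machine. Here's a short, self-contained sketch (function names are ours) that builds $V_n$ as nested `frozenset`s and computes magnitudes straight from the definition:

```python
def V(n):
    """V_n = { V_k : k in N, k < n }, represented as a frozenset."""
    return frozenset(V(k) for k in range(n))

def M(s):
    """Magnitude of a set: one plus the sum of the magnitudes
    of its elements (the empty sum is 0, so M(emptyset) = 1)."""
    return 1 + sum(M(x) for x in s)
```

Running `M(V(n))` for small $n$ produces $1, 2, 4, 8, \dots$ as the proof predicts. (Note that the cost of this computation grows exponentially in $n\text,$ since the nested structure of $V_n$ contains $2^n$ set occurrences.)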
-
Let $G = (V, E)$ be an (undirected) graph. A c-vertex-coloring of $G$ is a way of painting the nodes of $G$ using at most $c$ colors so that no two adjacent nodes have the same color.
Let $c \in \naturals$ be an arbitrary positive natural number. Prove the following theorem by induction: for every graph $G\text,$ if every node of $G$ has degree $c$ or less, then there exists a $(c+1)$-vertex-coloring of $G\text.$ As a hint, use induction on the number of nodes in $G\text.$
This is one where, once you've set up the problem correctly, the solution naturally presents itself.
Let's begin by thinking about what our predicate $P(n)$ is. Our hint was to use induction on the number of nodes in $G\text,$ so we might come up with something like this:
$P(n)$ is the statement "for every graph $G$ with $n$ nodes, if every node of $G$ has degree $c$ or less, then there exists a $(c+1)$-vertex-coloring of $G\text.$"
Now, let's pause for a moment. Is this predicate universally-quantified, existentially-quantified, or neither? It happens to contain both quantifiers (for every graph $G$ with $n$ nodes … there exists a $(c+1)$-vertex-coloring of $G\text.$). However, when considering the overall structure of $P(n)$ we see that the top-level quantifier is the universal quantifier, and so that's the one we care about.
Earlier, we discussed how, if you have a universally-quantified predicate $P(n)$ in an induction proof, we should "build down." This means that the general shape of our inductive step will look like this:
- Assume that every $k$-node graph where each node has degree $c$ or less has a $(c+1)$-vertex-coloring.
- Pick an arbitrary $(k+1)$-node graph $G$ where each node has degree $c$ or less. Somehow show it has a $(c+1)$-vertex coloring. Attempt to do this by finding some way to remove a node from $G$ so that we can apply our inductive hypothesis to it.
So let's suppose we pick some node $v$ from $G$ and remove it to form a graph $G'\text.$ This gives us back a graph where each node has degree $c$ or less (removing nodes can't increase degrees), so $G'$ has a $(c+1)$-vertex-coloring. That means we can paint the nodes of $G'\text,$ which are the remaining nodes of $G\text,$ using at most $c+1$ colors where no two adjacent nodes share a color.
We have a coloring for every node of $G$ other than $v\text,$ so if we can find a color to give to $v$ that isn't used by any of its neighbors, we're done. And fortunately, we can! We know $v$ is adjacent to at most $c$ other nodes, so with $c+1$ colors available we can always find a color for $v$ that differs from those of its neighbors.
Putting this all together, we can come up with the following proof.
Proof: Let $P(n)$ be the statement "for every graph $G = (V, E)$ with $n$ nodes, if every node of $G$ has degree $c$ or less, then there exists a $(c+1)$-vertex-coloring of $G\text.$" We will prove by induction that $P(n)$ holds for all $n \in \naturals\text,$ from which the theorem follows.
As our base case, we prove $P(0)\text,$ that every graph with zero nodes where each node has degree $c$ or less has a $(c+1)$-vertex-coloring. Choose any graph $G = (V, E)$ with no nodes. The empty coloring uses at most $c+1$ colors, and since $G$ has no pairs of adjacent nodes, it's vacuously true that every pair of adjacent nodes is painted a different color. This coloring is thus a $(c+1)$-vertex-coloring, as required.
For our inductive step, pick some $k \in \naturals$ and assume $P(k)$ is true, that any graph $G$ with $k$ nodes and where each node has degree $c$ or less has a $(c+1)$-vertex-coloring. We will prove $P(k+1)\text,$ that any graph $G$ with $k+1$ nodes and where each node has degree $c$ or less has a $(c+1)$-vertex-coloring.
Choose any graph $G = (V, E)$ with $k+1$ nodes where each node of $G$ has degree $c$ or less. Choose any node $v \in V$ and form a new graph $G' = (V', E')$ by deleting $v$ from $G\text.$
We claim that every node in $G'$ has degree $c$ or less. To see this, choose any $u \in V'\text.$ Then $u$ is adjacent to the same set of nodes in $G'$ as in $G\text,$ with the possible exception of $v\text.$ Since $u$ was adjacent to $c$ or fewer nodes in $G\text,$ this means that $u$ is adjacent to $c$ or fewer nodes in $G'$ as well. Thus, by our inductive hypothesis, we know there is a way to paint the nodes of $G'$ with $c+1$ colors so that no two adjacent nodes in $G'$ have the same color.
We can extend this coloring to all of $G$ as follows. Since $v$ has degree at most $c\text,$ it is adjacent to at most $c$ nodes of $G'\text.$ By the generalized pigeonhole principle, since there are at most $c$ neighbors and $c+1$ colors, there exists a color (WLOG assume it's blue) that appears on $\floor{\frac{c}{c+1}} = 0$ of $v$'s neighbors. Assign this color to $v\text.$
We claim this is a $(c+1)$-vertex-coloring of $G\text.$ First note that we have used only $c+1$ colors. Now consider any edge $e$ in $G\text.$ If neither of $e$'s endpoints is $v\text,$ then both endpoints are in $G'$ and thus are given different colors. Otherwise, one of the endpoints is $v\text,$ and as described above this means the endpoints have different colors.
We thus have a $(c+1)$-vertex-coloring of $G\text,$ so $P(k+1)$ holds, completing the induction. $\qed$
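The inductive step here is effectively an algorithm: remove a node, color what's left, then hand the removed node a color unused by its neighbors. Here's a sketch of that algorithm (the adjacency-list representation and function name are ours), which colors any graph with at most one more color than its maximum degree:

```python
def color_graph(adj):
    """(c+1)-color a graph with max degree c, mirroring the inductive
    step: delete a node, recursively color the rest, then give the
    deleted node a color its neighbors don't use.
    adj maps each node to a list of its neighbors (symmetric)."""
    if not adj:
        return {}                       # base case: the empty graph
    v = next(iter(adj))                 # any node plays the role of v
    rest = {u: [w for w in adj[u] if w != v] for u in adj if u != v}
    coloring = color_graph(rest)        # inductive hypothesis on G'
    used = {coloring[w] for w in adj[v]}
    # v has deg(v) <= c neighbors, so one of colors 0..deg(v) is free
    coloring[v] = next(i for i in range(len(adj[v]) + 1) if i not in used)
    return coloring
```

This is the classic greedy argument; the recursion makes the correspondence with the proof explicit, though an iterative loop over the nodes would work just as well.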
-
Recall that $K_n$ is the complete graph with $n$ nodes: a graph consisting of $n$ nodes with an edge between each distinct pair of nodes.
Prove by induction that for all natural numbers $n\text,$ there exists a coloring of the edges of $K_{2^n}$ using $n$ colors that does not have any monochrome triangles. (Notice that we're working with $K_{2^n}$ here, the complete graph on $2^n$ nodes.)
At first glance this might seem like a daunting problem. We need to find a way to color an infinite collection of graphs $(K_{2^0}, K_{2^1}, K_{2^2}, \dots)$ such that none of them have a monochrome copy of $K_3\text.$ How exactly might we approach this? Fortunately, you're given a big hint in the problem statement: use induction. And as soon as we have that insight, we can break this challenging problem down into some smaller pieces, each of which ends up being pretty manageable. Let's see how to do this.
In any inductive proof, we need a base case, and picking $n = 0$ seems like a good candidate here since we need this result to hold for all natural numbers $n\text.$ In that case, we are tasked with coloring the edges of $K_{2^0} = K_1$ with $0$ colors so that the graph contains no monochrome copies of $K_3\text.$ Fortunately, that turns out not to be that hard! The graph $K_1$ has no edges at all, so we can pick up our paintbrush, paint all 0 of those edges, put the paintbrush down, and announce "okay, I painted everything!" And sure enough, the graph $K_1$ isn't going to have a monochrome copy of $K_3$ because it doesn't have enough nodes or edges for there to be a $K_3$ lurking anywhere. Done!
That takes us to the harder part of the proof: the inductive step. Let's map out what exactly it is that we're going to be assuming and proving. Our normal approach here is to pick some $k \in \naturals\text,$ assume the theorem is true for $n = k\text,$ then prove that the theorem is true for $n = k+1\text.$ We're using capital $K$ to denote a complete graph, and it can be a bit confusing to read $K_{2^k}$ out loud, so let's pick a different variable name (we'll go with $r$ here) just to avoid confusion. So we'll assume that there is a way to color the edges of $K_{2^r}$ using $r$ colors without forming a monochrome $K_3\text,$ and we'll prove that there's a way to color the edges of $K_{2^{r+1}}$ using $r+1$ colors without forming a monochrome $K_3\text.$ The question now is how to do this.
Our first question is what the shape of this proof is going to look like. Is this a "build up" induction, a "build down" induction, or neither? Well, the statement in question - there exists a way to color the edges of $K_{2^n}$ with $n$ colors without forming a monochrome $K_3$ - is an existentially-quantified statement. Therefore, we're going to use a "build up" approach. We'll assume we know how to color $K_{2^r}$ with $r$ colors without any monochrome $K_3\text{'s,}$ and then our goal is to find a way to color $K_{2^{r+1}}$ with $r+1$ colors without any monochrome $K_3\text{'s.}$
An important detail here is that we are building up the coloring rather than the graph. We will begin with the knowledge of how to color a graph with $2^r$ nodes using $r$ colors. Our goal is to find a way to color a graph with $2^{r+1}$ nodes using $r+1$ colors. This means that we will not begin the proof by picking an arbitrary coloring of the $2^{r+1}\text{-node}$ graph, but instead by thinking about how to create a new coloring for this graph using a coloring that works for smaller graphs.
So how do we do this? Well, let's look at the graph $K_{2^{r+1}}\text.$ We have a color palette of $r+1$ colors to pick from, and we need to paint all the edges without forming a monochrome triangle. Let's compare this situation with the one outlined in the inductive hypothesis. Compared with our IH, we have
- a graph with exactly twice as many nodes as it needs, and
- one too many colors.
That might give us a hint about what to do. We need to
- reduce the size of our graph by a factor of two, and
- get rid of one of the colors.
At this point, it's just a matter of trial and error to figure out how best to do this. With a little bit of thought, you might come up with the following approach:
- Take the graph $K_{2^{r+1}}$ and partition its nodes into two groups of size $2^r\text.$ Each of these groups of nodes forms a copy of $K_{2^r}\text,$ and there are a bunch of edges running from each group into the other.
- Use our inductive hypothesis to color each of the smaller $K_{2^r}$ graphs using $r$ colors without any monochrome triangles.
- Color all the remaining edges - the ones that cross-link the two smaller graphs - using an $(r+1)\text{st}$ color.
This approach works for the following reason. If you pick a triangle in the graph, then either
- it's purely in one of the two smaller graphs and therefore isn't monochrome, or
- it crosses between the two smaller graphs, meaning that it has at least one edge of the new color and at least one edge of an older color and thus isn't monochrome.
This is, of course, not the only way you can prove this result. We've included two different proofs below. The first one follows the above line of reasoning pretty closely. The second works by starting with a colored copy of $K_{2^r}\text,$ building a new $K_{2^{r+1}}$ around it, and showing how to color that graph along the way. Either option works, though we think the first route is a bit easier.
Proof 1: Let $P(n)$ be the statement "there is a way to color the edges of $K_{2^n}$ with $n$ colors so that there are no monochrome copies of $K_3\text{."}$ We will prove by induction on $n$ that $P(n)$ holds for all $n \in \naturals\text,$ from which the theorem follows.
As our base case, we prove $P(0)\text,$ that there is a way to color the edges of $K_{2^0} = K_1$ with $0$ colors so that there are no monochrome $K_3\text{'s.}$ The graph $K_1$ has no edges, so if we color all 0 edges using 0 colors we will have no monochrome $K_3\text{'s}$ because $K_3$ contains three edges and $K_1$ has no edges.
For our inductive step, pick some $r \in \naturals$ and assume that $P(r)$ is true, meaning that there is a way to color the edges of $K_{2^r}$ with $r$ colors so that there are no monochrome copies of $K_3\text.$ We will prove that $P(r+1)$ is true by showing how to color the edges of $K_{2^{r+1}}$ with $r+1$ colors such that the resulting graph has no monochrome $K_3\text{'s.}$
Begin by partitioning the nodes in $K_{2^{r+1}}$ into two groups of $2^r$ nodes each (call them $A$ and $B\text{).}$ This leaves us with two copies of $K_{2^r}$ along with a collection of edges linking nodes in $A$ to nodes in $B\text.$ Color the edges in the second category zomp. Then, select any $r$ other colors and assign colors to the edges in $A$ and $B$ using the coloring guaranteed by the inductive hypothesis.
We claim that this new coloring has no monochrome copies of $K_3\text.$ To see this, choose any three distinct nodes $x, y, z$ from $K_{2^{r+1}}\text;$ we'll prove that the copy of $K_3$ formed from $x\text,$ $y\text,$ and $z$ is not monochrome. To see this, first, by the pigeonhole principle, since we have three nodes and each node belongs to one of $A$ and $B\text,$ one of $A$ and $B$ contains at least two of these nodes. Assume WLOG that $x$ and $y$ are in $A\text.$ We now consider two cases:
-
Case 1: $z \in A\text.$ Then by our inductive hypothesis the edges of the copy of $K_3$ formed from $x\text,$ $y\text,$ and $z$ are not all the same color.
-
Case 2: $z \notin A\text.$ Then the edge between $x$ and $z$ is zomp while the edge between $x$ and $y$ is not zomp, so the copy of $K_3$ formed by $x\text,$ $y\text,$ and $z$ is not monochrome.
Either way the three nodes do not form a monochrome triangle, as needed. Thus $P(r+1)$ holds, completing the induction. $\qed$
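Proof 1's construction is recursive, which makes it straightforward to sketch in code and test on small cases. In this sketch (representation and names are ours), nodes are $0, \dots, 2^n - 1\text,$ an edge is a `frozenset` of its two endpoints, and colors are $0, \dots, n-1\text:$

```python
def color_edges(n):
    """Color the edges of K_{2^n} with n colors so that no triangle
    is monochrome, following Proof 1's recursive split."""
    if n == 0:
        return {}                      # K_1 has no edges to color
    half = 2 ** (n - 1)
    small = color_edges(n - 1)         # inductive hypothesis on K_{2^(n-1)}
    coloring = {}
    for e, c in small.items():
        u, v = tuple(e)
        coloring[e] = c                                 # group A: nodes 0..half-1
        coloring[frozenset({u + half, v + half})] = c   # group B: shifted copy
    for u in range(half):              # cross edges get the new color n-1
        for v in range(half, 2 * half):
            coloring[frozenset({u, v})] = n - 1
    return coloring
```

Checking all $\binom{8}{3} = 56$ triples of nodes in `color_edges(3)` confirms that no triangle in $K_8$ ends up monochrome under this three-color scheme.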
Proof 2: Let $P(n)$ be the statement "there is a way to color the edges of $K_{2^n}$ with $n$ colors so that there are no monochrome copies of $K_3\text{."}$ We will prove by induction on $n$ that $P(n)$ holds for all $n \in \naturals\text,$ from which the theorem follows.
As our base case, we prove $P(0)\text,$ that there is a way to color the edges of $K_{2^0} = K_1$ with $0$ colors so that there are no monochrome $K_3\text{'s.}$ The graph $K_1$ has no edges, so if we color all 0 edges using 0 colors we will have no monochrome $K_3\text{'s}$ because $K_3$ contains three edges and $K_1$ has no edges.
For our inductive step, pick some $r \in \naturals$ and assume that $P(r)$ is true, meaning that there is a way to color the edges of $K_{2^r}$ with $r$ colors so that there are no monochrome copies of $K_3\text.$ We will prove that $P(r+1)$ is true by showing how to color the edges of $K_{2^{r+1}}$ with $r+1$ colors such that the resulting graph has no monochrome $K_3\text{'s.}$
Choose any $r+1$ colors $c_1, c_2, \ldots, c_{r+1}\text.$ Begin with two graphs $A$ and $B\text,$ each of which is a copy of $K_{2^r}\text,$ and color them using colors $c_1, c_2, \ldots, c_r$ such that neither $A$ nor $B$ contains any monochrome copies of $K_3\text.$ Next, form graph $G$ by adding edges from each node in $A$ to each node in $B\text,$ each given color $c_{r+1}\text.$ We need to show that graph $G$ is a copy of $K_{2^{r+1}}$ colored using $r+1$ colors and which has no monochrome triangles.
To see that $G$ is a copy of $K_{2^{r+1}}\text,$ pick any two nodes $u, v$ in $G\text.$ We need to show that $u$ and $v$ are adjacent. To see this, note that if $u$ and $v$ both belong to $A$ or both belong to $B\text,$ then they are adjacent because they are nodes in the same complete graph. Otherwise one of $u$ and $v$ is in $A$ and the other is in $B\text,$ and by construction we added an edge between them in $G\text.$
To see that we've only used $r+1$ colors, note that in coloring $G$ we only used colors from $c_1, c_2, \ldots, c_{r+1}\text,$ and there are $r+1$ colors in that list.
Now we must show that $G$ has no monochrome copies of $K_3\text.$ To do this, suppose for the sake of contradiction that such a copy exists using nodes $u\text,$ $v\text,$ and $w\text.$ We consider two cases:
-
Case 1: The $K_3$ formed this way uses one of the colors $c_1\text,$ $c_2\text,$ $\ldots\text,$ or $c_r\text.$ Then by construction this copy of $K_3$ belongs to one of $A$ or $B\text,$ which is impossible because neither $A$ nor $B$ contain a monochrome $K_3\text.$
-
Case 2: The $K_3$ formed this way is color $c_{r+1}\text.$ This means that the edge between $u$ and $v$ is colored $c_{r+1}\text,$ which means that exactly one of $u$ and $v$ is in $A$ and the other is in $B\text.$ Assume WLOG that $u \in A$ and $v \in B\text.$ Assume also WLOG that $w \in A\text.$ But then the edge between $u$ and $w$ is not given color $c_{r+1}\text,$ contradicting that this is a $c_{r+1}$-colored $K_3\text.$
In either case we reach a contradiction, so our assumption was wrong and no monochrome $K_3$ exists in $G\text.$ Thus $P(r+1)$ is true, completing the induction. $\qed$