DAVID MALAN: Let's now blow your mind. It turns out that in the real world, 1 divided by 10 is indeed 1/10, or 0.1. But in computers, which only have a finite number of bits with which to represent numbers, you can't always represent numbers like 1/10 with perfect precision. In other words, computers sometimes have to make judgment calls and not necessarily represent the number you want as precisely as you intend. For instance, suppose I go back into this program and change that .1 in printf's format code to, oh, .28, thereby indicating that I'd like printf to print to 28 places of precision. Let's now save and compile the program, this time with make floats2. Run it with dot slash floats2. And, dear God, this time I see not 0.1, but 0.10000000, which is pretty good so far. But then, 14901161193847656250. Well, what's going on? Well, it turns out that a float is typically stored inside of a computer with 32 bits. 32 is obviously a finite number, which implies that with 32 bits you can only represent a finite number of floating point values. Unfortunately, that means the computer cannot represent all possible floating point numbers, or real numbers, that exist in the world, because it only has so many bits. And so what the computer has apparently done in this case is represent 1/10 with the closest possible floating point value that it can. But if we look, as we have here, at 28 decimal places, we start to see that imprecision. So this is a problem with no perfect solution. We can use a double instead of a float, which tends to use 64 bits as opposed to 32. But of course, 64 is also finite, so the problem will remain even with doubles.
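For reference, here is a minimal sketch of what a program like floats2 might look like; the actual source isn't shown in this excerpt, so the variable names and the hardcoded values of 1 and 10 are assumptions for illustration only (in lecture the values might instead come from the user).

```c
#include <stdio.h>

int main(void)
{
    // Hardcoded for illustration; the lecture's floats2 may obtain these
    // values differently (e.g., by prompting the user).
    float x = 1;
    float y = 10;

    // The quotient is stored in a 32-bit float, so 1/10 is rounded to the
    // nearest value that 32 bits can represent.
    float z = x / y;

    // Printing to 28 decimal places exposes that rounding: the output
    // begins 0.10000000 and then trails off into nonzero digits.
    printf("%.28f\n", z);
}
```

Compiling with make floats2 and running ./floats2 should print something very close to, but not exactly, 0.1. Swapping float for double in this sketch would get more of those leading digits right, but because 64 bits are also finite, the printed value still wouldn't be exactly 0.1.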