[MUSIC PLAYING] PROFESSOR: All right. This is CS50, and this is the end of week three. So we're here today not in Sanders Theater but in Widener Library, inside of which is a studio known as Hauser Studio, or shall we say Studio H, or shall we say-- if you enjoyed that joke, it's actually from a classmate, Mark, online, who suggested as much via Twitter. Now, what's cool about being here in a studio is that I'm surrounded by these green walls, a green screen or chroma key, so to speak, which means that CS50's production team, unbeknownst to me right now, could be putting me most anywhere in the world, for better or for worse.

Now, what lies ahead? Problem set two is in your hands for this week, but with problem set three this coming week, you'll be challenged with the so-called game of 15, an old party favor that you might recall receiving as a child, which has a whole bunch of numbers that can slide up, down, left, and right, and one gap within the puzzle into which you can actually slide those puzzle pieces. Ultimately you receive this puzzle in some semi-random order, and the goal is to sort it, top to bottom, left to right, from 1 all the way up through 15. Unfortunately, the implementation you'll have at hand is going to be software based, not physical. You're actually going to have to write code with which a student or user can play the game of 15. And in fact, in the hacker edition of the game of 15, you'll be challenged to implement not just the playing of this old-school game, but rather the solving of it, implementing god mode, so to speak, that actually solves the puzzle for the human by providing them with hint, after hint, after hint. So more on that next week. But that's what lies ahead.

For now, recall that earlier this week we had this cliffhanger, if you will, whereby the best we were doing sorting-wise was an upper bound of big O of n squared. In other words, bubble sort, selection sort, insertion sort, all of them, while different in their implementation, devolved into an n squared running time in the very worst case. And we generally assume that the very worst case for sorting is one in which your inputs are completely backwards. And indeed, it took quite a few steps to implement each of those algorithms.

Now, at the very end of class, recall, we compared bubble sort against selection sort against one other that we called merge sort at the time, and I proposed that it takes advantage of a lesson from week zero, divide and conquer, and somehow achieves some kind of logarithmic running time, ultimately, instead of something that's purely quadratic. And it's not quite logarithmic; it's a bit more than that. But if you recall from class, it was much, much faster. Let's take a look at where we left off: bubble sort versus selection sort versus merge sort. Now, they're all running, in theory, at the same time. The CPU is running at the same speed. But you can feel how boring this is very quickly going to become, and just how fast, when we inject a bit of week zero's algorithms, we can speed things up. So merge sort looks amazing. How can we leverage it in order to sort numbers more quickly?

Well, let's think back to an ingredient that we had back in week zero, that of searching for someone in a phone book, and recall that the pseudocode that we proposed, via which we can find someone like Mike Smith, looked a little something like this.
Now take a look in particular at lines 7 and 8, and 10 and 11, which induced that loop whereby we kept going back to line 3 again, and again, and again. But it turns out that we can view this algorithm, here in pseudocode, a little more holistically. In fact, what I'm looking at here on the screen is an algorithm for searching for Mike Smith among some set of pages. And indeed, we could simplify this algorithm in those lines 7 and 8, and 10 and 11, to just say this, which I've presented here in yellow. In other words, if Mike Smith is earlier in the book, we don't need to specify step by step how to go find him. We don't have to specify to go back to line 3; why don't we just instead say, more generally, search for Mike in the left half of the book. Conversely, if Mike is actually later in the book, why don't we just, quote unquote, search for Mike in the right half of the book. In other words, why don't we just sort of punt to ourselves, saying, search for Mike in this subset of the book, and leave it to our existing algorithm to tell us how to search for Mike in that left half of the book. In other words, our algorithm works whether it's a phone book of this thickness, of this thickness, or any thickness whatsoever. So we can recursively define this algorithm. In other words, on the screen here is an algorithm for searching for Mike Smith among the pages of a phone book. So in lines 7 and 10, let's just say exactly that. And I used this term a moment ago, and indeed, recursion is the buzzword for now, and it's this process of doing something cyclical by somehow using code that you already have, and calling it again, and again, and again. Now, it's going to be important that we somehow bottom out and don't do that infinitely long. Otherwise we're going to have, indeed, an infinite loop.

But let's see if we can borrow this idea of recursion, doing something again and again and again, to solve the sorting problem via merge sort all the more efficiently. So I give you merge sort. Let's take a look. So here is pseudocode with which we could implement sorting using this algorithm called merge sort. And it's quite simply this. On input of n elements-- in other words, if you're given n elements, numbers or letters or whatever the input is-- if n is less than 2, just return. Right? Because if n is less than 2, that means that my list of elements is either of size 0 or 1, and in both of those trivial cases, the list is already sorted. If there is no list, it's sorted. And if there's a list of length 1, it's obviously sorted. So the algorithm only needs to really do something interesting if we have two or more elements given to us. So let's look at the magic, then. Else, sort the left half of the elements, then sort the right half of the elements, then merge the sorted halves. And what's kind of mind bending here is that I don't really seem to have told you anything just yet, right? All I've said is, given a list of n elements, sort the left half, then the right half, then merge the sorted halves. But where is the actual secret sauce? Where is the algorithm? Well, it turns out that those first two lines, sort left half of elements and sort right half of elements, are recursive calls, so to speak. After all, at this point in time, do I have an algorithm with which to sort a whole bunch of elements? Yes. It's right here on the screen, and so I can use that same set of steps to sort the left half as I can the right half. And indeed, again, and again.
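As an aside, here's one way that recursive idea from the phone book might look in actual C, hedged as a sketch rather than the course's own code: a binary search over a sorted array of ints, where "search the left half" or "search the right half" is just the function calling itself on a smaller range. The function name search and its parameters are illustrative, not from the lecture.

    #include <stdbool.h>
    #include <stdio.h>

    // Recursively search values[low..high] (assumed sorted) for value.
    bool search(int value, int values[], int low, int high)
    {
        // Base case: the (sub)list is empty, so bottom out.
        if (low > high)
        {
            return false;
        }

        // Look at the middle "page".
        int mid = low + (high - low) / 2;

        if (values[mid] == value)
        {
            return true;
        }
        else if (value < values[mid])
        {
            // Search for the value in the left half.
            return search(value, values, low, mid - 1);
        }
        else
        {
            // Search for the value in the right half.
            return search(value, values, mid + 1, high);
        }
    }

    int main(void)
    {
        int numbers[] = {2, 4, 6, 8, 10, 12, 14, 16};
        printf("%s\n", search(8, numbers, 0, 7) ? "found" : "not found");
    }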
So somehow or other, and we'll soon see this, the magic of merge sort is embedded in that very final line, merging the sorted halves. And that seems fairly intuitive. You take two halves, and you somehow merge them together, and we'll see this concretely in a moment. But this is a complete algorithm. And let's see exactly why.

Well, suppose that we're given these same eight elements here on the screen, 1 through 8, but in seemingly random order. And the goal at hand is to sort these elements. Well, how can I go about doing it using, again, merge sort, as per this pseudocode? And again, ingrain this in your mind for just a moment. The first case is pretty trivial: if n is less than 2, just return; there's no work to be done. So really there are just three steps to keep in mind. Again and again, I'm going to have to sort the left half, sort the right half, and then, once those two halves are sorted, merge them together into one sorted list. So keep that in mind.

So here's the original list. Let's treat this as an array, as we started to in week two, which is a contiguous block of memory, in this case containing eight numbers, back to back to back. And let's now apply merge sort. So I first want to sort the left half of this list, and let's therefore focus on 4, 8, 6, and 2. Now, how do I go about sorting a list of size 4? Well, I have to now consider sorting the left half of that left half. Again, let's rewind for just a moment. If the pseudocode is this, and I'm given eight elements, 8 is obviously greater than or equal to 2, so that first case doesn't apply. So to sort eight elements, I first sort the left half of the elements, then I sort the right half, then I merge the two sorted halves, each of size 4. OK. But if you've just told me, sort the left half, which is now of size 4, how do I sort the left half? Well, if I have an input of four elements, I first sort the left two, then the right two, and then I merge them together. So again, it becomes a bit of a mind-bending game here, because you kind of have to remember where you are in the story, but at the end of the day, given any number of elements, you first want to sort the left half, then the right half, then merge them together.

Let's start to do exactly that. Here's the input of eight elements. Now we're looking at the left half here. How do I sort four elements? Well, I first sort the left half. Now, how do I sort that left half? Well, I've been given two elements. So let's sort these two elements. 2 is greater than or equal to 2, of course, so that first case doesn't apply. So I now have to sort the left half of these two elements. The left half, of course, is just 4. So how do I sort a list of one element? Well, now that special base case up top, so to speak, applies. 1 is less than 2, and my list is indeed of size 1. So I just return. I don't do anything. And indeed, look at what I've done: 4 is already sorted. I'm already partially successful here. Now, that seems a kind of stupid thing to claim, but it is true. 4 is a list of size 1. It's already sorted. That's the left half. Now I sort the right half. My input is one element, 8, similarly already sorted. Stupid, too, but again, this basic principle is going to allow us to build on top of this successfully. 4 is sorted, 8 is sorted; now, what was that last step? The third and final step, any time you're sorting a list, recall, was to merge the two halves, the left and the right. So let's do exactly that. My left half is, of course, 4.
My right half is 8. So let's do this. First I'm going to allocate some additional memory, which I'll represent here as just a secondary array that's big enough to fit this. But you can imagine extending that rectangle the whole length if we need more later. How do I take 4 and 8 and merge those two lists of size 1 together? Here, too, pretty simple. 4 comes first, then comes 8. Because if I want to sort the left half, then the right half, and then merge those two halves together in sorted order, 4 comes first, then comes 8. So we seem to be making progress, even though I haven't done any actual work.

But remember where we are in the story. We originally took eight elements. We sorted the left half, which is of size 4. Then we sorted the left half of that left half, which was of size 2. And here we go. We're done with that step. So if we've sorted the left half of size 2, now we have to sort the right half of size 2. That right half is these two values here, 6 and 2. So let's now take an input of size 2, sort the left half, then the right half, and then merge them together. Well, how do I sort a list of size 1 containing just the number 6? I'm already done. That list of size 1 is sorted. How do I sort another list of size 1, the so-called right half? Well, it, too, is already sorted. The number 2 is alone. So now I have two halves, left and right, that I need to merge together. Let me give myself some extra space, and put 2 in there, then 6 in there, thereby sorting that left and right and merging them together, ultimately. So I'm in slightly better shape. I'm not done, because clearly 4, 8, 2, 6 is not the final ordering that I want. But I now have two lists of size 2 that have both, respectively, been sorted.

So now, if you rewind in your mind's eye, where did that leave us? I started with eight elements, then I whittled it down to the left half of size 4, then the left half of size 2, and then the right half of size 2. I've finished, therefore, sorting the left half of size 2 and the right half of size 2, so what's the third and final step here? I have to merge together two lists of size 2. So let's go ahead. And on the screen here, give me some additional memory, though technically, notice that I've got a whole bunch of blank space up top there. If I wanted to be especially efficient space-wise, I could just start moving the elements back and forth, top and bottom. But just for visual clarity, I'm going to put it down below, to keep things nice and clean.

So I've got two lists of size 2. The first list has 4 and 8. The second list has 2 and 6. Let's merge those together in sorted order. 2, of course, comes first, then 4, then 6, then 8. And now we seem to be getting somewhere interesting. Now I've sorted half of the list, and coincidentally it's all the even numbers, but that is indeed just a coincidence. I have now sorted the left half, so that it's 2, 4, 6, and 8. Nothing's out of order. That feels like progress. It feels like I've been talking forever now, and what remains to be seen is whether this algorithm is indeed more efficient. But we're going through it super methodically. A computer, of course, would do it just like that.

So where are we? We started with eight elements. I sorted the left half of size 4. I seem to be done with that. So now the next step is to sort the right half of size 4.
And this part we can go through a little more quickly, although you're welcome to rewind or pause and just think through it at your own pace. But what we have now is an opportunity to do the exact same algorithm on four different numbers. So let's go ahead and focus on the right half, which is here: the left half of that right half, and now the left half of the left half of that right half. How do I sort a list of size 1 containing just the number 1? It's already done. How do I do the same for a list of size 1 containing just 7? It's already done. Step three for this half, then, is to merge these two elements into a new list of size 2: 1 and 7. I don't seem to have done all that much interesting work, so let's see what happens next.

I just sorted the left half of the right half of my original input. Now let's sort the right half of that, which contains 5 and 3. Let's again look at the left half, sorted; the right half, sorted; and merge those two together into some additional space: 3 comes first, then comes 5. And so now we have sorted the left half of the right half of the original problem, and the right half of the right half of the original problem. What's the third and final step? Well, to merge those two halves together. So let me get myself some extra space, though, again, I could be using that spare space up top. But we're going to keep it simple visually. Let me merge in now 1, and then 3, and then 5, and then 7, thereby leaving me with the right half of the original problem perfectly sorted.

So what remains? I feel like I keep saying the same things again and again, but that's reflective of the fact that we're using recursion, the process of using an algorithm again and again on smaller subsets of the original problem. So I now have a sorted left half of the original problem and a sorted right half of the original problem. What's the third and final step? Oh, it's merging. So let's do that. Let's allocate some additional memory, though my god, we could put it anywhere now. We have so much space available to us. But we'll keep it simple. Instead of going back and forth with our original memory, let's just do it visually down here below, to finish up merging the left half and the right half.

So by merging, what do I need to do? I want to take the elements in order. Looking at the left half, I see the first number is 2. I look at the right half, I see the first number is 1, so obviously which number do I want to pluck out and put first in my final list? Of course, 1. Now I want to ask that same question. On the left half, I've still got the number 2. On the right half, I've got the number 3. Which one do I want to choose? Of course, number 2. And now notice the candidates are 4 on the left, 3 on the right. Let's, of course, choose 3. Now the candidates are 4 on the left, 5 on the right. We, of course, choose 4. 6 on the left, 5 on the right. We, of course, choose 5. 6 on the left, 7 on the right. We choose 6, and then we choose 7, and then we choose 8. Voila. So a huge number of words later, we have sorted this list of eight elements into a list of 1 through 8, increasing with each step. But how much time did it take us to do that? Well, I've deliberately laid things out pictorially here, so that we can kind of see or appreciate the dividing and conquering that's been happening.
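Before we analyze the running time, here's what that whole walkthrough might look like as actual code: a minimal sketch of merge sort in C, hedged as one possible translation of the pseudocode rather than the course's own implementation. The helper names, the index-based signature, and the use of a single temporary array are all assumptions made for illustration.

    #include <stdio.h>

    // Merge the two sorted halves values[low..mid] and values[mid+1..high]
    // into sorted order, using a temporary array for the merged result.
    void merge(int values[], int low, int mid, int high)
    {
        int temp[high - low + 1];
        int i = low, j = mid + 1, k = 0;

        // Repeatedly take the smaller of the two "front" candidates.
        while (i <= mid && j <= high)
        {
            temp[k++] = (values[i] <= values[j]) ? values[i++] : values[j++];
        }

        // Copy over whatever remains in either half.
        while (i <= mid)
        {
            temp[k++] = values[i++];
        }
        while (j <= high)
        {
            temp[k++] = values[j++];
        }

        // Copy the merged result back into the original array.
        for (k = 0; k < high - low + 1; k++)
        {
            values[low + k] = temp[k];
        }
    }

    // On input of n elements (here, the range low..high): if there are fewer
    // than 2 elements, just return; else sort the left half, sort the right
    // half, and merge the sorted halves.
    void merge_sort(int values[], int low, int high)
    {
        if (low >= high)
        {
            return;
        }
        int mid = low + (high - low) / 2;
        merge_sort(values, low, mid);
        merge_sort(values, mid + 1, high);
        merge(values, low, mid, high);
    }

    int main(void)
    {
        // The same seemingly random order used in the walkthrough above.
        int numbers[] = {4, 8, 6, 2, 1, 7, 5, 3};
        merge_sort(numbers, 0, 7);
        for (int i = 0; i < 8; i++)
        {
            printf("%i ", numbers[i]);
        }
        printf("\n");
    }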
Indeed, if you look back at the wake I've left, with all of these dotted lines still in place as placeholders, you can kind of see, in reverse order, if you look back in history now, that my original list is, of course, of size 8. And then previously I was dealing with two lists of size 4, and then four lists of size 2, and then eight lists of size 1. So what does this kind of remind you of? Well, indeed, any of the algorithms we've looked at thus far where we divide, and divide, and divide, and keep halving things again and again, results in this general idea. And so there's something logarithmic going on here. And it's not quite log of n, but there's a logarithmic component to what we've just done. Now let's consider why that actually is.

So log of n, again, was a great running time, when we did something like binary search, as we now call it, the divide-and-conquer strategy via which we found Mike Smith. Now, technically, that's log base 2 of n, even though in most math classes 10 is usually the base that you assume. But computer scientists almost always think and talk in terms of base 2, so we generally just say log of n instead of log base 2 of n, but they're exactly one and the same in the world of computer science, and, as an aside, there's only a constant factor difference between the two, so it's moot anyway, for more formal reasons.

But for now, what we care about is this example. So let's not prove by example, but at least use an example of the numbers at hand as a sanity check, if you will. So previously the formula was log base 2 of n, but what is n in this case? I had n original numbers, or 8 original numbers, specifically. Now, it's been a little while, but I'm pretty sure that log base 2 of the value 8 is 3, and indeed, what's nice about that is that 3 is exactly the number of times that you can divide a list of length 8 again, and again, and again, until you're left with lists of just size 1. Right? 8 goes to 4, goes to 2, goes to 1, and that's reflective of exactly that picture we had just a moment ago. So a little sanity check as to where the logarithm is actually involved.

So now, what else is involved here? n. So notice that every time I split the list, albeit in reverse order in history here, I was still doing n things. That merging step required that I touch every one of the numbers in order to slide it into its appropriate location. So even though the height of this diagram is of size log of n, or 3 specifically-- in other words, I did three divisions here-- how much work did I do horizontally along this chart each time? Well, I did n steps of work, because if I've got four elements and four elements, and I need to merge them together, I need to go through these four and these four, ultimately, to merge them back into eight elements. If I've got four fingers over here, which I do, and four fingers over here, which I do, then that's the same example as before, and if I do have eight fingers in total, which I can kind of do here, then I can certainly merge all of these lists together, but I certainly have to look at each element exactly once. So the height of this process is log n, the width of this process, so to speak, is n, so what we seem to have, ultimately, is a running time of size n times log n.
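As a quick back-of-the-envelope recap of that arithmetic (a sketch, not a formal proof):

    \underbrace{\log_2 n}_{\text{levels of halving}} \times \underbrace{n}_{\text{elements merged per level}} = n \log_2 n,
    \qquad \text{e.g. } n = 8:\ 3 \times 8 = 24 \text{ element-touches in total.}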
In other words, we divided the list log n times, but each time we did that, we had to touch every one of the elements in order to merge them all together, which was n steps. So we have n times log n, or, as a computer scientist would say, asymptotically-- which would be the big word to describe the upper bound on a running time-- we are running in big O of n log n time, so to speak. Now, this is significant, because recall what the running times were with bubble sort, and selection sort, and insertion sort, and even a few others that exist: n squared was where we were at. And you can kind of see this here. n squared is obviously n times n, but here we have n times log n, and we already know from week zero that log n, the logarithmic, is better than something linear. After all, recall the picture with the red and the yellow and the green lines that we drew: the green logarithmic line was much lower, and therefore much better and faster, than the straight yellow and red lines. So n times log n is, indeed, better than n times n, or n squared. So we seem to have identified an algorithm, merge sort, that runs in much faster time, and indeed, that's why, earlier this week, when we saw that contest between bubble sort, selection sort, and merge sort, merge sort really, really won. And indeed, we didn't even wait for bubble sort and selection sort to finish.

Now let's take one other pass at this, from a slightly more formal perspective, just in case this resonates better than that higher-level discussion. So here's the algorithm again. Let's ask ourselves what the running time is of this algorithm's various steps. Let's divide it into the first case and the second case, the IF and the ELSE. In the IF case: if n is less than 2, just return. Feels like constant time. It's kind of like two steps: if n is less than 2, then return. But as we said on Monday, constant time, or big O of 1, can be two steps, three steps, even 1,000 steps. What matters is that it's a constant number of steps. So the yellow highlighted pseudocode here runs in, we'll call it, constant time. So more formally-- and this will be the extent to which we formalize this right now-- T of n, the running time of a problem that takes n somethings as input, equals big O of 1 if n is less than 2. So it's conditional on that. So to be clear, if n is less than 2, we have a very short list, and then the running time, T of n, where n is 1 or 0 in this very specific case, is just going to be constant time. It's going to take one step, two steps, whatever. It's a fixed number of steps.

So the juicy part must surely be in the other case in the pseudocode, the ELSE case: sort left half of elements, sort right half of elements, merge sorted halves. How long does each of those steps take? Well, if the running time to sort n elements is, let's call it very generically, T of n, then sorting the left half of the elements is kind of like saying T of n divided by 2, and similarly, sorting the right half of the elements is kind of like saying T of n divided by 2. And then merging the sorted halves? Well, if I've got some number of elements here, like four, and some number of elements here, like four, and I have to merge each of these four in, and each of these four in, one after the other, so that ultimately I have eight elements, it feels like that's big O of n steps. If I've got n fingers and each of them has to be merged into place, that's like another n steps.
So indeed, formulaically, we can express this, albeit a little scarily at first glance, but it is something that captures exactly that logic. The running time, T of n, if n is greater than or equal to 2-- in this case, the ELSE case-- is T of n divided by 2, plus T of n divided by 2, plus big O of n, some linear number of steps, maybe exactly n, maybe 2 times n, but roughly on the order of n. So that, too, is how we can express this formulaically. Now, you wouldn't know this unless you've committed it to memory or you look it up in the back of a textbook that might have a little cheat sheet at the end, but this is indeed going to give us big O of n log n, because the recurrence that you're seeing here on the screen, if you actually did it out on examples, or you did it formulaically, you would see that this formula, because it is itself recursive, with T of n on the left and T of n over 2 on the right, can actually be expressed, ultimately, as big O of n log n. If you're not convinced, that's fine for now; just take on faith that that's indeed what that recurrence leads to. But this is just a bit more of a mathematical approach to looking at the running time of merge sort, based on its pseudocode alone.

Now let's take a bit of a breather from all of that, and take a look at a certain former senator, who might look a little familiar, who sat down with Google's Eric Schmidt some time ago for an interview on stage, in front of a whole bunch of people, talking ultimately about a topic that's now pretty familiar. Let's take a look.

ERIC SCHMIDT: Now, Senator, you're here at Google, and I like to think of the presidency as a job interview. Now, it's hard to get a job as president.

PRESIDENT OBAMA: Right.

ERIC SCHMIDT: And you're going to do [INAUDIBLE] now. It's also hard to get a job at Google.

PRESIDENT OBAMA: Right.

ERIC SCHMIDT: We have questions, and we ask our candidates questions, and this one is from Larry Schwimmer.

PRESIDENT OBAMA: OK.

ERIC SCHMIDT: What? You guys think I'm kidding? It's right here. What is the most efficient way to sort a million 32-bit integers?

PRESIDENT OBAMA: Well--

ERIC SCHMIDT: Sometimes, maybe I'm sorry, maybe--

PRESIDENT OBAMA: No, no, no, no, no, I think--

ERIC SCHMIDT: That's not it--

PRESIDENT OBAMA: I think, I think the bubble sort would be the wrong way to go.

ERIC SCHMIDT: Come on. Who told him this? OK. I didn't-- the computer science on--

PRESIDENT OBAMA: We've got our spies in there.

PROFESSOR: All right. Let's leave behind us now the theoretical world of algorithms and the asymptotic analysis thereof, and return to some topics from weeks zero and one, and start to remove some training wheels, if you will, so that you really understand, ultimately, from the ground up, what's going on underneath the hood when you write, compile, and execute programs. Recall in particular that this was the first C program we looked at, a canonical, simple program of sorts, relatively speaking, wherein it prints "hello, world." And recall that I said the process that source code goes through is exactly this: you take your source code, pass it through a compiler like Clang, and out comes object code, which might look like this, zeros and ones that the computer's CPU, central processing unit or brain, ultimately understands.
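For reference, the program being described is presumably something along these lines, a sketch of the canonical example rather than a character-for-character copy of what's on screen:

    #include <stdio.h>

    int main(void)
    {
        // Print a greeting, followed by a newline.
        printf("hello, world\n");
    }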
It turns out that that's a bit of an oversimplification, one that we're now in a position to tease apart, to understand what's really been going on underneath the hood every time you run Clang, or, more generally, every time you make a program using Make in CS50 IDE. In particular, stuff like this is first generated when you first compile your program. In other words, when you take your source code and compile it, what's first being outputted by Clang is something known as assembly code. And in fact, it looks exactly like this. I ran a command at the command line earlier, clang -S hello.c (that's a dash and a capital S), and this created a file for me called hello.s, inside of which were exactly these contents, with a little more above and a little more below, but I've put the juiciest information here on the screen. And if you look closely, you'll see at least a few familiar keywords. We have main at top. We have printf down in the middle. And we also have "hello world backslash n" in quotes down below. And everything else in here is very low-level instructions that the computer's CPU understands: CPU instructions that move memory around, that load strings from memory, and that ultimately print things on the screen.

Now, what happens, though, after this assembly code is generated? Ultimately, you do indeed still generate object code. But the steps that have really been going on underneath the hood look a little more like this: source code becomes assembly code, which then becomes object code. And the operative words here are that when you compile your source code, out comes assembly code, and then when you assemble your assembly code, out comes object code. Now, Clang is super sophisticated, like a lot of compilers, and it does all of these steps together, and it doesn't necessarily output any intermediate files that you can even see. It just compiles things, which is the general term that describes this entire process. But if you really want to be particular, there's a lot more going on there as well.

But let's also consider now that even that super simple program, hello.c, called a function. It called printf. But I did not write printf; indeed, that comes with C, so to speak. It's a function, recall, that's declared in stdio.h, which is a header file, a topic we'll actually dive into in more depth before long. But a header file is typically accompanied by a code file, a source code file. So much like there exists stdio.h, some time ago someone, or someones, also wrote a file called stdio.c, in which the actual definitions, or implementations, of printf and bunches of other functions are actually written. So given that, consider having here on the left hello.c, which, when compiled, gives us hello.s, even if Clang doesn't bother saving it in a place we can see it, and that assembly code then gets assembled into hello.o, which is indeed the default name given whenever you compile source code into object code. But we aren't quite ready to execute it yet, because another step has to happen, one that has been happening for the past few weeks, perhaps unbeknownst to you.
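To make those intermediate steps concrete, here's a sketch of how you might run them explicitly yourself at the command line; Make and Clang normally do all of this in one go, and the final linking step is the one described next:

    clang -S hello.c        # compile:  hello.c -> hello.s (assembly code)
    clang -c hello.s        # assemble: hello.s -> hello.o (object code)
    clang hello.o -o hello  # link hello.o with the C library's object code -> executable "hello"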
Specifically, somewhere in CS50 IDE-- and this, too, will be a bit of an oversimplification for a moment-- there is, or was once upon a time, a file called stdio.c, which someone compiled into stdio.s or the equivalent, which someone then assembled into stdio.o, or, it turns out, into a slightly different file format that can have a different file extension altogether. But in theory, and conceptually, exactly those steps had to happen in some form. Which is to say that now, when I'm writing a program, hello.c, that just says hello world, and I'm using someone else's code like printf, which was once upon a time in a file called stdio.c, then somehow I have to take my object code, my zeros and ones, and that person's object code, or zeros and ones, and somehow link them together into one final file, called hello, that has all of the zeros and ones from my main function and all of the zeros and ones for printf. And indeed, that last process is called linking your object code, the output of which is an executable file.

So in fairness, at the end of the day, nothing has changed since week one, when we first started compiling programs. Indeed, all of this has been happening underneath the hood, but now we're in a position where we can actually tease apart these various steps. And indeed, at the end of the day, we're still left with zeros and ones, which is actually a great segue now to another capability of C that we've most likely not had to leverage to date, known as bitwise operators. In other words, thus far, any time we've dealt with data in C, or variables in C, we've had things like chars and floats and ints and longs and doubles and the like, but all of those are at least eight bits. We've never yet been able to manipulate individual bits, even though an individual bit, we know, can represent a 0 or a 1.

Now, it turns out that in C you can get access to individual bits, if you know the syntax with which to get at them. So let's take a look at bitwise operators. Pictured here are a few symbols that we've kind of, sort of, seen before. I see an ampersand, a vertical bar, and some others as well. And recall that ampersand ampersand is something we have seen before, the logical AND operator, where you have two of them together, as is the logical OR operator, where you have two vertical bars. Bitwise operators, which we'll see operate on bits individually, use just a single ampersand, a single vertical bar, the caret symbol, the little tilde, and then two left angle brackets or two right angle brackets. Each of these has a different meaning. In fact, let's take a look.

Let's go old school today, and use a touch screen from yesteryear known as a whiteboard. And this whiteboard is going to allow us to express some fairly simple symbols, or rather some fairly simple formulas, that we can then ultimately leverage in order to access individual bits within a C program. In other words, let's do this. Let's first talk for a moment about the ampersand, which is the bitwise AND operator. In other words, this is an operator that allows me to have a left-hand operand, typically a variable, and a right-hand operand, a variable or an individual value, and if we AND them together, we get a final result. So what do I mean? If in a program you have a variable that stores one of these values-- or let's keep it simple and just write out zeros and ones individually-- here's how the ampersand operator works. 0 ampersand 0 is going to equal 0. Now, why is that?
It's very similar to the Boolean expressions that we've discussed thus far. If you think of 0 as false, after all, then false AND false is, as we've discussed logically, also false. So we get 0 here as well. If you take 0 ampersand 1, well, that, too, is going to be 0, because for this whole expression to be true, or 1, it would need to be true and true. But here we have false and true, or 0 and 1. Now, again, if we have 1 ampersand 0, that, too, is going to be 0, and only if we have 1 ampersand 1 do we finally have a 1 bit. So in other words, we're not doing anything interesting with this operator just yet, this ampersand operator, the bitwise AND operator. But these are the ingredients via which we can do interesting things, as we'll soon see.

Now let's look at just the single vertical bar, over here on the right. If I have a 0 bit and I OR it, with the bitwise OR operator, with another 0 bit, that's going to give me 0. If I take a 0 bit and OR it with a 1 bit, then I'm going to get 1. And in fact, just for clarity, let me go back, so that my vertical bars aren't mistaken for 1's. Let me rewrite all of my 1's a little more clearly, so that we next see that if I have 1 OR 0, that's going to be a 1, and if I have 1 OR 1, that, too, is going to be a 1. So you can see logically that the OR operator behaves very differently. 0 OR 0 gives me 0, but every other combination gives me 1. So long as I have one 1 in the formula, the result is going to be 1. By contrast, with the AND operator, the ampersand, only if I have two 1's in the equation do I actually get a 1 out.

Now, there are a few other operators as well. One of them is a little more involved. So let me go ahead and erase this to free up some space, and let's take a look at the caret symbol for just a moment. This is a character you can typically type by holding Shift and pressing the 6 key atop a US keyboard. This is the exclusive OR operator, XOR. So we just saw the OR operator; this is the exclusive OR operator. What's actually the difference? Well, let's just look at the formulas, and use these as ingredients ultimately. 0 XOR 0, I'm going to say, is always 0. That's the definition of XOR. 0 XOR 1 is going to be 1. 1 XOR 0 is going to be 1. And 1 XOR 1 is going to be? 0. Now, what is going on here? Well, think about the name of this operator: exclusive OR. So as the name kind of suggests, the answer is only going to be a 1 if the inputs are exclusive, exclusively different. So here the inputs are the same, so the output is 0. Here the inputs are the same, so the output is 0. Here the inputs are different; they are exclusive, and so the output is 1. So it's very similar to OR, but only in an exclusive way: this last one is no longer a 1, because we have two 1's, not exclusively just one of them.

All right. What about the others? Well, the tilde, meanwhile, is actually nice and simple, thankfully. And this is a unary operator, which means it's applied to only one input, one operand, so to speak, not to a left and a right. In other words, if you take tilde of 0, the answer will be the opposite, 1. And if you take tilde of 1, the answer there will be the opposite, 0. So the tilde operator is a way of negating a bit, or flipping a bit from 0 to 1, or 1 to 0. And that leaves us, finally, with just two final operators, the so-called left shift and the so-called right shift operators.
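To see those single-character operators in actual C syntax, here's a minimal sketch; the variable names are illustrative, and the very last line previews the left-shift operator that's described next:

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 0;

        printf("%i\n", a & b);  // bitwise AND: 1 & 0 == 0
        printf("%i\n", a | b);  // bitwise OR:  1 | 0 == 1
        printf("%i\n", a ^ a);  // bitwise XOR: 1 ^ 1 == 0 (inputs are not exclusively different)
        printf("%i\n", ~0 & 1); // bitwise NOT flips every bit of 0; & 1 keeps just the lowest bit, now 1
        printf("%i\n", a << 7); // left shift: 1 shifted left seven places == 128
    }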
Let's take a look at how those work. The left shift operator, written with two angle brackets like that, operates as follows. If my input, or my operand, to the left shift operator is quite simply a 1, and I then tell the computer to left shift that 1, say, seven places, the result is as though I take that 1 and move it seven places over to the left, and by default, we're going to assume that the space to the right is going to be padded with zeros. In other words, 1 left shift 7 is going to give me that 1, followed by 1, 2, 3, 4, 5, 6, 7 zeros. So in a way, it allows you to take a small number like 1 and clearly make it much, much, much bigger, but we're actually going to see more clever approaches for it as well.

All right. That's it for week three. We will see you next time. This was CS50.

[MUSIC PLAYING]

SPEAKER 1: He was at the snack bar eating a hot fudge sundae. He had it all over his face. He's wearing that chocolate like a beard.

SPEAKER 2: What are you doing?

SPEAKER 3: Hmmm? What?

SPEAKER 2: Did you just double dip? You double dipped the chip.

SPEAKER 3: Excuse me.

SPEAKER 2: You dipped the chip, you took a bite, and you dipped again.

SPEAKER 3:

SPEAKER 2: So that's like putting your whole mouth right in the dip. Next time you take a chip, just dip it once, and end it.

SPEAKER 3: You know what, Dan? You dip the way that you want to dip. I'll dip the way that I want to dip.