CARTER ZENKE: All right, hello everyone. Welcome to our first test review session of the fall semester. My name is Carter. I'm [INAUDIBLE] preceptor, joined also by Phyllis, one of our head course assistants. Super excited to be here today and to help you all go through some of the key concepts from this test. It is just about 7 o'clock, which is the time we said we would start the session. Or is it? Because on the course website, we said that we start on November 10 at 7:00 PM, in quotes, those exact characters, right? But on your computer, I would argue that this isn't what actually went on. We actually have this number in your computer, this Unix time representing how many seconds have passed since January 1, 1970. So "what time is it really?" is a great question right here. So if we're trying to represent this session's start time right now, 7:00 PM on November 10th, well, I might say that this string is the best way to represent that. Like I could send an email, I could send a text saying November 10, 7:00 PM, those actual characters. But Phyllis, on the other hand, might say, well, this integer is actually a better way to represent this, because maybe it's cleaner, right? But in talking about these different ways to represent this actual date, this actual time that we're in right now, well, these would lead us to different binary representations of this same date. So in the chat if you'd like, how would this change our representation of this date, depending on whether we held this date as a string or as an integer? What should we be thinking about as we go through and write some of these dates in actual machine language, or machine code, this language computers speak? Go ahead and put some answers in the chat based on what you think. 
So one thing I'm seeing is that if we chose to represent this date as a string, well, we'd be dealing with maybe ASCII characters, right, ASCII characters where each character would have its own code that we could put into binary. So for example, if I were to look at this string representation of this date, well, that N there would be-- the ASCII code is 78, actually, and the binary for that is 01001110. I looked that up earlier, right? But this would be the binary representation of the ASCII number for the N in November. Now, further on we go to the o, and we might say, well, that o-- lowercase o specifically-- is 01101111. And that again is the binary representation of the ASCII value for this o character. But this integer, right, this is a whole different way of representing that date. And so we might have to actually use something different. Here we have this integer that is actually going to be a 32-bit number, or represented with 32 bits, or in this case, let's see, 4 bytes, right? So this would be-- I won't type all of it out, but 0110000110011 and so on, all the way to having 32 bits that would show off this entire integer number here. So on Stack Overflow I found this question of, if 32-bit machines can only handle numbers that go up to 2 to the 32, if we can only have numbers represented with 32 bits, how come we can write this number right here, this 1 trillion number, right? Well, I hope looking at this shows us that, actually, we're typing in the ASCII characters for this number and not necessarily storing it in 32 bits, right? There's a difference between representing things with ASCII and representing things with actual numbers. So these are all questions of representation, right, these issues of how we can store numbers, how we can talk about numbers inside of a computer. But what happens when things might go wrong with representation? That's a big theme from this first week of the course. 
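To make the difference concrete, here's a minimal sketch in Python; the Unix timestamp below is a hypothetical value chosen for illustration, not the actual session time:

```python
# Two ways to represent the same date, as discussed above.
date_string = "November 10, 7:00 PM"
date_int = 1636588800  # assumed Unix time: seconds since January 1, 1970

# As a string, each character is stored by its ASCII code, one byte each.
print("N", ord("N"), format(ord("N"), "08b"))  # 78 -> 01001110
print("o", ord("o"), format(ord("o"), "08b"))  # 111 -> 01101111

# As an integer, the whole date fits in a single 32-bit value.
print(format(date_int, "032b"))
```

Notice the string needs one byte per character, while the integer packs the entire date into 32 bits.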
Well, let's say we wanted to add the year to our date, not just November 10, 7:00 PM, but November 10, 7:00 PM, 2021. And we have 1,000 years until we're going to change from 2000 to 3000, or even hundreds of years before we change from 2020 to 2100, so I think we'll be safe with just two digits, right? But as we go through-- if we start with 21, maybe next year 22, and so on, well, we could get all the way to 98 and 99. And then eventually, right, we would hit this point where we can't count any higher. And we'd get this overflow, this transition from 99 to 00, right? Because if we added 1 to the 9 on the right, we'd have to carry a 1, which would make it a 0, and then carry another 1 into the 9 on the left, making it a 0 too. And so of course we can only count up to 99, but then if we get to 00, what year are we talking about anymore, right? Are we talking about 2000? Are we talking about 2100? We don't know anymore. And so we didn't have enough information to represent all of the possible years we could be talking about here. Now, that again is a question of overflow, this idea of running out of space to store the information that we're trying to store in this case. Now, this idea actually came into play when we were talking about the January 1, 2000 Y2K bug, right? We had basically two digits to store the year, going from 1999 to 2000 and flipping back over. But again, with only two digits, this was sort of going back to 1900 and not forward to 2000 in this case. Now, we've since developed better ways to represent numbers, and we have these 32-bit integers, like Unix time, that can count even higher. But of course, we're running into the same problem later on, on January 19, 2038. So maybe the key here is to use these 64-bit numbers that would give us much more space to represent all kinds of possible times here. But again, this gets at this idea of having a certain amount of space in the computer to represent our information. 
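Both overflows can be sketched in a few lines of Python; the two-digit year is the example above, and the 2038 limit falls right out of the math:

```python
from datetime import datetime, timedelta, timezone

# Two digits of year wrap around, just like the 99 -> 00 overflow above.
year = (99 + 1) % 100
print(year)  # 0 -- ambiguous: is that 2000 or 2100?

# A signed 32-bit count of seconds since the epoch runs out here:
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
limit = epoch + timedelta(seconds=2**31 - 1)
print(limit)  # 2038-01-19 03:14:07+00:00
```

That last timestamp is exactly the January 19, 2038 moment mentioned above, when a signed 32-bit Unix time overflows.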
So certainly feel welcome here to pose questions in the chat if you have anything you want to talk about based on the representation question, before we move on to some of these topics on programming languages and so on. I'll just wait a minute here for some questions to come in via the chat. All right, so I don't see any questions coming in. But definitely feel free to message them to me or to Phyllis, and we'll do our best to respond to them live. Ah, I see a question here. So what is stack overflow? Right, there's this idea of going back to overflow. Well, we were talking about overflow in terms of representation of numbers, representation of digits here. But stack overflow can be somewhat similar in the sense that when we call a function in our computer, right, we keep using the computer's memory higher and higher on that stack that we talked about earlier, in week four or so. And as we keep calling more and more functions, we keep using more and more memory. Well, the stack only has a finite amount of space, right? So if we were to climb all the way to the top of that stack, we'd eventually reach the top and run out of space there. So that is what's called stack overflow, the same idea of sort of running out of space to store the information we need to store. And another question is asking, can an integer be represented as high as 2 to the 31 minus 1? And in this case, that's actually correct. If we have a 32-bit integer, right, let's see-- I don't think I have an example of 32 bits here. I can go ahead and type one in. So here we left off at 11, and let's go ahead and just add-- let's say that's about 32 bits, right? With a signed 32-bit integer, we could represent numbers that go as high as 2 to the 31 minus 1. If we only cared about numbers that were positive, or unsigned-- they don't have a positive or negative sign in front of them-- we could actually go as high as 2 to the 32 minus 1. 
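To make those ranges concrete, a quick Python sketch:

```python
bits = 32

unsigned_max = 2**bits - 1       # all 32 bits used for magnitude
signed_max = 2**(bits - 1) - 1   # one bit effectively reserved for the sign
signed_min = -(2**(bits - 1))

print(unsigned_max)  # 4294967295
print(signed_max)    # 2147483647
print(signed_min)    # -2147483648
```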
If we wanted to represent both positive and negative numbers, we'd only have half that much room to go away from 0, right, because we're sort of dividing our space in half as we include both negative and positive numbers. Another question about stack overflow I'm seeing: how would we know there is a stack overflow error? And would it say that there is one? I think most environments would tell you if there is a stack overflow error, right? They're going to be running your code and, in most cases, probably a recursion issue would happen. And so it can tell you, look, we've run out of memory; this is a stack overflow error. And certainly feel free to ask other questions here. All right. So, going back to this idea of running out of time. We certainly have enough time before January 19, 2038 to think about how we can address this issue. And maybe part of that would involve thinking more deeply about programming languages, how they store data, and so on. So when we're talking about programming languages, one of the very first things that comes to mind is this idea of a variable, storing information, right? In this case, we have a date, which in this case is pointing to this location in memory of this actual integer representation of that date. But how did it get there? Well, that's where assignment in a programming language can come in. We have this equal sign, which is this assignment operator that can take whatever you put on the right and store it in a sort of named space on the left. So here we have this high number that's representing the date right now, and we're storing that in this location that we're calling date, right? Now, if we want to keep track of the date, we could keep assigning new numbers to this date location. We could go ahead and give it the next second on, and the next second on, and so on. But a better way to do that is with these operators that are part of programming languages. And in this case, we have an operator that is just plus, right? 
We can add to the current values that we have stored in our computer's memory. So here we have this operator that is first looking at date, the variable that is stored, reading from right to left here. And it's going to add a 1 to this location, so I might go ahead and add 1 here. And it's going to store that value back in that same location there. So these operators allow us to take certain data, manipulate it, and store it back in the same location. But what if we also wanted to ask questions about our data, right, not just store things, and add to them, and so on? Well, that's where conditionals can come in. And we saw these in Python, in C, and even in JavaScript a little bit. Asking questions of our data: is date greater than zero? Well, certainly it is in this case. So we'd get back this value of true. Or is date greater than maybe 2 million, or 2 trillion? And in this case, well, it's not. And so we get back this false value. So conditionals are what help us sort of ask questions of our data and get back answers that are either yes or no, true or false, these Boolean values that we can talk about in computer programming. Now that's OK, right? But what if we have something that we want to do that's not just a single operation? So if we go back to our operators here, we've added 1, but since time keeps moving, time keeps going on, we actually want to be able to keep updating this time. That's where this idea of loops comes in handy. So here we have this sort of English sentence saying, I want to do something until the date is, let's say, no longer less than 2 trillion here. And again, we're counting seconds from January 1, 1970 in this case. So this combination of conditions allows us to define this loop that can help us do something until this condition stops being true. So here is basically what we're trying to implement in terms of a while loop, right? 
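A small Python sketch of those pieces-- assignment, an operator, a conditional, and a while loop-- scaled way down from 2 trillion so it finishes quickly:

```python
date = 0                 # assignment: store a value in a named location
date = date + 1          # operator: read date, add 1, store it back

print(date > 0)          # conditional: True
print(date > 2_000_000)  # conditional: False

while date < 5:          # loop: repeat until the condition stops being true
    date = date + 1
print(date)              # 5
```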
We saw that this while loop can help us do something until a certain condition is no longer true, right? While something is true, we'll do that. But when it's no longer true, let's go ahead and break out of this loop and not do it anymore. Now, of course, there's more than a while loop. But most loops actually come from this very simple collection of conditions and these loops here. So if I go here to this next kind of loop, here I'm doing a few different things at once, right? I'm defining a certain value for date. I'm asking, is date less than, let's see, 2 trillion here? And if it is, I want to do something, and at the very end I want to add 1 to our date. So here we're combining assignment in that first line there. We're adding conditionals in that second line, and on that fourth line, we're working with operators to add 1 to our date, right? But this is pretty much equivalent to a for loop, right? We could sort of combine all this into one place, which is what Python and C allow us to do when we have these different for loops. So again, thinking about programming languages here, the important thing is not so much the individual syntax of Python, or C, or JavaScript, and so on, but more this idea of loops and building from those loops, all the kinds of functions that programming languages can have. So thinking about this and using these as building blocks, a question for us in the chat is, you know, Phyllis and I have developed this programming language that we've sort of shown so far, but we've only added this addition operator. I didn't show you subtraction, I didn't show you division, or multiplication, but we actually want to be able to multiply things. Right? We don't actually want to add them, we want to multiply them. And so if we can have variables, if we can have conditions, if we can have loops, well, what constructs could we use to actually make multiplication here? 
And I'll give you all about two minutes to think in the chat about what kinds of constructs from the past we would be able to use here to build multiplication into our programming language, based on only the building blocks we've had so far. And I'll also look for questions as well if you would like to ask those too. All right, I'm seeing some answers come in. I'll give folks about a minute more. All right, I'm seeing some more answers. And a lot of these are talking about addition, right-- we know we can do addition. And they're also talking about loops, right? If we sort of think about it, multiplication is really just addition, but looped, right? If we want to do 5 times 5, we would add 5 together 5 times: 5 plus 5 plus 5 plus 5 plus 5. And so in this case, we could actually have basically a few more variables here that could help us define, are we all the way through our number or not? But basically, this is the idea of using loops to sort of add more to this sum that we're making for multiplication. All right. And again, any other questions here before we move on to functions and talking about how functions and programs might work? Let's see. All right. I'm seeing a few, and some we'll come back to even later. All right. So let's go ahead and move on to functions. Now, there's this idea that we saw at the very beginning of the course, of functions as these black boxes that take inputs and then give us outputs, right? But as I hope we've seen, these functions actually have pretty predictable elements. We can actually use these building blocks to make functions, including variables, assignments, operators, conditionals, and loops. These are all parts of functions and building blocks we can use to make our own functions. Now, when you're writing your functions in Python, for example, you use syntax like this where you say, define a function called add_time that takes two inputs, time and addition, and then gives us back, or outputs, time plus addition. 
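Going back to the multiplication exercise from the chat for a moment, one way to sketch it in Python, using only the building blocks we've had so far-- variables, addition, a condition, and a loop:

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition and a loop."""
    product = 0
    count = 0          # an extra variable tracking whether we're done yet
    while count < b:   # loop b times...
        product = product + a  # ...adding a to the running sum each time
        count = count + 1
    return product

print(multiply(5, 5))  # 25, i.e., 5 + 5 + 5 + 5 + 5
```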
So here I could give a certain time, like the start of the seminar, add maybe 30 minutes to it, and we'd arrive at maybe the time that we're at right about now, right? But the key thing here I want to highlight for you all is not necessarily the Python syntax, but this idea of these inputs and these return values. So here we have, again, our inputs of time and addition, and this return statement giving us back a value we can use later on in our program, right? So instead of inputs and outputs, maybe more specifically, we could say functions take arguments, again, that we've defined here, like time and addition. And they give back return values that we can name with these return statements that occur in most any programming language-- in C, in Python, and so on. And these return values are different from side effects, for example, which are things that happen as the function runs. These return values are actually data we can use and, later on, incorporate into our program-- data that these functions give back to us over time. Now, programs themselves can be seen in a very similar way when we're writing programs, especially later on. In weeks 2 and 3, we were talking about giving command line arguments to our programs. Well, in Python, defining the main part of our program is very similar to defining a function. We have this main function that is our program. And in this case, you'll see where we could use sys.argv and so on to get the command line arguments. But we have this very similar value of returning something-- in this case, a status code to say, hey, we were successful, returning 0, or returning 1 in other cases. So programs can be seen as a more general case of functions. They are functions themselves. They take command line arguments, which are their inputs. And they give us status codes as their outputs. And so what questions are there on functions so far-- before we move on to algorithms, that is? So good question: what's a status code? 
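Putting the function and program ideas together, a hypothetical Python sketch-- the add_time program here is an assumed example for illustration, not actual course code:

```python
import sys

def add_time(time, addition):
    # The return value is data the caller can use later on.
    return time + addition

def main(argv):
    # A program is a function too: command line arguments in, status code out.
    if len(argv) != 3:
        print("Usage: add_time.py TIME ADDITION")
        return 1  # nonzero status: something went wrong
    print(add_time(int(argv[1]), int(argv[2])))
    return 0      # zero status: success

# In a real script, we would wire this up with: sys.exit(main(sys.argv))
```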
So earlier on in our programs, we'd talk about giving a program a certain number of inputs. I believe for Caesar, for example, we were trying to give it maybe one command line argument: ./caesar and then the key, the number of steps we wanted to rotate by. So a status code can tell us, did our program function as we intended, or did the user use our program as intended? If we didn't give Caesar the right number of command line arguments, we might return 2 or we might return 3 to signal-- hey, this is actually not how you're supposed to use this program. And later on, thinking about web programming and talking about HTTP 404 errors-- these are basically numbers that give us a clue as to what happened when our program ran. And so these are the outputs of our program that help us interpret what happened as it finished running. And I'll also pause here for a few more questions before moving on. So programs themselves can implement algorithms. And that was the topic for week 3. Phyllis, here, is our expert on algorithms. So I'll turn it over to Phyllis to talk to us about algorithms. PHYLLIS ZHANG: Cool. We'll talk about some algorithms and have some fun thinking about how much time they'll take and how much space they'll take. It'll be quite fun. So let's do a quick review of some of the sorting algorithms you might remember from week 3. The three that we'll be talking about today are bubble sort, selection sort, and merge sort. We'll talk about their best case, as well as their worst case, as well as how much space they might take. So just as a quick thing for bubble sort, let me come up with an unsorted list. So I will do 5, 0, 1, 2, and 1. So what bubble sort will do-- you might remember we take a look at the first two elements. And we notice that they are not in the ascending order we would like them to be in, so we're going to make a switch. And so we're going to switch them to be 0, 5, 1, 2, and 1. 
And then, we look at the two elements including this one that we just switched. And we notice they're not in the right place either. So we're also going to switch those, over to 0, 1, 5, 2, 1. Similarly, we're going to continue going down the line. So we switch those, the 5 and the 2. And then, finally, we're going to switch that 1 and the 5. So this was the very first iteration. And what we did here is, we went through all n of these elements and compared each to another, performing swaps as necessary. So for the first iteration, we took n total steps. And notice that this is not anywhere near sorted. This is actually not in ascending order-- surprising. So this will continue until this is in sorted order and we stop making passes. And so can someone in the chat tell me what the worst case time complexity for bubble sort might be? We'll go with n squared-- that is correct. So we'll go with n squared. And this is because we may have to make on the order of n total passes-- n total iterations-- to make sure that this list is sorted, with n steps in each pass. And then, if we consider the case where we have 1, 2, 3, 4, 5, where it's already sorted-- then, bubble sort will only take one pass, because it's going to go through the entire thing and look-- hey, these two are already sorted. We're good. And then, it's going to look at these two. It's going to be like, oh, we're already sorted. We're good too. I'm going to look at these two. They're sorted. I'm going to look at these two. They're sorted. We did not make any changes in this pass. We are good. And so bubble sort is going to be like, all right, I'm done. And we only made n total checks. And so this is only going to take linear time in the best case. So for the best case, we use this big omega here. And it takes linear time. And then, how much space does it take? Think about it. So in terms of space, if we are just making swaps, we don't take up any extra space. 
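The whole bubble sort walkthrough, early exit included, can be sketched in Python like this:

```python
def bubble_sort(numbers):
    """Swap adjacent out-of-order pairs; stop early after a pass with no
    swaps, which is what gives the Omega(n) best case on a sorted list."""
    n = len(numbers)
    for _ in range(n):
        swapped = False
        for i in range(n - 1):
            if numbers[i] > numbers[i + 1]:
                # Swapping in place is why the extra space stays constant.
                numbers[i], numbers[i + 1] = numbers[i + 1], numbers[i]
                swapped = True
        if not swapped:  # a full pass with no swaps: already sorted
            break
    return numbers

print(bubble_sort([5, 0, 1, 2, 1]))  # [0, 1, 1, 2, 5]
```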
We're saying, on the very first step here, for example, we're switching the 5 and the 0. If you remember, that's three lines. We maybe have a temporary integer. And that's all we need in order to make these swaps. And since bubble sort is just a bunch of these swaps, the space complexity is just going to be a bunch of constant things. And so the space complexity, overall, is going to be constant. So we'll say the worst case is constant. So for selection sort, we'll use the exact same example. And we'll also walk through the first iteration. We'll have 5, 0, 1, 2, and 1. And so if you remember, what selection sort does is, at every single point, it's going to look at the unsorted part of the list and find the minimum. So I'm going to, in my first iteration, go through the list and find the minimum. So I'm currently at 5. That's the smallest I have. I'm like, OK, the minimum is 5. And then, we're going to go look at 0. And 0 is smaller than 5. So I'm like, OK, 0 is now the smallest. And then, we're going to go look at 1. It's not smaller than 0. Look at 2-- it's also not smaller than 0. We'll look at 1. It's also not smaller than 0. And so then, selection sort is going to note that this is the current smallest. And I'm going to swap it with the beginning one. And so after the first iteration, I'm going to get 0, 5, 1, 2, and 1. And so notice that we have to look through the entire thing to figure out what the minimum was. And so in that case, since we had to look through the entire thing to figure out the minimum, this took n total comparisons. And we'll have to do this every single time, for n total iterations, because every single time-- now that we're on a second iteration, we have to find the new smallest and move it here, until we get to the very end, the n-th one, which we'll know is the largest one. So the worst case for selection sort, you might remember, is O of n squared. 
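A Python sketch of the selection sort just described:

```python
def selection_sort(numbers):
    """Repeatedly find the minimum of the unsorted part and swap it forward."""
    n = len(numbers)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):  # scan the unsorted part for the minimum
            if numbers[j] < numbers[smallest]:
                smallest = j
        # Swap the minimum to the front of the unsorted part, in place.
        numbers[i], numbers[smallest] = numbers[smallest], numbers[i]
    return numbers

print(selection_sort([5, 0, 1, 2, 1]))  # [0, 1, 1, 2, 5]
```

The inner scan always runs over the whole unsorted part, which is why the best case is the same as the worst case here.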
And does someone want to send me, in the chat, what the best case for selection sort is? Awesome, yes. It is n squared. And it's n squared because, even though it's in sorted order, every single time, we still have to go look for the smallest. So we still have to go through the whole thing to make sure that value is the smallest. To solidify things-- if we have 1, 2, 3, 4, 5, my algorithm is going to go look for the smallest thing. So it's going to start with 1 and go, OK, 1 is small. I'm going to compare it to 2, and then compare it to 3, and then compare it to 4, then compare it to 5. And then, 1 is the smallest, so 1 is going to stay here. But I had to do the exact same thing that I did before, because I still have to check if that is indeed the smallest. So the best-case time complexity is going to be n squared. The space complexity is actually the same as bubble sort. Notice that all we're doing is a bunch of swaps. We don't actually need to store anything. So this is also going to be constant. And then, in terms of merge sort, we get to exploit a very nice property of dividing our array into two parts and [INAUDIBLE]. And so if we were to have this example from earlier-- 5, 0, 1, 2, 1-- I would split it into two parts. So let's say I split it down here. And then, now, I sort the two arrays-- 5, 0, 1 and then 2, 1. 5, 0, 1 can be split into two arrays again. And so let's say I take the 5, 0. And then, we'll put the 1 here. And then, I can split this again. But really, when I split this again, it's going to be a 5 and a 0. And then, I'm going to merge them together. The array with the 0 is going to be smaller than that 5. And so we can put 0 and 5. And then, when I merge this with the 1 here, I'm going to-- I'm running out of space. When I merge this with the 1 here, I'm going to realize-- so 0 is the smallest. And between 1 and 5, 1 is the smallest. And then I'm going to have the 5. 
And then, the same thing is going to happen on the right-hand side, where I'm going to have sorted the 1 and 2. And then, I'm going to merge these together, taking the smallest out of these two arrays at the leftmost pointer for each: the smallest is 0. So I'm going to take out the 0. And then, the smallest of the leftmosts is 1. So I can mark out either 1. And then, the smallest of this 5 and this 1 is also 1. So I can mark out this one. And then, the smallest between the 2 and the 5 is going to be 2, so I write down 2 and mark this out. And then, the only thing left is the 5. So that's how merge sort works. And the nice thing about merge sort is that every single time, we keep dividing the array in two. And so when we divide the array into two, consecutively, this is going to take log n total of these divisions. So part of its worst case is going to be log n. But in addition to dividing it, we also have to merge it together. And merging them together ultimately does take n total steps, because we have to make all of these comparisons. If you remember, at the very end, we made all of these sorts of comparisons in order to merge them together. So it's going to be n log n. And just like selection sort, it doesn't matter what order it's already in. Merge sort will take these and still merge them together, still cut them all down into tiny pieces and piece them back together. So the best case is also going to be n log n. And so finally, the space is going to be linear, because we have to copy this array. We can't just chop the array up in the space we're already given and make swaps. We have to store it somewhere else before being able to move things around. So it's just going to take linear space. So now we're going to take a look at some examples of space and time complexity. I have an example here. It's going to be called the mega-lazy sort. So in this sorting mechanism, I'm going to go through a list number by number. 
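Before going further, the merge sort procedure above can be sketched in Python; the copies made in the merge step are where the linear space comes from:

```python
def merge_sort(numbers):
    """Split in half (log n levels of division), sort each half,
    then merge (n total work per level), for n log n overall."""
    if len(numbers) <= 1:
        return numbers
    mid = len(numbers) // 2
    left = merge_sort(numbers[:mid])
    right = merge_sort(numbers[mid:])

    # Merge: repeatedly take the smaller of the two leftmost elements.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # copying into a new list is why
    merged.extend(right[j:])  # the space complexity is linear
    return merged

print(merge_sort([5, 0, 1, 2, 1]))  # [0, 1, 1, 2, 5]
```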
And when I see a number that is not in the right order, I'm going to throw it away, because I'm lazy, and it can't be a problem if it's not there. So can someone tell me, via chat, what the worst case time complexity is in this particular example? Perfect. We'll go ahead and do something like the example that we did. But you're right. This is linear. So if we did our 5, 0, 1, 2, and 1. I'm lazy. I look at the 5. I'm like, OK, cool. Now, looking at the 0, I'm like, 0 is not bigger than 5. I'm going to throw it away. And then, I'm going to check the 1. 1 is not bigger than 5. I'm also going to throw it away. 2 is also not greater than 5. We can throw it away. The 1 is also not greater than 5. I'm also going to throw it away. So basically, I have a sorted array. But really, it's not the previous array sorted. But this is indeed sorted. And I had to look through every single one of the elements inside this array in order to make this new 5 here. So for those of you that said linear, you're right. This worst case time complexity is O of n if there are n elements in this list. And so now, we're going to do mega-lazy sort, but with friends. So Carter and I are going to divide my list into two parts. And Carter and I are both mega lazy. So we're going to use that to sort our respective parts. And then, I'm going to take the time to merge the two lists that Carter and I get. So does someone want to tell me what the time complexity for this is? We'll go ahead and get started. And you can still message me. So let's do 5, 0, 1, 2, and 1. And we're going to divide it like this. But because I'm lazier than Carter, I'm going to take the first half-- me. So whenever I sort this, I'm going to write down a 5. And I'm going to say 0 is not bigger than 5, so I'm going to throw it away. That's now my list, my new list, "my sorted list." And then, whenever Carter sorts his list, he's going to notice that there is a 1. And then, there's also a 2. That's in the right order. 
But the 1 after is not in the right order. So he's going to throw away that one. And so this is now Carter's sorted list. Now, I'm going to take the effort to merge it. And I'm going to notice that, between the left-most on mine and the left-most on his, the 1 is the smallest. So my final answer is going to have a 1. And then, now that we've written the 1, we can take a look at the 5 and the 2. And then, Carter's next smallest is the 2. And so we're going to take that 2. And the last number left is 5. So now, I have a 1, a 2, and a 5. And that's my final sorted array. And we'll take a look. And this is actually linear again, because whenever I did this sort, I had about half of the elements. And so I had to make a pass that took about n over 2 total comparisons. And then, Carter also did about n over 2 comparisons. So if you combine the number of comparisons that Carter and I did in the beginning, that's about n total comparisons. And then, whenever I put our lists together, whenever I merged them, there is a maximum of n total comparisons I have to make. That is assuming that I don't delete anything and Carter doesn't delete anything-- then I would have to make a total of n comparisons. And so the number of comparisons I have to make while merging is upper bounded by n. And so we have n for the original sorting, and then n for the merging. And that's going to be 2n. And remember, we can drop multiplicative constants for our time complexity. So this is linear, or O of n. We will go to our next example. So next, I'm going to use internet sort. I'm going to go through a list of a, b, c, d, and e. And first, I'm going to Google, "which is bigger, a or b?" And then, if Google says a, then I swap a and b. And then, I ask Google about the next two elements, c and whichever was larger between a and b. And then, I make as many passes through the list as necessary to sort the list. So if you can send me in chat what sort of sort this reminds you of, that would be great. 
I'll give you a second to consider what sort this is like. Hint: it's one of those three sorts that we just talked about. So most of you, yes-- it is bubble sort, in fact. Let's run a quick simulation with the numbers that we had earlier. Let's have a be 5, b be 0, c be 1, d be 2, and e be 1. And then, I'm asking Google, "which one's bigger, a or b?" 5 or 0. Google's probably going to tell me that 5 is bigger. And so then, I'm going to swap them to get this. And then, I ask Google which one's bigger between c-- which is this one-- and whichever was larger between a and b, so this one. I'm going to ask Google, is 5 bigger or is 1 bigger? And Google's probably also going to tell me that 5 is bigger. So then, I'm going to make this switch. And then, I just keep going over and over until the end of this iteration. And I make as many passes through the list as necessary to sort the list. So I have, here, this first pass. And you might notice that this is, in fact, not sorted. And so I would have to make more and more passes until this is done sorting. And so this is the exact same as bubble sort. And the worst case time complexity would be just like bubble sort: O of n squared. And the space complexity is going to be constant, because all I'm doing is making a bunch of swaps. Cool. Next up is bogosort-- we're going to do the very, very fun and not at all tedious modified bogosort. I am brilliant at sorting. And I'm going to generate all possible permutations of a list and then store them in a list of lists. And then, I'm going to check every single one of these permutations to see if it's ordered correctly. So if you could type in chat what you think the time and space complexity is, that would be great. And we're going to go for a slightly smaller example this time, so I don't have to sit here for the next hour writing out permutations. So let's say I have 2, 5, and 1. So n equals 3 this time. And I would have to write out every single permutation. 
So I have 2, 5, 1; 2, 1, 5; 5, 1, 2; 5, 2, 1; 1, 2, 5; 1, 5, 2. And then we're going to go through every single one of them to see which of them is sorted. So 2, 5, 1: 2 is less than 5, but then 1 is less than 5, so that's not it. Then 2, 1, 5: 1 is less than 2 but comes after it, so that's not it either. 5, 1, 2: 1 is less than 5, that's not it. 5, 2, 1: 2 is less than 5, that's not it. And then 1, 2, 5: 1 is less than 2, that's good; 2 is less than 5, that seems good. Cool, we found it. That did not take forever at all. So let's take a look. Each time we check one of these to see if it is actually an ascending ordering, we have to look at every single number. So when we checked this one, we asked: is 1 less than 2? Yeah. Is 2 less than 5? Yeah. So each time we check a list, it takes n comparisons. And whenever we make all these permutations, you might know the number of permutations is n factorial. So the total amount of time this takes is going to be n times n factorial, because for every single permutation, we have to compare its elements. That's the worst case time complexity. And then, because I store it all in a list of lists, I have to store all of these. Since there are n factorial permutations and each one takes n space, I also have a worst case space complexity, and in fact an actual space complexity, of n times n factorial. So in this case, both of these are O of n times n factorial. Finally, we're going to do a zen sort. I'm achieving enlightenment now after writing out factorial permutations of numbers. I now have a list, and I realize that the list, like all things, is ephemeral and its order is insignificant. So I just leave it like that and pursue enlightenment instead. If you want to send me how much time and how much space this might take, do let me know in the chat. Yeah, this takes constant time. In fact, it takes one operation.
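The modified bogosort described above could be sketched like this in Python; the name `modified_bogosort` is just for illustration, and `itertools.permutations` does the n! enumeration that was written out by hand:

```python
from itertools import permutations

def modified_bogosort(items):
    # Generate all n! permutations up front and store them in a list of
    # lists: n! permutations of n elements each, so O(n * n!) space.
    all_orders = [list(p) for p in permutations(items)]
    # Check each permutation for ascending order: n - 1 comparisons per
    # check, so O(n * n!) time in the worst case.
    for order in all_orders:
        if all(order[i] <= order[i + 1] for i in range(len(order) - 1)):
            return order

print(modified_bogosort([2, 5, 1]))  # → [1, 2, 5]
```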
It takes me sitting here and quitting, because I'm achieving enlightenment instead. So no matter how long the list is, I am going to realize that the list, like all things, is ephemeral, and I'm just going to quit. So it doesn't really depend on n; it's all constant. I sit there and I have to make that realization myself. And so the worst case time and space are both going to be constant. Now, we're going to talk a little bit about searches. So let's say I have an unordered list and I'm searching for a 5, and I want to figure out the fastest way to solve this problem. Can y'all send me in the chat what the fastest way to solve this problem is if I have an unordered list? So let's consider linear search. Linear search is going to require that I look through the entire list, and this has a worst case of linear time, because I might have to go through every single element to find the 5. And in this case, we actually cannot use binary search, because the list is unordered. For example, say my list is 5, 0, 1, 2, and, let's see, let's put a negative 1 here, and say I'm looking for that negative 1. I start with binary search. I go to the middle number, which is 1. Negative 1 is less than 1, so binary search tells me I should go to the left. And as you can tell, if I go to the left, I'll never find the negative 1, even though it exists right here on the right. So for an unordered list, you cannot use binary search. But for an ordered list, you can use binary search, and then the fastest way to solve this problem is, indeed, binary search. Any questions before we go on to recursion? So whenever we talk about recursion, which is a technique that allows us to continuously call the same function, to call our own function, we're going to consider a few different things. If you just sent me a question, I will answer it later. We're going to consider the base case, which is: when does the recursion stop?
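The failure just described can be demonstrated concretely. Below is a standard binary search sketch (not code from the course) run against the unordered list from the example: it lands on the middle element 1, goes left, and never sees the negative 1 sitting on the right:

```python
def binary_search(items, target):
    """Standard binary search. Only correct when items is sorted."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return True
        elif items[mid] < target:
            lo = mid + 1   # look right
        else:
            hi = mid - 1   # look left
    return False

unordered = [5, 0, 1, 2, -1]
print(binary_search(unordered, -1))          # → False: went left, missed the -1
print(binary_search(sorted(unordered), -1))  # → True: works once the list is sorted
print(-1 in unordered)                       # → True: linear search always finds it
```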
We have to define when the recursion stops, or else we'll get a stack overflow, because there will be too many functions being called and the stack won't be able to handle it. So the base case is "when we stop the recursion." Then the recursive step is: how do we call our own function? "How to call own function." And then there are the impacts on space complexity: like I said, every single time we call a function, a frame goes onto the stack. So if we don't have a base case, we call the function infinitely many times, we add to the stack infinitely many times, and then we get a stack overflow. So just as a quick example, we're going to take a look at this function. I've called it nom. Whenever x is less than or equal to 0, I'm going to return. Otherwise, I'm going to print("nom") and then return nom(x - 1). So if you could, in the chat, tell me what this prints and what its time and space complexity is, that would be great. For example, if we had nom(3), this would print "nom nom nom." nom(3) checks x equals 3, which is not less than or equal to 0, so it prints("nom") and then calls nom(2). 2 is not less than or equal to 0, so it prints("nom") and then calls nom(1). 1 is not less than or equal to 0, so it prints("nom") again and then calls nom(0). And 0 is less than or equal to 0, so it returns, and it stops there. And in terms of space and time, that's right: this takes linear time and linear space, linear for both. For time, every call does a constant amount of work and then recurses on the number minus 1, so I make n total of those constant time operations. And in terms of space, I make n total calls to the nom function, and all n of those frames are on the stack at once before the base case returns, so it's going to be linear. And so we'll do a quick problem. I hate climbing stairs.
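Here is the nom function exactly as walked through above, written out as runnable Python:

```python
def nom(x):
    # Base case: stop recursing once x reaches 0 (or below).
    if x <= 0:
        return
    print("nom")
    # Recursive step: call ourselves with x - 1.
    # n calls total → O(n) time; n stack frames at once → O(n) space.
    return nom(x - 1)

nom(3)  # prints "nom" three times
```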
This is a true statement. And so I always have to make snarky comments when my friends flex their cool legs and climb up the stairs one or two steps at a time. So let's suppose it takes n steps to reach the top of the staircase, and each time, my cool friends can climb either one or two steps. And instead of climbing stairs, I like to think about how many distinct ways my friends can climb to the top. There are multiple ways to solve this, and we'll talk about both. The first way is to use recursion, and the second is more of an iterative solution, where we build up to an answer. For recursion, we will write a function in Python, and we'll call it climb_stairs. It's going to take a number, n, the number of steps. So let's do the first component of recursion we talked about: the base case. We want to check if n equals 1. If n equals 1, that means there's one step, and there's only one way to go about it: no matter how cool my friends are, they can only take one step. So then we're going to return 1. If there are two steps, my friends, being cool, could go up the two steps in one go or go up the two steps one at a time. There are two ways to do it, so we're going to return 2. Otherwise, we can think about it like this: if there are three steps, then they can either be on the first step and make the two-step hop, or be on the second step and make the one-step hop. So otherwise, we return climb_stairs(n-1) plus climb_stairs(n-2). Think about it: to reach the n-th step, they either take one step from the n minus 1 step or take that two-step hop from the n minus 2 step. And we will think about the iterative solution really quickly, and then if you have questions, I can answer them afterwards. So the iterative solution is building up towards that answer. Earlier, we said if there are n steps, I want to think about the n minus 1 step and the n minus 2 step.
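The recursive solution just described, written out in Python:

```python
def climb_stairs(n):
    # Base cases: one way to climb 1 step, two ways to climb 2 steps.
    if n == 1:
        return 1
    if n == 2:
        return 2
    # Recursive step: the last move was either a one-step hop from
    # step n - 1 or a two-step hop from step n - 2.
    return climb_stairs(n - 1) + climb_stairs(n - 2)

print(climb_stairs(3))  # → 3
print(climb_stairs(5))  # → 8
```

Note that this version recomputes the same subproblems many times (climb_stairs(n - 2) is computed inside both branches), which is part of the motivation for the iterative, build-up-to-the-answer approach that follows.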
The iterative solution lets us think about: how do I get to one step, how do I get to two steps, to three steps, to four steps, all the way up to how do I get to n steps. So instead of going top-down, we're going to go from 1 up to n. That's part of another solution where we have another climb_stairs function, just like earlier. And we're also going to say: if n equals 1, same thing, we go ahead and return 1, and if n equals 2, we go ahead and return 2. Then, in order to store all of this information, from 1 all the way to n, we're going to have an array. I'm going to instantiate this array with some zeros. This is some syntax that you might not be familiar with, and we can talk about it later, but essentially, this gives me n 0's. And then we're going to say, at the 0 index, which represents the first step, there's only one possible way to do it. And the second step has two possible ways to do it. Then we can do a for loop: for each of the rest, what I can do is look at the two entries before it. So just like I did earlier, where I did n minus 1 plus n minus 2, I'm going to populate this array in the same way. answer[i] is going to be answer[i - 1] plus answer[i - 2]. And at the very end, since I computed it from 1 up to n, all I have to do is return that last number, which is at the minus 1 index. Any questions on that, we will answer shortly. But I'll hand it back to Carter. CARTER ZENKE: So for our last 10 minutes or so here, we thought we would do some practice with questions from past tests. So I'll go ahead and open two breakout rooms. One will be for practice with programming constructs, and there's a prior test question called Programming in B that might be useful there. And then for more practice with complexity, we'll have this past test question called Complexities of a Space; I think it might be useful there.
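The iterative version described above, including the array-of-zeros syntax mentioned in passing, looks like this in Python:

```python
def climb_stairs(n):
    if n == 1:
        return 1
    if n == 2:
        return 2
    # [0] * n is the syntax mentioned above: a list of n zeros.
    # answer[i] will hold the number of ways to reach step i + 1.
    answer = [0] * n
    answer[0] = 1  # one way to reach the first step
    answer[1] = 2  # two ways to reach the second step
    # Build up from the bottom: each step is reachable from the
    # previous step (one-step hop) or the one before that (two-step hop).
    for i in range(2, n):
        answer[i] = answer[i - 1] + answer[i - 2]
    # The "minus 1 index" is Python's shorthand for the last element.
    return answer[-1]

print(climb_stairs(5))  # → 8
```

Unlike the recursive version, this computes each subproblem exactly once, so it runs in linear time (at the cost of linear space for the array).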
I think Phyllis will be able to help in the complexity breakout room, and I'll be able to help in the programming constructs breakout room. We'll certainly be here until about 8 o'clock or so. The goal here is to give you practice with some of these past test questions, and we're certainly happy to answer any questions that you all have in the moment about the earlier algorithms work, as well. I think that concludes the presentation part of our review session, so we'll go ahead and do some practice with past test questions.