### Announcements and Demos (0:00-1:00)

+ This is CS50.
+ Last year, Harvard launched the [innovation lab](http://ilab.harvard.edu/) (i-lab), a space on campus for students to get together and collaborate on innovative and entrepreneurial ideas. During J-Term, the i-lab also offers a number of immersion trips, including one to New York and one to Silicon Valley. Check out [this link](http://i-lab.harvard.edu/winter-break-trips) for more details.
+ Another edition of [CS50 Lunch](http://cs50.net/lunch) is this Friday, 1:15 p.m. at Fire & Ice!

### From Last Time (1:00-4:00)

+ Think back to Week 0, when we tore a phone book in half. The way we searched for the name Smith might not have been the most obvious, but it was fast and effective. The same kinds of ideas apply to searching billions of web pages, as Google and Bing do in a fraction of a second.
+ Consider again the algorithm we executed in order to count the number of students in the lecture hall:

    1. stand up and think of the number 1
    2. pair off with someone standing, add your numbers together, and adopt the sum as your new number
    3. one of you should sit down; the other should go back to step 2

+ The powerful idea here is that if we double the number of students in the room, we only need one additional step to count them all.

### Search, Sort, and Runtime (4:00-65:00)

+ Let's try another algorithm, this time to find the tallest person in the orchestra section:

    1. stand up if in the orchestra section
    2. pair off with someone standing; stay standing if you're taller, sit down if you're shorter; break ties randomly
    3. if still standing, go to step 2

+ Alternatively, we could've compared everyone two at a time, telling the shorter person to sit down. But in the worst case (i.e., the tallest student is the last one we encounter), if there are *n* students in the orchestra section, this algorithm will take *n* steps to run. In contrast, the algorithm above, in which we cut the problem in half over and over again, will take log *n* steps in the worst case.
+ To see how *n* compares to log *n*, take a look at the following graph:

    ![Runtimes graph](graph.png "Runtimes graph.")

+ Actually, *n* doesn't seem so bad on this graph. *n* log *n* and *n*² are both a lot worse. Worse still are *n*³ and 2ⁿ.
+ On Facebook, you might have *n* friends, each of whom has *n* friends of their own. To search this social graph, then, you might start encountering *n*² runtimes if you aren't careful.
+ Recall from last time the awkward footage of Sean searching for the number 7 behind 8 pieces of paper taped to a chalkboard. Although he took a while to do it, Sean did as well as any human could have: linear search. In this case, there isn't any better option than lifting up every piece of paper from left to right or from right to left.
+ Of course, if we removed the pieces of paper, it would be easy to scan all the numbers and find 7 instantly. A computer, however, can't do this: the best it can do is consider one number at a time. To simulate that limitation, we covered up the numbers.
+ Let's change the problem a little and bring a new volunteer onstage. Alex will be this year's Sean. Now we have a second row of numbers behind pieces of paper on the chalkboard, but this time the numbers are sorted. Like Sean, Alex moves left to right through the pieces of paper until she finds that the number 7 is behind the third piece of paper. Still, she hasn't improved on Sean's algorithm. How might we do better?
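+ For reference, the linear search Sean and Alex just performed might look something like the following in C. This is only a sketch: the array contents and the name `linear_search` are ours, not from lecture.

```c
// A sketch of linear search: check each element from left to right
// until we find the target or run out of elements.
#include <stdio.h>

// Returns the index of target in array, or -1 if it isn't there.
int linear_search(int array[], int size, int target)
{
    for (int i = 0; i < size; i++)
    {
        if (array[i] == target)
        {
            return i;
        }
    }
    return -1;
}

int main(void)
{
    // an unsorted row of numbers, made up for this sketch
    int numbers[] = {4, 2, 6, 8, 1, 3, 7, 5};
    printf("7 is at index %d\n", linear_search(numbers, 8, 7));
}
```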
+ Instead of starting at the leftmost number, this time Alex starts from the middle and finds the number 13. Knowing that the numbers are sorted, we can throw away all the numbers to the right of 13 (just as we ripped the phone book in half in the first lecture). Having thrown away the right half, we're left with 4 pieces of paper. Again, we choose the piece of paper in the middle and find the number 4. Since 4 is less than 7, we can throw away all the numbers to the left of 4. Left with only 2 pieces of paper, we check the middle once more and find the number 7.
+ Sean's algorithm takes *n* steps in the worst case. Alex's algorithm (starting in the middle and throwing half the numbers away) takes log *n* steps in the worst case. When there are only 8 pieces of paper to look behind, *n* might not seem all that much bigger than log *n*. But if we have 4 billion pieces of paper, the difference certainly matters: Sean's algorithm would take 4 billion steps, but Alex's would take only 32. Alex's divide-and-conquer approach is more properly known as *binary search*.
+ Alex had an advantage, though. Her numbers were sorted, and she didn't even have to sort them herself. How many steps does it take to sort the numbers?
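+ Before we take up that question, here's roughly what Alex's binary search might look like in C. This is a sketch under the assumption that the numbers live in a sorted array; the function and variable names are ours.

```c
// A sketch of binary search on a sorted array: repeatedly check the
// middle element and discard the half that can't contain the target.
#include <stdio.h>

// Returns the index of target in a sorted array, or -1 if it isn't there.
int binary_search(int array[], int size, int target)
{
    int low = 0;
    int high = size - 1;
    while (low <= high)
    {
        int middle = (low + high) / 2;
        if (array[middle] == target)
        {
            return middle;
        }
        else if (array[middle] < target)
        {
            // target, if present, must be in the right half
            low = middle + 1;
        }
        else
        {
            // target, if present, must be in the left half
            high = middle - 1;
        }
    }
    return -1;
}

int main(void)
{
    // a sorted row of numbers, made up for this sketch
    int numbers[] = {1, 2, 4, 7, 9, 13, 15, 16};
    printf("7 is at index %d\n", binary_search(numbers, 8, 7));
}
```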
#### Bubble Sort

+ For the purposes of this example, let's consider the numbers 1 through 8 in the following unsorted order:

        4 2 6 8 1 3 7 5

+ For our first algorithm, we'll work our way from left to right and compare two numbers at a time. If they're out of order, we'll swap them and move on. So on the first pass, we swap 4 and 2, then 8 and 1, then 8 and 3, then 8 and 7, then 8 and 5. This is what we're left with:

        2 4 6 1 3 7 5 8

+ Not quite fully sorted, but at least the number 8 bubbled up to the end where it belongs. Let's do it a few more times:

        2 4 1 3 6 5 7 8
        2 1 3 4 5 6 7 8
        1 2 3 4 5 6 7 8

+ If we're pretending to be a computer, we would actually need to walk through the numbers one more time to make sure they're sorted. Essentially, we'll walk through the numbers and keep track of the number of swaps we make on each iteration. As soon as we complete an iteration without making any swaps, we know the list is sorted.
+ How many steps does this algorithm take? It feels like *n*², because we take *n* steps on each of *n* iterations through the numbers. This covers the worst case, in which the numbers are in reverse order: the number 8 would bubble up to its correct place on the first iteration, the number 7 on the second iteration, and so on. Unfortunately, *n*² is slower than anything we've encountered so far. (A sketch of bubble sort in C appears after the Insertion Sort section below.) Let's try another approach.

#### Selection Sort

+ For our second algorithm, we'll again work our way from left to right, keeping track of the smallest number we encounter on each iteration through the numbers. On the first iteration of the loop, we find that the number 1 is the smallest, so we swap 1 with the number at the front of the list. Alternatively, we could have shifted all the numbers at the front of the list to the right, but that is far more time-consuming. Using the swap instead, our numbers now look like this:

        1 2 6 8 4 3 7 5

+ On the second iteration, we find that the smallest remaining number (2) is already in the correct place. Successive iterations transform the list as follows:

        1 2 3 8 4 6 7 5
        1 2 3 4 8 6 7 5
        1 2 3 4 5 6 7 8

+ What's the runtime of this algorithm in the worst case? It's *n*² again. On each iteration, we have to walk through all *n* numbers to make sure we've found the smallest, and we have to iterate *n* times so that we handle every number.
+ What about the best case? Unfortunately, it's still *n*². Even if the numbers are already sorted, we have to walk through the whole list on every iteration just to make sure each number is in its proper place. (Selection sort is also sketched in C below.)

#### Insertion Sort

+ Let's try one more sorting method. Instead of sorting the numbers in place, we'll create a new list and insert each number into it. If the numbers are already in sorted order and we apply this algorithm, we actually only have to iterate through them once: when we encounter 1, we insert it into the new list; when we encounter 2, we insert it just after 1; when we encounter 3, we insert it just after 2; and so on. Thus, in the best-case scenario, this algorithm takes *n* steps.
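+ For concreteness, here's one way bubble sort might be written in C, with the no-swaps optimization described above. This is a sketch; the function name and structure are ours.

```c
// A sketch of bubble sort: keep sweeping through the array, swapping
// adjacent out-of-order pairs, until a full pass makes no swaps.
#include <stdbool.h>
#include <stdio.h>

void bubble_sort(int array[], int size)
{
    bool swapped = true;
    while (swapped)
    {
        swapped = false;
        for (int i = 0; i < size - 1; i++)
        {
            if (array[i] > array[i + 1])
            {
                // swap the out-of-order pair
                int temp = array[i];
                array[i] = array[i + 1];
                array[i + 1] = temp;
                swapped = true;
            }
        }
    }
}

int main(void)
{
    int numbers[] = {4, 2, 6, 8, 1, 3, 7, 5};
    bubble_sort(numbers, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%d ", numbers[i]);
    }
    printf("\n");
}
```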
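+ Selection sort might look like this, again with names of our own choosing:

```c
// A sketch of selection sort: on each iteration, find the smallest
// remaining number and swap it into the next position from the left.
#include <stdio.h>

void selection_sort(int array[], int size)
{
    for (int i = 0; i < size - 1; i++)
    {
        // find the index of the smallest element in array[i..size-1]
        int min = i;
        for (int j = i + 1; j < size; j++)
        {
            if (array[j] < array[min])
            {
                min = j;
            }
        }

        // swap it to the front of the unsorted portion
        int temp = array[i];
        array[i] = array[min];
        array[min] = temp;
    }
}

int main(void)
{
    int numbers[] = {4, 2, 6, 8, 1, 3, 7, 5};
    selection_sort(numbers, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%d ", numbers[i]);
    }
    printf("\n");
}
```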
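+ Finally, insertion sort. The lecture describes inserting each number into a new list rather than sorting in place, so this sketch (with names of our own) builds up a second array. Notice that if the input is already sorted, the inner loop never has to shift anything, which is where the best case of *n* steps comes from.

```c
// A sketch of insertion sort as described above: build up a second,
// sorted list by inserting each number into its proper place.
#include <stdio.h>

void insertion_sort(int array[], int sorted[], int size)
{
    for (int i = 0; i < size; i++)
    {
        // walk backward through the sorted list, shifting larger
        // numbers right to make room for array[i]
        int j = i;
        while (j > 0 && sorted[j - 1] > array[i])
        {
            sorted[j] = sorted[j - 1];
            j--;
        }
        sorted[j] = array[i];
    }
}

int main(void)
{
    int numbers[] = {4, 2, 6, 8, 1, 3, 7, 5};
    int sorted[8];
    insertion_sort(numbers, sorted, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%d ", sorted[i]);
    }
    printf("\n");
}
```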
#### Visualization

+ Check out this [visualization](http://cs.smith.edu/~thiebaut/java/sort/demo.html) of the algorithms we just discussed. Notice how with bubble sort, the largest bars appear to bubble up to the end. Notice also that although all of these algorithms take *n*² steps in the worst-case scenario, insertion sort appears much faster in some cases. Practically speaking, although an algorithm might be slow in the worst-case scenario, it might perform well in many other scenarios.
+ Recall also that selection sort has an opportunity for optimization. On the first iteration, we put the smallest number in its proper place. On the second iteration, then, we don't need to consider this number, so we only take *n* - 1 steps. On the third iteration, we take *n* - 2 steps. So selection sort actually takes *n* + (*n* - 1) + (*n* - 2) + … + 1 steps. This series sums to *n*(*n* + 1) / 2, which equals (1/2)*n*² + (1/2)*n*. When we talk about worst-case runtime, we throw out the coefficient (1/2) and the lower-order term (1/2)*n*. As *n* gets very large, they have a negligible effect on the total number of steps. Still, practically speaking, they might have an effect on the runtime when *n* is not so large.

#### Notation

+ To make all of this easier to keep track of, let's introduce some notation for discussing the runtime of algorithms. As we've seen, the "worst-case" scenario for sorting is when everything is exactly reversed, but in other contexts it might be something different. When we discuss worst-case runtime, we use *O*. When we discuss best-case runtime, we use *Ω*. When we discuss average-case runtime, we use *Θ*. Let's fill in the following chart for the search and sort algorithms we've discussed so far:

|     | linear search | binary search | bubble sort | selection sort | insertion sort | merge sort |
|-----|---------------|---------------|-------------|----------------|----------------|------------|
| *O* | *n*           | log *n*       | *n*²        | *n*²           | *n*²           |            |
| *Ω* | 1             | 1             | *n*         | *n*²           | *n*            |            |
| *Θ* |               |               |             | *n*²           |                |            |
+ With search, the worst-case scenario is that the number we're looking for is at the very end of the list. Using linear search, we have to step through every element of the list to find it, hence *n* steps. Using binary search, we divide and conquer, so it takes only log *n* steps. The best-case scenario is that we find the number on our very first check: for linear search, that means it's the first element in the list; for binary search, it's the middle element. Either way, it takes just 1 step. We'll ignore the average-case scenario for now.
+ All of our sort algorithms (except merge sort) take *n*² steps in the worst case. Bubble sort, however, had an optimization in the best-case scenario whereby it exits early if there were no swaps on a given iteration. Thus, its *Ω* value is *n*, because we still have to walk through the list at least once. Selection sort has no such optimization, so it runs in *n*² in the best-case scenario as well as the worst-case scenario. Because the best-case and worst-case runtimes for selection sort are the same, its average-case runtime must also be the same. We saw that insertion sort has a best-case runtime of *n*, but it still has a worst-case runtime of *n*², because if the numbers in the list are in reverse order, each time we insert one into our new list, all of the previously inserted numbers have to move to make room for it.

#### Merge Sort

+ Merge sort's runtimes are left blank for right now because we haven't discussed it yet. But it turns out that merge sort is much faster than bubble sort, selection sort, and insertion sort. How do we implement merge sort?
+ In pseudocode, merge sort is fairly easy to express:

        On input of n elements:
            If n < 2
                Return.
            Else:
                Sort left half of elements.
                Sort right half of elements.
                Merge sorted halves.

+ When *n* is less than 2, there's nothing to do. A list of one element is already sorted. This we'll call the *base case*.
+ It seems like we're cheating a little bit here. We asked how to sort a list, and this algorithm tells us to sort the left half and sort the right half. Duh. However, there is some genius here. Using this algorithm, we're going to continue cutting the list in half until we're left with only one element. Our base case tells us that we have nothing to do when there's only one element to consider, so we move on. If nothing is really happening in the "Sort left half" and "Sort right half" steps, then the magic must be in the "Merge sorted halves" step.
+ What does merging really entail? Rob Bowden has [something to say](http://cs50.tv/2012/fall/shorts/merge_sort/merge_sort-720p.mp4) on this matter. One thing that Rob's method leverages that we haven't so far is additional space: his bi-level table seemed to suggest that he had enough space to hold the list twice in memory.
+ To summarize, merge sort breaks down a list of size *n* into *n* lists of size 1. Then it merges the lists pairwise to get *n* / 2 lists of size 2, then again to get *n* / 4 lists of size 4, and so on. This is the same divide-and-conquer approach we saw earlier with binary search.
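+ Here's how merge sort might look in C. This is a sketch that follows the pseudocode above and, like Rob's method, uses a second array as scratch space for the merge step; the names (`merge_sort`, `scratch`, and so on) are ours.

```c
// A sketch of merge sort: recursively sort each half, then merge the
// two sorted halves into a scratch array and copy the result back.
#include <stdio.h>

// Merges the sorted halves array[low..middle] and array[middle+1..high].
void merge(int array[], int scratch[], int low, int middle, int high)
{
    int i = low, j = middle + 1;
    for (int k = low; k <= high; k++)
    {
        // take the smaller front element, or whatever half remains
        if (i <= middle && (j > high || array[i] <= array[j]))
        {
            scratch[k] = array[i++];
        }
        else
        {
            scratch[k] = array[j++];
        }
    }
    for (int k = low; k <= high; k++)
    {
        array[k] = scratch[k];
    }
}

// Sorts array[low..high].
void merge_sort(int array[], int scratch[], int low, int high)
{
    // base case: fewer than 2 elements, nothing to do
    if (low >= high)
    {
        return;
    }
    int middle = (low + high) / 2;
    merge_sort(array, scratch, low, middle);       // sort left half
    merge_sort(array, scratch, middle + 1, high);  // sort right half
    merge(array, scratch, low, middle, high);      // merge sorted halves
}

int main(void)
{
    int numbers[] = {4, 2, 6, 8, 1, 3, 7, 5};
    int scratch[8];
    merge_sort(numbers, scratch, 0, 7);
    for (int i = 0; i < 8; i++)
    {
        printf("%d ", numbers[i]);
    }
    printf("\n");
}
```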
+ To calculate the runtime of merge sort, we can use the following notation:

        T(n) = 0,                    if n < 2
        T(n) = T(n/2) + T(n/2) + n,  if n > 1

+ So far, all this says is that it takes 0 steps to sort a list of 1 or 0 elements, and that sorting a list of *n* elements takes as many steps as sorting two lists of *n* / 2 elements, plus an extra *n* steps to do the merging.
+ Consider the case where *n* = 16:

        T(16) = 2 * T(8) + 16
        T(8)  = 2 * T(4) + 8
        T(4)  = 2 * T(2) + 4
        T(2)  = 2 * T(1) + 2
        T(1)  = 0

+ Since the base case takes 0 steps, we can now substitute 0 in for `T(1)` and calculate `T(2)`:

        T(2) = 2 * 0 + 2 = 2

+ Now we can substitute 2 in for `T(2)`, and so on, until we get:

        T(16) = 2 * 24 + 16 = 64
        T(8)  = 2 * 8 + 8   = 24
        T(4)  = 2 * 2 + 4   = 8
        T(2)  = 2 * 0 + 2   = 2
        T(1)  = 0

+ Thus, `T(16)` is 64, which is exactly *n* log *n* for *n* = 16 (that is, 16 × 4). Dividing the list in half over and over accounts for the log *n*, and the additional factor of *n* comes from the merge step performed at each level.
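+ If you want to sanity-check that arithmetic, a few lines of C can compute T(n) directly from the recurrence (the function name here is ours):

```c
// Computes T(n) = 2 * T(n/2) + n, with T(n) = 0 for n < 2,
// so we can check hand-worked values like T(16) = 64.
#include <stdio.h>

int T(int n)
{
    if (n < 2)
    {
        return 0;
    }
    return 2 * T(n / 2) + n;
}

int main(void)
{
    printf("T(16) = %d\n", T(16));  // prints T(16) = 64
}
```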
+ Our runtime chart now looks like this:

|     | linear search | binary search | bubble sort | selection sort | insertion sort | merge sort  |
|-----|---------------|---------------|-------------|----------------|----------------|-------------|
| *O* | *n*           | log *n*       | *n*²        | *n*²           | *n*²           | *n* log *n* |
| *Ω* | 1             | 1             | *n*         | *n*²           | *n*            |             |
| *Θ* |               |               |             | *n*²           |                |             |
+ If you don't believe that *n* log *n* is faster than *n*², check out [this animation](http://cg.scs.carleton.ca/~morin/misc/sortalg/) to compare them side by side. Or, if you prefer, you can [hear what these sorting algorithms sound like](http://www.youtube.com/watch?v=t8g-iYGHpEA).