BRIAN SCASSELLATI: Welcome to the CS50 AI video series. My name is Scaz. And today, we're going to be talking about self-driving cars.

Now, I'm a little bit embarrassed to admit that when I was a kid, a large part of my childhood revolved around David Hasselhoff. This was before he was involved in anything like America's Got Talent, before he was a lifeguard on Baywatch, even before his history as a pop star in Germany. In my childhood, David Hasselhoff was the supporting actor on a well-known television series called Knight Rider. And I say supporting actor because really, the star of the show was a car, a Trans Am named KITT, that could drive by itself. KITT was amazing. It could talk to you. It could solve problems. It could drive all over the place. It also had lasers and rockets. So it was a fantastic vehicle to start with. But this was the science fiction of the time: autonomous cars that could make decisions, that could drive along the road. And at some point in every episode, David Hasselhoff would get in trouble, and the car would start driving and go save him. That was our science fiction. In just the last 30 years, that science fiction has gone from the television screen into reality, into commercial products. Today, we're going to talk about how it is that autonomous vehicles are really able to drive and do absolutely amazing things.

But let's start with the history, because these self-driving cars didn't come out of nowhere. In fact, the very first self-driving cars, the first really influential research projects, came out of a project called Navlab. Navlab was a project that spanned almost two decades at Carnegie Mellon University. They built a variety of different vehicles that started out as small minivans, then Humvees, and eventually sedans, more minivans, even city buses. These vehicles had sensors and computational systems put into them so that they could steer, brake, and accelerate autonomously, all on their own. Now, these systems were very primitive at the start, and they relied upon very specific lane markings. The most impressive system they built was part of Navlab 5. In 1995, this vehicle, a minivan, drove from Pittsburgh to San Diego, almost 3,000 miles. And 98% of the time, the only thing controlling the steering was the computer. 98% of the time, the steering was completely autonomous as it drove almost from coast to coast.

Now, that's incredibly impressive. And when we think about it, there's a lot for us to consider. What kinds of sensors was it using? What kinds of decisions was it really making? What happened during that other 2% of the time? We're going to try to get to some of these issues today. And as we look at them, we're going to try to uncover the structure underneath and how the computation directly drives these applications.

Now, Navlab was a tremendous success, and it was the basis of all of our modern thinking about autonomous vehicles. But self-driving cars didn't really become well known and popular until 2005, when DARPA, as part of its Grand Challenge effort, put together a program to get an autonomous vehicle built. They made this a challenge: they put a $2 million prize out there with the idea that some really smart research team would come along and be able to claim it. Now, the prize wasn't going to be easy to get.
In order to get the prize, you would have to build a vehicle that, with no human intervention, was able to drive a roughly 150-mile course through the rough terrain of the desert. And that was a tall task. At the time, people thought this was really crazy. The first running of the Grand Challenge was in 2004. And out of that entire course, the best team went only about seven and a half miles before its system failed completely. Now, to give you an idea of how difficult the problem is, those seven and a half miles were seen as an absolutely phenomenal success.

But DARPA wanted more, and so they offered the prize again the following year. And just one year later, the technology had advanced to the point where not just one system was able to complete the course, but five different robot cars finished it. The fastest one finished the 132-mile course in under seven hours. That was a robot called Stanley. Stanley was built by the Stanford Racing Team. And as you can see at the top, it had a number of different sensors up on the hood, up on the top of the vehicle, and all throughout. Using a combination of cameras, both infrared and visible light, using radar and sonar systems on board, and using laser range finders to detect obstacles, this vehicle was able to navigate over very rough terrain, autonomously steering, autonomously braking, autonomously applying the gas. That was a real achievement.

Today, we see this happening on an even grander scale. Many of you have heard of the Google self-driving car. These vehicles have logged over 1.2 million miles in the last few years with no human intervention whatsoever. In fact, every time the Google car has been involved in any kind of accident, it's either been because it was parked or because some human was so interested in what it was doing that they ran into it.

So with all of these systems, we see this complexity emerging. And in this very short period of time, we've gone from the realm of science fiction to commercial reality. So let's start to take these systems apart. Let's try to understand how they work and what they're actually doing. To do that, we're going to use the same kinds of skills that we've talked about in class. Whenever you see a problem, try to decompose it. Start with the simplest form that you can, and then build outward from that simple form. So that leads us to the question: what is the simplest form of autonomous driving? At what point is a computer actually in control of my car?

Now, the answer may surprise you, because almost every vehicle sold today, in the US, or Europe, or anywhere, is already partially an autonomous vehicle. Take anti-lock brakes. These systems really are autonomous. That is, when I step on the brake, what I'm doing is asking the car, please brake now. I'm not directly stepping on something that applies the brake pad to the rotor. And the whole point of anti-lock brakes is that at some point along the way, I'll press down on the brake, but the car will recognize that a wheel is slipping, and it will throttle that brake signal so that the brake doesn't lock up. These anti-lock brake systems are, in a way, making decisions for you. Really, they're the ones in charge of the braking system. You're making a request, but you're not actually in control. So we can recognize this and break it down into its component parts. And we can think about it as a little bit of pseudocode. That is, while I'm stepping on the brake, while I'm applying pressure to the brake pedal, the anti-lock brake system is continuously checking whether each wheel is slipping. Using internal sensors within the car, it detects whether each wheel is actually stopping or whether it's sliding. If a wheel is sliding, the anti-lock brake system disengages that brake and lets the wheel go. And when the wheel stops sliding, it reapplies the brake.
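To make that loop concrete, here's a minimal sketch in C. To be clear, everything hardware-facing in it is a hypothetical stand-in: brake_pedal_pressed, wheel_is_slipping, release_brake, and apply_brake are invented names for the car's internal sensor and actuator interfaces, left here as bare declarations, not any real automotive API.

    #include <stdbool.h>

    #define NUM_WHEELS 4

    // Hypothetical interfaces to the car's sensors and actuators
    bool brake_pedal_pressed(void);     // is the driver requesting braking?
    bool wheel_is_slipping(int wheel);  // has this wheel locked up and started sliding?
    void release_brake(int wheel);      // disengage the brake on one wheel
    void apply_brake(int wheel);        // (re)apply the brake on one wheel

    void abs_control(void)
    {
        // While the driver is pressing the brake pedal...
        while (brake_pedal_pressed())
        {
            // ...continuously check each wheel.
            for (int wheel = 0; wheel < NUM_WHEELS; wheel++)
            {
                if (wheel_is_slipping(wheel))
                {
                    // The wheel is sliding, so let it go...
                    release_brake(wheel);
                }
                else
                {
                    // ...and once it's rolling again, reapply the brake.
                    apply_brake(wheel);
                }
            }
        }
    }

The details don't matter here. What matters is that your foot never touches the rotor; a loop like this one does the braking.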
That is, I'm making a request. I'm stepping on the brake. But the actual braking is being decided by this small piece of software. So really, all of our cars are already partially autonomous vehicles. Now, that's not what we picture when we think about autonomous vehicles. We think about cars where I can take my hands off the wheel and just let it go. That's not happening on a grand scale everywhere yet. But pieces of it are starting to come into the commercial sector.

Since 2003, Toyota, and following that many other manufacturers, everyone from Ford and Lincoln to Mercedes-Benz, has been offering some type of intelligent parking assist. That is, there are sensors in the car, typically ultrasonic sensors for short-range detection of obstacles, that are able to recognize where there are cars, people, any type of obstacle around the vehicle. You then press a button on the dash and ask the car, please park now. You issue a request. The autonomous system takes over and, using those sensors, guides the car into a particular parking position. In some of these models, there's a parallel-parking version and a backing-into-a-spot version. And each of these applications invokes a different piece of software. Now, that software isn't anything strange, or anything that you can't understand at this point. It's just following the sensor signals: if there's something too close on the left-hand side and I have space on the right, then I'll steer a little bit so that I can move over to the right. We'll sketch that rule in code in a moment. Many of the early parking systems would control the steering angle but required the user, the human driver, to actually step on the accelerator or the brake. More modern systems control that completely by themselves. So for example, in a Mercedes S-Class vehicle right now, you can pull up alongside where you'd like to park, press a button, and it will parallel park for you without your hands on the wheel or your feet on the pedals.
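Here's an equally minimal sketch of that steering rule, again in C and again with hypothetical names: left_clearance_cm and right_clearance_cm stand in for readings from those ultrasonic sensors, and steer stands in for an actuator that takes a small angle in degrees, positive meaning steer right.

    // Hypothetical interfaces to the ultrasonic sensors and the steering
    double left_clearance_cm(void);   // distance to the nearest obstacle on the left
    double right_clearance_cm(void);  // distance to the nearest obstacle on the right
    void steer(double degrees);       // positive steers right, negative steers left

    #define TOO_CLOSE_CM 30.0

    void nudge_away_from_obstacles(void)
    {
        double left = left_clearance_cm();
        double right = right_clearance_cm();

        if (left < TOO_CLOSE_CM && right > TOO_CLOSE_CM)
        {
            // Too close on the left, room on the right: steer right a little.
            steer(2.0);
        }
        else if (right < TOO_CLOSE_CM && left > TOO_CLOSE_CM)
        {
            // Too close on the right, room on the left: steer left a little.
            steer(-2.0);
        }
        else
        {
            // Otherwise, hold the current heading.
            steer(0.0);
        }
    }

A real parking controller plans a whole trajectory rather than reacting to one reading at a time, but the shape is the same: read the sensors, decide, actuate, repeat.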
Now, all of these systems rely upon the sensors being built into these vehicles today. And whether we use those sensors to detect potential obstacles and alert the driver, or to detect an obstacle and then automatically steer away, that's just a matter of software. In fact, just a few weeks ago, Tesla, who's been building fantastic vehicles with all of these sensors in them for years now, issued a software update. That software update allowed the vehicles, for the first time, to enter an automatic driving mode, an Autopilot, they called it. This Autopilot allowed the vehicle to detect collisions and automatically brake; to follow another vehicle in front of it, matching its speed; to stay within the lanes, using cameras, both infrared and visible light, to tell whether you're drifting out of your lane and to adjust the steering appropriately; and even to change lanes when the driver signals. All of these different features were just a matter of a software update. That is, all of these users woke up one morning to find this new software available in their vehicles, because the sensor systems were already there.

Now, in all of these cases, we're seeing these software-based systems become more and more prevalent. They're out there in commercial products already. And in the future, we're going to see more of them. In fact, just this year, Freightliner unveiled an autonomous truck, an autonomous tractor-trailer, that they're testing legally on the road in Nevada. These vehicles, again, follow a predetermined route. They stay within their lane. They accelerate and decelerate in response to obstacles or traffic conditions. And they even obey some of the other niceties of the road. All of these systems are becoming more and more complex. But they're still not quite autonomous. They're still not quite doing everything. That is, they still require a human driver to be present to make some high-level decisions.

And one of the things we're going to see in the next five years is a variety of legal and ethical questions that revolve around the software being built for these driverless cars. How should a driverless car respond if it's surrounded by a group of people? What happens if the driverless car is skidding on the road and it can steer toward a crowd of 10 people or a crowd of 7 people? What should the car do? In all of these cases, there's a rich variety of questions to be asked. And they're not just software questions, but legal questions, ethical questions, philosophical questions. And they're ones that we as a community will have to address.

So I'll leave you with one last thought, this one from Randall Munroe of xkcd, one of my favorite comics. It's not just that we're going to see these vehicles being built and designed with software. We're going to see people try to exploit them as well. What's it going to be like when someone can, over Wi-Fi, download a patch, or upload a virus, to your car? What kinds of things will happen then? This one's a little more playful of an example. But these are questions we're going to have to deal with soon. Thanks for joining me. I hope you've enjoyed it. And we'll see you next time.