[MUSIC PLAYING] DAVID MALAN: This is CS50. This is the end of week 10. And boy, do we have a good class for you today. We are so excited to invite two of our friends from Yale up to us today and to look at the intersection of artificial intelligence, robotics, natural language processing, and more. And indeed, over the past few weeks, we've certainly spent a lot of time, especially in the earlier psets, focusing on pretty low-level details. And it's very easy to lose sight of the forest for the trees and get hung up on loops and conditions and pointers, certainly, and the like. But the reality is you guys now have the ingredients with which you can really solve some interesting problems, among them those that our friends at Yale work on just shy of Cambridge. So allow me first to introduce our head teaching assistant from Yale, Andy. [APPLAUSE] ANDY: First of all, just thank you for allowing a couple Yalies to pop on down to Cambridge today. We really appreciate it. Secondly, to our friends back home-- Jason, thanks for staying and running lecture. Hope it's all good in New Haven. So yeah, I'm super excited to introduce Scaz today. Scaz runs the robotics lab. He's a professor of, like, five different departments at Yale. In his lab, he has many, many robots that he likes to play with. He has, like, the coolest job in the world. And he gets to kind of mess around with that all day long and do some work, as well. And so we actually brought one of them down with us today. So without further ado, Scaz is going to go ahead and introduce us to his robot friend. [APPLAUSE] BRIAN SCASSELLATI: Thanks, David. Thanks, Andy. It is so wonderful to be here with everyone today. I want to first be very clear that the CS50 staff here in Cambridge has been incredibly hospitable to us. We are so thankful for everything they've done to support us. And so we'd like to be able to return the kindness. So today, we get to announce that we're going to have a new, one-of-a-kind CS50 event happening in New Haven next week. And this is the CS50 Research Expo. So we're going to be inviting everyone-- CS50 students, staff from both Harvard and Yale-- to come down and visit with us on Friday. We'll have a wide variety of over 30 different people presenting and exhibiting-- upperclassmen showing off some of their research products. We'll have some startups, even, looking for a little bit of new tech talent, startups from both Harvard and Yale. And we'll have some student groups looking for some new membership. It's going to be a very exciting time. Hopefully those of you who are coming down for the Harvard-Yale game will be able to stop by a little bit early, right in the center of campus, Sterling Memorial Library. We're going to have a set of exhibits that range from autonomous sailboats to ways of using software to preserve medieval manuscripts. We're going to have ad hoc networking and people teaching software coding in Cape Town. We'll have computer music demonstrations. And we'll of course have more robots. So we do hope you'll join us for this event. It should be a lot of fun, a little bit of food, and a lot of interesting things to talk about. So today, we're going to talk about natural language processing. And this is the attempt for us to build a new way of interfacing with our devices because for the last few weeks, you've been focused on how it is that you can write code, write software that is a way of being able to say to a machine, this is what I want you to do. 
But we shouldn't need to expect that everything that's out there that's used by everyone in the world is going to be proficient in this kind of instruction. So we distinguish between computer languages and natural languages-- that is, things that humans use to communicate with other humans. And we try to build interfaces that use these natural communication mechanisms. Now, just like every other topic that we've started with in CS50, we're going to start with the simplest bit of natural language processing that we can imagine. We're going to start with the historical part of natural language. And then we'll build up to more and more recent systems and have some fun demos along the way. So we're going to start with what was probably the first natural language processing system. This was a software system written in 1966 by Joseph Weizenbaum called ELIZA. And ELIZA was designed to replicate the kind of interaction you would have with a Rogerian psychotherapist. Now, the Rogerians, they had an idea that psychotherapy involved being able to mirror back to a patient and talk to them, basically, by only giving them a tiny little bit of the therapist. That is, everything that the therapist said was supposed to be just a reflection of what the patient was telling them. So let's try and demo this. Do we have a volunteer who'd be willing to share their deepest, darkest, and/or fake secrets with ELIZA? Sure, why don't you come on up. Fantastic. And your name? MILES: Miles. BRIAN SCASSELLATI: Miles. Nice to meet you, Miles. If you would hold onto that. And we're going to use a version of ELIZA that we've replicated within the CS50 IDE. And we'll look at the source code to this in a little while. But for right now, we're just going to invoke it by typing eliza. And ELIZA is going to give you responses that are going to be in all capitals. And it's going to ask you some questions. And we'll respond and type in responses to ELIZA. So ELIZA has said, "Hi, I'm ELIZA. What's your problem?" What would you like to tell ELIZA about today? MILES: I can only fall asleep if I have bunny slippers on. BRIAN SCASSELLATI: All right. All right. [LAUGHTER] "I can only fall asleep with bunny slippers on." And ELIZA says, "Say, do you have any psychological problems? Maybe this is a good indicator." OK. So what would you like to respond? MILES: Yes. BRIAN SCASSELLATI: Yes. Man of many words. I enjoy that. ELIZA says, "You seem quite positive." MILES: No, I am, in fact, an electron. BRIAN SCASSELLATI: "No. In fact, I am an electron." "Did you come to me because you are an electron?" Now, we can see some of the things-- I'm going to pause you for just a second, Miles. Don't go anywhere. We're not done with you yet. ELIZA is, in some ways, responding to certain things that he's saying but without any real understanding. And it's also parroting back some of the parts of his sentence. So we said, "No. In fact, I am an electron." ELIZA responded back, "Did you come to me because you are an electron?" All right. Let's give it one more response here. Give it a good one, Miles. MILES: Good one? Yes, please help me. BRIAN SCASSELLATI: "Yes. Please help me." "Are you sure?" MILES: No. BRIAN SCASSELLATI: No. "What does that suggest to you?" MILES: I may not be an electron. BRIAN SCASSELLATI: "I may not be an electron." "Are you saying no just to be negative?" All right. So Miles, thank you so much. MILES: Thank you. BRIAN SCASSELLATI: Thanks very much. 
[APPLAUSE] So this ELIZA program, in many ways, is responding just to the pieces that we're providing without any deep understanding of what's happening here. This is a kind of system called pattern matching, where we're looking for certain bits of text that we're then going to take out of what was provided as input, convert it, potentially, in some way, and then give it back to the user. Do any of you think that ELIZA is actually performing valid psychoanalysis here? One person, maybe. AUDIENCE: [INAUDIBLE]. BRIAN SCASSELLATI: And how does that make you feel? Yes, in fact, it does. And we're going to see, actually, the source code for it in just a moment. And so you're going to be able to do exactly this. Now, ELIZA is one form of what we would call today a chat bot. It just goes through the text that you're providing, provides the bare minimum amount of understanding or processing, and then parrots it back to you. So let's take a look, conceptually, and talk about what it is that ELIZA is actually doing. ELIZA is taking a sentence-- let's say, "I want to impress my boss." And ELIZA is looking through that sentence and trying to find and match certain patterns. So, for example, one of the patterns that ELIZA is looking for are the words "I want." And any time it sees something that has "I want" in it, it formulates a response. And that response is a fixed string. In this case, it's "why do you want?" And I put a little star at the end because that's just the beginning of our response. And the star indicates that we're going to take the rest of the user's utterance-- "to impress my boss"-- and we're going to append that onto the end of this string. So now, rather than saying, "why do you want to impress my boss," there's a little bit of additional processing that we'll do. That is, we'll have to convert some of the pronouns here from "my boss" to "your boss." And there might be a few other changes that we need to make. So rather than just sticking it directly onto the end, what we'll do is we'll take the rest of the user's utterance-- in white here-- and we'll take it one piece at a time and convert each string token, each word, into the sentence. So we'll take the word "to." There's no conversion that we need to do that. "Impress." There's no conversion we need to do there. "My" will convert to "your." And "boss" we'll just leave as "boss." And then finally, anything that ends with a period, we'll convert it into a question. This very simple pattern matching is actually quite successful. And when this was introduced in 1966-- Joseph Weizenbaum programmed this on a computer. Now, computers at that time weren't desktop models. They were shared resources. And his students would go and chat with ELIZA. Eventually, he had to restrict access to it because his students weren't getting any work done. They were just chatting with ELIZA. And, in fact, he had to fire his assistant, who spent all of her time talking to ELIZA about her deep and worrisome problems. Everyone who used these systems started to anthropomorphize them. They started to think of them as being animate and real people. They started to recognize some of the things that they were saying were coming back to them. And they were finding out things about themselves. And, in fact, even the experts, even the psychotherapists, started to worry that, in fact, maybe ELIZA would be replacing them. And even the computer scientists worried that we were so close to solving natural language. Now, that wasn't anywhere close to true. 
But that's how impressive these systems can seem. So let's start to look underneath and try to get a little bit of a sense of how this code actually works. So we'll make this code available afterwards. And this is a very simple and direct port of the original ELIZA implementation. So some of these stylistic things that you'll see here are not stylistically what we would want you to do or what we've been teaching you to do. But we've tried to keep them the same across the many ports that this has had so that it has the flavor of the original. So we're going to include a bunch of things, and then we'll have a set of keywords, things that ELIZA will recognize and respond to directly. So if you have words like "can you" or "I don't" or "no" or "yes" or "dream" or "hello," then ELIZA will respond selectively to those. We'll also have a certain number of things that we will swap, like converting "my" to "your." And then we'll have a set of responses, so that for each of these keywords, we'll rotate through these different responses. So if I say "yes" three times in a row, I might get three different responses from ELIZA. Our code, then, is actually remarkably simple. If I scroll down past all of these responses that we have programmed in and we get down to our main, we're going to initialize a couple of different variables and do a little bit of housekeeping in the beginning. But then there's absolutely a set of code that you can understand. One big while loop that says I'm going to repeat this over and over. I'll read in a line, and I'll store that in an input string. I'll check and see if it's the special keyword "bye," which means exit the program. And then I'll check and see whether somebody is just repeating themselves over and over. And I'll yell at them if they do. I'll say "don't repeat yourself." As long as none of those happen, we'll then scan through and loop through, on lines 308 to 313 here, and check and see are any of those keyword phrases contained in the input that I was just given? If there is a match for them, well then, I'll remember that location. I'll remember that keyword. And I'll be able to build a response. If I don't find one, well then, the last thing in my keyword array will be my default responses, when nothing else matches. I'll ask questions like "Why did you come here?" or "How can I help you?" that are just partially appropriate no matter what the input is. We'll then build up ELIZA's response. We'll be able to take that base response, just as we did in that "my boss" example. If that's all that there is-- if it's just one string that I'm supposed to respond-- I can just send it back out. If it has an asterisk at the end of it, then I'll process each individual token in the rest of the user's response and add those in, swapping out word for word as I need to. All of this is absolutely something that you could build. And in fact, the ways in which we have processed command line arguments, the way in which you have processed through HTTP requests follow the same kinds of rules. They're pattern matching. So ELIZA had a relatively important impact on natural language because it made it seem like it was a very attainable goal, like somehow we'd be able to solve this problem directly. Now, that's not to say that ELIZA does everything that we would want to do. Certainly not. But we should be able to do something more. 
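To make that pattern matching concrete, here is a minimal sketch in C of the idea just described: scan the input for a keyword such as "I want," and if that keyword's canned response ends in an asterisk, splice on the rest of the user's sentence with first-person words swapped to second-person. This is illustrative code only; the tiny keyword and swap tables are made up, and it is not the actual ELIZA port shown in lecture, which keeps the structure and style of the 1966 original.

```c
// eliza_sketch.c -- a minimal, illustrative ELIZA-style pattern matcher.
// This is NOT the lecture's ELIZA port; the tables here are tiny stand-ins.
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// A keyword to search for, plus the canned response that goes with it.
// A trailing '*' means: append the pronoun-swapped rest of the input.
typedef struct
{
    const char *keyword;
    const char *response;
}
rule;

static const rule rules[] =
{
    {"i want", "Why do you want *"},
    {"i am", "How long have you been *"},
    {"because", "Is that the real reason?"},
};

// Word-for-word swaps applied to the part of the input we echo back.
static const char *swaps[][2] =
{
    {"my", "your"}, {"i", "you"}, {"am", "are"}, {"me", "you"},
};

int main(void)
{
    char input[256];
    printf("Hi, I'm a toy ELIZA. What's your problem?\n");
    while (fgets(input, sizeof input, stdin) != NULL)
    {
        // Strip the newline and trailing punctuation, then lowercase.
        input[strcspn(input, "\r\n")] = '\0';
        size_t n = strlen(input);
        while (n > 0 && (input[n - 1] == '.' || input[n - 1] == '!' || input[n - 1] == ' '))
            input[--n] = '\0';
        for (size_t i = 0; input[i] != '\0'; i++)
            input[i] = tolower((unsigned char) input[i]);

        // The special keyword "bye" exits, as in the lecture's version.
        if (strcmp(input, "bye") == 0)
            break;

        // Find the first keyword that appears anywhere in the input.
        const rule *match = NULL;
        char *rest = NULL;
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        {
            char *p = strstr(input, rules[i].keyword);
            if (p != NULL)
            {
                match = &rules[i];
                rest = p + strlen(rules[i].keyword);
                break;
            }
        }

        // Nothing matched: fall back to a default response.
        if (match == NULL)
        {
            printf("Why did you come here?\n");
            continue;
        }

        size_t len = strlen(match->response);
        if (match->response[len - 1] != '*')
        {
            // A fixed response: just send it back out.
            printf("%s\n", match->response);
            continue;
        }

        // Print the response up to the '*', then echo the rest of the
        // user's sentence one word at a time, swapping pronouns.
        printf("%.*s", (int) (len - 1), match->response);
        bool first = true;
        for (char *tok = strtok(rest, " "); tok != NULL; tok = strtok(NULL, " "))
        {
            const char *out = tok;
            for (size_t i = 0; i < sizeof swaps / sizeof swaps[0]; i++)
                if (strcmp(tok, swaps[i][0]) == 0)
                    out = swaps[i][1];
            if (!first)
                printf(" ");
            printf("%s", out);
            first = false;
        }
        printf("?\n");
    }
    return 0;
}
```

Given "I want to impress my boss," this prints "Why do you want to impress your boss?" That is the whole trick: substring search, token-by-token substitution, and a canned frame.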
Our first step to go beyond ELIZA is going to be able to look at not text being entered into the keyboard but speech, actual speech recorded into a microphone. So as we look at these different pieces, we're going to have to build a set of models. We're going to have to be able to go from the low-level acoustic information-- pitch, amplitude, frequency-- and convert that into some units that we're able to more easily manipulate and, finally, manipulate them into words and sentences. So most speech recognition systems that are out there today follow a statistical model in which we build three separate representations of what that audio signal actually contains. We start with a phonetic model that talks about just the base sounds that I'm producing. Am I producing something that is a B as in boy or a D as in dog? How do I recognize those two different phones as separate and distinct? On top of that, we'll then build a word pronunciation model, something that links together those individual phones and combines them into a word. And after that, we'll take the words and we'll assemble them with a language model into a complete sentence. Now, we're going to talk about each of these independently and separately. But these three models are all just going to be statistics. And that means when we work with them, we'll be able to work with them all simultaneously. All right. Let's start with our phonetic model. So phonetic models rely on a computational technique called hidden Markov models. These are graphical models in which I recognize a state of the world as being characterized by a set of features. And that state describes one part of an action that I'm engaged in. So if I think about making the sound "ma" like mother, there are different components to that sound. There's a part where I draw in breath. And then I purse my lips. And I roll my lips back a little bit to make that "ma" sound. And then there's a release. My lips come apart. Air is expelled. "Ma." Those three different parts would be represented by states in this graph-- the onset, the middle, and the end. And I would have transitions that allowed me to travel from one state to the next with a certain probability. So, for example, that M sound might have a very, very short intake at the beginning-- "mm"-- and then a longer, vibratory phase where I'm holding my lips together and almost humming-- "mmmm"-- and then a very short plosive where I expel breath-- "ma." The hidden Markov model is designed to capture the fact that the way that I make that sound "ma" is going to be slightly different in its timing, its frequency, and its features than the way that you make it or the way that I might make it when I'm talking about different uses of the letter. "Mother" and "may I" will sound slightly different. So to recognize a particular sound, we would build Markov models, these hidden Markov models, of every possible phone that I might want to recognize, every possible sound, and then look at the acoustic data that I have and determine statistically which one is the most likely one to have produced this sound. OK. With that model, we then start to build on top of it. We take a pronunciation model. Now, sometimes pronunciation models are simple and easy because there's only one way to pronounce something. Other times, they're a little bit more complicated. Here's a pronunciation guide for that red thing that is a fruit that you make ketchup out of. People don't think it's a fruit. Right? 
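Before getting to the pronunciation example, here is a rough sketch of what "determine statistically which model most likely produced this sound" can look like in code. It scores a sequence of made-up, discretized acoustic observations against one three-state model (onset, middle, release) using the standard forward algorithm for hidden Markov models. Every number and symbol below is invented for illustration; a real recognizer works with continuous acoustic features, many more states, and log probabilities.

```c
// hmm_sketch.c -- scoring observations against a tiny hidden Markov model
// with the forward algorithm. All probabilities here are invented.
#include <stdio.h>

#define N 3   // hidden states: 0 = onset, 1 = middle, 2 = release
#define M 2   // observation symbols: 0 = "quiet", 1 = "voiced"
#define T 6   // length of the observation sequence

// Left-to-right transitions: stay in a state or move on to the next one.
static const double trans[N][N] =
{
    {0.6, 0.4, 0.0},   // onset  -> onset or middle
    {0.0, 0.7, 0.3},   // middle -> middle or release
    {0.0, 0.0, 1.0},   // release stays put
};

// How likely each state is to emit each observation symbol.
static const double emit[N][M] =
{
    {0.8, 0.2},   // the onset is mostly quiet
    {0.1, 0.9},   // the middle is mostly voiced ("mmmm")
    {0.6, 0.4},   // the release is a short burst
};

// A phone starts at its onset.
static const double start[N] = {1.0, 0.0, 0.0};

int main(void)
{
    // A made-up observation sequence for something like "ma".
    const int obs[T] = {0, 1, 1, 1, 1, 0};

    // forward[t][j] = probability of seeing obs[0..t] and ending in state j.
    double forward[T][N];
    for (int j = 0; j < N; j++)
        forward[0][j] = start[j] * emit[j][obs[0]];

    for (int t = 1; t < T; t++)
        for (int j = 0; j < N; j++)
        {
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                sum += forward[t - 1][i] * trans[i][j];
            forward[t][j] = sum * emit[j][obs[t]];
        }

    // The likelihood of the whole sequence under this phone's model.
    double likelihood = 0.0;
    for (int j = 0; j < N; j++)
        likelihood += forward[T - 1][j];

    printf("likelihood under this model: %g\n", likelihood);
    return 0;
}
```

A recognizer would compute a score like this under the model for every phone it knows and pick the model with the highest likelihood.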
Now, there are many different ways that people will pronounce this word. Some will say "toe-may-toe." Some will say "toe-mah-toe." And we can capture that with one of these graphical models where, again, we represent transitions as having a certain probability associated with them. So in this case, if I were to follow the top route through this entire graph, I would be starting at the letter on the far left, the "ta" sound. I would take the top half, the "oh," and then a "ma," and then an "a," and then a "ta," and an "oh." "Toe-may-toe." If I took the bottom path through this, I would get "ta-mah-toe." And if I went down and then up, I would get "ta-may-toe." These models capture these differences because whenever we deploy one of these recognition systems, it's going to have to work with lots of different kinds of people, lots of different accents, and even different uses of the same words. Finally, on top of that, we'll build something that looks really complicated, called the language model, but in fact is the simplest of the three because these operate on what are called n-gram models. And in this case, I'm showing you a two-part n-gram model, a bigram. We're going to make physical the idea that sometimes, certain words are more likely to follow a given word than others. If I just said "weather forecast," the next word could likely be "today" or could be "the weather forecast tomorrow." But it's unlikely to be "the weather forecast artichoke." What a language model does is it captures those statistically by counting, from some very large corpus, all of the instances in which one word follows another. So if I take a large corpus-- like every Wall Street Journal that has been produced since 1930, which is one of the standard corpuses-- and I look through all that text, and I count up how many times after "forecast" do I see "today" and how many times do I see "forecast" followed by "artichoke," the first one is going to be much more likely. It's going to appear far more frequently. And so it'll have a higher probability associated with it. If I want to figure out the probability of an entire utterance, then, I just break it up. So the probability of hearing the sentence "the rat ate cheese" is the probability of the word "the" starting a sentence, and then the probability that the word "rat" follows the word "the," and the probability that the word "ate" follows "rat," and the probability that "cheese" follows "ate." This sounds like a lot of statistics, a lot of probabilities. And that's all that it is. But the amazing thing is if you do this with a large enough sample of data, it works. And it works tremendously well. We all know these technologies. Most operating systems come with voice recognition at this point. We use Siri and Cortana and Echo. And these things are based upon this type of three-layer model-- a phonetic model at the bottom, a pronunciation model in the middle, and a language model on top of them. Now, they have to do a little bit more than that in order to answer questions. But the recognition of what you're saying depends exactly on that. So let's take an example here. So I have my phone sitting up here underneath the document camera. And we're going to ask Siri a few questions. All right? So let's wake up my phone here. Siri, what is the weather like in New Haven today? SIRI: Here's the weather for New Haven, Connecticut today. BRIAN SCASSELLATI: OK. So first you saw that Siri recognized each of the individual words and then produced a response. 
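That recognition step rests on the bigram counting just described. As a concrete sketch, the toy C program below counts adjacent word pairs in a tiny invented corpus and then scores "the rat ate cheese" as a product of conditional probabilities, exactly as in the example above. The corpus, the "<s>" start-of-sentence marker, and the sentence are all made up; a real language model trains on millions of sentences and smooths the counts so unseen pairs don't get probability zero.

```c
// bigram_sketch.c -- a toy bigram language model: count adjacent word pairs
// in a tiny invented corpus, then score a sentence as a product of
// conditional probabilities.
#include <stdio.h>
#include <string.h>

// Toy corpus, one sentence per row, "<s>" marking the start of a sentence.
static const char *corpus[][6] =
{
    {"<s>", "the", "rat", "ate", "cheese", NULL},
    {"<s>", "the", "rat", "ate", "bread",  NULL},
    {"<s>", "the", "cat", "ate", "cheese", NULL},
};
#define SENTENCES (sizeof corpus / sizeof corpus[0])

// Count how often `second` immediately follows `first` in the corpus.
static int count_bigram(const char *first, const char *second)
{
    int count = 0;
    for (size_t s = 0; s < SENTENCES; s++)
        for (int i = 0; corpus[s][i + 1] != NULL; i++)
            if (strcmp(corpus[s][i], first) == 0 &&
                strcmp(corpus[s][i + 1], second) == 0)
                count++;
    return count;
}

// Count how often `first` appears with any word after it.
static int count_word(const char *first)
{
    int count = 0;
    for (size_t s = 0; s < SENTENCES; s++)
        for (int i = 0; corpus[s][i + 1] != NULL; i++)
            if (strcmp(corpus[s][i], first) == 0)
                count++;
    return count;
}

int main(void)
{
    // P("the rat ate cheese") = P(the | <s>) * P(rat | the)
    //                         * P(ate | rat) * P(cheese | ate)
    const char *sentence[] = {"<s>", "the", "rat", "ate", "cheese", NULL};

    double p = 1.0;
    for (int i = 0; sentence[i + 1] != NULL; i++)
    {
        double cond = (double) count_bigram(sentence[i], sentence[i + 1])
                      / count_word(sentence[i]);
        printf("P(%s | %s) = %.3f\n", sentence[i + 1], sentence[i], cond);
        p *= cond;
    }
    printf("P(sentence) = %.3f\n", p);
    return 0;
}
```

With these three sentences, P(rat | the) comes out to 2/3, P(cheese | ate) to 2/3, and the whole sentence to about 0.44.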
We'll talk about how that response comes about in a little bit. But now that we know that this is just based on the raw statistics and this pattern matching type of approach, we can play some games with Siri. So I can try again. Siri, what is the weather hippopotamus New Haven, today? SIRI: OK. Here's the weather for New Haven, Connecticut for today. BRIAN SCASSELLATI: Siri's not daunted by that because it's found the pattern-- "weather," "today," "New Haven." That's what it's responding to, just like ELIZA. All right. Let's give it one more even more ridiculous example. Siri, weather artichoke armadillo hippopotamus New Haven? SIRI: Let me check on that. Here's what I found on the web for what are artichokes armadillo hippopotamus New Haven. BRIAN SCASSELLATI: OK. So if I go far enough away from this model, I'm able to confuse it because it no longer matches the pattern that it has. And that statistical engine that's saying, what's the likelihood that you've got the words hippopotamus and artichoke together, and armadillo? That's got to be something new. So these technologies we use every day. If we want to take them one step further, though, if we actually want to be able to talk about what it is that these systems are responding to, we have to talk, again, about a more fundamental set of questions. And that's a topic in communication that we call question answering. That is, we want to be able to-- yeah? AUDIENCE: [INAUDIBLE]. BRIAN SCASSELLATI: Do we get into latent semantic processing? So yes. There are a lot of things that are happening below the surface with Siri and in some of the examples I'm going to show you next where there is quite a bit in terms of the structure of what you're saying that's important. And, in fact, that's a great precursor for the next slide for me. So in the same way that our speech recognition was built up of multiple layers, if we want to understand what it is that's actually being said, we're going to again rely on a multi-layer analysis of the text that's being recognized. So when Siri is actually able to say, look, I found these words. Now what do I do with them? The first component is often to go through and try to analyze the structure of the sentence. And in the way we've seen in grade school, often described as diagramming sentences, we're going to recognize that certain words have certain roles. These are nouns. These are pronouns. These are verbs. And we're going to recognize that for a particular grammar, in this case English grammar, there are valid ways in which I can combine them and other ways that are not valid. That recognition, that structure, might be enough to help guide us a little bit. But it's not quite enough for us to be able to give any meaning to what's being said here. To do that, we'll have to rely on some amount of semantic processing. That is, we're going to have to look at underneath what each of these words actually carries as a meaning. And in the simplest way of doing this, we're going to associate with each word that we know a certain function, a certain transformation that it allows to happen. In this case, we might label the word "John" as being a proper name, that it carries with it an identity. And we might label "Mary" the same way. Whereas a verb like "loves," that constitutes a particular relationship that we're able to represent. Now, that doesn't mean that we understand what love is but only that we understand it in the way of a symbolic system. That is, we can label it and manipulate it. 
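One deliberately tiny, hypothetical way to "label and manipulate" meaning in code, in the spirit of what was just described, is to represent each understood sentence as a symbolic fact such as loves(John, Mary) and answer a question by matching against the stored facts. The facts, the predicate names, and the who_does helper below are all invented for illustration; real question-answering systems use far richer logical forms and knowledge bases.

```c
// semantics_sketch.c -- representing "John loves Mary" as a symbolic
// relation and answering a question by matching against stored facts.
// The facts and the query form are invented for illustration.
#include <stdio.h>
#include <string.h>

// A relation extracted from a parsed sentence: predicate(subject, object).
typedef struct
{
    const char *predicate;
    const char *subject;
    const char *object;
}
fact;

// A tiny knowledge base built from sentences we have already "understood".
static const fact kb[] =
{
    {"loves", "John", "Mary"},
    {"loves", "Mary", "pizza"},
    {"works_for", "John", "IBM"},
};

// Answer a question of the form "Who(m) does X <predicate>?"
// by finding a fact whose subject and predicate match.
static const char *who_does(const char *subject, const char *predicate)
{
    for (size_t i = 0; i < sizeof kb / sizeof kb[0]; i++)
        if (strcmp(kb[i].subject, subject) == 0 &&
            strcmp(kb[i].predicate, predicate) == 0)
            return kb[i].object;
    return NULL;
}

int main(void)
{
    // "Whom does John love?" -> look up loves(John, ?)
    const char *answer = who_does("John", "loves");
    if (answer != NULL)
        printf("John loves %s.\n", answer);
    else
        printf("I don't know.\n");
    return 0;
}
```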
With each of these types of approaches, any type of semantic processing here is going to require a little bit of knowledge and a lot of work on our part. We're no longer in the realm where just plain statistics are going to be enough for us. Now, in order to go from this point to being able to talk about the inside of what's actually happening here, to being able to manipulate this structure and understand a question and then being able to go out and search, that requires a more complex cognitive model. The way in which these systems are built is for the most part very, very labor intensive. They involve humans spending a great deal of time structuring the ways in which these kinds of sentences can be represented in some logic. It gets even a little more complex, though. Even once we've dealt with semantics, we'll still have to look at the pragmatics of what's being said. That is, how do I relate the words that I have to something physically out there in the world or at least some information source that I can manipulate? Sometimes, these lead to wonderful bits of ambiguity. "Red-hot star to wed astronomer." OK. Now, we read that as the funny type of headline that we would see on late night TV because we don't interpret "star" to have its celestial body meaning. We know that it means the more commonplace actor or actress with high amounts of visibility. "Squad helps dog bite victim." Is it that the squad is actually out there assisting a dog in going around and biting victims? Or is it that there was an individual who was bitten by a dog who needed some help? Just from looking at the syntax and the semantics of the sentences, we can't determine that. "Helicopter powered by human flies." Are there, in fact, actual little things flying around that are people with wings powering helicopters for the good of mankind? Or is there one mechanical device that derives its power from a person? When we look at question answering systems, these are the layers that we need to deal with. Now, the best and the most sort of prevalent of these systems today are able to do things at a level that is far beyond what we had 10 years ago. Here's one of the best systems that's out there today. And this is a system from IBM called Watson. This is from our friends at IBM. This is Katherine Banks, Jia Chen, and Priscilla Moraes giving us a demonstration of Watson yesterday in IBM Yorktown. [VIDEO PLAYBACK] -How are you? -I am well. I am always happy when I am thinking, especially when I can stay cool in these server rooms. -Who is your favorite actress? -My favorite actress is Angelina Jolie as Lara Croft in Tomb Raider because her original form was as a virtual character like me. -How can Watson help patients? -Is the plan for patients to have access to Watson directly? Access to Watson advisory applications will vary by use case. But currently, Watson will only provide possible diagnoses and treatment options to medical professionals. Where appropriate, patients may provide information to Watson, perhaps through a portal web interface, tablet, or even a mobile phone. For example, a patient may be asked to describe their symptoms in natural language directly into a Watson solution, allowing the physician to focus on diagnosis and not data collection. -Who assassinated Abraham Lincoln? -John Wilkes Booth. -In what year did the Arizona Diamondbacks win the World Series? -2001. 
[END PLAYBACK] BRIAN SCASSELLATI: So these kinds of systems have to rely upon first of all recognizing the speech; second, converting it into a meaningful internal representation; and then, third, being able to go out and find the information source that allows them to answer that question. This level of complexity involves the same types of programmatic things that you have been doing in problem sets. We're able to parse HTTP requests in the same type of low-level pattern matching that ELIZA can do. We're able to convert those into an internal representation, and then use them to query some external database, possibly using SQL. All of the systems that are being built today to do this type of natural language communication are being built upon these same principles. Now, even a system like Watson isn't complex enough to be able to answer arbitrary questions about any topic. And in fact, they have to be structured within a given domain. So you can go online and you can find versions of Watson that operate well within medical informatics. Or there's one online that just deals with how to make good recommendations about what beer will go with which food. And within those domains, it can answer questions, find the information that it needs. But you can't mix and match them. The system that's been trained with the database of food and beer doesn't work well when you suddenly put it in with the medical informatics database. So even our best systems today rely upon a level of processing in which we are hand coding and building in the infrastructure in order to make this system run. Now, the last topic I want to be able to get to today is about nonverbal communication. A great mass of information that we communicate with each other doesn't come about through the individual words that we're applying. It has to do with things like proximity, gaze, your tone of voice, your inflection. And that communication is also something that many different interfaces care a great deal about. It's not what Siri cares about. I can ask Siri something in one voice or in a different tone of voice, and Siri's going to give me the same answer. But that's not what we build for many other types of interfaces. I want to introduce you now to one of the robots. This was built by my longtime friend and colleague Cynthia Breazeal and her company Jibo. And this robot-- we're going to have a couple volunteers come up to interact with this. So can I have two people willing to play with the robot for me? Why don't you come on up, and why don't you come on up. If you'd join me up here, please. And if I could have you come right over here. Thanks. Hi. ALFREDO: Nice to meet you. Alfredo. BRIAN SCASSELLATI: Alfredo. RACHEL: Rachel. BRIAN SCASSELLATI: Rachel. Nice to meet you both. Alfredo, I'm going to have you go first. Come right up here. I'm going to introduce you-- if I can get this off without knocking the microphone-- to a little robot named Jibo. OK? Now, Jibo is designed to be interactive. And although it can give you speech, much of the interaction with the robot is nonverbal. Alfredo, I'm going to ask you to say something nice and complimentary to the robot, please. ALFREDO: I think you look cute. [WHIRRING SOUND] BRIAN SCASSELLATI: OK. Its response isn't verbal. And yet it gave you both a clear acknowledgement that it had heard what you said and also somehow understood that. OK? Step right back here for one second. Thank you. Rachel, if you would. Now, I'm going to give you the much harder job. 
If you'd stand right here, back up just a little bit so we can get you on camera and look this way. I'm going to ask you to say something really mean and nasty to the robot. RACHEL: What you just seemed to do was completely absurd. [HUMMING SOUND] That was even more absurd. What's going on with you? Aw, don't feel bad. I'll give you a hug. BRIAN SCASSELLATI: All right. Thanks, Rachel. Alfredo, Rachel, thanks guys very much. [APPLAUSE] So this kind of interaction has in many ways some of the same rules and some of the same structure as what we might have in linguistic interaction. It is both communicative and serves an important purpose. And that interaction, in many ways, is designed to have a particular effect on the person interacting with or listening to the robot. Now, I'm lucky enough to have Jibo here today. Sam Spaulding is here helping us out with the robot. And I'm going to ask Sam to give us one nice demo of Jibo dancing that we can watch at the end here. So go ahead, Jibo. SAM: OK, Jibo. Show us your dance moves. [MUSIC PLAYING] BRIAN SCASSELLATI: All right, everybody. Thanks to our friends at Jibo. [APPLAUSE] And thanks to our friends at IBM for helping out today. Communication is something that you're going to see coming up more and more as we build more complex interfaces. Next week, we'll be talking about how to interface with computer opponents in games. But if you have questions about this, I'll be around at office hours tonight. I'm happy to talk to you about AI topics or to get into more detail. Have a great weekend. [APPLAUSE] [MUSIC PLAYING]