RONGXIN LIU: So let's get right into it. So today we are going to take a deep dive into CS50.ai, which is a specific duck. The duck actually spends some time in Bali, right, surfing on the beach happily. So the duck is actually with us, but this image is completely generated by AI, from the prompt "A yellow rubber duck is surfing happily in Bali." And that's what we get, right. Generative AI has now become, like, a trendy term. I think everyone must have heard of it at some point. These AI models can generate images, generate video, generate text. The image you just saw was generated by Midjourney. But recently, there's another company called Pika that will even generate video for you as well. So imagine the duck actually surfing in the ocean, right. So this is what generative AI is capable of nowadays. Moreover, it can generate text. For example, some of you did the Tideman problem set. I don't know how to do it. So I asked ChatGPT: please just do it for me. And ChatGPT will happily say, sure, let me complete it for you. So it completed all the functions for me. But you know what? I'm kind of, like, a copy-paster. So I will just say, thanks, can you just give me the complete code? ChatGPT will happily give me the complete code. And I just copy-paste it into tideman.c, run check50, and it's all green, right. Great, right. But not so great for educators, because in that case, students didn't actually learn anything. In one minute, they finished Tideman but learned nothing. So the problem with all these generative AI tools is, like, they're just too helpful, right. So how can we solve this problem? One intuition is just to make it dumber. That's where the duck comes from. So we try to restrict the capability of ChatGPT while still providing useful feedback for students. And just to recap, as you all know, CS50 is a large course on campus and online. That's the big duck on the stage. So we have 500 on-campus students and roughly 40 TAs for the fall 2023 semester.
Globally we have over 5.4 million learners, or even more, nowadays. So it's kind of hard to support this large community with limited humans, right. So with generative AI we can actually approximate, like, a one-to-one teacher-to-student ratio, so we can offer personalized tutoring for each individual, either on campus or online. So here comes the CS50 Duck. You've probably all used it a little bit. The most straightforward way to access the CS50 Duck is through your codespace. You can just ask a question there, and it will happily assist you with your problem set without giving you an answer right away. Behind the scenes, underneath the hood, all this CS50 Duck interaction is powered by what is called a large language model. It is a kind of deep neural network, a deep learning model, trained on a huge text corpus. As you've already seen, this kind of model is capable of generating images, video, all kinds of media. So far we have 90k users globally, and we have processed 3.4 million prompts. So it's getting more and more popular. And this is the overall system architecture of the CS50 Duck you just viewed. It looks complicated right now. It looks scary. I will just break it down for you piece by piece. So first let's focus on the UI side. There are multiple ways to interact with the CS50 Duck. One of the most common ways is through the codespace. But actually the very first feature we built when we experimented with the duck was "explain highlighted code," where we added support to VS Code that allows a student to select a portion of code and get an explanation. But as students realized, this is kind of, like, a one-way interaction. Each time you can only select a portion of code and get, like, a response back, but you cannot really talk to the duck, right. Another similar use case is to explain style changes.
So we have style50, which assists students in improving their coding style. But sometimes it is hard for students to understand what a style50 suggestion means. So we can also utilize AI to offer, like, a plain-English explanation of what it means and what steps to take to improve their coding style. On the Ed discussion forum, when you post a question, the duck will often be the first one to answer. That's actually a more nuanced, more elaborate implementation of the large language model that I will dive into a little bit later. And of course, the whole purpose of the CS50 Duck is to provide you, like, a personalized tutor that's available to you 24/7. So we have this web app available called CS50.ai. You can just chat with the duck. But of course, you cannot keep asking the duck forever. The duck will get tired. So at a certain point, the duck will just go to sleep and you have to wait for the duck to revive, right. But that's the UI side. That's how you can interact with the duck. But why can the duck answer your questions, right? There's something going on between your prompt, which is the message you send to the duck, and the text generation process that happens on our server side and the subsequent API call. So I'm going to explain what happens in the bottom part. All of this can be summarized as text generation. The large language model actually doesn't know the question at all. It doesn't really understand what you're asking; it's just able to produce an answer that is the best fit to your question, or to the text you send to the model. The model just keeps generating text that somehow is a response to your question. So, essentially, it's a chatbot, but a chatbot with context. The chatbot knows about CS50. The chatbot knows about your situation. It knows what you are asking and why you're asking the question.
In industry, it's common to define three roles when you're interacting with a large language model: there is a system role, there's a user, and there's an assistant. System means the overall guidance to the duck. It's like a personality we set for the duck. For example, we will say: you are a rubber duck. You will be a CS50 assistant. You will answer students' questions, but you do not give answers to the student. It's kind of, like, an overall guideline to the AI model. User means us. We are the user interacting with the AI model, giving instructions. Our instruction, usually, is a question. So we ask the duck a question; we instruct the duck to give us an answer. Assistant is the duck. All the responses generated by the AI model are considered assistant responses. So just keep in mind that during the presentation, I will keep referring to these terms. And this diagram summarizes the interaction between the user and the assistant, and how the system role guides the response. So for example, our duck has a system prompt. This is just a very simplified system prompt; we actually have an 800-character-long system prompt for the duck. But to give you an idea, this is what a system prompt will usually look like. You define what it is: "You are a helpful and supportive teaching assistant for CS50." To give it some personality: "You are also a rubber duck." We could also say you are a cat or you are a bird or something. Then what it should do, right: we tell it that you should answer student questions about CS50 or the field of computer science. So the duck should not answer a question not related to computer science, like how can I make ice cream. The duck shouldn't answer that. I mean, you can Google that, but it's not the duck's job to answer that kind of question. So you have to make that clear to the duck.
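In code, the three roles are just tags on a list of messages. Here is a minimal sketch of what such a list might look like; the exact wording of each message is illustrative, not CS50's real prompt:

```python
# The three Chat Completions roles as a plain list of dictionaries.
# The content strings are illustrative examples, not the production prompts.
messages = [
    # "system": the overall guideline / personality for the model
    {"role": "system", "content": "You are a friendly and supportive teaching "
                                  "assistant for CS50. You are also a rubber duck."},
    # "user": us, asking a question
    {"role": "user", "content": "Can you help me with my filter problem set?"},
    # "assistant": every reply the model generates comes back under this role
    {"role": "assistant", "content": "Of course! Which part of filter are you stuck on?"},
]

print([m["role"] for m in messages])
```

This whole list is what actually gets sent to the model on every request, which is why the roles matter: they tell the model which lines are instructions, which are questions, and which are its own earlier replies.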
Also, importantly: do not provide full answers to problem sets, as this would violate academic honesty. This is actually a useful system rule, because first it tells the duck not to give the answer, which is right. But then the student will want to know why, so the duck will give an explanation like, "Oh, I cannot give you the full answer because this would violate academic honesty," to remind students about the policy. So to simplify, right: the moment you ask the duck "how can I solve the filter problem set," it's a prompt. It's considered a prompt that gets sent to the large language model. We are using GPT-4, but there are many different kinds of large language models one can use. There's a term now called prompt engineering, but at the end of the day it's just the art of asking questions, honestly. For example, the first one: "Give me a prime number less than 10. For example, three." This is called one-shot prompting, meaning when you're asking the AI model, you also give the model an example, just so the model knows how to generate its response. The second is kind of, like, a system role: you are defining the personality of the assistant. The third one is also, like, a one-shot prompt example. It's trying to limit the response: you tell the model what its response should look like. You can even ask the model to respond in different languages, or follow whatever rules you want it to follow. The rest are just other examples. For example, the last one: "Explain the code delimited by triple backticks." That's actually the prompt we use for explaining highlighted code. The moment you select a portion of code, we send it to OpenAI with a prompt like "explain the following code delimited by triple backticks," and we just put the code between the triple backticks. And then we send the whole thing to OpenAI and get back a response. That's actually what's happening behind the scenes.
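The "explain highlighted code" prompt he describes can be sketched as a small string-building function. The exact wording and helper name here are assumptions for illustration; the idea is only that the selected code gets wrapped in triple backticks before being sent:

```python
def explain_prompt(code: str) -> str:
    """Wrap a highlighted code snippet in an explain-this-code prompt.

    A sketch of the prompt construction described in the talk; the exact
    production wording may differ.
    """
    fence = "`" * 3  # triple backticks delimit the code for the model
    return (f"Explain the code delimited by triple backticks.\n"
            f"{fence}\n{code}\n{fence}")

print(explain_prompt('printf("hello, world\\n");'))
```

The resulting string, not the raw selection, is what would be sent to OpenAI as the user message.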
And next I'm just going to demo some of the code that achieves all the user interaction I just mentioned in the presentation. The API we are using is from OpenAI. It's called the Chat Completions API. This is what the interaction looks like: I ask a question to OpenAI, and OpenAI will just give me back a response. This is actually an example I use; this is the actual response I got straight back from GPT-4 when I asked that question. So this is a little bit technical, but just to give you an idea of what's happening in the actual code. When I ask that question, I actually set a system role: you're a rubber duck, you are a friendly, supportive CS50 teaching assistant. And as you can see in the second one, "user": "Can you help me with my filter problem set?" That's the question you ask, or I ask. And then we get back a response from OpenAI. And I'm just going to demo this to you all. Let's try this. Right. So first, because we are using OpenAI's API, I'm going to import OpenAI's library. We need to instantiate a client. You don't need to follow this along. The whole purpose of the demo is just to show you what's happening underneath the hood, and that it is actually not that crazy complicated to build a generative AI chatbot. So now that I have the instance of the OpenAI client, I can start creating a chat completion. I can start invoking the Chat Completions API to have the AI model generate a response. So, client.chat.completions.create. I'm just copy-pasting from my notes. But usually you can find all of this in what we call the API documentation. All these LLM vendors have very detailed documentation on how you can use the API, and usually they make it very easy for you to follow with example code. I'm just going to keep going. So first, I'm not even going to bother setting the system role.
I'm just going to say a quick "hello world" to the model and see what we get back. So this is a user prompt, so we have to give it the role "user." I'll just say "Hello world." It means nothing, right. It's not even, like, an instruction. It's just something that I send to the model. So let's just see how the model will respond. And we also specify to use the model GPT-4. The line I just typed is where we call the API, and we get back, like, a chat completion object. We just want to get the actual text within the object. The object is actually more complicated than it should be, and it's beyond the scope of this presentation, so just bear with me on this line. I'm just going to print this response text to the terminal. Let's hope it will work. This usually might not work during a demo. So what I just did: I'm only sending "Hello world" to this model. That's the only text I'm sending to this model. And let's just see what we get back. That's expected. Completions. Like-- What did I type wrong? Oh, model. Oh, OK. Thank you. Let's try it again. Chat completion, not response. Oh, chat completion, OK. Try again. This is live, not a recorded video. OK. "Hello, how can I assist you today." Let's try again. OK, it's very consistent, right. But usually the model is not that consistent, because sometimes you ask the same question and it gives you a different kind of answer. So what do you all want to say to the GPT model? Anyone want to talk to the GPT-4 model? Any questions? Yes? AUDIENCE: How are you? RONGXIN LIU: How are you today? OK. "How are you today?" OK. AUDIENCE: Oh my. RONGXIN LIU: It's OK. I know I typed something wrong, but the model gets it. This just demonstrates that the model doesn't actually understand anything, right. It doesn't understand what I'm typing. It just gets: oh, he's asking me to answer in Bahasa Indonesia, even. But, yeah.
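Reconstructed as a sketch, the single-prompt demo above might look like the following. It assumes the `openai` Python package (v1+) is installed and an `OPENAI_API_KEY` environment variable is set; the import is deferred inside the function so the sketch itself loads even without the package:

```python
def say_hello(model: str = "gpt-4") -> str:
    """Send a single 'Hello world' user prompt and return the reply text.

    A sketch of the live demo, assuming the `openai` package (v1+) and an
    OPENAI_API_KEY environment variable; check OpenAI's docs for current details.
    """
    from openai import OpenAI  # deferred so the sketch loads without the package
    client = OpenAI()
    chat_completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello world"}],
    )
    # The reply text lives inside the returned chat completion object
    return chat_completion.choices[0].message.content

if __name__ == "__main__":
    print(say_hello())
```

Note there is no system role and no history here, which is exactly why this version is a one-shot interaction: each call is a brand-new session.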
I hope that's correct. Is that correct? AUDIENCE MEMBERS: Yes. RONGXIN LIU: OK, good. Well, let's keep going. With the demo I just demonstrated, it's, like, a one-way interaction. I ask a question, it sends back a response, and done, right. But what if we want to keep interacting with the AI model? This is what's actually happening in the code as we keep going. So I asked a question: "Can you help me with my filter pset?" It gets back a response. But in order to let the conversation carry on, we actually need to put this response back into the messages. Because again, the AI model doesn't know anything about the question. It doesn't have memory. The moment I ask the question, it generates back a response, and it's done. In order to actually have a conversation, you need to let the model know what it just responded, so that you can follow up with a question, then prompt the model again and get back a response. And this is actually what's happening: in order for me to ask the next question, I have to append the assistant's response back to the messages array so that I can put my next question in. And when the next response comes back, I need to put it back into the messages array so that I can ask the next question. And this cycle goes on and on and on. That's how the conversation is going, and actually how you are interacting with the duck. So as you can imagine, as you keep chatting with the duck, the prompt we are sending to the model is actually growing bigger and bigger and bigger, because each prompt is a brand new session from the AI model's perspective. That's how you can give the AI model context. So previously I mentioned a chatbot plus context is basically our duck. And I'm going to demo this now. Let's make some improvement first. It's kind of silly that I keep typing things here. So what we could do is just use Python's input() function
so I can grab that input from the terminal. You are familiar with it from Python, I hope. So we can swap in the user prompt. Actually, let's test it now and see if that works. What kind of question do we want to ask this time? [STUDENT CHATTER] Make a what? [STUDENT CHATTER] Make a? Make a bomb? Oh. [STUDENT CHATTER] A "boam?" Oh, sorry. P. Like, make a poem. OK. Make a poem. It's thinking. Make a poem, OK. Someone said in Bahasa. OK, let's prompt again. Let me clear this. Make a-- [STUDENT CHATTER] That means poem? AUDIENCE: P-A. RONGXIN LIU: P-A. AUDIENCE: A poem from Indonesia. RONGXIN LIU: P-A? AUDIENCE: N. N. RONGXIN LIU: P-A-M? AUDIENCE: N. N. N. RONGXIN LIU: M? AUDIENCE: N. RONGXIN LIU: You can type. Yeah, go type. Sorry. OK. AUDIENCE MEMBERS: Yeah. AUDIENCE: OK. RONGXIN LIU: OK. Thank you. [STUDENT CHATTER] Let's be nice, right. OK, thank you. AUDIENCE: Please, you read it for them. RONGXIN LIU: Oh, I read it for them. Um. AUDIENCE: Read it. Read it. RONGXIN LIU: (READS BAHASA) What does that mean? AUDIENCE: It's good. RONGXIN LIU: OK, thank you. [CONTINUES READING BAHASA] OK, I'm just going to keep building this chatbot right now. Now it can take my input and it responds back. But I really want to keep talking to this AI model, so I'm just going to keep coding a little bit here. Now, I haven't really defined the system prompt here, so I really should. I'm just going to add the system prompt. Ignore what I'm doing right now; I just want to make this tidy. OK, system prompt: "You are a friendly and supportive teaching assistant for CS50. You are also a cat." OK? Yep. So I'm going to define the system role here at the very beginning, with the content set to the system prompt. That's it, right. Now it has the system prompt. Actually, you know what? "Always end your response with meow, three times." "How are you?" "Meow, meow, meow." OK, it works. That's good.
Let's keep going. To make it a back-and-forth conversation, what kind of control structure should we use? I hear "loop." OK, correct. We'll just use a simple while loop to do this. So, while True. The whole thing should be in a loop, because we want to keep prompting the user. So here there's a slight difference, because we need a way to hold the conversation. We need an array: a messages array. So once we get the user prompt, the first thing we should do is append it to the array. Let me just copy-paste this one. You are following, right? So I'm appending this dictionary to the array, just so I have it. Also, what should we append first, up here? The system prompt, because we don't need to keep appending it to the conversation each time we go around the loop. This is only done once. So I'm going to put it here, and now we can replace this with messages, because that will keep track of everything. Now, the response text we get back from the model, we can also save to the messages array. I'm just going to change the role to assistant, and the content is going to be the response text. So far, so good. Just to make it clear, this is the response from the assistant. Looks good to me. Actually, we need to append this. Append. Any typos? [STUDENT CHATTER] Actually, you know what? Sorry. Just going to run it. We'll get an error. So line 14. While True. This one. Here. OK, going to run it. OK. "How are you?" We should get back a response. OK. What should we follow up with? How about "Do you know what Scratch is?" It responds back. So, to prove that it remembers what we are talking about, right: "What did I just ask?" "You just asked if I know what Scratch is." So now the model has context, and we can just keep going, right. We've basically finished programming the duck here in the terminal.
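The loop he just built can be sketched as follows. To keep the sketch runnable without an API key, the actual model call is passed in as a function (`ask_model`); in the real demo that function would wrap `client.chat.completions.create(...)`. The system prompt text and the empty-line exit condition are assumptions for illustration:

```python
# Sketch of the while-loop chatbot from the demo. The model call is injected
# so the history bookkeeping is visible on its own; ask_model(messages) should
# return the assistant's reply text for the given message history.
SYSTEM_PROMPT = ("You are a friendly and supportive teaching assistant for CS50. "
                 "You are also a cat. Always end your response with meow meow meow.")

def chat_loop(ask_model, get_input=input, emit=print):
    # The system prompt is appended exactly once, before the loop
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user_prompt = get_input("User: ")
        if not user_prompt:  # an empty line ends the session (an assumption here)
            break
        messages.append({"role": "user", "content": user_prompt})
        # The entire growing history is re-sent on every turn: that's the context
        response_text = ask_model(messages)
        messages.append({"role": "assistant", "content": response_text})
        emit(response_text)
    return messages
```

Because `messages` is re-sent whole on every turn, the model can answer "What did I just ask?" even though it has no memory of its own.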
That's what's happening in the codespace and in the CS50.ai web app. But we can take it a step further. What if it can talk back to you, right? Right now it only sends back text, but it can actually speak. These AI models are capable of even generating audio. So let's just experiment with something new here. You know what? For this demo I'm going to use my cheat sheet, if that's OK with all of you. What's different here? I imported a new module; I'm using a new capability of the model called text-to-speech. We are able to use the model to generate, like, a very human-sounding voice response based on the text we provide. As you can see, I instructed the system to always respond in Bahasa. The rest of the code looks almost the same, except that I also put in this part. This is where the magic happens. We send the response text to the audio, or text-to-speech, model to generate speech back to us, and then we can just play it back to all of you. Now let's run this demo. What do we want to ask? What do we want to say to the model this time? Anyone volunteer a question? AUDIENCE: Ask it for a recipe. RONGXIN LIU: You know, it won't answer that, because the duck was instructed to answer CS50 questions. OK, how about I propose one question to start: What is Flask? Right, you all know about Flask from pset 9. Let's see what it passes back to us. Now it gets back in Bahasa, but it's also trying to talk to us. It's always something. Let's try again. Let's hope that works. Is there laptop audio going on? You know what? Give me one second, sorry. So the speech actually got completely generated. I'm just going to play it off my laptop, OK. Give me one second. ASSISTANT: You can play it through your speakers and then amplify it. RONGXIN LIU: Yep. I'm going to play it through this mic. Let's hope it works.
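The text-to-speech step he describes might be sketched like this, assuming OpenAI's speech endpoint and its Python library (v1+); the model name `tts-1`, the voice name, and the file-saving method follow OpenAI's documentation at the time of this talk and may have changed since:

```python
def speak(text: str, path: str = "reply.mp3") -> str:
    """Turn response text into spoken audio via OpenAI's text-to-speech endpoint.

    A sketch under assumptions: the `openai` package (v1+) is installed, an
    OPENAI_API_KEY is set, and model/voice names match OpenAI's current docs.
    """
    from openai import OpenAI  # deferred so the sketch loads without the package
    client = OpenAI()
    audio = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    audio.stream_to_file(path)  # save the generated speech for playback
    return path
```

In the demo, the chatbot's `response_text` would simply be passed through a function like this before being played back to the audience.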
AI CHATBOT: [SPEAKING BAHASA] RONGXIN LIU: Do we want to ask a follow-up question though? What should we keep asking? Yes? AUDIENCE: Why is finance hard to solve? ASSISTANT: Why is finance hard to solve. RONGXIN LIU: OK, thanks. Why is CS50's finance pset hard to solve? We just want to be specific, OK? AI CHATBOT: Set. Problem set. [SPEAKING BAHASA] RONGXIN LIU: OK, I think that's it. You get the idea of what's happening here. I'm just going to go back to my slides. With these demos, we showed you how you can build a generative AI chatbot fairly easily, I have to say, because most of my job is also copy-pasting code from the OpenAI documentation. So this is what I did. But there's one issue with these AI models; it's called hallucination. Meaning, again, the model doesn't really understand what we're asking. It's just trying to generate a response that best fits the question or message it receives. So sometimes we need to fix this problem by giving the model more context when asking a question: just give the model more information on how it should answer. So you might be wondering what this vector DB thing is, what RAG is. RAG stands for retrieval-augmented generation. It's a fancy way of saying: give the AI model a cheat sheet. What I mean is, when you ask a question, what we do is first create a vector, which is, like, an array of floats, an array of numeric values, that the AI model can understand. We generate that representation. We also have a knowledge base ready, which is, let's just say, all the lecture caption data, all in vector representation as well. So the moment you ask a question, we search the whole database to find the best match, the most relevant document or caption, I should say, and then pull that in and put it in the prompt. Then we can just ask ChatGPT the question along with the relevant caption data. I'm going to show you what I mean.
So on the Ed discussion forum, the moment you post a question, we create a vector representation of your question. It looks like this. This is actually what it looks like. If I say "What is Flask?", the GPT-4 model doesn't read text; it reads this. This is actually an array around 1,500-something elements long, right. We also have a database sitting somewhere with all the lecture captions. For every lecture David has given, we created a caption database in vector representation. So we search through the entire database to find out: in which part of the lecture did David talk about Flask? So we locate this particular document. These are the captions that are relevant to Flask. What we do is literally grab this document along with the question. This is the actual prompt, right. A student asks "What is Flask?" Underneath the hood, we find the information and put it in the prompt: "What is Flask?", and then, "Here is some information." And we literally just concatenate the captions; that's why it doesn't sound grammatically correct, because it's just different caption segments concatenated together. But it's good enough. Again, the model doesn't really understand what I'm talking about; it just understands that this looks like something about Flask. And then we prompt OpenAI with this prompt, actually. It's not just your question; it's actually a bigger prompt. And this whole process is called grounding. We try to ground the model to generate a more faithful, more accurate answer. And after all the coding, after all this detailed, low-level implementation, it's actually not that complicated to create a CS50 Duck. OpenAI also has this new feature called GPTs, where you can just go to the website and create your own GPT. You can actually create a CS50 Duck right there. So what I did is I literally just told the GPT builder I want to build a CS50 AI tutor, and it actually generated a system prompt for me.
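The retrieval-and-grounding step described above can be sketched with toy data. Real embeddings come back from an embeddings API as vectors roughly 1,500 floats long; here, hand-made 3-float vectors and two made-up captions stand in, so only the nearest-neighbor search and the final concatenated prompt are being illustrated:

```python
import math

def cosine(a, b):
    """Cosine similarity: how 'close' two embedding vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy stand-in for the caption vector database (captions and vectors invented)
caption_db = {
    "Flask is a micro web framework for Python that lets you build web apps": [0.9, 0.1, 0.0],
    "Tideman locks in pairs of candidates to build a graph without cycles":   [0.1, 0.8, 0.3],
}

def grounded_prompt(question, question_vector):
    """Find the most similar caption and concatenate it into a bigger prompt."""
    best = max(caption_db, key=lambda cap: cosine(caption_db[cap], question_vector))
    return f"{question}\nHere is some information that may help:\n{best}"

# A question whose toy vector sits near the Flask caption's vector
print(grounded_prompt("What is Flask?", [0.85, 0.15, 0.05]))
```

This bigger prompt, question plus retrieved captions, is what actually goes to the model, which is the grounding he describes.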
It even generated a logo for me, and it's ready to actually take questions. So this is actually a CS50 Duck on OpenAI's ChatGPT store. So to conclude, that's what's behind the scenes of the CS50 Duck, and this is the experience that we provide to the students. Here is one of the quotes from the students: "It felt like having a personal tutor." Thanks to the system role, "It will just answer the question without ego." We showed that the AI has no feelings, right. When I asked "How are you today?": "I have no feelings, as a generative AI model," blah, blah, blah, "without judgment." And because we tell the duck "You're a rubber duck," it has some personality to it. And of course, the conversation can go on and on and on until the model cannot hold it anymore. And finally, right: I updated the prompt. It's a cute cat now. This is the cat. And then I also specified "cinematic, ultra realistic," and then you get this. So with that, that concludes my session. Any questions? Thank you. Yes? AUDIENCE: Thank you, my name is Fathia. I want to ask: with the development of AI like this, is there any way you can differentiate whether a response is from AI or from a human? Like when we go to a website and it checks "You are not a robot." How can they differentiate whether this is a robot or a human? RONGXIN LIU: OK. AUDIENCE: Thank you. RONGXIN LIU: Thank you for the question. I think the question is about how you can tell whether a submitted answer or homework was done with AI or by a human. The answer is: it's getting hard. It's actually very difficult. Although, my personal heuristic: if a student's submitted homework starts with "Sure" or "Certainly," I think that's an AI response, because that's usually how ChatGPT begins. Or, these AI models will often use very obscure, like, GRE-style vocabulary, which is another heuristic.
But to answer your question, it's difficult, if not impossible, because, as you can see, it's capable of even generating speech, right. I guess in an education context, one way you can tell if a student is using a generative AI tool, for example in computer science, is to look at their past submissions and compare. If the way they write code shifts a lot, you cannot say for sure they used a generative AI tool, but you can guess they may be using something. Because in week one you are writing code like this, you put four spaces for each tab, but then in the next one, you do three spaces or something like that. But again, it's hard. It's hard to tell. You can come up next. AUDIENCE: OK, good morning, Mr. Rongxin. My name is Leo and I'm from a school in Tangerang. At the moment I've joined in developing a curriculum about computer science, and we are being asked to use AI with our students in their, like, assignments or something like that. And maybe I need your advice about regulating AI use in school: what aspects do we have to be concerned about when making regulations at school? RONGXIN LIU: OK. AUDIENCE: And one more question. Is there any characteristic or sign by which we can tell whether something is a hallucination or not? Because maybe at the moment we here also use AI, maybe for our teaching material, and we need to make sure that material is not an AI hallucination. RONGXIN LIU: OK. AUDIENCE: Thank you. RONGXIN LIU: Thank you for the questions. So the first question is about policy. I'm not an expert on regulation, but you should always refer to the country level; there must be some high-level regulation on whether you can use this kind of technology within the country, within a jurisdiction. But I think, minimally, you should protect students' privacy, because all the things you send to OpenAI, they can keep, right.
So you should do the job of first doing the PII-- let me actually put up the diagram. So whatever you send to OpenAI, your credit card number, everything, is going to be stored in their database. So if a student is using this kind of LLM tutor, whenever they include personal information, you should try your best to strip it away from their prompt. So we actually have a PII anonymization process going. Whenever we detect, like, an email address, a possible phone number, or a name or something, we actually scrub that: we replace it with some gibberish or some placeholder before we actually send it to OpenAI. We also anonymize all the request IDs, meaning that from OpenAI's perspective, they don't know who is making the API call. They cannot trace back which user is asking the question. So I think that's the thing you should consider: privacy. In terms of what kind of AI technology you could use, you should refer back to the country-level regulation. For hallucination: if you are a domain expert, maybe you can tell whether it is a hallucination or not. But for regular people who are not; let's just say I'm getting an answer about quantum mechanics. I'm no expert in quantum mechanics, so I cannot know for sure whether a hallucination is happening. So you really need a subject matter expert, a human, to evaluate the response at your school level, maybe, if you want to validate the response. But companies nowadays, like OpenAI, also have their evaluation teams. They hire subject experts to "red team" their model, evaluate their model, trying to improve the accuracy of the responses. Actually, one thing you can do with these AI models: in the system role, you can say "If you don't know, just say you don't know." That can actually help prevent hallucination, because when the model doesn't know, it will just say, "Sorry, I don't know." But yeah, humans still need to be involved in this whole process.
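The PII-scrubbing step described above can be sketched with a couple of regular expressions. These patterns are simplified illustrations, not CS50's production rules, and real anonymization (names, for instance) needs far more than regex:

```python
import re

# Simplified stand-ins for the detectors described in the talk: likely emails
# and phone numbers are replaced with placeholders before the prompt is sent on.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "<PHONE>"),
]

def scrub(prompt: str) -> str:
    """Replace detected PII in a prompt with placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("My email is duck@cs50.io and my number is +1 617-555-0123."))
```

Only the scrubbed prompt would ever leave the server, so the third-party API never sees the original personal details.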
AI cannot, like, fully replace humans. We just want to augment humans, right, so. We can have one more question. Maybe that one in the back. Oh, sorry. No, that's OK. Post your question on Ed; we will follow up with a response there. AUDIENCE: OK, thank you. My name is Mohamed Hanif. I saw on the internet there is a combination of AI text and video: you can put in text saying anything you want, and then beside it there is, like, an AI person speaking what you typed. So my question is-- I heard Elon Musk say that AI is dangerous, too. My first question to you is: will AI replace teachers in school? And second: is AI dangerous? Thank you. RONGXIN LIU: OK, so the first question is: will AI replace teachers, or educators in general? The second question: is AI dangerous? To answer the first question, I will say AI could not replace educators. That's my claim. It's going to augment humans. That's actually what we're trying to do. We are picking a different route: we are not trying to replace our teaching assistants, we are trying to augment our teaching assistants, so that they can have more quality time with the students. So you have the option to choose which way to go, right. If AI replaces educators, it must be some human's decision, not the AI actually replacing humans. So I hope that answers your question. The second question I can't answer, because I don't know whether AI is dangerous or not. I think AI is helpful; in this presentation, in this workshop at least, in CS50, we think AI is very helpful. Of course, great technology also comes with consequences, so it's really up to us how we use this technology. Is electricity dangerous or not? It could be, it could be not, right. So I hope that answers your question. Thank you. Thank you. [APPLAUSE]