SPEAKER 1: --everybody, how are we today? Doing wonderful. On behalf of CS50, I would like to thank you all for joining us, both in person and online, for another in our series of tech talks. Today, we are fortunate in that we have Leo Zhadanovsky, from Amazon Web Services. And Leo is going to talk to us about DevOps and best practices within AWS. Without further ado, I give you Leo. LEO ZHADANOVSKY: Thanks everyone. So thank you for your time. Thank you for coming. So I'm going to talk to you today about the DevOps story at amazon.com and how software development evolved at Amazon. And then I'm going to talk about how that translates into tools that you can use to do DevOps on your own. Also, we're going to talk about AWS Educate, which helps you get started and gives you credits so you can actually use AWS. So just a little bit about me-- I'm a solutions architect at AWS. I've been at Amazon-- it's going to be five years in February. And in my normal day-to-day job, I help customers implement our services. So whether they want to launch a website or build a mobile app or close their data center and move everything to AWS, I help them achieve that and make sure it's secure. And I help them achieve high availability and cost optimization, all that good stuff. Now, before that, I used to work for the Democratic National Committee. And then that turned into the Obama campaign in 2012. And so if you Google around, there's a bunch of articles and slide shows about that. But basically the Obama campaign was all in on AWS. When I was there, they built about 220 applications on AWS of all kinds. So after that, I was pretty knowledgeable about the platforms, so I went to work for AWS itself. So what are we going to talk about today? So first, we're going to level set. And we'll talk about-- we define what is DevOps. It means a lot of different things to a lot of people. Then we're going to talk about the Amazon DevOps story.
And then we'll talk about code services that we have that you can use and our DevOps portfolio in general. And then, finally, we're going to talk about what AWS Educate is and how it can help you get started. So software today moves really fast. We see that startups can get their software into the hands of millions of customers in minutes. So your ability to move fast if you're a company is really paramount to your ability to either disrupt other industries or ward off disruption. So DevOps, why does it matter? Well, when you implement DevOps-- this is according to a Puppet Labs report-- you get much more frequent deployments. You get much shorter lead times and fewer failures and faster recovery from those failures. So basically, it accelerates how your software development works. First, what is DevOps? Well, DevOps is broken down into three different pieces. There is a cultural part of it, the actual practices that you use, and the tools that you can utilize to adopt those practices. So DevOps is a combination of these three things. And basically it increases your organization's ability to deliver applications and services at a higher velocity, so evolving and improving products at a faster pace than you normally would with traditional software development practices. The speed obviously enables companies to better serve their customers and compete more effectively in the market. So let's talk about the culture piece first. So traditionally, you've got your developers who build code. And then they throw it over the wall to an ops team. The ops team deploys the code. And traditionally, they're siloed. So if you're a developer, you don't know much about how your infrastructure is set up. And if you're an operator, you don't know much about what you're actually deploying. And there's a bunch of problems that arise from that. Teams can blame each other and pass the buck around. And it slows down development.
So an important part of DevOps is really integrating these two things, developers and operators. So what that actually means varies across companies. I see customers where they have a DevOps team that builds tooling that everybody can use to deploy things. I see customers where their developers all know a little bit about operations, so the developers know how to deploy or handle the deploys themselves. So there's different ways to do it. But one thing that it means is shared responsibility. So instead of having one team be responsible for development, another for operations, you have one team that is responsible for building the products or writing the code. Building it, compiling it, testing it, deploying it both through testing phases and staging and production, and then maintaining it once it's up and running. So one team is responsible for everything. And so they own the whole product. And that has a lot of good things that happen with it. And also visibility and communication-- so if you have one team that owns everything, everybody has to know, is my product up? What are the key metrics here? Am I getting enough orders per second? So it forces you to have better visibility into what's going on. And it forces you to communicate, because the whole team has to communicate to work effectively. So what this means is chat ops. It means sometimes having some kind of central communication system, such as Slack or IRC or HipChat or Amazon Chime, something like that, where everybody can get on, whether they're remote or not. So those are some of the cultural benefits. Then the actual practices-- so one thing that we see is we see customers moving away from big, monolithic applications to individual services. So instead of having this black box server where you have a load balancer and a web server and a database all running at the same time on one big box, and if that box goes down you've got a bunch of problems, we see customers moving to microservices.
So what that means is you take your application and you break it down into small pieces. Each piece does one simple thing really well and takes on as few dependencies as possible. And then you can work on these microservices in parallel. And if one piece goes down, it doesn't necessarily mean your whole system goes down. So we find it's a much more agile way to architect your applications. And also, continuous integration and continuous delivery and continuous deployment-- so what that means is you're breaking down how often you deploy code. So every time you make a change, that change gets built. There's a testing suite that goes against your code to make sure that it compiles, that it's doing what it's supposed to. And then you sometimes have an automated deployment. So as the developer, all you have to do is work in your code and commit it to your version control repository. But in the background, all this stuff happens to make sure your code is actually functional, and ultimately, it gets deployed. So one other thing that we see is infrastructure as code. So not just treating your code in a way where you have a software development lifecycle for your code, but actually treating your infrastructure the same way. So we have a tool that I'll talk about called CloudFormation. So basically you can write your whole infrastructure. So your load balancers, your web servers, your databases, whatever else is involved in your application, you write that in a YAML file. And that YAML file is the code for your infrastructure, and you deploy off of that file. So you store this file in a version control system. You control it just like you do with a piece of code. And that has a bunch of advantages. It's self-documenting. You can easily do updates. You can roll back much more easily, because you have a history of all the changes. And in everything that we see, this forces you to do a lot of monitoring and logging.
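To make the infrastructure-as-code idea concrete before going on, here is a minimal sketch of what such a CloudFormation YAML template can look like. The AMI ID, instance type, and resource names are placeholders for illustration, not from the talk:

```yaml
# Minimal sketch of a CloudFormation template: one web server and its
# security group, described as code. The AMI ID and instance type are
# placeholders you would replace for a real stack.
AWSTemplateFormatVersion: "2010-09-09"
Description: One web server behind a security group, as infrastructure as code.
Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI
      InstanceType: t2.micro
      SecurityGroups:
        - !Ref WebServerSecurityGroup
```

Because this file lives in version control, a change to, say, the instance type is a commit you can review, deploy, and roll back, just like application code.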
So you have to know how your application is performing, when there are errors, what those errors mean. And you have to understand the real time performance of your application. So there's a bunch of monitoring tools that we can talk about. But we find that for our customers, these days, there's no more business hours for an application. So it used to be if you're building, for example, an education application, you really only cared if it was up from 9:00 AM to 3:00 PM US time. But now most customers I work with are global. So their services need to be on 24/7. And they need to be able to scale at any given time for unexpected bursts in traffic. So you have to have really good and strong monitoring and logging for that. So let's talk about some benefits of DevOps. So the first is improved collaboration. Because you're breaking down those silos, your team works better. They can deploy their software and deliver it much more quickly and much more reliably. So if you're deploying all the time-- so traditionally, I've seen customers who deploy their code quarterly, so four times a year. Now if you're deploying quarterly, every time you deploy your code, that's a lot of changes, a lot of things that can go wrong. Whereas if you have DevOps and automation, you're deploying multiple times a day. Each change is going to be small and granular. So if something goes wrong, you can easily roll back or you can fix it. There's going to be a lot less room for error. Also, security-- so what you can do is integrate security into your software development lifecycle. So instead of just scanning for vulnerabilities in your code after the fact, after everything's deployed, you can integrate security checks into your software development lifecycle. So you can deploy your application to a staging environment.
And then you can run a bunch of scans to see if it's compliant with whatever your security policies are before it goes to production and do things like that. You can scale. Once you've automated everything, it's much easier to add capacity, [INAUDIBLE] capacity, so on and so forth. And ultimately, this means more speed. So you can deliver software to your customers much quicker. So let's take a look back at Amazon and how this story evolved there. So in 2001, amazon.com, the retail website, was a big monolithic architecture and a hierarchical organization. So when you went to the website, it was probably like a giant script of some kind. There weren't a lot of microservices there. So by the time 2009 rolled around, a bunch of changes had happened. And they started decoupling services. On amazon.com, for example, the shopping cart would now have been a different service. The "hey, what do we recommend for you" part would have been a separate service. So the actual home page is a bunch of different services. They get put together for you, and, as I mentioned, there's a lot of benefits to that. And we also implemented this thing called two pizza teams. So this is a practice that's often used at Amazon to speed up development. And two pizza teams means that if you can't feed your team with two pizzas, your team is too big. So what that means is you have to keep breaking down your teams into smaller and smaller teams. And they can each work in parallel on whatever microservices they're working on. And then they all have speed, and they're not reliant on-- they're not waiting for another team to accomplish something. We've seen a lot of good results from that. And so things went much better this way. And we were releasing code faster than ever. But we still felt like we could improve. So in 2009, we ran this study. It's called Understanding Amazon's Software Development Process Through Data.
And we wanted to find out how long it took to go from code check-in to code being available in our production environment. This included the time that it took to build the code, test the code, and deploy our software. So we learned that this was actually taking a long time, on the order of weeks. And we actually wanted to get it down to hours and not weeks-- hours or minutes, actually. So what we found out is that we were just waiting. So it takes time to write the code. But after a developer would write code, they'd have to submit a ticket for somebody else to build the code. And then that person would have to submit a ticket for somebody else to deploy the code to a testing environment and then staging and so on until it got to production. So the actual action part of this, the actual work, wasn't taking that long. It was somebody writing an email or opening a ticket, all these manual processes that were taking a bunch of time. And this was adding up. So the actual work part was taking minutes. The waiting in between stages was taking days. And that's what made it take weeks. So we wanted to eliminate this waiting, which could then take this whole process down to hours and not weeks. So we started building tools to automate our software release process. And the first tool that we built was called Pipelines. So Pipelines basically took these things where you previously had to file a ticket and wait for somebody to do something, and we changed it to be automated. So it automated the transitions between stages from check-in to production. So obviously we saw a lot of benefits from this. It made deployment much faster. It made it safer. Because it was now automated, the same thing would happen every time. Somebody couldn't fat finger a command and do something bad. It simplified and standardized our software release process. So if we had a developer who went from one team to another team, some things might change.
But the process they were using to release code was the same. So it decreased the amount of time that it took for them to get onboarded. And it visualized the process. So a developer can go on the Pipelines site and see, where's my code that I just committed? Did it fail this test? Is it going? Is it good, so on and so forth. So this continued to work really well. And by 2014, we had thousands of service teams across Amazon. They were all building microservices. They were practicing continuous delivery. We'll talk about what that is in a second. There are many environments. So they're deploying to staging environments, beta environments, production environments. So in that year alone, in 2014, across the company, we did 50 million deploys. So that's-- I forget what the exact statistic is. But I think it's around 1.5 deploys a second. And this is from 2014. So I'm sure if we looked at the current metric, it would be even bigger than that. So every year, we perform a survey of our developers. And the survey asks how do you like your laptop, how do you like the productivity tools that you have, so on and so forth. And in that year, in 2014, the results found that there was only one tool or service that could be correlated statistically with happier developers. And that was this Pipelines service. So we found that continuous delivery means happier developers. So as I said, DevOps means a lot of different things to a lot of people. So where do you start? Whether you're a startup or an existing company and you want to implement this, it's a complex answer. It's going to depend on what your needs are, whether there's regulation or compliance, or what you're trying to accomplish or what kind of product you're building. And doing that transformation can involve organizational changes, cultural changes, process changes. So there's no real right answer. The important part is to do it.
But there's one thing that's uniform across every customer I've seen: they need an efficient and reliable continuous delivery pipeline. And because they do, we ended up building a bunch of services to help them do that. So let's talk about the software development lifecycle that we typically see. It has four different phases. So the first phase is the source phase. That's when you write your code and check the source code in to a version control system, such as Git. You typically do peer review on that code. Then you have the build phase. So the build phase is when you compile your code. Obviously, not all languages need to be compiled. But if you do, you compile it here. You run your unit tests. You run your style checkers to make sure that the code is readable by other employees of your company. You get any kind of code metrics that are important to you. And then you create your artifacts. So these could be Docker images. These could be RPMs, Debian packages, MSIs, system images. So whatever your unit of deployment is, you build it here. Then you have your testing phase. So in the testing phase, this is where you do your integration testing with other systems. You do your load testing. You do your UI testing. And you do your penetration testing or any other kind of security testing. And then lastly is the push to production, which is the ultimate end goal. So you deploy to staging environments, testing environments, eventually to production. So let's see how these phases map to continuous integration, continuous delivery, and continuous deployment. So continuous integration, what that means is you've got a Git repo, let's say, for your code. And every time you commit code to the mainline branch of that Git repo, it gets built. So there's a build that runs and either errors out and then you fix whatever caused it to error out, or it works and we know that the code is actually buildable.
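That commit-build-test loop can be sketched as a toy script. The file names and the tiny app here are invented for the example, and a real setup would run these steps in a CI service on every mainline commit, not in a local script:

```shell
# Toy sketch of what CI does after a commit lands on mainline:
# a "build" step (here just a syntax check) followed by unit tests.
set -e                                   # stop at the first failing stage

mkdir -p ci-demo
cat > ci-demo/app.py <<'EOF'
def add(a, b):
    return a + b
EOF
cat > ci-demo/test_app.py <<'EOF'
import unittest
from app import add

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
EOF

python3 -m py_compile ci-demo/app.py                 # "build" stage
(cd ci-demo && python3 -m unittest -q test_app)      # "test" stage
echo "all stages passed"
```

If either stage fails, the script stops with a nonzero exit code, which is exactly the signal a CI service uses to mark a commit red.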
Then we've got continuous delivery. So continuous delivery builds on continuous integration-- it means that when you go from source to build, you then have tests that happen after it's built. And then you deploy your code to a staging environment. And everything's automated right up before you get to production deploy. So when you get to a production deploy, you have a manual gate. So somebody has to go in and say, OK, I've reviewed this change. This is OK. I hit approve, and then it goes to production. So most of our customers that are trying to do some form of DevOps, they're trying to go for continuous delivery at first. Then there's continuous deployment. So continuous deployment is when you go from source to build to test to deployment to production without any manual gates. So everything is automated. So you're committing code. And as long as it passes all the tests, you eventually get to production. So what we see is our customers, when they get really good at continuous delivery and they're really sure their tests are good, their automation is good, that's when they go to continuous deployment. Sometimes they never go, because for various reasons they just want a manual gate. But sometimes they go to continuous deployment. So we've got a set of code services that can help you do this. So we'll be talking today about CodeStar, AWS CodePipeline, AWS CodeDeploy, CodeCommit, and CodeBuild. Let's see how these match up with the software development lifecycle. So CodeCommit is our managed Git service. So it helps you with the source stage. CodeBuild allows you to build your code. That goes with the build stage. For testing, you typically still use third party tooling. And for deployments, you use CodeDeploy. So then to do the transition between all these stages, you have CodePipeline, which is the customer-facing version of this tool that we built called Pipelines.
And then if you don't want to have to manually set this stuff up, we have CodeStar, which will give you an integrated dashboard and templates where you can get all these services up and running really quickly. So before we go on, I'm actually going to kick off a demo that's going to work in the background while I talk. And then I'll go back to it. So in my demo, I actually have two demos here. So what I'm going to do is I have two Git repos. And I'm just going to make a change to the Git repos to kick off my pipeline. So I'm going to make a change to this read-me file. I'm going to commit the change. So hopefully, when you commit Git changes, you have more descriptive comments about what you've changed. So I'm going to push the change. So while that is going, I'm going to do the same thing on my second one here. So I'm going to again change my read-me file, just add some exclamation marks here. I'm going to push those changes up. So let's go back now to my slides. So aside from our code services, we have a bunch of other stuff in our DevOps portfolio that can help you. So for infrastructure as code, we have a service called CloudFormation. So that, as I mentioned earlier, allows you to basically write a JSON or a YAML file that describes your whole architecture and your infrastructure. And then it deploys off of it. We have something called OpsWorks, which is essentially a managed service for Chef, which is a commonly used configuration management tool. SPEAKER 2: So your customers which adopt this methodology, do they just have to forgo peer review? LEO ZHADANOVSKY: No, for peer review they-- no, they still do it. We don't have any services for peer review. So they typically use third party services to do the peer review part. So you typically still have that involved. So you typically-- SPEAKER 2: Can you [INAUDIBLE] source to deployment [INAUDIBLE]? LEO ZHADANOVSKY: Well, typically the way to do it is there's different branches in your Git repo.
So it only goes to the mainline branch, which is what triggers your pipeline after a code review. So then there's OpsWorks, which allows you to do Chef. And then for monitoring and logging, we have a service called CloudWatch, which gives you metrics on your application and your infrastructure and gives you logs. There's a service called CloudTrail, which gives you an audit log. So everything you do in AWS is API driven. Even if you do it through a web console, you're just making API calls through the web console. So all those API calls get logged. And you get those logs so you can see what's going on in your environment later on. Then there's AWS Config, which tells you, what is the state of my environment at any given point in time? So I can say, OK, an hour ago, what did I have running? What security groups did I have attached to it, so on and so forth. And then we have X-Ray, which is a distributed tracing tool. So let's talk through our code tools here. So for building and testing your application, we have a service called CodeBuild. So CodeBuild is a fully managed build service. It compiles your source code, runs any tests that you tell it to run, and ultimately produces software packages. It scales continuously and processes multiple builds at the same time. So it's a fully managed service. You don't have to spin up a cluster or anything. You just put a config file in your repo. And it'll do the rest after that. So you can also do custom build environments with it. So the way it works is through Docker containers. So we have a bunch of pre-built Docker containers. You can also bring your own Docker container if you have some kind of customized build environment. And so it'll spin up the Docker container and run the test in that environment, give you the output of the test, and/or create whatever artifacts it needs to create and then shut down the container. You pay by the minute. So you pay for only what you use.
And it works with CodePipeline, which we'll talk about in a second. And it works with Jenkins, which is another open source CI tool that we see a lot of our customers use. So the way it works is it downloads the source code, and it executes the commands in this buildspec file, which I'll show you. And it executes them in this temporary container. And it streams the logs to our logging service so you can see exactly what's going on with your build. And then it uploads the artifacts that are generated either to an S3 bucket-- S3 is our object storage service-- or it can upload them to a Docker repo, such as Docker Hub or the EC2 Container Registry. So how do I automate my release process with CodeBuild? So it's integrated with CodePipeline. It is API driven, like most of our services. You can create your own build environments, as I mentioned, and there's a Jenkins plug-in. So I have a lot of customers who use Jenkins. So you can just plug it into Jenkins, and use it with that. So let's take a look at how this actually works. So this is a buildspec file. This is what controls the build. So in this buildspec file, the first part here is we have the environment variables. So this defines what environment variables will be available to this container that gets spun up. And then you have different phases. So there's an install phase, a pre-build phase, a build phase, a post-build phase. So in each of these phases, you can put whatever commands you want. So in this example, I'm updating-- I'm getting the latest apt updates. So this is an Ubuntu container. I'm installing Maven. I'm not doing anything in pre-build. I'm doing a Maven install and build. And then eventually, at the end of it all, I get a JAR file that I can deploy. So we can get much more complicated than this. This is just a simple example here. So that's pretty simple there. And let's talk about testing in general. So there's building, and there's testing. So testing is obviously both a science and an art form.
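For reference, a buildspec along the lines of the one just walked through might look roughly like this. The JAVA_HOME value and the artifact path are illustrative guesses, not the exact file from the slide:

```yaml
# buildspec.yml -- sketch of the file CodeBuild reads from the repo root.
version: 0.2
env:
  variables:
    JAVA_HOME: /usr/lib/jvm/java-8-openjdk-amd64   # example value
phases:
  install:
    commands:
      - apt-get update -y          # the build image here is Ubuntu-based
      - apt-get install -y maven
  pre_build:
    commands:
      - echo Nothing to do in pre_build
  build:
    commands:
      - mvn install                # compiles, runs unit tests, packages
  post_build:
    commands:
      - echo Build completed
artifacts:
  files:
    - target/my-app-1.0.jar        # placeholder artifact name
```

Each phase is just a list of shell commands run inside the temporary container, and whatever the artifacts section matches gets uploaded when the build succeeds.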
But things you typically want to get out of testing your code-- so you want to make sure that the code works, that it does what it's supposed to. You want to make sure you catch any kind of syntax errors right away. You want to make sure that you're standardizing any kind of patterns in how you write the code so it's readable across your whole company. You want to reduce bugs due to faulty logic in the code. And you want to check for security, so you need to make sure that your code is secure-- that it's not allowing things like SQL injection or code injection attacks. So we find that industry experts agree that you should focus your testing: 70% of your testing on unit testing, then 20% on service testing, and 10% on UI testing. So typically what you can do is your unit testing with CodeBuild. And for service and UI testing, you want to use third party tooling. So let's talk about the pricing. So the pricing is you pay by the minute. So you only pay for what you use. And there's different container instance sizes, essentially. So it depends on what you're building. If you're building Java, you might need more RAM. So you might want to use a bigger build container. But these are the prices per minute for the different container types. And there's a free tier of 100 minutes. Now let's talk about deployment. So CodeDeploy-- CodeDeploy is a service that allows you to automate your deployments to EC2 instances or on premise servers. So it handles the complexity of updating your application. And what that means is, in AWS, you might have two web servers up. You might have 10. You might have 20, if you're using auto scaling, because you're scaling up and down based on how much capacity you need. So it's a lot harder to just run a script to deploy to everything, because you don't know what everything is. So CodeDeploy handles this for you. It has automatic integration with auto scaling groups, also tagging.
So you can say, OK, every instance that is tagged with this tag, deploy to it. It allows you to avoid downtime during those deployments. So CodeDeploy has integration with our load balancing products, so our elastic load balancer and application load balancer. And so what this means is if you have a load balancer with some web servers behind it, it can say, OK, do a deployment to one of those web servers. It can take it out of the load balancer, do the deployment, make sure it's healthy, add it back in. And it can do that until it's done. You can also do blue green deploys. So instead of touching your existing web servers, it can spin up a whole new group of web servers, deploy to them, make sure they're healthy, shift traffic over to them. And then, after a period of time that you define, it can terminate your old web servers. So that's another pattern. And it's supposed to allow you to avoid downtime. It also supports rollbacks. So if something goes wrong, if it detects a failure, you can roll back to your old code. Or if you're doing blue green deploys, just roll back to your old instances that haven't been touched. And you can deploy to EC2, or you can deploy to on premise servers [INAUDIBLE]. The only difference is that on EC2 it's free. On premise, you pay per deployment. And it integrates with third party tools and AWS. So if you're using something like Ansible to deploy on your instances, you can still use it. You just use CodeDeploy to kick off your [INAUDIBLE]. So let's take a look at what this looks like. So this is controlled by an appspec file. The appspec file has a files section. And so in the files section, you define, what is it that I'm actually copying here? Then you have a permissions section. That's pretty self-explanatory. It's like, how am I going to set the permissions on what I'm copying and what I'm deploying? And then you have a bunch of lifecycle hooks. So the hooks, these are what actually do the deployment.
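Putting those sections together, a minimal appspec sketch for an EC2 deployment might look like this. The destination path, owner, and script names are invented for the example:

```yaml
# appspec.yml -- sketch of a CodeDeploy spec for EC2/on-premises deployments.
version: 0.0
os: linux
files:
  - source: /                        # what to copy out of the revision bundle
    destination: /var/www/html       # placeholder destination
permissions:
  - object: /var/www/html
    owner: www-data                  # placeholder owner/group
    group: www-data
    mode: 755
hooks:
  BeforeInstall:                     # lifecycle hooks run your own scripts
    - location: scripts/stop_server.sh
      timeout: 300
  AfterInstall:
    - location: scripts/start_server.sh
      timeout: 300
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 300
```

The files and permissions sections are declarative; the hooks are where your scripts actually stop, start, and verify the application during each deployment.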
And these can be any kind of executable. It can be a shell script. It could be Ansible, Chef, Puppet, whatever it is that you want to use. So your different hooks, where you can enter scripts, they get run. You can also choose a deployment speed. So if you're deploying to production and you want to be super safe, you can deploy to one instance at a time, or you can do half at a time. You can do all at a time. And there's different kinds of customizations beyond that. And you can choose where you want to deploy. So you can deploy to auto scaling groups of instances. You can deploy to just certain instances with tags on them. And so again, there's different lifecycle hooks. The yellow ones are the ones where you can script everything. And then you also get triggers and event notifications. So you can set up different event notifications for, did this deployment fail or succeed? Is there a rollback? And so you can send these to email, to Slack, to really anywhere else. So now let's talk about building a CI/CD pipeline. So we have this service called CodePipeline. So CodePipeline is a continuous delivery service. It allows you to model and visualize your software release process. It is a version of that Pipelines service that I talked about earlier. And it handles the builds, the tests, the deployment of your code every time there's a code change. It also integrates with a lot of third party tools and AWS services. Let's take a look at what a pipeline actually looks like here. So this is a very simple pipeline. We've got three stages here. We've got a source stage, a build stage, and a deploy stage. So the source code is stored in GitHub. Then we kick off a build in CodeBuild. And then we deploy it using Elastic Beanstalk, which is an AWS service that allows you to deploy your code pretty easily. So again, this is the whole pipeline. This is a stage. And then in [INAUDIBLE] stage, you have actions.
And then there's transitions, so that when you go from one stage to another, that's a transition. And so you can have parallel actions. So in this example, we're building our code. And we're kicking off a Lambda function. So Lambda is our serverless product where you can give it a function in Python, Node.js, Java, or .NET. And you can just execute it. So in this example, we're notifying our developers there's a build going on through, say, Slack. You can also have sequential actions. So now, after all this happens and it succeeds, we're going to do an API test using a third party service called Runscope. And it has manual reviews. So in this example, we're building our code. Then we're deploying it using Elastic Beanstalk to staging. And then somebody has to go in and hit approve. And then they hit approve. And then it goes on to deploy it to production. So there's different service integrations. So for AWS services, for source code, it works with storing your source in S3 and CodeCommit. You can invoke a Lambda function. So you can put custom logic in Lambda functions. And for deployment, it works with CodeDeploy, CloudFormation, Elastic Beanstalk, and OpsWorks. And there are a bunch of third party integrations with this as well. And so you can also do custom actions, either through a plug-in or through Lambda functions. So things we see are doing mobile testing, updating tickets. So if you have some kind of JIRA or other ticketing system, you can have it automatically update the tickets during deployment. Provisioning resources-- so with CloudFormation, you can deploy a CloudFormation stack to provision extra resources during your software release process. Updating any kind of dashboards, sending notifications, or initiating a security scan. Again, that's part of this. These are all things you can do with custom actions. So now let's talk about CodeCommit. So CodeCommit is managed Git.
You use your standard Git tools to interface with it. It is built on three AWS services -- DynamoDB, S3, and our Key Management Service, which I'll talk about in a second. But basically, all your code is encrypted at rest with keys that you create. And there's no repo limit. So you can have big repos, big files in your repos, and a lot of files in those repos. And it has post-commit hooks. So you can trigger an SNS notification or a Lambda function after you do a commit. So let's take a look at how this works. So with CodeCommit, as a developer, you're just working with Git, just like you normally work. So you pull. You push your code. You do commits, the same thing. In the background, in the CodeCommit service, your objects are stored in S3. The index is stored in DynamoDB. And the encryption key is stored in our Key Management Service. So it is the same Git experience as normal. So again, I did this earlier at my terminal window. So you clone a repo. You update a file. You push the file. In the background, it's working with CodeCommit. So that's CodeCommit. And then for the pricing -- so CodeCommit costs $1 per active user per month, plus however much storage you end up using. CodePipeline is $1 per active pipeline per month, but the first month of any pipeline is free. CodeDeploy is free on EC2 and costs $0.02 per on-premises deployment. And CodeBuild is billed per minute, and the rate depends on what instance type you're using. So before we get to our live demo, let's talk about CodeStar. So CodeStar allows you to quickly develop, build, and deploy your applications on AWS. And it also helps you manage your developer teams. And so there's a bunch of templates in here. So let's say I have a Ruby on Rails project or a Node project. I can just click on the project, and it deploys the resources I need. So it sets up all these services I talked about and deploys them. It connects to my IDE.
So it works with Visual Studio, Eclipse, or the command line tools. And then I get a dashboard. So in that dashboard, I can manage my users. So I can give users access to SSH into my instances that are spun up. And that dashboard shows your code pipeline. It can connect to JIRA, so it can show you your to-do list and your issues. It has a Wiki tab. It has a Monitoring tab, so it shows you CloudWatch metrics. It can show you your commit repo history. So that's CodeStar. All right. So let's check out what my demo is doing here. So a little bit ago, I pushed these files. So let's take a look at CodePipeline here. So this is my pipeline. So in this example, what I'm doing is I have a website. This website has a load balancer, an ELB. And this website is a Java app. There's no database. There's three instances in it and an auto scaling group. It's a bespoke site for dire wolves, although it looks like the pictures are of dogs. And so we have a deployment pipeline for this. So it's going to connect to my code repo. So here's CodeCommit. So it's connecting to this repo. Actually, that's left over from my other demo. But it's a standard Git repo. And it's building the demo in CodeBuild. Once the build is done, it goes to CodeDeploy. So as soon as I made the commit eight minutes ago, it would have picked up -- 13 minutes ago, now -- it would have picked up that I did the commit. And it would have taken the files from that commit, zipped them up, and put them into an S3 bucket. And then it kicked off this code build. So let's take a look at CodeBuild here. So I'm going to refresh this page. Here we go. So 12 minutes ago, it completed a code build. So in my code build, it would have basically built this JAR file here. So I can see the history of what it was doing. I can see the last 1,000 lines of my log. I can also see the whole log here. So here's my logs. So this is if I want to see what happened or if something broke.
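(For reference, a CodeBuild project like this one is driven by a buildspec.yml in the repo. The sketch below is hypothetical -- it assumes a Maven-based Java project, which the demo may not actually use.)

```yaml
version: 0.2
phases:
  install:
    commands:
      - echo "Install build dependencies here"
  build:
    commands:
      - mvn package            # assuming a Maven build that produces the JAR
artifacts:
  files:
    - target/*.jar             # the artifact handed to the deploy stage
```

CodeBuild runs those commands in a fresh container and streams their output to the logs shown here.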
You get access to the full log here. So then after the build was finished, it would have done a deploy. So this is my CodeDeploy environment. I have only one group here. I'm deploying straight to production. We can take a look at the actual deployment here. So let's take a look here. So my other deployment is still in progress. Let's take a look at this one. So we've got three instances that it was deploying to. And again, this was an auto scaling group. So I could have six instances. I could have 20. It would have deployed to all of them. And I see a timeline. So all three are done. So in the timeline here, we can see exactly what happened. And if anything had failed, it would have shown me the logs of why it failed. So I did a deployment. It would not have involved any downtime, and I would have had my website still up. Let's take a look at my second demo here. So this one's a bit more complicated. So this is a WordPress site. So it's got an RDS database for MySQL. It's using ElastiCache, so it has Memcached connected to it, and there's a content delivery network in front of it. It's WordPress, so it's in PHP. I'm using Nginx. I'm using MySQL. I'm using PHP. I have an auto scaling group with an application load balancer in front of it. So I have a load balancer. And I'm offloading my static assets to S3. So it's a more complicated setup. And so for my pipeline here, let's take a look. Let's go to my pipelines. So what it's doing here is, again, it's picked up my source code from CodeCommit. So let's take a look here. So here's my appspec file in CodeCommit. And so this is what's telling it what to do during the deploy. And I have all these scripts I'm running here. So I'm running security updates. I'm installing dependencies. I'm changing permissions. I'm validating the servers. And this is all in the form of shell scripts. So let's look here at my scripts. So I'm just going to look at my validate script.
And this just curls itself -- the local web server -- to make sure it's OK. So anyway, it's going to do whatever I tell it to do through these scripts. And so currently -- so it gets the code from CodeCommit. It zips it up. Then it runs Jenkins. So Jenkins is, again, an open source CI/CD tool. So let's see if I can remember my password here. Nope, maybe I can't. Let's see. So I forgot my password for Jenkins. But basically what Jenkins does here is it's running a test. It's unzipping the code. It's running a test on it. If the test passes, it goes on to the next stage. So that already succeeded. So now we're on CodeDeploy. So now it's deploying code to my staging environment. So it's not quite done. But I'll show you what it -- it just finished. So here's the staging deploy. And this is a blue-green deploy. So I had three original instances. And it spun up three replacement instances here. And it installed the application on those replacement instances. It rerouted the traffic to the replacement instances and terminated the original instances. So here, we have six total instances -- three old, three new. And again, we can see everything that happened here. So let's take a look at one of these replacement instances. So again, this is a more involved deployment. So what's going to happen now, after this succeeded -- so now we're on the load testing stage. I'm using a third-party service called BlazeMeter. So after I've done a deployment to staging, it's going to do an automated load test. So it's going to make sure: OK, you're deployed. Can this handle some traffic? So it's going to throw some traffic at it. And when that succeeds, it's going to go to a manual approval. So it's going to say: OK, this is all good. You've completed testing. You've completed a load test. Do you approve this going to production? So once it gets to that stage, which will take a few minutes, I would hit approve. And then it does another deploy to production.
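A validate script like the one just described can be sketched as a small shell health check -- a hypothetical ValidateService hook that curls the local web server and fails the deployment (non-zero exit) if it doesn't answer with HTTP 200. The function name and URL are made up for illustration.

```shell
#!/bin/bash
# Hypothetical ValidateService hook: curl the freshly deployed web server
# and fail the deployment (non-zero exit) if it is unhealthy.

check_health() {
  local url="$1"
  local status
  # -s silent, -o discard the body, -w print only the HTTP status code
  status=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [ "$status" != "200" ]; then
    echo "Health check failed: HTTP $status from $url" >&2
    return 1
  fi
  echo "Health check passed: $url"
}

# In the real hook, the instance would curl itself, e.g.:
# check_health "http://localhost/"
```

Because CodeDeploy treats a non-zero exit from this hook as a failed deployment, a broken instance is caught before traffic is routed to it.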
And then, finally, we're in production. So this is a more complicated pipeline. It's more similar to what a company would do to deploy their code. So that's the demo. I'm going to save room for some questions at the end. But let's talk about how you actually get started. So I'm going to go back to my slides. So first of all, we've got a DevOps blog. And the DevOps blog covers a lot of the information I talked about, and we also have blog posts about how to do various things related to DevOps. But how do you get started? Well, that's what AWS Educate is for. So Educate is a program that gives you a bunch of resources to get started -- things like grants for access to AWS. So I'll have a sign-up link for you to be able to get $150 in credits to start using AWS. It gives you open course content on how to get trained in AWS. It gives you communities to collaborate with other people trying to get trained in AWS. And it allows you to do professional development to get cloud skills and ultimately find jobs. So we want to encourage student entrepreneurship. We want to have growth and credentialing for AWS. We want to accelerate hiring pipelines -- a lot of people are trying to hire employees with cloud skill sets. So here, let me play a short video to tell you more about Educate. SPEAKER 3: Welcome to AWS Educate's cloud career pathway. AWS Educate helps you create a pathway into innovative and lucrative opportunities in the rapidly expanding cloud industry. On AWS Educate, you will discover the skills necessary to advance into cloud careers, explore content, test your knowledge, earn micro-credentials such as badges and certificates of completion, and even have the opportunity to apply for jobs and internships in the cloud. Start off by building your profile, adding your resume, classes, degrees attained, and other facets of your experience. Then, select a cloud career field. From there, we'll personalize your plan by providing content that matches your pathway.
As you consume that content, we'll prompt you to take knowledge checks of three to five questions in order to gauge your comprehension. After you've completed 50% of the knowledge checks, you'll have a chance to earn an AWS Educate badge. Then uplevel your profile by completing a project and a final assessment, and you can receive an AWS Educate certificate of completion. Take applicable courses at your school or online to grow your knowledge and fill gaps in your skill set. All of these achievements can be added into your portfolio. Then put it all together. Apply for a job or internship at Amazon or at one of our customers or partners through the AWS Educate job board. Download your portfolio and send it to your potential employer to get your ideal job in the cloud. So what are you waiting for? Start your career pathway today. [MUSIC PLAYING] LEO ZHADANOVSKY: All right. So with Educate, there's different career pathways. There's currently 27 of them, ranging from solutions architecture to programming, web development, big data, Hadoop, and more. There's 30-plus hours of content for each one of these, including access to labs you can do, as well as our technical essentials training, which normally costs $600. And we've got over 1,000 educational institutions participating in AWS Educate, including the top 10 global computer science and information systems institutions. And here's just a sampling of customers for AWS Educate. So to get started, I made some simple links that are easy to use. There's a link for signing up, and that link will get you $150 in credit codes. And I believe this is also going to be emailed out to anybody who attended. And then there's a guide for how to sign up here as well. So that's all I had in terms of my presentation. But does anybody have any questions about anything? Yeah? SPEAKER 4: Do you have a plan for open source projects? LEO ZHADANOVSKY: Sorry, can you repeat that one?
SPEAKER 4: Do you have any plans for open source projects? LEO ZHADANOVSKY: Do we have any plans for open source projects? So it's a good question. So we actually have, if you search for AWS Labs, a GitHub repo called [? database ?] [? ops, ?] where we put our open source projects that you can implement. We also have contributed to a bunch of open source projects in general. But I think your question is, do we have a pricing plan for open source projects. SPEAKER 4: Yeah, for example, with [? Git ?] [INAUDIBLE].. That was one thing that [INAUDIBLE]. LEO ZHADANOVSKY: Right. So we have a free tier. So basically if you sign up, you get a bunch of stuff for free. So you can launch an instance. There's things you can do without getting charged. And that can actually take you pretty far. And then on top of that, if you sign up with that Educate link, you get $150 in credits. So we don't have a specific program for open source projects. But that should give you some credits to get started. Any other questions? OK. Well, thank you for your time. I'll hang out here for a little bit just if anyone else has anything. But thanks for coming. SPEAKER 5: Thanks. SPEAKER 6: Thank you. [APPLAUSE]