DAVID: So thank you all so much for coming, and to those of you who've been tuning in online as well. We're so glad to be joined by our friends Dan and Michael here from Leap Motion, whose company has very generously donated some hardware for the course so that you guys can do cool things with this device. Without further ado, Dan and Michael. DAN GILL: All right, thanks. Thanks, David. Thanks, folks. Nice to see you all. My name is Dan Gill. And as David described, I'm with Leap Motion. This is Michael Sutherland. Since about 1/10 of our company is actually named Michael, we call him Kiwi, and once you hear the accent you'll understand why. But we're thrilled to be here today to present to you folks and you folks online, and tell you a little bit more about Leap and the development environment and our developer community, and how it may be able to impact some of the things you're doing as you get towards the end of the semester. I'm going to start off with just a brief introduction for context about the company and some of the industry examples that we're seeing. And then we're going to jump right into Michael's bit. Michael runs our developer community. There are 70,000-plus folks who have accessed our developer portal, gotten access to the Leap software development kit, and are actively building all different types of applications that Michael will talk through. Personally, I run our enterprise business, which means all the commercial use cases for the Leap in a variety of industries, and I'll talk briefly about that. So at a high level, most of you are probably familiar with some form of 3D motion technology. You've seen the Kinect, or you've seen the commercials for the Samsung phones with the swiping. And at some point you've probably tried one of these platforms or seen them in action. We really feel like we've broken new ground. And it's all math, so it's a very software-driven solution. 
It's all proprietary-algorithm based, and it's allowed us to do a number of things that are different from the others in the space. We've reached a level of accuracy that you'll see in the demonstrations that's far beyond what other folks have been able to do. Accurate to 1/100 of a millimeter, we can track the palm position and the fingertips for as many hands as fit in the field of view. And again, that accuracy level's opened up a lot of application opportunities that haven't existed before. It's entirely embeddable. Because it's such a software-driven solution, this little piece of hardware that you see here is the device. These are going to be available to you students. We have 30 that we've donated to the group. If you decide to do a project based on the Leap, you'll be able to take these out on loan and spend time with them. We also just announced an embedded version. So HP will actually start shipping laptops-- or has started shipping laptops-- with an embedded version of this device in it. And because of how software-driven this is, the hardware is incredibly simple, and Michael's going to walk you through what's in it. We've created a very content-rich environment. So those 70,000 developers are building applications. You folks will have opportunities to build applications and possibly even get them into our application environment-- it's called Airspace. We've got north of 100 apps there, and many, many, many more in the certification process and on their way to being put to use. You'll see that it's incredibly powerful in terms of its speed. So you'll see there's no latency between when you do something in the field of view and when something happens on the screen. If you've played around with the Kinect or others, you'll notice a bit of a lag between when you actually make a movement and when something happens. This lack of latency makes a huge difference in what you can actually do with this platform. 
And then one thing that we think is really important: we wanted this platform to be accessible to everyone-- as many people as possible. Having such a software-driven platform has made the device very inexpensive. It's a very simple piece of hardware with very simple, commoditized components in it, which allows us to keep it very inexpensive and very accessible to anyone who wants to take part in the platform. So as you'll see, there are really three main components in the platform. We've got the peripheral that I talked about, the controller. It has a USB connection to connect to any type of computing device with the supported software. We've got our software development kit and drivers-- so there's a set of drivers that run on the Windows or Mac machine that you're connected to, and then there's the software development kit that we've made available to everyone at no cost on our website to access and build applications with. And then finally we've got our Airspace app store, where there are third-party applications-- both free and paid applications. And it's everything from personal productivity to artistic, like music and painting and drawing, to business applications like CAD software and others. So I'm going to go through these quick. But I thought it would be useful, as you think about the platform and the languages that you can develop in and how they apply, just to run through a couple of examples from the industries that we're seeing. So these are areas where people are making use of the Leap platform to improve applications or improve outcomes in those industries. Education, as you can imagine: interactive displays, integrating with curriculum, like you folks are doing with your computer science curriculum. Lots of applications for special needs students. 
So folks who can't, either for physical or cognitive limitations, interact through a keyboard and mouse are now able to get social interactions, interactions with computer applications, and do things they never could before. And then a lot of universities doing user interface research for various industries have made great use of the device. Health care is an exciting sector for us that I thought would be important to touch on briefly. You can imagine sterility is of the utmost importance throughout all aspects of health care. In an operating room today, a surgeon might have to have an extra person there, or unscrub and take their gloves off, to be able to manipulate MRI images or CT scans or important patient information while they're in surgery. Very inefficient, and it could compromise sterility if it's not done right. This type of environment now allows you to interact with computer applications in a completely touchless manner, as you'll see in the demos, with gloves on. So they can leave the surgical gloves on. They can access the images and do everything they need to do in a much more efficient way. We think there are some important applications there. Information access in what I call "germ-rich" areas-- so hospitals, ATMs, all different types of areas where you don't want to touch something but you want to access information-- this has become important. Measuring regression as a result of a disease: if someone's losing mobility in their hands or their arms, or in movement, you can measure that because of the level of accuracy. Or also progress-- so if you want to measure the progress of a drug, or the progress in recovery from something like a stroke, you can very accurately do that. And so those are some examples. Data visualization is another interesting space. I'm sure you've heard a lot about big data. Everybody talks about big data. Well, those large data sets in various industries have created real complexities around user interfaces. 
And how do you interact with that data, find correlations, find actionable information, share it with colleagues-- it's a huge challenge. As the amount of data grows, that challenge only gets bigger. 3D navigation with natural hand movements becomes a really interesting opportunity in that world, and we've seen a lot of interest there. Manufacturing is another one. We're going to show you some videos from the folks at SpaceX using this in the manufacturing process. Also, the manufacturing floor is really dirty, and so they've destroyed mice and keyboards, and touchscreens aren't a great solution. But they need to access things like their ERP systems and other platforms on the floor, and it becomes a challenge for them. And then just a couple more before I turn things over to Michael. Retail-- so hopefully at some point, you'll go into a store and they'll have a Leap-enabled screen or kiosk where you can get access to a product, product options, or shop online if they don't have stuff in stock. I like to talk about it as non-intrusive consumer engagement. I've been in sales since I graduated from college. But we all know when we walk into a retail environment, a lot of times you get pounced on by three or four people. We think using this type of technology, you can create real physical experiences with products and options and colors and different things, without having to have a bunch of people jumping on top of customers when they get into the store, and create some interesting things. Desktop productivity-- you'll see some basic opportunities to work with productivity apps like PowerPoint, to do web browsing, to interact with your operating system. All without you having to use a mouse or keyboard-- or in addition to your mouse or keyboard, being able to get some different types of interactions. 
There are a lot of business applications that have inefficient user interfaces, or have interfaces that could do a lot more if they were able to take advantage of the 3D space instead of just a flat 2D user interface, so we think there's a lot of opportunity there. Salesforce.com might be a company you've heard of. They make customer relationship management systems. People like me in sales use them all the time, every day. But when you work with a big account, you may have hundreds and hundreds of records. And it's really hard to get a sense of the organizational structure, or all the activities of what's happened inside of an account, because it's a very flat 2D user interface. So we think there's a lot of opportunity to improve the front end of various business software. And then in other B2B-type applications, we've seen biometric authentication. So the idea of holding your hand in the field of view, and it's scanning your hand all the way down to blood flow. And then being able to use that later on for authentication-- to do transactions, access systems, log in to your laptop, control your home automation system-- you name it, there are a lot of applications. Command and control. So you can imagine, this is more of the "Minority Report"-style thing people talk about when they think about Leap. The idea of someone in a command area where they've got five or six screens with video or other types of content. They need to navigate across applications and call up different videos, and pull information in, and do all sorts of interesting things. And then finally CAD. Those environments have been a big early adopter of the Leap platform, in being able to create a more natural way to interact with models of things that you're creating in the design process, or adjusting after something's been built, or things of that nature. So that was just to give you a brief context on some of the industrial applications for the Leap. 
I'm sure you can imagine all the consumer applications, and if you've seen the website you know what those are. But as you think about programming or applications that might be interesting to look into, those are some of the areas where industries are paying a lot of attention to this. We were at Children's Hospital before we came here, talking to them about a number of really interesting applications around surgical processes and training and simulation and all different things. So there are a lot of really interesting opportunities to use the platform and the development environment. And so hopefully that's good context for you folks. Kiwi's the smart one here. So I'm going to get out of the way and let him talk you through our developer community, the development environment, and all the resources that are available to you folks if you choose to work with the Leap platform. So, thank you. MICHAEL SUTHERLAND: Cool, thanks. So you can see there's really no shortage of opportunities there. But one of the things we see a lot is people sit down with Leap and they're like, where do I start? So hopefully I can go through a few of the first steps of where to begin with all this. Because a lot of people just see a lot of white space and ask, where do I start? So my name's Mike. As Dan mentioned, I'm kind of referred to as Kiwi. I'm from New Zealand, as you may be able to tell from my accent. I've lived in San Francisco for a couple of years now. Did my electrical and computer engineering degree back in New Zealand, so I've sat in the same seat that you guys are in. I handle platform growth and partnerships for our developer programs team. I'll tell you a little bit more about what the developer programs team means in a little bit. But basically, as Dan mentioned, this is the peripheral you see here. So this is the history of where it came from. You can see there we started back with a very, very early prototype. 
Now, all that's in this, you can kind of see here a little bit. Well, it's a bit hard to see on this display, but really all you've just got is a couple of infrared optical sensors and a couple of infrared LEDs. The hardware is actually incredibly simple, and that's why we're able to keep it so low cost. The magic is really what's happening on the computer and the software in the driver layer, and that's really where the breakthrough for the company came. So I joined and the developer programs team started around about here, halfway between. And what we did was these first kits that you see at the bottom there, they're the first developer units. And we actually sent out around 12,000 of those to developers that had contacted us so that they could get started working with the platform. And that's really been a great seed for the community, and we've had a lot of great stuff developed over the last year. And you'll see that when you have a look at Airspace, our apps store. So how many of you have actually heard of Leap Motion before? So a few of you, yeah. So that's good. So honestly, what was the first thing you guys thought of when you heard of Leap Motion or you saw the videos of what it does? Kind of "Minority Report," "Iron Man?" Yeah, we get a lot of that. And definitely the day will come when we're all sitting there commanding the world with our hands, and that's going to be exciting. Right now, that's not the absolute situation that we're trying to build, but we're going to get there. But I think it's still a good opportunity to hear from the real world Tony Stark, and Dan touched on that as well. SPEAKER 1: Right now we interact with computers in a very unnatural and [INAUDIBLE] way. And we're trying to create these 3D objects using a variety of 2D tools. And it doesn't feel natural, doesn't feel normal the way you should do things. 
So we started playing around with the idea, using a few of the things that are available out there, such as the Leap Motion and Siemens NX, which is what we used to design the rocket. And we wrote some code to integrate the two. And we started off with what you see here, which is a wireframe of a Merlin rocket engine. And working through this, I can go ahead and grab it, and I can rotate it in multiple dimensions. And then what I can do is I can put another hand in there and I can zoom in and out on the wireframe. And I can also translate it. So I can move it around the screen and then zoom and translate. And this is what we started off with a few months ago. You can also spin it and then catch it. So this is kind of a fun way to interface with what is really a very complex model. Now we'll go from this to what we were able to advance to a few weeks after the wireframe, which is to actually use a full 3D CAD model of the engine. So here what you're seeing is the actual interaction with the CAD software. Manipulating the real 3D model of the Merlin engine just using hand gestures. If you could just go in there and do what you need to do, just understanding the fundamentals of how the thing should work as opposed to figuring out how to make the computer make it work, then you can achieve a lot more in a lot shorter period of time. So then we went to a 3D projection. We started off with the kind of 3D projection that you're familiar with in the movies, where you use 3D glasses. We also did a free-standing glass projection, which is the sort of technology that was used in the "Iron Man" movies. And then finally, we used the Oculus Rift, which is immersive virtual reality that actually tracks your head position. And you really are moving around the object. It feels like it's right there in front of you. Now let's use this for an actual component on the rocket, which is a cryogenic valve housing. 
You can really apply your intuition and take something from your mind to a physical object with far greater ease than we currently do. Now that we've gotten the object out of our head and into the computer, how do we get it out of the computer and into reality? So we're actually going to print this with a 3D laser metal printer. The way that the 3D printer works is it lays down fine particles of either titanium or Inconel, and then it goes over it with a laser and melts those tiny particles onto the prior layer. So it builds it up just layer by layer. So I believe we're on the verge of a major breakthrough in design and manufacturing, in being able to take the concept of something from your mind, translate that into a 3D-- MICHAEL SUTHERLAND: --is that even though they're clearly still in pretty early stages of what they're doing with this sort of technology, it kind of helps to demonstrate some of the examples that Dan was talking about before. So they're really starting to investigate what are these next-generation uses for this sort of 3D gesture technology. So I think that's just an interesting entry to seeing how some of this technology is being used. So I'm going to take you through a little bit about-- it's going to be pretty high level. But we'll leave some time at the end for questions and answers if you guys have some deeper questions you want to go into. But we'll just talk a little bit about building on the platform, go a little bit into the high-level aspects of the SDK, and have a look at what some of the resources are that are available on our website or through the community. And I'll show you a few demos of some of the stuff that's actually out there that you can check out, to give some inspiration if you wanted to use this in a project, and then how you can approach us for help. We're here to help you guys if you want to develop, too. Just want to make sure that you're aware of that. So as Dan mentioned, we've got Airspace. 
So what's the real benefit of starting to build for the Leap Motion controller? Is it just a cool piece of technology? Is it a gimmick, or is there something more to it? Dan talked a bit about the industrial applications, but on the consumer side we've actually got a really, really thriving app community as well. And you might be saying, well, another app store. So we prefer to see it as a place of discovery. This sort of technology is exemplified by software that is built for it. It's not so much a system where you can port an existing touch application across. The biggest applications are the ones that are built for the technology. So when you go and buy a Leap Motion from a store and you plug it in, the first thing you see is Airspace. And that's going to give you a place to basically find all of the software that's built for the platform. We've got over 100 apps in the store now, so that's pretty good considering we launched just back in July. We had over a million app downloads in about the first three weeks. And we cover categories like productivity, games, education, creativity tools, music, and science. And the store supports native as well as web apps. So it's a pretty good ecosystem for anyone that's buying their unit to have a lot of stuff to use. But for your side, on the development side, what that means is there's an awesome opportunity to get discovered. We shipped a couple hundred thousand pre-orders. We're now in all the Best Buy stores throughout the US, Canada, UK, France, Australia and New Zealand, and we're about to launch in other parts of Europe. That means that everyone that buys one of these units and gets into that store is going to start to see the software that you guys are developing. So that's a pretty exciting opportunity right now. Some of the other things that are coming up are maybe things we're thinking about in a year or two. 
As you go through and you develop different programming abilities and start to look at different types of software development, are there actually some opportunities post-graduation? Some of our venture partners have actually put together a $25 million venture fund called The Leap Fund. They've actually already funded their first company, so that's been really great to see-- starting to see that kind of business ecosystem building around the technology as well. And shortly we're actually going to be seeing a new accelerator as well. So they'll be taking on new teams that are just forming with some great ideas, and they'll be providing them with mentors. And there are some pretty great mentors in that program. And that will be kicking off next year, so you'll start to see some really cool stuff coming out-- not just on the apps side, but in terms of new businesses that are building around this technology. So we're providing an SDK. We're supporting both native and web development. I understand you guys are mainly working in C at the moment, and you're going to be touching a little bit into JavaScript in a while, so that's great. We've got support for C++. We do have a pure C API-- it's built by the community, but I can show you how to get to that. So there's C++, C#, Objective-C, Python, and Java-- if you've got any familiarity with any of those languages, there should be something there for you to get started. The SDK's available from our developer website, which I'll go through in a little bit. And then for web development, we've got a full JavaScript API. So this is probably something that might be interesting as you go into the rest of this course, because my understanding is you're about to start to go into some JavaScript. And there's a load of great examples and tutorials on the JavaScript API. So I'll walk you through some of those things as well, and that'll be a good platform for how to get started. So first is our developer portal. 
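Whichever of those languages you pick, the SDK hands your program tracking data the same way: as a stream of frames, each describing the hands and fingers currently in view. Here's a minimal sketch in the JavaScript style discussed above; the handler is a plain function so it works on any frame-shaped object, and the `Leap.loop` wiring in the comment is an assumption based on the leapjs library rather than something shown in the talk.

```javascript
// A minimal sketch of a LeapJS-style frame handler. The handler is kept
// separate from the event loop so it can be exercised with any plain
// object shaped like a frame (hands and pointables arrays).
function summarizeFrame(frame) {
  return {
    hands: frame.hands.length,          // how many hands are in view
    fingers: frame.pointables.length,   // fingers (and tools) in view
    firstPalm: frame.hands.length > 0
      ? frame.hands[0].palmPosition     // [x, y, z] in mm above the device
      : null
  };
}

// With the leapjs library loaded in a page and a device attached, the
// wiring would look roughly like:
//   Leap.loop(function (frame) {
//     console.log(summarizeFrame(frame));
//   });
```

Keeping the frame-processing logic out of the event loop like this also makes it easy to test without a device plugged in.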
So this is going to be the place that you guys want to go. If you're going to do any development on Leap, you probably want to start here and check out our developer portal. I'll just walk you through some of the main things to keep an eye on. So this is your main download section, so this is where you're going to get your native SDK. That's all the languages that I talked about first-- C++, Objective-C, C#. Inside the SDK you're going to get a bunch of examples, and you're going to get the documentation. So there'll be everything to get you set up for native app development. So basically anything that you want to run directly on your computer, that's the place for that-- not so much for the JavaScript. From here, you've got access to all the documentation. So this is going to be all the documentation around our API references, as well as guides and examples. You can see here we've split it out by language, so it's pretty easy to find your way around. And on top of this, we also have some knowledge-base articles and technical notes. This is all changing all the time, so keep an eye on it-- you'll see it evolving. So if we just dig in here, I'll give you a quick example of how we've laid it out. If you look under here, you've got all the languages again. And then under C++ we've got our API references. So that's where you're going to want to go to find out all the APIs that are contained in our SDK. I'll go through a few of those at a high level later on, but that's going to be the first stop for getting that information. We've got a bunch of guides. I know it's probably difficult to see on the screen there, but these are really just a great resource for you to get started. So we've got things like how you get frame data-- and I'll talk to you a little bit about what frame data means in a little bit-- all the way through to understanding the sample applications that are-- oh, that's good. 
So that might make it a bit easier to read. So understanding the C++ sample applications-- so those are included in the SDK bundle that you download. So the other thing that's a great resource for you guys, if you do start to do some development, is our forums. You'll be able to access them up here at the top. Right now these look like this. You're going to find in a week or so that they're going to look completely different, because we're just about to launch a whole new forum platform. But that means an even more engaged community, and it's a great place to connect with other developers that have been doing the same sort of work that you guys are going to be doing. So lots of great questions have already been answered in there. And it's a great place to ask questions as well. We're in there, our team members, our engineering team are all in there, so great place to connect with the team. This is also the place that you want to go if you're going to be submitting an app, but that's probably a little bit further down the track. But if you are interested in getting something onto Airspace, this is the portal to do that. You submit your app, and that will walk you through the process. We have a full review team that goes through and reviews all the apps. There is a little bit of a bar for quality. We try and make sure that all the apps are really representative of what the platform can do. But at the end of the day, that just creates a really great experience for the people that are using the technology. So that's kind of our main developer site. I just wanted to give you a quick overview so you know where a lot of these resources are and how to access them. So I also mentioned JavaScript API. So we've actually split out the native and the JavaScript into two separate sites. There's different ways of thinking about that, but we think that JavaScript is very unique so it really deserves its own site. And we've had a lot of popularity from our JavaScript API. 
So this is now js.leapmotion.com. It has a bit of a different look-- a little bit more fun, perhaps. But this is probably one of the best places for you guys to go to get started. JavaScript, as you'll probably start to find out, is going to be a great language to get started with on this platform. How many of you are familiar with JavaScript development already? So a couple. What you'll find is that JavaScript being a scripted language and not a compiled language means that you can go straight into anything that's running on the web, right-click it, view source, and you've got all the code there. So it's the easiest way to get started in a language. And what you'll find here is a bunch of awesome examples. We're adding to these all the time. You can just click on these and they'll run in the browser. So let's try one of them right now. So this is just running in the browser. The code behind this is super simple. So here you go-- just View Page Source, you get all the code, it's right here. Don't worry about this too much at the moment. It may look intimidating, or to some of you that are familiar with it, it'll be fine. But most of this is actually something called Three.js, with WebGL. The actual part for the Leap is down here a little bit further, but it's actually very simple once you dig into it. And if anyone's interested, I can walk you through some of these afterwards. But it's probably best to keep it high level at the moment. But anyway, this is a great place to start looking at some different examples. And you can see here we've got everything from some basic demos to different types of menus, a globe that you can interact with, and some data visualizations. There's just a whole host of stuff. It's a great place to check out some source code. The other thing that you'll find here, which will be a big help to getting started, is we have a great set of tutorials. 
This walks you from the very basics of just how to get a frame-- and like I said I'll go through that in a little bit-- through to getting a basic application set up. So I can definitely recommend js.leapmotion as a great place to start if you're thinking about doing some development. And again, we've got our API docs. So it's a very simple layout here. It's a little bit simpler than the developer website. It's a little bit lighter-- it's just focused on JavaScript. But you've kind of got those three main things-- examples, tutorials and APIs. And that will be a big help, I'd say, for getting started. So I'll just jump back into this. So let me just grab a quick drink. So this isn't really about what is the Leap Motion controller. What I wanted to talk about here is what is it not. So a lot of people when the Leap Motion came out they were like, oh my goodness, this is a mouse replacement. I never need to use my mouse and keyboard anymore. We don't really see it that way. Because if you remember, when the mouse came out, the keyboard didn't disappear. The mouse augmented the keyboard experience, and so that's really what this technology allows you to do. It allows you to augment the experience that you're having. It allows you to do some things better. And that's really what developing for this platform is all about. It's not about trying to do everything with the Leap right now. Because what you'll do is you'll find it becomes infuriating. You'll find it doesn't get the results that you want. The best way to approach it is what are the things that I can do better with the Leap, and I'll show you a few of those demos. But what you'll start to see as you dig a little bit into it, one of the favorite things for everyone to do-- and by all means, have a play around with this as well-- is build a mouse cursor with the Leap. So I want to use the Leap to control the cursor with my finger. It's definitely an application that can be built with the Leap. 
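Part of why everyone builds that cursor first is how simple it is to sketch: take the fingertip's x and y, reported in millimeters above the device, and map them into window coordinates. The working ranges below are rough guesses for illustration, not official values from the SDK (the real API provides calibrated helpers for this kind of normalization).

```javascript
// Sketch: map a fingertip position ([x, y, z] in mm, with y measured
// upward from the device) onto a width-by-height window. The assumed
// ranges (x: -200..200 mm, y: 100..400 mm) are illustrative only.
function tipToScreen(tip, width, height) {
  const nx = (tip[0] + 200) / 400;   // 0..1 left-to-right across the device
  const ny = (tip[1] - 100) / 300;   // 0..1 upward from the device
  const clamp = function (v) { return Math.min(Math.max(v, 0), 1); };
  return [
    clamp(nx) * width,
    (1 - clamp(ny)) * height         // screen y grows downward, so flip
  ];
}
```

For example, a fingertip centered over the device at 250 mm up lands in the middle of the window; positions outside the assumed ranges clamp to the window edges.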
Is it the best use of the Leap? Probably not. So what I would try to encourage you guys to think about is what are the applications? If you want to build some of these, what are the applications that you can do better? They don't have to be really complex, but what are some of the things that just make your life a little bit more efficient, or maybe a little bit more fun, or allow you to navigate a little bit better? So that's just what I wanted to give you a quick thought of that. So that leads into what are some of the applications that are around that do a good job of demonstrating the power of this platform? So I'm going to take you through a couple of them right now. We can go through them pretty quickly. So the first one is a little game called Block 54 which I'll put on your screen, not my screen. We'll try it Windowed instead. So one of the reasons I wanted to show you this application is because this is really something that has never been possible before. This is not something you can do with a mouse. This is not something you can do with a keyboard. It's probably a little bit more advanced, but it's a great example of some of the things that you can do with the Leap. So what you see here, we've got a Jenga tower, obviously-- or it's a Block 54 tower, I should say. So what I can do here is I can actually grab these pieces, if my computer doesn't slow down too much. Sorry, my computer seems to be having a little bit of a hard time with this one. So I can actually grab these pieces and move them just as though they were physical objects. And that's really one of the major advantages, bringing that real-world. So I can literally grab that piece and pick it up. I can throw it away. Wow, it's running a little bit slower for some reason. So you can kind of get the feeling there. So this is something that you literally could not do on a-- I'm just going to push this one out of the way now. There we go. So that's a lot of fun. 
If my computer wasn't chugging along so much, then that would be a lot smoother. But you can kind of see there that this is an example of bringing something that was real-world into the digital space, and it's allowing you to interact in a way that's very natural. I'm not using a menu system to go through that. I'm not clicking, or using keyboard shortcuts or anything. It's just literally me reaching out and manipulating the blocks in the digital space. So this next one is a little bit along the same lines, but it's about bringing these real-world experiences into the digital world. And so this was an experience that I believe one of our co-founders had. I don't want to quote him on this, but there was the ability to be in the ocean and see these schools of fish swimming around and being able to interact with them. And that's something that's really difficult to communicate verbally. It's also very difficult to communicate digitally without a proper input mechanism to be able to manipulate that 3D world. So I'll just bring this one up. You might not be able to see that on the streaming version-- it could be a little dark. So what you're seeing here is my hands in the 3D space. I've got complete freedom of movement. And I can just hold my hands still, see the fish and then scatter them away. And you can see the freedom in this digital space is like something that really hasn't been possible before. I can bring them out to the screen and scare them away. So it's a simple demo, but it's highlighting the fact of being able to bring some of these real-world experiences that haven't really translated into the digital space, finally, for the first time. AUDIENCE: You can actually see those at The Museum of Science [INAUDIBLE] MICHAEL SUTHERLAND: And this is a great one as well.
Because what we find is when people first put their hands into this, it's the first time that they've seen themselves represented so fluidly in the digital space, so you usually get an interesting reaction. So by all means, if we've got time afterwards, I'd be happy to show you guys some of these demos. So this next one-- I won't bring the slide deck up again. This next one is about creating an experience that you can just explore. So there are very few rules to this. It's creating this immersive experience. And the developer that built this is a guy called Eddie Lee out of Japan. And this was actually an experience he had in Kyoto. And he wanted to basically bring that experience and share it with other people. I don't know if you can hear that. But you can just drag your fingers through the water and mess around with the reflections. There's nothing that's telling me what I can do. It's just a very zen experience. But it's something that you can just really immerse yourself into and forget about how you're actually interacting with it. And just put your hands in and just feel your way around. And you can see the entire environment is there to just play around with. And there is actually a story line to this. It takes a little while to go through it, but you can explore your way through it. It's kind of a lot of fun, and a lot of natural interaction. So this is actually something by the same developer. This one's a little bit crazier, but it highlights some interesting use cases, and again, something that you can only really do with this sort of platform. So I'll just come around here. So this is actually the menu page. And this is a menu like nothing that's really existed before. So literally it's just looking at how many fingers I hold up, and basically choosing the menu through that. So you can see the rules are being rewritten around interface design here. You've got total freedom to do whatever you want. So in this one, this is kind of a little crazy.
I love these guys. I could watch these guys bouncing all day. He's got a whole bunch of different experiences there. All of these you can get through Airspace. So feel free to sign up. It's free. You can go have a look at the apps. This is kind of a musical experiment. But what he's doing is he's using the full 3D space to create different sound effects. It's probably a little bit hard to hear through the sound system. But basically he's using this full 3D space to create a new type of instrument. And then whether I use three fingers, four fingers, or one finger, I can basically start to change the effect of a sound. So it's definitely very experimental, but it highlights that freedom in that 3D space. So you saw Block 54, it's a game. And the last two were more creative, experiential kind of things. It's easy to get wrapped up in that kind of creative world, and there's so much amazing stuff that we're seeing come out. And if nothing else, that's a great reason to develop for the Leap Motion. We're seeing so much amazing, creative stuff, but there's also an element of efficiency. And so I just want to show you a quick integration that the Google Earth team-- some of you might have seen this before. Hopefully I've got an OK connection here, because it is a little bit bandwidth intensive. But you're probably familiar with Google Earth and how you generally would navigate around that. It's click and drag, you've got the sliders for zoom in, zoom out. If you're really proficient at it, you've got keyboard shortcuts, click and pan and tilt. There's all these different ways that you can navigate around this 3D environment. But what the Google team did was they just rewrote the rules on that. We might have a little bit of bandwidth issues. But what you can kind of see there is you can basically just navigate. So let's see where we want to go to. Oh, yeah, we're a little bit stilted there. 
But what it's allowing me to do is I can control multiple degrees of freedom all with one fluid movement. So I can pan left to right. I've got look up, look down. I can change my elevation. I can move forward. I can basically go and I can spin around a certain point and just keep my focus on it. I've got complete freedom in this 3D space. And all of a sudden, my efficiency of navigating around the space is just multiplied immensely. So I can jump from Boston to San Francisco to New Zealand in a couple of seconds. Previously, that sort of operation would have taken me quite a few different clicks and movements and keyboard shortcuts, and I have to remember it all. So this is an application where this sort of natural interaction is allowing a greater efficiency. So that's another thing to keep at the back of your mind. Is this something that I can make more efficient in what I'm building? And the final one I want to show you before I go into the SDK is about the educational possibilities. And this is something that Dan touched on lightly with the Children's Hospital. We're going full screen. Hang on a second. Here we go. This is a bit strange. Let's try giving that a full screen again. Well, that one doesn't look like it wants to run on this machine for some reason. Interesting. Oh well, that's too bad. So this one is basically a little app. You can have a look at it later, after this if you want. But basically what it's doing is it's a full 3D representation of the skull. And what you can do is basically take it apart in 3D. It becomes a 3D jigsaw. So some of these applications make for a more immersive learning environment. So when you're able to interact with what you're learning, you start to take it in a lot more. So we're seeing a lot of interesting applications being developed, both in early learning and special needs learning, as well as all the way through the sciences. So there's a lot of interesting applications along those lines.
I'll try and show you this one, but it's going to be a little bit hit and miss whether we can get it to work here. This is only showing half of the anatomy at the moment. But what this is allowing you to do is see how you can navigate around this in 3D. I can basically start removing sections and be able to navigate. I can actually start to basically peel back the different layers. It's almost like seeing an MRI in real time. This is part of the BioDigital Human Project. So this is actually something that's brand new that's come out. You can sort of see how you can just take pieces apart and then just basically navigate in and examine it a lot more closely. You can see this is actually running directly in the browser, so this is an example of what's possible with the JavaScript API. So those are a few different examples of some of the applications. You saw the creative exploratory situation. You saw some of the efficiency increases, some sort of interactive learning examples. So you can see there's a wide variety of different applications, different software. I'm guessing that probably not many people have had a look at the SDK by this stage. So I'll just go at a very, very high level through what is the data that's making all of this work. What is it that you as a developer would be working with to create those sorts of experiences? So I've touched on, a few times now, Frames. So at the very, very lowest level, we have what we call Frames. And a frame is basically returned to you up to 200 times a second, and it contains everything that the Leap sees. So the Leap sees hands, it sees fingers, and it sees tools. So I'll show you quickly in our visualizer what that looks like. So if you're interested in doing some Leap development, this tool here is probably going to be one of the most useful things that you can start to play with. It's possibly not immediately clear where you get to this from.
And I'll show you quickly just so that you're all aware of where you can actually get to this. So when you're running the Leap Motion software, you've got this little icon up here. This is where you can launch Airspace from, it's where you can get to your settings from. One of the things in here is this thing called the diagnostics visualizer, and that's under Troubleshooting. It'll launch this tool here, and this is basically-- AUDIENCE: [INAUDIBLE]. MICHAEL SUTHERLAND: Oh, right. Yeah, thanks for that. I noticed that as well. Thanks. It just kind of popped out before. Thanks for noticing that. So this is basically just what the Leap is pumping out. So this is the data that's coming out being processed by us. And at the end of the day, this is what you get. So this is hands and fingers, basically. What you're seeing there is all my fingers represented in real time down to a hundredth of a millimeter. You can see the arrows. The arrows represent the direction of my finger. So that's something that you'll get through the API. You can see where they're drawing-- that's the position of the fingers. And you also get the velocity at any one time as well. And you'll see there the two big circles representing my palm. And you've got a big arrow sticking out the bottom, or the top if I hold my hand upside down, and those are representing the palm normal vectors. So basically normal vector being just a vector that's sticking straight out of your palm's surface. So those are the fundamental building blocks that you'd be working with when building Leap software. And this tool allows you to really see exactly what's going on. And there's a few things that you can do, a few little tips in this visualizer that may be helpful. One is just to represent your fingers a little bit more clearly. The other thing that may help is this is essentially what the Leap is seeing. So I mentioned before, there's a couple of optical sensors.
So these things have basically a field of view of about 150 degrees. And so this yellow box here is representing what this can see. So you can see here as I go outside that box I'm starting to lose my hands. And if I start to go outside here, it'll still pick it up, but you're starting to lose it on the edges. So this gives you a bit of a sense of the space that you have to play with. And you'll see here if you press H, it'll toggle this menu. And that will actually give you a whole bunch of different options that you can access. Most of it you probably won't need. But it's a great way of visualizing what's going on without getting dug down into the data. So that's the Frames, Hands, Fingers and Tools. Actually, I'll show you just quickly before I go back, the Tool. So let me see, this should work. So you can see here my hands. And if I bring in this pen, it's coming up as gray. And what that's saying is that that's a tool. So we have what we call a tool API. It actually recognizes objects like pens or paintbrushes, anything that's roughly of this sort of shape. It'll be able to tell that that's not a finger, and you can actually use that to your advantage when you're building software for this. You can start to bring real-world objects into your application, for example. So you might have literally an easel of different paintbrushes, and each paintbrush you've coded up to represent a different brush in the software. So instead of actually changing it through a menu on the software, you can literally just pick up a different brush and start painting with it and have the software adapt to it. So that's Frames, Hands, Fingers and Tools. So that's the really, really low level stuff that we have. I don't know if anyone here has any familiarity with the Kinect. One of the things we get asked a lot is where is the raw data. And what that means is in other 3D tracking systems, it's basically a blob of data. What we do here is create a more structured approach.
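That structured frame-hands-fingers-tools hierarchy can be sketched in plain JavaScript. The mock frame below is an assumption for illustration only, shaped roughly like what the Leap JavaScript library reports each frame; no hardware or library is needed to run it:

```javascript
// Mock of a single tracking frame, shaped roughly like the Leap JS API
// output (positions in millimetres from the device centre; all values
// here are made up for illustration).
const mockFrame = {
  hands: [
    {
      palmPosition: [12.5, 180.0, -4.2],
      palmNormal: [0, -1, 0], // vector sticking straight out of the palm
      fingers: [
        { tipPosition: [30.1, 210.3, -8.8], direction: [0.1, 0.9, -0.2] },
        { tipPosition: [10.4, 225.0, -2.1], direction: [0.0, 1.0, -0.1] },
      ],
    },
  ],
  tools: [
    { tipPosition: [-40.0, 190.0, 0.0] }, // e.g. a pen held in the field of view
  ],
};

// Summarise what a frame contains, the way a per-frame callback might.
function summarize(frame) {
  const fingers = frame.hands.reduce((n, h) => n + h.fingers.length, 0);
  return { hands: frame.hands.length, fingers, tools: frame.tools.length };
}
```

Because the data arrives already grouped into hands, fingers, and tools, a summary like this is a couple of lines instead of a blob-segmentation problem.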
So this is actually the lowest level data. And we find that because it's structured like this, it really helps people to get started quicker. If you just got given a full 3D blob of data, it becomes very difficult to work with. So that's one of the reasons why it's structured in the way it is. So are there any questions around any of that? Nope. So we'll move on. You might be able to start to see even though the data is structured in a way that gives you literally what you're seeing, like hands and fingers, it can be a little daunting at first to start to figure out how to work with that. Now I'm tracking fingers in 3D, what do I do with that? So we do have some higher level APIs to help get around some of those areas, and it might be a way to get started a bit quicker as well. So this is conceptually called Motions. It's a part of our API. You'll find some guides on what Motions is. But at a conceptual level, what it's doing is it's basically taking all these movements in the space and turning them into one of three things-- translation, rotation, and scaling. So don't get too caught up in that. But what it basically allows you to do is it converts these complex movements into simple numbers-- a translation, a degree of rotation, or a scaling factor. So what it does is it abstracts out a lot of the complex mechanics and if I'm doing this, it gives you a number that says this is scaling by 10. So what you can do then is if you had an image that you wanted to enlarge, you could grab the image. And then use this sort of API to say, well, now I'm scaling it by a factor of 10 and you don't have to worry about all the data that's going on. So it's something to just keep at the back of your mind. It may make it a little easier if you're trying to do some of those more complex interactions. That's what we call the Motions API. You won't see it actually called Motions in the documentation. It's actually a collection of APIs from different places.
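The image-scaling example he gives can be sketched with mock data. In the real API, each frame can report a scale factor relative to an earlier frame; here those per-frame factors are invented numbers, and the function just shows how an app would accumulate them onto an image size:

```javascript
// Sketch of the "Motions" idea: instead of raw finger positions, each
// frame hands you a single number for how much the scene scaled since
// the previous frame. The factors below are mock values, standing in
// for what per-frame scale queries would return.
function applyScaling(imageSize, frameScaleFactors) {
  // Multiplying successive factors accumulates the overall zoom,
  // the way an app would update an image across frames.
  return frameScaleFactors.reduce((size, f) => size * f, imageSize);
}
```

So a hands-apart gesture that reports 1.1 on two consecutive frames grows a 100-pixel image to about 121 pixels, with no raw tracking data touched by the application.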
But what I can do is if anyone's interested in learning more about that, I'm happy to point you at some guides for how to get started on that. And then the next thing up, which is the thing that most people are probably most familiar with, is Gestures. So this is much higher level abstraction. So you're basically taking all of these sort of movements and you're saying, right, what's a discrete thing that I can do? So I can circle with my finger, or I can swipe with my hand, or I can tap in the air. And so we've broken those down just to try and make it a little bit easier to get started into some of these gestures. And I'll show you a little bit about how some of those work. So back in the visualizer, if I turn Gestures on-- let me just stop that so it's not rotating and making everyone dizzy. Right, there we go. Turn it up. Right, here we go. So now you can see that my hands are in the space. If I draw a circle, it's coming up and showing a circle. And you can see that's actually in any plane. It doesn't really matter how I draw it. But it's basically detecting that I'm drawing a circle with my finger. And at the API level, we've tried to make that as easy as possible to use. So you don't really have to think about the mechanics of tracking points in 3D and figuring out whether it's a circle. You can just say, is a circle happening? So it's one thing that you can use as a control mechanism. You'll also see there that we've got swipes, taps. You can see those little balls bouncing there at the bottom. So those are visually how we show what the gestures are. But in terms of developing software for this, it's just a high level way of simplifying a lot of the complexity of tracking fingers, so you can just use those as is. You'll start to see different approaches to that. And I'll show you another approach a little bit later on of a different way of doing that, but those are all built into the API.
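The "is a circle happening?" question can be sketched the same way. The gesture objects below are mocks with an assumed shape (a `type` string and a `state`), standing in for the pre-classified gesture list a frame would carry, so the app-side check is just a filter:

```javascript
// Mock gesture stream: the SDK tags each detected gesture with a type,
// so the app never tracks points in 3D itself. These objects and their
// field names are illustrative stand-ins for real frame data.
const mockGestures = [
  { type: "swipe", state: "stop" },
  { type: "circle", state: "update", progress: 1.5 }, // 1.5 turns drawn so far
  { type: "keyTap", state: "stop" },
];

// "Is a circle happening right now?" reduces to a one-line query.
function circleInProgress(gestures) {
  return gestures.some((g) => g.type === "circle" && g.state !== "stop");
}
```

That's the whole control mechanism: no geometry, just a question asked of each frame's gesture list.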
So we have a few other parts to the SDK that might be interesting to you guys. So just to cover those last bits, there's three levels of abstraction I talked about. The low level, which is the Frames, the Hands and the Fingers. The middle level, where it's converting a lot of that movement into continuous movement, so Rotation, or Scaling, or Translation. And then to the next level up which is the gestures, like am I doing a circle? Am I doing a tap? Am I doing a swipe? Then on the other side of it, we've got things called the Interaction Box. I don't want to go into too many details, because this is all just to give you a bit of a taste of what some of the things are. You're definitely more than welcome to reach out to me with specifics later on about this. But the Interaction Box is another way that we're trying to make it a little bit simpler to think about coordinates in the space. So I mentioned before you saw the space-- it's this 3D inverted pyramid. That could become a little tricky. You can sort of see it there. That can become a little tricky if you're trying to translate that into screen space where you're displaying what you're working on. So what we created is an Interaction Box. It's going to be very difficult-- oh, there we go if I turn that one on. So you can see that white box there. And basically what that's doing is it's just mapping that to zero to one, zero to one. So you just get a scaled space that's always fixed. You don't need to worry about how far you are above the device, or wherever it is. And this adjusts-- at least it should adjust. This is obviously demo mode. But basically what will happen with that is it will just adjust to wherever the person is above the device, and it'll create a consistent space for you to work in. It sounds a little complex with the way I'm explaining it there. But what it essentially allows you to do is just forget about where the person is. 
It just gives you a scaled zero to one in the Y, zero to one in the X. And you just don't need to worry about all the complexity of where the person is, whether they're using big movements or small movements, and it just scales everything for you. So that's just something to keep an eye out for. If you do look through the documentation and you see something about Interaction Box, that's what that's referring to. It can be a little difficult concept to understand what it is. And it's unfortunate that it's not scaling up with me, but that's OK. And while we're on this view, the other API that would be interesting to maybe talk about is our Touch Zone API. So one of the first things that people ask is how do you click with the Leap? It's kind of an interesting question, because you don't really need to click with the Leap. What we try and encourage is to think about actually interacting with the space, grab it and move it-- you don't need to click and drag. But for the applications where some sort of interaction is necessary, we have an API that's called a Touch Zone API. And it just tries to take a lot of the complexity of figuring out exactly what the user's doing in the air and simplifies it into just an event that says you've either clicked or not. And I'll just show you very quickly how that works. So you can see here my finger's being represented as a cursor, and you'll get given this position through the API. And as I start to move forward, it basically says I'm now clicking and I can drag this around. And it doesn't really matter where I am in the 3D space-- it'll work no matter where I am. And so at face value it looks very simple. There's actually a lot of complex mechanics around that. So that's why we try to encapsulate all that into an API and make it a little simpler for you guys.
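The two ideas just described can be sketched as plain coordinate math. The box bounds and the hover/touch threshold below are assumptions for illustration, not the SDK's actual internals; the point is only how raw millimetre positions become a fixed zero-to-one space, and how forward motion becomes a binary touch state:

```javascript
// Sketch of Interaction Box normalisation: map a raw millimetre point
// into a fixed 0..1 cube, clamped at the edges. The box description
// ({ center, size }) is an illustrative shape, not the real API object.
function normalizePoint(point, box) {
  return point.map((p, i) =>
    Math.min(1, Math.max(0, (p - box.center[i]) / box.size[i] + 0.5))
  );
}

// Sketch of the touch-zone idea: once depth is normalised, "clicking"
// is just crossing a plane. The 0.5 boundary is an assumed threshold.
function touchZone(normalizedZ) {
  return normalizedZ < 0.5 ? "touching" : "hovering";
}
```

With this in place, application code works entirely in the unit cube: a palm dead-centre in the box normalises to (0.5, 0.5, 0.5) regardless of how high the hand is above the device, and pushing a finger forward flips the zone from hovering to touching.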
So if you have a look at the API, it's actually pretty straightforward to build that into your application, and you don't have to worry about all the complexities of where the person's hand is. So there's a lot of other stuff in the SDK. If you want to have a bit of an explore, you'll start to find some of the other things. But those are some of the high level concepts that are in our SDK. It might be a little bit much to take in without having had a chance to play around with the Leap yet. But I just wanted to give you a bit of a flavor of what's in there so that when you do get to it, as I said, feel free to reach out to us. I can point you in the right direction for any sort of documentation to help you get started. So as I mentioned, our SDK has a bunch of native languages. We have the JavaScript API. One of the easiest ways to get started might be to look at some of the platforms or frameworks that are out there. I don't know if people are familiar with Unity. famo.us, goo and Vuo are all very new ones to the scene. Unreal you've probably seen in game engines. But these kinds of environments may be an easier way to get started. Because some of them will provide you with a 3D framework to start with. So it kind of takes out some of the complexity. You get more of a visual environment to work in. famo.us is a new platform that's coming out for web app development. Their aim is to make web app development super easy. So that will be coming out-- there's no time frame for it at the moment. But if that's something you're interested in doing, it could be one to keep an eye on. goo is an amazing HTML5 gaming platform. They're doing a very visual editor as well online, again, for high performance web apps. Vuo is something I can go over very briefly. This brings in the concept of rapid prototyping. And I don't want to go too deep into that.
But one of the things if you are really interested in getting into this sort of development, finding a good tool to do rapid prototyping could be really valuable. And what I mean by that is it's a framework where you have to do very, very little effort to get a lot of return. So you don't really need to do a lot of coding. A lot of it's very visual-- it's dragging blocks around. In fact, I can show you a very, very brief example of this. So you can see here, it's just a completely visual environment-- you don't even need to code. Oh, great-- we won't do that demo at the moment. It doesn't seem to want to run at the moment. So without going into too many details, it's what's called a visual programming language. It allows you to get some basic functionality working. I'll just quickly bring up a completed version of this. So you can see here, this is a very simple application that basically takes an image and allows you to move it around with the Leap. And these green blocks here are essentially all you need to do to get started with the Leap side of it. So it's a good way to get started. If you have some ideas you want to experiment with before you even get any code down, it's a good way to get started. AUDIENCE: If we use [INAUDIBLE] look at, would it translate into actual code? MICHAEL SUTHERLAND: In that situation, I don't think you have the ability to translate to code. There are definitely some other frameworks out there. Quartz Composer is actually an Apple tool. It's no longer officially supported, but there's a big community around it. We've seen some amazing Leap stuff come out of that. There's some plug-ins available. I think there is access to low level code from Quartz, although I'm not entirely sure about that. But that's a good question. So I'm just going to show you some very, very high level terms for things to think about when you're developing. Lighting conditions generally aren't a big issue for the Leap anymore.
We've got an amazing team that's basically been able to compensate for almost all lighting conditions, because that's something that can potentially affect performance. Infrared sources coming in from the outside have the ability to affect the performance. In general, you won't really come across many stumbling blocks. If in your development you see that the device goes into robust mode, really all it means is it may have detected that there's some infrared light sources in the environment and it's compensating for it. So don't be too worried about that. In general, when you're designing software for the Leap, it's important to realize that this could be the first time that your user is using this technology. And this is something that it's hard to get your head around initially. What we try and encourage people to do is, instead of just leaving the user to find their way around the interaction, think about what you're asking your user to do, and try and explain it to them a little bit. Treat them as though they may never have seen this technology. Sometimes people won't even know to reach their hands out over the device, so don't take anything for granted. If you have a look around on Airspace, you'll notice that a lot of the apps really guide the user in how to actually interact with that app. That's something to just be aware of. If you are developing software for this platform, it is new. People aren't familiar with the technology yet, and so you may need to help ease them into whatever it is that you're building. Data is your friend. I mentioned before the Visualizer. It could be one of the best tools you use. It just allows you to look at what you're doing. Think about the action that you're trying to code up, and then look at what it looks like in the Visualizer. And then it will give you a better sense of what that data means that you're getting out of the SDK.
If you're doing anything that needs a menu, menus are something that you want the user to be able to use without even thinking. It's not really part of your application. It's a part of how the user uses your application. So we have some resources on the developer site. Just a couple of different systems for menus that take the burden off you guys for having to think about how to build menus. Because menus can be something that you could spend a lot of time trying to build into your application, when really what you're trying to do is build the idea that you have, not the menu. So I would recommend if you have to do any sort of menu systems, definitely have a look at the resources we have on the developer site. We've got some great examples of how to do menus, and how to keep them consistent so that users have a consistent experience across applications. Visual feedback. So what I mean by that is if for example you're trying to do something that is showing a 3D space, it's very important to provide some sort of visual feedback. So whether that's showing where your fingers are in that space, or in the case of Block 54 that we saw at the very start, you may have noticed that the pieces were illuminated. And when I went close to the tower of blocks, you could actually see visually that I was close to them. And it's a small trick, but it's actually a very important one. So make sure that the user's oriented in that 3D space. And again, rapid prototyping. If you can find some tools that you find helpful, I definitely would encourage you to invest the time in it. Being able to get your ideas out quickly instead of having to spend a lot of time coding at a lower level and trying to figure out how to code it up, if you can get those ideas out in front of you, play around with them a bit and then code it up, it could be a great time saver. So we're getting to the end. How are we for time? SPEAKER 2: [INAUDIBLE] eight minutes until 5:30.
MICHAEL SUTHERLAND: We'll finish at 5:30? SPEAKER 2: Yeah. That was the slot we advertised here. But we can do one-on-one Q&A after this. MICHAEL SUTHERLAND: Yeah, I won't go too deep into the rest of this then. I did mention before a different way of doing gestures. If you're interested and you are working with JavaScript, this is a JavaScript application that a developer named Robert Leary built. What it does is it takes a lot of the complexity out of recording and using movements. So what he's done is he's basically created a gesture recorder. You can type in the gesture, record it. It spits out something that you can then pull into your application. So instead of having to code up all the complex movements in 3D, you can just take this, do the action, and save it for your application. So that could be an interesting tool to help you get started. I can go through these very quickly. It's just a couple of videos that show some of the interesting applications. Some of the stuff you might not see in Airspace, but it's floating around in the developer community. Just some amazing work that people have been working on that shows maybe some of the more unique applications possible. So this is a system that's using basically head tracking on the camera to give that depth perspective. But you can see it's a pretty interesting visual trick. So that's kind of an interesting thing that is possible with this type of technology. And then some of you may be familiar with the Oculus Rift. This is just some experimental work that some of the developers have been doing around combining Leap Motion with the Oculus Rift, so for the first time you can be inside that virtual world. So that's going to be an interesting approach for gaming coming up soon. The Oculus Rift is a VR headset. Poor guy-- he really got a hard time. This was an exhibit that was done using projection mapping with the Leap Motion.
Just a really nice interactive environment where people can just play around. You can see there they created these 3D trees using projection mapping techniques. This was an interesting one done in Taipei with Heineken doing an installation. The whole "Iron Man" approach there. But for the sake of time, I'll just quickly get to the end. So if there's three things that would be nice to take away-- because I realize there's a lot of information that we just covered. And a lot of it you'll really need to spend a little bit of time to just dig down into the resources that are available. But I think the first thing is really if you are designing software, try and design for the user, not for "Iron Man." So forget about "I want to be Iron Man," "I want to be in Minority Report." But instead, design for the user. Design for the person that's going to be using your software. So think about how can I make their experience better? How can I make something that they're doing better? And that's really going to be where the most powerful and the most engaging software comes from. And if you're familiar with UI/UX, you can almost throw the rules out the window in some sense. With this sort of technology, we're starting to rewrite the rules as we go, and that just means that you've got a blank canvas. So you guys are really starting at the right time. If you're just getting into programming now, that means you get to write the rule book as you learn, so that's an amazing opportunity for this. And I would just say again, be able to find a way to prototype quickly and then build. Don't necessarily waste all your time getting into the nuts and bolts straight away. See if you can get your ideas out. It used to be that it was good to get them on paper. And paper's still a great way to go. But once you start to get these dynamic interfaces, you really start to need some better tools to be able to get those dynamic ideas out.
And so if you can find some tools that help you prototype, learn them and use them, and you'll probably save yourself a lot of time and hassle.

So, a few resources. Once you start getting into JavaScript, js.leapmotion.com/tutorials is a great way to get started, and on js.leapmotion.com you'll also find some great JavaScript examples. Please feel free to engage in the forums -- ask developers, ask us. It's a great way to learn. If you're interested in reading more about what's out there, and some of the thought leadership in the space, labs.leapmotion.com is a great blog for that; we're putting out new content every week. And if you want to connect with us: again, the forums; email us at developers@leapmotion.com; or tweet at us -- we're pretty active on Twitter. @leapmotiondev is our developer handle, and @leapmotion is our main handle.

So that's really about it. If there's time for questions, I'm definitely happy to answer any. If you think of anything afterwards, please feel free to reach out to me directly at kiwi@leapmotion.com, or tweet at me at @kiwi. Cool. Any questions?

AUDIENCE: In addition to developing apps that [INAUDIBLE], how feasible is it to make [INAUDIBLE] level software so that you could scroll left, right, up, down, and any [INAUDIBLE] applications, [? for instance ?] [INAUDIBLE]?

MICHAEL SUTHERLAND: So there are applications for that. If you have a look on Airspace, you'll find a few different ones; one of the more popular is called HandWAVE, which allows you to do some basic gesturing. If you want to do that sort of stuff, there's really nothing in the SDK that limits you. It's really a question of: if you're building that OS-level control, is it actually making the experience of using the OS better?
Over time, we'll start to see operating systems evolve to a state that really is made for this type of input. Right now, we're using operating systems built for 26-year-old technology -- if you look at the Mac interface, it really hasn't changed in about 26 years. So we're battling a 26-year learning curve where people have gotten so used to this type of interface that it's hard to see beyond it. If you can improve that experience, that's a definite win. But if it's just doing a gesture for the sake of doing a gesture, what you'll probably find is that users will find it easier to just go back to their keyboard and mouse, because that's what they're comfortable with. So that's why it's really important to think about: who am I designing for? Who is that end user, and how can I make their life a bit better?

But if we have the time, I can show you a quick one. This is an interesting one that just came out. It's a very simple cursor, but it's kind of cute because it has this little hand-- or it doesn't. Are we in there? Interesting. Well, I won't show you that demo. But that [? leapcursor.js ?] is an interesting little example that basically lets you scroll up and down fluidly in a web page, and sort of click, just by flexing your hand. So it's meant to be a more laid-back kind of scrolling -- flicks up and down.

AUDIENCE: You mentioned the device itself is largely commodity hardware. What is the underlying hardware technology that's actually doing the detecting of objects?

MICHAEL SUTHERLAND: So for the actual detection of the objects -- basically, if you were to hack the USB on here, you'd just get a whole bunch of image data back. People have already done it. Where the magic happens is once that data gets into the computer: it's essentially some proprietary algorithms that were originally developed by our co-founder and have now taken on a life of their own.
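The leapcursor.js-style scrolling described above can be sketched as a small function mapping palm height to a scroll delta. This is a hedged guess at the general technique, not the library's actual internals: the rest height, dead zone, and gain values below are illustrative assumptions, with heights in millimeters as leapjs reports them in `palmPosition[1]`.

```javascript
const REST_HEIGHT = 200; // assumed comfortable hover height above the device (mm)
const DEAD_ZONE = 30;    // +/- band around rest where the page holds still (mm)
const GAIN = 0.5;        // pixels of scroll per mm of palm travel outside the band

// Map the palm's height to a signed scroll delta: hand held near rest
// does nothing; raising it scrolls up (negative delta), lowering it
// scrolls down, faster the further the hand moves past the dead zone.
function scrollDelta(palmY) {
  const offset = palmY - REST_HEIGHT;
  if (Math.abs(offset) <= DEAD_ZONE) return 0;           // hand at rest
  const beyond = offset - Math.sign(offset) * DEAD_ZONE; // travel past the band
  return -beyond * GAIN;
}

// In a browser with leapjs loaded, you might drive it like this:
// Leap.loop(frame => {
//   if (frame.hands.length) {
//     window.scrollBy(0, scrollDelta(frame.hands[0].palmPosition[1]));
//   }
// });
```

The dead zone is the design choice that makes this feel "laid back": the page only moves when the user deliberately raises or lowers their hand, so an idle hover doesn't jitter the scroll position.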
AUDIENCE: Is it through infrared, or a magnetic [INAUDIBLE]?

MICHAEL SUTHERLAND: It's purely infrared. So literally, it's kind of like having a little webcam sitting on your desk and a spotlight shining on your hand, all done in infrared. It's just some infrared optical sensors and some infrared LEDs -- there's really nothing too complex about the hardware. The magic is the way we're able to take that data and turn it into something useful in 3D.

AUDIENCE: So it seems fairly easy for people [INAUDIBLE]. But is there any way for a developer to maybe apply [INAUDIBLE] for other types of objects -- maybe faces or other things that the user might put forward?

MICHAEL SUTHERLAND: At the moment we do support a limited set of tools. Unfortunately, because of the way we've structured the data -- we wanted to do it in a simple fashion, one that makes the most sense for hands and fingers -- the API won't support face tracking or generic object tracking. That may come in the future. But for right now, it's really fine-tuned for hands, fingers, and specific tools. Cool.

[? DAVE: Thank ?] you so much. This is terrific.

[APPLAUSE]