[MUSIC PLAYING] IAN: All right, folks. Welcome back to week six, where we're going to talk a little bit about exposure. So to this date we've talked a lot about composition, lensing, storytelling, without focusing too much on the technical details of how to actually make an image, and match it to our intentions. So we'll dive a little bit more into that today. And I think it's important to define exposure. So the idea of exposure is that we want to render a scene in a specific way using our camera controls to interpret the amount of light in a given scene. So what does that really mean? Well we're going to dive deep into this, and we're going to look at each of the different camera controls. How they affect the image, and how we can utilize them to make different exposures. So what's our goal with exposing an image? It's really to capture that intentional image. It's to make a decision about how we want an image to look before we press the shutter button. When you have a camera that's on auto mode and you press the shutter button you're basically giving up all the decision making to a small machine. And they're sophisticated, sure, but they're not intelligent. And our goal is for us as intelligent operators to make the decisions for the camera, or at least override decisions that may be poor. So I think what we really need to understand is what exactly is it that we're trying to do? And so in any given scene there's some amount of light. And we can use absolute measurements to tell how much that is. It could be 150 foot candles of light, but I don't know what that means. I sort of conceptually understand that, but it doesn't help me take a camera and make an image. I have to do a bunch of math. It gets very confusing. There are easier ways. So what we really need is a relative amount of light, or at least the scale-- a relative scale that we can use to adjust our camera settings. So for us in this class when we're talking about photography, and then later cinematography, we're going to use the concept of f-stops. And all an f-stop is is a doubling or halving unit. So if you have a camera that has some sensitivity, and you double the sensitivity of it, that's one stop. If you have a camera that has some light coming into it, and you cut the amount of light coming into it by one half, that's one stop. They're going in opposite directions, but it's an equivalent unit of one stop. So just briefly to get a sense of how this might look when we look at an image we have here an image from Mount Auburn Cemetery that's at exposure, roughly. And I made this exposure using a DSLR, and the light meter that was in the camera told me that if I set the camera settings to this it'll be at exposure. And I think it did a pretty good job. There's no real complaints there. So in an effort to sort of investigate how much difference a stop might be I opened up one stop. So I allowed double the amount of light to strike the sensor. And you can see that it gets much brighter. If we go backwards we have some dark shadow detail in here, neutral gray, some bright white, and everything feels naturalistic. As we start to brighten up, that neutral gray begins to push towards the lighter gray, this definitely begins to clip, these trees feel a little bit brighter. Not completely unnatural, but getting there. So now if we allow four times as much light in, or two stops-- remember that you're doubling each time you open up a stop. So you double, and then you double again. Now this image really starts to fall apart.
And this is the classic overexposure that maybe you've accidentally done, or you've been struggling with your camera controls. And you end up here where you have clipping elements, there's no detail, there's no actual shadow detail. No dark tones in this at all. It's all light grays. And if we go even further it completely falls apart. And we could keep opening this-- allowing more light to hit the camera and eventually we'd end up with a solid white image. So, OK, that's overexposure. But when we look at this image again-- at exposure. This is the same image from the beginning. And we go the other way, I've now reduced the amount of light entering into the camera by a half, or one stop. And you can see that all the tonalities get depressed. They're getting pushed down into the shadow areas. There's more detail in that white snow, which was much brighter before, but now it's sort of a shade of gray. And if we keep going two stops so four times less light is reaching the sensor you now see like beautiful detail in here, but everything else is turning into a muddy, crushed, shadowy darkness. And if we keep going it becomes almost unreadable. So all that is is to say that we have these tools that allow us to increase and decrease the amount of light, and we do so by measuring it in doubling or halving units. So how do we measure light? Well, we have a brief video online that we posted earlier this week, on Friday, on light meters, which if you've all watched, great. And if not we'll talk a little bit about it as a refresher. But there's a few different ways. You can use a handheld light meter like this to measure light, but I think more often than not what we'll use is the internal light meter of the camera. And so this will measure the amount of light that's striking the subject, and reflecting back through the lens onto some sort of light sensitive material. And it will say if you set your camera to these values you'll get a decent exposure. So they can be hand-held or internal to a camera, but all light meters are calibrated to expose for an idea called 18% reflective gray. And they do so in different ways. So briefly, before we look at a couple different ways they do that, 18% reflective gray is often sort of colloquially termed middle gray. When exposed properly it forms the middle value between absolute white and absolute black in your image. It's right in the middle. So it's a very handy reference tone for us. And it's usually found on a gray card-- something like this, that reflects back 18% of the light that strikes it. Cool. So before we get here let us quickly fire this guy up. So this is the output of our camera right now. If we take a middle gray card and we place it in front of the camera well illuminated-- we'll fill the frame with it. We can see that in the middle of this there's this histogram, and sort of all of the image data is centered right in the middle. If we set it for this white background you can see that it's sort of drifted back to the middle as well. And that's sort of interesting because we would expect this tone to reflect more light. So there is a bit of a trick going on with reflective light meters that we need to pay attention to. Reflected light meters always assume that every single element that they're metering is middle gray. So there's a problem when you meter something that is not middle gray like this white background. And so if we actually are to open up this camera and allow a little bit more light in-- what am I doing here? There we go.
I had auto ISO on. So there's the shadow. Let's get this shadow out of here so we can-- so now all of a sudden we brighten the image up by allowing more light in, and this tone is rendering normally or as we would wish it to. So we can also do the same thing if we have a black object. You can see that the camera using the reflective meter set this value to be at exposure, which is lining up that small cursor with the caret in that scale. You can see that this is not black. It's middle gray. It's actually rendering this incorrectly because what it's expecting to see is this middle gray card. So what we need to do in order to offset that is actually underexpose what the camera is reading. So now when we check this the camera is like you're over three stops underexposed, but this starts to look correct. So we talked a lot about this-- thanks, Ben. [INAUDIBLE]-- in the light meter video. And we'll come back to this, but that is to say that in any given scene the exposure suggested by a camera is going to be accurate maybe 90% of the time. Most scenes have a mix of light values, dark values, and values in between. So if you sort of assume that that all melds down to about middle gray you can suggest an exposure. And the meter will suggest an exposure, and then when you expose for that you'll be pretty accurate. But this scene here is pretty much all highlights. And my camera said if you set it to these settings you'll get a properly exposed image, but it's not. It's too dark. So if we open up one stop. Well, we're getting closer. Snow starting to look like snow. And if we open up one more stop maybe we're a little bit too far. It's getting a little bit bright over in this area over here, but it's rendering as a very bright white. And sort of side by side you can see it here. The metered exposure versus compensating by increasing our exposure one stop. So this is what I mean to say when I say that we need to be a little bit more intelligent than the machines. They're very sophisticated. They can measure all kinds of things, but they're still sort of locked in and referenced to specific values. So we need to understand what those are so that we can compensate, and sort of use our intention to override the camera when it's making, essentially, a dumb decision. Yeah? AUDIENCE: If middle gray is meant to be halfway between pure black and pure white how come it's 18% and not 50? IAN: So it has to do with the sensitivity of-- or the way that human eyes render light, and because it's logarithmic. So it's a power function. So 50% is not quite halfway on a linear scale when it gets transformed, I think, but essentially it is because if you have say-- this is a good example. If you have one light bulb, and you turn it on, and you add another light bulb it's almost like you've doubled the amount of brightness, and it's a very obvious difference, but if you have 100 light bulbs and you turn one more light bulb on it's such a tiny incremental difference. So we're dealing with light on this power function scale where we're doubling and we're halving, and so the value that gets us to that middle gray is actually 18% and not 50% because it's not a linear scale. Which there's plenty of math, and I certainly won't do it justice, but we can dive down that rabbit hole another time.
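To put a rough number on that reflected-meter trick, here is a minimal Python sketch. It is not something from the lecture, and the white and black reflectance values in the examples are assumed purely for illustration; it just turns a subject's actual reflectance into the compensation you'd dial in away from what the meter suggests:

```python
from math import log2

def compensation_stops(subject_reflectance, meter_assumption=0.18):
    """Stops to shift away from a reflected-meter reading.

    A reflected meter renders whatever it reads as middle gray (about 18%
    reflectance). If the subject actually reflects more than that, the
    result is positive: open up that many stops so it renders bright.
    If it reflects less, the result is negative: close down.
    """
    return log2(subject_reflectance / meter_assumption)

# Illustrative reflectance values (assumed, not measured in class):
print(round(compensation_stops(0.90), 1))  # bright white card, ~90% -> open up about 2.3 stops
print(round(compensation_stops(0.03), 1))  # near-black subject, ~3% -> close down about 2.6 stops
```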
So we looked a little bit at what exposure is doing and how we might fool ourselves with the light meter, but what are the actual settings that I was just changing on the camera? I was pushing and pulling at some buttons and dials, and I was clicking some things, and the image was changing, but what was it? So the three main camera controls. The first one is ISO, which is the sensitivity of a sensor to light, or a film stock, or any kind of medium that we're using to capture light. The second is shutter speed. How long do we let light strike that sensitive medium for? In the case of DSLRs it's how long the shutter is open, but maybe there are other mechanisms in play on other cameras. And the final one is aperture. So in every lens there's a diaphragm that opens and closes. And depending on how open that is or how small that is it lets in more or less light. That's a control that we have at our disposal. And so by using all three of these elements we can control how much light reaches the photographic sensor, and how sensitive that sensor is to light more generally. So let's dive in a little to each of these categories because there are different artifacts that happen when we change and control each one of them. So first one, ISO. So it, again, measures the sensitivity of a medium to light. In digital cameras it's the sensor. In old film cameras it's actually a type of stock, like film stock, and they each have different ISO values. And the sensitivity doubles and halves, which is that unit of a stop, when the ISO value is doubled or halved. And in this little scale down below there's some common ISO values. A few of them you'll notice are bigger than the other ones. Those are-- maybe you can call them major stops of ISO. When you bought film stock back in the day it generally did not come in these third stop increments. It came as 100, or you'd get 200, or 400, or 800. So through that they've become our major stops, and there's ISO values in between that digital cameras can replicate at third stop intervals or maybe half stop intervals. But every time you double the number-- so if you go from 100 to 200 it's now twice as sensitive. And if you halve the number going from 800 to 400 it's half as sensitive. And it works even with the third stops. So say we had our camera at 320 and we double it to 640 that's doubling the ISO value, that's one stop. We've increased the exposure value of the camera by one stop. DAN: And by default most DSLRs that you buy, and mirrorless as well, will increment by thirds of a stop. So you can set it to half stop or only to do full stops, but by default your camera as you kind of click up-- and this is all the exposure values-- is a third of a stop. IAN: Exactly. So when we were shooting on film predominantly you would load a camera with a roll of film, and you would have one ISO at your disposal until you finished that roll of film. So the flexibility of digital cameras is sort of amazing because in any given moment-- I had it on auto ISO by accident, and it was just sort of changing ISO to every stop in between for any different shot that we want to make. So let's do a little exercise 'cause I really want to drive home this idea of the relationality of stops. So if we go from 200 ISO to 800 ISO how many stops difference is that? And is it more or less sensitive? This is a good time for the internet to chime in if you're on there and want to join us. Yeah? AUDIENCE: I would say more sensitive. IAN: It's more sensitive, but how much more? AUDIENCE: Two stops more sensitive. IAN: Two stops because we start at 200, and we go to 400, and then to 800. So we've doubled, then doubled. That's two stops difference. Absolutely. More sensitive. Great.
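Since every one of these exercises is just counting doublings, here is a minimal Python sketch (not shown in class; the function name is only illustrative) that does the same counting with a base-2 logarithm, reusing the ISO values from the discussion above:

```python
from math import log2

def iso_stops(iso_from, iso_to):
    """Stops of sensitivity change going from one ISO to another.

    Each doubling of the ISO number is one stop more sensitive,
    each halving is one stop less, so the answer is a base-2 log.
    """
    return log2(iso_to / iso_from)

print(iso_stops(200, 800))   # +2.0, two stops more sensitive
print(iso_stops(800, 400))   # -1.0, one stop less sensitive
print(iso_stops(320, 640))   # +1.0, doubling a third-stop value still adds one full stop
```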
How about this one? Ooh, so much math. All the arithmetic. 1,600 to 50. How about someone from Zoom? Anyone feeling brave? DAN: Carla says five stops. IAN: Five stops. Yep. Less sensitive. Exactly. Very good. How about 320 to 400? This is a little bit weird. I was talking about it halving and doubling, being a whole stop. What do we think this might be? One third up. There we go. Nice. And then finally one where we have a weird incremental stop that's maybe not one of those major stops, 500 to 1,000. Is it doubling or halving? Right, it's doubling. It's going 500 to 1,000. It's one stop more sensitive. This feels very basic in the moment, but at the end of the day having a firm grasp that when you double and halve you're moving things in stop increments is helpful because every other camera control also moves in stop increments. Can be controlled in stop increments. So if you add a stop here you can take a stop away somewhere else. So because there is no free lunch in anything that we do, and there are nothing but trade-offs, there are artifacts that are associated with different ISO values. So increasing the sensitivity of a medium introduces noise. Does anyone have a sense of what ISO noise looks like or what noise is more generally? Yeah. So in the old days of film it was actually grain. You'd see particles that were actually-- the molecular structure of the film was bigger so that it was more sensitive to light because there was more surface area. In electronical-- or electronical-- [CHUCKLES] In electronic systems it's actually just random data that gets introduced into our image. And so the more sensitive ISO values tend to have much higher levels of noise. And lower ISO values have less noise. It's much more apparent in the shadows of an image. Why do you think that might be true? Yes? AUDIENCE: There's probably less data in the shadows. IAN: Yeah. So there's less data in the shadows. There's no information to overwrite a noise value. So you can see even low value noise in shadows because there's very little data there to begin with whereas if you have a lot of-- like a bright highlight it may overwrite sort of a medium level value of noise. The first place you're going to see noise is always in the shadows. So this is actually from a cinema camera. This is a still of our friend Dan Armendariz. And here we can see him sitting in this magnificent room at 400 ISO. I don't see too much wrong with this image. I don't see a lot of noise anywhere. It sort of seems fairly clear and crisp to me. At 3,200, well, I can sort of start to see a little bit of degradation. If we go back and then forward there's something there. Still not crazy-- it doesn't seem from this distance. But then if we go to 12,800 that looks wild. There's just so much textural element to it. So much noise. And so it's a little hard to see at that scale so if we zoom in we have Dan looking handsome. Dan looking, ooh, a little coarse. And whoa. And you definitely can see it down in the shadow areas. And I didn't mean to make fun of Dan there. He's a handsome man. DAN: Ian, you were talking about the ISO levels like at 400-- IAN: Mm-hm. DAN: I forget-- something in the middle, and then 1,200, but is every camera the same? IAN: No. So every camera is not the same. The range of ISO values that you have available to you is hardware dependent. So you'll find that certain cameras can go from 50 to maybe 32,000. Some of them have ludicrous numbers. Like the A7S, I think, is like 120,000 or something.
It can see in the dark. Whereas a film camera might be locked into one ISO because you put 100 speed film in it. Or maybe your camera only has ISOs from 50 to 3,200. And maybe it only has half stop increments in between. It doesn't have third stops. So the range of ISOs that you have available to you is hardware specific, but the relationship is the same regardless of hardware. Doubling or halving increases or decreases the sensitivity of the medium by double or half. DAN: And my question also-- like 400 on one camera, is it the same as 400 on another camera? IAN: I would say no because at the end of the day the way the hardware is interpreting the electronic signals is different for every single camera. And so you may have one camera and say, oh, there's no noise at 400 ISO, and then use a different camera, and you may find that in a specific image there is a lot of noise at 400 ISO. And so there is definite variability in the quality of the electronic circuits that are in digital cameras. And so obviously higher tier cameras are going to do better at more ISO values than lower tier cameras. So, yeah. DAN: So sensitivity-wise they are the same, but there are definitely trade-offs between different models and different sensors as far as quality? IAN: Right. Exactly. So it has the same sensitivity, but you may actually end up with more noise, more artifacts, and it's because of the quality of the camera. That's a good way to say it. So why would you ever accept more noise? Why would you increase the noise? If it looks so bad why would you do it? Well, it's a really pragmatic decision. There sometimes is just not enough light. It's nighttime, it's dark, it's dusk. There's just not a lot of light and so you need to boost up the sensitivity of the sensor to even render any kind of image. So there's a very practical side of it. Or maybe you're going for aesthetic effect. To mimic grainy footage. Maybe you're trying to mimic some surveillance footage or something like that, or you want to go back to-- a lot of street photographers used to shoot 3,200 speed film and it always had really heavy grain in it. And maybe you like that kind of look and style and you want to add a little bit more noise into your image to mimic that for a textural effect, perhaps. Here's a picture of a ghost I took. I was ghost hunting last night. Do you see it? AUDIENCE: [INAUDIBLE] IAN: All right. Well, you guys will have to investigate it later. So shutter speed. So that's ISO. It's the sensitivity of the sensor. Increasing the sensitivity introduces noise. Lowering the sensitivity sort of minimizes noise, but it also requires more light so there's a trade-off there. So shutter speed. So shutter speed is the amount of time the sensor is exposed to light. There is a shutter in here. It opens for some period of time. The sensor is struck by light and then it closes. We measure it in fractions of a second, though you may see them written on cameras as integer values. This is to save space. On those tiny knobs you may see like 1,000. That's actually one over 1,000, 1/1000 of a second. And listed there are the major stops of shutter speeds. This is what you would find on most old school 35 millimeter film cameras. Starting at a second and going up to about 1/1000. So, again, hardware specific. Your camera may have more shutter speeds available to you, and in different increments. Or it may have less.
But the important thing to take away is that doubling the length of time doubles the amount of light that can strike the sensor. Halving the length of time halves the amount of light that can strike the sensor. So, again, we have this stop interval. So you could imagine that if I have to decrease the sensitivity of my sensor maybe I can open the shutter for twice as long. So go down one stop here, and up one stop here. And it's the same amount of light just manifesting a little bit differently. So we're gonna count stops again. So a 1/60 to a 1/15. DAN: 1/60 of a second. IAN: 1/60 of a second. I apologize. I shorthand a lot of this, but I should be more specific. So 1/60 of a second to 1/15 of a second. So it can be a little counterintuitive. 1/60 of a pizza is more or less than 1/15 of a pizza? That's how I have to do it in my head to be frank with you. [CHUCKLES] How many stops? Yes? AUDIENCE: Two stops. IAN: Two stops. So it goes from 1/60 to 1/30, 1/30 to 1/15. Two stops more sensitive, or more light striking the sensor. 1/1000 to 1/30? So much counting. DAN: Alex says five stops. IAN: Bam. Five stops. And that's wrong. That should say more sensitive. That's a typo. I apologize. Because we're going from 1/1000 of a second to 1/30 of a second that's much more light. That should read five stops more sensitive. I'll fix that before the end of the lecture. So 1/500 to 1/400. This is odd. That doesn't feel like a doubling or halving. What might that be? AUDIENCE: A third, maybe? IAN: A third. Yep, a third more sensitive. And from 1/180 to 1/90? Those are odd numbers that weren't on our list, but it's still half. Yep. Exactly. So it's one stop more. So like all things that we've talked about there is a trade-off with shutter speed. You can't just open up your camera for as long as you want and still expect to render a crisp image. So this is an image of a dam that Dan took, and you can see that the water is frozen. It was falling down, and it's literally frozen in midair. So this must be a very short shutter speed in order to freeze motion like this. The blink of an eye. Here's the same image with the same amount of light. So technically these exposures are equivalent, but they look very different. In this one there's time for the water to fall all the way down and create this sort of streaking effect in the image. So this is a slower shutter speed. What is it? Half a second. So the shutter is open for half a second, whereas in the previous one what was it? 1/4000 of a second. That's beyond fathoming to me. It's a little too fast. And so this image takes a second to load, but you can see here a composite image that Dan's made that starts at very slow shutter speeds, and moves towards very fast shutter speeds. And if we actually put the shutter speeds there you can see incrementally where motion begins to look how we might perceive it normally into being frozen instantaneously or drifting into this fairy-like wispiness. DAN: This kind of raises the question of how fast is fast enough too? So it really depends on your subject. For sports maybe 1/250 of a second is fast enough to freeze motion, but obviously to freeze a waterfall you need to crank it up even further, so. IAN: Yeah. DAN: So it depends. IAN: And also perception too because when I think about when I turn my tap on, and what I see-- I see something over here. I can't make out individual drops unless I'm tracking them. It's gushing, but it doesn't look like this. This is sort of very ethereal. And it definitely doesn't look frozen in time like the one before, so the speed of the object matters. And sort of our perception of that speed as humans defines what looks naturalistic.
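Before we leave shutter speed, the stop counting from a few minutes ago can be checked with the same doubling-and-halving arithmetic. Here is a minimal sketch, not from the lecture slides, that treats each speed as a duration in seconds and counts doublings of the time the shutter stays open:

```python
from math import log2

def shutter_stops(t_from, t_to):
    """Stops of light change when moving between two shutter durations (seconds).

    Doubling the time the shutter is open is one stop more light;
    halving it is one stop less.
    """
    return log2(t_to / t_from)

print(shutter_stops(1/60, 1/15))               # +2.0, two stops more light
print(round(shutter_stops(1/1000, 1/30), 2))   # about +5.06, the "five stops" rounded on the slide
print(round(shutter_stops(1/500, 1/400), 2))   # about +0.32, roughly a third of a stop
print(shutter_stops(1/180, 1/90))              # +1.0, one stop more light
```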
AUDIENCE: That's to say human eyes are sort of equivalent to some f-stop, would you say? IAN: I don't know. Maybe-- DAN: There are arguments online that yes, but-- IAN: Oh, I'm going back. DAN: I think it varies very much by person to person as well. IAN: Yeah. And I think the argument is you have to be looking at a fixed point. And everything-- when humans track objects-- fast moving objects-- they achieve more clarity than maybe if something passes right in front of your vision while you look at a fixed point. So it's a really hard experiment to perform because people's eyes are always moving. And if you move something in front of them they just will track it. And you sort of get-- it's all skewed data I would say. But there probably is some sort of upper bound of what we can resolve with our eyes. Just as fast as they refresh. So this is last night's snowy night. Snow is just streaking down through these wires. Here's another one with a tree. DAN: Just to go back one second to the previous conversation. I think it also largely comes down to something known as frame rate, which is how many images you see in sequence, and what it takes for human perception to believe something is actually in motion. And so I think that is a bigger part than shutter speed, for example, as far as what feels natural and feels normal. And we're going to punt to the next lecture in here on that when we talk about video. IAN: Yeah. It's sort of the idea of persistence of vision. Video is not moving images. It's a sequence of still images, and we perceive them as moving because they happen fast enough that we don't notice. So there is some sort of threshold where that happens. I'm just not convinced I know what it is. So to sort of jump back to that idea that we can actually track objects to increase the perceived focus of things, this is a car that's driving by, but by panning with the car I was able to capture this in focus while the rest of it is out of focus at a much slower shutter speed. If I hadn't moved the camera at all everything would be blurred. It would just be a large streak. DAN: And I think we should be careful with what you say is in focus versus what's not in focus. This is actually motion blur, which is different from something being in soft focus. IAN: That is true. A very important distinction. So this is actually-- as it moves across the sensor it ends up looking blurry, which is the idea of that motion blur for sure. So it does lead us to this question that if shutter speeds get low enough you can actually introduce shake or movement just from your own human body. This camera's on a tripod. It's very stable, but if I hold something I'm always moving. Always. Try as hard as I might. So the rule of thumb to minimize hand-held camera shake is to set your shutter speed faster than one over your focal length. So it seems confusing, but I have a 50 millimeter lens on here. In order for there not to be any camera shake when I hand hold it I should be shooting it faster than 1/50 of a second. If I put a 70 millimeter lens on here or a 100 millimeter lens then maybe I want to shoot at 1/70 or 1/100. So, especially-- this comes into play a little bit more too with zoom lenses where you might be zooming in and out and changing your focal length, and not sort of paying attention to what shutter speed you're at. And say you're at 1/70 of a second and you're shooting at 70 millimeters, but then you sort of snap to 200-- all of a sudden you're going to introduce camera shake into the image. A little bit of motion blur that reduces the crispness of your images.
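That one-over-focal-length guideline is easy to tabulate. Here is a small sketch of it; the focal lengths are the ones just mentioned, and the crop-factor and stabilization caveats in the comment are added context, not part of the lecture's rule:

```python
def slowest_handheld_shutter(focal_length_mm):
    """Rule-of-thumb slowest hand-held shutter speed, in seconds.

    The guideline from lecture: shoot faster than 1 / focal length.
    Crop factor and image stabilization (not modeled here) would shift this.
    """
    return 1 / focal_length_mm

for mm in (50, 70, 100, 200):
    print(f"{mm} mm lens -> shoot faster than 1/{mm} of a second")
```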
So aperture. The final of our three is the size of the opening in a lens. And we have a little short video to tease you with this, if we can just watch it. So as the lens gets more and more open what do you notice happening? AUDIENCE: The numbers get smaller. IAN: Yeah, the numbers get smaller. This is odd. Interesting. So aperture actually refers to the size of the diaphragm opening in a lens. And it's a fractional relationship between the size of the opening and the length of the lens. So it, again, is written in the integer-- as integers or decimals on camera bodies and lenses and things like that, but it's actually like a fractional amount. So it's 1/2 or 1/2.8. So 1/22 is smaller than one half. And that's why the diameter of the opening gets smaller the larger the number gets. It's a little frustrating, and counterintuitive, and can be confusing if you're not used to this, but with practice I promise you'll grasp it. So the major f-stops are listed below up to 22. You can have smaller apertures like f/32, or 45, or 64. And there are increments in between. There are third stop increments in between. You may find someone that shoots something at f/9 or something like that and you're like, well, that's not listed here-- so there's increments in between. And just to drive home the point that the smaller the f number is the larger the opening. So over here we're at like f/2.8, and over there we're at f/22. And so when you refer to the size of an aperture you might just say f and whatever the number value is. So I'm at f/2, or I'm at f/16, or f/8 to denote that you're talking about the size of the aperture, and not some other numerical value associated with camera exposure. So artifacts of aperture. Again, there are trade-offs with everything. So changing the aperture directly affects the depth of field of an image. We have not talked about depth of field except sort of in glancing blows in critique. And depth of field is defined as the amount of an image in apparent focus. So in reality with the way optics are there's only one single plane of critical focus in an image that runs perpendicular to the lens axis. And it's set at some distance from the lens. And if you look at your lens you'll probably see that there's feet and meter indicators on there. And when you adjust those to a witness mark that's how far away that critical plane of focus is. But, before we get there, that's sort of obviously not how we experience photographs. We often look at photographs where there's more things in focus on the z-axis than a single plane. So there's some artifact that's happening when cameras make images that allows us to have more in focus than just a single plane, and that is what depth of field is. How much distance on the z-axis is in apparent focus. And we'll come all the way back around to depth of field in just a few minutes, but that's the concept that we're talking about, and aperture directly affects it. Oops. Forward. So here we have an image of a young man in-- DAN: Handsome young man. IAN: What? DAN: A handsome young man. IAN: Handsome young man in a swing. And I used Dan's son as my stand in. But you'll notice that the back of the image is out of focus. And so we're at what? At f/1.4.
So in this image-- which is a really large opening-- really large aperture opening. If we go one more image we're now at f/11 so the aperture has gotten smaller. And we've compensated for that with other exposure controls, but now you can see that all of this background is in focus. I love that he sort of looks over his shoulder at that point. What's that? And so side by side you can see that these images while the exposure is the same, the brightness of the light values and the darkness of the dark values is the same, they look very different. And so you can use depth of field as a creative tool. And it's often used to separate people from environments to make things more intimate, or to show how expansive an environment might be if you go the other way. DAN: Can I-- IAN: Yeah. DAN: [? Can ?] I give an easy way to remember the f-stops? IAN: Yeah. DAN: Can you set up a drawing for me? IAN: Yeah, I can. DAN: So to this day I have trouble remembering these numbers, just like-- you know, you get to know the majors, but the easiest way for me to visualize this as you're going along is start with one and 1.4. And so the nice thing now is you can just keep doubling along the way. So one you double to two. 1.4 doubles to 2.8. Two doubles to four. 2.8 doubles to 5.6. Four doubles to eight. 5.6 doubles to approximately 11. We round here. Eight doubles to 16. 11 doubles to 22. And that covers most lenses that you'll pick up and operate with. So if you're trying to remember this scale just remember one and 1.4, and then keep doubling. IAN: Quick and dirty.
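Dan's keep-doubling trick works because each major f-stop is the previous one times the square root of 2, which halves the area of the opening. Here is a minimal sketch of that; the 50 mm lens at the end is only a hypothetical example, used to show why a bigger f-number means a smaller opening:

```python
from math import sqrt

# Each major f-stop is the previous one times sqrt(2),
# because that halves the area of the circular opening.
major_stops = [sqrt(2) ** i for i in range(10)]
print([round(n, 1) for n in major_stops])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6]
# Cameras label these with the rounded values 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22.

# The f-number is the focal length divided by the diameter of the opening,
# so on a hypothetical 50 mm lens the opening shrinks as the number grows:
for n in (2, 5.6, 22):
    print(f"f/{n}: opening roughly {50 / n:.1f} mm across")
```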
So we talked-- oops. So I was alluding to-- these have the same-- they're allowing the same amount of light to strike the sensor. They're just using different settings in order to do this. So that means that there's some sort of idea that you can have different exposure settings that yield equivalently exposed images with different artifacts. And so as a photographer, or an image maker, you have to make decisions about which artifacts you want and which ones you don't. And it comes back to that idea that we were talking about so much of intention and supporting your narrative story. 'Cause this image over here is very much about this young boy. And this image is actually about this boy in a larger environment. And by making the decision to have narrow depth of field we're focused in with the child, but to have expansive depth of field we're sort of looking at the child in relationship to the space. So exposure equivalencies. So we can expose the same scene with different settings, and yield an image that is at exposure. But how does the image change? So to come back to this image again, this is exposed at ISO 100. So a lot of noise or not a lot of noise in that, do you think? AUDIENCE: Low. IAN: So low noise probably. It's at f/5.6. So is that a lot of depth of field or a little depth of field? Shallow or deep? AUDIENCE: In the middle. IAN: It's in the middle-ish we could say. And it's exposed at 1/100 of a second. So when we think back to the image of the dam with all the sequential shutter speeds on it, it's not super fast, right-- not fast enough to freeze anything like falling water or anything moving really fast, but fast enough to freeze most things. And it's also not slow enough to allow significant motion blur. So here's the same image, and we've changed a couple of things. We're now shooting at ISO 400. So what have we done compared to the last image? AUDIENCE: Doubled the ISO? IAN: We were at 100 before so we went 100 to 200. 200 to 400. So we increased the sensitivity by two stops, but we also opened up the aperture by several stops, and then shrank the shutter speed by a fair amount. But what you'll see is that f/1.4 is a really big opening and has a very shallow depth of field. And you can begin to see that. If we go backwards you can see detail here. Crispness. We go forward and it's totally blurry. And we know that that's probably not motion blur because we're shooting at 1/6400 of a second, which is incredibly fast. It's fast enough to freeze water and most anything that we would deal with in our daily life. So here's another version of this image. We're still shooting at a very high shutter speed. We're shooting at a deeper-- or a smaller aperture, which gives us deeper depth of field, but we're shooting at this really wildly high ISO, which introduces a lot of noise. And it's difficult to see, but we'll zoom in on this in just a second. So again, just to go the other way. So we have f/1.4. And this is 1/8000 of a second, which-- there's not a lot moving in here, but there's relatively no motion blur. DAN: There would be nothing moving even if there were. IAN: Yeah. Exactly, but it wouldn't be blurred if it was. And a sort of relatively benign ISO 640. And again f/22. And we notice the difference from f/1.4 if you look at that gravestone in the very far background there. This guy here. All of a sudden-- oops, I went the wrong way-- at f/22 it's sort of crisp, whereas before it was out of focus. DAN: And, Ian, you said a moment ago a benign ISO here, but I don't know that you defined what is an acceptable ISO range? Like we talked about the trade-off of a high ISO introducing noise, and a low ISO having less noise, but like in your-- maybe this is an experiential question. In your experience-- IAN: Yeah. DAN: --like what would your target range be if you're going to go out and shoot something and wanted to keep as little noise as possible while giving yourself a range, what would you operate in? IAN: So I tend to shoot between 100 and 400. And I think that may be just habit from shooting 35 millimeter film where I would buy it at 100 speed, or 200 speed, or 400 speed, but mostly 100 and 400. But I think on any given shot I'm willing to push up to like 800, maybe 1,000, and once I get past that it just starts to-- I need to really want to have the grain there because it gets hard to get rid of. So I would say experientially, yeah, I shoot around 400. 400 to 800 is sort of what I shoot because it's sensitive enough that I can be in a reasonably dark situation and capture what I want, but it's also just not introducing that much noise that there's a real problem when I go in and look at the images later. Yeah? AUDIENCE: Are you shooting full frame or crop? IAN: Full frame. Well, it depends. It depends which camera I'm using. So I'll shoot full frame, which means a 35 millimeter size sensor, but I own cameras that have smaller sensors than that. And so-- yeah, and so it really depends on what piece of hardware I'm using. Again, all of these values and things are dependent on the hardware that you have. DAN: Yeah, but I think Ralph's question is actually interesting asking which size sensor. Is there a performance difference with a bigger sensor versus a smaller sensor? IAN: Yeah, there is because you can have larger photosites.
So because you have a larger sensor the photosites that are sensitive to light can be larger, which means that they are effectively better at higher sensitivities than sensors that have smaller photosites. So a full frame or a larger sensor is going to have better quality in lower light than something that has a smaller photo sensor. DAN: With a higher ISO. IAN: Yeah, with a higher ISO. I think, all things being equal, if you say have like a small micro 4/3 mirrorless camera at ISO 400 they're probably indistinguishable. Maybe if you really, really, really dig into the image you can find it. But if you're shooting at say 3,200, or 6,400, or something like that, having a larger photosite, which means a bigger sensor, is going to be more beneficial to you. That's where you're going to find that little bit of edge that it gives you. For sure. It's a good question. DAN: And then I just want to highlight-- Alec said that there's vignetting on the lens. It's very noticeable between f/1.4 and f/11. IAN: Yeah, there definitely is. AUDIENCE: There is. IAN: Yeah, like you can see it here. At 22 all the corners are bright, and at 1.4 there's this serious vignetting. And it almost feels like it's softening too a little bit. And what that is is that the coverage of the lens is sort of just getting a little bit too small for the sensor size and it's just-- not quite enough light is reaching the sensor at that time. So I'm going the wrong way. That's why they keep going. So if we jump in and look at both. This is the first image, which was sort of medium values of everything. It's very vanilla. This is 1.4 and you can see my focus was just off. AUDIENCE: It's hard to focus at 1.4. IAN: It's really hard. And I sort of left this in here as an illustration that you might think you have something in focus when you're looking at it with a smaller aperture, but then if you open up it may not quite be-- your plane of focus might not be exactly where you expected it to be. And this is very-- like this happens a lot. Like it's not far off really. It's on this bush here somewhere, but it's just too far forward. Like I made a mistake. DAN: And it's really unfortunate when you're shooting with a person and-- IAN: Yeah. DAN: The thing you want when you're shooting a person is for their eyes to be in focus because that's like the first thing your eyes typically go to. And when you perceive an image in focus the eyes are typically in focus. So if you notice that the focus is like just on the end of someone's nose, but their eyes are not in focus it's definitely a moment to kind of-- it's good to double check if you're taking a picture of people, I guess, to punch in digitally on your screen and make sure that the image is actually sharp. So. IAN: This is that image with a really high ISO. And I think it was a little bit difficult to see, but when we zoom in now you really can see the noise just sort of introduced and all of the texture. And you'll almost see-- like it's almost a little bit brighter too because there's so much added bright data in the shadows. It lifts them up a little bit because there's so much of it. So, again, 1.4 again was my bad focus. So frustrating. AUDIENCE: It happens to all of us. IAN: Right. But then if we go to f/22-- and this is what tricked me because I think I actually shot this one first and I was like, ah, it's in focus. Looks great. You can see that this background is in focus. And this foreground object is in focus. So it has a large depth of field.
So this should visually illustrate that we have the ability to have different exposure settings for the same exposure value. So if we think about a scene in this way there is some sort of amount of available light that we want to sort of render. And we have three controls at our disposal to do that. And by increasing or decreasing one we can increase or decrease the overall sensitivity of the camera to some amount of light. Or we can increase one and decrease another to not change the sensitivity of the camera, but change the way that image looks. So if we think about this-- if we assume that our base exposure that we metered returned some values like f/5.6, 400 ISO at 1/60 of a second, which is, again, not that grainy. It's not that fast. Not that slow. There's not a lot of room for motion blur. Someone walking would be fine. A car might be out of-- a moving car might have a little blur to it. Someone standing still would be fine. And the depth of field is in the middle range. We haven't figured out quite what that means, how much that means, but we know it's in the middle compared to 1.4 or 22. So let's make some decisions. Let's change the way this image looks. And what I've done is I've filled in two of the three blanks with different numbers. So we should at this point be able to calculate what the empty one is. So we'll just walk through the very first one. So before we were at 5.6 and then we closed down one stop to f/8 so we allow less light in by one stop. Here we're at 400. We're still at 400. So we've allowed one stop less light in. So now we need to compensate for that. And we're going to compensate for that in this empty square. So we need to allow one stop more light in to make them equivalent. Does that make sense? So what would the value be in that square? Yeah? AUDIENCE: [INAUDIBLE] DAN: I've got the internet. You're right here. Go ahead. AUDIENCE: Yeah, but [INAUDIBLE]. I haven't heard them all day. DAN: Miles says should we go up to 1/30 of a second? IAN: We should. Well done. That's awesome. So we reduced the amount of light by one stop by closing the aperture down. So we had to increase the amount of time that the shutter speed was open by one stop. This isn't the only decision we could have made, but it is the decision here. AUDIENCE: For the proper exposure, right? IAN: Yeah, to maintain our exposure. The same exposure that we had given by these yellow numbers up here. So now we have a different set of numbers. And again, we're still going to reference the yellow numbers. Don't worry about the bottom one. So we open up the camera from 5.6 to 1.4. DAN: Can you go back to my drawing? IAN: Yeah. DAN: To go this far I still have to visualize it. So we're going from 1.4-- IAN: Well, we started at 5.6. DAN: Right, sorry. IAN: Was the original exposure value, and we're going to open up the aperture to 1.4. DAN: 1, 2, 3. IAN: 4. DAN: I won't say the final number. Somebody else can say it. IAN: So we've added four stops more light by adjusting the aperture value. We still haven't adjusted the ISO value so that's the same. So we now just have to compensate for four stops more light. Got a few answers from online. AUDIENCE: Do you need more light or less light? IAN: Well, so you tell me. We started at 5.6 and we're going to 1.4. AUDIENCE: OK. We need to add more. OK, yeah, I'm good. I'm good. IAN: Right. So the smaller numbers are larger openings, which is sort of counterintuitive, but we've gone four stops more open. So where would the shutter speed go to?
You said you have an answer? DAN: I have a few answers from online. IAN: OK. DAN: So I have 1/240. I've got 1/1000. 1/960. IAN: OK. So we'll start with the lowest and we'll go all the way up. So 1/240. So we were at 1/60 and we want to reduce the amount by four stops, right? Is that what we decided? So we go 1/60 to 1/125. 1/125 to 1/250, which doesn't match perfectly, but that's sort of the way it is. 1/250 to 1/500, and 1/500 to 1/1000. That's our four stops. So we end up at 1/1000 of a second. DAN: And this is a good moment to highlight the rounding that we do. The kind of fudging of the math. It's the same thing from 5.6 to f/11 when we make that jump. It's not quite, but it's easier to talk in whole numbers than it is to remember 1/960 of a second. IAN: Right, exactly. And so you will find as you investigate some of these concepts that some of the numbers are scales on the cameras. Maybe you do have that fudge factor when you go from 5.6 to 11, which doesn't quite make sense. Which should be 11.2 I think. All right, so let's do this one here. So we're at 5.6, and we open up to 2.8. How many stops? AUDIENCE: Two stops? IAN: 5.6, four, 2.8. Yeah, two stops. 1/60 to 1/30. Opening or closing? More sensitive or less sensitive? AUDIENCE: More. IAN: More sensitive. By how many stops? AUDIENCE: One. IAN: One. So we add one more. So that's three stops difference. So now if we were at ISO 400 and we've added three stops of light how much less sensitive do we need to make that? There's so much arithmetic. It's annoying, I know. But I'll make you really good at it. AUDIENCE: Three stops. IAN: Yeah, so we need to go down three stops. So what is the numerical value for that? AUDIENCE: [INAUDIBLE] IAN: OK, perfect. That's exactly right. Great. AUDIENCE: [INAUDIBLE] IAN: Great. All right, so our last one we've doubled the ISO. We've done something with the shutter speed. How many stops difference is the shutter speed from 1/60? Oh, I still have to do it in my head. 1/30. 1/15. So you go from 1/60 to 1/30, 1/30 to 1/15, 1/15 to 1/8. That doesn't make any sense, but that's what the number is. So it's three stops more light. And we've also doubled the ISO. So that's one stop more sensitivity. So that's a total of four stops. So if we take the aperture we need to close down four stops from 5.6 so we just go the other way. We go 5.6 to eight, eight to 11, 11 to 16, 16 to 22-- f/22. And finally this last one. I don't even understand this one. We'll go 400 to 320. That is not halving nor doubling. What do we do? AUDIENCE: It's a third isn't it? IAN: Yeah. Let's do the rough math. It's about a third. So I've got a third of a stop. I don't know how to do that with my fingers so I'll just keep it in my head. Now we go from 1/60 to 1/100. Is that a full stop? A full stop would be 1/120. So it's 2/3. So now we have 2/3 and 1/3. Ah, one stop. AUDIENCE: [INAUDIBLE] can handle that. IAN: Perfect. So we just have to open up to f/4. So it can get a little funky. And it can get-- like you can start to sort of move and shake things around, but you literally can make different images.
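All of those board exercises can be checked with one bit of arithmetic: shutter and ISO count one stop per doubling, and aperture counts two stops per doubling of the f-number, because the area of the opening goes with the square. Here is a minimal sketch, not from the lecture itself, that scores each combination against the metered base of f/5.6, ISO 400, 1/60 of a second:

```python
from math import log2

BASE = {"f_number": 5.6, "iso": 400, "shutter_s": 1 / 60}  # the metered exposure from the slide

def exposure_offset(f_number, iso, shutter_s, base=BASE):
    """Stops brighter (+) or darker (-) than the base exposure."""
    aperture = 2 * log2(base["f_number"] / f_number)  # smaller f-number lets in more light
    shutter = log2(shutter_s / base["shutter_s"])     # longer time lets in more light
    sensitivity = log2(iso / base["iso"])             # higher ISO is more sensitive
    return aperture + shutter + sensitivity

# Every combination worked out on the board comes back within about a tenth
# of a stop of zero; the leftover is just the rounding in the nominal numbers.
for settings in [(8, 400, 1/30), (1.4, 400, 1/1000), (2.8, 50, 1/30), (22, 800, 1/8), (4, 320, 1/100)]:
    print(settings, round(exposure_offset(*settings), 2))
```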
So what does this image look like? This middle one? 1.4, 400, 1/1000 of a second? AUDIENCE: There would be a vignette. IAN: Yeah. So we noticed that when it was at 1.4 with this lens on this camera there was serious vignetting so if we use this same system again we'll get serious vignetting, but what's the artifact that we'll really care about? AUDIENCE: [INAUDIBLE] a shallow depth of field. IAN: It'll have very shallow depth of field. What about things that are moving in that? Say there's some cars in it. AUDIENCE: They'll be frozen. IAN: Yeah, they're going to be frozen. 1/1000 of a second. That's pretty fast. That's 1,000 pictures in one second. That's actually pretty ludicrously fast if I think about it. So very shallow depth of field, but no motion blur. So blur because of depth of field-- or lack of focus because of the depth of field, but what about this one? F/22 at 800 at 1/8 of a second? Yeah? AUDIENCE: You'll have wide depth-- very wide depth of field. IAN: Mm-hm. AUDIENCE: And potentially, depending on the lens, it's gonna have like a bit of shake or blur. IAN: Mm-hm. AUDIENCE: At 800 I'm assuming not too bad. Not too much grain. IAN: Yeah, but more than 400 for sure. So large depth of field and then some motion blur. 1/8 of a second. I can do a lot in 1/8 of a second. Like I moved. A lot of dancing in 1/8 of a second. So that is to say that we can actually speculate at what these images look like abstractly just by looking at the camera settings. We have an understanding of how to previsualize what a scene will look like at given settings. So when you're out photographing you can look at a scene and you can be like, oh, so there's cars moving, there's some water flowing, I know if I set a low shutter speed I'm going to get interesting blur. Cool, I want to try that. And you know that you can then decrease the shutter speed so that you end up with more light, and then you maybe have to stop down. It's going to increase your depth of field, which may be an interesting image. Now I'm going backwards again. So briefly, what do you think the camera settings were for this image? And we're going to talk abstractly without sort of like digging into the image file. We don't know for sure, but we can take some guesses. AUDIENCE: F/11. IAN: OK, so you're saying f/11, but why are you saying that? AUDIENCE: Well, a lot's in focus. IAN: So there's a large depth of field. That's our first indicator, so we're going to say that it's probably a small aperture. F/11, maybe higher. Who knows? But like a small aperture for sure. What about the shutter speed? AUDIENCE: Relatively slow. IAN: Relatively slow. Why do you say that? AUDIENCE: [INAUDIBLE] shutter speed, there would be a shadow on that, I guess. IAN: Well, we don't know how much light was at the scene. We don't know, but what we can do is look at the artifacts. And the artifact of shutter speed is: is there motion blur? Is there not? Is there motion blur on certain things and not other things? Because that gives us an indication of how fast it is. AUDIENCE: It doesn't look like there's much motion [INAUDIBLE] at all. IAN: Yeah, so I think-- you know, where there's-- the water is rippling in some wind and that seems relatively crisp. Like maybe there's a little bit of motion blur there, but it's not a lot. It's not sort of distracting, and it doesn't look like the water in the dam picture, which was super smooth and flowing. AUDIENCE: Carla says 600 ISO. IAN: 600 ISO. AUDIENCE: What? IAN: Yeah, why? So why? AUDIENCE: [INAUDIBLE] be more. IAN: What is the reasoning for that? The logic behind that assessment? AUDIENCE: Aperture's closed down. IAN: I don't see a lot of grain so if that is the idea-- like, yeah, there's not a lot of noise. And so I think we could say that this scene probably has a fair amount of light in it. It's a sunlit vista. And there's a reasonably high shutter speed, a large depth of field, and some relatively low ISO. AUDIENCE: Yeah. IAN: Right?
And maybe-- yes? AUDIENCE: Isn't another artifact of a low ISO good color reproduction? 'Cause like when I saw the picture at the beginning with-- what's his name? I don't know if his name was Dan as well. Was his name Dan [INAUDIBLE]? IAN: Mm-hm. AUDIENCE: So like at 400 the colors looked OK, but when you went to 1,200, 800, it was blotchy and colors that weren't there started to exist. IAN: Yeah, so in that sense because noise is random and it's not just brightness values, it's also color values, you can get random red introduced at a pixel or a bunch of other pixels and so the color loses fidelity with the introduction of noise in the same way that your exposure loses fidelity. And you notice in the graveyard image where it seemed almost brighter because there was so much noise in the shadow it easily could have seemed more colorful or the colors seemed off and messed up because of like random color data forming as noise. So, yes, in that sense absolutely. DAN: Yeah, and with the stretching of the information basically your sensor's just like stretching the information that it is able to collect. As we saw there was a lot-- when the noise comes up in the shadows that can also lead to the perception of lack of contrast as well because there's just like more noise across the darks that seems to raise them more than they actually are so there's better reproduction at a lower ISO because of that as well. IAN: So what's going on in this image? Hm? AUDIENCE: Looks like California. IAN: Maybe. Are they cormorants or pelicans? I don't know. High ISO or low ISO? We'll start there. DAN: Getting a lot of slow shutters. Long shutter-- IAN: Yeah. I think the most sort of dramatic thing about this image is the slow shutter speed that allows the water to blur. And the depth of field is-- I don't know. I mean there's some definition in the water out in the far horizon and the rock so maybe it's pretty large, but it's, you know-- and I don't notice a lot of noise so. But the most dramatic feature of this is someone is utilizing a slow shutter speed for an interesting compositional effect. AUDIENCE: And I would say it's low noise because when you have the sun that's a powerful source. IAN: Yeah, right. So they may not need a lot of sensitivity, but we don't know what time of year? What time of day? This could be evening or early morning so there's a lot of room for variation. Our error bars are large with that sort of idea, but yep the sun is an incredibly strong source of illumination so probably not something really, really high. And we also don't see any artifacts of that. I don't see a lot of noise in the image. How about this one? AUDIENCE: Very small f-stop. IAN: Yeah. So a very small f-stop. Well, actually I think we should say a small aperture opening, which is a large f-stop number. Just to be specific because it gets so confusing otherwise if you're like, oh, it's a large f-stop and you're like, but is that the big number or the big opening? So. AUDIENCE: Small number, big opening. IAN: Yeah, yeah. No, but just-- it takes muscle memory and practice. So this is really about sort of diving into a very small section of an image and letting the out-of-focus play as sort of a graphic element around that. There doesn't appear to be a lot of noise, and there's not a lot to suggest anything either way about the shutter speed. It's some sort of neutral value, but they made a very conscious choice to shoot at a very large aperture to get a shallow depth of field. Whoa.
What's this one? DAN: Fast shutter speed, [INAUDIBLE] says. IAN: Yep, exactly. And not much to suggest-- I mean, the depth of field feels pretty reasonable. I can see things in the background that are in focus. The water is completely stopped midair for sure. AUDIENCE: It's a big aperture. IAN: Yeah, yeah, yeah. Absolutely. Whoa. God. What about this one? AUDIENCE: High ISO. IAN: Yeah. I would agree. Absolutely. That's a good read. AUDIENCE: [INAUDIBLE] fast ISO. IAN: Yeah. What about the shutter speed though too? This is sort of interesting. AUDIENCE: Yeah. I would say fast 'cause look, they're frozen in the air. IAN: Yeah, they're frozen in midair so it's fast enough to stop someone in mid-leap. But also, you see, in order to do that they needed to boost up the ISO to make it more sensitive because it was allowing such little light in for that time. AUDIENCE: Yeah, and there's a lot in focus in the picture. IAN: Yeah. AUDIENCE: How did they get this picture? IAN: By boosting the ISO. And I think you could-- like the level of grain in the image, or noise in the image, is really apparent. AUDIENCE: Yes. IAN: Yeah? AUDIENCE: One thing on this image as well, like you mentioned about color reproduction and higher ISO. IAN: Mm-hm. AUDIENCE: I think this image might be made black and white and that's one kind of thing people sometimes do when they bump up their ISOs. When you have bad color you just make it black and white [INAUDIBLE]. IAN: Yeah, because the color gets really mushy and noisy, and then the black and white begins to feel like a textural compositional element rather than a degradation of some other image. That's a very good point. Nope. Backwards again. So let's take a five minute break at 7:04, and then we'll come back and we'll talk a little bit about depth of field because I've sort of been saying oh there's large depth of field and shallow depth of field, but we don't know what it is. So let's demystify this when we return in just a couple of minutes. All right, folks. Welcome back. So we're gonna dive right in and start talking a little bit about depth of field as a creative tool, and also how it works from a functional standpoint with your camera. So depth of field is the amount of any image that's in apparent focus. We've seen some images where there's only-- a tiny part of the frame is in focus and everything else is sort of blurring out in the foreground and the background. And we've seen images where everything from the foreground object to the vast horizon in the distance is in focus. So we need to be careful because apparent focus is the linchpin of this. There is only one plane of focus in an image, and it's set when you adjust the focus ring on your lens and choose some distance at that witness mark. The rest of everything that you perceive as in focus is apparently sharp. It's sharp enough that our human eyes don't notice that it's out of focus. So there is a threshold where at a certain point our eyes do notice that it's out of focus. And this threshold is much lower on smaller images. And as you blow things up, and I think you may have experienced this, where you take a picture and it feels like it's sharp, like my gravestone image. And then I blow it up to a big size and I'm like, oh, it's actually out of focus. AUDIENCE: [INAUDIBLE] IAN: Yeah, on an 85 inch TV. So there is this relationship of how that apparent level of focus breaks down the larger you enlarge an image.
So the bigger print that you're going to make, the larger you're going to project something, the more important it is to make sure that you nail that critical plane of focus. And to Dan's point, when we're doing portraiture we often put that right through the eyes. So that we know that this person's perfectly in focus, and that as the depth of field grows we'll get a wonderful image of their whole face. So what does this look like more generally? So this is a funky diagram that I made where you have this sensor plane that's inside of the camera, you have a lens, and then you have your plane of focus out here set to some distance. So one of the interesting things about depth of field is that it's not perfectly 50/50 surrounding that plane of focus. The amount that's apparently in focus is one third in front of it and two thirds behind it. So there's actually a little more behind that plane of focus that appears in focus than in front of it. And this is a handy trick: if you have a really shallow depth of field, you can actually cheat that plane of focus a little bit forward to make use of that extra space behind it. Nope. I keep-- do I keep going backwards? So we describe depth of field as being deep when much of an image is in apparent focus, and conversely, when only a small area is in apparent focus we describe that as being shallow. The three factors that control this are aperture, focal length, and the focusing distance of the lens. So the only exposure control that affects depth of field is aperture, but other elements affect depth of field as well. And that means the length of the lens, whether you're at a wide, normal, or tele. And how closely the lens is focused. Whether you're focused at a subject really, really close in front of the lens or much, much further away. So to talk a little bit more concretely about this: for aperture, the smaller the opening, the deeper the depth of field. So f/22 will make a deeper depth of field in the same image than f/1.4. And we saw some examples of that with the cemetery shot where the background gravestone was out of focus, but then it was in focus. The larger the opening, the shallower the depth of field. And there's this little scale of the major stops down there. Shallower going this way, towards more open and the smaller f-number. And deeper going towards the larger f-number but smaller opening. So your rule of thumb to help you make sense of this is that doubling the f-number doubles the depth of field. Yes? AUDIENCE: [INAUDIBLE] IAN: So it's a little bit helpful that if you don't have enough depth of field you can double the amount of it just by opening up one stop-- or closing down one stop. I'm sorry. Closing down one stop and adjusting one of the other exposure controls to give you a little bit more depth of field. So a big thank you to Andrew Markham for sitting in for us on this. But what we have here is a wide angle lens set to f/16 with some amount of lighting in a space. And what I want to look at is not just our subject, but the area behind them. And watch what happens as we play this video. So what we're actually doing in this moment is opening up the aperture at the same time as we decrease the amount of light in the space. So we're actually dimming the lights as we open up the aperture. And what you see is in the starting frame, when we're at f/16, all this appears crisp. Maybe a little blurry, but much crisper than this, for certain. So this is a wide angle lens. 
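A minimal sketch of that aperture rule of thumb, assuming the common thin-lens approximation that total depth of field is roughly 2 × N × c × s² / f² when you're focused well short of the hyperfocal distance. The circle-of-confusion value and the example numbers are illustrative assumptions, not the settings used in the demo.

# Rough depth-of-field approximation: DoF ≈ 2 * N * c * s^2 / f^2
# N = f-number, c = circle of confusion, s = focus distance, f = focal length.
def approx_dof(f_number, focal_length_mm, focus_distance_mm, coc_mm=0.03):
    """Approximate total depth of field, in millimeters."""
    return 2 * f_number * coc_mm * focus_distance_mm**2 / focal_length_mm**2

# Same lens, same focus distance, only the f-number changes.
print(approx_dof(8, 50, 3000))   # f/8:  roughly 1.7 m of apparent focus
print(approx_dof(16, 50, 3000))  # f/16: roughly 3.5 m -- doubling the f-number about doubles it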
Do you remember what focal length we were at, Dan? 25, let's say. Maybe. DAN: There was a crop factor there too. IAN: Yeah, yeah, yeah. But it definitely is a wide angle lens. And then if we look at this same situation with a normal lens we'll see something interesting happen. Remember that I said that focal length affects depth of field. So here we are with a normal lens. The angle of view has shrunk a bit. The camera hasn't moved. We've just tightened up the lens a little. Remember from Dan's lecture. And we'll do the same thing again where we start at f/16 and we'll open up the aperture. And we'll actually just sort of dim the lights in concert. And if we look at this one, again, you see that the background is in focus. And then it's out of focus, but is it more out of focus than the previous one? AUDIENCE: Yes. IAN: OK. Yeah. That's sort of interesting. So now we'll move to a telephoto lens. Again, the angle of view cropping in-- or not cropping in, but we'll run this again. And now it's really soft. Incredibly soft. And this is the difference between them. And I would say that this starting softness out here is not quite as crisp as it was in the earlier two iterations of this, but that is much, much softer. So that's adjusting the aperture in three different situations. One on a wide angle lens, one on a normal lens, and one on a telephoto lens. And by opening up the aperture, making the opening larger, we shrink the depth of field in all three of those cases. It's interesting to see how it's different for wide, normal, and tele. And we'll come back to that in just a second. So one of the other factors that affects depth of field is the distance to the subject. And what I mean by that is where your lens is focused. Generally I'm just going to assume that we're focused on our subject, but perhaps not. But the distance that your lens is focused at, whether it's near or far, changes how deep or shallow the depth of field is. The closer that critical plane of focus is to the film plane, the shallower the depth of field is. So the closer our subject is to us, the shallower the depth of field will be versus when it's further away. It's the inverse. And a rule of thumb for this is doubling the distance quadruples the depth of field, so there's some power law here. Yeah. AUDIENCE: [INAUDIBLE]. IAN: Right? Or halving the distance cuts it by four, so-- DAN: It's dramatic though. IAN: It is. It is. DAN: And I was shooting some macro photography this weekend, and I was at an aperture that was f/5.6 and f/8 and it was so thin. You think of that depth of field as being fairly deep, but it really matters how close you are. With the macro lens you're right up against your subject, and so this really is exaggerated there. IAN: Right, and I think maybe even that image that we looked at that was the green plant with that really narrow depth of field might have been on a macro lens. Let me just go-- what did I do? I went backwards again. So this is a wide angle lens where we're focused at four feet. Look at the background. There's a lot out of focus. We have our little rubber ducky there. And as we play this video-- perhaps. No. And we move the camera further away from our subject, racking focus with our subject, so keeping the focus on our subject. Increasing that distance, you can see that the focus in the background begins to change. That by increasing the distance, the objects behind our subject come into crisper focus. DAN: It's so slow. It's wonderful. It's like-- AUDIENCE: Can you play it again for me? 
IAN: Sure. So if you think back to that duck, originally it was much softer than it is now. AUDIENCE: So you're changing focal length. Are you doing it by literally moving the camera, but keeping the focus on the subject? IAN: Yeah. So what that is doing is essentially changing where the lens is focused, and just moving it like this. As the camera moves backwards it changes that distance. So when you look at the starting position and the end position you really can see how soft it originally was when we were very close to the subject compared to when we moved further back. So this is all, again, a wide angle lens. So now we have a normal lens. We start a little bit more out of focus in the background. The depth of field is a little bit shallower. AUDIENCE: [INAUDIBLE] IAN: So if we do our little comparison again we can see that distance to subject is really driving how much depth of field we have between these two images. And to do our due diligence, here is a telephoto lens. Notice how out of focus our friend the duck is. And as we move through the scale we'll see if we can get him to be in focus. So not quite. Not quite in focus. The depth of field is still shallow enough to keep the duck out of focus. And this is again a telephoto lens. And if we do the comparison between the two we can see that it goes from incredibly out of focus-- almost unrecognizable-- to getting into some sort of shape that we can understand a little bit. What is happening? AUDIENCE: [INAUDIBLE] DAN: Don't steal my thunder. IAN: OK. We'll leave it for a second. So this is what's called a dolly zoom. This is where the camera is moving back as the camera zooms in at the same time. So we maintain the exact same frame over a camera movement. And what you notice is we actually go from a wide angle lens with this frame to a telephoto lens with this frame, and you can see the actual spatial distortions that happen in real time as you change-- OK, one more time. OK. DAN: We'll look at this next week when we talk about video as well. Yeah. IAN: So this is-- I really wanted to show it because it's awesome. And one of the things that-- our final element for affecting depth of field, or controlling depth of field, is this idea that focal length matters. That a longer focal length yields a shallower depth of field. So a 150 or a 250 millimeter lens on a full frame camera, a telephoto lens, has a shallower depth of field than a wide angle lens, which yields a deeper depth of field. And I think we saw that when we looked at how different each of the elements were when they were in wide, normal, and tele. The wide angle lenses had more in focus in the background than the telephoto ones. AUDIENCE: Regardless of the aperture? IAN: Yeah, regardless of the aperture. Well, yes. So when we were changing aperture-- If we go-- let's-- whoops. Let's go back here. So we were changing aperture here. And you can see the effect that it has on depth of field. We didn't change focal length and we didn't change focus distance. When we go to here and we change-- let's do this one. We don't change aperture. We don't change focal length. This is a wider angle of view. This is the camera backing up. This is just the distance to the subject. And you can see the difference between the elements. But that is to say that this image compared to this image are two different things-- this is us changing focal length, while all the other elements stay the same. We still shifted over distance, but this is its own unit. 
So your rule of thumb for this is halving the focal length quadruples the depth of field. Doubling the focal length cuts it to a quarter. That might be helpful for you in the field if you want to-- like, I need much more depth of field. A little rule of thumb to help you get there. The interesting thing, though, is that in this comparison, when we compare the beginning and the end, is the depth of field different? And I think maybe we'll do this side by side here. So this is the dolly zoom that we did. We started with a wide angle lens very close to the subject. And we didn't change the aperture. We did change the focus distance, but then we moved the camera to a telephoto lens further from the subject. So we increased the focal length, but at the same time we also increased the distance to the subject. And if you look at this it's sort of apparent, but it's difficult because we maybe don't see this, but the parts that are out of focus are sort of similarly out of focus in both of these positions. So what that means is that as the focal length pushed the depth of field smaller and smaller, the distance to the subject pushed it larger and larger, and they sort of offset each other. Remember that our rules of thumb were: doubling the focal length cuts the depth of field to a quarter, and doubling the distance quadruples it. And so those two values actually offset each other. So for the same frame, focal length doesn't do a lot to adjust your depth of field, because you have to change the focus distance. Does that make sense? And they actually offset each other. So if you-- let's draw it. Maybe that's the easiest way. So if we have a frame here and we have a frame here. And they're the same frame, but for this one the camera is very close, and for this one the camera is very far. And for this one we're at a wide, and for this one we're at a tele. We've zoomed the lens in-- this is exactly what a dolly zoom is. It starts close. It moves this way, and as it moves we zoom the camera in. So what we've done is this focal length is getting larger, which we know produces a shallower depth of field. We're going from a wide to a tele. And this focus distance, the distance from here to here, is getting larger, which we know increases the depth of field, so it makes it deeper. And in doing so they essentially cancel each other out, roughly. AUDIENCE: And that's what gives that whole effect [INAUDIBLE] IAN: Yeah, so really what we're only seeing is the spatial relationships of the foreground and background doing that sort of expansion, compression trick, rather than large portions of the image coming in and out of focus as the depth of field shifts throughout that entire element. So that's just a little aside. That's like a hiccup where-- focal length really matters for depth of field if you don't move the camera. Because obviously, once you start to move the camera, it begins to do less, because you change your focus distance along with it. Are there questions on depth of field before we move on a little further? So one of the other tools in your arsenal besides a light meter is this histogram, which is basically a plot. It shows the distribution of brightness values of any given image. It can also display the distribution of color values. But for our purposes right now we're going to look at it as a luminosity scale. And all this says is that-- it reads left to right. 
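Backing up to those depth-of-field rules for a moment, here is a rough numerical sketch of why the two effects offset each other in a dolly zoom, using the same thin-lens approximation as before (DoF ≈ 2 × N × c × s² / f²). The focal lengths and distances are made-up illustrative values, not the ones used in the demo.

def approx_dof(f_number, focal_length_mm, focus_distance_mm, coc_mm=0.03):
    # Doubling the focus distance multiplies this by 4; doubling the focal length divides it by 4.
    return 2 * f_number * coc_mm * focus_distance_mm**2 / focal_length_mm**2

wide_and_close = approx_dof(5.6, 25, 1000)  # wide lens, about 1 m from the subject
tele_and_far = approx_dof(5.6, 100, 4000)   # 4x the focal length, 4x the distance, same framing
print(wide_and_close, tele_and_far)         # both come out around 540 mm -- the effects cancel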
So this is black, and this is white, and everywhere in between is some midtone of brightness. And it just shows you how much of your image is falling in certain areas. And so if this is full black, that means this area is probably our shadows. If this is full white, this area is probably our highlights, and somewhere in here is our midtones. So in looking at this histogram we can see that there's a lot of data on the dark side of this, with a big spike of white light at the top. And there's sort of nothing in the delicate highlights or midtones. So this is a histogram. What do you think this is a histogram of? AUDIENCE: [INAUDIBLE] IAN: Yeah. So this is a histogram of this. Just plain white. Everything is just jammed up on that end. So this should warn you: when you start to see your images sort of jam up towards one end-- towards the right hand side-- you're going to end up with something that's very bright. What about this one? AUDIENCE: [INAUDIBLE] IAN: Yeah, it's this. It's just this. Just black. Inky blackness. How about that one? AUDIENCE: Is it 18%? IAN: Yeah, this is middle gray. It's this image right here. Here's one for you. What's this? It's a lot of everything. AUDIENCE: It's going to be, probably, [INAUDIBLE] highlight [INAUDIBLE]. IAN: Yeah. Maybe. AUDIENCE: [INAUDIBLE] IAN: Yeah. It's actually just this gradient. It's equal parts of every sort of element, and it renders a very flat histogram. So what's interesting about that is this flat histogram tells you that there's a completely even distribution of tones, which sort of suggests that that's exactly what this is. It's about as even a distribution of tones as you can get in any kind of image. DAN: Well, [INAUDIBLE], just to hammer home the point of how to read this, this gradient is on the left black, on the right white, which is how a histogram represents its luminosity as well. But if you were to reverse this image, the histogram does not read left to right like an image does. It would be the same histogram. IAN: Yes. Yeah. Actually that-- yeah, exactly. 'Cause it's just basically saying that there is a certain number of dark pixels in this image and a certain number of light pixels in this image. It doesn't care where they are. Just there's this much value. So let's go back to the histogram that we saw before. What do you think this is a histogram for? Keeps coming back around. It's like a boomerang. AUDIENCE: [INAUDIBLE] DAN: Alec says an outside shot. IAN: An outside shot. That's sort of interesting. Why might we say that? DAN: Alec, also feel free to unmute and just shout it out. IAN: Yeah, we can hear you. AUDIENCE: My guess looking at that would be like your outside shot that you had, 'cause there was so much shadow detail from-- like in the trees and everything. IAN: Yeah, totally. It's the same shot. It just keeps coming back. AUDIENCE: You've gotta make points. IAN: Yeah, that's nice. So there's not a lot of highlight value in this image. There's a little bit of that snow, but most of it is this dark tree value, which is pushing the majority of our tones into this shadow area, but we see a pretty decent distribution of tones, and we have some highlight values, and we have some shadow values. DAN: And the very rightmost pixel here, on the right side-- we also have this indicator up here. And it looks different in different software, but this is the clipping indicator, meaning that something is fully overexposed, meaning that it's true white. IAN: Yeah, and honestly it's the middle section in here. 
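A minimal sketch of what a luminosity histogram actually is: a count of how many pixels land in each brightness bucket, with no regard for where those pixels sit in the frame, which is why a reversed image produces the same histogram. This assumes an 8-bit grayscale image stored as a NumPy array; the random image is just a stand-in.

import numpy as np

image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in for a real photo

counts, bin_edges = np.histogram(image, bins=256, range=(0, 256))
# counts[0] is the number of pure-black pixels, counts[255] the number of pure-white ones.

# Reversing the image moves pixels around but doesn't change how many of each
# brightness there are, so the histogram comes out identical.
flipped_counts, _ = np.histogram(np.fliplr(image), bins=256, range=(0, 256))
assert np.array_equal(counts, flipped_counts)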
Like there's some small amount of that middle section that's clipped out pretty hard. AUDIENCE: So when you say clipping, that means like there's like [INAUDIBLE]? IAN: Yeah. So as you try to render tones in an image, you can essentially push up against the maximum value. It's so bright that it hits the maximum recordable value for the camera, and then there's just no more data to record, so it just sits at max. AUDIENCE: [INAUDIBLE] IAN: It's at full, which with clipping overexposure will usually show up as white, because it's maximum red, maximum blue, maximum green. It's just pinned at the top. AUDIENCE: [INAUDIBLE] IAN: Yeah, exactly. And the opposite is where it is just crushed down to that black. And that's what those two extreme examples of the white histogram and the black histogram were. Where all the values were just pinned at either the left or the right. Either the maximum highlight value or the minimum darkness value. And once you clip, you can't get that information back. It's lost. There's no way to bring that back. If you try to do any correction on that and bring the tone down, it's just going to shift into grayness, 'cause there's no detail. It's just bright white. It's a flat white field, and if you bring it down it will just be a flat gray field, or you go all the way down to a flat black. So if we look at this histogram here, which is the idea of a low-key histogram, it's pushed to which side? AUDIENCE: The left. IAN: Which is? AUDIENCE: Shadows. IAN: Shadows. And we've got some midtones. Not really a lot. AUDIENCE: Very little. IAN: Yeah. And we have some highlights-- AUDIENCE: And there is a peak of white. IAN: What's that? AUDIENCE: There is a peak of white somewhere in there as well. IAN: Yeah, there is a little bit of brightness. You can see it ticks up just at the end. Just a little bit there. So what do we imagine this image looks like? DAN: [INAUDIBLE] says, underexposed. AUDIENCE: A white dot on a dark wall? IAN: A white dot on a dark wall? Yeah, something like that. Yeah. So there's some small amount of brightness in a large dark field. That was a pretty good read. This is probably our white values. There's the tiny bit of midtones that we were getting, and the rest of it is falling off into these very distinctive shadow details. So you might have looked at-- if you look at this histogram, like all tools, it can be fooled. This was an image intentionally exposed to have all of the values-- to have this be dark. To have everything pushed to the left, because that's the sort of composition that this photographer was going for. But if you were just looking at a histogram you'd be like, no, that's not right. I'm looking for an even distribution of midtones. And I think you'll find a lot of people suggest that that's the correct way, but it's not always. It does matter what your intention is. So the flip side of this is a high-key histogram. It's pushed to which side? AUDIENCE: The highlights. IAN: The highlights. DAN: And just to say plainly, we're looking at an overlay of several histograms here. Different color channels. And then the gray one represents the luminosity. IAN: Right. DAN: But that just depends on which software you're using and what options you have turned on. IAN: Yeah, and so you can get a variety of different scopes. And actually that's a good point. So the earlier version that we were looking at was the histogram from Photoshop, and this is luminosity and color, like a compound histogram, from Lightroom. 
And so you have different options that you can turn on and turn off. And sometimes, for certain images, having the color on shows you a cast or a skew that might be in there, especially if you did not set your color temperature. So what does this image look like? AUDIENCE: Bright. IAN: Bright. Right. I think we can safely assume that. There's not a lot in here that is actually a deep dark color-- dark tone. It's mostly whites. Bright sky highlights. There's some grays in there. There's not even really a solid black. Maybe a little bit of shadow detail in there. And so when we look at the histogram you really can see this. And so again we're not using an even distribution, but we understand that this is correct for our subject. Sweet. So putting it all together. Here is an image that is overexposed. This image is mostly white. The values up here are completely clipping. It's solid white, but it's OK because it's the point of the image. So this is an intentionally overexposed image. We could meter this and the camera might try to tell us to make this a middle gray, because it thinks that's what we want, but we know we're smarter than it. So we're going to increase the exposure so that pushes up to white and in fact clips. And we get an interesting shot of these sunglasses. Solid marketing. So then the flip side is this: intentional underexposure, where we've decided to not expose for this value, but actually for this white value, and give ourselves a silhouette. DAN: Do you have a histogram for this one? Like what does this histogram look like? IAN: I don't. I could get it in a second, but I don't. I don't have it. DAN: It's split though, right? 'Cause we have-- IAN: Yeah, it is split. DAN: --very little in the middle, 'cause it's almost all down at the bottom, because the histogram represents 100% of the pixels in the image. Most of them are in the dark, so that's going to be where our biggest mountain and the highest peaks are. And then we'll also have a big spike up on the right, because the white screen in that image was almost fully overexposed. So histograms, I think-- most cameras have histograms on them when you're shooting. If you pull up your digital screen and push the button to pull up the display, you can cycle through different overlays on your screen. And so a histogram is an option, but I find that they're much more useful in post-production. When you're actually shooting, I think the thing you typically want to look at is your light meter, to know if you're getting a good exposure or not. And obviously you'll either intentionally expose over or under, but at the end of the day the histogram is helpful when you want to look at overall trends once you get to post-production. And really to check if you are clipping any information at the highlights or in the shadows. I think that's really where the histogram is best served. IAN: Yeah, and I think actually, I also tend to check the histogram early when I'm shooting, but when I'm pushing exposure in an image like this, or an image where it's really bright and I'm going to maybe push up against overexposure or clipping, I will look at the histogram at that moment. When I know that I'm compensating and I want to make an image brighter, and I want to push it up towards that bright white value, I want to make sure that I don't clip, because I can't get that information back. So I want to get as close to it as I can while still maintaining some detail in the image. 
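A toy illustration of that last point about clipping, assuming simple 8-bit-style values: once a tone is pinned at the maximum the camera can record, every brighter original tone maps to the same number, so pulling the exposure down afterwards gives flat gray rather than recovered detail.

import numpy as np

scene = np.array([200.0, 240.0, 280.0, 350.0, 500.0])  # "true" scene brightness values
recorded = np.clip(scene, 0, 255)  # the camera can record at most 255
# recorded -> [200. 240. 255. 255. 255.]: the three brightest tones are now identical

darkened = recorded * 0.5          # pulling exposure down in post
# darkened -> [100. 120. 127.5 127.5 127.5]: the clipped tones stay identical, just grayer
print(recorded, darkened)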
So the histogram is a really good tool for when you begin to experiment with pushing and pulling your exposure away from what the light meter is telling you. At the end of the day, for 90% of the scenes that you photograph, the light meter is going to do an amazing job of calculating a good exposure for you. AUDIENCE: It'll get you really close. IAN: Yeah, it'll get you really close, but you then have to make a decision. Do I accept this, or do I push one way or the other? DAN: And the other thing I'll just say, since we're talking about histograms, is the useful thing-- the indicator that popped up. In post-- and we haven't covered Lightroom in this class-- but if you're using something like Lightroom or Photoshop, typically if you hover over the indicator that you're overexposed, it'll show you an overlay on the screen of which portion of your image is clipping. And it's just helpful to get a read on exactly which part of the image is over or underexposed. So I think that's the tail end of our conversation about exposure. I want to stop for a minute and see if there are questions from anyone in the audience. What questions do you have? AUDIENCE: I have a question. IAN: Yeah, go for it. AUDIENCE: When you introduced histograms, Dan just briefly mentioned this, that we can see a histogram in our viewfinder or on the screen in our cameras? IAN: You can in a large number of cameras. I think this is now turned off. So let me just fire up this camera. We can take a look. So this is the output of the 5D, and it's currently shooting this bright white wall. And you can see that the histogram is pinned, except when I walk in front of it, right to the right hand side. But if we do something like maybe introduce another tone-- again, this is overexposed-- we can see that the histogram starts to move-- this is a darker tone-- in real time for what it is that it's seeing. So in this image I would look at this exposure and I would know that if what I want is a bright white field, I have successfully exposed this image. But if I don't want that-- if I want some kind of detail-- I'm going to adjust some values. Like maybe I'll close the aperture down, which you can see I'm doing on the bottom. And I'll change the exposure so that now I don't have a bright white field, but I have a middle gray field. And I know that it's rendering as middle gray because right in the middle of the histogram there's this giant peak. I think-- can you push the talk button? AUDIENCE: Sorry. IAN: That's all right. AUDIENCE: So I assume you have to be on a manual mode in order for this-- to see a histogram and alter all of these different-- IAN: It's actually just one of the info features. I can actually turn off all of that clutter. If I press it again I get it without the histogram. I get some extra data. And then I can turn the histogram on. So your camera may or may not have this feature. Most do at this point. So there should be either a way to turn it on in the menu or a button that allows you to turn this on, so you can get a sense of what you're exposing for. DAN: And Lorna, your camera may not have it in automatic mode. You might need to be in a different camera mode. And we did record a short video on camera modes, so check that out after this lecture if you have more questions. And if I can just speak experientially for a second about histograms. 
Like I said, I think when you're going out to shoot, I don't find them all that useful when I'm shooting. But the times that they are useful-- if you're shooting a bright sun and can't quite get a read on your screen and you want to know if you're overexposed on your highlights, it's really helpful in that moment. But for the most part, when I go out and shoot, I am using exclusively the camera meter. IAN: Yeah. And I think that's a good point. Just like it's sometimes very hard to assess your focus on a very tiny monitor or through a viewfinder-- it can be difficult-- it can also be very difficult to see an LCD screen in bright sunlight, which is I think what you're talking about. And the histogram sort of proves to you what is happening in the image, because it's not based on some visual cue. It's based on the actual data in the field. So again, there's a video on light metering more generally and how to trick and fool your light meter, and also one on camera modes, which I would encourage you to watch for the next assignment, because there's a couple-- we're going to ask you to experiment with the different elements of exposure. And you can use either aperture priority mode or shutter priority mode to help you play with that, as well as full manual if you're feeling brave and adventurous, which I encourage you to feel at this point. Any final questions or parting thoughts? Well, we'll stick around for a few minutes, so let us know, but thank you all very much, and we'll see you next week.