6 Feb ’14 · 3d, Visual effects
So here’s a quick run-through of the process I normally use when sampling scenes in vray. It’s based on 2.4, but it’s the same process of using overrides and render elements to check whether each aspect of the image is clean at each stage. The scene isn’t meant to be a pretty picture or a finished piece in the end, it’s just a run-through of a fairly typical, and also quite nasty, scenario as regards sampling of lights and materials in vray.
Special thanks to Toni Bratincevic for his original article at http://interstation3d.com/tutorials/vray_dmc_sampler/demistyfing_dmc.html and both Vlado and Svetlozar Draganov on the vray support forums for answering all my dumb questions over the years!
22 Dec ’10 · 3d, Photography
Right, so when you’re taking a photograph you’re exposing a piece of film (or a digital sensor) to a certain amount of light for a certain amount of time to get the correct exposure. You’ve got three different settings that control how much light gets onto the film: the aperture, or how wide the hole in the camera lens opens; the shutter speed, or how long the film stays exposed to the light; and the ISO, which is the sensitivity of the film to light – the more sensitive, the quicker it exposes. Here’s a slightly better breakdown of the three:
The aperture on the camera, at its most simple, is a hole that controls how much light is let in. The wider the hole, the more light floods in – similar to gradually opening the door into a pitch-black room to let the light from an adjoining room flood in. The more light allowed in at once, the quicker your image exposes. The other thing is that the wider the hole is open, the harder it is to keep everything in focus at the same time. The area you’re focused on will be perfectly sharp, but anything closer to the camera or further away from it will be out of focus. Sometimes this is something we actually want, since it can be used to draw attention to a particular part of the picture and give a softer, warmer look. Here’s a nasty animated gif showing an aperture closing down:
The shutter speed is literally the amount of time a small shutter that sits in front of the sensor or film is left open for. Again, the longer this is open, the more time light has to shine on the sensor of the camera and the brighter an image you’ll get. The thing is, though, that objects in the real world have a habit of moving, so if you’re using a longer shutter speed you may end up with a blurry result. As an example, imagine how far you can move in 1/500th of a second and how much you can move in 1/2 a second. If you were trying to take a photo of someone running at top speed with the shutter open for 1/2 a second, they’re going to appear as a blur. Again, this is as much of an artistic choice as anything else.
ISO is a fairly boring setting by comparison, especially in cg terms. In photography terms, it’s the sensitivity of the film you put into a camera, or how sensitive a digital camera’s sensor is to light. The higher the number, the more sensitive the film / sensor and the quicker your image will expose. In photography this means lots more grain in the case of film, or noise in the case of digital. It does allow you to take images with less light though, so it’s far handier for night photography, at the expense of image quality. In terms of vray rendering, ISO does the same thing – it makes your render expose quicker – and the good thing is that it doesn’t have an equivalent downside: there’s no extra noise in the image with a high ISO, and since it doesn’t affect the end image the way motion blur from your shutter speed or shallow focus from your aperture do, it’s a handy control for the exposure of the image without messing up anything else. ISO is a little bit like a multiplier for the amount of light coming into the camera.
Let’s take a simple sum to have a look at the relationship between our aperture, shutter speed and ISO and how you can use them to control the end look of your image. Our aperture is how much light we allow in, the shutter speed is how long for, and our ISO is a sensitivity multiplier. Let’s say for example it takes 50 units of light to expose an image properly. We’ve got our ISO set to a multiplier of 1, we set our shutter speed to 2 seconds and our aperture to let in 25 units of light per second, so that gives us 25 x 2 x 1 = 50 units of light. Say for example though our shutter speed is too slow and we’re getting way too much motion blur, so we need to use a shutter of .5 of a second instead. With the same settings as previously we’d get 25 x .5 x 1 = 12.5, which isn’t enough for our image to expose properly. What we need to do here is either use a wider aperture to let in more light or turn up our ISO so the image exposes quicker. If we want to use the aperture to do this, we need it to let in four times as much light, since our shutter is now only open for a quarter of the time it previously was. We then end up with 100 for our aperture x .5 for our shutter x 1 for our ISO = 50 again. With this our picture is going to have less motion blur, but with the aperture being wider we’re going to get more shallow focus in the image. Next let’s say you’ve got a really wide landscape and you want to get as much in focus as possible, which means using a much smaller aperture. Let’s say we need to narrow the aperture down to 5 units of light per second to get everything we want in focus. Of course this is going to let less light in, so it’ll make our picture darker, and again we need to compensate by either making our camera more sensitive with a higher ISO multiplier or using a longer shutter speed so the light has more time to expose. In this case let’s use a longer shutter speed, so we have an aperture of 5 x 10 seconds shutter speed x 1 ISO = 50.
Let’s take a third case where we want really sharp focus but also a short shutter speed to minimize motion blur. This means using a narrower aperture, let’s stick with our 5 units of light per second, and a short shutter speed of say 1/10th of a second, which leaves us with 5 aperture x 0.1 of a second shutter x 1 ISO = 0.5 – quite a way off our 50 units of light needed for a correct exposure. In this case, since we need to use those exact aperture and shutter settings, we’re going to have to use our ISO multiplier to make our camera sensitive enough to get a correct exposure – it’ll have to go to a value of 100 to give us the exposure we want. That leaves us with 5 aperture x 0.1 shutter x 100 ISO = 50. This relationship is one of the main things in getting the depth of field or motion blur you want. Once you’ve gotten the level of exposure / brightness in your render correct, you know that doubling one value means another has to be halved or else your image will be over-exposed. The main irritation is that while shutter speed and ISO use convenient numbers that are double and half of each other, aperture uses a slightly different scale of values which aren’t quite as easy to remember. Here’s a scale for reference where each f number lets in half as much light as the number to its left.
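The sums above can be sketched in a few lines of code. This is just the toy “units of light” arithmetic from the examples, not real photographic exposure units:

```python
# Toy model of the "units of light" arithmetic above (not real
# photographic units): exposure = aperture units/sec x shutter seconds x ISO.
TARGET = 50.0  # units of light needed for a correct exposure

def exposure(aperture_units, shutter_s, iso_mult):
    """Total light reaching the film, in the toy units."""
    return aperture_units * shutter_s * iso_mult

def iso_for_target(aperture_units, shutter_s, target=TARGET):
    """Solve for the ISO multiplier needed to hit the target exposure."""
    return target / (aperture_units * shutter_s)

# The worked examples from the text:
assert exposure(25, 2, 1) == 50        # baseline: 2 second shutter
assert exposure(100, 0.5, 1) == 50     # faster shutter, 4x wider aperture
assert exposure(5, 10, 1) == 50        # narrow aperture, 10 second shutter
assert iso_for_target(5, 0.1) == 100   # narrow aperture + fast shutter
```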
f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, f/45, f/64, f/90, f/128
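The odd-looking numbers in that scale aren’t arbitrary: each full stop multiplies the f number by the square root of 2, because the light admitted scales with the area of the hole (so proportional to 1 / N²). A quick sketch to verify:

```python
import math

# Each full stop multiplies the f number by sqrt(2). The light admitted
# is proportional to the area of the hole, i.e. 1 / N^2, so each step
# along the scale halves the light.
def stop_scale(n_stops, start=1.0):
    """First n_stops full stops: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, ..."""
    return [start * math.sqrt(2) ** i for i in range(n_stops)]

def relative_light(f_number):
    """Light admitted relative to f/1 (an area ratio)."""
    return 1.0 / f_number ** 2

scale = stop_scale(8)
for wider, narrower in zip(scale, scale[1:]):
    # every stop down the scale lets in half as much light
    assert abs(relative_light(narrower) / relative_light(wider) - 0.5) < 1e-9
```

The marked numbers on a real lens (f/1.4, f/2.8, f/5.6, f/11) are just these square-root-of-two multiples rounded off.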
For some hands-on experience, have a play with this simulator to get a good idea of how aperture will affect the look of your render and how it interacts with your shutter speed – http://www.photonhead.com/simcam/shutteraperture.php . A second page does something similar, except it shows how your shutter speed will affect your image from a motion blur / shake perspective – http://www.photonhead.com/simcam/camerashake.php
(Caution – useless trivia. Skip if you’re not arsed about what f numbers are.)
The f number on the camera is actually a fraction – it describes how wide the hole in the front of the lens opens as a fraction of the focal length of the lens. So for example if you’re using a 50mm lens at an f stop of 1, you divide the focal length of the lens by the f number and get 50mm divided by 1, which means the hole at the front of your lens is actually opening up to 50mm in diameter. If you’re set to f2, the hole is opening 50mm / 2 = 25mm wide.
If you’re on a 100mm lens and it’s f4, that means the hole at the front is opening 100mm / 4 = 25mm wide. Again this doesn’t matter a huge amount to cg folks, but to a photographer the handy thing about an f number is that the same f number lets in the same amount of light on every lens. F2 on a 100mm lens is as bright as f2 on a 50mm lens or a 25mm lens.
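That division is trivial to write down, and it reproduces the examples above:

```python
# The f number is focal length divided by aperture diameter, so the
# physical size of the hole is: diameter = focal length / f number.
def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

assert aperture_diameter_mm(50, 1) == 50.0    # 50mm lens at f/1
assert aperture_diameter_mm(50, 2) == 25.0    # the f2 example above
assert aperture_diameter_mm(100, 4) == 25.0   # 100mm lens at f4
```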
You normally don’t see really long lenses with particularly low f values for this reason – if you had a 200mm lens that went down to f1, the front of the lens would have to be able to hold a piece of glass 200mm wide and it’d look fucking ridiculous. Anyway, now you know the reason for the actual number.
(End of useless trivia about fstops)
So pretty much the job of the lens is to focus rays of light in front of the camera onto the sensor or film plane of the camera, which is done by using the various bits of glass in the lens to bend and redirect the light rays. The main thing is that you can only focus on one point at a time, and everything in front of and behind that point will become more out of focus the further it is from it.
Here we have a simple diagram showing a wide open lens focused on a point in space we want to be in focus. The light rays from this point travel to the lens, are redirected through the glass of the lens and focused on our sensor or piece of film.
Here we’ve added in a point in front of and a point behind our original in-focus point. You can see that while the rays from our original object are still converging on our sensor, the rays from the closer and further points are converging in front of and behind the sensor, so they are out of focus and won’t appear sharp in our final image.
Here we’ve added in an aperture to our lens and set it to open up less than before. In this example you can see that there’s a smaller hole for the light rays to get through. The important thing is that the rays that do get through are more converged than before. This means that the points that were previously out of focus are still out of focus, but because the aperture is only letting in rays that are more converged, they will appear slightly sharper than before. The other side of this is that since we’ve blocked off part of the lens, less light can get in, so it’ll be a darker picture / render.
Here we’ve made our aperture tiny (around f/11) and you can see that most of the light rays coming at the lens are being blocked from getting in, but the ones that do get in are really tightly converged. This means that nearly everything is going to look like it’s in focus in our picture / render, though again the downside is that all the light getting blocked out means a much darker picture. From a vray point of view this doesn’t affect us all that much, since we can put in f numbers and ISO numbers that aren’t possible in real life, but it’s a pain in the ass for photographers.
The last thing to note with focus is that it has a far more dramatic effect on objects close to the camera than on far-away objects. Here you can see that we have a close-up object and the rays of light that get through the aperture are totally spread out when they hit the sensor of the camera. The rays coming from our in-focus point (3), and the two points closer to the lens, 4 and 5, spread out a huge amount before they hit the sensor, and there’s also a huge visual difference in the amount of defocussing the three points get, despite being very close to each other. On the far side, you’d expect a very large amount of variation in focus between points 1, 2 and 3, but since points 1 and 2 are so far away in the distance, the rays from those points that actually get into the lens and hit the sensor have a lot more distance to converge over, and the optical effect is that there isn’t as dramatic a difference in the amount of defocussing, despite points 1, 2 and 3 being much further from each other than points 3, 4 and 5.
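If you want to put rough numbers on this, the standard thin-lens blur-circle approximation gives the diameter of the blur for a point at a given distance. This is a sketch of the general optics rather than anything vray-specific, and the lens values and distances here are made-up examples:

```python
# Standard thin-lens blur-circle approximation (all distances in mm):
# the blur diameter for a point at distance s2, with the lens focused
# at s1, is  c = (f / N) * |s2 - s1| / s2 * f / (s1 - f).
def blur_diameter_mm(focal_mm, f_number, focus_mm, subject_mm):
    aperture = focal_mm / f_number  # physical hole size
    return (aperture * abs(subject_mm - focus_mm) / subject_mm
            * focal_mm / (focus_mm - focal_mm))

# Made-up example: a 50mm lens at f/2, focused at 2 metres.
near = [blur_diameter_mm(50, 2, 2000, d) for d in (1500, 1000, 500)]
far = [blur_diameter_mm(50, 2, 2000, d) for d in (5000, 10000, 100000)]

# Blur grows quickly as points move in front of the focus plane, while
# distant points all crowd towards the same maximum blur size.
assert near[2] - near[0] > 4 * (far[2] - far[0])
```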
In terms of vray you’ve got some distinct advantages. One is that you can use pretty much any value you want – you can use ridiculous f values like 0.1 if you want impossibly shallow focus, and turning up the ISO to huge amounts isn’t going to give you lots of grain like high ISO film or digital sensors would. Still, you might end up doing a spot of fiddling trying to get the exact amount of depth of field you want without completely blowing out your image. If you’re rendering for print or doing still images then you’ve got lots of freedom. Most commonly you’ll be adjusting depth of field rather than motion blur, so you can happily adjust your aperture first and use any other value to bring the exposure to the level you want.
If you’re doing renders for animation work, you’ve got a little less flexibility, since there are optimum settings for your shutter speed to get a “filmic” level of motion blur. Normally this is one divided by twice the frame rate your animation will be played at.
Caution – more boring trivia, skip if you don’t care where filmic levels of motion blur come from.
Where does this value come from? The shutter in a film camera is slightly different from the shutter in a stills camera in that rather than being like a blade or a curtain that drops down and pulls back up over the film / sensor, the shutter is actually a continually turning disc which will rotate in front of the sensor and cover and reveal it. The standard type of shutter for a film camera looks like half of a disc and so half the time it’s covering the film and half the time it’s allowing it to expose. Each frame of film in a camera is fed through so that it’s allowed one full rotation of the shutter.
For NTSC this means 1/(30 x 2) = 1/60th of a second, PAL is 1/(25 x 2) = 1/50th of a second and film or HDTV is 1/(24 x 2) = 1/48th of a second. If you want a slightly more sharp, strobey look such as they had in Saving Private Ryan, multiply your frame rate by 4 instead, so you’ll end up with 1/120th for NTSC, 1/100th for PAL and 1/96th for film / HDTV.
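As a sketch, the filmic shutter speed is just one over (frame rate times shutter factor):

```python
# Filmic motion blur: shutter time = 1 / (frame rate x shutter factor),
# where a factor of 2 matches the standard half-disc film shutter and a
# factor of 4 gives the sharper, strobey look mentioned above.
def filmic_shutter(fps, shutter_factor=2):
    """Shutter time in seconds for a given frame rate."""
    return 1.0 / (fps * shutter_factor)

assert filmic_shutter(30) == 1 / 60      # NTSC
assert filmic_shutter(25) == 1 / 50      # PAL
assert filmic_shutter(24) == 1 / 48      # film / HDTV
assert filmic_shutter(24, 4) == 1 / 96   # Saving Private Ryan look
```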
So with that in mind you’re probably going to use a specific aperture to get the amount of depth of field you want and a specific shutter speed to get a normal or filmic amount of motion blur, which leaves your ISO control to adjust your level of exposure. Thankfully the ISO is by far the most convenient control to use, since it uses regular numbers: doubling your ISO number will double the brightness of your render, halving it will halve your render’s brightness, and so on. The other big benefit is that it won’t affect the look of your render in any way – no change to your motion blur, no change to your depth of field.
21 Nov ’10 · 3d, Commercials, Visual effects
This is one of my favourite old commercials, done around 2005 while working for Screenscene in Dublin. When it was originally shot, the plan was that a lot of the destruction and dirt elements such as the debris would be done practically on set, so a very large engine from an old Messerschmitt airplane was brought on set and various bits of paper, boxes and dirt were thrown in front of it. Unfortunately it didn’t look quite as violent as originally hoped, so it was back to the 3d route to create all of the various bits of destruction in the commercial. The director’s idea was that the wave of rubbish wouldn’t roll over itself like a normal wave; instead it would roll back on itself and tear up from the ground to give it a more violent and unnatural sense. After tracking the shots in SynthEyes, I set about the joyous task of using the terribly slow and unstable RealFlow 2 to generate the main body of the fluid wave. Back in the days of 32 bit computing this meant a tonne of instability, crashing, lost work and general cursing. We ended up with various different meshes of fluid elements which were either used entirely, or only for their alpha channels with their fill replaced with stock footage in the comp later. The various bits of rubbish were made with simple models fed into Particle Flow. The Audi was a model I had made beforehand, and it was animated before being fed through RealFlow for further cursing and the generation of more elements of fluid pouring over the bonnet.