What is CGI?
In very simple terms, CGI (or computer-generated imagery) allows us to model a real-world object or scene using computer tools, light that object or scene (as one would in the real world) and then render it out as an image or sequence of images. The software uses the physical properties assigned to the object to calculate how light falls upon it, is reflected off it or absorbed by it. The upshot is a very realistic representation of the object – like a photograph, in fact.
When Steven Spielberg took a punt and handed a good chunk of the special effects budget of Jurassic Park over to Industrial Light & Magic’s computers in 1993, it was a gamble. And boy did it pay off. Audiences reeled at the realistic T-Rex chasing Jeff Goldblum, Sam Neill and Laura Dern. The genie was well and truly out of the bottle, and virtually every production made these days has some form of CGI associated with it. There’s the obvious bending of reality in films like Inception and Doctor Strange, through to the much more subtle green-screening in UK costume dramas like Downton Abbey and Gentleman Jack. Then, of course, there are the Disney Pixar movies that are entirely computer generated, with no real actors or cameras.
But CGI is no longer limited to the mighty server farms of ILM. It’s available to everyone in the form of powerful desktop applications like Blender 3D, Cinema 4D, Maya and many others (some of which are even free to use!). Whilst most desktop PCs don’t have the grunt to produce full-length feature films, they are perfectly capable of creating stills and short video clips, which is useful to us product photographers.
How’s it all done?
There are, broadly speaking, four main tasks in the CGI workflow when it comes to creating an image. These are:
Step 1: Modelling
The modelling phase is the process of creating an object from a series of interconnected faces in 3-dimensional space. Each face is bounded by three or more edges, and each edge joins two vertices. The software provides simple ways to move, scale or rotate faces, edges and vertices, and to add new ones. Further tools allow for extruding and shearing, splitting and joining.
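That vertex–edge–face structure can be sketched in a few lines of Python. This is a toy illustration of the idea for a simple cube, not the internal format of Blender or any other package:

```python
# A toy illustration of how a polygon mesh is stored: vertices as points
# in 3-D space, faces as loops of vertex indices, and edges derived from
# the face loops. Not any particular package's actual data structures.

# Eight corner vertices of a unit cube, as (x, y, z) coordinates.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Six quad faces, each listed as four vertex indices in loop order.
faces = [
    (0, 1, 3, 2), (4, 5, 7, 6),  # left / right
    (0, 1, 5, 4), (2, 3, 7, 6),  # bottom / top
    (0, 2, 6, 4), (1, 3, 7, 5),  # front / back
]

def edges_of(faces):
    """Collect the unique edges bounding each face loop."""
    edges = set()
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            edges.add(tuple(sorted((a, b))))
    return edges

edges = edges_of(faces)
print(len(vertices), len(edges), len(faces))  # a cube: 8 vertices, 12 edges, 6 faces
```

Everything the modelling tools do – extrude, split, join – boils down to adding, removing and moving entries in structures like these.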
You can start from scratch, but more often you begin from a basic stock object. A number of simple “starter objects” are provided to get the ball rolling, e.g. a cube or a sphere, a cylinder or a plane. From there we stretch and split and add and cut to build up the basic shape we’re looking for. In the example shown here, I started to create a Haig Club Whisky bottle.
The modelling phase is often quick when regular shapes are involved. There are additional tools for creating complex features like hair, particles or liquids.
Once we have the basic mesh in place, the software can automatically add more topology to our object – subdividing it to add more edges and faces, which gives it some solidity and allows us to simulate curves, fine detail and so on. These are “behind the scenes” features generated dynamically by the software – we don’t need to add or move them ourselves. In some cases there are millions of faces, and keeping them all in line by hand would be unmanageable.
We do have some control over where the software adds this extra meshiness (no, that wasn’t a Sean Connery impression). It will only add extra edges between the edges we ourselves have already placed, so we can control the sharpness or roundedness of corners, for example.
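As a rough illustration of why the face count explodes into the millions: a common subdivision scheme splits every quad face into four at each level, so even a humble six-faced cube balloons quickly. The level counts below are illustrative only:

```python
def subdivided_face_count(base_faces: int, levels: int) -> int:
    """Each subdivision level splits every quad face into four,
    so the face count grows by a factor of 4 per level."""
    return base_faces * 4 ** levels

# A 6-face cube after a few levels of subdivision:
print(subdivided_face_count(6, 6))   # 24576 faces from just six quads
print(subdivided_face_count(6, 10))  # 6291456 - into the millions
```

This is why the software manages the subdivided geometry dynamically and only asks us to position the coarse “control” edges.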
Step 2: Texturing
Now the thing really starts to come to life. We start by assigning attributes like transparency, colour, roughness and reflectiveness to the object’s faces. Some objects have a kind of milky look where the surface is one colour (or transparent) but a secondary colour seems to come from within – an effect known as subsurface scattering. We can simulate that too. Often, we use images to add texture like wood, rust, ceramic and so on. These images can even displace our mesh to make the surface appropriately “bumpy”.
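As a sketch of the kind of attributes involved – the field names below loosely echo the inputs of Blender’s Principled BSDF shader, but this is an illustration, not any renderer’s actual API:

```python
from dataclasses import dataclass

# An illustrative bundle of surface attributes. Field names loosely
# follow Blender's Principled BSDF inputs; the values here are made up.
@dataclass
class Material:
    base_colour: tuple         # RGB, each channel 0.0-1.0
    roughness: float = 0.5     # 0 = mirror-smooth, 1 = fully diffuse
    transmission: float = 0.0  # 0 = opaque, 1 = clear glass
    subsurface: float = 0.0    # the "milky", lit-from-within look

# A plausible (hypothetical) setting for amber whisky-bottle glass:
glass = Material(base_colour=(0.9, 0.75, 0.4),
                 roughness=0.05,
                 transmission=0.95)
print(glass)
```

Texturing is largely the craft of choosing and layering values like these until the surface reads as real.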
Some objects are hollow, like this glass bottle, and others are solid (or volumetric), like the liquid within. These “solids” can be clear like water, or can absorb more light the deeper it travels into them. It’s all very mathematical and clever, and I couldn’t pretend to know exactly how it’s all calculated, but the parameters are all at our fingertips in the software.
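That depth-dependent absorption is typically modelled with the Beer–Lambert law: the further light travels through a medium, the smaller the fraction that survives. A sketch, with made-up absorption numbers:

```python
import math

def transmittance(absorption_coeff: float, depth: float) -> float:
    """Beer-Lambert law: the fraction of light surviving a journey of
    `depth` units through a medium with the given absorption coefficient."""
    return math.exp(-absorption_coeff * depth)

# Illustrative numbers only: a shallow vs a deep path through the whisky.
print(transmittance(0.3, 2.0))  # shallow path: most light gets through
print(transmittance(0.3, 8.0))  # deep path: noticeably darker
```

This one little exponential is why the middle of the glass looks richer and darker than the thin edges.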
It’s often the case that, no matter how accurately we try to texture such objects, they don’t look real. Even if we don’t quite know why, our eyes can tell real from fake. So extra steps need to be taken to add layers of “reality” to this process. No surface is perfectly smooth. No edges are perfectly sharp and angular. We have to add imperfection to gain a more realistic image. The addition of smears, scratches and fingerprints adds to the real-worldiness.
Texturing can take a long time to get spot on and consumes the lion’s share of the CGI workflow.
Step 3: Lighting
This step is the one that most resembles that of traditional studio product photography: the positioning and assigning of lights in the scene.
Light and shadow are what give any photograph its power and dimension, so this step is crucial. We have control over where and how we position our lights, how intense (bright) they are, and how hard or soft (diffuse) the shadows they cast are. Many a book has been written on the art of lighting, and there are dedicated teams of people working in the movies (headed up by the gaffer and the best boy) whose job is to achieve the mood and feel the director of photography wants. So this one small section of this one small blog post is unlikely to cover all the intricacies of how to light a scene.
However, we always start with one light. We use it to illuminate one aspect of our scene the way we want it before adding the next light. Every light has a job – we don’t just chuck a load of lights in and hope for the best.
For larger scenes, lighting can take some time to get right. Because, whilst there can be “wrongs”, there generally isn’t one “right”. It’s all about the aesthetic and the achievement of mood.
Step 4: Rendering
Rendering is the final step of our process. It is what produces our image (or series of images), which was the whole reason we embarked on this journey.
With an eye on our composition (which we already thought about, because it contributed to the positioning of our lights) we now place a pseudo-camera in the scene. We set its focal length and its aperture, and we are good to go. Hit “Render”.
And the software embarks on a sometimes long and slow journey of producing an image. For each pixel in the image, the software traces in reverse all of the rays of light that hit that spot – the objects they bounced off, the substances they travelled through, the refraction that occurred along the way – just to end up with… a colour and a brightness. Then, on to the next pixel. That is highly simplified, but in broad terms that is what goes on.
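That per-pixel reverse trace can be sketched as a toy ray tracer: one sphere, one light, no bounces or refraction. Real renderers do vastly more per pixel, but the question each pixel asks is the same – “what colour arrives here?”:

```python
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the distance along the ray to the sphere, or None on a miss.
    Assumes `direction` is unit length."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def pixel_colour(x, y, width, height):
    """Trace one ray from a camera at the origin through pixel (x, y)."""
    # Map the pixel to a direction through an image plane at z = -1.
    dx = (x + 0.5) / width - 0.5
    dy = 0.5 - (y + 0.5) / height
    length = math.sqrt(dx * dx + dy * dy + 1)
    direction = (dx / length, dy / length, -1 / length)
    # One unit sphere sitting three units in front of the camera.
    t = ray_sphere_hit((0, 0, 0), direction, (0, 0, -3), 1.0)
    if t is None:
        return (0.1, 0.1, 0.1)  # background grey
    # Shade by how squarely the surface faces a light above the scene.
    hit = [d * t for d in direction]
    normal = [h - c for h, c in zip(hit, (0, 0, -3))]  # radius 1, already unit
    brightness = max(0.0, normal[1])  # dot product with light dir (0, 1, 0)
    return (brightness, brightness * 0.9, brightness * 0.7)

# "Render" a tiny 64x64 image, one traced ray per pixel.
image = [[pixel_colour(x, y, 64, 64) for x in range(64)] for y in range(64)]
```

Multiply this by millions of pixels, many rays per pixel and many bounces per ray, and you can see where the render time goes.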
It takes a while. It’s precisely the reason Industrial Light & Magic have buildings full of super-computers to do it: a movie is 24 of these images every second, for the roughly two hours of your average feature.
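The arithmetic behind that is sobering. Assuming 24 frames per second, a two-hour film and an illustrative (entirely made-up) 10 minutes of render time per frame on a single machine:

```python
# Back-of-the-envelope render budget for a feature film.
frames_per_second = 24
movie_minutes = 120           # roughly a two-hour feature
render_minutes_per_frame = 10  # illustrative figure, not a real benchmark

total_frames = frames_per_second * movie_minutes * 60
print(total_frames)  # 172800 individual rendered images

hours_on_one_machine = total_frames * render_minutes_per_frame / 60
print(hours_on_one_machine)  # 28800 hours - over three years non-stop
```

Hence the render farms: spread those frames over thousands of machines and years collapse into days.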
Why bother with all this effort?
The image I created above could easily be achieved with a camera and some lights.
Or could we?
Those plinths that the bottle and the glass are sat on, the ones with the gold rim and wooden veneered surfaces… I made them up. They don’t exist anywhere on planet Earth. OK, that’s slightly cheating – we could still achieve a very similar photograph if we just replaced those plinths with something similar. But do we want to?
There are three situations (probably more) where CGI trumps traditional photography.
The plus points
1. Achieving the physically impossible
Like the plinths above, there are things we can do with CGI that would be impossible (or at least prohibitively difficult) to achieve with a camera. In the real world, the sheer presence of gravity causes us problems. Lights are mounted inside softboxes, behind diffusers and hanging from booms. Everything gets in the way. Photographing scenes made up of smaller objects can turn my studio into a forest of metal supports. No such problem with CGI: the lights float majestically in thin air, and can in fact be invisible – only the light they produce can be seen. This simple fact allows us to do all sorts of things we wouldn’t otherwise be able to do.
In the scene above, we could have a camera fly (like a drone might) past that gold bottle stopper, round the back of it and right on through the glass of whisky. How would we do that in the real world?
2. What-if analysis
What if the Haig Club Whisky people woke up one morning and decided to make their bottles orange? They would likely want to see what that prospect looked like before ever committing to production. That can be done in seconds with CGI. With the above image already created, all we have to do is tweak the shade we’ve given the glass, run another render and, hey presto, that’s what your orange bottle might look like.
3. Object re-use
This is the big win for those who decide to go down the CGI route. Not only can we create the image without ever having the real object in our sweaty mitts, but once the object is built, we can create a limitless number of scenes using it. You want Haig Club in a gentleman’s club scene? You got it. Another in a hotel bedroom? No problem. What was that? You want Haig Club on a snowy mountain top? OK, we can do that.
And we don’t have to re-model the bottle ever again (unless it changes).
So traditional photography is dead, right?
Whoah, slow down thar! We never said that. There is no doubt that CGI is here to stay and is incredibly useful in certain situations. The Swedish giant IKEA use CGI in 80% of their kitchen images – they have embraced it wholeheartedly.
But it doesn’t suit everything. A bunch of flowers, or a plate of food from the finest Michelin-starred restaurant, needs a skilled photographer – CGI’ing that would be a nonsense. Some objects are just prohibitively difficult to model: some are topologically complex, and others could require days of work to make them realistic enough to be usable. In other situations, traditional photography is a volume job. For e-commerce sites it’s a shotgun approach – objects are loaded into a light box, click, next. Clearly that would not lend itself to CGI.
The cost of producing a CGI image is all up-front: the modelling and texturing require time and skill. A single image would not be commercially viable unless the object was of sufficient value to recoup that investment pretty quickly. But when you add a year’s worth of social media images (all different, unique and dramatic) and refresh your website with all-new images as often as you like, it really starts to pay dividends.
CGI is here to stay. Traditional photography is here to stay too. Don’t let anyone tell you otherwise. My customers all get the option of CGI, but it’s part of a due diligence process to offer the most appropriate service and deliver the best ROI for them.
Want to know more about how Skyhawk Pictures can help you show off your product offerings to get more customers? Get in touch – the links are at the top and bottom of this page.