In this discussion I propose a methodology for stereoscopic production — a workflow methodology and a way to look at the stereoscopic pipeline that has not been clearly articulated heretofore. The idea is based on the concept of raw data, so to help the reader understand the idea I’ll give some background.
For many decades cinematography based on 35mm negative has followed a methodology in which the cinematographer bakes in the look of the shot – and hence of the entire film. The cinematographer’s intention, and the intention of the director, becomes an intrinsic part of the negative – the way it is exposed, the filtration, anything the cinematographer can do to give it a certain look. This was the established method before the advent of the digital intermediate; now probably 90% of movies are made using a DI (or are shot electronically). Before digital post the cinematographer, at the most basic level, controlled the look of the movie, and it couldn’t be diddled with afterwards. This is in accordance with an approach to still and motion picture photography that’s been around for years – introduce control at the earliest possible stage. Given a negative that reflected the artistic intention of the cinematographer, only relatively minor timing changes (by today’s standards) could routinely be made in post – although it’s worth noting that certain major changes could also be made in optical printing.
Today, whether you’re shooting on camera negative or with an electronic camera, the goal is to capture well-exposed images with a lot of data – to preserve raw data that maximizes timing flexibility later. In electronic photography, the term “raw” indicates that little or no compression is used and that the data that comes off the camera is preserved in the file – or, for film, that the data that comes off the negative is preserved in the digital transfer, usually from a well-exposed negative shot without filtration. This gives the post-production experts an opportunity to completely control the look of the show. They can add any effect, including effects you could only dream of doing in-camera. (I should note that cinematographers often don’t like this approach, and the ASC has worked diligently to find means to preserve the integrity of their vision.)
I am now going to extend this idea to stereoscopic cinematography and post production.
Today’s post production for stereoscopic work involves two steps: correction of camera errors and what I’ll call “stereo timing”. The stereoscopic image that comes out of a camera has to be corrected for binocular asymmetries to prevent eyestrain or viewer discomfort, and it has to be stereo timed so that the image is appropriate for telling the story.
The timing is almost entirely concerned with setting the zero-parallax position. The cinematographer can have an opinion about the zero-parallax setting (some people call it convergence, and I forgive them) at the time of photography. But the zero-parallax position – usually set at the most important object in the shot, which is also in focus – may have to change when a sequence is edited, to improve the image flow, because the adjacent shots inform the zero-parallax position for each particular shot.
During photography it’s rather difficult to visualize how stereoscopic images will look when projected on a big screen, and even more difficult to visualize how the shots are going to play when cut together. Part of this difficulty comes about because stereoscopic images are scaled by monocular depth cues, and these can greatly add to or subtract from the effect one would suppose had been achieved solely through parallax – which is principally determined by the distance between the camera heads. Thus a flexible system of stereoscopic timing in post is crucial. Right now it is essentially limited to setting the zero-parallax position by laterally shifting the images (which also involves some magnification of the image and some loss of material at the edges). The other thing that can now be done in post is setting the effective stereo window through floating windows.
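To make the lateral-shift operation concrete, here is a minimal sketch in Python (with NumPy) of horizontal image translation: the two views are shifted relative to one another by the parallax of the chosen object, and the columns that no longer overlap are cropped – which is exactly why the operation entails rescaling (magnification) and loss of material at the edges. The function name and the parallax sign convention are my own, not an established API.

```python
import numpy as np

def set_zero_parallax(left, right, parallax_px):
    """Zero out the screen parallax of a chosen feature by horizontal
    image translation (HIT).

    parallax_px is the feature's current parallax in pixels, defined
    here as x_right - x_left (a hypothetical sign convention for this
    sketch). The views are cropped so the feature lands at the same
    column in both eyes; the caller would then rescale the narrower
    frames back to full width, magnifying by w / (w - |parallax_px|).
    """
    w = left.shape[1]
    s = parallax_px
    if s >= 0:
        # Drop s columns from the right of the left view and from the
        # left of the right view, sliding the views toward alignment.
        return left[:, : w - s], right[:, s:]
    else:
        return left[:, -s:], right[:, : w + s]

# A feature at column 7 (left eye) and column 4 (right eye) has
# parallax -3; after the shift both eyes see it at column 4.
left = np.zeros((2, 10)); left[:, 7] = 1.0
right = np.zeros((2, 10)); right[:, 4] = 1.0
l, r = set_zero_parallax(left, right, -3)
```

The narrowing of the frame from 10 to 7 columns in this toy case is the “loss of material at the edges” noted above; blowing the result back up to full width is the accompanying magnification.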
The suggestion I’m making here is a departure from the way stereoscopic post is accomplished at present. The idea is to use a system of raw stereoscopic data so that post can perform another essential correction. In addition to setting the zero-parallax plane, it is important to be able to control the strength of the stereoscopic image. Correcting camera mistakes and setting the zero-parallax plane are important, but given today’s technology I’ll deem them routine.
The ability to set the strength of the stereo image is missing (unless one uses what I will characterize as heroic means and engages in stereo synthesis). The strength of the stereoscopic image is partly a function of extrastereoscopic cues, as noted, but it is to a large extent a function of the distance between the camera heads: the interaxial separation is the major determinant of stereo strength. Once the interaxial separation is baked in, it’s difficult to do anything about it in post. But manipulating the interaxial, or stereo strength, is crucial for a shot’s appearance and for image flow, because the stereographer/cinematographer may not have gotten it right in photography, or because the flow of shots in a scene dictates a change in a shot’s stereo strength.
Right now there is no simple way to alter stereo strength – by which I mean a method for reducing the interaxial with a knob, so that all intermediate effective interaxials can be viewed to help select the best-looking version. Conversion or synthesis may work, but interpolation is required to produce the desired result, and the delays in implementation could make this approach a non-routine one, best reserved for fixing mistakes that are true whoppers.
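As a sketch of what such an interaxial knob would have to do under the hood, the fragment below forward-warps one view along a scaled disparity map to synthesize a virtual camera at fraction k of the original baseline (k = 0.5 simulates half the shot’s interaxial). Everything here is illustrative – in practice the hard part is obtaining the dense disparity map and filling the occlusion holes that warping leaves behind, which is exactly why I call this non-routine.

```python
import numpy as np

def interpolate_view(left, disparity, k):
    """Forward-warp the left view along k * disparity to approximate a
    camera moved to fraction k of the baseline (0 = left, 1 = right).

    `disparity` holds per-pixel parallax in pixels (left-to-right).
    Occlusions and hole-filling are ignored in this sketch.
    """
    h, w = left.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            # Each source pixel slides a fraction k of its full parallax.
            nx = int(round(x + k * disparity[y, x]))
            if 0 <= nx < w:
                out[y, nx] = left[y, x]
    return out

# A pixel with disparity 4 moves by only 2 columns at k = 0.5 - the
# virtual interaxial is half the shot's interaxial.
left = np.zeros((1, 8)); left[0, 2] = 5.0
half = interpolate_view(left, np.full((1, 8), 4.0), 0.5)
```

Sweeping k continuously between 0 and 1 is the “knob”: every intermediate effective interaxial can be previewed before one is chosen.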
An important point is that most of the stereoscopic cinematography accomplished on the set needs to be done with a reduced interaxial separation – that is to say, an interaxial separation less than the interpupillary distance. That’s because you want to reduce the parallax values, especially for background points, so that the image is easy to look at; but also because, especially when close to the subject, the image will appear elongated if you do not reduce the interaxial separation. This dictates the use of beamsplitter cameras, because the minimum interaxial for a forward-looking (side-by-side) camera design is determined by the width of the lens or the camera body, whichever is wider.
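The arithmetic behind this is the standard screen-parallax relation for parallel-axis shooting with convergence set by image shift: parallax is proportional to the interaxial, so halving the interaxial halves every parallax value, including the background’s. A small illustrative calculation follows – the function name, parameter names, and the particular numbers (lens, magnification, distances) are mine, chosen only to show the proportionality.

```python
def screen_parallax_mm(interaxial_mm, focal_mm, magnification,
                       d_zero_mm, d_obj_mm):
    """Screen parallax of a point at distance d_obj_mm when the
    zero-parallax plane is set at d_zero_mm (parallel lens axes,
    convergence by horizontal image shift):

        P = M * f * t * (1/d_zero - 1/d_obj)

    Positive values place the point behind the screen."""
    return (magnification * focal_mm * interaxial_mm
            * (1.0 / d_zero_mm - 1.0 / d_obj_mm))

# 65 mm (roughly interpupillary) vs. a reduced 32.5 mm interaxial:
# 25 mm lens, 300x magnification to the screen, zero parallax at 3 m,
# background at infinity.
p_full = screen_parallax_mm(65.0, 25.0, 300.0, 3000.0, float("inf"))
p_half = screen_parallax_mm(32.5, 25.0, 300.0, 3000.0, float("inf"))
```

With the full 65 mm interaxial the background parallax in this example comes out to 162.5 mm on screen – well beyond the interpupillary distance, forcing the eyes to diverge – while halving the interaxial halves it, which is why reduced interaxials make the image easy to look at.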
As noted, beamsplitter cameras, based on a design by Floyd Ramsdell, are currently used to reduce the interaxial separation. One camera looks directly through the pellicle, and the other sees the reflected image. There are problems with these rigs, one set of which has to do with the fact that you’re shooting through a beamsplitter: it differentially polarizes the two images (which can produce asymmetrical reflections), reduces illumination, and can get dirty and flex. Also, the beamsplitter mirror must get very large for wide-angle lenses.
Another part of the problem has to do with precision alignment – mechanical alignment of the rig and optical alignment of the lenses. After many years of development we have to ask ourselves a question: if today’s cameras are not producing an adequate result, can they ever be engineered to produce one? I think the answer is that there are strong indications that after so much effort the development is paying off, and that the best cameras with the best operators are able to do the job – provided adequate correction is available in post.
Part two of this article will appear soon.