The revival of the stereoscopic theatrical cinema is intimately linked to the rise of digital technology for the production and projection of motion pictures. The term “digital,” when applied to cinema, means many things. Most people would assume a strong linkage with computers, and indeed computers play an important part in the digital cinema, from image capture or generation to projection. Whether the computers are servers or embedded in projectors, the digital cinema depends not only on this technology but also on modern display technology, including the Texas Instruments DLP light engines. My purpose is to give the reader some understanding of how the stereoscopic medium and the digital medium work together for the capture or creation of stereoscopic images.
It would be more pleasing to me, for one, to call this the electronic cinema rather than the digital cinema, but this isn’t an article about technology definitions, and everybody knows what I am talking about. Oddly, it is electronic moviemaking, or television, that has begun to replace chemical-based photography, because the current digital cinema is clearly an outgrowth of television. It is this isomorphism that gives the studios such fits, because the distinction between the 1920-pixel HDTV standard and the 2K theatrical standard is of interest to, and possibly noticeable only to, experts.
It is my purpose to illuminate why the combination of stereoscopy and digital technology is such a neat one, providing so many benefits. First let’s take a look at the content creation aspects of the medium, which fall into several categories: live-action photography, animation by means of computer-generated images, animation by means of performance capture, and conversion from planar to stereo.
The differentiation between computer generated animation and performance capture is one that does not have a sharp dividing line. There are movies that are touted as having performance capture, such as Beowulf, and there are movies such as Monster House that also use performance capture but make no mention of it in their promotion or advertising. In a naïve time the use of rotoscoping, the progenitor of motion capture, was a hush-hush affair and reports of its use in Snow White were denied by Disney. But they obviously used it.
For computer generated images and for performance or motion capture, digital technology plays a powerful role – and indeed it is utterly impossible to conceive of this means of content creation without digital technology. In the 3D boomlet of the fifties, except for a few cel-animated shorts, all the features were live action. But in the first two years of this renaissance, until recently, all the features were CG animation. This is history looping back on itself, because stereoscopy was invented using drawings, before photography existed.
All of the major animation studios that produce computer generated images have physicists and imaging specialists who are attempting to produce a computer world that can be rendered with remarkable real-world fidelity, or with controlled departures from the real world, to produce a beautiful visual effect. The people who create this content – the animators, background artists and other specialists – for the most part deal with content creation on an intuitive level. They aren’t doing calculations, but they are using computers. They need to be able to do what they do as any creative artist does, relying on intuition to work the medium.
Whether their endeavors are based on animator’s skills or the artist’s ability to create backgrounds, generally speaking they are dealing with three-dimensional databases that exist as algorithms and numbers in a computer. These three-dimensional databases have to be fully rendered and captured by a virtual camera, and for a stereoscopic version what is required are two perspective views; so there must be two virtual cameras. These two cameras must be set up and coordinated according to the geometry of stereoscopic image capture.
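The two-virtual-camera setup can be suggested with a minimal sketch in Python, assuming a simple parallel-axis rig; the function and parameter names here are my own illustration, not taken from any renderer’s API:

```python
# Toy sketch of a parallel-axis stereoscopic virtual camera rig:
# the left and right cameras sit on either side of a center point,
# separated by the interaxial distance (hypothetical names).

def stereo_cameras(center, interaxial):
    """Return left/right camera positions, offset along the x axis
    by half the interaxial separation on each side of the center."""
    cx, cy, cz = center
    half = interaxial / 2.0
    left = (cx - half, cy, cz)
    right = (cx + half, cy, cz)
    return left, right

# Example: a rig centered at the origin with a 65 mm interaxial,
# roughly the human interocular distance (units in meters).
left, right = stereo_cameras(center=(0.0, 1.5, -10.0), interaxial=0.065)
print(left, right)  # two viewpoints 65 mm apart
```

In practice the two cameras must also share focal length and convergence settings so the pair obeys the geometry of stereoscopic image capture.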
The same kind of remarks can be made for performance capture, in which motion vectors of the actors’ bodies and faces are turned into a database. That database is then manipulated into characters that are inserted into a computer generated world, or for that matter the characters could be placed into photography of the real world.
For camera-captured images, digital (or electronic) technology leaves film-based photography of stereoscopic images in the dust. Cameras that depend on modern CMOS and CCD sensors aren’t strictly digital; the sensors produce analog signals that are digitized and then recorded either on hard drives or tape. One benefit of these video cameras is that they can be lighter and more compact than film cameras. This is important because two cameras make a rig, and two big heavy cameras become a big, heavy, clunky rig. Also, it is very good to be able to get the lenses as close together as possible, especially for close-ups but also for medium shots.
During capture and immediately afterward it is desirable to look at the images. It is possible to view them on various kinds of stereoscopic monitors during photography, and without the need to process film and look at dailies (typically the next day), the cinematographer, the director and other creative and technical people can look at the images right away. In fact, they can often look at them on large screens – sometimes on a theater-size screen. It is very important to be able to do this, because it is so hard to visualize how stereoscopic images will look. It turns out to be a real bear to predict the stereoscopic effect. If you have to resort to calculators and rules of thumb to figure out whether the image is going to look good, stereoscopic photography becomes difficult to do. But if you can actually see what you’ve shot in real time (or shortly thereafter), you can improve, correct and tweak what you’re shooting. The same remarks apply to computer generated images, because the content creators are able to look at stereoscopic images in real time on their desktops or in their sweatboxes. (A sweatbox is a little screening theater.)
The same considerations apply to conversion technology. There are a number of firms that now specialize in converting planar to stereoscopic movies. They are all doing more or less the same thing depending on artists and computers to help them get a decent result with reasonable throughput. The basic idea involved is outlining of foreground objects, laying the skins of those objects on a wire frame mesh or a depth map, and treating the background by filling in missing data and modeling the background where required. All of which would be impossible without digital technology.
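The depth-map idea behind conversion can be suggested with a toy one-scanline sketch in Python. This is purely illustrative – real conversion pipelines are vastly more sophisticated – and all the names are my own:

```python
# Toy sketch of depth-based view synthesis for planar-to-stereo
# conversion: shift each pixel horizontally by a disparity derived
# from its depth value (a z-buffer keeps foreground pixels on top),
# then fill the holes exposed behind foreground objects.

def synthesize_view(row, depths, scale):
    """Shift pixels of one scanline by disparity = scale * depth."""
    width = len(row)
    out = [None] * width      # the new (shifted) scanline
    zbuf = [-1] * width       # nearer pixels win at each position
    for x, (pixel, depth) in enumerate(zip(row, depths)):
        nx = x + int(round(scale * depth))
        if 0 <= nx < width and depth > zbuf[nx]:
            out[nx] = pixel
            zbuf[nx] = depth
    # Crude hole filling: copy the nearest pixel to the left.
    for x in range(width):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else row[x]
    return out

row = ['a', 'b', 'c', 'd', 'e']
depths = [0, 0, 1, 1, 0]      # 'c' and 'd' are foreground
print(synthesize_view(row, depths, scale=1))  # ['a', 'b', 'b', 'c', 'd']
```

The hole-filling step is the toy version of what the conversion houses call filling in missing background data, which in production requires artists to paint or model what the original camera never saw.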
We have looked at the major ways in which content can be created. We will now look at a vital portion of the filmmaking process: post-production. Post-production involves an array of procedures that create the film after photography, including the manipulation of picture and sound elements into a finished product that can then be released to the theaters. When a stereoscopic film is cut, it’s a good thing to be able to see it in 3D so that the editor and the director can understand how shots interact with each other. There are many prejudices, opinions and myths about stereoscopic cutting – about what works and what doesn’t work in 3-D movies: for example, whether a lot of depth of field is required, or whether fast cuts are allowable or slow cuts are better to allow the stereoscopic effect to build. Theories matter only to a small extent; the eyes of the beholder rule. So if editors and directors can see what they are doing stereoscopically, that’s a tangible benefit, and given the difficulty of visualizing stereoscopic images and how shots interact, the well-known advantages of cutting a film digitally apply here in spades.
An important process, for camera-generated material in particular, is called rectification, a term that comes to us from aerial photography. If the left and right images have any distortions or magnification errors, they can, to a large extent, be fixed in post-production by tweaking the geometry of the two images so they correspond. This becomes important for zoom lenses, because zoom lenses have serious optical centration problems, which generate spurious parallax. These problems can be fixed in post-production, and there are both proprietary and off-the-shelf tools for doing so. For the most part, such errors can be eliminated, and the same tools can also correct the color and density shifts between the left and right images that can occur in cinematography.
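One small piece of rectification – removing vertical parallax – can be sketched in Python. This is a toy illustration with hypothetical names; real rectification tools also correct rotation, magnification, and color differences between the eyes:

```python
# Toy sketch of one rectification step: estimate the average vertical
# offset between matched left/right feature points, so the right image
# can be shifted until corresponding points sit on the same scanline.
# Vertical parallax carries no depth information and causes eyestrain.

def vertical_offset(left_pts, right_pts):
    """Average vertical disparity between matched (x, y) point pairs."""
    diffs = [ly - ry for (_, ly), (_, ry) in zip(left_pts, right_pts)]
    return sum(diffs) / len(diffs)

# Three matched feature points from a hypothetical left/right pair.
left_pts = [(10, 40), (80, 52), (150, 61)]
right_pts = [(12, 37), (83, 49), (154, 58)]
print(vertical_offset(left_pts, right_pts))  # 3.0 pixels of vertical misalignment
```

The remaining horizontal differences between the matched points are the legitimate parallax values that produce the stereoscopic depth effect, and those are left alone.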
The most important and well-developed element in this current phase of the stereoscopic cinema is stereoscopic projection. From a single projector with a DLP light engine, a stereoscopic image can be created using the time-multiplexed, or field-sequential, mode. I was the first person to create flicker-free images for the time-sequential process, and I am the primary inventor of the major selection techniques used with field-sequential stereoscopic presentations: CrystalEyes active shuttering eyewear and the ZScreen, the electro-optical modulator used by Real D. The ZScreen fits in front of the projection lens and switches the characteristics of polarized light in synchrony with the projected fields. Other systems are extant, such as shuttering-eyewear systems and the Dolby system, which is an advanced form of anaglyph.
Only the projectors made by manufacturers licensing DLP technology from Texas Instruments (Christie, Barco, and NEC) meet the required specification for field-sequential 3D. In order to make the Real D, NuVision shuttering eyewear, or Dolby systems work, you have to have a rapid sequence of frames projected on the screen, and only the DLP can refresh fast enough. In the case of material captured at the film-standard rate of 24 frames per second, these systems work best when projecting at 144 frames per second. There are two 24-fps images for 48 fps, and each image is repeated three times for a total of 144 fps. The images are concatenated, and a train of images (left, right, left, right, and so on) reaches the eyes. Half the time your right eye is seeing only the right images while your left eye sees nothing, and vice versa. If everything is done right, the result is good, because the left and right images are treated identically by the projector in terms of geometry and illumination. The repetition rate of 144 frames per second lets us approach left- and right-frame projection simultaneity, another important factor.
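The triple-flash arithmetic above can be sketched in a few lines of Python (a toy illustration; the names are my own, not from any projection server’s API):

```python
# Sketch of the "triple flash" presentation order: each 24-fps
# left/right frame pair is shown three times, alternating eyes,
# giving 24 * 2 * 3 = 144 images per second on the screen.

def triple_flash(n_film_frames):
    """Return the (eye, frame) display order for n 24-fps frame pairs."""
    order = []
    for frame in range(n_film_frames):
        for _ in range(3):               # each pair is flashed 3 times
            order.append(('L', frame))   # left eye sees, right is dark
            order.append(('R', frame))   # right eye sees, left is dark
    return order

seq = triple_flash(24)                   # one second of 24-fps film
print(len(seq))                          # 144 images shown per second
print(seq[:6])                           # [('L', 0), ('R', 0), ('L', 0), ...]
```

Reading the sequence off, each eye is shown an image exactly half the time, and each film frame reaches each eye three times before the next frame arrives – which is what suppresses flicker at theatrical light levels.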
Dual-projection systems require a lot of tweaking, and even after they have been tweaked they can drift out of spec. It’s not that it is impossible to make dual-projection systems work; it is simply that, given current technology, they are not a product you can count on. The digital cinema requires not only the projection of a beautiful image but a dependable process – an image that does not require constant monitoring.
Digital technology – content creation, post-production, and projection – has enabled the stereoscopic medium to become a part of the filmmaking armamentarium: not only to provide beautiful projection but to provide a dependable product, free from the mistakes of the past, which I don’t want to dwell on because they’re such a bummer. Today’s digital 3D projection is free from fatigue and eyestrain, and content creators can now do their best to discover the art of this new medium. We’re going to see several years of experimentation and discovery, and at the end of that time the stereoscopic medium will be on a firm foundation. Creative people will never stop creating, but we will reach a plateau where many of the creative and production technical processes become routinized. Oddly enough, the reintroduction of the stereoscopic cinema comes down to turning what had been more or less a laboratory experiment into a routine.
And none of this would have been possible without DLP projection which is the invention of Larry Hornbeck, who I just had the pleasure of meeting at the SPIE Stereoscopic Displays and Applications Conference in San Jose. Larry asked me for my autograph, so I asked him for his. As you can see, his autograph is on the back of a pair of paper 3D eyewear, which is entirely appropriate.