Lately there has been a lot of interest in two formats for stereoscopic multiplexing: the above-and-below, resurrected by Technicolor for theatrical projection using film, and the side-by-side, used for multiplexing left and right images for television. Here’s some background from a personal perspective.
In the early 1970s Chris Condon of StereoVision International introduced the side-by-side format on 35mm motion picture film for the projection of a 3D feature film, The House of Wax. Prior to this there were instances of the side-by-side format being used on 35mm film for still cameras; Leitz and Zeiss offered stereo lenses that produced a version of the format for their Leica and Contax cameras decades before Condon’s implementation. And in the fifties Paillard sold such a product (both taking and projection lenses, which I used) for their Bolex 16mm cameras.
Condon added a 2X anamorphic squeeze to restore the aspect ratio of the image. Since the aspect ratio of the full Academy aperture is about 1.3:1, if the frame is split into two halves by a vertical line, and each half is squeezed by a factor of 2:1, then unsqueezing by the same factor in projection restores each image to 1.3:1. I saw a screening of Condon’s version of House of Wax in this format in the early 1970s at a theater in San Francisco, and the projection, although a bit dark, was good. I don’t know if the dimness was attributable to the format or the optics, or whether the problem was the lamp; some theaters run their lamps too long and they get dimmer with age. For all I know they were using a carbon arc, in which case that was not a factor.
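The arithmetic here is simple enough to sketch. The 1.33:1 figure is the approximate Academy aspect ratio mentioned above; everything else is just the split-squeeze-unsqueeze bookkeeping:

```python
# Arithmetic behind the 2x anamorphic side-by-side format.
FULL_ASPECT = 1.33                       # full Academy aperture, roughly 1.33:1

# Split the frame with a vertical line: each half is half as wide.
half_aspect = FULL_ASPECT / 2            # about 0.665:1 per eye

# Each eye's image is squeezed 2:1 to fit its half of the frame...
SQUEEZE = 2.0

# ...and the projection lens unsqueezes by the same factor,
# restoring the original aspect ratio.
restored_aspect = half_aspect * SQUEEZE

print(restored_aspect)  # 1.33 again
```

The same bookkeeping applies to the video versions discussed below, with the squeeze applied vertically (above-and-below) or horizontally (side-by-side).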
In the late 1980s and early 1990s, when I was at StereoGraphics, I was thinking about multiplexing techniques for video, like the above-and-below and the side-by-side. I had used a video version of the above-and-below format (see prior article) that Colonel Bernier and others, including Condon, had used for motion picture film.
I was able to apply the above-and-below idea to video for the first 120Hz flicker-free stereo system by dividing the television frame into two squeezed subfields, one above the other, and (unfortunately) halving the vertical resolution to produce a 120Hz signal played back on a modified monitor. We injected a sync pulse into the video signal between the subfields, and with a monitor running at 120Hz and the proper selection device, one could see a stereo image. Speeding up the monitor wasn’t all that hard to do (for some monitors) because the bandwidth was not being doubled, only the vertical refresh rate. (Lhary Meyer, the first StereoGraphics hire, developed a circuit that could be added to some monitors to do the trick.) The picture was okay, but only okay, because the raster lines were visible: there were only half of them (240 for NTSC) to fill up the image on the screen. We then tried line-doubling, but the image was never great; it remained soft, though without visible raster lines.
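As a sketch of the muxing step, assuming frames are represented as lists of scan lines (the helper names and the string "pixels" are mine for illustration, not anything from the actual hardware): each eye's image loses every other line, and the two 240-line subfields are stacked into one 480-line frame.

```python
def squeeze_vertically(frame):
    """Halve the vertical resolution by keeping every other scan line
    (the squeeze that costs half the raster lines)."""
    return frame[::2]

def mux_above_below(left, right):
    """Stack the two squeezed subfields into one frame:
    left eye on top, right eye below."""
    return squeeze_vertically(left) + squeeze_vertically(right)

# 480 active NTSC lines per eye -> two 240-line subfields in one frame.
left = [f"L{i}" for i in range(480)]
right = [f"R{i}" for i in range(480)]
frame = mux_above_below(left, right)
assert len(frame) == 480
assert frame[0] == "L0" and frame[240] == "R0"
```

The sync pulse injected between the subfields is what let the modified monitor treat each 240-line subfield as its own vertical refresh, yielding 120 fields per second.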
The above-and-below technique could also be used for computer graphics, and for that application it produced much better-looking images because of the higher vertical resolution, so halving the number of lines in the image was less detrimental. Computer graphics users might object that the pixels were no longer square but rectangular. But, big deal, because the image looked good; for the first time people could see decent-quality stereo images on a computer screen. Although stereoscopic computer graphics were born in my lab in the early 1980s, I never take for granted looking at a stereo monitor or movie screen; I never cease to marvel at the images.
Soon computer companies like Evans and Sutherland and Silicon Graphics adopted StereoGraphics’ technology, and because they controlled the graphics card’s output they could offer square pixels, and because of their relationships with their vendors they were able to supply their customers with monitors that ran at a high refresh rate. The value of the above/below technique was that it allowed me to demonstrate 120Hz flicker-free images, not that it was implemented as part of products (though sometimes it was). The selection device was either the ZScreen, which was placed over the monitor screen and used with passive glasses starting in 1987, or, by 1989, for those who preferred it, CrystalEyes shuttering eyewear. The first ZScreen we developed was of the same type that is used in today’s cinemas.
I’ve related this history in part to highlight the shortcoming of the above-and-below format for video, its dearth of scan lines for NTSC, in order to explain what led me to experiment with the side-by-side format. I worked on the side-by-side format for television because I felt it could produce a better result than above-and-below: there are only 480 active lines in NTSC television, so I thought that pixel doubling might look better than line doubling. I did some experiments in a post-production suite in San Francisco, and much to my delight, when run through the post-production facility’s excellent equipment (a Harry), the reconstructed image, which had been horizontally squeezed to occupy half the frame area and then unsqueezed to fill the entire frame, looked as good as the original from normal viewing distances. In fact, only the finest of fine details was lost. I remember a test shot we had of a tennis match: from normal viewing distances of several times the picture width, everything looked perfectly fine, but standing right up against the monitor (I’m nearsighted, so I can stand eight inches from the screen and see it sharply) I could see some loss of detail in the tennis net itself. But wow! I thought I was onto something.
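The pixel-doubling reconstruction being compared against line doubling can be sketched as follows. This is pure illustration under my own assumptions: scan lines as lists of pixel values, 720 pixels per line as an arbitrary width, and nearest-neighbor doubling standing in for whatever interpolation the real equipment used.

```python
def squeeze_horizontally(row):
    # Keep every other pixel: half the horizontal resolution per eye.
    return row[::2]

def mux_side_by_side(left_row, right_row):
    # Left eye in the left half of the scan line, right eye in the right half.
    return squeeze_horizontally(left_row) + squeeze_horizontally(right_row)

def unsqueeze(half_row):
    # Pixel doubling: repeat each sample to restore full width.
    out = []
    for p in half_row:
        out.extend([p, p])
    return out

left = list(range(0, 720))           # one 720-pixel scan line per eye
right = list(range(1000, 1720))
line = mux_side_by_side(left, right)
assert len(line) == 720              # both views fit one normal scan line
restored_left = unsqueeze(line[:360])
assert len(restored_left) == 720     # full width back; only fine detail lost
```

The detail lost in the tennis net is exactly what this decimate-then-double round trip discards: anything finer than two pixels wide.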
Working with Lhary (that’s how he spelled it), and with some outside help, we developed a couple of boxes that could squeeze and then unsqueeze television signals. Another engineer, Bill McKee, and I designed stereoscopic cameras. They were side-by-side cameras, and we used them to shoot some films. Mort Heilig, the inventor of Sensorama and considered one of the founders of augmented- and virtual-reality technology, shot a couple of the films: one, called Above and Below San Francisco, consisted of aerial photography; the other was of a boxing match in a San Francisco gym.
As I said, we built two boxes – one that could mux and one that could demux – and these were the subject of two U.S. patents. We could never get the same image quality we got with the Harry in the post-production suite. The image was always softer than I liked. But the technology 20 years ago wasn’t what it is today, and squeezing and unsqueezing the signal now can be much more easily accomplished with digital circuitry.
Both the side-by-side and above-and-below electronic formats have an attractive advantage in common: they survive JPEG and MPEG compression. Both need to piggyback on the existing compression schemes, and they do so perfectly. Only at the boundaries between sub- or side-fields is there any possibility of smearing or combining the two perspective views’ information, but not to any visible extent. They both do well because these topological transformations segregate all perspective information within areas that preclude information pollution from the unwanted perspective. Other such geometrically isolating schemes might work as well: splitting the frames diagonally?
As I said initially, the side-by-side format has attracted attention recently. It is now being offered as a viable stereoscopic multiplexing technique for video. Perhaps it is, because the tests I have seen with it on stereo hi-def sets (DLP RPTV, fast LC, and plasma) look good. It’s impressive what digital compression can do compared with the analogue (or hybrid) means we originally used at StereoGraphics. People are now trying various tricks to enhance side-by-side resolution, but the images I have seen at the Entertainment Technology Center (which is part of USC) and other venues, on various television sets using versions of the side-by-side format, all look pretty much the same, that is to say, good. And I should add that one of the curious things about the current stereoscopic television efforts is that, when looking at various types of monitors using various multiplexing techniques, there are many combinations that give a good-looking picture. The major difference is the selection technique rather than the multiplexing technique: whether one is looking at a Micropol or XPol (interdigitated polarizer) image or through shuttering eyewear. Some people like one, some the other.
Improvements to side-by-side include one by Sensio, which I think would be classified as a mezzanine muxing scheme designed to add back high-frequency information. I think they use side-by-side, but they don’t mention it in their patents and I don’t know much more about it. I worked on various other approaches with other workers at Real D, but unless I got up close to the screen it was hard to see the difference between these variations and unadorned side-by-side.
We are now entering an age of smart stereoscopic televisions with stereo engines in them: engines that can look at the incoming signal and, depending upon how it’s multiplexed, use that information properly given the combination of the set’s display technology and selection technique. That’s the analogue of what happens today with the scaling engine in a TV that massages the incoming signal into the format required by the native resolution of the set’s display.
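The dispatch such a stereo engine would perform might look like the following sketch. The format tags, the frame-as-list-of-rows representation, and the function name are all my own assumptions for illustration; no particular set works this way.

```python
def demux(frame_rows, fmt):
    """Split a multiplexed frame into left- and right-eye images,
    according to how the incoming signal is multiplexed."""
    if fmt == "side-by-side":
        half = len(frame_rows[0]) // 2
        left = [row[:half] for row in frame_rows]
        right = [row[half:] for row in frame_rows]
    elif fmt == "above-and-below":
        half = len(frame_rows) // 2
        left, right = frame_rows[:half], frame_rows[half:]
    else:
        raise ValueError(f"unrecognized multiplexing format: {fmt}")
    # A real set would then rescale each view to the panel's native
    # resolution and hand it to the selection technique (e.g. shutter
    # sync or an interdigitated polarizer pattern).
    return left, right

frame = [["L"] * 4 + ["R"] * 4 for _ in range(6)]
left, right = demux(frame, "side-by-side")
assert left[0] == ["L"] * 4 and right[0] == ["R"] * 4
```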
The side-by-side format that I commandeered from Condon, and Condon took from Leitz, Zeiss, and Paillard, and who knows who else, still has legs. It’s the darnedest thing but that’s the way of technology; people just won’t leave well enough alone. Sometimes they make things better.
I am presently working on a side-by-side variant for film at the company I recently cofounded, Oculus3D. But that’s another story.