Raw Stereo Data, Part Two

In the first part of this article I articulated the concept of raw stereo data, an analog of what is now common practice in electronic cinematography and in exposing camera negative: capture data that allows for extensive manipulation in post, rather than exposures that bake in the final effect. Part One closed with a discussion of beamsplitter rigs.

To continue: it is clear that the customers (producers, studios, directors, cinematographers) who look at these rigs wonder about them. The rigs do not appear to have the authenticity that a simple motion picture camera has. Cameras from Panavision or Arriflex are not only well-developed machines; they also have extensive support systems and brand-name clout. The present stereoscopic cameras, or rigs based on beamsplitter designs, are complicated machines. Their designers have gone to some lengths to reduce the interaxial separation, which much of the photography requires. The rigs are complicated and need extra care in post, but I hasten to add that in the right hands they produce excellent results.

With very small camera heads and lenses, say down to an inch in width, most of the photography could be done with simple rigs. We may be within reach of such machines, but within reach might mean years of development.

Another approach would be a forward-looking pair of camera heads with a fixed interaxial separation of many inches. This vastly simplifies the design. The optics need to be calibrated and the instrumentation coordinated, and zoom lenses need proper centration and magnification calibration; but compared with the problems of beamsplitter rigs (which include these same issues), these are easier to solve. Such a two-head design won't work, however, unless we can routinely reduce the interaxial separation in post. The gist of the idea is that we can simplify camera design if we have a post technique that can properly reduce the effective interaxial separation, and hence the stereo strength.
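
To make the arithmetic concrete: for parallel cameras the screen parallax of a point at depth Z is d = fB/Z, so parallax scales linearly with the interaxial B, and dialing the effective interaxial down to a fraction of the shooting baseline means scaling every disparity by that same fraction. A minimal sketch in Python (hypothetical names, and it assumes a dense disparity map is already in hand, which is the hard part):

```python
import numpy as np

def effective_interaxial_disparity(disparity, shot_interaxial_mm, target_interaxial_mm):
    """Scale a disparity map to simulate a smaller camera baseline.

    For parallel cameras, disparity d = f * B / Z, so d is proportional
    to the interaxial B.  Shrinking B from the shooting value to the
    target value scales every disparity by the same factor.
    """
    alpha = target_interaxial_mm / shot_interaxial_mm  # 0 < alpha <= 1
    return disparity * alpha

# Example: a rig shot at a 90 mm baseline, dialed down to 40 mm in post.
d = np.random.rand(1080, 1920).astype(np.float32) * 30.0  # toy disparities (px)
d_reduced = effective_interaxial_disparity(d, 90.0, 40.0)
```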

I have some experience with interpolation, having spent a couple of years using off-the-shelf morphing packages to learn how to reduce interaxial separation; I did this in the context of producing multiple views from stereo pairs for autostereoscopic lenticulars. The problem is hidden data: material occluded in both views can be reproduced only through artistic means or through some algorithm that may not exist at this moment, but probably can be devised, since human beings are endlessly imaginative and ingenious. A camera with heads that are, say, 3 or 4 inches apart may be all well and good, but how do you routinely interpolate to reduce the interaxial separation? You want to reduce it on the fly, so that in post you can see what you're doing and dial in the right effect.
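
To illustrate why the hidden data is the crux, here is a toy forward-warp of a single scanline: shifting each left-view pixel by a fraction of its disparity yields the intermediate view, but disoccluded positions have no source pixel and come out as holes that an artist, an algorithm, or extra photography must fill. This is a bare sketch under assumed conventions (positive left-to-right disparity, NumPy), not how a production interpolator works:

```python
import numpy as np

def warp_toward_right(left_row, disparity_row, alpha):
    """Forward-warp one scanline of the left view by alpha * disparity.

    alpha = 0 reproduces the left view; alpha = 1 approximates the right
    view.  Target pixels that nothing maps onto stay NaN: the hidden data.
    """
    width = left_row.shape[0]
    out = np.full(width, np.nan)
    # Visit pixels far-to-near so nearer pixels (larger disparity)
    # overwrite farther ones at occlusions.
    for x in np.argsort(disparity_row):
        xt = int(round(x - alpha * disparity_row[x]))
        if 0 <= xt < width:
            out[xt] = left_row[x]
    return out

row = np.linspace(0.0, 1.0, 16)               # toy intensities
disp = np.where(np.arange(16) < 8, 4.0, 2.0)  # near object on the left
mid = warp_toward_right(row, disp, 0.5)       # halfway view
print(int(np.isnan(mid).sum()), "hole pixels need filling")
```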

You could provide the stereo pair (or just a planar image, for that matter) to In-Three or Sony Imageworks, and they would use their tools to produce a shot with the right stereo strength, or effective interaxial separation.

With a routinized interpolation process, camera design and cinematography will change. On the set the cinematography would go quickly with the simplified camera. The forward-looking camera would not need the tweaking and adjusting that beamsplitter rigs need, and you would be shooting very much the way you shoot normally, while providing a stereo pair that could be manipulated in post. From a budgetary standpoint, synthesis in conjunction with fixed-interaxial photography may well have cost advantages: if raw stereo data can be provided that meshes with the post process, the whole thing works as an end-to-end system, and since production time costs more than post-production time, a scheme like this may have real advantages.

Furthermore, it may not be necessary to film stereoscopically with two heads at all. If there is enough depth data, it might be as good or better to shoot planar and manipulate everything in post. I know this will sound like heresy to some filmmakers, but the kind of creative freedom that CG animators have needs to become part of live action, and the only way to do that is by using raw depth data to enable the freedom of manipulation in post required to fulfill the storytelling vision of the film's creators.

There are several ways I can conceive of for providing raw data at the camera for post. The first idea that occurs to me is to shoot the set without the actors. If you shoot the set both with and without the actors, you have the advantage that there is no hidden material to make up. This might not work every time for every shot, but in the controlled environment of shooting theatrical features it can work.
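
A sketch of how the empty-set plate would be used: once both the live-action frame and the clean plate have been warped to the same target viewpoint, the plate can fill the disocclusion holes with background that was actually photographed. The helper below is hypothetical and assumes holes are marked NaN, as in the warping sketch above:

```python
import numpy as np

def fill_holes_from_plate(warped_view, warped_clean_plate):
    """Fill disocclusion holes in a synthesized view from a clean plate.

    Both inputs are assumed warped to the same target viewpoint, with
    NaN marking holes.  Wherever the warp revealed background hidden by
    the actors, the empty-set plate supplies real photography instead
    of painted-in pixels.
    """
    holes = np.isnan(warped_view)
    filled = np.where(holes, warped_clean_plate, warped_view)
    return filled, holes.mean()  # fraction of the frame that was recovered

view = np.array([0.2, np.nan, 0.5, np.nan])  # toy synthesized scanline
plate = np.array([0.1, 0.3, 0.4, 0.6])       # toy clean-plate scanline
filled, frac = fill_holes_from_plate(view, plate)
```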

One method that might help is depth mapping, although it is limited because it intrinsically provides data from only a single perspective view; still, it is probably better than nothing. There are ways to scan a scene and thereby create a depth map, and cameras using such technology have appeared.
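
A scanned depth map feeds the same machinery through the usual parallel-camera relation. Here is a hypothetical conversion from depth to disparity for a chosen virtual baseline, keeping in mind the limitation above: a single viewpoint means everything occluded in that view is missing from the start:

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, interaxial_m):
    """Convert a depth map to disparity for a chosen virtual baseline.

    Parallel-camera relation: d = f * B / Z (pixels).  A single depth
    map lets you synthesize both eyes, but only from one perspective,
    so anything occluded in that view stays unknown.
    """
    return focal_px * interaxial_m / np.maximum(depth_m, 1e-6)

depth = np.full((1080, 1920), 4.0, dtype=np.float32)  # toy: flat scene at 4 m
disp = depth_to_disparity(depth, focal_px=2000.0, interaxial_m=0.065)
```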

Another approach would be a very small camera located between the left and right cameras. That small camera shoots the middle perspective, which provides much of the hidden data. Alternatively, top and bottom cameras located at the midpoint could be used to interpolate the midline view for filling in hidden data.

The idea, then, would be to take the photography and the additional data, however it is derived, and use them in post to create or control the stereoscopic shot that is required.
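
In code terms this amounts to a priority-ordered fill: start from the warped principal photography and consult each auxiliary source (the center camera, the clean plate, whatever else is on hand) for the pixels that are still unknown. A speculative outline, again assuming everything has been warped to the target viewpoint and NaN marks holes:

```python
import numpy as np

def assemble_view(primary, fill_sources):
    """Assemble a target view from warped photography plus fallbacks.

    primary and each fill source are assumed already warped to the
    target viewpoint, with NaN marking unknown pixels.  Sources are
    consulted in priority order; whatever remains is left for an
    artist or an inpainting pass.
    """
    out = primary.copy()
    for src in fill_sources:  # e.g. [center_camera, clean_plate, synthesized]
        holes = np.isnan(out)
        out[holes] = src[holes]
    return out

primary = np.array([0.2, np.nan, np.nan, 0.8])
center = np.array([0.1, 0.4, np.nan, 0.7])
plate = np.array([0.0, 0.3, 0.6, 0.9])
final = assemble_view(primary, [center, plate])
```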

Off-the-shelf tools from Avid and Quantel have added to the post pipeline. Avid lets you edit offline with untimed color and untimed stereo (you can view the stereo pair as it came out of the camera, without depth grading). The Quantel Pablo allows for corrections and horizontal shifting for setting the zero-parallax position, but it has no interpolation capability. Lustre might be an interesting package in which to incorporate stereo post features and interpolation.
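
The horizontal shifting is simple to state exactly: translating the left and right images toward or away from each other adds a constant to every disparity, which moves the zero-parallax plane without changing relative depths. A minimal sketch, with np.roll standing in for a proper crop-and-shift:

```python
import numpy as np

def set_zero_parallax(left, right, shift_px):
    """Horizontal image translation (HIT) to move the zero-parallax plane.

    Shifting the left and right images in opposite directions by
    shift_px / 2 adds a constant offset to all disparities; objects
    whose original disparity equals the offset land on the screen
    plane.  Edges are rolled here for brevity; a real tool crops or
    blanks them instead.
    """
    half = shift_px // 2
    return np.roll(left, half, axis=1), np.roll(right, -half, axis=1)

L = np.random.rand(1080, 1920).astype(np.float32)
R = np.random.rand(1080, 1920).astype(np.float32)
L2, R2 = set_zero_parallax(L, R, shift_px=12)
```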

Stereoscopic post production is going to behave much like color timing. When an editor cuts a show on an offline device like an Avid, he sees a one-light print, and as a matter of faith he knows that when it's timed the show is going to look good. The new Avid offline editing provides for viewing stereoscopically but not for corrections, a philosophy analogous to what Avid presently offers for looking at footage in color. You can see a stereoscopic image, but it hasn't been timed, so you see whatever came out of the camera; you can't set the zero-parallax point. You edit the show as best you can, assuming it's going to get fixed in post. (You do see it at half resolution, because the over-under format is converted to quincunx for DLP.)

To sum up: the approach I am recommending uses various kinds of data to produce the end result. Stereoscopic post can, in addition to its present functions, allow for setting the effective interaxial separation to provide the right strength of stereoscopic effect for a particular shot, so that it works together with adjacent shots in the sequence. The concept of providing raw data in whatever form, and being able to manipulate it in post, is a solid idea, and it will ultimately provide the ability to manipulate stereoscopic images so that they are perfectly appropriate to telling the story.

3 Responses to “Raw Stereo Data, Part Two”

  1. clydedesouza Says:

    Dear Lenny,
    After doing the same (morphing/warping) to produce autostereo content for the StereoGraphics screens, and later the Newsight/Opticality screens, back in 2004, I revisited that old problem in its new incarnation…
    … the dilemma of those big monstrous 3D beamsplitters.

    Thanks to modern laptop processing power and the near-realtime ability of software like Adobe CS5, it's now possible, at least on 3D shoots destined for theatrical release, to go on location, do quick shots of a location with a parallel rig, and then “reshape the 3D volume” in post.

    I call it the RealFlex method:
    http://realvision.ae/blog/2010/08/3d-camera-interaxial-and-convergence-using-the-realflex-method/

    Regards.
    Clyde

    • Lenny Lipton Says:

      Dear Clyde,

      You are right! If you can routinely and successfully interpolate to produce reduced interaxials, then the damn beamsplitters are history. However, there remain significant challenges with regard to automatic reconstruction of hidden material, and with a morphing artifact that shows up when interpolating around thin lines with a vertical component.

      • clydedesouza Says:

        Yes, you're right about the artifacts on vertical lines. With extreme reshaping of the interaxial they will show up.

        Ideally, I'd use a combination of the smallest parallel rig (Si2K minis?) or MVC cameras with HIT, some distance, and an ever-so-slight nudge on the zoom, along with the volume reshaping in post, to achieve the best results.

        I updated the article with a small example, as it’s generating a lot of questions: http://bit.ly/bYfWi3

        Best Regards.
        Clyde
