Preprint C203/2016
Expanded Omnidirectional Panoramas
Aldo René Zang
Keywords: Omnidirectional panoramas | RGB-D panoramas | Multi-layered panoramas | Mixed reality | Photo-realistic rendering

Every day we encounter mixed reality environments, where synthetic and real captured scenes are intertwined for our amusement. We see them in the film industry, on television, in advertising pieces and even in scientific simulations. In this thesis we introduce a new framework for producing photo-realistic computer-generated renderings combined with real sets. The synthetic content is indistinguishable from the original elements, since both share the same lighting, captured in loco.

Omnidirectional panoramas are old friends of computer graphics and the entertainment industry. Since the '80s they have been used for reflection mapping, image-based lighting and non-traditional media (old clunky panorama viewers come to mind here). Much has changed since then. In the virtual and augmented reality fields, the current course of hardware evolution has left the industry ripe to embrace this new era.

In parallel with the advances in computer graphics, there is visible ferment in the photo and video hardware industries: light-field cameras, panorama and omnidirectional stereo (i.e., spherical stereo) rigs, and structured-light sensors, to name a few. Gadgets seemingly taken from the most imaginative pieces of sci-fi literature become consumer-ready faster than an optimistic futurist could hope for. And here again, panoramas are everywhere.

It comes as no surprise, then, that a considerable part of this work addresses the peculiarities of captured HDR (High Dynamic Range) omnidirectional panoramas. The resulting pipeline thus supports authoring for any panorama-viewing device, such as smartphones, fulldomes, CAVEs and head-mounted displays (i.e., VR glasses).
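As a concrete point of reference for how such devices sample a panorama, the sketch below maps a viewing direction to texture coordinates on an equirectangular (latitude-longitude) image, the usual storage format for HDR omnidirectional panoramas. This is standard mapping math rather than code from the thesis; the y-up axis convention and the function name are assumptions.

    import numpy as np

    def direction_to_equirect_uv(d):
        # Map a unit direction vector (x, y, z), y-up, to texture
        # coordinates (u, v) in [0, 1]^2 on an equirectangular panorama.
        x, y, z = d
        u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)   # longitude -> u
        v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi  # colatitude -> v
        return u, v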

There is much that has not been accounted for in previous work, though. This is partially due to the limitations of a panorama compared to a full representation of a scene. A panorama is nonetheless a representation format, and as such it can be expanded. We therefore introduce in this thesis an \emph{Expanded Panorama} format where, along with the lighting information, we encode the scene geometry as a depth channel in camera space. With our expanded panoramas we can not only relight synthetic objects: the rendering can be done in a single pass, removing the need for multiple calibration passes and for post-production to fine-tune the blending between the original and the new scenery. The expanded panorama is explored here as a complete solution, including proper hidden-surface determination and lighting algorithms, as sketched below.
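To make the role of the depth channel concrete, here is a minimal sketch of hidden-surface determination against an expanded (RGB-D) panorama: a synthetic fragment is visible only if it lies closer to the panorama center than the captured geometry along the same viewing direction. The function name, the nearest-neighbor lookup and the array layout are illustrative assumptions, not the thesis's actual implementation.

    import numpy as np

    def resolve_visibility(pano_rgb, pano_depth, u, v, frag_rgb, frag_dist):
        # pano_rgb:   (H, W, 3) HDR radiance captured in the panorama
        # pano_depth: (H, W) distance from the panorama center to the real
        #             scene along each pixel's viewing direction
        # frag_rgb, frag_dist: shaded color and distance of a synthetic
        #             fragment seen along the direction mapped to (u, v)
        h, w = pano_depth.shape
        py = min(int(v * h), h - 1)   # nearest-neighbor pixel lookup
        px = min(int(u * w), w - 1)
        if frag_dist < pano_depth[py, px]:
            return frag_rgb           # synthetic object is in front
        return pano_rgb[py, px]       # captured scene occludes the object

    # Toy usage with a 1x1 panorama whose real geometry is 5 units away:
    rgb = np.full((1, 1, 3), 0.2)
    depth = np.full((1, 1), 5.0)
    print(resolve_visibility(rgb, depth, 0.5, 0.5, (1.0, 0.0, 0.0), 2.0))

In a full renderer this comparison would happen per sample during a single rendering pass, which is what allows the expanded panorama to dispense with separate calibration and compositing passes.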
