Wednesday March 16th, 2011 – Final Programme

  • 8:30 – 9:00 Registration
  • 9:00 – 9:15 Welcome
  • 9:15 – 10:15 Keynote
    • Alexander Keller (mental images): Evolving Cinematic Rendering
  • 10:15 – 10:45 Coffee Break
  • 10:45 – 12:15 Morning Session
    • Alex Evans (Media Molecule): Volumetric Techniques in the LittleBigPlanet2 Graphics Engine
    • Rob Pieke (Moving Picture Company): A look at MPC’s core technologies for VFX
    • Erik Reinhard (University of Bristol): Perceptual Plausibility: Some Errors are Better than Others
  • 12:15 – 13:30 Lunch (provided)
  • 13:30 – 15:00 Afternoon Session
    • Sam Martin (Geomerics): Real-Time Radiosity for Video Games
    • Jon Starck (The Foundry): Stereo Post-Production: Current Problems, Future Challenges
    • Simon Green (NVIDIA): From Tablets to Supercomputers – The New World of the GPU
  • 15:00 – 15:30 Coffee Break
  • 15:30 – 17:15 Late Afternoon Session
    • Yuriy Yemelyanov (Black Rock Studio): Bridging Ray and Raster Processing on GPUs
    • Dan Bailey (Double Negative): Thinking in Parallel – Achieving Faster Simulations for VFX
    • Martin Preston (Framestore): R&D in VFX Production
    • Tim Weyrich (University College London): Skin and Acrylics – Modelling and Fabricating Real-World Appearance
  • 17:15 – 17:20 Closing Remarks
  • 17:20 – 18:30 Drinks Reception (in North Cloisters, sponsored by the EngD VEIV Centre)


Alexander Keller (mental images): Evolving Cinematic Rendering

Cinematic rendering is no longer in its infancy. However, in many cases, achieving the desired look and performance requires a great deal of manual labor. In this context we will discuss four key technologies: First, deterministic sampling can be used to enable algorithms that are both consistent and simple to parallelize. Second, the hierarchies exposed by most massive models can be used for automatic accelerated rendering. Third, the finite precision of the computations found in rendering allows their efficiency to be increased through the selection of appropriate levels of detail; this in turn results in simplified algorithms. Finally, through the application of physically based simulation, extraordinary results can be achieved without renderer- or scene-specific parameter tweaking. Taking these points into consideration can help evolve cinematic rendering beyond the current state of the art, thereby unleashing more artistic freedom through improved workflows.
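
The first of these points, deterministic sampling, typically refers to quasi-Monte Carlo sequences. As an illustrative sketch only (not mental images' actual implementation), the classic Halton radical-inverse generator shows why such samplers are both consistent and trivial to parallelize: any worker handed the same indices regenerates the same points.

```python
def halton(index, base):
    """Radical inverse of `index` in the given base: a deterministic
    low-discrepancy sequence used in quasi-Monte Carlo sampling."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# Deterministic 2D sample points from bases 2 and 3; splitting the index
# range across workers partitions the work with no shared random state.
samples = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```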

Alex Evans (Media Molecule): Volumetric Techniques in the LittleBigPlanet2 Graphics Engine

LittleBigPlanet, the 2008 PS3 exclusive title, used a ‘light-prepass renderer’ to allow user generated content to have large numbers of lights without imposing complicated technical limits on its un-technical users. As with all deferred techniques, this led to limitations in the rendering of transparent objects and effects. In this talk, I will discuss the decision for LittleBigPlanet2 to move to a more traditional ‘forward renderer’. To maintain the ability to render scenes with many local lights, with predictable cost, we made extensive use of volumetric techniques, such as dynamically generated irradiance volumes, and voxelization of the scene. This voxelization, dynamically computed on the GPU each frame, was then used to compute single scattering fog effects, local light shadows, world-space ambient occlusion, fluid-simulation collision, as well as the direct lighting for the main renderer. I’ll cover implementation details, including the pros and cons of these techniques as applied to LittleBigPlanet 2 on PS3.
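
The irradiance-volume idea mentioned above, a regular 3D grid of lighting samples that shaded points look up with trilinear filtering, can be sketched independently of the LittleBigPlanet2 engine. The scalar grid below is an illustrative stand-in for the engine's actual per-channel data, and the coordinate must lie in the grid interior (below the last sample on each axis).

```python
import math

def sample_irradiance(grid, pos):
    """Trilinearly interpolate an irradiance value from a regular 3D grid.
    `grid[z][y][x]` holds one scalar sample per voxel corner; `pos` is a
    continuous (x, y, z) coordinate in grid space."""
    x, y, z = pos
    x0, y0, z0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    def g(i, j, k):
        return grid[z0 + k][y0 + j][x0 + i]
    # Interpolate along x, then y, then z.
    c00 = g(0, 0, 0) * (1 - fx) + g(1, 0, 0) * fx
    c10 = g(0, 1, 0) * (1 - fx) + g(1, 1, 0) * fx
    c01 = g(0, 0, 1) * (1 - fx) + g(1, 0, 1) * fx
    c11 = g(0, 1, 1) * (1 - fx) + g(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```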

Rob Pieke (Moving Picture Company): A look at MPC’s core technologies for VFX

Over the last decade, MPC has built up and iteratively refined a core set of proprietary technologies. These have become not only the building blocks of the tools we continue to develop today, but allow for significant interoperability between – and a coherent pipeline around – them. This talk will cover the origins of our core technologies, some of the themes in our approach to software development, and a brief overview of some of the tools our artists use.

Erik Reinhard (University of Bristol): Perceptual Plausibility: Some Errors are Better than Others

Graphics has always pursued physical accuracy to guide algorithm design. However, full physical accuracy is not always necessary or even possible. Substituting perceptual plausibility is a good approach to designing algorithms that would otherwise be unattainable. This presentation highlights a case study, namely image-based material editing, in which gross physical errors combine to yield a visual percept that is nonetheless considered plausible.

Sam Martin (Geomerics): Real-Time Radiosity for Video Games

This talk focuses on an architecture and set of techniques for producing real-time radiosity in video games. We describe Enlighten’s architecture, covering its separation of direct and indirect lighting, mixed use of CPU/GPU resources, and Enlighten’s optimisation strategies. We also touch on areas of workflow, content pipelines and run-time systems for Enlighten.
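
Enlighten's internals are proprietary, but the textbook radiosity system it builds on can be sketched. As a minimal illustration (not Geomerics' code), a Jacobi iteration of the radiosity equation B_i = E_i + rho_i * sum_j F_ij B_j over n patches, given precomputed form factors:

```python
def radiosity_bounce(emission, reflectance, form_factors, iterations=8):
    """Iteratively solve the classic radiosity system
    B_i = E_i + rho_i * sum_j F_ij * B_j  (Jacobi iteration)."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b
```

Separating the (cheap, dynamic) direct term E from the (iterated, slowly varying) indirect sum is what makes the direct/indirect split mentioned above natural.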

Jon Starck (The Foundry): Stereo Post-Production: Current Problems, Future Challenges

Stereo film production has had a huge impact since the success of James Cameron’s Avatar. Stereo 3D films generate considerable revenue, with predictions of 15 per cent of all films produced in 3D by 2015. Stereo presents a series of challenges, from the construction and calibration of stereo capture rigs, through the subsequent workflow, to the post-production tools and technology requirements for working with stereo footage. The Foundry produce the Academy Award® winning compositor Nuke and the Ocula tool-set, which is designed specifically for stereo post-production. This talk will cover some of the basic problems in stereo film production, the tools that are available to “fix it in post” and some of the future challenges that still need to be addressed.
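
Much of this tooling rests on one piece of geometry: for a rectified rig, scene depth follows from image disparity. As a hedged illustration (the parameter names are ours, not Ocula's API), the pinhole stereo relation Z = f·B/d:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d for a rectified camera pair:
    focal length in pixels, interaxial baseline in metres, disparity in
    pixels; returns depth in metres."""
    if disparity_px == 0:
        return float("inf")  # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px
```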

Simon Green (NVIDIA): From Tablets to Supercomputers – The New World of the GPU

The graphics processing unit (GPU) has evolved from a fixed-function graphics accelerator aimed mainly at PC games into a highly capable parallel processor that can improve the performance of a wide variety of applications, on platforms spanning the range from mobile phones to games consoles and supercomputers. This talk will give a brief overview of the evolution of the GPU, illustrated with demonstrations, followed by discussion of the latest GPU features such as stereo, compute and hardware tessellation, and the challenges and opportunities this new world offers to developers and researchers.

Yuriy Yemelyanov (Black Rock Studio): Bridging Ray and Raster Processing on GPUs

This presentation details Black Rock’s exploration of new techniques in real-time rendering as it endeavours to achieve real-time global illumination (GI) for the next generation of consoles and games. We discuss a system for ray-traced GI, carefully integrated with a traditional raster renderer using an incremental irradiance cache. The talk covers novel GPU methods for spawning secondary GI rays on only visible scene surfaces, smoothly sampling the visible 3D cache into 2D, and an incrementally ray-traced spherical harmonic basis. We target NVIDIA’s OptiX ray tracing engine and present a range of optimizations that boost the performance of our hybrid rendering method to achieve real-time frame rates. Finally, we examine the memory footprint of the 3D irradiance cache and present our investigations of GPU-based spatial hashing methods as a way of lowering memory consumption whilst maintaining constant lookup costs.
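
Spatial hashing with constant-time lookups is commonly built on a large-prime XOR scheme (in the style of Teschner et al.); this sketch is one such scheme, not necessarily Black Rock's exact method. Only occupied cache cells consume memory, while lookup cost is independent of how sparsely the 3D volume is populated.

```python
def spatial_hash(ix, iy, iz, table_size):
    """Hash integer voxel coordinates into a fixed-size table by XORing
    the coordinates multiplied by large primes; O(1) lookups regardless
    of cache sparsity."""
    return ((ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)) % table_size

# Sparse irradiance cache: a bucket per hash slot, storing (cell, value)
# pairs so colliding cells can still be told apart on lookup.
def cache_insert(table, cell, value, table_size):
    table.setdefault(spatial_hash(*cell, table_size), []).append((cell, value))

def cache_lookup(table, cell, table_size):
    for stored_cell, value in table.get(spatial_hash(*cell, table_size), []):
        if stored_cell == cell:
            return value
    return None
```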

Martin Preston (Framestore): R&D in VFX Production

The role of an R&D group in a visual effects studio is an unusual mix of planning for the future and hurried catch-up. This influences how academic research can be applied in film production, and guides how we go about developing proprietary tools. This talk explains some of the limitations film production places on us, and highlights areas where we’ve been able to side-step these restrictions.

Dan Bailey (Double Negative): Thinking in Parallel – Achieving Faster Simulations for VFX

Parallel programming has massively increased in popularity in recent years, but there remain many barriers to its adoption in the visual effects industry. This talk assesses the potential performance increases brought by the GPU for simulation use and looks at how it can be employed successfully in a highly production-driven environment. In addition, we will be looking at future trends of this new paradigm and examining the trade-offs of working in such a rapidly evolving field.

Tim Weyrich (University College London): Skin and Acrylics – Modelling and Fabricating Real-World Appearance

This talk focusses on digital descriptions of real-world appearance. Using human skin as an example, the latest developments in acquisition, modelling and rendering of complex real-world materials will be demonstrated. We will see how modelling light transport in a material from first principles makes it possible to effectively acquire and model spatial and temporal variations in skin appearance, leading to life-like appearance of digitally animated human faces. To further underline the power of first principles, I will show examples of how the process of appearance acquisition can be inverted, leading to physical artefacts with custom-defined reflectance properties.
