Multi-frequency Shading and VR.

Our peripheral vision is low resolution but high temporal frequency; our focus vision is high resolution but updates at a lower temporal frequency. This is one of the reasons 60 Hz isn't good enough for VR: at the edges, most people can still consciously see the flicker.

Additionally, VR is of course stereoscopic, so it requires two views of everything plus low-latency response. Whilst you can just run everything at 90 Hz or even 144 Hz, that is expensive in both performance and power.

Multi-frequency shading addresses this by sharing, where possible, some calculations over time and space (viewports). Of course, for this to work, we need to break the rendering down into parts that are the same (or close enough) to be run as shared.

Perhaps the oldest split is diffuse versus specular. Diffuse lighting depends only on the light positions and the surface being lit, so camera changes can be ignored; lightmaps have exploited this for a long time. For VR this means diffuse lighting can be shared between the eyes. Except for shadowing and GI, it also only needs updating at the rate objects move, which may be infrequent.
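In equation form (my notation, nothing beyond the standard decomposition): the outgoing radiance at a surface point $x$ towards the eye along direction $\omega_o$ splits into a view-independent and a view-dependent term,

$$
L_o(x, \omega_o) = \underbrace{L_d(x)}_{\text{lights and surface only}} + \underbrace{L_s(x, \omega_o)}_{\text{also view dependent}}
$$

Only $L_s$ needs evaluating per eye per displayed frame; $L_d$ can be computed once and reused at whatever rate the lights and objects actually change.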

So let's start with that split and decouple diffuse lighting from the rest of the equation. The first problem is how we store these diffuse light values so they can be quickly picked up when required. A classic approach is to hash the spatial position into a key for a fixed-size cache. The diffuse update then runs asynchronously at, for example, 30 Hz, while the cache is looked up at full display frequency (say 120 Hz), with specular and other high-frequency effects added on top. A sketch of such a cache follows.
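Here's a minimal CPU-side sketch of that cache, assuming a direct-mapped fixed-size table and an FNV-1a hash over quantised world positions. Everything here (the names, the 5 cm cell size, the single-probe collision policy) is my illustration, not a description of any shipping system:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Fixed-size diffuse cache keyed by a spatial hash. The asynchronous
// diffuse pass writes entries at ~30 Hz via Store(); the display-rate
// pass (~120 Hz) reads them back via Lookup() and adds specular on top.
struct DiffuseEntry {
    uint64_t key;    // quantised-position hash, 0 = empty slot
    float rgb[3];    // cached diffuse radiance
    uint32_t frame;  // diffuse-pass frame that last wrote this entry
};

class DiffuseCache {
public:
    explicit DiffuseCache(size_t slots) : table_(slots) {}

    // Quantise the world position to a grid cell, then hash the cell
    // coordinates with FNV-1a. The 5 cm cell size is an assumed knob.
    static uint64_t Key(float x, float y, float z) {
        const float cell = 0.05f;
        const int64_t c[3] = { (int64_t)std::floor(x / cell),
                               (int64_t)std::floor(y / cell),
                               (int64_t)std::floor(z / cell) };
        uint64_t h = 1469598103934665603ull;
        for (int64_t v : c)
            for (int i = 0; i < 8; ++i) {
                h ^= ((uint64_t)v >> (i * 8)) & 0xff;
                h *= 1099511628211ull;
            }
        return h ? h : 1;  // reserve 0 to mean "empty"
    }

    // Called by the 30 Hz diffuse pass; direct-mapped, so a collision
    // simply overwrites the older entry.
    void Store(uint64_t key, const float rgb[3], uint32_t frame) {
        DiffuseEntry& e = table_[key % table_.size()];
        e.key = key;
        for (int i = 0; i < 3; ++i) e.rgb[i] = rgb[i];
        e.frame = frame;
    }

    // Called at display rate (say 120 Hz); returns false on a miss.
    bool Lookup(uint64_t key, float rgb[3]) const {
        const DiffuseEntry& e = table_[key % table_.size()];
        if (e.key != key) return false;
        for (int i = 0; i < 3; ++i) rgb[i] = e.rgb[i];
        return true;
    }

    void EvictStale(uint32_t nowFrame);  // see the sketch further down

private:
    std::vector<DiffuseEntry> table_;
};
```

At display rate the renderer calls Lookup with the hashed shading position; on a miss it can fall back to evaluating diffuse in-line and queue that position for the next asynchronous pass. In a real engine this table would live in GPU memory and be probed from the shader, but the shape of the idea is the same.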

This is in essence what lightmaps do, but with a 0 Hz update rate and the 3D spatial lookup replaced by a surface-based 2D lookup.

The question then becomes history management: how do you look up a diffuse value in space (hash storage?), and do you need some form of cache replacement policy to ensure that stale diffuse values get kicked out? One possible policy is sketched below.
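One possible answer, continuing the sketch above and again just an assumption on my part: since the table is direct-mapped, hash collisions already overwrite older entries, and a periodic sweep can drop anything the diffuse pass hasn't refreshed within a staleness budget, so stale values miss and get recomputed rather than being served.

```cpp
// Age-based replacement sweep, run once per diffuse-pass frame. The
// budget of 4 diffuse frames (~133 ms at 30 Hz) is an assumed tuning
// knob, not a magic number.
void DiffuseCache::EvictStale(uint32_t nowFrame) {
    const uint32_t kMaxAgeFrames = 4;
    for (DiffuseEntry& e : table_) {
        if (e.key != 0 && nowFrame - e.frame > kMaxAgeFrames)
            e.key = 0;  // mark the slot empty; the next Lookup misses
    }
}
```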

Again, the key to high-quality and fast VR seems to me to lie in changing how we define a frame. Decouple the display rate from the rest of the renderer, allowing the brain and other effects to paper over the differences. With any error only visible for a tiny amount of time, it's likely you won't notice that it's actually wrong sometimes.

However, whether this is true depends on the wet-ware in our heads, so the only way to know is to try it and see if anyone pukes or gets a splitting headache.

Deano out
