SIGGRAPH 2015: NVIDIA has a better VR solution
SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) has been around since the 1970s, and while it may not have as much recognition as other industry events like GDC, it's still an interesting conference for those specializing in computer graphics, animation and design.
With VR being the industry buzzword these days, it's not surprising that the tech is the focus of events like this one. The concept has come a long way since the invention of the first stereoscope in 1838, but the basic theory still applies to today's advanced headsets: two images of the same scene, viewed from slightly different angles, are shown to a user through a device attached to their head.
The problem with presenting just two images is that this isn't how our eyes view the world. Current VR technology lacks what researchers call 'focus cues'. Our eyes change focus (accommodate) depending on how far away an object is, so whatever we're looking at appears sharp. In VR, however, everything is drawn on a screen at a single fixed distance, so when our eyes try to refocus on a 'nearby' object, we just end up looking at a blurry image; the headset doesn't account for focus cues and doesn't adjust the object in any way when our eyes focus on it. An object in front of you in VR looks the same as an object far away, just smaller.
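The mismatch described above can be sketched with a simple thin-lens model: blur grows with the dioptric difference between where the eye is focused and where the headset's fixed virtual screen sits. All the numbers below (2 m screen distance, 4 mm pupil) are illustrative assumptions, not specs of any real device.

```python
# Minimal sketch of accommodation mismatch in a fixed-focus headset.
# Distances in meters; output is a relative blur measure, not a
# physically calibrated value.

def blur_diameter(focus_dist, screen_dist, aperture=0.004):
    """Approximate blur-circle size when the eye focuses at focus_dist
    but the image actually sits at the fixed screen_dist.
    Proportional to pupil aperture times the mismatch in diopters."""
    mismatch = abs(1.0 / focus_dist - 1.0 / screen_dist)  # diopters
    return aperture * mismatch

# Assumed headset: everything rendered at a fixed 2 m virtual screen.
screen = 2.0
for target in (0.5, 2.0, 10.0):
    print(f"eye focused at {target} m -> blur {blur_diameter(target, screen):.4f}")
```

The sketch shows why only objects at the screen's own distance (here 2 m) stay sharp: focusing nearer or farther raises the dioptric mismatch, and the rendered image cannot respond.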
The solution NVIDIA's researchers came up with is to sandwich two displays together, creating a light-field effect that projects 25 slightly different views per eye, for a total of 50 images across both eyes. Each image shows the same scene from a slightly offset position. This technique imitates natural viewing and comes complete with changes in focus as your eyes move around the scene.
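The "25 views per eye" idea can be illustrated by generating a grid of slightly offset camera positions across each pupil. The 5×5 layout, pupil size and interpupillary distance below are assumptions for the sketch; the article only states the view counts.

```python
# Sketch: 25 slightly offset viewpoints per eye (5x5 grid across an
# assumed 4 mm pupil), 50 views in total. Grid layout and dimensions
# are illustrative assumptions, not NVIDIA's actual parameters.

def view_offsets(center_x, pupil_diameter=0.004, grid=5):
    """Return (x, y) camera positions in meters covering a grid x grid
    pattern across a pupil centered at (center_x, 0)."""
    step = pupil_diameter / (grid - 1)
    half = pupil_diameter / 2.0
    return [(center_x - half + i * step, -half + j * step)
            for j in range(grid) for i in range(grid)]

IPD = 0.064  # assumed interpupillary distance
left_views = view_offsets(-IPD / 2)
right_views = view_offsets(+IPD / 2)
print(len(left_views), "views per eye,", len(left_views) + len(right_views), "total")
```

Rendering the scene once from each of these positions and combining the results is what lets the eye pick out a sharp image wherever it focuses, rather than being locked to one fixed focal plane.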
While it's great that NVIDIA's researchers have found a way to make VR viewing better, we might not see the fruits of their labors for years. The Oculus Rift and HTC RE Vive both use single displays for each eye, not two displays sandwiched together as the technique requires. It's highly unlikely that Valve or Oculus VR would delay their launches to add extra hardware. Perhaps in a hardware refresh in a couple of years, but definitely not in their first-generation devices.
The only headset with a chance of shipping the sandwiched displays is Razer's OSVR. That headset is still early enough in development that a hardware revision is definitely possible before a final release, though whether the company will actually do it is another question entirely.