Before this experiment started, we were all a little skeptical that a static image could give the user an interesting experience. But we decided to try anyway, to see what it would feel like to visit some famous locations.
The first method we tried was to simply replace our skybox with an appropriate destination - our first experience being the Roman Colosseum. It is hard to explain the look of joy on our faces - one half soaking in the feeling of being there, the other half amazed at how well it actually works. There is something really simple, and special, here.
The second thought that popped into our heads was to scrape Google Street View. The API is simple enough - give Google a location (latitude / longitude) along with a direction and field of view, and it returns a 640x640 image. After a few manual REST calls, we had our six cube-mapped images of Akihabara, Japan ready to test. There is something very interesting and fun about the exploration aspect of popping up around the world and looking around. You are looking into the (near) past, but every picture has an unspoken story and it's neat to let your imagination wander.
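Those manual REST calls are easy to script. Below is a minimal sketch of building the six cube-face requests against the Street View Static API; the `size`, `location`, `heading`, `pitch`, `fov`, and `key` parameters are the documented ones, while the API key and the Akihabara coordinates are placeholders for illustration.

```python
# Sketch: build one Street View Static API request URL per cube-map face.
# With fov=90, the six views tile a cube exactly.

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

# (heading, pitch) in degrees for each cube face.
CUBE_FACES = {
    "front": (0, 0),
    "right": (90, 0),
    "back":  (180, 0),
    "left":  (270, 0),
    "up":    (0, 90),
    "down":  (0, -90),
}

def face_urls(lat, lng, api_key, size=640):
    """Return a dict of face name -> request URL for a given location."""
    urls = {}
    for name, (heading, pitch) in CUBE_FACES.items():
        urls[name] = (
            f"{STREETVIEW_URL}?size={size}x{size}"
            f"&location={lat},{lng}"
            f"&heading={heading}&pitch={pitch}&fov=90"
            f"&key={api_key}"
        )
    return urls

# Akihabara, Tokyo (approximate coordinates, for illustration).
urls = face_urls(35.6986, 139.7731, "YOUR_API_KEY")
```

Each URL can then be fetched with any HTTP client and the six images loaded as cube-map faces.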
Spherical panoramas yielded similar results, albeit with a more efficient and uniform pixel distribution than cube maps (whose corners have higher pixel densities) - which could help with quality when mipmapping / streaming panoramas. With our field of view at 90 degrees and a resolution of 1280x800 (640x800 per eye), a spherical panorama size of 2560x1280 is about as good as we'll get for DK1.
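The 2560x1280 figure falls out of a quick back-of-the-envelope calculation: matching the display's angular resolution over a full 360 degrees, then using the standard 2:1 equirectangular aspect ratio.

```python
# Sketch: derive the panorama resolution that matches DK1's angular
# resolution (numbers restated from the text above).

PER_EYE_WIDTH = 640   # DK1 is 1280x800, split into 640x800 per eye
HORIZONTAL_FOV = 90   # degrees, our rendering field of view

# 90 degrees of view covers 640 pixels, so a full 360 degrees needs 4x that.
pano_width = int(360 / HORIZONTAL_FOV * PER_EYE_WIDTH)

# Equirectangular panoramas span 360x180 degrees, hence the 2:1 aspect.
pano_height = pano_width // 2

print(pano_width, pano_height)  # 2560 1280
```

Anything larger would be wasted on the DK1 panel; anything smaller would be magnified past one texel per pixel.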
Of course, the panoramas we used were all from a single viewpoint, and not stereoscopic. Objects that are sufficiently far off in the distance look fine when the same image is rendered to both eyes, but you lose depth (and scale) at closer range. If you had depth information (e.g. from a Kinect or a Tango-like sensor) you could try a re-projection as an approximation for close objects, but these devices are no good outdoors. A lot of videos out there are made by taping cameras together in pairs (for example, 3 pairs of 2 per hemisphere) and stitching each eye separately. But that's not exactly correct either when looking up and down. Using computer vision to extract 3D information from 2D imagery may be another way of doing this.
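A rough estimate shows why distant objects survive mono rendering: the angular disparity between the two eyes for a point at distance d is about 2·atan((ipd/2)/d), which shrinks quickly with distance. The interpupillary distance below is a typical adult average, an assumption not taken from the text.

```python
import math

IPD = 0.064  # meters; typical adult interpupillary distance (assumption)

def disparity_deg(distance_m):
    """Approximate angular disparity (degrees) between eyes for a point
    straight ahead at the given distance."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for d in (0.5, 1, 2, 10, 50):
    print(f"{d:>5} m -> {disparity_deg(d):.2f} deg")
```

At tens of meters the disparity drops well below a degree, so a shared mono image is barely distinguishable from true stereo; at arm's length it is several degrees, which is where the flatness of a mono panorama becomes obvious.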
There is something to be said for being grounded in a single location, and not running around and getting dizzy. Showing this to friends and family, there was a sense of magic - they were instant converts. There are many interface and navigation problems left to solve in this space.
I will leave you with the logical follow-up to static panoramas. Video panoramas, and stereoscopic video panoramas, are another transformative experience that needs to be seen to be believed. There is not much data out there, but there are already a few low-end 360 video stitching camera Kickstarters that have shaken things up. It will be interesting to see where this space ends up.
Roman Colosseum - image by "Humus", http://www.humus.name/index.php?page=Textures
Sistine Chapel - image by The Vatican, http://www.vatican.va/various/cappelle/sistina_vr/index.html
BBC Newsroom - video by the BBC, http://www.bbc.co.uk/blogs/internet/posts/Virtual-Reality-the-future-of-TV