The concept: create an interactive 3D scene by augmenting video that was not shot from a fixed point. The idea came from old arcade on-rails shooters. I wanted to know if you could shoot a video of an environment for an on-rails shooter and then add interactive 3D models at run-time in such a way that everything seamlessly fits together. I’ve seen this attempted in the past, but the camera always had to stop moving when the scene became interactive. At the other end is augmented reality, which generally works with real-time video from a fixed point.
With the help of Calin Reimer (modeling & art) and Kert Gartner (video and camera tracking), we managed to build a working tech demo this past weekend at PegJam.
Now onto the cool part: how we achieved this.
First we found a suitable area to shoot the video (interesting, but not too cluttered). We then shot a short video, moving the camera around the environment.
This video was then brought into camera-tracking software, where the scene was tracked. This gave us the position and path of the camera over the whole video. The camera track was exported as an FBX file, and the video was converted to an image sequence.
A rough 3D model of the scene, including all objects in the environment, was created.
The image sequence, camera track and 3D scene model were then all brought into Unity.
Within Unity, a static orthographic camera was set up to view a plane scaled to the video’s dimensions. The current time was used to determine the current video frame, and that frame’s image was applied to the plane. This creates a video player that, while a little slow, gives us the frame-level control the project needs. As a note, we did attempt to use Unity’s built-in MovieTexture, but it was just not robust enough for this project.
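Our player code isn’t reproduced here, but a minimal sketch of that kind of image-sequence player could look like the following. The class and folder names, the 30 fps rate and loading the frames from a Resources folder are all assumptions for illustration, not the demo’s actual code.

```csharp
using UnityEngine;

// Sketch of an image-sequence "video player": pick a frame from the current
// time and apply it to the plane in front of the orthographic camera.
public class ImageSequencePlayer : MonoBehaviour
{
    public Renderer videoPlane;          // plane scaled to the video's dimensions
    public float framesPerSecond = 30f;  // assumed frame rate of the shot

    public int CurrentFrame { get; private set; }

    Texture2D[] frames;

    void Start()
    {
        // Assumes the frames live in Assets/Resources/VideoFrames/ and are
        // named with zero-padded numbers so alphabetical order is frame order.
        frames = Resources.LoadAll<Texture2D>("VideoFrames");
        System.Array.Sort(frames, (a, b) => string.CompareOrdinal(a.name, b.name));
    }

    void Update()
    {
        // Map elapsed time to a frame index and show that frame on the plane.
        CurrentFrame = Mathf.Min(Mathf.FloorToInt(Time.time * framesPerSecond),
                                 frames.Length - 1);
        videoPlane.material.mainTexture = frames[CurrentFrame];
    }
}
```

Loading every frame as a texture up front is memory-hungry, which fits the “a little slow” caveat above, but it gives exact control over which frame is on screen.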
Next, a new camera was added as a child of the main camera-tracking node. This lets that camera follow the exact same path the real camera did when we shot the scene. The camera’s field of view was also modified to closely match that of the actual camera.
The camera-tracking animation could not just be played like a normal animation; that would move the camera to positions between video frames, causing alignment issues. Instead, the camera’s animation was manually sampled based on the current frame of the video.
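A minimal sketch of that frame-synced sampling, assuming the image-sequence player above and an AnimationClip imported from the tracked FBX (names and rate are illustrative, not the demo’s actual code):

```csharp
using UnityEngine;

// Steps the tracked camera rig in lockstep with the video instead of letting
// the animation run on its own clock.
public class TrackedCameraStepper : MonoBehaviour
{
    public ImageSequencePlayer video;   // exposes the frame currently on screen
    public AnimationClip cameraTrack;   // camera path imported from the FBX
    public float framesPerSecond = 30f; // must match the video's frame rate

    void LateUpdate()
    {
        // Sample the track at exactly the time of the displayed video frame,
        // so the virtual camera is never left between two frames.
        float t = video.CurrentFrame / framesPerSecond;
        cameraTrack.SampleAnimation(gameObject, t);
    }
}
```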
The 3D model of the scene was positioned to line up with the camera track. Box colliders were added, and a custom shader that occludes any objects behind the model was applied. Lights were placed in the scene to roughly match the actual lighting in the environment.
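The demo’s occlusion shader isn’t reproduced here, but a common way to get this effect in Unity is a “depth mask”: the proxy geometry writes only to the depth buffer, so the video stays visible while 3D objects behind the proxy are hidden. A minimal sketch (the shader name and queue offset are assumptions):

```shaderlab
Shader "Hidden/VideoOccluder"
{
    SubShader
    {
        // Draw before regular geometry so the depth buffer is filled first.
        Tags { "Queue" = "Geometry-10" }

        ZWrite On    // write depth so objects behind the proxy fail the depth test
        ColorMask 0  // but write no colour, leaving the video image untouched

        Pass { }
    }
}
```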
Lastly, we added the remaining items like the gun, laser, laser-mark decals and enemies.
We render the camera that displays the video first, followed by the 3D scene camera and, lastly, the camera that renders the gun.
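In Unity this ordering can be expressed with camera depths and clear flags; the sketch below assumes that approach, and the demo’s actual layer setup may differ.

```csharp
using UnityEngine;

// Sketch of layering the three cameras: lower depth renders first, and only
// the first camera clears colour so later cameras draw on top of the video.
public class CameraOrderSetup : MonoBehaviour
{
    public Camera videoCamera;   // orthographic camera looking at the video plane
    public Camera sceneCamera;   // tracked camera rendering the 3D scene
    public Camera gunCamera;     // renders the first-person gun on top

    void Start()
    {
        videoCamera.depth = 0f;
        sceneCamera.depth = 1f;
        gunCamera.depth   = 2f;

        videoCamera.clearFlags = CameraClearFlags.SolidColor;
        sceneCamera.clearFlags = CameraClearFlags.Depth;
        gunCamera.clearFlags   = CameraClearFlags.Depth;
    }
}
```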
When it all plays together, it gives you the illusion that everything actually exists in that scene.
I’m very excited about what we were able to accomplish in one weekend. If you are interested, you can download the project source below. A quick word of warning: as we built everything in one weekend, this project is a bit crazy.
To get the source (unitypackage), click here.
If you missed it above, click here to play the tech demo.