2020
Manipulation of Remote Virtual View

Daniel Smith
Bachelor of Science Capstone Project, June 2020


[Link to Capstone Presentation]

We often observe remote spaces through camera footage, for example in video conferencing and distance learning. Typically we see the remote space from a fixed viewpoint; that is, we are limited to seeing the world from the position of the imaging device, e.g., a webcam. This can be frustrating when we want to observe something outside the visible region. Ideally, we would be able to move the camera anywhere in the space. Rather than viewing footage from a physically moving camera, we should be able to achieve the same effect by compositing footage from a collection of stationary cameras.

My capstone project was an initial investigation of this problem. Picture a conference room or a lecture hall with cameras mounted in its corners. The goal was to implement a system that, based on the video feeds from these cameras, synthesizes footage from a viewpoint that can be located anywhere in the room. The idea is that the user can move this virtual viewpoint to positions in between the mounted physical cameras. Constructing the synthetic view requires knowing the distance between the objects and the cameras, and a system could acquire this information in several ways. We chose depth cameras, which report distance per pixel, because we believed depth-sensing hardware would be more accurate and more convenient than estimating depth in software. I implemented a system that lets the user move a synthetic view between two depth-sensing cameras positioned less than a meter apart, and I believe the approach can be applied to cameras placed farther apart. Future work could add more cameras to the system and replace the depth-sensing devices with common RGB cameras by approximating depth information in software.
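To make the approach concrete, below is a minimal sketch (in Python with NumPy; not the project's actual implementation) of the standard depth-image-based rendering idea: back-project each depth pixel into a 3D point, transform the points into a virtual camera whose position is interpolated between the two physical cameras, and re-project them. The intrinsic matrix K, the 4x4 camera poses, and the simple linear interpolation of camera position are illustrative assumptions; a real system would also need calibrated cameras and a way to fill holes in the warped result.

```python
import numpy as np

def back_project(depth, K):
    """Convert a depth image (meters per pixel) into 3D points in the
    camera's own coordinate frame using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                                  # skip pixels with no reading
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)[valid]

def project(points, K, image_size):
    """Project 3D points (already in the virtual camera's frame) into a
    depth image, keeping the nearest point per pixel (a simple z-buffer)."""
    h, w = image_size
    p = points[points[:, 2] > 0]                   # keep points in front of the camera
    u = np.round(K[0, 0] * p[:, 0] / p[:, 2] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * p[:, 1] / p[:, 2] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.full((h, w), np.inf)
    np.minimum.at(out, (v[inside], u[inside]), p[inside, 2])
    out[np.isinf(out)] = 0                         # zero marks holes with no data
    return out

def synthesize_view(depth_a, depth_b, K, pose_a, pose_b, t):
    """Warp two depth images into a virtual camera whose position is linearly
    interpolated (t in [0, 1]) between cameras A and B. pose_a and pose_b are
    4x4 camera-to-world transforms; the virtual camera keeps A's orientation."""
    virtual_pose = pose_a.copy()
    virtual_pose[:3, 3] = (1 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
    world_to_virtual = np.linalg.inv(virtual_pose)

    merged = []
    for depth, pose in ((depth_a, pose_a), (depth_b, pose_b)):
        pts = back_project(depth, K)                        # camera frame
        pts_h = np.c_[pts, np.ones(len(pts))]               # homogeneous coordinates
        pts_world = (pose @ pts_h.T).T                      # into world frame
        pts_virtual = (world_to_virtual @ pts_world.T).T    # into virtual frame
        merged.append(pts_virtual[:, :3])
    return project(np.vstack(merged), K, depth_a.shape)
```

The same warping carries over to color by attaching each pixel's color to its 3D point; blending the two warped images (for example, weighted by t) and filling the remaining holes is where most of the engineering effort lies.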

Under the supervision of Dr. Kelvin Sung, Division of Computing & Software Systems at UW Bothell.