A Generic Approach for Designing Multi-Sensor 3D Video Capture Systems
The increasing availability of 3D devices on the market is raising interest in 3D technologies. A great deal of research is under way to advance current 3D technology and integrate it into market products such as video games, displays, cameras, and transmission systems. However, these products use different input 3D video formats. For instance, different types of 3D displays require different inputs, such as stereo video, video plus depth map, or multi-view video. To provide input for these displays, adequate 3D video capture systems must be developed. A 3D video capture system generally consists of multiple cameras, together with tools for interfacing the cameras, synchronizing the captured video, and compensating for the physical misalignments introduced when the hardware is assembled. A drawback of current data-capture approaches is that each is suitable only for a specific display type or a specific application, so a capture system must be re-designed for each of them individually; there is no single solution that can provide 3D streams for the multitude of 3D technologies. The main objective of this thesis is to develop a generic solution that can be used with different camera array topologies. A system is developed that interfaces multiple cameras remotely and captures video from them synchronously, with real-time data acquisition. Several computers interface the cameras, and a physical network is set up to provide communication lines between them. Client-server software is implemented to control the cameras remotely. The number of cameras is scalable thanks to the flexibility of the system's software and hardware, and the approach can handle the integration of different video camera models. Moreover, the system supports the integration of depth capture devices, which deliver depth information about the scene in real time.
Calibration and rectification of the proposed multi-sensor camera setup are also supported.
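The client-server capture scheme summarized above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the thesis implementation: the names (`CaptureServer`, `camera_client`, `run_demo`) are hypothetical, TCP sockets on localhost stand in for whatever network interconnect the real system uses, and "capturing a frame" is simulated by a string. The point it demonstrates is the synchronization idea: a single server-side trigger is broadcast so that every camera client grabs its frame against the same timestamp.

```python
import socket
import threading
import time

class CaptureServer:
    """Hypothetical capture server: accepts camera clients and broadcasts
    a trigger timestamp so all cameras capture against one clock value."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))  # port 0: let the OS pick a free port
        self.sock.listen()
        self.port = self.sock.getsockname()[1]
        self.clients = []

    def accept(self, n):
        # Block until n camera clients have connected.
        for _ in range(n):
            conn, _ = self.sock.accept()
            self.clients.append(conn)

    def trigger(self):
        # Broadcast one capture trigger (a timestamp line) to every client.
        stamp = f"{time.time():.6f}\n".encode()
        for conn in self.clients:
            conn.sendall(stamp)

    def close(self):
        for conn in self.clients:
            conn.close()
        self.sock.close()

def camera_client(port, results, idx):
    # Each client blocks until the trigger arrives, then "captures" a frame
    # (simulated here by a string tagged with the shared timestamp).
    with socket.create_connection(("127.0.0.1", port)) as s:
        stamp = s.makefile().readline().strip()
        results[idx] = (f"frame_from_camera_{idx}", stamp)

def run_demo(n_cameras=3):
    server = CaptureServer()
    results = {}
    threads = [
        threading.Thread(target=camera_client, args=(server.port, results, i))
        for i in range(n_cameras)
    ]
    for t in threads:
        t.start()
    server.accept(n_cameras)
    server.trigger()
    for t in threads:
        t.join()
    server.close()
    return results
```

In a real deployment each client would run on its own computer and drive an actual camera API; the design choice shown here, a central trigger broadcast over the network, is one simple way to keep multi-camera capture synchronized without dedicated hardware triggering.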