
Stereo cameras


The stereo cameras approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision.[1]

Calculation


In this approach, two cameras with a known physical relationship (i.e. a shared field of view and a known separation between their focal points) are correlated via software. By finding corresponding pixel values in the two images and calculating how far apart these matches lie in pixel space (the disparity), a rough depth map can be created. This is very similar to how the human brain uses stereoscopic information from the eyes to gain depth cues, i.e. how far away any given object in the scene is from the viewer.
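A minimal sketch of the disparity-to-depth calculation for a rectified stereo pair follows; the focal length, baseline and disparity values are illustrative assumptions rather than figures from the article.

    # Depth from disparity for a rectified stereo pair: Z = f * B / d,
    # where f is the focal length in pixels, B is the baseline (camera
    # separation) and d is the disparity in pixels.
    def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
        if disparity_px <= 0:
            return float("inf")  # zero disparity corresponds to a point at infinity
        return focal_length_px * baseline_m / disparity_px

    # Illustrative values: cameras 0.12 m apart, 700 px focal length,
    # and a matched point whose disparity is 14 px.
    print(disparity_to_depth(14, 700.0, 0.12))  # ~6.0 m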

The camera attributes, such as focal length and the distance between the cameras, must be known and a calibration performed. Once this is completed, the system can be used to sense the distances of objects by triangulation. Finding the same physical point in the left and right images is known as the correspondence problem. Correctly locating the point allows the computer to calculate the distance from the robot or camera to the object. On the BH2 Lunar Rover the cameras use five steps: a Bayer array filter, a photometric-consistency dense matching algorithm, a Laplacian of Gaussian (LoG) edge detection algorithm, a stereo matching algorithm and finally a uniqueness constraint.[2]
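As a rough sketch of how such a pipeline can be assembled in software, the following uses OpenCV's block matcher to solve the correspondence problem on an already rectified image pair and then triangulates a per-pixel depth. The file names, focal length and baseline are placeholder assumptions, and this is not the BH2 rover's actual implementation.

    import cv2
    import numpy as np

    # Rectified left/right views (placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching searches along epipolar lines for corresponding
    # pixels, i.e. it solves the correspondence problem.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

    focal_length_px = 700.0  # assumed, obtained from calibration
    baseline_m = 0.12        # assumed distance between the two cameras

    # Triangulation: depth Z = f * B / d wherever a valid disparity was found.
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]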

Uses

Artist's concept of a Mars Exploration Rover, an example of an unmanned land-based vehicle. Notice the stereo cameras mounted on top of the rover.

This type of stereoscopic image processing technique is used in applications such as 3D reconstruction,[3] robotic control and sensing, crowd dynamics monitoring and planetary rovers, as well as in mobile robot navigation, tracking, gesture recognition, targeting, 3D surface visualization, and immersive and interactive gaming.[4] Although the Xbox Kinect sensor is also able to create a depth map of an image, it uses an infrared camera for this purpose and does not use the dual-camera technique.

Other approaches to depth sensing include time-of-flight sensors and ultrasound.


References

  1. ^ "STMicroelectronics and eYs3D Microelectronics to showcase collaboration on high-quality 3D stereo-vision camera for machine vision and robotics at CES 2023". Yahoo Finance. Retrieved 2023-04-17.
  2. ^ Ming Xie (2010). Ming Xie; et al. (eds.). Intelligent robotics and applications : second international conference, ICIRA 2009, Singapore, December 16–18, 2009 : proceedings (online ed.). Berlin: Springer. ISBN 978-3-642-10816-7. Retrieved 13 March 2011.
  3. ^ Geiger, Andreas, Julius Ziegler, and Christoph Stiller. "Stereoscan: Dense 3d reconstruction in real-time." Intelligent Vehicles Symposium (IV), 2011 IEEE. Ieee, 2011.
  4. ^ "Technology Benefits". Focus Robotics. Retrieved 13 March 2011.