
What is Disparity in Depth Estimation?


Disparity, in the context of depth estimation (most commonly in stereo vision), is the difference in the image position of the same point between two images taken from slightly different horizontal viewpoints. It is the key measurement that allows us to infer the depth of objects within a scene.

Understanding Disparity

Imagine two cameras positioned a short distance apart, mimicking human eyes. Because of the horizontal offset between them, each camera captures a slightly different image of the scene. A point close to the cameras appears to shift more between the two images than a faraway point. This apparent shift is the disparity.

How Disparity Relates to Depth

The relationship between disparity and depth is inverse:

  • Large Disparity = Closer Object: Objects that are close to the cameras will appear to shift more between the two images, resulting in a larger disparity.

  • Small Disparity = Farther Object: Objects that are far away will appear to shift less, resulting in a smaller disparity.

In mathematical terms, depth (Z) can be calculated using the following formula:

Z = (f * B) / d

Where:

  • Z is the depth of the point (its distance from the cameras).
  • f is the focal length of the cameras, expressed in pixels.
  • B is the baseline, or the distance between the two camera centers.
  • d is the disparity, also in pixels. Since f and d are both in pixels, Z comes out in the same units as B.

As you can see from the formula, depth is inversely proportional to disparity. A smaller disparity results in a larger depth value, indicating that the object is farther away.
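
As a concrete illustration, here is a minimal Python sketch of this conversion. The focal length, baseline, and disparity values below are arbitrary example numbers, not measurements from a real camera rig.

```python
# Convert disparity to depth using Z = (f * B) / d.
# f and d must be in the same units (pixels); Z comes out in the units of B.

def disparity_to_depth(focal_length_px, baseline_m, disparity_px):
    """Return depth in metres for a single disparity measurement."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive to recover a finite depth.")
    return (focal_length_px * baseline_m) / disparity_px

# Hypothetical calibration: 700-pixel focal length, 12 cm baseline.
print(disparity_to_depth(700, 0.12, 42))  # 2.0 m  (larger disparity -> closer)
print(disparity_to_depth(700, 0.12, 7))   # 12.0 m (smaller disparity -> farther)
```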

Importance in Stereo Vision

Disparity is the fundamental measurement used by stereo vision algorithms to create depth maps. These algorithms attempt to find corresponding points in the left and right images and calculate the disparity between them. This disparity information is then used to estimate the depth of each point in the scene.
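
As a sketch of how this looks in practice, the snippet below uses OpenCV's block-matching stereo matcher to compute a dense disparity map from a rectified left/right pair and then converts it to depth. The file names are placeholders, and the matcher parameters and calibration values are illustrative rather than tuned.

```python
import cv2
import numpy as np

# Load a rectified stereo pair as grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: search up to 64 disparities with a 15x15 matching window.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# compute() returns fixed-point disparities scaled by 16; divide to get pixels.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Convert disparity (pixels) to depth with Z = (f * B) / d, using
# illustrative calibration values: focal length in pixels, baseline in metres.
f_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = (f_px * baseline_m) / disparity[valid]
```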

Challenges in Disparity Estimation

Estimating disparity accurately can be challenging due to:

  • Textureless regions: It's difficult to find corresponding points in areas with little or no texture.
  • Occlusions: Objects that are visible in one image may be hidden in the other, making it impossible to find corresponding points.
  • Repetitive patterns: Repeated patterns can lead to ambiguity in finding the correct match.
  • Image noise: Noise in the images can interfere with the disparity estimation process.

Applications of Disparity-Based Depth Estimation

Disparity-based depth estimation is used in a wide range of applications, including:

  • Robotics: For navigation, object recognition, and manipulation.
  • Autonomous driving: For obstacle detection, lane keeping, and adaptive cruise control.
  • 3D reconstruction: For creating 3D models of objects and environments.
  • Medical imaging: For visualizing and analyzing 3D structures in the human body.

In summary, disparity provides crucial information about the depth of objects in a scene by quantifying the apparent shift in their position between two images taken from different viewpoints. The accuracy of disparity estimation is essential for many applications that rely on depth perception.
