What is Stereo Vision?

Stereo vision is the perceptual ability by which observing a scene with two eyes allows the distances and shapes of objects to be distinguished.

Stereo vision is an important subject in the field of computer vision; its purpose is to reconstruct the three-dimensional geometric information of a scene. Research on stereo vision has significant application value, with applications including autonomous navigation systems for mobile robots, aerial and remote-sensing measurement, and industrial automation systems.
Name: Stereo vision
Category: Ophthalmology, computer vision

1. Stereo vision in ophthalmology

Normal stereo vision

If the subject's stereoscopic function is normal, the target pattern can be found quickly and correctly, which determines the stereoacuity in seconds of arc; normal stereoacuity is 100 seconds of arc. The advantage of this test is that no special glasses are needed, and it can quickly detect whether the subject has stereo vision.

Clinical significance of stereo vision

Abnormal results: nystagmus (eye tremor), crossed eyes, strabismus, tilting the head or squinting to look at objects, lack of three-dimensional perception, and poor hand-eye coordination. People who should be examined: those lacking stereo vision (stereo blindness).

Stereo vision considerations

Unsuitable groups: no special restrictions. Precautions before the examination: if such abnormalities are found, do not delay treatment. Requirements during the examination: pay attention to the orientation of the test targets.

Stereo vision inspection process

Commonly used methods: (1) Synoptophore examination: this can test binocular visual functions, including simultaneous perception, fusion, and stereo vision. A stereoscopic slide is required to test stereo vision; the examination is performed according to the instrument's instructions, and the result is then judged. (2) Stereo vision tester: it consists of three test boards of different thicknesses, each printed with four random-dot patterns, one of which appears raised in the middle (and recessed when viewed from the other side).
Related diseases
Paraneoplastic opsoclonus-myoclonus, infantile esotropia, microtropia (small-angle strabismus), reverse strabismus, acute comitant strabismus, cyclic esotropia, fixed strabismus, primary comitant esotropia, primary non-accommodative esotropia, intermittent exotropia

Stereo vision related symptoms

Strabismus, amblyopia

2. Stereo vision in computer vision

Stereo vision research methods

In general, there are three types of methods for the study of stereo vision:
(1) a method that directly obtains range data with a rangefinder (such as a laser rangefinder) to establish a three-dimensional description;
(2) a method that infers three-dimensional shape using only the information provided by a single image;
(3) a method that reconstructs three-dimensional structure using the information provided by two or more images taken from different viewpoints, perhaps at different times.
The first type of method, the range-data method, reconstructs surface information from a known depth map using numerical approximation, builds a model-based description of the objects in the scene, and thereby achieves image understanding. This is an active stereo vision approach: the depth map is obtained with range finders and other active sensing techniques, such as structured light and laser range finding. This type of method is suitable for tightly controlled domains, such as industrial automation applications.
The second type of method derives the outlines and surfaces of objects from the gray-level variations in the scene, based on the perspective principles of optical imaging and statistical assumptions, inferring shape from shading. The interpretation of line drawings is a typical problem of this kind; it once attracted wide attention and became a focus of computer vision research, producing various line-labeling methods. The results of this approach are qualitative, and quantitative information such as position cannot be determined. Because of the limited information a single image provides, this method faces difficulties that cannot be overcome.
The third type of method recovers three-dimensional information from multiple images and is passive. According to how the images are acquired, it can be divided into two categories: ordinary stereo vision and what is commonly called the optical flow method. Ordinary stereo vision studies two images taken simultaneously by two cameras, while the optical flow method studies two or more images taken sequentially by a single camera moving along an arbitrary trajectory. The former can be regarded as a special case of the latter; they have the same geometric configuration, and their research methods have much in common. Binocular stereo vision is a special case of this third type of method.

Stereo vision components

The study of stereo vision consists of the following parts:
(1) image acquisition,
There are various ways of acquiring the images used in stereo vision research, varying widely in timing, viewpoint, and direction, and the choice is directly affected by the field of application. Research on stereo vision has concentrated mainly on three application areas: the interpretation of aerial photographs for automatic mapping, guidance and obstacle avoidance for autonomous vehicles, and functional simulation of human stereo vision. Different application areas involve different kinds of scenery. According to differences in scene features, scenery can be divided into two categories: scenery with cultural features, such as buildings and roads, and scenery with natural features and surfaces, such as mountains, water, plains, and trees. Different types of scenery call for different image-processing methods, each with its own particularities.
In summary, the main factors related to image acquisition can be summarized as follows:
(a) the scene domain,
(b) timing,
(c) time of day (lighting and presence of shadows),
(d) imaging modality (including spectral coverage),
(e) resolution,
(f) field of view,
(g) relative camera positioning.
The complexity of the scene is affected by the following factors:
(a) occlusion,
(b) man-made objects (straight edges, flat surfaces),
(c) smoothly textured areas,
(d) areas containing repetitive structure.
(2) camera modeling,
A camera model is a representation of the important geometric and physical features of a group of stereo cameras. As a computational model, it is used to calculate the position of a point in space from the disparity of its corresponding image points. Besides providing the mapping between corresponding points in the images and the actual scene space, the camera model can also be used to constrain the search space when looking for corresponding points, thereby reducing the complexity of the matching algorithm and lowering the mismatch rate.
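To make this mapping concrete, the following is a minimal Python sketch of a pinhole camera model for an ideal rectified stereo pair. The focal length, principal point, and baseline used here are arbitrary illustrative values, not parameters of any particular system.

import numpy as np

# Illustrative intrinsic parameters (assumed for this example):
# f = focal length in pixels, (cx, cy) = principal point, B = baseline in metres.
f, cx, cy, B = 700.0, 320.0, 240.0, 0.12

K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])        # intrinsic matrix shared by both cameras

def project(point_xyz, tx=0.0):
    # Project a 3-D point (in left-camera coordinates) into a camera whose
    # optical centre is shifted by tx along the x axis
    # (tx = 0 for the left camera, tx = -B for the right camera).
    x, y, z = point_xyz
    p = K @ np.array([x + tx, y, z])
    return p[:2] / p[2]                 # pixel coordinates (u, v)

P = np.array([0.3, 0.1, 2.5])           # a point 2.5 m in front of the rig
uL = project(P)                          # projection in the left image
uR = project(P, tx=-B)                   # projection in the right image
disparity = uL[0] - uR[0]                # equals f * B / Z for this geometry
print(disparity, f * B / disparity)      # recovers the depth Z = 2.5 m

Under this rectified geometry, corresponding points lie on the same image row and their disparity equals f·B/Z; this is exactly the kind of constraint the camera model contributes when restricting the search space for matching.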
(3) feature acquisition,
It is difficult to find reliable matches in featureless areas of nearly uniform gray level. Therefore, most work in computer vision includes some form of feature extraction, and the specific form of feature extraction is closely tied to the matching strategy. In stereo vision research, feature extraction is the process of extracting matching primitives.
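As one concrete example of a matching primitive, the sketch below computes a Harris-style corner response with NumPy and SciPy and keeps the strongest responses as candidate features. The smoothing scale, the constant k, and the number of features retained are arbitrary illustrative choices; many other primitives (edges, zero-crossings, region descriptors) are used in practice.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.5, k=0.04):
    # Harris corner measure: one possible point-like matching primitive.
    # sigma (smoothing window) and k are conventional but arbitrary choices.
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                 # image gradients
    Sxx = gaussian_filter(Ix * Ix, sigma)     # smoothed structure-tensor terms
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2               # large at corner-like points

def pick_features(img, n=200):
    # Return pixel coordinates (row, col) of the n strongest corner responses.
    r = harris_response(img)
    idx = np.argsort(r.ravel())[-n:]
    return np.column_stack(np.unravel_index(idx, r.shape))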
(4) image matching,
Image matching is the core of a stereo vision system; establishing the correspondence between images and computing the disparity is extremely important.
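A minimal sketch of this matching step, assuming a rectified image pair so that corresponding points lie on the same row: for each pixel in the left image a window is slid along the same row of the right image, and the disparity with the smallest sum of absolute differences (SAD) is kept. The window size and disparity range are assumptions, and the brute-force loops are written for clarity rather than speed.

import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    # Naive block matching on a rectified pair.  For each left-image pixel,
    # compare a win x win window against windows shifted by d pixels to the
    # left in the right image, and keep the d with the lowest SAD cost.
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)     # integer disparity estimate
    return disp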
(5) depth determination,
The key to stereo vision is image matching. Once exact corresponding points are established, computing the distance is a relatively simple matter of triangulation. However, the depth computation still meets significant difficulties, especially when the corresponding points are imprecise or unreliable to some degree. Roughly speaking, the error of the distance computation is proportional to the matching deviation and inversely proportional to the baseline length of the camera group. Increasing the baseline length can reduce the error, but it also enlarges the disparity range and the differences between the features to be matched, which complicates the matching problem. To resolve this contradiction, various matching strategies have emerged, such as coarse-to-fine strategies and relaxation methods.
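This proportionality can be made explicit with the rectified-stereo relation Z = f·B/d: a first-order expansion gives a depth error of roughly Z²/(f·B) times the matching error, so the error grows with the matching deviation and shrinks as the baseline grows. The numbers in the sketch below are illustrative assumptions.

# Sensitivity of triangulated depth to matching error, using Z = f * B / d.
# All numbers are illustrative assumptions.
f = 700.0        # focal length in pixels
B = 0.12         # baseline in metres
Z = 2.5          # true depth in metres
d = f * B / Z    # ideal disparity in pixels

for err_px in (0.25, 0.5, 1.0):                 # matching error in pixels
    depth_err = Z ** 2 / (f * B) * err_px       # first-order error estimate
    print(f"{err_px:.2f} px match error -> ~{100 * depth_err:.1f} cm depth error")

# Doubling the baseline B halves the depth error for the same matching error,
# at the cost of a larger disparity range and a harder matching problem.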
In many cases the matching accuracy is one pixel. In fact, however, both area-correlation methods and feature-matching methods can achieve better accuracy: the area-correlation method can reach half-pixel accuracy by interpolating the correlation surface, and although some feature extraction methods can locate features to better than one-pixel accuracy, this depends directly on the type of operator used, and there is no universally applicable method.
Another way to improve accuracy, while keeping a pixel-accurate algorithm, is to match multiple images: a more accurate estimate can be obtained by statistically averaging multiple sets of matches, with each set's contribution to the final depth estimate weighted according to the reliability or accuracy of its matching result.
In short, there are three ways to improve the accuracy of depth calculations, each involving some additional computation:
(a) subpixel estimation (sketched below),
(b) an increased stereo baseline,
(c) statistical averaging over several views (also sketched below).
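The sketch below illustrates options (a) and (c) under simple assumptions: an integer disparity is refined to subpixel precision by fitting a parabola through the matching cost at the best disparity and its two neighbours, and several per-view depth estimates are combined as a reliability-weighted average. The cost curve, weights, and reliability measure are placeholders for whatever the matching stage provides.

import numpy as np

def subpixel_disparity(costs, d_best):
    # Option (a): parabola fit through the cost at d_best and its neighbours.
    # `costs` is the 1-D matching-cost curve for one pixel; d_best is the
    # integer disparity with minimum cost (not at either end of the curve).
    c0, c1, c2 = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom == 0:
        return float(d_best)            # flat cost curve: keep the integer value
    return d_best + 0.5 * (c0 - c2) / denom

def fuse_depths(estimates, weights):
    # Option (c): weighted average of depth estimates from several views,
    # weighting each estimate by its assumed reliability.
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float((estimates * weights).sum() / weights.sum())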
(6) interpolation.
In the field of stereo vision, a dense depth map is generally required, yet algorithms based on feature matching yield only a sparse, unevenly distributed depth map. In this sense an area-based matching algorithm is better suited to obtaining dense depth maps, but its matches in areas carrying little information (uniform gray level) are often unreliable. Therefore both types of methods depend on some meaningful interpolation process. The most direct way to interpolate a sparse depth map into a dense one is to treat the sparse depth map as samples of a continuous depth map and approximate that continuous map with a general interpolation method (such as spline approximation). This may be appropriate when the sparse depth map is sufficient to reflect the important variations in depth, for example when processing aerial stereo photographs of undulating terrain. However, it is not applicable in many other application areas, especially for scenes containing occluding boundaries.
Grimson pointed out that the degree to which matching features are absent reflects a limit on how much the surface to be interpolated can vary, and on this basis proposed an interpolation procedure [2]. From another perspective, following the shape-from-shading technique for a single image, the matched features can be used to establish boundary conditions for a smooth interpolated surface, which helps ensure the effectiveness of the interpolation; combining these methods can make the interpolation process achieve the desired goal. Another approach to interpolation is to establish a mapping between an existing geometric model and the sparse depth map, which is a model-matching process. In general, for model matching the sparse depth map should first be clustered into several subsets, each corresponding to a particular structure; the best-matching model is then found for each class, providing parameters and an interpolation function for that particular structure (object). For example, Gennery used this method to find elliptical structures in stereo image pairs, and Moravec used it for ground detection for autonomous vehicles.
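For the simple route of treating the sparse depth map as samples of a continuous surface, the sketch below uses SciPy's griddata to interpolate feature-based depths onto a dense grid. As noted above, such smooth interpolation blurs across occluding boundaries; the choice of cubic interpolation with a nearest-neighbour fill outside the convex hull is an assumption made only for this example.

import numpy as np
from scipy.interpolate import griddata

def densify(sparse_points, sparse_depths, shape):
    # sparse_points: (N, 2) array of (row, col) pixel coordinates with known depth
    # sparse_depths: (N,) array of depth values at those pixels
    # shape: (height, width) of the desired dense depth map
    h, w = shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    dense = griddata(sparse_points, sparse_depths, (grid_y, grid_x),
                     method="cubic")           # smooth interpolation of the samples
    nearest = griddata(sparse_points, sparse_depths, (grid_y, grid_x),
                       method="nearest")       # fallback outside the convex hull
    return np.where(np.isnan(dense), nearest, dense)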
