What Is Feature Integration Theory?

Feature integration theory is a theory in cognitive psychology. It was proposed by Treisman and Gelade in 1980 to remedy a shortcoming of the variable-focus model of attention, namely that it does not explain the processing mechanism of focused visual attention [1]. It is also a theory of attention that involves automatic processing. Treisman and Gelade distinguish between objects and features: a feature is a specific value on a particular dimension, and an object is a combination of features. They hold that features are analyzed by functionally independent perceptual subsystems and that this analysis proceeds in parallel, whereas identifying an object requires focused attention and a series of processing steps; focused attention acts like "glue", binding separate features into a single object. [2]

Feature integration theory mainly addresses problems of early visual processing, so it can be viewed as a theory of perception or pattern recognition. It was put forward by Treisman, Sykes, and Gelade in 1980. They regard a feature as a specific value on a particular dimension and an object as a combination of features. For example, shape and color are dimensions, "triangle" and "red" are values on those two dimensions, and a red triangle is an object composed of the value "triangle" on the shape dimension and the value "red" on the color dimension. [3]
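As a concrete illustration of this terminology, the short Python sketch below represents a feature as a value on a dimension and an object as a combination of such values, using the red-triangle example above. The class and variable names are hypothetical and are not taken from Treisman's work.

```python
# A minimal sketch of the feature/dimension/object terminology described above.
# The names (Feature, VisualObject, red_triangle) are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    dimension: str   # e.g. "shape" or "color"
    value: str       # a specific value on that dimension, e.g. "triangle" or "red"

@dataclass(frozen=True)
class VisualObject:
    features: frozenset  # an object is a combination of feature values

# "Triangle" is a value on the shape dimension and "red" a value on the color
# dimension; a red triangle is the object that combines the two.
red_triangle = VisualObject(frozenset({
    Feature("shape", "triangle"),
    Feature("color", "red"),
}))
```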
According to Treisman, visual processing proceeds in two stages, and feature integration occurs in the later stage as a non-automatic, serial process. The two stages (sketched in code after their descriptions below) are:
Feature registration stage (equivalent to the pre-attentive stage; no focused attention is required): At this point people hardly need to be aware of the processing, or even to notice it; pre-attentive processing helps people carry out an orienting search of the surrounding environment. The visual system extracts features from the pattern of light stimulation in a parallel, automatic process. Treisman hypothesized that in early vision only independent features can be detected, including color, size, orientation, contrast, tilt, curvature, and the endpoints of line segments; movement and differences in distance may also be included. These features are in a free-floating state (not bound to the objects they belong to, and their positions are subjectively uncertain). The perceptual system encodes the features of each dimension independently, and the psychological representation of these individual features is called a feature map. Note that relationships between features cannot be detected at the pre-attentive stage.
Feature integration stage (object perception stage): The perceptual system links correctly separated features (feature representations) together to form a representation of an object. This stage requires locating the features, that is, determining where their boundaries lie, in what is called a map of locations. Processing the location information of features requires focused attention, which acts like glue, integrating the primitive, separate features into a single object. Feature integration occurs in the later stage of visual processing and is a non-automatic, serial process. Because it demands effort, when attention is overloaded or people are distracted, especially when the demands on attention are high, the features of stimuli may be combined improperly, producing illusory conjunctions.
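To make the two stages concrete, here is a minimal Python sketch under strong simplifying assumptions: the scene is just a list of items, stage 1 records each dimension in its own feature map without recording which features belong together, and stage 2 binds the features found at one attended location into a single object representation. The function and variable names are invented for illustration; this is not an implementation of Treisman's model.

```python
# Illustrative sketch of the two stages, with a scene reduced to (location, color, shape) items.

scene = [
    {"location": (0, 0), "color": "red",   "shape": "triangle"},
    {"location": (1, 0), "color": "green", "shape": "circle"},
]

def feature_registration(scene):
    """Stage 1 (pre-attentive): each dimension is encoded into its own feature map.
    The maps record which feature values occur where, but no map says which color
    goes with which shape -- the features remain 'free-floating'."""
    color_map = {item["location"]: item["color"] for item in scene}
    shape_map = {item["location"]: item["shape"] for item in scene}
    return {"color": color_map, "shape": shape_map}

def feature_integration(feature_maps, attended_location):
    """Stage 2 (focused attention): attention selects one location and 'glues' the
    feature values found there into a single bound object representation."""
    return {dimension: feature_map[attended_location]
            for dimension, feature_map in feature_maps.items()}

maps = feature_registration(scene)
print(feature_integration(maps, attended_location=(0, 0)))
# {'color': 'red', 'shape': 'triangle'}
```

Note that the maps alone contain both "red" and "circle" but do not say whether they come from the same item; combining them without attention to a location could yield a red circle that is not actually present, which corresponds to the illusory conjunctions mentioned above.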
1. Visual search operations [4]
The model assumes:
Early vision encodes some simple but useful information about the scene into a number of feature modules. These modules may preserve the spatial relations of the visible world, but they cannot themselves directly pass spatial information on to later stages of processing. [3]
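This assumption underlies the classic visual-search predictions associated with the theory: a target defined by a single feature can be detected from the parallel feature maps regardless of display size, whereas a target defined by a conjunction of features must be found by attending to items one at a time. The toy Python simulation below illustrates that contrast; the display construction, timing constants, and function names are invented for illustration rather than taken from the original experiments.

```python
# Toy simulation of the visual-search contrast predicted by feature integration theory:
# single-feature targets "pop out" (search time roughly independent of display size),
# while conjunction targets require serial attention (search time grows with display size).
# All timing constants are arbitrary illustrative values.

import random

def feature_display(n_items):
    """Target (red triangle) among distractors that differ from it by a single
    feature (all green triangles), so the target is the only red item."""
    return [("green", "triangle")] * (n_items - 1) + [("red", "triangle")]

def conjunction_display(n_items):
    """Target (red triangle) among green triangles and red circles: it shares a
    feature with every distractor, so no single feature map gives it away."""
    distractors = [("green", "triangle"), ("red", "circle")]
    items = [random.choice(distractors) for _ in range(n_items - 1)]
    items.append(("red", "triangle"))
    random.shuffle(items)
    return items

def feature_search_time(items):
    """Pre-attentive, parallel: the target is detected from the color map alone,
    so the simulated search time does not depend on how many items are shown."""
    return 50.0

def conjunction_search_time(items):
    """Serial, attention-demanding: items are attended one at a time, their features
    bound, and each bound object compared with the target description."""
    time_per_item = 25.0
    for checked, item in enumerate(items, start=1):
        if item == ("red", "triangle"):
            return checked * time_per_item
    return len(items) * time_per_item  # target absent: every item was checked

for n in (4, 8, 16):
    print(n,
          feature_search_time(feature_display(n)),
          conjunction_search_time(conjunction_display(n)))
```

In this simplified picture, the flat times for feature search and the roughly linearly increasing times for conjunction search mirror the reaction-time patterns the theory was introduced to explain.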
Treisman not only stresses the role of bottom-up processing in perception but also recognizes the interaction between object files and the recognition network. In this sense, the feature integration model of attention is primarily a bottom-up model that allows local interactions with top-down processing. [4]
