What Is Data Segmentation?
Features are the properties or characteristics of the phenomenon being studied that can be observed and measured. Feature segmentation means dividing the features of the research object into separate parts, or dividing a single feature into several parts. In artificial intelligence, feature segmentation is applied to many kinds of data, mainly to support better feature extraction and feature recognition.
- Feature segmentation is part of feature engineering. By segmenting features, you can better understand and identify their relevant characteristics, enabling more accurate object or pattern recognition, for example by converting images into the spatial frequency domain.
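One simple, concrete form of dividing a feature into parts is binning a continuous feature into discrete segments. The sketch below is a hypothetical illustration (the feature name "age" and the cut points are invented for the example, not taken from the text):

```python
# Hypothetical example: segmenting one continuous feature ("age") into
# discrete parts (bins), a simple form of feature segmentation.
def segment_feature(values, boundaries):
    """Map each value to the index of the bin it falls into.

    boundaries: sorted cut points; values below boundaries[0] get bin 0,
    values at or above boundaries[-1] get bin len(boundaries).
    """
    bins = []
    for v in values:
        idx = 0
        while idx < len(boundaries) and v >= boundaries[idx]:
            idx += 1
        bins.append(idx)
    return bins

ages = [4, 17, 25, 42, 68]
# Segment into child / young adult / adult / senior
print(segment_feature(ages, [18, 30, 60]))  # → [0, 0, 1, 2, 3]
```

The learned bin index can then replace or accompany the raw value as a categorical feature.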
- Spatial frequency (i.e., wave number) can be used as the independent variable to describe the characteristics of an image. The spatial variation of an image's pixel values can be decomposed into a linear superposition of simple oscillation functions with different amplitudes, spatial frequencies, and phases. The composition and distribution of these spatial frequency components is called the spatial spectrum. Decomposing, processing, and analyzing an image in terms of its spatial frequency characteristics is called spatial-frequency-domain (or wave-number-domain) processing. Just as the time domain and the frequency domain can be converted into each other, the spatial domain and the spatial frequency domain can also be converted into each other.
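The decomposition described above is exactly what the 2-D discrete Fourier transform computes. A minimal sketch with NumPy (the tiny synthetic "image" is invented for illustration):

```python
import numpy as np

# Decompose an image into its spatial-frequency components via the 2-D
# discrete Fourier transform, then reconstruct it with the inverse DFT.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0            # synthetic "image": a bright square

spectrum = np.fft.fft2(image)    # spatial domain -> spatial-frequency domain
amplitude = np.abs(spectrum)     # amplitudes of the frequency components
restored = np.fft.ifft2(spectrum).real  # frequency domain -> spatial domain

# The round trip is numerically lossless, illustrating that the spatial
# and spatial-frequency domains are interchangeable representations.
print(np.allclose(image, restored))  # → True
```

Filtering in the frequency domain (e.g. zeroing high-frequency components of `spectrum` before the inverse transform) is one common way to process images in this representation.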
- In 1962, the biologists Hubel and Wiesel, studying the visual cortex of the cat brain, found a series of complex structures there. Certain cells are sensitive to local regions of the visual input space; these regions are called "receptive fields". Together, the receptive fields cover the entire visual field, and each acts locally on the input space, which makes them well suited to exploiting the strong local spatial correlations present in natural images. The cells are divided into two types: simple cells and complex cells. According to the Hubel-Wiesel hierarchical model, the neural network in the visual cortex has a layered structure: LGB (lateral geniculate body) → simple cells → complex cells → lower-order hypercomplex cells → higher-order hypercomplex cells. The network between lower-order and higher-order hypercomplex cells is similar in structure to the network between simple cells and complex cells. In this hierarchy, cells at higher stages tend to respond selectively to more complex features of the stimulus pattern; at the same time, they have larger receptive fields and are less sensitive to shifts in the position of the stimulus pattern [1].
- Feature engineering is the process of using domain knowledge to transform raw data into features that make machine learning algorithms work better.
- In machine learning, feature learning (or representation learning) is a set of techniques for learning features: transforming raw data into a form that machine learning can use effectively. It avoids the effort of manual feature extraction by letting the computer learn how to extract features while learning to use them: learning how to learn. Machine learning tasks such as classification usually require input that is easy to handle mathematically or computationally, and feature learning arose to meet this need. Real-world data such as images, video, and sensor measurements, however, are complex, redundant, and highly variable, so extracting and representing features effectively matters a great deal. Traditional manual feature extraction is labor-intensive, depends on very specialized knowledge, and does not generalize easily; feature learning techniques therefore aim to be effective, automated, and easy to generalize. Like machine learning itself, feature learning can be divided into supervised and unsupervised categories. Supervised feature learning learns features from labeled data; examples include neural networks, multilayer perceptrons, and (supervised) dictionary learning. Unsupervised feature learning learns features from unlabeled data; examples include (unsupervised) dictionary learning, independent component analysis, autoencoders, matrix factorization, and various forms of cluster analysis and their variants.
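To make unsupervised feature learning concrete, here is a minimal sketch using principal component analysis (computed from the SVD), one of the simplest linear representation-learning methods. The synthetic dataset is invented for the example: 5-dimensional points that actually lie near a 2-dimensional subspace.

```python
import numpy as np

# Unsupervised feature learning sketch: learn a low-dimensional
# representation of unlabeled data with PCA via the SVD.
rng = np.random.default_rng(0)

# Synthetic unlabeled data: 100 samples living on a 2-D subspace
# of a 5-D space, plus a little noise.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 5))

Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T                # learned 2-D features per sample

# The first two components explain nearly all of the variance,
# i.e. the learned features capture the data's structure.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(features.shape, round(float(explained), 3))
```

The learned `features` could then feed a downstream task such as classification, illustrating how representation learning separates "learning features" from "using features".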