What Is a Controlling Factor?

Controlling factors are factors that exert a controlling effect on a system. In the formation of lithologic reservoirs, the controlling factors fall into two groups: those that control the sand bodies and reservoirs, and those that control the enrichment zone. The factors controlling the sand bodies and reservoirs are the maximum lake flooding surface, regional unconformity surfaces, and fault surfaces. The factors controlling the enrichment zone include favorable sedimentary facies zones, fault development zones, stratum pinch-out zones, secondary pore development zones, fluid property change zones, structural slope break zones, and so on.

Overview of Controlling Factors

Rule mining is an important part of data mining. The traditional rule mining method based on rough set theory first finds an attribute reduction of the decision information system and then extracts rules from it. The core idea of granular computing is to granulate the problem to be solved, analyze and solve it in multiple granularity spaces, and then synthesize those partial solutions into a solution of the original problem. This mirrors the way people analyze and solve problems from multiple perspectives, and it has attracted the attention of researchers.
In this paper, attribute reduction and attribute value reduction are combined into a single process, and rules are mined in units of knowledge granules. First, the decision information system is layered and granulated, and the granular relationship matrices are calculated in knowledge spaces of different granularities. Heuristic information derived from these matrices determines the order in which the attribute values of the information granules are reduced; redundant attributes are removed on this basis, and termination conditions are set, so that decision rules can be mined rapidly. Theoretical analysis and tests on UCI data sets show that the algorithm obtains all of the simplest rules.
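To make these terms concrete, the following minimal Python sketch builds a hypothetical toy decision information system (the table, attribute names, and values are invented for illustration, not data from the original work) and performs the layered granulation described above: at granularity k, the objects are grouped into information granules by every subset of k condition attributes.

```python
from itertools import combinations

# Hypothetical toy decision information system (illustrative; not data from the original work).
# Columns 0-2 are condition attributes a, b, c; column 3 is the decision attribute d.
table = [
    (0, 1, 0, "yes"),
    (0, 1, 1, "yes"),
    (1, 0, 0, "no"),
    (1, 1, 0, "no"),
    (0, 0, 1, "yes"),
]
condition_columns = [0, 1, 2]

def granulate(table, attr_columns):
    """Partition object indices into information granules (equivalence classes)
    induced by the values of the chosen attribute columns."""
    granules = {}
    for obj, row in enumerate(table):
        granules.setdefault(tuple(row[i] for i in attr_columns), []).append(obj)
    return granules

# Layered granulation: knowledge spaces at granularity k = 1, 2, 3.
for k in range(1, len(condition_columns) + 1):
    for subset in combinations(condition_columns, k):
        print("k =", k, "attributes", subset, "->", granulate(table, subset))
```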

Algorithm for Mining Decision Rules Based on Granular Computing for Control Factors

The traditional method of mining rules from a decision information system is to first find an attribute reduction and then extract the rules row by row. This process contains a large amount of redundant computation, the final result depends on the quality of the attribute reduction, and the algorithm complexity grows accordingly. A granularity analysis of attribute reduction shows that the knowledge partition space obtained by attribute reduction of a decision information system is only an approximate partition space, and is not necessarily the coarsest granule in the entire knowledge space. This article therefore considers mining rules in knowledge spaces at different levels of granularity. For convenience in describing the algorithm, the symbol definitions are given first.
3.1 Symbol definition
Without loss of generality, assume that the decision information system has m condition attributes and one decision attribute. Let k denote the number of condition attributes contained in a condition attribute subset; k represents the granularity of the system, 1 ≤ k ≤ m, and every condition attribute subset under granularity k contains exactly k condition attributes. Let MC denote the conditional granular matrix corresponding to a condition attribute subset, MD the decision granular matrix corresponding to the decision attribute, and MR the granular relationship matrix formed from MC and MD.
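The exact matrix constructions are not reproduced above, so the sketch below assumes a common convention: a granular matrix has one Boolean row per granule and one column per object, and the granular relationship matrix is taken here as the product of MC with the transpose of MD, so that its (i, j) entry counts the objects of condition granule i that fall into decision class j. The toy table is again hypothetical.

```python
import numpy as np

# Hypothetical toy decision table: columns 0-2 are condition attributes, column 3 the decision.
table = [
    (0, 1, 0, "yes"),
    (0, 1, 1, "yes"),
    (1, 0, 0, "no"),
    (1, 1, 0, "no"),
    (0, 0, 1, "yes"),
]

def granular_matrix(table, attr_columns):
    """Boolean granule-membership matrix: one row per granule, one column per object."""
    granules = {}
    for obj, row in enumerate(table):
        granules.setdefault(tuple(row[i] for i in attr_columns), []).append(obj)
    M = np.zeros((len(granules), len(table)), dtype=bool)
    for r, members in enumerate(granules.values()):
        M[r, members] = True
    return M

MC = granular_matrix(table, [0])         # conditional granular matrix for attribute a
MD = granular_matrix(table, [3])         # decision granular matrix
MR = MC.astype(int) @ MD.astype(int).T   # assumed form of the granular relationship matrix
print(MR)   # entry (i, j): number of objects of condition granule i with decision class j
```

Under this convention, a condition granule whose row of MR is concentrated entirely in a single column can completely distinguish that decision class, which is the situation the q2 = 1 test in the algorithm below is meant to detect.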
3.2 Algorithm description
Algorithm: mining the simplest decision rules based on granular computing. Input: a decision information system. Output: all of the simplest decision rules.
1) Generate the decision granular matrix MD and set the granularity k = 1.
2) For each condition attribute, compute the conditional granular matrix MC and the granular relationship matrix MR, calculate the heuristic measures q1 and q2, save the corresponding data, and perform the following processing:
a) Check whether there is an information granule with q2 = 1. If there is, it follows from Property 3 that the corresponding information granule can completely distinguish a certain decision class, so it is given priority in the reduction process; this ensures that the minimum number of rules is obtained while the distinguishing ability remains unchanged. Reduce the corresponding information granules to obtain decision rules; otherwise go to b).
b) If no information granule with q2 = 1 exists, compare the values of q1. The larger the value of q1, the greater the discrimination ability of the corresponding information granule, which likewise ensures that the fewest rules are obtained while the discrimination ability remains unchanged. The reduction order of the information granules is determined by the magnitude of q1, the decision rules are obtained by reducing the information granules, and the algorithm then goes on to the next step (see the sketch after this list).
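As a rough end-to-end sketch of this procedure (a simplified greedy reading, not the authors' implementation), the Python code below mines rules granule by granule: fully consistent granules, taken here as q2 = 1, are turned into rules first and are processed in descending order of q1, taken here as the number of objects a granule covers; both definitions are assumptions for illustration. Mining stops as soon as every object is covered, which plays the role of the termination condition.

```python
from itertools import combinations

# Hypothetical toy decision information system (illustrative only).
names = ["a", "b", "c", "d"]             # attribute names; "d" is the decision attribute
table = [
    (0, 1, 0, "yes"),
    (0, 1, 1, "yes"),
    (1, 0, 0, "no"),
    (1, 1, 0, "no"),
    (0, 0, 1, "yes"),
]
cond_columns = [0, 1, 2]
dec_column = 3

def granulate(attr_columns):
    """Group object indices into information granules by the chosen attribute values."""
    granules = {}
    for obj, row in enumerate(table):
        granules.setdefault(tuple(row[i] for i in attr_columns), set()).add(obj)
    return granules

def mine_rules():
    """Greedy sketch: consistent granules first (q2 = 1), larger coverage (q1) first."""
    uncovered = set(range(len(table)))   # objects not yet covered by any rule
    rules = []
    for k in range(1, len(cond_columns) + 1):          # granularity k = 1, 2, ...
        for attrs in combinations(cond_columns, k):
            scored = []
            for values, objs in granulate(attrs).items():
                classes = {table[o][dec_column] for o in objs}
                q2 = int(len(classes) == 1)   # assumed: 1 iff the granule is consistent
                q1 = len(objs)                # assumed: coverage as discrimination score
                scored.append((q2, q1, values, objs, classes))
            # Step a): keep granules with q2 = 1; step b): order them by q1, descending.
            for q2, q1, values, objs, classes in sorted(
                    scored, key=lambda s: (s[0], s[1]), reverse=True):
                if q2 == 1 and objs & uncovered:
                    condition = {names[i]: v for i, v in zip(attrs, values)}
                    rules.append((condition, next(iter(classes))))
                    uncovered -= objs
            if not uncovered:             # termination condition: every object is covered
                return rules
    return rules

for condition, decision in mine_rules():
    print(condition, "=>", decision)
```

On the toy table this terminates already at granularity k = 1, producing two rules, one for each value of attribute a.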

Control Factor Algorithm Complexity Analysis

The algorithm mainly aims to improve the computational efficiency of existing algorithms: reducing redundant calculation, improving search efficiency, and reducing storage space. Reducing information granules according to the heuristic measures q1 and q2 while removing redundant attributes eliminates the redundant computation of the traditional approach, in which attributes are reduced first and attribute values are reduced afterwards. When searching within the same granularity space, the heuristic measures are used to select and order the different knowledge spaces, which improves search efficiency. In the worst case the algorithm needs to search 2^m candidate attribute subsets, where m is the number of condition attributes; in practice, when the data themselves are highly redundant, the search space is much smaller than 2^m, and because heuristic information is added to the algorithm and termination conditions are set, the algorithm converges faster. The matrices used in this article are Boolean sparse matrices, which reduces the storage space. [3]
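As an illustration of the storage point (a hypothetical representation, not the data structure used in the cited work), each row of a Boolean granule-by-object matrix can be packed into a single integer bit mask, so sparse Boolean matrices cost little memory and granule intersections reduce to bitwise operations:

```python
# Illustrative only: pack each row of a Boolean granule-by-object matrix into an
# integer bit mask (bit i set <=> object i belongs to the granule).
def bitmask(member_indices):
    mask = 0
    for i in member_indices:
        mask |= 1 << i
    return mask

granule_a0 = bitmask({0, 1, 4})       # e.g. condition granule for attribute a = 0
class_yes = bitmask({0, 1, 4})        # e.g. decision class "yes"

overlap = granule_a0 & class_yes      # granule intersection via a single bitwise AND
print(bin(overlap), bin(overlap).count("1"))   # shared objects and how many there are
```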
