What Is Computational Complexity Theory?

Computational complexity theory is a branch of theoretical computer science and mathematics that classifies computational problems according to their inherent difficulty and relates the resulting classes to one another. A problem is regarded as computable if it can, in principle, be solved by a computer, that is, by a mechanical sequence of mathematical steps such as an algorithm.

Computational complexity theory is a part of the theory of computation. It studies the resources that computational problems require, such as time and space, and how to use those resources as sparingly as possible.
The most commonly studied resources are time (how many steps it takes to solve a problem) and space (how much memory is required to solve it). Other resources can also be considered, such as the amount of communication (studied in communication complexity), the number of gates in a circuit (circuit complexity), or the number of processors (parallel computing).
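
As a toy illustration of these two resources (a hypothetical example, not from the original article), the two Python functions below answer the same question, whether a list contains a duplicate, but trade time for space:

```python
def has_duplicate_quadratic(items):
    """Compare every pair: O(n^2) time, O(1) extra space."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    """Remember everything seen so far: O(n) time, O(n) extra space."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

The first uses almost no extra memory but performs roughly n^2/2 comparisons; the second runs in linear time but may store the entire input. Which is "better" depends on which resource is scarce.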
Papers by Trakhtenbrot and by Rabin from the 1950s and early 1960s are considered the earliest documents in the field. It is generally recognized, however, that the foundation of computational complexity was laid by the 1965 paper On the Computational Complexity of Algorithms by Hartmanis and Stearns. In this paper the authors introduce the time complexity classes TIME(f(n)) and use the Turing machine as the model of computation to prove a hierarchy theorem: given strictly more time, strictly more problems can be solved.
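
For reference, the class can be stated in the standard modern form (a textbook formulation, not a quotation from the paper):

TIME(f(n)) = { L : some deterministic Turing machine decides the language L within O(f(n)) steps on every input of length n }

For example, TIME(n^2) contains every problem solvable in at most quadratic time.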
The original purpose of computational complexity theory is to understand the difficulty of different algorithmic problems, especially of certain important ones. To describe a problem's difficulty precisely, the theory's first abstraction is to regard polynomial time as efficient and non-polynomial time as intractable. This rests on an "intuitive" feature of the exponential function's growth rate: if an algorithm has time complexity 2^n and the input size is 100, then even on a machine executing 10^12 operations per second the program would run for roughly 4 × 10^10 years, on the order of the age of the universe. This provides intuitive evidence for treating polynomial time as efficient time.
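
A quick back-of-the-envelope check of that figure (a sketch; the 10^12 operations-per-second machine is the assumption from the paragraph above):

```python
# How long do 2^100 steps take at 10^12 operations per second?
steps = 2 ** 100
ops_per_second = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365

years = steps / (ops_per_second * seconds_per_year)
print(f"{years:.2e} years")  # ~4.02e+10 years; the universe is ~1.38e+10 years old
```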
However, when the exponent of the polynomial is large, the algorithm cannot be regarded as efficient in practice. For example, an algorithm running in time n^10 already needs about 10^30 ≈ 2^100 steps on an input of size 1000. On the other hand, even if a problem admits no polynomial-time algorithm, it may have a good approximation algorithm or a good heuristic. Heuristics typically come with no precise theoretical analysis of their behavior, or one can exhibit particularly bad inputs on which they run slowly; most of the time, however, they solve the problem quickly. The corresponding theoretical tools in computational complexity are average-case complexity theory and smoothed analysis. Practical examples include Presburger arithmetic, the Boolean satisfiability problem, and the knapsack problem (see the sketch below).
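
To make the knapsack example concrete: although the 0/1 knapsack problem is NP-hard, the classic dynamic-programming algorithm solves it exactly in O(n × W) time, where W is the capacity. This is "pseudo-polynomial" (exponential only in the bit length of W) yet fast in practice for moderate capacities. A minimal sketch:

```python
def knapsack(values, weights, capacity):
    """Maximum total value of items fitting in `capacity` (0/1 knapsack).

    Classic dynamic programming: O(len(values) * capacity) time, which is
    pseudo-polynomial, i.e. exponential only in the bit length of `capacity`.
    """
    best = [0] * (capacity + 1)  # best[w] = max value within weight budget w
    for value, weight in zip(values, weights):
        # Iterate budgets downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]


print(knapsack([60, 100, 120], [10, 20, 30], 50))  # prints 220
```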
