Discrete Complexity

Discrete complexity theory is the quantitatively refined counterpart to discrete computability theory and is usually based on the Turing machine model. Decision problems (i.e. languages) $L\subseteq\{0,1\}^*$ are generally considered efficiently solvable if a Turing machine $\mathcal{M}$ can decide them in polynomial time. This means that there exist $k,C\in\mathbb{N}$ such that, on every input $\bar x\in\{0,1\}^n$ of length $|\bar x|=n$, $\mathcal{M}$ performs no more than $C+C\cdot n^k$ steps before reporting (correctly) whether $\bar x$ belongs to $L$ or not. Note that asymptotic growth is meant here with respect to the length of the input as (only) parameter. The class of languages thus decidable in polynomial time is called $\mathcal{P}$ and has proven a reasonable notion of efficiency: first, because most practically efficient algorithms run in polynomial time whereas those considered inefficient do not; and secondly, because it is closed under composition: if the output of one polynomial-time Turing machine is fed into another, the combination still runs in polynomial time (though more slowly). Many problems for which no efficient algorithm is known ask, given one object, for the existence of another one, a so-called witness:
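
To make the closure argument concrete, here is a minimal sketch of the bound, under the (standard) assumption that a machine's output is no longer than its running time: if $\mathcal{M}_1$ runs within $C_1+C_1\cdot n^{k_1}$ steps on inputs of length $n$ and $\mathcal{M}_2$ within $C_2+C_2\cdot m^{k_2}$ steps on inputs of length $m$, then feeding $\mathcal{M}_1$'s output into $\mathcal{M}_2$ takes no more than $(C_1+C_1\cdot n^{k_1})+C_2+C_2\cdot(C_1+C_1\cdot n^{k_1})^{k_2}$ steps in total: again a polynomial in $n$, now of degree $k_1\cdot k_2$.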

  • SAT asks, given a Boolean formula, whether it admits a satisfying assignment.
  • Hamilton Circuit asks, given a graph, whether it admits a tour visiting each node precisely once.
  • Clique asks, given a graph $G$ and an integer $k$, whether there exist $k$ vertices in $G$ mutually joined by edges (a witness-checking sketch for this problem follows the list).
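
The common pattern behind these problems is that a claimed witness can be checked quickly even when finding one appears hard. The following sketch (in Python; the function and variable names are merely illustrative, not taken from the text) verifies a proposed witness for Clique in time polynomial in the size of the graph:

    from itertools import combinations

    def verify_clique(edges, candidate, k):
        """Check a proposed Clique witness: do the candidate vertices
        form a set of k vertices mutually joined by edges of the graph?
        Runs in time polynomial in the size of the graph."""
        edge_set = {frozenset(e) for e in edges}
        vertices = set(candidate)
        if len(vertices) != k:
            return False
        return all(frozenset((u, v)) in edge_set
                   for u, v in combinations(vertices, 2))

    # Example: a triangle on vertices 0, 1, 2 contains a clique of size 3.
    print(verify_clique([(0, 1), (1, 2), (0, 2)], [0, 1, 2], 3))  # True
    print(verify_clique([(0, 1), (1, 2)], [0, 1, 2], 3))          # False
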
The famous class $\mathcal{NP}$ by definition consists of all such problems where the witness is bounded in size by a polynomial in that of the input and can be verified in polynomial time; equivalently: of all problems decidable by a so-called nondeterministic polynomial-time Turing machine. The latter model of computation is a variant of the standard machine, introduced mainly for the purpose of providing an alternative perspective on the $\mathcal{P}$-versus-$\mathcal{NP}$ Millennium Problem. Structural complexity theory explores the mutual relationships of these and many further complexity classes such as PSPACE, EXP, and NL. A major tool consists in comparing two problems in terms of reductions. A language $A$ is for instance called polynomial-time reducible to $B$ if there exists a total function $f:\{0,1\}^*\to\{0,1\}^*$, computable in polynomial time, such that $\bar x\in A\;\Leftrightarrow\; f(\bar x)\in B$ holds for every $\bar x$. The famous Cook-Levin Theorem asserts that every language in $\mathcal{NP}$ is polynomial-time reducible to SAT. Put differently, SAT is a most difficult problem in $\mathcal{NP}$, and a (deterministic) polynomial-time algorithm for it would yield one for every witness problem of the above type. Moreover, Richard Karp initiated the identification of many more (at present at least 300 distinct natural) problems that are mutually reducible and thus, up to polynomial slow-down, equally difficult.
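
As a concrete illustration of such a polynomial-time reduction (a direct construction for one particular problem, not the generic Cook-Levin encoding), the following sketch maps a Clique instance to a CNF formula that is satisfiable exactly if the graph contains a clique of the required size; the variable encoding and all names are illustrative assumptions:

    from itertools import combinations

    def clique_to_sat(n, edges, k):
        """Map a Clique instance (graph on vertices 0..n-1, target size k)
        to a CNF formula, returned as a list of clauses over integer
        literals (v denotes a variable, -v its negation).
        Variable var(i, v) states: slot i of the clique is vertex v."""
        edge_set = {frozenset(e) for e in edges}

        def var(i, v):
            return i * n + v + 1

        clauses = []
        # Each of the k slots holds at least one vertex.
        for i in range(k):
            clauses.append([var(i, v) for v in range(n)])
        # No vertex occupies two different slots, so the k vertices are distinct.
        for v in range(n):
            for i, j in combinations(range(k), 2):
                clauses.append([-var(i, v), -var(j, v)])
        # Vertices placed in different slots must be joined by an edge.
        for u, v in combinations(range(n), 2):
            if frozenset((u, v)) not in edge_set:
                for i in range(k):
                    for j in range(k):
                        if i != j:
                            clauses.append([-var(i, u), -var(j, v)])
        return clauses

The transformation produces polynomially many clauses and runs in polynomial time; composing it with any (hypothetical) polynomial-time decision procedure for SAT would therefore decide Clique in polynomial time as well, which is precisely the kind of consequence the Cook-Levin Theorem guarantees for every problem in $\mathcal{NP}$.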

Selected References: