L (complexity)

In computational complexity theory, L (also known as LSPACE, LOGSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space.[1][2] Formally, the Turing machine has two tapes, one of which encodes the input and can only be read,[3] whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input[1] and a logarithmic number of Boolean flags, and many basic logspace algorithms use the memory in this way.
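
As a simple illustration of this use of memory, the language of binary strings containing as many 0s as 1s is in L: a single counter bounded by the input length suffices, and such a counter fits in O(log n) bits. The following Python sketch is only an illustration (it is not taken from the cited sources); it treats the input as read-only and keeps nothing but the counter and a read position.

    # Illustrative sketch: decide whether a binary string has equally many
    # 0s and 1s. The input is treated as read-only; the only writable state
    # is one counter and one read position, each fitting in O(log n) bits.
    def equal_zeros_ones(x: str) -> bool:
        balance = 0                  # stays in the range [-n, n]
        for i in range(len(x)):      # i plays the role of a read-only head
            balance += 1 if x[i] == '0' else -1
        return balance == 0

    print(equal_zeros_ones("0101"))  # True
    print(equal_zeros_ones("0100"))  # False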

Complete problems and logical characterization

Every non-trivial problem in L is complete under log-space reductions,[4] since a log-space reduction can itself decide membership in any language in L and then output a fixed yes- or no-instance of the target problem. Weaker reductions are therefore required to identify meaningful notions of L-completeness, the most common being first-order reductions.

A 2004 result by Omer Reingold shows that USTCON, the problem of whether there exists a path between two given vertices in an undirected graph, is in L. Since USTCON is SL-complete, this implies that L = SL.[5]
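
For concreteness, the sketch below states USTCON as a decision problem and decides it with a straightforward breadth-first search; this is only an illustration and not Reingold's algorithm, since the search uses space linear in the number of vertices, whereas Reingold's construction answers the same question with O(log n) bits of writable memory.

    # Sketch of the USTCON decision problem (not Reingold's algorithm):
    # given an undirected graph and vertices s and t, decide whether some
    # path connects s to t. This breadth-first search uses space linear in
    # the number of vertices; Reingold showed the problem is in fact
    # decidable in logarithmic space.
    from collections import deque

    def ustcon(n, edges, s, t):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)         # undirected: record both directions
        seen = [False] * n
        seen[s] = True
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        return False

    print(ustcon(4, [(0, 1), (1, 2)], 0, 2))  # True
    print(ustcon(4, [(0, 1), (2, 3)], 0, 3))  # False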

One consequence of this is a simple logical characterization of L: it contains precisely those languages expressible in first-order logic with an added commutative transitive closure operator (in graph theoretical terms, this turns every connected component into a clique). This result has application to database query languages: data complexity of a query is defined as the complexity of answering a fixed query considering the data size as the variable input. For this measure, queries against relational databases with complete information (having no notion of nulls) as expressed for instance in relational algebra are in L.
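
As a worked illustration of this characterization (using standard descriptive-complexity notation for the transitive-closure operator, not a formula taken from the cited sources), undirected s–t connectivity is expressed by applying the commutative transitive-closure operator to the symmetrized edge relation:

    \varphi(s,t) = \bigl[\mathrm{TC}_{x,y}\,\bigl(E(x,y) \vee E(y,x)\bigr)\bigr](s,t)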

Related complexity classes

L is a subclass of NL, which is the class of languages decidable in logarithmic space on a nondeterministic Turing machine. A problem in NL may be transformed into a problem of reachability in a directed graph representing states and state transitions of the nondeterministic machine, and the logarithmic space bound implies that this graph has a polynomial number of vertices and edges, from which it follows that NL is contained in the complexity class P of problems solvable in deterministic polynomial time.[6] Thus L ⊆ NL ⊆ P. The inclusion of L into P can also be proved more directly: a decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations.
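
As a worked count behind this last step (the state set Q and work-tape alphabet Γ are notational assumptions, following the standard argument rather than the source's wording), a deterministic machine that uses at most c log n work-tape cells on an input of length n has at most

    |Q| \cdot n \cdot (c \log n) \cdot |\Gamma|^{c \log n} = n^{O(1)}

distinct configurations (machine state, input-head position, work-head position, and work-tape contents); since a halting deterministic computation never repeats a configuration, it runs in polynomial time.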

L further relates to the class NC in the following way: NC^1 ⊆ L ⊆ NL ⊆ NC^2. In words, given a parallel computer C with a polynomial number O(n^k) of processors for some constant k, any problem that can be solved on C in O(log n) time is in L, and any problem in L can be solved in O(log^2 n) time on C.

Important open problems include whether L = P,[2] and whether L = NL.[7] It is not even known whether L = NP.[8]

The related class of function problems is FL, which is often used to define logspace reductions.

Additional properties

L is low for itself, because it can simulate log-space oracle queries (roughly speaking, "function calls which use log space") in log space, reusing the same space for each query.
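
The space-reuse idea can be pictured with the following toy Python sketch (the helper functions are illustrative stand-ins, not a Turing-machine simulation): the outer routine never stores the output of a subcomputation, but recomputes each needed symbol on demand, so the same small workspace serves every query.

    # Toy sketch (stand-in functions, not a Turing-machine simulation):
    # the outer routine never stores the subcomputation's output string.
    # Each symbol of that output is recomputed on demand, so the same
    # small amount of work space is reused for every query.
    def inner_symbol(x: str, i: int) -> str:
        # Stand-in "logspace subroutine": the i-th symbol of reverse(x),
        # computable with only a couple of integer counters.
        return x[len(x) - 1 - i]

    def outer(x: str) -> bool:
        # Checks whether reverse(x) == x (a palindrome test) by querying
        # inner_symbol position by position instead of building reverse(x).
        for i in range(len(x)):
            if inner_symbol(x, i) != x[i]:
                return False
        return True

    print(outer("racecar"))   # True
    print(outer("logspace"))  # False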

Other uses

The main idea of logspace is that a number of polynomial magnitude can be stored in logarithmic space and used as a pointer to a position in the input.

The logspace class is therefore useful for modeling computations in which the input is too large to fit in a computer's RAM. Long DNA sequences and large databases are good examples: only a constant number of pieces of the input are held in RAM at any given time, and pointers determine which part of the input to inspect next, so only logarithmic memory is used.
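
The sketch below is a hypothetical illustration of this access pattern (the file path in the usage comment is an assumption): it decides whether the contents of a very large file read the same forwards and backwards while keeping in memory only two integer offsets, the "pointers" into the input.

    # Hypothetical illustration: decide whether a very large file reads the
    # same forwards and backwards, keeping only two integer offsets (the
    # "pointers" into the input) in memory rather than the file contents.
    import os

    def file_is_palindrome(path: str) -> bool:
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            left, right = 0, size - 1      # two pointers into the input
            while left < right:
                f.seek(left)
                a = f.read(1)
                f.seek(right)
                b = f.read(1)
                if a != b:
                    return False
                left += 1
                right -= 1
        return True

    # Example usage (the path is hypothetical):
    # print(file_is_palindrome("/data/huge_sequence.txt"))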

Notes

  1. Sipser (1997), p. 295, Definition 8.12.
  2. Garey & Johnson (1979), p. 177.
  3. On a read/write input tape, a linear amount of memory could be obtained by packing symbols (as in the proof of the linear speedup theorem), thus evading the logspace constraint.
  4. See Garey & Johnson (1979), p. 179, Theorem 7.13 (claim 2).
  5. Reingold, Omer (2005). "Undirected ST-connectivity in log-space". STOC'05: Proceedings of the 37th Annual ACM Symposium on Theory of Computing. ACM, New York. pp. 376–385. doi:10.1145/1060590.1060647. MR 2181639. ECCC TR04-094.
  6. Sipser (1997), Corollary 8.21, p. 299.
  7. Sipser (1997), p. 297; Garey & Johnson (1979), p. 180.
  8. "Complexity theory - is it possible that L = NP".

References
