In computability theory, the μ-operator, minimization operator, or unbounded search operator searches for the least natural number with a given property. Adding the μ-operator to the primitive recursive functions makes it possible to define all computable functions.
Suppose that R(y, x1, ..., xk) is a fixed (k+1)-ary relation on the natural numbers. The μ-operator "μy", in either the unbounded or bounded form, is a "number theoretic function" defined from the natural numbers to the natural numbers. However, "μy" contains a predicate over the natural numbers, which can be thought of as a condition that evaluates to true when the predicate is satisfied and false when it is not.
The bounded μ-operator appears earlier in Kleene (1952) Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation as:

μy_{y<z} R(y) = the least y < z such that R(y), if such a y exists; otherwise, z.
Stephen Kleene notes that any of the six inequality restrictions on the range of the variable y is permitted, i.e. y < z, y ≤ z, w < y < z, w < y ≤ z, w ≤ y < z and w ≤ y ≤ z. "When the indicated range contains no y such that R(y) [is "true"], the value of the "μy" expression is the cardinal number of the range" (p. 226); this is why the default "z" appears in the definition above. As shown below, the bounded μ-operator "μy_{y<z}" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a representing function that converts {t, f} to {0, 1}.
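For illustration, the bounded search can be read directly as a loop. The following Python sketch (the name bounded_mu and the sample predicate are illustrative, not Kleene's notation) returns the least y < z satisfying R, and the bound z itself when no such y exists:

```python
def bounded_mu(R, z, *args):
    """Least y < z with R(y, *args) true; returns z itself when no such y exists."""
    for y in range(z):
        if R(y, *args):
            return y
    return z  # the default value: the cardinal number of the range

# Example with an illustrative predicate: least y < 10 with y*y >= 20
print(bounded_mu(lambda y: y * y >= 20, 10))  # -> 5
```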
In Chapter XI §57 General Recursive Functions, Kleene defines the unbounded μ-operator over the variable y in the following manner:

μy R(y) = the least y such that R(y) is true.
In this instance R itself, or its representing function, delivers 0 when it is satisfied (i.e. delivers true); the function then delivers the number y. No upper bound exists on y, hence no inequality expressions appear in its definition.
For a given R(y) the unbounded μ-operator μyR(y) (note no requirement for "(Ey)") is a partial function. Kleene instead makes it a total function by assigning it a default value (such as 0) when no such y exists (cf. p. 317).
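In programming terms the unbounded operator is simply a search loop that may never halt, which is why it defines only a partial function unless a witness is guaranteed to exist. A minimal Python sketch (names illustrative):

```python
def mu(R, *args):
    """Unbounded search: least y with R(y, *args) true.
    Partial: this loop never returns if no such y exists."""
    y = 0
    while not R(y, *args):
        y += 1
    return y

# Example with an illustrative predicate: least y with 3*y + 2 == 17
print(mu(lambda y: 3 * y + 2 == 17))  # -> 5
```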
The total version of the unbounded μ-operator is studied in higher-order reverse mathematics (Kohlenbach (2005)) in the following form:

(∃μ²)(∀f¹)((∃n⁰)(f(n) = 0) → f(μ(f)) = 0)
where the superscripts mean that n is zeroth-order, f is first-order, and μ is second-order. This axiom gives rise to the Big Five system ACA0 when combined with the usual base theory of higher-order reverse mathematics.[citation needed]
(i) In the context of the primitive recursive functions, where the search variable y of the μ-operator is bounded, e.g. y < z, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then μy_{y<z} R(y, x1, ..., xk) is itself a primitive recursive function.
(ii) In the context of the (total) recursive functions, where the search variable y is unbounded but a satisfying y is guaranteed to exist for all values x1, ..., xk of the total recursive predicate R's parameters,
then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. total functions defined by the six recursion operators).
(iii) In the context of the partial recursive functions: Suppose that the relation R holds if and only if a partial recursive function converges to zero. And suppose that, whenever μyR(y, x1, ..., xk) is defined, that partial recursive function converges (to something, not necessarily zero) for every y less than or equal to μyR(y, x1, ..., xk). Then the function μyR(y, x1, ..., xk) is also a partial recursive function.
The μ-operator is used in the characterization of the computable functions as the μ recursive functions.
In constructive mathematics, the unbounded search operator is related to Markov's principle.
The bounded μ-operator can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that are also used to define the CASE function: the product-of-terms Π and the sum-of-terms Σ (cf. Kleene #B page 224). (As needed, any bound on the variable, such as s ≤ t, t < z, or 5 < x < 17, is appropriate.)
Before we proceed we need to introduce a function ψ called "the representing function" of the predicate R. The function ψ maps the inputs (t = "truth", f = "falsity") to the outputs (0, 1) (note the order!). Here the input to ψ, i.e. {t, f}, comes from the output of R:

ψ(R) = 0 if R is "true"; ψ(R) = 1 if R is "false".
Kleene demonstrates that μy_{y<z} R(y) is defined as follows; we see the product function Π is acting like a Boolean OR operator, and the sum Σ is acting somewhat like a Boolean AND but is producing {Σ≠0, Σ=0} rather than just {1, 0}:

μy_{y<z} R(y) = Σ_{t<z} Π_{s≤t} ψ(R(s))
The equation is easier to grasp from an example, as given by Kleene. He simply made up the entries for the representing function ψ(R(y)), which he designated χ(y) rather than ψ(x, y):
| y | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7=z |
|---|---|---|---|---|---|---|---|---|
| χ(y) | 1 | 1 | 1 | 0 | 1 | 0 | 0 | |
| π(y) = Π_{s≤y} χ(s) | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| σ(y) = Σ_{t<y} π(t) | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 |
| least y < z such that R(y) is "true": φ(y) = μy_{y<z} R(y) | | | | | | | | 3 |
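The table can be reproduced mechanically from the Σ and Π construction. The following Python sketch is only an illustration, with the χ values copied from Kleene's made-up example:

```python
from math import prod

# Representing-function values from Kleene's example: 0 = "true", 1 = "false"
chi = [1, 1, 1, 0, 1, 0, 0]   # chi(y) for y = 0..6
z = 7                          # the bound

def pi(y):
    """pi(y) = product of chi(s) for s <= y: stays 1 until R first becomes true."""
    return prod(chi[s] for s in range(y + 1))

# Bounded mu as the finite sum of the running products: Sigma over y < z of pi(y)
mu_value = sum(pi(y) for y in range(z))
print(mu_value)  # -> 3, the least y < z with R(y) true
```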
The unbounded μ-operator—the function μy—is the one commonly defined in the texts. But the reader may wonder why the unbounded μ-operator is searching for a function R(x, y) to yield zero, rather than some other natural number.
The reason for zero is that the unbounded operator μy will be defined in terms of the function "product" Π with its index y allowed to "grow" as the μ-operator searches. As noted in the example above, the product Π_{i≤s} ψ(x, i) = ψ(x, 0) * ... * ψ(x, s) of a string of numbers yields zero whenever one of its members is zero, i.e. whenever any ψ(x, i) = 0 with 0 ≤ i ≤ s. Thus the Π is acting like a Boolean AND.
The function μy produces as "output" a single natural number y ∈ {0, 1, 2, 3, ...}. However, inside the operator one of two "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either {t = true, f = false}. (And, in the context of the partial recursive functions, Kleene later admits a third outcome: "μ = undecided".[1])
Kleene splits his definition of the unbounded μ-operator to handle the two situations (a) and (b). For situation (b), before the predicate R(x, y) can serve in an arithmetic capacity in the product Π, its output {t, f} must first be "operated on" by its representing function χ to yield {0, 1}. And for situation (a), if a single definition is to be used, then the number-theoretic function χ must produce zero to "satisfy" the μ-operator. With this matter settled, he demonstrates with a single "Proof III" that either type (a) or (b), together with the five primitive recursive operators, yields the (total) recursive functions, with this proviso for a total function: for every x there must exist at least one y such that χ(x, y) = 0.
Kleene also admits a third situation (c) that does not require the demonstration of "for all x a y exists such that ψ(x, y)." He uses this in his proof that more total recursive functions exist than can be enumerated; cf. the footnote Total function demonstration.
Kleene's proof is informal and uses an example similar to the first example, but first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on a function χ that yields an arbitrary natural number n, and 0 in the instance when the μ-operator's test is "satisfied".
This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form:

base step: φ(0, x) = q(x); induction step: φ(y', x) = g(y, φ(y, x), x).
To see what is going on, we first have to remind ourselves that we have assigned a parameter (a natural number) to every variable xi. Second, we do see a successor operator at work iterating y (i.e. the y'). And third, we see that the function μy_{y<z} χ(y, x) is just producing instances of χ(y, x), i.e. χ(0, x), χ(1, x), ..., until an instance yields 0. Fourth, when an instance χ(n, x) yields 0, it causes the middle term of τ, i.e. v = π(x, y'), to yield 0. Finally, when the middle term v = 0, μy_{y<z} χ(y) executes line (iii) and "exits". Equations (ii) and (iii) of Kleene's presentation have been exchanged here to make the point that line (iii) represents an exit, an exit taken only when the search successfully finds a y to satisfy χ(y) and the middle product-term π(x, y') is 0; the operator then terminates its search with τ(z', 0, y) = y.
For the example Kleene "...consider[s] any fixed values of (x1, ..., xn) and write[s] simply 'χ(y)' for 'χ(x1, ..., xn, y)'":
| y | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | etc. |
|---|---|---|---|---|---|---|---|---|---|
| χ(y) | 3 | 1 | 2 | 0 | 9 | 0 | 1 | 5 | . . . |
| π(y) = Π_{s≤y} χ(s) | 1 | 3 | 3 | 6 | 0 | 0 | 0 | 0 | . . . |
| | | | | ↑ | | | | | |
| least y < z such that R(y) is "true": φ(y) = μy_{y<z} R(y) | | | | 3 | | | | | |
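Again the construction can be checked mechanically. In the sketch below (illustrative names; the χ values are those of the example) the running product collapses to 0 exactly when the search succeeds:

```python
from itertools import count

# chi values copied from Kleene's second example; 0 marks the "satisfied" entries
table = [3, 1, 2, 0, 9, 0, 1, 5]
chi = lambda y: table[y]          # illustrative stand-in for a computable chi

def unbounded_mu(chi):
    """Least y with chi(y) == 0: the running product pi(y) = chi(0)*...*chi(y)
    stays nonzero exactly until the search succeeds (a partial function in general)."""
    product = 1
    for y in count():
        product *= chi(y)         # pi(y)
        if product == 0:
            return y              # the factor chi(y) just turned the product to 0

print(unbounded_mu(chi))  # -> 3, as in the table above
```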
Both Minsky (1967) p. 21 and Boolos-Burgess-Jeffrey (2002) p. 60-61 provide definitions of the μ-operator as an abstract machine; see footnote Alternative definitions of μ.
The following demonstration follows Minsky without the "peculiarity" mentioned in the footnote. The demonstration will use a "successor" counter machine model closely related to the Peano Axioms and the primitive recursive functions. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table:
Instruction | Mnemonic | Action on register(s) "r" | Action on Instruction Register, IR |
---|---|---|---|
CLeaR register | CLR ( r ) | 0 → r | [ IR ] + 1 → IR |
INCrement register | INC ( r ) | [ r ] + 1 → r | [ IR ] + 1 → IR |
Jump if Equal | JE (r1, r2, z) | none | IF [ r1 ] = [ r2 ] THEN z → IR ELSE [ IR ] + 1 → IR |
Halt | H | none | [ IR ] → IR |
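As a sanity check on these semantics, here is a minimal Python simulator for the four-instruction machine; representing a program as a list of tuples and the registers as a dictionary is an implementation convenience, not part of the model:

```python
def run(program, registers):
    """Run a list of (mnemonic, operands...) instructions on a dict of registers.
    ir indexes the program; the H instruction leaves IR unchanged, so we stop there."""
    ir = 0
    while True:
        op, *args = program[ir]
        if op == "CLR":                  # 0 -> r ; [IR] + 1 -> IR
            registers[args[0]] = 0
            ir += 1
        elif op == "INC":                # [r] + 1 -> r ; [IR] + 1 -> IR
            registers[args[0]] += 1
            ir += 1
        elif op == "JE":                 # IF [r1] = [r2] THEN z -> IR ELSE [IR] + 1 -> IR
            r1, r2, z = args
            ir = z if registers[r1] == registers[r2] else ir + 1
        else:                            # "H": [IR] -> IR, i.e. halt
            return registers

# Tiny demo: count register b up from 0 until it equals register a ("copy by counting")
prog = [
    ("CLR", "b"),          # 0: b := 0
    ("JE", "a", "b", 4),   # 1: if a = b then go to 4 (halt)
    ("INC", "b"),          # 2: b := b + 1
    ("JE", "z", "z", 1),   # 3: unconditional jump back to 1 (z = z always; z plays the "register 0" role)
    ("H",),                # 4: halt
]
print(run(prog, {"a": 3, "b": 0, "z": 0}))   # -> {'a': 3, 'b': 3, 'z': 0}
```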
The algorithm for the minimization operator μy[φ(x, y)] will, in essence, create a sequence of instances of the function φ(x, y) as the value of parameter y (a natural number) increases; the process will continue (see Note † below) until a match occurs between the output of function φ(x, y) and some pre-established number (usually 0). Thus the evaluation of φ(x, y) requires, at the outset, assignment of a natural number to each of its variables x and an assignment of a "match-number" (usually 0) to a register "w", and a number (usually 0) to register y.
In the following we assume that the Instruction Register (IR) encounters the μy "routine" at instruction number "n". Its first action will be to establish a number in a dedicated "w" register: an "example of" the number that the function φ(x, y) must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action, at instruction "n+1", will be to clear the "y" register; "y" will act as an "up-counter" that starts from 0. Then at instruction "n+2" the algorithm evaluates its function φ(x, y) (we assume this takes j instructions to accomplish), and at the end of its evaluation φ(x, y) deposits its output in register "φ". At the (n+j+3)rd instruction the algorithm compares the number in the "w" register (e.g. 0) to the number in the "φ" register; if they are the same the algorithm has succeeded and it escapes through exit; otherwise it increments the contents of the "y" register and loops back with this new y-value to test function φ(x, y) again.
| IR | | Instruction | Action on register | Action on Instruction Register, IR |
|---|---|---|---|---|
| n | μy[φ(x, y)]: | CLR ( w ) | 0 → w | [ IR ] + 1 → IR |
| n+1 | | CLR ( y ) | 0 → y | [ IR ] + 1 → IR |
| n+2 | loop: | φ(x, y) | φ([x], [y]) → φ | [ IR ] + j + 1 → IR |
| n+j+3 | | JE (φ, w, exit) | none | CASE: { IF [ φ ] = [ w ] THEN exit → IR ELSE [IR] + 1 → IR } |
| n+j+4 | | INC ( y ) | [ y ] + 1 → y | [ IR ] + 1 → IR |
| n+j+5 | | JE (0, 0, loop) | Unconditional jump | CASE: { IF [ r0 ] = [ r0 ] THEN loop → IR ELSE loop → IR } |
| n+j+6 | exit: | etc. | | |
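Read as structured code, the routine above is the following loop. The function phi and the default match number are placeholders for whatever j-instruction subroutine computes φ(x, y):

```python
def mu_routine(phi, x, match=0):
    """Structured reading of the register program above: w holds the match number,
    y counts up from 0 until phi(x, y) equals w.  Never returns if no such y exists."""
    w = match              # n:     CLR(w) establishes the match number (usually 0)
    y = 0                  # n+1:   CLR(y)
    while True:            # loop:
        out = phi(x, y)    # n+2:   evaluate phi(x, y) (j instructions), result -> register "phi"
        if out == w:       # n+j+3: JE(phi, w, exit)
            return y       #        exit with the successful y
        y += 1             # n+j+4: INC(y)
                           # n+j+5: JE(0, 0, loop), the unconditional jump back

# Illustrative phi: with x = 8, the least y with phi(x, y) = 0 is 4
print(mu_routine(lambda x, y: x - 2 * y, 8))   # -> 4
```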
What is mandatory, if the function is to be a total function, is a demonstration by some other method (e.g. induction) that for each and every combination of values of its parameters x1, ..., xn some natural number y will satisfy the μ-operator, so that the algorithm that represents the calculation can terminate: for every x1, ..., xn there must exist a y such that φ(x1, ..., xn, y) yields the match-number (classically 0).
For an example of what this means in practice, see the examples at μ-recursive function: even the simplest truncated-subtraction algorithm "x - y = d" can yield, for the undefined cases when x < y, (1) no termination, (2) no number (i.e. something wrong with the format, so the yield is not considered a natural number), or (3) deceit: a wrong number in the correct format. The "proper" subtraction algorithm requires careful attention to all of these "cases".
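The point can be made concrete with a small Python sketch (helper names are illustrative): a naive μ-search for the difference d never terminates when x < y, whereas the "proper" (truncated) subtraction handles that case explicitly:

```python
def naive_difference(x, y):
    """mu d [ y + d == x ]: correct when x >= y, but searches forever when x < y."""
    d = 0
    while y + d != x:
        d += 1
    return d

def proper_subtraction(x, y):
    """Truncated subtraction: the x < y case is handled explicitly, yielding 0."""
    return x - y if x >= y else 0

print(naive_difference(5, 3))      # -> 2
print(proper_subtraction(3, 5))    # -> 0
# naive_difference(3, 5) would loop forever: no d with 5 + d == 3 exists
```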
But even when the algorithm has been shown to produce the expected output in the instances {(0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2)}, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases (x, y) = (n, m) all yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered effective?
The unbounded μ-operator is defined by Minsky (1967) p. 210, but with a peculiar flaw: the operator will not yield t = 0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather, it yields t = 2. In Minsky's version the counter is "t", and the function φ(t, x) deposits its number in register φ. In the usual μ definition register w will contain 0, but Minsky observes that it can contain any number k. Minsky's instruction set is equivalent to the following, where "JNE" = Jump to z if Not Equal:
| IR | | Instruction | Action on register | Action on Instruction Register, IR |
|---|---|---|---|---|
| n | μyφ( x ): | CLR ( w ) | 0 → w | [ IR ] + 1 → IR |
| n+1 | | CLR ( t ) | 0 → t | [ IR ] + 1 → IR |
| n+2 | loop: | φ(t, x) | φ( [ t ], [ x ] ) → φ | [ IR ] + j + 1 → IR |
| n+j+3 | | INC ( t ) | [ t ] + 1 → t | [ IR ] + 1 → IR |
| n+j+4 | | JNE (φ, w, loop) | none | CASE: { IF [φ] ≠ [w] THEN loop → IR ELSE [IR] + 1 → IR } |
| n+j+5 | | INC ( t ) | [ t ] + 1 → t | [ IR ] + 1 → IR |
| n+j+6 | exit: | etc. | | |
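A structured rendering of Minsky's loop (again with an illustrative phi) makes the peculiarity visible: the two INC(t) instructions that straddle the test leave t two greater than the y that satisfied the predicate:

```python
def minsky_mu(phi, x, k=0):
    """Minsky-style search: counter t is incremented both before the test and
    once more after success, so the routine returns (least y) + 2, not y."""
    w = k                       # n:     CLR(w) (the match number, 0 in the usual definition)
    t = 0                       # n+1:   CLR(t)
    while True:
        out = phi(t, x)         # n+2:   evaluate phi(t, x)
        t += 1                  # n+j+3: INC(t)
        if out == w:            # n+j+4: JNE(phi, w, loop) falls through on equality
            t += 1              # n+j+5: INC(t)
            return t            # exit

# Least t with phi(t, x) == 0 is 4 for x = 8, but the routine returns 6
print(minsky_mu(lambda t, x: x - 2 * t, 8))   # -> 6
```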
The unbounded μ-operator is also defined by Boolos-Burgess-Jeffrey (2002) p. 60-61 for a counter machine with an instruction set equivalent to the following:
In this version the counter "y" is called "r2", and the function f( x, r2 ) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to loop; this is often done by use of a dedicated register "0" that contains "0":
| IR | | Instruction | Action on register | Action on Instruction Register, IR |
|---|---|---|---|---|
| n | μr2[f(x, r2)]: | CLR ( r2 ) | 0 → r2 | [ IR ] + 1 → IR |
| n+1 | loop: | f(x, r2) | f( [ x ], [ r2 ] ) → r3 | [ IR ] + j + 1 → IR |
| n+j+2 | | JZ ( r3, exit ) | none | IF [ r3 ] = 0 THEN exit → IR ELSE [ IR ] + 1 → IR |
| n+j+3 | | CLR ( r3 ) | 0 → r3 | [ IR ] + 1 → IR |
| n+j+4 | | INC ( r2 ) | [ r2 ] + 1 → r2 | [ IR ] + 1 → IR |
| n+j+5 | | JZ ( r3, loop ) | none | CASE: { IF [ r3 ] = 0 THEN loop → IR ELSE [IR] + 1 → IR } |
| n+j+6 | exit: | etc. | | |
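In structured form (f and the register names are placeholders) the role of the cleared r3 becomes clear: a JZ on a register known to hold 0 is an unconditional jump back to loop:

```python
def bbj_mu(f, x):
    """Boolos-Burgess-Jeffrey-style search: r2 counts up; r3 holds f(x, r2).
    Clearing r3 before the closing JZ makes that jump unconditional."""
    r2 = 0                      # n:     CLR(r2)
    while True:
        r3 = f(x, r2)           # n+1:   loop: f(x, r2) -> r3
        if r3 == 0:             # n+j+2: JZ(r3, exit)
            return r2           # exit
        r3 = 0                  # n+j+3: CLR(r3), so the next JZ is always taken
        r2 += 1                 # n+j+4: INC(r2)
                                # n+j+5: JZ(r3, loop), always taken since r3 == 0

print(bbj_mu(lambda x, r2: x - 2 * r2, 8))   # -> 4
```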