In computer science, separation logic[1] is an extension of Hoare logic, a way of reasoning about programs. It was developed by John C. Reynolds, Peter O'Hearn, Samin Ishtiaq and Hongseok Yang,[1][2][3][4] drawing upon early work by Rod Burstall.[5] The assertion language of separation logic is a special case of the logic of bunched implications (BI).[6] A CACM review article by O'Hearn charts developments in the subject to early 2019.[7]
Separation logic facilitates reasoning about:
- programs that manipulate pointer data structures, including information hiding in the presence of pointers;
- "transfer of ownership" (avoidance of semantic frame axioms); and
- virtual separation (modular reasoning) between concurrent modules.
Separation logic supports the developing field of research described by Peter O'Hearn and others as local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component, and not the entire global state of the system. Applications include automated program verification (where an algorithm checks the validity of another algorithm) and automated parallelization of software.
Separation logic assertions describe "states" consisting of a store and a heap, roughly corresponding to the state of local (or stack-allocated) variables and dynamically allocated objects in common programming languages such as C and Java. A store s is a function mapping variables to values. A heap h is a partial function mapping memory addresses to values. Two heaps h and h′ are disjoint (denoted h ⊥ h′) if their domains do not overlap (i.e., for every memory address ℓ, at least one of h(ℓ) and h′(ℓ) is undefined).
The logic allows one to prove judgements of the form s, h ⊨ P, where s is a store, h is a heap, and P is an assertion over the given store and heap. Separation logic assertions (denoted as P, Q, R) contain the standard Boolean connectives and, in addition, emp, e ↦ e′, P ∗ Q, and P −∗ Q, where e and e′ are expressions.
- emp: the heap is empty.
- e ↦ e′: the heap contains exactly one cell, at address e with contents e′.
- P ∗ Q: separating conjunction; the heap can be split into two disjoint heaps in which P and Q hold, respectively.
- P −∗ Q: separating implication ("magic wand"); extending the heap with any disjoint heap satisfying P yields a heap satisfying Q.
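As a small illustration (values and addresses chosen arbitrarily here, not taken from the cited papers), let the store be s = [x ↦ 1, y ↦ 2] and the heap be h = [1 ↦ 10, 2 ↦ 20]. Then the separating conjunction holds while the ordinary conjunction does not:

\[
s, h \models (x \mapsto 10) \ast (y \mapsto 20)
\qquad\text{but}\qquad
s, h \not\models (x \mapsto 10) \wedge (y \mapsto 20),
\]

because h splits into the disjoint singleton heaps [1 ↦ 10] and [2 ↦ 20], each satisfying one conjunct, whereas the ordinary conjunction would require the whole two-cell heap to consist of exactly one cell in two incompatible ways.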
The operators ∗ and −∗ share some properties with the classical conjunction and implication operators. They can be combined using an inference rule similar to modus ponens,
    P ∗ (P −∗ Q) ⊨ Q,
and they form an adjunction, i.e., P ∗ Q ⊨ R if and only if P ⊨ Q −∗ R; more precisely, the adjoint operators are (−) ∗ Q and Q −∗ (−).
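A concrete instantiation (chosen here purely for illustration) takes P to be x ↦ 3 and Q to be (x ↦ 3) ∗ (y ↦ 4):

\[
(x \mapsto 3) \ast \big((x \mapsto 3) \mathrel{-\!\!*} ((x \mapsto 3) \ast (y \mapsto 4))\big)
\ \models\
(x \mapsto 3) \ast (y \mapsto 4).
\]

A heap satisfying the left-hand side consists of the cell at x together with a part (here, just the cell at y) that needs only a cell x ↦ 3 to be added in order to satisfy Q; the modus-ponens-like rule then yields Q for the combined heap.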
In separation logic, Hoare triples have a slightly different meaning than in Hoare logic. The triple {P} C {Q} asserts that if the program C executes from an initial state satisfying the precondition P, then the program will not go wrong (e.g., have undefined behaviour), and if it terminates, then the final state will satisfy the postcondition Q. In essence, during its execution, C may access only memory locations whose existence is asserted in the precondition or that have been allocated by C itself.
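For example, the "small axioms" for heap mutation and disposal mention only the single cell acted on; any state not mentioned in the precondition is guaranteed to be untouched:

\[
\{E \mapsto -\}\ [E] := F\ \{E \mapsto F\}
\qquad\qquad
\{E \mapsto -\}\ \mathtt{dispose}(E)\ \{\mathrm{emp}\}.
\]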
In addition to the standard rules from Hoare logic, separation logic supports the following very important rule:

        {P} C {Q}
    ─────────────────────   (provided no variable modified by C occurs free in R)
      {P ∗ R} C {Q ∗ R}
This is known as the frame rule (named after the frame problem) and enables local reasoning. It says that a program that executes safely in a small state (satisfying P) can also execute in any bigger state (satisfying P ∗ R), and that its execution will not affect the additional part of the state (and so R will remain true in the postcondition). The side condition enforces this by specifying that none of the variables modified by C occur free in R, i.e. none of them are in the 'free variable' set of R.
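For instance, starting from the local specification of a single heap mutation, the frame rule carries an untouched cell through the proof:

\[
\frac{\{x \mapsto -\}\ [x] := 3\ \{x \mapsto 3\}}
     {\{(x \mapsto -) \ast (y \mapsto 4)\}\ [x] := 3\ \{(x \mapsto 3) \ast (y \mapsto 4)\}}
\]

The command [x] := 3 modifies no program variable (it writes through x rather than to x), so the side condition holds trivially and the cell at y is guaranteed to be unchanged.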
Separation logic leads to simple proofs of pointer manipulation for data structures that exhibit regular sharing patterns which can be described simply using separating conjunctions; examples include singly and doubly linked lists and varieties of trees. Graphs, DAGs, and other data structures with more general sharing are more difficult for both formal and informal proof. Separation logic has, nonetheless, been applied successfully to reasoning about programs with general sharing.
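For example, the usual inductively defined predicate for singly linked lists is built from the separating conjunction; in one common formulation, with x ↦ v, t abbreviating (x ↦ v) ∗ (x+1 ↦ t):

\[
\mathrm{list}(x) \Longleftrightarrow
(x = \mathrm{nil} \wedge \mathrm{emp})
\ \vee\
\big(\exists v, t.\ (x \mapsto v, t) \ast \mathrm{list}(t)\big).
\]

The ∗ between the head cell and the recursive occurrence guarantees that the head cell does not reappear in the rest of the list, so any heap satisfying list(x) describes an acyclic list.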
In their POPL'01 paper,[3] O'Hearn and Ishtiaq explained how the magic wand connective −∗ could be used to reason in the presence of sharing, at least in principle. For example, in the triple

    { (x ↦ -) ∗ ((x ↦ 3) −∗ P) }  [x] := 3  { P }
we obtain the weakest precondition for a statement that mutates the heap at location x, and this works for any postcondition P, not only one that is laid out neatly using the separating conjunction. This idea was taken much further by Yang, who used −∗ to provide localized reasoning about mutations in the classic Schorr-Waite graph marking algorithm.[8] Finally, one of the most recent works in this direction is that of Hobor and Villard,[9] who employ not only −∗ but also a connective ⊛ which has variously been called overlapping conjunction or sepish,[10] and which can be used to describe overlapping data structures: P ⊛ Q holds of a heap when P and Q hold for subheaps whose union is the whole heap, but which possibly have a nonempty portion in common. Abstractly, ⊛ can be seen to be a version of the fusion connective of relevance logic.
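Written out, the semantics sketched above (using ⊛ for the overlapping conjunction, and pairwise-disjoint subheaps h₁, h₂, h₃) is:

\[
s, h \models P \circledast Q
\iff
\exists h_1, h_2, h_3.\
h = h_1 \cup h_2 \cup h_3,\
s, h_1 \cup h_2 \models P,\
\text{and}\
s, h_2 \cup h_3 \models Q.
\]

Here h₂ is the (possibly empty) portion shared by the two sub-structures; when h₂ is empty, this reduces to the ordinary separating conjunction.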
Concurrent separation logic (CSL), a version of separation logic for concurrent programs, was originally proposed by Peter O'Hearn,[11] using a proof rule

    {P₁} C₁ {Q₁}     {P₂} C₂ {Q₂}
    ─────────────────────────────────   (provided neither command modifies variables occurring free in the other's pre- or postcondition)
      {P₁ ∗ P₂} C₁ ∥ C₂ {Q₁ ∗ Q₂}
which allows independent reasoning about threads that access separate storage. O'Hearn's proof rules adapted an early approach of Tony Hoare to reasoning about concurrency,[12] replacing the use of scoping constraints to ensure separation by reasoning in separation logic. In addition to extending Hoare's approach to apply in the presence of heap-allocated pointers, O'Hearn showed how reasoning in concurrent separation logic could track dynamic ownership transfer of heap portions between processes; examples in the paper include a pointer-transferring buffer, and a memory manager.
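For example, two threads that update different cells can be verified entirely independently from the small mutation axioms, and their specifications combined with the parallel rule above:

\[
\frac{\{x \mapsto -\}\ [x] := 1\ \{x \mapsto 1\}
\qquad
\{y \mapsto -\}\ [y] := 2\ \{y \mapsto 2\}}
{\{(x \mapsto -) \ast (y \mapsto -)\}\ [x] := 1 \parallel [y] := 2\ \{(x \mapsto 1) \ast (y \mapsto 2)\}}
\]

The ∗ in the precondition guarantees that x and y denote different addresses, so the two writes cannot interfere.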
Commenting on the early classical work on interference freedom by Susan Owicki and David Gries, O'Hearn says that explicit checking for non-interference is not necessary because his system rules out interference in an implicit way, by the nature of the way proofs are constructed.
A model for concurrent separation logic was first provided by Stephen Brookes in a companion paper to O'Hearn's.[13] The soundness of the logic had been a difficult problem, and in fact a counterexample of John Reynolds had shown the unsoundness of an earlier, unpublished version of the logic; the issue raised by Reynolds's example is described briefly in O'Hearn's paper, and more thoroughly in Brookes's.
At first it appeared that CSL was well suited to what Dijkstra had called loosely connected processes,[14] but perhaps not to fine-grained concurrent algorithms with significant interference. However, gradually it was realized that the basic approach of CSL was considerably more powerful than first envisaged, if one employed non-standard models of the logical connectives and even the Hoare triples.
An abstract version of separation logic was proposed that works for Hoare triples where the preconditions and postconditions are formulae interpreted over an arbitrary partial commutative monoid instead of a particular heap model.[15] Later, by suitable choice of commutative monoid, it was found, surprisingly, that the proof rules of abstract versions of concurrent separation logic could be used to reason about interfering concurrent processes, for example by encoding the rely-guarantee technique which had originally been proposed to reason about interference;[16] in this work the elements of the model were considered not resources, but rather "views" of the program state, and a non-standard interpretation of Hoare triples accompanies the non-standard reading of pre- and postconditions. Finally, CSL-style principles have been used to compose reasoning about program histories instead of program states, in order to provide modular techniques for reasoning about fine-grained concurrent algorithms.[17]
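Concretely, such a model is a partial commutative monoid (M, •, e): a set M with a partial, commutative, associative composition • and unit e. A sketch of the abstract interpretation of the separating conjunction (stated here in one common form, not the notation of any particular paper) is:

\[
m \models P \ast Q
\iff
\exists m_1, m_2.\ m = m_1 \bullet m_2,\ m_1 \models P,\ \text{and}\ m_2 \models Q.
\]

The ordinary heap model is recovered by taking M to be the set of heaps, • to be union of disjoint heaps (undefined otherwise), and e the empty heap.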
Versions of CSL have been included in many interactive and semi-automatic (or "in-between") verification tools as described in the next section. A particularly significant verification effort is that of the μC/OS-II kernel mentioned there. However, although steps have been made,[18] CSL-style reasoning has as yet been included in comparatively few tools in the automatic program analysis category (and none mentioned in the next section).
O'Hearn and Brookes are co-recipients of the 2016 Gödel Prize for their invention of Concurrent Separation Logic.[19]
Tools for reasoning about programs fall on a spectrum from fully automatic program analysis tools, which do not require any user input, to interactive tools where the human is intimately involved in the proof process. Many such tools have been developed; the following list includes a few representatives in each category.
The distinction between interactive and in-between verifiers is not a sharp one. For example, Bedrock strives for a high degree of automation, in what it terms mostly-automatic verification, while VeriFast sometimes requires annotations that resemble the tactics (little programs) used in interactive verifiers.
The satisfiability problem for a quantifier-free, multi-sorted fragment of separation logic parameterized over the sorts of locations and data can be shown to be PSPACE-complete.[27] An algorithm for solving this fragment in DPLL(T)-based SMT solvers has been integrated into cvc5.[28] Extending this result, satisfiability for an analog of the Bernays–Schönfinkel class for separation logic with uninterpreted memory locations can also be shown to be PSPACE-complete, whereas the problem is undecidable with interpreted memory locations (e.g., integers) or further quantifier alternations.[29]