Tuesday, January 01, 2019

Thirty-five years of "Testing Equivalences for Processes" (TCS edition)

Thirty-five years ago, Rocco De Nicola and Matthew Hennessy published, in the journal Theoretical Computer Science, the archival paper introducing the notions of testing equivalence over concurrent processes. These testing equivalences embody in a natural and mathematically elegant way the intuitive idea that two processes should be equated unless they behave differently when subjected to some ‘experiment’ or ‘test’. The origin of this notion of equivalence can be traced back to Gottfried Wilhelm Leibniz (1646–1716), who stated that two mathematical objects are equal if there is no test to distinguish them. In the semantics of programming languages, its earliest precursor is, to the best of my knowledge, the notion of contextual equivalence proposed by Morris in his doctoral dissertation.

In general, given a set of processes, a set of tests and a relation between processes and tests that describes when a process passes a test, one can apply Leibniz’s motto and declare two processes to be equivalent if they pass exactly the same set of tests. In the work of De Nicola and Hennessy, processes are states in some labelled transition system. A test is itself a process, which interacts with a concurrent system under observation by hand-shake synchronisation and uses a distinguished action to report success in its observation. Since both processes and tests may be nondeterministic, the interaction between a process and a test may lead to different outcomes depending on how the two systems resolve their nondeterministic choices in the course of a computation. This led De Nicola and Hennessy to define three notions of testing semantics, which are naturally expressed in terms of preorders over processes.
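To make this setup concrete, here is a minimal executable sketch of may and must testing over finite labelled transition systems. The encoding, the names (outcomes, may_pass, must_pass) and the decision to treat cycles as failed, divergent computations are illustrative choices of mine, not the paper's formulation:

```python
# A minimal sketch (not from the paper): finite labelled transition
# systems as dictionaries mapping each state to a set of
# (action, successor) pairs. A test is itself such an LTS, and it
# reports success with the distinguished action "ok".

SUCCESS = "ok"

def outcomes(proc, test, p, t, seen=frozenset()):
    """Collect the outcomes of all maximal computations of the
    synchronised product of a process and a test, starting from
    states p and t: True for a successful run, False otherwise."""
    # A computation is successful as soon as the test can report success.
    if any(a == SUCCESS for a, _ in test.get(t, set())):
        return {True}
    # Handshake synchronisation: both sides must agree on the action.
    moves = [(p2, t2)
             for a, p2 in proc.get(p, set())
             for b, t2 in test.get(t, set())
             if a == b]
    if not moves:
        return {False}  # deadlock without success: this run fails
    results = set()
    for p2, t2 in moves:
        if (p2, t2) in seen:
            # A cycle means the interaction can run forever without
            # success; treating that as failure mirrors the view that
            # divergence is catastrophic under must testing.
            results.add(False)
        else:
            results |= outcomes(proc, test, p2, t2, seen | {(p2, t2)})
    return results

def may_pass(proc, test, p, t):
    return True in outcomes(proc, test, p, t)

def must_pass(proc, test, p, t):
    return False not in outcomes(proc, test, p, t)
```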

In the so-called may semantics, a process q is at least as good as some process p if the set of tests that p may pass is included in the set of tests that q may pass. In may semantics, possible failure under a test is immaterial and therefore nondeterminism is angelic. On the other hand, one may take the view that failure in the testing effort is catastrophic, in the sense that a process that may fail some test is just as bad as one that always fails it. The notion of testing semantics that captures this viewpoint is the so-called must semantics, according to which a process q is at least as good as some process p if the set of tests that p must pass is included in the set of tests that q must pass. Finally, a third testing preorder over processes is obtained as the intersection of the may and must preorders described above. According to this more refined view of process behaviour, a process that always fails a test is worse than one that may pass that test, which in turn is worse than one that always passes it.
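As a concrete illustration, consider the classic pair of processes a.(b + c) and a.b + a.c. They pass the same tests under may semantics, but the test a.b.ok, which synchronises on a, then on b, and then reports success, separates them under must semantics. Continuing the hypothetical sketch above:

```python
# a.(b + c): one a-transition, then a choice between b and c.
p = {"p0": {("a", "p1")}, "p1": {("b", "p2"), ("c", "p3")}}
# a.b + a.c: the choice is already resolved at the initial a-step.
q = {"q0": {("a", "q1"), ("a", "q2")},
     "q1": {("b", "q3")}, "q2": {("c", "q4")}}
# The test a.b.ok: synchronise on a, then on b, then report success.
o = {"t0": {("a", "t1")}, "t1": {("b", "t2")}, "t2": {(SUCCESS, "t3")}}

print(may_pass(p, o, "p0", "t0"), must_pass(p, o, "p0", "t0"))  # True True
print(may_pass(q, o, "q0", "t0"), must_pass(q, o, "q0", "t0"))  # True False
```

Here a.b + a.c may pass the test but is not guaranteed to do so: its nondeterministic choice at the a-step may strand it in the c-branch, where the test deadlocks without reporting success.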

De Nicola and Hennessy explored the rich theory of the testing semantics in their seminal TCS paper (see this monograph for a classic, book-length treatment), where each of these semantics is given operational, denotational and axiomatic accounts that agree with one another. Their ideas and the accompanying technical results have had an enormous impact on further research, as witnessed, among other things, by the 1,640 citations to the TCS paper published in 1984 (source: Google Scholar today).

Happy 2019 to everyone.


