Luca: You received one of the two CONCUR ToT Awards for the period 1992–1995 for your paper "Probabilistic simulations for probabilistic processes", presented at CONCUR 1994. Could you tell us briefly what spurred you to develop the "simple" probabilistic automaton model introduced in that paper, and how the ideas underlying that notion came about?
Roberto: We were studying randomized distributed algorithms and we noted that several algorithms suffered from subtle errors. We were looking for a formal, rigorous way to study the correctness of those algorithms and in particular we were trying to extend the hierarchical approach used for I/O automata. We knew that there was already an extensive literature within concurrency theory, but we were not able to use the theory to express even the simple idea that you may choose internally between flipping a fair or a biased coin; the existing approaches were extending labeled transition systems by adding probabilities to the arcs, losing the ability to distinguish between probabilistic and nondeterministic choice. The fact that the axioms for nondeterministic choice were not valid in most probabilistic calculi was further evidence of the problem. Starting from the observation that in the non-probabilistic world a step of a distributed algorithm corresponds to a transition of an I/O automaton, we tried to extend the notion of transition so that the process of flipping a coin could be represented by the "new" entity. This led to the idea of "replacing points with measures" within ordinary automata and/or ordinary labeled transition systems. The idea worked very well for our analysis of randomized distributed algorithms, but we also wanted to validate it by checking whether it would simplify the concurrency-theoretical approach to probability. We decided to use simulation relations for our test, and indeed it worked and led to the CONCUR 1994 paper.
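The modeling idea Roberto describes, "replacing points with measures", can be sketched in a few lines of Python. This is purely an illustrative encoding (the state names and representation are mine, not from the paper): each transition leads from a state, via an action, to a probability distribution over states, while the choice among enabled transitions stays nondeterministic. The internal choice between a fair and a biased coin then becomes two distinct transitions, each carrying its own distribution.

```python
# A minimal sketch of a "simple" probabilistic automaton: a transition goes
# from a state, via an action, to a *distribution* over states ("replacing
# points with measures"). Nondeterminism remains in the choice of which
# enabled transition to take.
from fractions import Fraction

# Transitions: state -> list of (action, {target_state: probability}).
# The two 'flip' transitions model an internal nondeterministic choice
# between a fair and a biased coin -- exactly the distinction that plain
# probability-labeled arcs cannot express.
transitions = {
    "init": [
        ("flip_fair",   {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}),
        ("flip_biased", {"heads": Fraction(9, 10), "tails": Fraction(1, 10)}),
    ],
    "heads": [],  # absorbing
    "tails": [],  # absorbing
}

def enabled(state):
    """Actions a scheduler may choose in `state` (the nondeterminism)."""
    return [action for action, _ in transitions[state]]

def step(state, action):
    """Resolve the nondeterministic choice; the result is a distribution,
    not a single target state."""
    for a, dist in transitions[state]:
        if a == action:
            return dist
    raise ValueError(f"action {action!r} not enabled in state {state!r}")
```

Here the scheduler's choice between `flip_fair` and `flip_biased` is nondeterministic, and probability enters only after that choice has been made; collapsing both into probability-labeled arcs on one transition system would erase the distinction.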
Nancy: As background for this paper, we should recall that, at the time, there was a divergence in modeling styles for distributed systems between different concurrency theory research communities. The style I had been using since the late 70s for modeling distributed algorithms involved interactive state machines, such as I/O automata, which are based on a set-theoretic foundation. Such models are good for describing distributed algorithms, and for analyzing them for both correctness and costs. On the other hand, the predominant style in the European concurrency community was more syntactic, based on logical languages like PCTL or various process algebras. These were good for formal verification, and for studying the expressive power of different formalisms, but not great for analyzing complicated distributed algorithms. Our models were in the general spirit of the Labeled Transition System (LTS) models previously studied in Europe. When Roberto came to MIT, he and I set about extending prior modeling work for distributed systems to the probabilistic setting. To do this, we considered both the set-theoretic and logical approaches. We needed to bridge the gap between them, which led us to the ideas in this paper.
Luca: How much of your later work has built on your CONCUR 1994 paper? What follow-up results of yours are you most proud of and why?
Roberto: Besides using the results of the paper in the analysis of several randomized distributed algorithms, we worked, jointly as well as independently, on the theory itself and on the analysis of security protocols. In collaboration with Frits Vaandrager, we discovered different ways to analyze security hierarchically and compositionally. Furthermore, since probability and nondeterminism are well separated in simple probabilistic automata, it was easy to include computational complexity in the analysis of security protocols; this is work I did in collaboration with Andrea Turrini. The clear separation between probability and nondeterminism also made it easy to extend our approach to real-time models, leading to notions like Probabilistic Timed Automata and to several of the elements behind the first sketches of the probabilistic model checker PRISM, in collaboration with the group of Marta Kwiatkowska.
Nancy: Roberto and I continued working on probabilistic models, in the set-theoretic style, for several years. As Roberto notes, Frits Vaandrager also became a major collaborator. Some of our more recent papers following this direction are:
- Compositionality for Probabilistic Automata, CONCUR 2003,
- Observing Branching Structure through Probabilistic Contexts, SIAM Journal on Computing 2007,
- Task-Structured Probabilistic I/O Automata, JCSS 2018.
Luca: Did you imagine at the time that the CONCUR 1994 paper and its journal version would have so much impact? In your opinion, what is the most interesting or unexpected use in the literature of the notions and techniques you developed in your award-winning paper?
Roberto: We knew that the change of view proposed in the paper would simplify the study and extension of several other concepts within concurrency theory, but we did not have any specific expectations. The collaboration on the PRISM project and on Probabilistic Timed Automata made it more evident that there was a connection with Markov Decision Processes, which led to a fruitful cross-fertilization between artificial intelligence, model checking, and concurrency theory. It was a long exchange of ideas between a world interested in existential quantification, that is, finding an optimal policy to achieve a goal, and one interested in universal quantification, that is, making sure that under any scheduler, policy, or adversary a system behaves correctly. The consequences of this exchange were many and unexpected.
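The contrast Roberto draws between existential and universal quantification can be made concrete on a toy Markov Decision Process. The MDP below and its encoding are my own illustration, not taken from any of the papers discussed: the planner's question asks for the *maximum* probability of reaching a goal under some scheduler, while the verifier's question asks for the *minimum* probability guaranteed under every scheduler. The same value iteration answers both; only the `max` or `min` over the nondeterministic choices differs.

```python
# Toy MDP: state -> list of actions, each action a distribution over
# successor states. From "s0" a scheduler chooses between a risky action
# (50/50 goal or fail) and a safe-but-slow one (retry with small progress).
mdp = {
    "s0": [
        {"goal": 0.5, "fail": 0.5},   # risky: succeed or fail outright
        {"s0": 0.9, "goal": 0.1},     # safe: small chance of progress, retry
    ],
    "goal": [],  # absorbing
    "fail": [],  # absorbing
}

def reach_value(mdp, goal, choose, iters=1000):
    """Probability of reaching `goal`, resolving nondeterminism with `choose`:
    pass `max` for the best scheduler (planning, existential quantification),
    `min` for the worst (verification, universal quantification)."""
    v = {s: (1.0 if s == goal else 0.0) for s in mdp}
    for _ in range(iters):
        for s, actions in mdp.items():
            if s == goal or not actions:
                continue  # absorbing states keep their value
            v[s] = choose(
                sum(p * v[t] for t, p in dist.items()) for dist in actions
            )
    return v
```

On this MDP the best scheduler retries the safe action and reaches the goal with probability approaching 1, while the worst scheduler takes the risky action and guarantees only probability 0.5, which is exactly the gap between "some policy achieves the goal" and "the system behaves correctly under any adversary".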
Nancy: No, I did not anticipate that it would have broad impact, though of course I thought the ideas were important. But in general, I am bad at predicting what will appeal to others and inspire them to further work.
Luca: The journal version of your paper appears in the Nordic Journal of Computing, which is not a prime venue. Did you ever regret not publishing that work somewhere else? What is your take on the trend of using the perceived quality of a publication venue to evaluate research quality?
Roberto: Within semantics I know of a few technical reports that are much more influential than most journal papers, so I do not think that the venue of a paper should be given much importance. On the other hand, venue as an evaluation tool is more objective: it allows an evaluator to consider many papers without reading them, and it protects the evaluator from claims against decisions, which sometimes may even have legal consequences. Needless to say, I do not agree with any of the ideas above. Nor do I agree with the other emerging trend of counting papers to evaluate quality: one good paper is much better than ten mediocre papers. What worries me most is that young researchers may have a hard time concentrating on quality if they have to focus on venue and quantity. I feel lucky that I did not have to worry about these issues when we submitted our paper for the special issue of the Nordic Journal of Computing.
Nancy: I certainly never regretted the venue for this paper. In general, I haven't paid too much attention to the choice of publication venue. The main thing is to reach the audience you want to reach, which can be done through prestigious journals, less prestigious journals, conferences, or even arXiv technical reports and some publicity. It's good to get feedback from referees, though. For evaluations, say for hiring or promotion, I think it's fair to take many factors into account in evaluating research quality. Venues, number of citations, special factors about different fields… all can be discussed by hiring and promotion committees. But I hope that those committees also take the time to actually read and consider the work itself.
Luca: The last thirty years have seen a huge amount of work on probabilistic models of computation and on the development of proof techniques and tools based on them. What advice would you give to a young researcher interested in working on probabilistic models in concurrency today?
Roberto: My first piece of advice would be to keep things simple and to focus on key ideas, possibly under the guidance of multiple application scenarios. When we study probabilistic concurrency, especially in contexts where the external world is governed by non-discrete laws, the interplay between different features of the model may become overwhelmingly complex. Of course it is a nice exercise to dive into all the details and see where they lead, or to extend known results to the new scenarios, but how about checking whether some simple changes to one of the building blocks could make life easier? Unfortunately, working on building blocks is risky. So, my second piece of advice is to dive into details and write papers, but to take some breaks and think about whether there may be nicer ways to solve the same problems.
Nancy: Pick some particular application domain, and make sure that the models and techniques you develop work well for that application. Don't work on the theory in a vacuum. The models will turn out much better! Perhaps simpler, as Roberto says.
Luca: What are the research topics that currently excite you the most?
Roberto: Difficult question. There are many things I like to look at, but at the moment I am very curious about how quantum computing can fit in a nice and elementary way into a concurrency framework.
Nancy: Probabilistic algorithms, especially those that are flexible (work in different environments), robust to failures and noise, and adaptive to changes. This includes such topics as wireless network algorithms, robot swarms, and biologically-inspired algorithms.
Acknowledgements: Many thanks to Ilaria Castellani, who pointed out some typos in the original version of this text.