Thursday, June 24, 2021

The detectEr runtime-verification tool for Erlang programs

Thanks to the huge amount of excellent work done by Duncan Paul Attard and Adrian Francalanza, we now have a tutorial on detectEr that some of you might want to check out. See this web page for all the material, the tool download and links to the videos of the tutorial Duncan delivered at the DisCoTec 2021 Tutorial Day.

detectEr is a runtime verification tool for asynchronous component systems that run on the Erlang Virtual Machine (EVM). It also supports the monitoring of systems that execute outside the EVM, so long as these can produce traces in a format that detectEr can parse. The tool itself is developed in Erlang and is the product of five years of theoretical and practical development. (Erlang is a programming language used to build massively scalable, soft real-time systems with requirements on high availability.)

Enjoy!

Friday, June 18, 2021

Interview with CONCUR 2021 ToT Award recipients, Part 1: Rajeev Alur, Thomas Henzinger, Orna Kupferman and Moshe Vardi

Last year, the CONCUR conference series inaugurated its Test-of-Time Award, whose purpose is to recognise important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. This year, four papers were chosen to receive the CONCUR Test-of-Time Awards for the periods 1994–1997 and 1996–1999 by a jury consisting of Rob van Glabbeek (chair), Luca de Alfaro, Nathalie Bertrand, Catuscia Palamidessi, and Nobuko Yoshida.

Last year, I interviewed the CONCUR 2020 Test-of-Time Award recipients and was asked by Javier Esparza (chair of the CONCUR SC) and Ilaria Castellani (outgoing chair of the IFIP WG 1.8 on Concurrency Theory) to do the same with the current batch of awardees. (In passing, let me thank Nathalie Bertrand and Nobuko Yoshida for their kind help with the interviews!)

This post is devoted to the interview I conducted via email with Rajeev Alur, Thomas A. Henzinger, Orna Kupferman and Moshe Y. Vardi. Reading the answers I received from that dream team of colleagues was like a masterclass for me and I trust that their thoughts on their award-winning paper will be of interest to many of the readers of this blog. Enjoy!

Luca: You receive the CONCUR ToT Award 2021 for your paper Alternating Refinement Relations, which appeared at CONCUR 1998. In that article, you gave what I consider to be a fundamental contribution, namely the introduction of refinement relations for alternating transition systems. Could you briefly explain to our readers what alternating transition systems are? Could you also tell us how you came to study the question addressed in your award-winning article and why you focused on simulation- and trace-based refinement relations? Which of the results in your paper did you find most surprising or challenging? 

Answer: When we model a system by a graph, our model abstracts away some details of the system. In particular, even when systems are deterministic, states in the model may have several successors. The nondeterminism introduced in the model often corresponds to different actions taken by the system when it responds to different inputs from its environment. Indeed, a transition in a graph that models a composite system corresponds to a step of the system that may involve some components. Alternating transition systems (ATSs) enable us to model composite systems in more detail. In an ATS, each transition corresponds to a possible move in a game between the components, which are called agents. In each move of the game, all agents choose actions, and the successor state is deterministically determined by all actions. Consequently, ATSs can distinguish between collaborative and adversarial relationships among components in a composite system. For example, the environment is typically viewed adversarially, meaning that a component may be required to meet its specification no matter how the environment behaves.

In an earlier paper, some of us introduced ATSs and Alternating Temporal Logics, which can specify properties of agents in a composite system. The CONCUR 1998 paper provided refinement relations between ATSs which correspond to alternating temporal logics. Refinement is a central issue in a formal approach to the design and analysis of reactive systems. The relation “I refines S” intuitively means that system S has more behaviors than system I. It is useful to think of S as a specification and of I as an implementation. Now, if we consider a composite implementation I||E and specification S||E and we want to check that the component I refines the component S, then the traditional refinement preorders are inappropriate, as they allow I to achieve refinement of I||E with respect to S||E by constraining its environment E. Alternating refinement relations are defined with respect to ATSs that model the interaction among the underlying components, and they enable us to check, for example, that component I has fewer behaviors than component S no matter how component E behaves. They are called “alternating” because refinement may restrict implementation actions but must not restrict environment actions. In other words, refinement may admit fewer system actions but, at the same time, more environment actions.

It was nice to see how theoretical properties of preorders in the traditional setting carry over to the game setting, as do the results that were known at the time about the computational price of moving to a game setting. First, the efficiency of the local preorder of simulation with respect to the global preorder of trace containment is maintained. As in the traditional setting, alternating simulation can be checked in polynomial time, whereas alternating trace containment is much more complex. Second, the branching vs. linear characterizations of the two preorders are preserved: alternating simulation implies alternating trace containment, and the logical characterization of simulation and trace containment by CTL and LTL, respectively, carries over to their alternating-temporal-logic counterparts. The doubly-exponential complexity of alternating trace containment, as opposed to the PSPACE complexity of trace containment, is nicely related to the doubly-exponential complexity of LTL synthesis, as opposed to its PSPACE model-checking complexity.
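(A note from me, for readers who have not seen the definition of the alternating refinement relations: the alternating-simulation condition can be paraphrased roughly as follows. Consider an implementation I and a specification S, each modelled as an ATS whose agents are split into "system" agents and "environment" agents; write Γ_sys and Γ_env for the sets of moves available to the two groups at a state, and δ for the function mapping a state and a choice of moves for all agents to the unique successor state. This notation is mine, and the paper's formulation is more general, since it deals with arbitrary subsets of agents. A relation H between the states of I and those of S is then an alternating simulation if, whenever s H t, the states s and t satisfy the same propositions and

\[
\forall \tau_I \in \Gamma^{I}_{\mathrm{sys}}(s).\;
\exists \tau_S \in \Gamma^{S}_{\mathrm{sys}}(t).\;
\forall \upsilon_S \in \Gamma^{S}_{\mathrm{env}}(t).\;
\exists \upsilon_I \in \Gamma^{I}_{\mathrm{env}}(s).\;
\delta_I(s, \tau_I, \upsilon_I) \mathrel{H} \delta_S(t, \tau_S, \upsilon_S).
\]

The quantifier pattern makes the slogan above precise: every system move of the implementation must be matched by the specification, while every environment move of the specification must be matched by the implementation.)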

Luca: In your paper, you give logical characterisations of your alternating refinement relations in terms of fragments of alternating temporal logic. Logical characterisations of refinement relations are classic results in our field and I find them very satisfying. Since I teach a number of those results in my courses, I'd be interested in hearing how you would motivate their interest and usefulness to a student or a colleague. What would your "sales pitch" be? 

Answer: There is extensive research on the expressive power of different formalisms. Logical characterization of refinement relations tells us something about the distinguishing power of formalisms. For example, while the temporal logic CTL* is more expressive than the temporal logic CTL, the two logics have the same distinguishing power: if you have two systems and can distinguish between them with a CTL* formula (that is, your formula is satisfied only in one of the systems), then you should be able to distinguish between the two systems also with a CTL formula. Moreover, while CTL is not more expressive than LTL, we know that CTL is “more distinguishing” than LTL. These results have to do with the logical characterizations of trace containment and simulation. The distinguishing power of a specification formalism is useful when we compare systems, in particular an implementation and its abstraction: if we know that the properties we care about are specified in some formalism L, and our system refines the abstraction according to a refinement relation in which the satisfaction of specifications in L is preserved, then we can perform verification on the abstraction.
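(Another note from me, with the standard textbook example of the difference in distinguishing power, phrased in an action-based setting with Hennessy-Milner-style modalities rather than CTL over Kripke structures: consider the processes a.(b + c) and a.b + a.c. Both can perform an a-action followed by either a b-action or a c-action, so they have exactly the same traces and cannot be told apart by any property of their traces; but only the first satisfies the branching-time formula

\[
\langle a \rangle \bigl( \langle b \rangle \mathit{tt} \wedge \langle c \rangle \mathit{tt} \bigr),
\]

which says that after some a-step both b and c are still possible. This is precisely the kind of difference that the logical characterisations of simulation and trace containment capture.)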

Luca: I am interested in how research collaborations start, as I like to tell "research-life stories" to PhD students and young researchers of all ages. Could you tell us how you started your collaboration on the award-winning paper? 

Answer: Subsets of us were already collaborating on other topics related to reactive models and model checking, and all of us shared the belief that the field needed to move from the limited setting of closed systems to the more general setting of open systems, that is, systems that interact with an environment. Open systems occur not only when the environment is fully or partly unknown, but also when a closed system is decomposed into multiple components, each of them representing an open system. To build “openness” into models and specifications as a first-class citizen quickly leads to the game-theoretic (or “alternating”) setting. It was this realization, and the joint wish to provide a principled and systematic foundation for the modeling and verification of open systems, that naturally led to this collaboration.

Luca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? Which of your subsequent results on alternating transition systems and their refinement relations do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising? 

Answer: Various subsets of us pursued multiple research directions that developed the game-theoretic setting for modeling and verification further, and much remains to be done. Here are two examples. First, the game-theoretic setting and the alternating nature of inputs and outputs are now generally accepted as providing the proper semantic foundation for interface and contract formalisms for component-based design. Second, studying strategic behavior in multi-player games quickly leads to the importance of probabilistic behavior, say in the form of randomized decisions and strategies, of equilibria, when players have non-complementary objectives, and of auctions, when players need to spend resources for decisions. All of these are still very active topics of research in computer-aided verification, and they also form a bridge to the algorithmic game theory community.

Luca: One can view your work as a bridge between concurrency theory and multi-agent systems. What impact do you think that your work has had on the multi-agent-system community? And what has our community learnt from the work done in the field of multi-agent systems? To your mind, what are the main differences and points of contact in the work done within those communities? 

Answer: Modeling interaction in multi-agent systems is of natural interest to planning problems studied in the AI community. In 2002, the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) was formed and the annual International Conference on Autonomous Agents and Multiagent Systems (AAMAS) was launched. The models, logics, and algorithms developed in the concurrency and formal methods communities have had a strong influence on research presented at AAMAS conferences over the past twenty years. Coincidentally, this year our paper on Alternating-Time Temporal Logic was chosen for the IFAAMAS Influential Paper Award.

Luca: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

Answer: Research on formal verification and synthesis, including our paper, assumes that the model of the system is known. Reinforcement learning has emerged as a promising approach to the design of policies in scenarios where the model is not known and has to be learned by agents by exploration. This leads to an opportunity for research at the intersection of reactive synthesis and reinforcement learning. A potentially promising direction is to consider reinforcement learning for systems with multiple agents with both cooperative and adversarial interactions.

The realization that reactive systems have to satisfy their specifications in all environments has led to extensive research relating formal methods with game theory. Our paper added alternation to refinement relations. The transition from one to multiple players has been studied in computer science in several other contexts. For the basic problem of reachability in graphs, it amounts to moving from reachability to alternating reachability. We recently studied this shift in other fundamental graph problems, like the generation of weighted spanning trees, flows in networks, vertex covers, and more. In all these extensions, we consider a game between two players that take turns in jointly generating the outcome. One player aims at maximizing the value of the outcome (e.g., maximize the weight of the spanning tree, the amount of flow that travels in the network, or the size of the vertex cover), whereas the second aims at minimizing the value. It is interesting to see how some fundamental properties of graph algorithms are lost in the alternating setting. For example, following a greedy strategy is not beneficial in alternating spanning trees, optimal strategies in alternating flow networks may use fractional flows, and while the vertex-cover problem is NP-complete, an optimal strategy for the maximizer player can be found in polynomial time. Many more questions in this setting are still open. 
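To make the shift from plain to alternating reachability concrete, here is a minimal sketch, entirely my own and not taken from the interview, of the classic fixed-point ("attractor") computation that solves alternating reachability in turn-based games. It assumes a finite game graph given as a successor map, with each state owned either by the player trying to reach the target ("reach") or by her opponent ("avoid"); how dead-end states are treated is a modelling choice that the sketch glosses over.

# Compute the set of states from which the "reach" player can force a visit
# to the target set in a turn-based two-player game on a finite graph.
def alternating_reachability(states, succ, owner, target):
    win = set(target)          # states already known to be winning for "reach"
    changed = True
    while changed:             # iterate to a fixed point
        changed = False
        for s in states:
            if s in win or not succ.get(s):
                continue
            if owner[s] == "reach":
                good = any(t in win for t in succ[s])   # some successor suffices
            else:
                good = all(t in win for t in succ[s])   # every successor must be winning
            if good:
                win.add(s)
                changed = True
    return win

# Tiny example: the opponent moves at s0 and s2, the "reach" player at s1.
states = ["s0", "s1", "s2", "goal"]
succ = {"s0": ["s1", "s2"], "s1": ["goal"], "s2": ["s2"]}
owner = {"s0": "avoid", "s1": "reach", "s2": "avoid"}
print(alternating_reachability(states, succ, owner, {"goal"}))  # {'goal', 's1'}

In ordinary, one-player reachability s0 would be winning, since a path to the goal exists; in the alternating version it is not, because the opponent can divert the play to s2 and keep it there forever. This small gap is the kind of phenomenon that the answer above describes for spanning trees, flows and vertex covers.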

Luca: What advice would you give to a young researcher who is keen to start working on topics related to alternating transition systems and logics?

Answer: One important piece of advice to young researchers is to question the orthodoxy. Sometimes it is necessary to learn everything that is known about a topic but then take a step back, look at the bigger picture, reexamine some of the fundamental assumptions behind the established ways of thinking, change the models that everyone has been using, and go beyond the incremental improvement of previous results. This is particularly true in formal methods, where no single model or approach fits everything. And young researchers stand a much better chance of having a really fresh new thought than those who have been at it for many years.

Tuesday, June 01, 2021

Frank P. Ramsey on research and publication rates

Spurred by the excellent 1978 radio programme 'Better than the Stars' by D. H. Mellor about Frank Ramsey and by the Philosophy Bites interview with Cheryl Misak on Frank Ramsey and Ludwig Wittgenstein, I started reading Cheryl Misak's biography of Frank Ramsey. (FWIW, I strongly recommend the radio programme, the podcast and the book.)

The following quote from pages 169-170 of Cheryl Misak's book describes Ramsey's views on publications and research, as stated in a letter to his father Arthur:

Arthur tried a different tack, suggesting that Frank was going to be in trouble for wasting all this time on analysis, rather than on his career. On 24 September, in what seems to be his last letter home before he left Vienna, Frank wrote:
I don’t see how there can be any such inquisition into my conduct in Vienna as you suppose seem to want to guard against.... No one can suppose that you can’t research for six months without having a paper ready by the end. If everyone wrote a paper every six months the amount of trivial literature would swell beyond all bounds. Given time I shall produce a good paper. But if I hurry it will be ill written and unintelligible and unconvincing.
It seems to me perfectly proper to spend a scholarship being analysed, as it is likely to make me cleverer in the future, and discoveries of importance are made by remarkable people not by remarkable diligence.
While it may not be persuasive that psychoanalysis makes one cleverer, Frank was prescient that the number of journal articles would eventually swell, and he was right that diligence isn't enough to produce discoveries of importance.
 
Of course, academia has changed since those times and I do value "remarkable diligence". However, I will try to remember Ramsey's words the next time someone proposes to make academic hirings and promotions conditional on having a certain number of papers or citations or whatever per year on average.

Wednesday, May 26, 2021

Course on Ethics and Accountability in Computer Science

About one year ago, I became aware of the course "Ethics and Accountability in Computer Science" designed and taught by Rodrigo Ferreira and Moshe Y. Vardi at Rice University. The goals and high-level structure of the course, as well as its context, are described in an informative and thoughtful paper by Moshe and Rodrigo that appears in the Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), which I strongly recommend. You can also read a journal paper reporting on one of their course assignments, which was designed to make students focus on practising "deep attention" in the sense of artist and academic Jenny Odell.

I was smitten by the underlying tenet for their course, namely that "social justice is the single most important issue confronting computer science students today." That tenet is also very much in line with a reflection on the impact of digital technology on social justice that the Scientific Advisory Board of the Gran Sasso Science Institute asked the Computer Science group at that institute to undertake. Therefore, after having invited Rodrigo to deliver a webinar on the course at the ICE-TCS+GSSI webinar series, I decided that, as an experiment, I would offer a version of the Rice University course to master's students in computer science, language technology and software engineering at Reykjavik University during our spring semester 2021.

Mine was a foolhardy decision for a variety of reasons. However, I have always believed that computer scientists should strive to be 21st-century Renaissance men and women and bridge the gap between C. P. Snow's "two cultures". Indeed, to quote Edward A. Lee freely, technologists ought, to my mind, to be amongst the greatest humanists of our age. Despite my other commitments, offering a version of the Ferreira-Vardi course at Reykjavik University felt like the right thing to do at this time.

I taught the course over twelve weeks to a varied group of eight students, in cooperation with Claudio Pedica. Thanks to the constant support I received from Rodrigo Ferreira, I lived to tell the tale, and I hope that I managed to do some justice to the truly excellent course that Moshe and Rodrigo put together. Having a dream team of students with a variety of cultural backgrounds made the course extremely interesting and a learning experience for me. I had to refresh my memory of the philosophy I studied at high school in Italy in a previous life, learn some modern moral philosophy I had not met at school (such as the work by Elizabeth Anscombe and Philippa Foot, amongst others), read a substantial amount of new material (some of it fresh off the press as the course was unfolding) and broaden my horizons. I could not have asked for a better intellectual experience, and the students in the course, from Denmark, France, Germany, Iceland and the Netherlands, taught me well, kept me on my toes and stimulated me to keep reading material related to ethics even now that the course is over.

Teaching the course reinforced my belief that a course on ethics and on the impact that our field has on social justice should be required for all students in computer science. If you plan to run such a course, I strongly recommend that you consider the Ferreira-Vardi course as a blueprint.

Friday, April 30, 2021

PolyConc: Online collaboration to improve on a result on the equational theory of CCS modulo bisimilarity

The aim of this post is to try and start an online collaboration to improve the solution to a problem in the equational logic of processes that I posed in a survey paper in 2003, namely

Can one obtain a finite axiomatisation of the parallel composition operator in bisimulation semantics by adding only one binary operator to the signature of (recursion, restriction, and relabelling free) CCS?
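(Some background from me, for readers who have not seen the classical results that the question builds on: by a theorem of Moller, bisimilarity affords no finite equational axiomatisation over CCS itself, but it becomes finitely axiomatisable if one adds two auxiliary binary operators in the style of Bergstra and Klop, namely the left merge and the communication merge. The question asks whether a single, well-chosen binary operator could do the same job; Hennessy's merge, one natural candidate, is known not to work. Roughly, and in notation that may differ from the papers', the key axioms for the auxiliary operators look as follows:

\begin{align*}
x \parallel y &= (x \mathbin{\lfloor} y) + (y \mathbin{\lfloor} x) + (x \mid y)\\
\alpha.x \mathbin{\lfloor} y &= \alpha.(x \parallel y)\\
\mathbf{0} \mathbin{\lfloor} y &= \mathbf{0}\\
(x + y) \mathbin{\lfloor} z &= (x \mathbin{\lfloor} z) + (y \mathbin{\lfloor} z)\\
a.x \mid \bar{a}.y &= \tau.(x \parallel y)\\
\alpha.x \mid \beta.y &= \mathbf{0} \quad \text{if $\alpha$ and $\beta$ cannot synchronise}\\
(x + y) \mid z &= (x \mid z) + (y \mid z)\\
x \mid y &= y \mid x
\end{align*}

together with axioms making 0 absorbing for the communication merge.)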

Valentina Castiglioni, Wan Fokkink, Anna Ingólfsdóttir, Bas Luttik and I published a partial, negative answer to the above question in a paper at CSL 2021. (See the arXiv version for details and for the historical context for the above question.) Our solution is based on three simplifying assumptions that are described in detail in Section 3 of the above-mentioned paper. We'd be very interested in hearing whether any member of the research community in process algebra, universal algebra and equational logic can relax or remove any of our simplifying assumptions. In particular, one can start with assumptions 3 and 2. 

We would also welcome any comments and suggestions on whether some version of that problem can be solved using existing results from equational logic and universal algebra. In particular, are there any general results guaranteeing that, under certain conditions, the reduct of a finitely based algebra is also finitely based? Or, conversely, that if some algebra is not finitely based, then neither is its expansion with a new operator?

To start with, add any contributions you might have as comments to this post. If ever we make substantial enough progress on the above question, anyone who has played a positive role in extending our results will be a co-author of the resulting paper. 

Let PolyConc begin!

Sunday, February 21, 2021

Article by Sergey Kitaev and Anthony Mendes in Jeff Remmel's memory

Sergey Kitaev just shared with me an article he wrote with Anthony Mendes in Jeff Remmel's memory. Jeff Remmel was a distinguished mathematician with a very successful career in both logic and combinatorics. 

The short biography at the start of the article paints a vivid picture of Jeff Remmel's personality, and will be of interest and inspiration to many readers. His hiring as "an Assistant Professor in the Department of Mathematics at UC San Diego at age 25, without officially finishing his Ph.D. and without having published a single paper" was, in Jeff Remmel's own words, a "fluke that will never happen again."

I had the pleasure of making Jeff Remmel's acquaintance when he visited Sergey in Reykjavik and thoroughly enjoyed talking to him about a variety of subjects. He was truly a larger-than-life academic.

Monday, February 08, 2021

Whence do research collaborations (in TCS) arise?

About ten days ago, I gave a talk to my colleagues at the Department of Computer Science at Reykjavik University, introducing my personal (and admittedly very biased) view of the past, present and future of ICE-TCS.

After my presentation, a colleague asked me how she could engage mathematicians and theoretical computer scientists in joint research. I gave her an answer off the top of my head, but it was clear that she was unconvinced and felt that I was avoiding answering her question. (For the record, I basically told her that she should knock on our door, discuss with us the problems she was interested in solving and hope that they are of interest to us. I feel that many research collaborations arise from serendipity and that there is no recipe that is guaranteed to work.)

The thought that she felt that I might have dodged her question prompted me to look back at my own research collaborations and how they came about. The rest of this post is the result of that quick-and-dirty reflection. Let me state right away that my list isn't meant to be exhaustive and that I won't mention many of the collaborations in which I have been lucky to be involved and that have played a crucial role in shaping my academic development.

Reading papers. One of my long-term research collaborations arose from reading a paper written by a colleague. His paper prompted my companion and me to ask ourselves whether we could prove, in a different setting, a result similar to the one he had shown. We succeeded and sent him our paper. Subsequently, we invited him to visit us in Aalborg. That visit marked the start of a collaboration and friendship that has lasted for over 20 years.

Approaching a colleague via email for help in solving a problem. At some point, my companion and I were thinking about a research problem that had frustrated us for a while. I remembered reading a number of papers by a colleague on related topics, so I wrote to him, describing the problem, our attempts at solving it and where we had hit a brick wall. I asked him whether he would be interested in working with us on solving it. He did, and that was one of the lucky breaks I have had in my research career. Once more, that collaboration offer via email led to mutual visits, other joint papers and, IMHO even more importantly, a long-term friendship that extended beyond work.
 

Available funding and building on one's mistakes. One day in 2009, an email in my mailbox alerted me to the availability of substantial funding for research collaboration between universities in country X and those located in Norway, Iceland and Liechtenstein. This opportunity was enticing, as I had never visited country X, so I asked myself: "Is there anyone there we might conceivably work with?" Mulling over that question, I recalled that a colleague from country X had spotted an imprecision in a paper I had coauthored.

I wrote to him, we applied for that funding jointly and got it. That successful grant application provided the funds for many research visits involving several people in our research groups. Those visits resulted in joint papers, another successful grant application and a number of friendships.

Coffee breaks at conferences. I have at least two exhibits under this heading. The first belongs to a previous geological era (1991). I was attending a conference at CMU and asked a colleague what he was working on. He told me about a problem he was tackling, which I knew was also on the radar of a fellow researcher and on which I had started working independently. Eventually, after some email exchanges, that chat over coffee turned into a three-way collaboration that, thanks to my coauthors, produced one of my best papers.

Fast forward to 2017 and I'm in Rome to deliver an invited talk at a small conference. During the coffee break following my presentation, I was approached by a young researcher, with whom I had a number of pleasant conversations during the conference. Some time later, she sent me a draft paper dealing with a topic related to the content of my invited talk. I invited her to visit our research group in Reykjavik and to join the team working on a research project for which we had funding at the time. Those coffee-break conversations led to a collaboration and friendship that I hope will last for a long time. Meeting that colleague has been another of my lucky breaks.
 

Reading groups. Last, but by no means least, let me mention that my first research collaboration that did not involve my thesis supervisors arose when I read Gordon Plotkin's famous "Pisa Notes (On Domain Theory)" with a fellow PhD student. Reading that work led to our first joint paper in 1991 and a companionship that has lasted to this day. I once heard Orna Kupferman give the following tongue-in-cheek advice to young researchers: "Write papers with your twin sister!" Mine might be: "Write papers with your companion in life!"

Let me conclude by saying that serendipity, and an actual friendship that extends beyond the confines of scientific work, were the key aspects of my most pleasant and enduring collaborations. I apologise to the colleagues from whom I have learnt much over the years (former students and postdocs, as well as others) who were the prime movers in research collaborations I did not mention in this post.

I guess that this note provides much more information than my colleague was expecting, but I thought I should put it out for the benefit of the young researchers at Reykjavik University and at the Gran Sasso Science Institute, and of any readers I might have.

How did your research collaborations arise? If you have anything to add to what I wrote above, and I am sure you do, add your contributions as comments to this post.