Wednesday, December 15, 2021

Call for Invited Talk Nominations: 7th Highlights of Algorithms conference (HALG 2022)

On behalf of Keren Censor-Hillel, PC chair for HALG 2022, I am happy to post the call for invited talk nominations for that event. I encourage all members of the algorithms community to submit their nominations. HALG has rapidly become a meeting point for that community in a relaxed workshop-style setting. 

Call for Invited Talk Nominations
7th Highlights of Algorithms conference (HALG 2022)
LSE London, June 1-3, 2022

The HALG 2022 conference seeks high-quality nominations for invited talks that will highlight recent advances in algorithmic research that appeared in a paper in 2021 or later.
To nominate, please email the following information in plaintext only:

  • Basic details: title of paper, authors, conference/arxiv links, suggested speaker.
  • Brief justification: Focus on the benefits to the audience, e.g., quality of results, importance/relevance of topic, clarity of talk, speaker’s presentation skills.

Nominations will be reviewed by the Program Committee to select speakers that will be invited to the conference.
Nomination deadline: January 14, 2022
Please go to for general information on HALG and for information about previous HALG meetings.
Keren Censor-Hillel
PC chair for HALG 2022

Tuesday, December 14, 2021

Faculty positions in Computer Science at Reykjavik University

The Department of Computer Science at Reykjavik University invites applications for full-time, permanent faculty positions at any rank in the fields of Software Engineering, and Design and Development of Computer Games. Outstanding candidates in other areas of Computer Science are encouraged to apply as well. 

We are looking for energetic, highly qualified academics with a proven international research record and excellent potential in their field of study. The primary evaluation criterion is scientific quality. We particularly welcome applications from researchers who have a strong network of research collaborators, can strengthen internal collaborations within the department, have the proclivity to improve their academic environment and its culture, and generally have potential to flourish in our environment.

Apart from developing their research career, the successful applicants will play a full part in the teaching and administrative activities of the department, teaching courses and supervising students at both graduate and undergraduate level. Applicants with a demonstrated history of excellence in teaching are preferred. Salary and rank are commensurate with experience. Among other benefits, Reykjavik University offers its research staff the option to take research semesters (sabbaticals) after three years of satisfactory teaching and research activity. The positions are open until filled, with intended starting date in August 2022. Later starting dates can be negotiated. 

The deadline for applications is January 30, 2022. The review of the applications will begin in late January 2022, and will continue until the positions are filled.

See here for full details and information on how to apply. 

Near-Optimal Distributed Degree+1 Colouring

If you are interested in graph colouring, and specifically in distributed algorithms for that classic problem, I strongly recommend the recent paper posted by Magnús M. Halldórsson, Fabian Kuhn, Alexandre Nolin and Tigran Tonoyan. (For the record, three of the authors work, or have worked, in my department at Reykjavik University, so I am biased.) 

The above-mentioned paper addresses the problem of finding a proper colouring of a graph when each node v has a palette of degree(v)+1 colours at its disposal. The paper resolves an open problem in distributed colouring and simplifies known results in doing so. As a byproduct of their results, the authors obtain improved streaming and sub-linear time algorithms. See the paper for much more information on the context for this result and the technical details. 
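For readers new to the problem: a palette of degree(v)+1 colours always suffices, as a simple sequential greedy argument shows. The challenge the paper tackles is achieving this distributively; the sketch below is only a plain sequential illustration of why such palettes work, not the authors' algorithm.

```python
def greedy_degree_plus_one_colouring(adj):
    """Properly colour a graph where node v has the palette {0, ..., deg(v)}.

    adj maps each node to the set of its neighbours.  When v is coloured,
    at most deg(v) of its palette colours are blocked by neighbours, so at
    least one colour in {0, ..., deg(v)} is always free.
    """
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        # v's palette has len(adj[v]) + 1 colours, so next() always succeeds
        colour[v] = next(c for c in range(len(adj[v]) + 1) if c not in used)
    return colour

# the triangle: every node has degree 2, hence a palette of 3 colours
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
col = greedy_degree_plus_one_colouring(triangle)
assert all(col[u] != col[v] for u in triangle for v in triangle[u])
```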

Congratulations to the authors!

Saturday, December 04, 2021

TheoretiCS: A new open-access journal in Theoretical Computer Science

I am delighted to share with all the readers of this blog the news of the launch of TheoretiCS, a new, top-quality, open-access journal dedicated to the whole of Theoretical Computer Science. The journey leading to the establishment of this new journal has been long, and the fact that this path eventually reached its destination is due to the sterling efforts and foresight of a number of people within our community. The seeds of TheoretiCS were sown within the Council of the European Association for Theoretical Computer Science during my two terms as president of that association. At that time, Jos Baeten and Antonin Kucera worked hard to try to set up an open-access journal of the association. That journal never saw the light of day, but the process that eventually led to TheoretiCS was afoot. After some time, Thomas Schwentick took the baton in the relay race and took decisive steps in turning the establishment of this new journal into a true community effort. It is fair to say that the launch of this journal has involved an unprecedented level of cooperation amongst representatives of leading conferences from across the entire Theoretical Computer Science community. I am immensely grateful to everyone who participated in that work for the effort they put into making TheoretiCS a reality, and I apologise for not explicitly mentioning in this post all the colleagues who contributed. You can find more information on TheoretiCS in the text below, which I received from Jos Baeten, founding member-at-large of the Advisory Board of the TheoretiCS Foundation. 

So, what can you do to support this new journal? There are at least two things that each of us can do and that I believe are important. 

  1. Help TheoretiCS reach its quality objectives by submitting your best work to it and by encouraging your colleagues to do so too. TheoretiCS aims at publishing articles of a very high quality, and at becoming a reference journal on par with the leading journals in the area. To be accepted, a paper must make a significant contribution of lasting value to a relevant area of TCS, and its presentation must be of high quality. This is elaborated upon in the guiding principles spelled out below. I let you reach your own conclusions about the quality and the breadth of the editorial board of TheoretiCS, which is stellar IMHO.
  2. Spread the news of the launch of TheoretiCS within your networks. Despite its many years of existence and the role it plays within our community, I still meet colleagues (even in Theoretical Computer Science) who do not know that LIPIcs exists and who claim that publishing conference proceedings with commercial publishers is a sign of scientific quality. 

If you happen to be in a position to do so, you can support the TheoretiCS foundation, which is a German not-for-profit organization, by making a donation. 

I look forward to seeing the developments of TheoretiCS. Even though predicting the future is difficult, I have no doubt that the journal will become a coveted outlet, with the help of the community it aims to serve. Welcome to TheoretiCS and good luck!

=== Additional information about the journal ===

---  Guiding principles and objectives  ---

- We believe that our field (and science in general) needs more 'virtuous' open-access journals, a whole eco-system of them, with various levels of specialization and of selectivity. We also believe that, along with the structuring role played by conferences in theoretical computer science, we collectively need to re-develop the practice of journal publications.

- The scope of TheoretiCS is the whole of Theoretical Computer Science, understood in an inclusive meaning (concretely: including, but not restricted to, the Theory of Computing and the Theory of Programming; or equivalently, the so-called TCS-A and TCS-B, reminiscent of Jan van Leeuwen's Handbook of Theoretical Computer Science).

- Our aim is to rapidly become a reference journal and to contribute to the unity of the Theoretical Computer Science global community. In particular, we will seek to publish only papers that make a very significant contribution to their respective fields, that strive to be accessible to a wider audience within theoretical computer science, and that are, generally, of a quality on par with the very best journals in the field.

- TheoretiCS adheres to the principles of 'virtuous' open-access: there is no charge to read the journal, nor to publish in it. The copyright of the papers remains with the authors, under a Creative Commons license.

---  Editorial Board  ---

The inaugural Editors-in-Chief are Javier Esparza (TU München) and Uri Zwick (Tel Aviv U.). The entire Editorial Board can be seen at (see below as well).

---  Organization and a bit of history  ---

The project built on groundwork laid in the Council of the EATCS; it started in 2019 and underwent a long gestation. From the start, we wanted to have a thorough discussion with a wide representation of the community on how to best implement the guiding principles sketched above. It was deemed essential to make sure that all fields of theoretical computer science would feel at home in this journal, and that it would be recognized as a valid venue for publication all over the world.

This resulted in the creation of an Advisory Board, composed of representatives of most of the main conferences in the field (currently APPROX, CCC, COLT, CONCUR, CSL, FOCS, FoSSaCS, FSCD, FSTTCS, ICALP, ICDT, ITCS, LICS, MFCS, PODC, SoCG, SODA, STACS, STOC, TCC) and of so-called members-at-large. The composition of this Advisory Board, which may still evolve, can be consulted at

---  Logistics and answers to some natural questions  ---

- The journal is published by the TheoretiCS Foundation, a non-profit foundation established under German law.

- TheoretiCS is based on the platform, in the spirit of a so-called overlay journal.

- The Advisory Board, together with the Editors-in-Chief and the Managing Editors, devoted much of their efforts to designing and implementing an efficient 2-phase review system: efficient in terms of the added value it brings to the published papers and their authors, and of the time it takes. Yet, as this review system relies in an essential fashion on the work and expertise of colleagues (as in all classical reputable journals), we cannot guarantee a fixed duration for the evaluation of the papers submitted to TheoretiCS.

- Being charge-free for authors and readers does not mean that there is no cost to publishing a journal. These costs are covered for the foreseeable future by academic institutions (at the moment, CNRS and Inria, in France; others may join).

- The journal will have an ISSN, and each paper will have a DOI. There will be no print edition.

---  Editorial Board  ---

Editors-in-Chief

    Javier Esparza (Technische Universität München, Munich)
    Uri Zwick (Tel Aviv University, Tel Aviv)

Managing Editors

    Antoine Amarilli (Institut polytechnique de Paris, Télécom Paris)
    Nathanaël Fijalkow (CNRS, Bordeaux, and The Alan Turing Institute of Data Science, London)

Editors

    Martin Abadi, Google, USA
    Andris Ambainis, U. of Latvia
    Albert Atserias, UPC, Barcelona
    Haris Aziz, UNSW, Sydney
    David Basin, ETH Zürich
    Patricia Bouyer, CNRS, Paris-Saclay
    Nicolò Cesa-Bianchi, Università di Milano
    Anuj Dawar, Cambridge University
    Luc Devroye,  McGill University, Montreal
    Jacob Fox, Stanford University
    Mohsen Ghaffari, ETH Zürich
    Georg Gottlob, Oxford University
    Anupam Gupta, Carnegie Mellon University
    Venkatesan Guruswami, Carnegie Mellon University
    Johan Håstad, KTH, Stockholm
    Ravi Kannan, Microsoft Research India, Bengaluru
    Anna Karlin, University of Washington, Seattle
    Ken-ichi Kawarabayashi, National Institute of Informatics, Tokyo
    Valerie King, University of Victoria
    Robert Kleinberg, Cornell University
    Naoki Kobayashi, University of Tokyo
    Elias Koutsoupias, Oxford University
    Xavier Leroy, Collège de France, Paris
    Katrina Ligett, Hebrew University, Jerusalem
    Rupak Majumdar, MPI-SWS, Kaiserslautern
    Joseph Mitchell, State University of New York at Stony Brook
    Mehryar Mohri, Google and New York University
    David Mount, University of Maryland
    Anca Muscholl, Université de Bordeaux
    Danupon Nanongkai, University of Copenhagen
    Moni Naor, Weizmann Institute, Rehovot
    Catuscia Palamidessi, Inria, Palaiseau
    Michał Pilipczuk, University of Warsaw
    Jean-Francois Raskin, Université Libre de Bruxelles
    Peter Sanders, KIT, Karlsruhe
    Davide Sangiorgi, Università di Bologna
    Nitin Saxena, IIT Kanpur
    Alistair Sinclair, UC Berkeley
    Ola Svensson, EPF Lausanne
    Gregory Valiant, Stanford University
    Stephanie Weirich, University of Pennsylvania
    Virginia V. Williams, Massachusetts Institute of Technology
    James Worrell, Oxford University
    Mihalis Yannakakis, Columbia University, New York

Wednesday, November 24, 2021

Inria Innovation Prize 2021 to Mateescu, Garavel, Lang and Serwe

I just learnt that Radu Mateescu, Hubert Garavel, Frédéric Lang and Wendelin Serwe from the Construction of Verified Concurrent Systems (Convecs) project team at INRIA Grenoble – Rhône-Alpes Centre have been recently awarded the Inria Innovation Prize – Académie des sciences – Dassault Systèmes. Their work contributes to the development of the CADP toolbox for modelling and verifying parallel and distributed systems. The aim of that project is to automatically detect design flaws in highly complex systems. 

Readers of this blog will be as delighted as I am by this news. CADP is one of the tools from the concurrency community that has the longest history, dating back to its early releases in 1989. It has been used to good effect in a variety of applications, is still under continuous development and makes excellent use in practice of classic tools from concurrency theory. By way of example, let me mention the recent successes by the Convecs team in dealing with difficult challenges posed by Bernhard Steffen and his team on the evaluation of CTL and LTL formulae on large products of automata. (See, for instance, the news items here and here.) Traditional model checkers fail on those challenges because the state space of the product automata is too large for them. However, a wise use of bisimulations and congruence results allows CADP to solve many of those challenges. Interested readers might also wish to peruse the slides at 

Congratulations to the whole CADP team and to the concurrency community for this award!

Thursday, September 16, 2021

PhD scholarship in Data Science for sustainability at Reykjavik University

Department of Computer Science, Reykjavik University 

PhD scholarship in Data Science for sustainability by predicting food consumption to reduce food waste 

The Department of Computer Science at Reykjavík University is looking for a PhD student to build, apply and evaluate a global model that predicts future food consumption by geographic area. The goal of the project is twofold: (1) to reduce food waste in restaurants and grocery stores by using global machine learning algorithms, and (2) to understand food consumption trends in restaurants and grocery stores by analyzing the global model’s input data and results. The ideal candidate should possess data science expertise, be passionate about sustainability and enthusiastic about research. The project is a collaboration with GreenBytes and will be carried out in Reykjavik, Iceland.

See for full details and for information on how to apply.

Saturday, August 28, 2021

Interview with CONCUR 2021 ToT Award Recipients: David Janin and Igor Walukiewicz

This is the fourth and last instalment of the series of interviews with the 2021 CONCUR Test-of-Time Award recipients, which I have conducted with the great help of Nathalie Bertrand and Nobuko Yoshida. (See here, here and there for the previous ones.) In it, I interviewed David Janin and Igor Walukiewicz.

I thank David and Igor for their very stimulating answers to my questions and for painting such a rich picture of the context for their award-winning work and of the subsequent developments related to it. Enjoy!

Note: The interview has been slightly edited for clarity and to correct some small typos. Thanks to Ilaria Castellani for pointing out several typos that had escaped my attention.

Luca: You receive the CONCUR ToT Award 2021 for your paper On the Expressive Completeness of the Propositional mu-Calculus with respect to Monadic Second Order Logic, which appeared at CONCUR 1996. In that article, you showed what I consider to be a fundamental result, namely that the mu-calculus and the bisimulation-invariant fragment of  monadic second-order logic (MSOL) have the same expressiveness over transition systems. Could you tell us how you came to study that question and why do you think that such a natural problem hadn't been answered before your paper?

David and Igor: At that time, we were interested in automata characterizations of the expressive power of MSOL and the mu-calculus. In 1988, Damian Niwinski showed that, over binary trees, the mu-calculus and MSOL are expressively equivalent. When it appeared, the result was quite surprising, even unexpected. The two logics are not equivalent on unranked trees, where the number of children of a node is not fixed. In consequence, the logics are not equivalent over the class of all transition systems. 

In 1983, van Benthem showed that modal logic is equivalent to the bisimulation-invariant fragment of first-order logic. When we learned about this result, we realized that our automata models could be used to prove a similar statement for the mu-calculus.

Van Benthem's method could not be applied to MSOL; the automata-based method seemed to be the only way to prove the result.

Luca: I am interested in how research collaborations start, as I like to tell "research-life stories" to PhD students and young researchers. Could you tell us how you started your collaboration?

David: I came to meet Igor when he was visiting Bordeaux in fall 93, presenting his first completeness result on a proof system for the mu-calculus (LICS 93). I was myself starting a PhD considering modal logic for specifying (various forms of) nondeterminism in connection with (power)domain theory (FST&TCS 93).

We eventually met again in Auvergne (in the centre of France) for the Logic Colloquium in summer 94. There, spending time in the park near the conference venue or walking around the volcanoes, we elaborated a notion of nondeterministic automata for the modal mu-calculus. This result can be seen as extending the notion of "disjunctive" normal form from propositional logic to the modal mu-calculus. I remember that Igor later used this result for his second completeness result for Kozen’s proposed proof system for the mu-calculus (LICS 95). Thanks to Igor, this was for me the occasion to learn in depth the link between the mu-calculus and tree automata.

Incidentally, Johan van Benthem was also attending this Logic Colloquium, and we both attended his lecture about the bisimulation-invariant fragment of FO. Although we did not yet realize that we could generalize his result to MSO, this surely increased our own understanding of logic.

Our first result (Automata for the mu-calculus) was presented at MFCS in 95 in Prague, where Igor and I met again. It could have been there, or a little later, that we eventually postulated our bisimulation-invariance result. However, proof arguments were only found later while exchanging emails. It was a bit amazing for me that we could discuss that way across Europe: email started to be heavily used only in the late 80's. In my head, Poland was far, far away from France.

Yet our collaboration was eventually in line with an ongoing collaboration between Warsaw and Bordeaux, involving researchers like Arnold (my supervisor) and Niwinski (Igor's mentor), both major specialists in the area of automata and logics. Somehow, as a follow-up, together with Aachen (Grädel and Thomas) and many other sites, the GAMES European network was later created, and, almost at the same time, Igor came to Bordeaux as a CNRS researcher.

Luca: As you mentioned earlier, van Benthem has shown that modal logic has the same expressive power as the bisimulation-invariant fragment of first-order logic. In some sense, one may consider your main result as the extension of van Benthem's theorem to a setting with fixed points. Could you briefly describe at a high level the challenges that fixed points created in your work? To your mind, what was the main technical achievement or technique in your paper?

David and Igor: Sure, our result is similar to van Benthem's, and, as mentioned above, his own presentation in 93 was very inspiring. However, his proof relies on the compactness of first-order logic and cannot be adapted to monadic second-order logic. In our proof, we used automata-theoretic techniques.

In our previous work, we had developed automata models for MSOL and the mu-calculus on unranked trees. Every transition system is bisimulation equivalent to the tree obtained by unfolding it. Crucially for our result, it is also equivalent to a tree in which every subtree is duplicated infinitely many times. A short pumping argument shows that, on such trees, the two automata models are equivalent.

Luca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? Is there any result obtained by other researchers that builds on your work and that you like in particular?

David and Igor: Around that time, Marco Hollenberg and Giovanna D'Agostino used our automata-theoretic methods to show the uniform interpolation property for the mu-calculus.

In collaboration with Giacomo Lenzi (a postdoc in Bordeaux from Pisa), David considered the bisimulation-invariant fragments of various levels of the monadic quantifier alternation hierarchy. It turns out that monadic Sigma_1 corresponds to reachability properties and monadic Sigma_2 to Büchi properties, respectively the first and second levels of the mu-calculus hierarchy.

It could be the case that the whole mu-calculus can be translated into monadic Sigma_3 formulas -- this is true when restricting to deterministic transition systems. However, with nondeterminism, such a result seems difficult to achieve.

Incidentally, Giacomo and David also proved that adding limited counting capacity to modalities yields a fixed-point calculus equivalent to the unravelling-invariant fragment of MSO (LICS 2001). 

In 1999, in collaboration with Erich Grädel, Igor used similar techniques to define and study guarded fixed-point logic. Subsequently, several other fixed-point logics of this kind were proposed, with the most expressive one probably being the guarded negation logic with fixed points of Bárány, ten Cate, and Segoufin from 2015. These works all use automata-theoretic methods to some extent.

Luca: Your paper was written while Igor was at BRICS at the University of Aarhus. Igor, what was it like to be at BRICS@Aarhus at that time? What role do you think centres like BRICS played, and can play, in the development of theoretical computer science? Do you think that your stay at BRICS had a positive impact on your future career?

Igor: My stay at BRICS definitely had a beneficial impact on me. At that time, BRICS was one of the most active centres worldwide in theoretical computer science. Being able to see and talk to so many different people, and being exposed to so many different ideas, was very enriching. BRICS was a meeting place that allowed scientists to have a better and larger vision of our field. BRICS contributed to the development of many of the people involved, as well as to the excellence of Aarhus, and even Denmark as a whole, in our field.

Luca: I have been brought up in the concurrency-theory tradition and I feel that bisimulation-invariant properties are the interesting ones over transition systems. Do you think that we actually "need" logics that allow one to specify properties of transition systems that are not bisimulation invariant?

David: That was the idea indeed, and the mu-calculus is the right logic for that. From a mathematical perspective, bisimulation is a fairly natural definition. In some sense, bisimulation equivalence is the greatest co-congruence.

However, I always suspected that bisimulation is too fine grained as an equivalence for concurrency. When modeling realistic systems, nondeterminism arises either as controllable (angelic) or uncontrollable (demonic) choice. In this case, the right equivalence to be considered is simulation equivalence.

The funny thing is that David Park, who "invented" bisimulation, was actually studying the equivalence of J.R. Büchi's deterministic string automata, where, because of determinism, bisimulation turns out to coincide with simulation equivalence. It was only after Matthew Hennessy and Robin Milner's work in concurrency that bisimulation was applied to nondeterministic behaviours.
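David's point about bisimulation being finer than simulation equivalence can be made concrete with a small sketch of my own (the encoding and state names are illustrative, not from the interview): the greatest bisimulation on a finite labelled transition system can be computed as a greatest fixed point by starting from the full relation and deleting pairs that violate the transfer condition. The classic processes a.(b+c) and a.b + a.c are simulation equivalent but not bisimilar, and the computation detects this.

```python
def greatest_bisimulation(states, trans):
    """Greatest bisimulation on a finite LTS, as a greatest fixed point.

    trans is a set of (source, action, target) triples.  Start from the
    full relation and repeatedly remove pairs (s, t) for which one side
    has a transition the other cannot match, until nothing changes.
    """
    def succ(s, a):
        return [t for (u, b, t) in trans if u == s and b == a]
    actions = {a for (_, a, _) in trans}
    rel = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = all(
                all(any((s2, t2) in rel for t2 in succ(t, a)) for s2 in succ(s, a))
                and
                all(any((s2, t2) in rel for s2 in succ(s, a)) for t2 in succ(t, a))
                for a in actions)
            if not ok:
                rel.discard((s, t))
                changed = True
    return rel

# a.(b + c) versus a.b + a.c: simulation equivalent, but not bisimilar
trans = {('p0', 'a', 'p1'), ('p1', 'b', 'p2'), ('p1', 'c', 'p3'),
         ('q0', 'a', 'q1'), ('q0', 'a', 'q2'),
         ('q1', 'b', 'q3'), ('q2', 'c', 'q4')}
states = {s for (s, _, _) in trans} | {t for (_, _, t) in trans}
rel = greatest_bisimulation(states, trans)
assert ('p0', 'q0') not in rel   # the classic counterexample
assert ('p2', 'q3') in rel       # deadlocked states are bisimilar
```

This naive refinement is quadratic in the number of pairs per pass; partition-refinement algorithms do much better, but the fixed-point view matches the "greatest co-congruence" reading above.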

Igor: In the context of XML and semistructured data, bisimulation is not relevant. One of the prominent queries in XPath is whether a value appears twice -- that is clearly not a bisimulation-invariant property. So, while XPath looks very close to PDL or the mu-calculus, many techniques and questions are very different. The other context that comes to mind is controller synthesis, where we ask for a transition system of a specific shape, for example, one with self-loops on certain actions. Such self-loops represent the invisibility of the action to the controller.

Luca: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

David: In Logic for Computer Science, there have always been two kinds of approaches: model theory, which eventually led to model checking, and proof theory, which eventually led to typed programming languages.

Twenty years later, aiming at designing and implementing real concurrent systems, especially for interactive arts, I realized that the latter approach was (at least for me) a lot more effective. Building concurrent systems by synchronizing arbitrary sub-systems sounded to me like unstructured programming; it was essentially unmaintainable. Monads and linear types, among many other approaches in typed functional programming, surely offered interesting alternatives to process-calculi approaches.

Igor: In the context of the paper we discuss here, I am surprised by the developments around the model-checking problem for the mu-calculus. After some years of relative calm, Calude, Jain, Khoussainov, Li and Stephan made an important breakthrough in 2017. Yet, despite much activity after this result, research on the problem seems to have hit one more barrier. Another old prominent problem is the decidability of the alternation hierarchy for the mu-calculus. The problem is: given a formula and a number of alternations between the least and the greatest fixed points, decide whether there is an equivalent formula with this number of alternations. Even when the number of alternations is fixed, we do not know the answer. Among others, Thomas Colcombet and Christof Loeding have done very interesting work on this subject.

Luca: What advice would you give to a young researcher who is keen to start working on topics related to automata theory and logic in computer science?

David: From a distance, our result seems to me to be a combination of both technique and imagination: technique for mastering known results, and imagination for finding original open problems, especially between research fields that are not (yet) known to be deeply related. As a matter of fact, I felt lucky to find such a fresh open problem that was (probably) a lot easier to solve than many other well-known hard problems.

Technique comes from hard work. It is obviously essential but somehow easy to teach and evaluate in academia. Imagination comes from curiosity. It is still essential but a lot more difficult to teach. So young researchers must develop by themselves their own imagination and curiosity.

In the automata-theory branch of logic in computer science, the remaining open problems seem fairly hard, so I believe that it is the imagination of young researchers that will make the difference in setting up new interesting directions of research, especially those researchers who are ready to look aside, towards other areas of logic in computer science and, because they can be a considerable source of motivation, towards funny applications.

Igor: Naturally, the field is much broader these days than it was 25 years ago. It is crucial to master some techniques, and working on variations of already solved problems is a good method for doing so. Yet, I think it is important to escape the cycle of constant modifications of existing problems and their solutions. I would suggest that, at some moment, one should find an important open problem that one is passionate about and spend a considerable effort on it. I admit that this is a matter of taste, personality, and having a sufficiently comfortable situation to afford such a risk. Another good option is to look at the frontiers with other areas: distributed computing, semantics, control theory, security.

Monday, August 23, 2021

Interview with CONCUR 2021 ToT Award Recipients: Ahmed Bouajjani and Javier Esparza

I am delighted to post Nathalie Bertrand's splendid interview with CONCUR 2021 Test-of-Time Award recipients Ahmed Bouajjani and Javier Esparza. (Ahmed and Javier receive the award for a paper they co-authored with the late Oded Maler.)

Thanks to Ahmed and Javier for their answers and to Nathalie for conducting such an interesting interview, which is rich in information on the technical context for, and impact of, their work as well as on their research collaborations over the years. I now leave the floor to Nathalie and her interviewees. Enjoy!

This post is devoted to the interview I was honored to conduct with Ahmed Bouajjani and Javier Esparza for their paper with Oded Maler on reachability in pushdown automata.

Nathalie: You receive the CONCUR ToT Award 2021 for your paper with Oded Maler Reachability Analysis of Pushdown Automata: Application to Model-Checking, which appeared at CONCUR 1997. In that article, you develop symbolic techniques to represent and manipulate sets of configurations of pushdown automata, or even of the broader class of alternating pushdown systems. The data structure you define to represent potentially infinite sets of configurations is coined alternating multi-automata, and you provide algorithms to compute the set of predecessors (pre*) of a given set of configurations. Could you briefly explain to our readers what alternating multi-automata are?

Ahmed: The paper is based on two ideas. The first one is to use finite automata as a data structure to represent infinite sets of configurations of the pushdown automaton. We called them multi-automata because they have multiple initial states, one per control state of the pushdown automaton, but there is nothing deep there. The second idea is that this representation is closed under the operation of computing predecessors, immediate or not. So, given a multi-automaton representing a set of configurations, we can compute another multi-automaton representing all their predecessors. If you compute first the immediate predecessors, then their immediate predecessors, and so on, you don't converge, because your automata grow bigger and bigger. The surprising fact is that you can compute all predecessors in one go by just adding transitions to the original automaton, without adding any new states. This guarantees termination. Later we called this process "saturation".

Once you can compute predecessors, it is not too difficult to obtain an algorithm for LTL model checking. But for branching-time logics you must also be able to compute intersections of sets of configurations. That's where alternation kicks in: we use it to represent intersections without having to add new states.
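For readers curious about what saturation looks like concretely, here is a minimal sketch, not taken from the paper: it assumes a pushdown system given as rules ((p, γ), (p', w)) with |w| ≤ 2, and a multi-automaton represented simply as a set of transitions whose initial states are the control states; all names are illustrative.

```python
def post_states(delta, start, word):
    """States reachable in the multi-automaton from `start` by reading `word`."""
    current = {start}
    for letter in word:
        current = {q for s in current
                     for (s2, a, q) in delta if s2 == s and a == letter}
    return current

def pre_star(rules, delta):
    """Saturate the multi-automaton `delta` (a set of (state, letter, state)
    transitions) so that it accepts pre*(C), where C is the configuration set
    it initially accepts.  `rules` are pushdown rules ((p, gamma), (p2, w))
    with len(w) <= 2.  Only transitions are added; no new states."""
    delta = set(delta)
    changed = True
    while changed:
        changed = False
        for (p, gamma), (p2, w) in rules:
            # if the automaton can read w from p2 to q, then <p, gamma q...>
            # is a predecessor of an accepted configuration
            for q in post_states(delta, p2, w):
                t = (p, gamma, q)
                if t not in delta:
                    delta.add(t)
                    changed = True
    return delta
```

For example, with finals = {'p0'} and an initially empty transition set (so only ⟨p0, ε⟩ is accepted), the rules (p0, a) → (p0, ε) and (p1, b) → (p0, a) saturate to accept all ⟨p0, aⁿ⟩ and ⟨p1, b·aⁿ⟩.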

Nathalie: Could you also tell us how you came to study the question addressed in your award-winning article?  Which of the results in your paper did you find most surprising or challenging?

Javier: In the late 80s and early 90s many people were working on symbolic model-checking, the idea of using data structures to compactly represent sets of configurations. BDDs for finite-state model-checking were a hot topic, and for quite a few years dominated CAV. BDDs can be seen as acyclic automata, and so it was natural to investigate general finite automata as a data structure for infinite-state systems. Pierre Wolper and his group also did very good work on that.

About your second question, when I joined the team Ahmed and Oded had already been working on the topic for a while, and they had already developed the saturation algorithm. When they showed it to me I was blown away, it was so beautiful. A big surprise.

Nathalie: In contrast to most previous work, your approach to model checking of pushdown systems treats linear-time and branching-time logics in a uniform way. Did you pursue this objective in other contributions?

Javier: I didn't. The reason is that I was interested in concurrency, and when you bring together concurrency and procedures even tiny fragments of branching-time logics become undecidable. So I kind of stuck to the linear-time case. Did you work on branching-time, Ahmed?

Ahmed: Somehow yes (although it is not precisely about linear vs. branching time properties), in the context of Regular Model Checking, a uniform framework for symbolic analysis of infinite-state systems using automata-based data structures. There, we worked on two versions, one based on word automata for systems where configurations can be encoded as words or vectors of words, such as stacks, queues, etc., and another one based on tree automata for configurations of a larger class of systems like heap manipulating programs, parametrized systems with tree-like architectures, etc.  The techniques we developed for both cases are based on the same principles.

Nathalie: As is often the case, the paper leaves some open questions. For instance, I believe the precise complexity of verifying pushdown systems against CTL specifications was left open: the problem is PSPACE-hard and in EXPTIME. Did you or others close this gap since? Did your techniques help to establish the precise complexity?

Ahmed: In our paper we showed that model checking the alternating modal mu-calculus is EXPTIME-hard. CTL is less expressive, and it was the most popular logic in the verification community at the time, so it was natural to ask if it had lower complexity.

Javier: Yes, as a first step in the paper we showed that a fragment of CTL, called EF, had PSPACE complexity. But I made a mistake in the proof, which was later found by Igor Walukiewicz. Igor cracked the problem in a paper at FSTTCS'00. It turns out that EF is indeed PSPACE-complete (so at least we got the result right!), and full CTL is EXPTIME-complete. I wish Igor had used our technique, but he didn't: he applied the ideas of his beautiful CAV'96 paper on parity pushdown games.

Nathalie: It is often interesting to understand how research collaborations start as it can be inspiring to PhD students or colleagues. Could you tell us how you started your collaboration on the award-winning paper? Did you continue working together (on a similar topic or on something totally different) after 1997?

Ahmed: Javier and I first met in Liège for CAV 95. French universities have this program that allows us to bring foreign colleagues to France for a month as invited professors, and I invited Javier to Grenoble in 96.

Javier: It was great fun; Verimag was a fantastic place to do verification, we both liked cinema, Ahmed knew all restaurants, and the Alps were beautiful. Ahmed invited me again to Grenoble in 97. This time I came with my wife, and we again had a great time.

When I arrived in Grenoble in 96, Ahmed and Oded had already written most of the work that went into the paper. My contribution was not big: I only extended the result to the alternation-free mu-calculus, which was easy, and proved a matching lower bound. I think that my main contribution came after this paper. Ahmed and Oded were too modest: they thought the result was not so important, but I found it beautiful, and I thought it'd be great to implement the LTL part and build a model checker for programs with procedures. We could do that thanks to Stefan Schwoon, who started his PhD in Munich around this time (he is now at Paris-Saclay) and was as good a theoretician as a tool builder. Around 2000 he implemented a symbolic version of the algorithms in MOPED, which was quite successful.

Ahmed: In 99 I moved to LIAFA in Paris, and I remember your kids were born.

Javier: Yes, you sent my wife beautiful flowers!

Ahmed: But we kept in touch, and we wrote a paper together at POPL'03 with my PhD student Tayssir Touili, now a professor in Paris. We extended the ideas of the CONCUR paper to programs with both procedures and concurrency. Other papers came, the last in 2008.

Javier: And Ahmed is visiting Munich next year, pandemic permitting, so I hope there'll be more.

Nathalie: How would you say this award-winning paper influenced your later work? Did any of your subsequent research build explicitly on it?

Ahmed: This paper was the first of many I have co-authored on the verification of infinite-state systems using automata. All of them use various automata classes to represent sets of configurations, and compute reachable configurations by the iterative application of automata operations. We call these procedures accelerations: instead of computing a fixed point of a function by repeated iteration, you "jump" to the fixed point after finitely many steps, or at least converge faster. Accelerations were implicitly present in the CONCUR'97 paper. They have also been used by many other authors, for example Bernard Boigelot and Pierre Wolper.

My first paper on accelerations was with Peter Habermehl, my PhD student at the time and now at IRIF.  We worked on the verification of systems communicating through queues, using finite automata with Presburger constraints as data structure.  Then came several works on communicating systems with my student Aurore Annichini and Parosh Abdulla and Bengt Jonsson from Uppsala. As a natural continuation, with the Uppsala group and my student Tayssir Touili we developed the framework of Regular Model Checking. And then, with Peter Habermehl, Tomas Vojnar and Adam Rogalewicz from TU Brno, we extended Regular Model Checking to Abstract Regular Model Checking, which proved
suitable and quite effective for the analysis of heap manipulating programs.

We also applied the CONCUR'97 results to the analysis of concurrent programs. The first work was the POPL'03 paper by Javier, Tayssir, and me on an abstraction framework. Two years later, Shaz Qadeer and Jakob Rehof proposed bounded-context-switch analysis for bug detection. That paper created a line of research, and we contributed to it in many ways, together with Shaz, Mohamed Faouzi Atig, who was my student then and is now a professor at Uppsala, and others.

Javier: The CONCUR'97 paper was very important for my career. As I said before, it directly led to MOPED, and later to jMOPED, a version of MOPED for Java programs developed by Stefan Schwoon and Dejvuth Suwimonteerabuth. Then, Tony Kucera, Richard Mayr, and I asked ourselves if it was possible to extend probabilistic verification to pushdown systems, and wrote some papers on the topic, the first in LICS'04. This was just the right moment, because at the same time Kousha Etessami and Mihalis Yannakakis started to write brilliant papers on recursive Markov chains, an equivalent model. The POPL'03 paper with Ahmed and Tayssir also came, and it triggered my work on Newtonian program analysis with two very talented PhD students, Stefan Kiefer, now in Oxford, and Michael Luttenberger, now my colleague at TUM.  So the CONCUR'97 paper was at the root of a large part of my work of the next 15 years.

Nathalie: Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Javier: After implementing MOPED, Stefan worked with Tom Reps on an extension to weighted pushdown automata, the Weighted Pushdown Library. Tom and Somesh Jha also found beautiful applications to security. This was great work. I was also very impressed by the work of Luke Ong and his student Matthew Hague. In 97 Ahmed and I tried to apply the saturation method to the full mu-calculus but failed; we thought it couldn't be done. But first Thierry Cachat gave a saturation algorithm for Büchi pushdown games, then Luke and Matthew cracked the mu-calculus problem, and then they even extended it to higher-order pushdown automata, together with Arnaud Carayol, Olivier Serre, and others. That was really surprising.

Ahmed: I agree. I'd also mention Qadeer and Rehof's TACAS'05 paper. They built on our results to prove that context-bounded analysis of concurrent programs is decidable. They initiated a whole line of research.

Nathalie: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

Javier: Ten years ago I would have said the complexity of the reachability problem for Petri nets and of solving parity games, but now the first one is solved and the second almost solved! Now I don't have a specific problem, but in recent years I've been working on parameterized systems with an arbitrary number of agents, and many aspects of the theory are still very unsatisfactory. Automatically proving a mutual exclusion algorithm correct for a few processes was already routine 20 years ago, but proving it for an arbitrary number is still very much an open problem.

Ahmed: I think that invariant and procedure-summary synthesis will remain hard and challenging problems that we need to investigate with new approaches and techniques. It is hard to discover the right level of abstraction at which the invariant must be expressed, which parts of the state are involved and how they are related. Of course the problem is unsolvable in general, but finding good methodologies for tackling it depending on the class of programs is an important issue. I think that the recent emergence of data-driven approaches is promising. The problem is to develop well-principled methods combining data-driven techniques and formal analysis that are efficient and that offer well-understood guarantees.

Nathalie: Would you have an anecdote or a tip from a well-established researcher to share to PhD students and young researchers?

Javier: Getting this award reminded me of the conference dinner at CAV 12 in St. Petersburg. I ended up at a table with some young people I didn't know. The acoustics was pretty bad.  When the CAV Award was being announced, somebody at the table asked "What's going on?", and somebody else answered "Not much, some senior guys getting some award". Never take yourself very seriously ...

Nathalie: Oded Maler passed away almost 3 years ago. Do you have any memory of him to share with our readers?

Ahmed: Oded was very amused by the number of citations. He used to say "Look at all the damage we've done."

Javier: Yes, Oded had a wonderful sense of humor, very dry and deadpan. When I arrived in Grenoble it took me a few days to learn how to handle it!  I miss it very much.

Wednesday, August 11, 2021

The First: A Movie

I had the great pleasure to watch Ali Hossaini and Luca Viganò's short movie "The First" that has been released by National Gallery X. "The First" is a coproduction of UKRI TAS Hub, RUSI and National Gallery X. It was produced as a keynote for Trusting Machines?, a conference on how to develop trustworthy AI.

For the little that it may be worth, I strongly recommend the movie. Do also watch the conversation between National Gallery X co-director Ali Hossaini and Luca Viganò, possibly before enjoying a second viewing of the movie. You can also read the paper "Gnirut: The Trouble With Being Born Human In An Autonomous World" mentioned in that conversation.

I fully subscribe to Luca Viganò's vision of using artistic tools to explain computer science concepts to the public, whose members will have to make use of the technological artifacts based on those concepts. Indeed, we live in a world in which technologists will increasingly have to be great humanists. IMHO, we are lucky to have people like Luca Viganò, who is also a playwright, paving the way in connecting "The Two Cultures". (In case any of you is interested, I recommend Luca Viganò's GSSI+ICE-TCS webinar.)


Friday, August 06, 2021

Interview with CONCUR 2021 ToT Award Recipients: Uwe Nestmann and Benjamin Pierce

I am pleased to re-post Nobuko Yoshida's splendid interview with CONCUR 2021 Test-of-Time Award recipients Uwe Nestmann and Benjamin Pierce. I thoroughly enjoyed reading it and learnt much from the many pearls of wisdom that pepper the interview. 

Thanks to Benjamin and Uwe for their answers and to Nobuko for conducting such an inspiring interview. Enjoy!

Thursday, June 24, 2021

The detectEr runtime-verification tool for Erlang programs

Thanks to the huge amount of excellent work done by Duncan Paul Attard and Adrian Francalanza, we now have a tutorial on detectEr that some of you might want to check out. See this web page for all the material, tool download and links to the videos of the tutorial Duncan delivered at the DisCoTec 2021 Tutorial Day.

detectEr is a runtime verification tool for asynchronous component systems that run on the Erlang Virtual Machine. It also supports monitoring systems that can execute outside of the EVM, so long as these can produce traces that are formatted in a way that is parsable by detectEr. The tool itself is developed in Erlang, and is the product of five years of theoretical and practical development. (Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability.)  


Friday, June 18, 2021

Interview with CONCUR 2021 ToT Award recipients, Part 1: Rajeev Alur, Thomas Henzinger, Orna Kupferman and Moshe Vardi

Last year, the CONCUR conference series inaugurated its Test-of-Time Award, whose purpose is to recognise important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. This year, the following four papers were chosen to receive the CONCUR Test-of-Time Awards for the periods 1994–1997 and 1996–1999 by a jury consisting of Rob van Glabbeek (chair), Luca de Alfaro, Nathalie Bertrand, Catuscia Palamidessi, and Nobuko Yoshida: 

Last year, I interviewed the CONCUR 2020 Test-of-Time Award recipients and was asked by Javier Esparza (chair of the CONCUR SC) and Ilaria Castellani (outgoing chair of the IFIP WG 1.8 on Concurrency Theory) to do the same with the current batch of awardees. (In passing, let me thank Nathalie Bertrand and Nobuko Yoshida for their kind help with the interviews!)

This post is devoted to the interview I conducted via email with Rajeev Alur, Thomas A. Henzinger, Orna Kupferman and Moshe Y. Vardi. Reading the answers I received from that dream team of colleagues was like a masterclass for me and I trust that their thoughts on their award-winning paper will be of interest to many of the readers of this blog. Enjoy!

Luca: You receive the CONCUR ToT Award 2021 for your paper Alternating Refinement Relations, which appeared at CONCUR 1998. In that article, you gave what I consider to be a fundamental contribution, namely the introduction of refinement relations for alternating transition systems. Could you briefly explain to our readers what alternating transition systems are? Could you also tell us how you came to study the question addressed in your award-winning article and why you focused on simulation- and trace-based refinement relations? Which of the results in your paper did you find most surprising or challenging? 

Answer: When we model a system by a graph, our model abstracts away some details of the system. In particular, even when systems are deterministic, states in the model may have several successors. The nondeterminism introduced in the model often corresponds to different actions taken by the system when it responds to different inputs from its environment. Indeed, a transition in a graph that models a composite system corresponds to a step of the system that may involve some components. Alternating transition systems (ATSs) enable us to model composite systems in more detail. In an ATS, each transition corresponds to a possible move in a game between the components, which are called agents. In each move of the game, all agents choose actions, and the successor state is deterministically determined by all actions. Consequently, ATSs can distinguish between collaborative and adversarial relationships among components in a composite system. For example, the environment is typically viewed adversarially, meaning that a component may be required to meet its specification no matter how the environment behaves.

In an earlier paper, some of us introduced ATSs and Alternating Temporal Logics, which can specify properties of agents in a composite system. The CONCUR 1998 paper provided refinement relations between ATSs which correspond to alternating temporal logics. Refinement is a central issue in a formal approach to the design and analysis of reactive systems. The relation “I refines S” intuitively means that system S has more behaviors than system I. It is useful to think of S as a specification and I as an implementation. Now, if we consider a composite implementation I||E and specification S||E and we want to check that the component I refines the component S, then the traditional refinement preorders are inappropriate, as they allow I to achieve refinement of I||E with respect to S||E by constraining its environment E. Alternating refinement relations are defined with respect to ATSs that model the interaction among the underlying components, and they enable us to check, for example, that component I has fewer behaviors than component S no matter how component E behaves. They are called “alternating” because refinement may restrict implementation actions but must not restrict environment actions. In other words, refinement may admit fewer system actions but, at the same time, more environment actions.

It was nice to see how theoretical properties of preorders in the traditional setting carry over to the game setting, and so do the results known then about the computational price of moving to a game setting. First, the efficiency of the local preorder of simulation with respect to the global preorder of trace containment is maintained. As in the traditional setting, alternating simulation can be checked in polynomial time, whereas alternating trace containment is much more complex. Second, the branching vs. linear characterization of the two preorders is preserved: alternating simulation implies alternating trace containment, and the logical characterization of simulation and trace containment by CTL and LTL, respectively, carries over to their alternating-temporal-logic counterparts. The doubly-exponential complexity of alternating trace containment, as opposed to the PSPACE complexity of trace containment, is nicely related to the doubly-exponential complexity of LTL synthesis, as opposed to its PSPACE model-checking complexity.
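As a point of contrast with the alternating case, the classical (non-alternating) simulation preorder can be computed in polynomial time by the standard fixed-point refinement. The sketch below is illustrative and not from the paper; alternating simulation additionally quantifies over the moves of the individual agents in the game.

```python
def simulation_preorder(states, trans):
    """Largest simulation relation on a labelled transition system.
    trans maps each state to a set of (action, successor) pairs;
    (s, t) in the result means that t simulates s."""
    # start from the full relation and remove pairs that fail the condition
    R = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            # t must match every move of s with an equally labelled move
            # leading to a pair still in R
            ok = all(any(b == a and (s2, t2) in R for (b, t2) in trans[t])
                     for (a, s2) in trans[s])
            if not ok:
                R.discard((s, t))
                changed = True
    return R
```

For instance, if u only loops on action a while v loops on both a and b, then v simulates u but not vice versa, and a deadlocked state is simulated by everything.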

Luca: In your paper, you give logical characterisations of your alternating refinement relations in terms of fragments of alternating temporal logic. Logical characterisations of refinement relations are classic results in our field and I find them very satisfying. Since I teach a number of those results in my courses, I'd be interested in hearing how you would motivate their interest and usefulness to a student or a colleague. What would your "sales pitch" be? 

Answer: There is extensive research on the expressive power of different formalisms. Logical characterization of refinement relations tells us something about the distinguishing power of formalisms. For example, while the temporal logic CTL* is more expressive than the temporal logic CTL, the two logics have the same distinguishing power: if you have two systems and can distinguish between them with a CTL* formula (that is, your formula is satisfied only in one of the systems), then you should be able to distinguish between the two systems also with a CTL formula. Moreover, while CTL is not more expressive than LTL, we know that CTL is “more distinguishing” than LTL. These results have to do with the logical characterizations of trace containment and simulation. The distinguishing power of a specification formalism is useful when we compare systems, in particular an implementation and its abstraction: if we know that the properties we care about are specified in some formalism L, and our system refines the abstraction according to a refinement relation in which the satisfaction of specifications in L is preserved, then we can perform verification on the abstraction.

Luca: I am interested in how research collaborations start, as I like to tell "research-life stories" to PhD students and young researchers of all ages. Could you tell us how you started your collaboration on the award-winning paper? 

Answer: Subsets of us were already collaborating on other topics related to reactive models and model checking, and all of us shared a common belief that the field needed to move from the limited setting of closed systems to a more general setting of open systems, that is, systems that interact with an environment. Open systems occur not only when the environment is fully or partly unknown, but also when a closed system is decomposed into multiple components, each of them representing an open system. Building "openness" into models and specifications as a first-class citizen quickly leads to the game-theoretic (or "alternating") setting. It was this realization, and the joint wish to provide a principled and systematic foundation for the modeling and verification of open systems, which naturally led to this collaboration.

Luca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? Which of your subsequent results on alternating transition systems and their refinement relations do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising? 

Answer: Various subsets of us pursued multiple research directions that developed the game-theoretic setting for modeling and verification further, and much remains to be done. Here are two examples. First, the game-theoretic setting and the alternating nature of inputs and outputs are now generally accepted as providing the proper semantic foundation for interface and contract formalisms for component-based design. Second, studying strategic behavior in multi-player games quickly leads to the importance of probabilistic behavior, say in the form of randomized decisions and strategies, of equilibria, when players have non-complementary objectives, and of auctions, when players need to spend resources for decisions. All of these are still very active topics of research in computer-aided verification, and they also form a bridge to the algorithmic game theory community.

Luca: One can view your work as a bridge between concurrency theory and multi-agent systems. What impact do you think that your work has had on the multi-agent-system community? And what has our community learnt from the work done in the field of multi-agent systems? To your mind, what are the main differences and points of contact in the work done within those communities? 

Answer: Modeling interaction in multi-agent systems is of natural interest to planning problems studied in the AI community. In 2002, the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) was formed and the annual International Conference on Autonomous Agents and Multiagent Systems (AAMAS) was launched. The models, logics, and algorithms developed in the concurrency and formal methods communities have had a strong influence on research presented at AAMAS conferences over the past twenty years. Coincidentally, this year our paper on Alternating-Time Temporal Logic was chosen for the IFAAMAS Influential Paper Award.

Luca: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

Answer: Research on formal verification and synthesis, including our paper, assumes that the model of the system is known. Reinforcement learning has emerged as a promising approach to the design of policies in scenarios where the model is not known and has to be learned by agents by exploration. This leads to an opportunity for research at the intersection of reactive synthesis and reinforcement learning. A potentially promising direction is to consider reinforcement learning for systems with multiple agents with both cooperative and adversarial interactions.

The realization that reactive systems have to satisfy their specifications in all environments has led to extensive research relating formal methods with game theory. Our paper added alternation to refinement relations. The transition from one to multiple players has been studied in computer science in several other contexts. For the basic problem of reachability in graphs, it amounts to moving from reachability to alternating reachability. We recently studied this shift in other fundamental graph problems, like the generation of weighted spanning trees, flows in networks, vertex covers, and more. In all these extensions, we consider a game between two players that take turns in jointly generating the outcome. One player aims at maximizing the value of the outcome (e.g., maximize the weight of the spanning tree, the amount of flow that travels in the network, or the size of the vertex cover), whereas the second aims at minimizing the value. It is interesting to see how some fundamental properties of graph algorithms are lost in the alternating setting. For example, following a greedy strategy is not beneficial in alternating spanning trees, optimal strategies in alternating flow networks may use fractional flows, and while the vertex-cover problem is NP-complete, an optimal strategy for the maximizer player can be found in polynomial time. Many more questions in this setting are still open. 
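To illustrate the shift from reachability to alternating reachability mentioned above, here is a minimal sketch (illustrative, not from our papers) of the standard backward attractor computation on a two-player game graph: the reachability player wins from a vertex she owns if some successor is winning, and from an opponent vertex only if all successors are winning.

```python
def attractor(successors, owner, targets):
    """Vertices from which the 'reach' player can force a visit to `targets`.
    successors: dict vertex -> list of successor vertices;
    owner: dict vertex -> 'reach' or 'avoid'."""
    attr = set(targets)
    changed = True
    while changed:
        changed = False
        for v, succs in successors.items():
            if v in attr or not succs:
                continue
            if owner[v] == 'reach':
                win = any(s in attr for s in succs)   # some move suffices
            else:
                win = all(s in attr for s in succs)   # every move must lead in
            if win:
                attr.add(v)
                changed = True
    return attr
```

In a plain graph (all vertices owned by 'reach'), this degenerates to ordinary backward reachability; the universal quantification at opponent vertices is exactly what makes the alternating versions of classical graph problems behave so differently.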

Luca: What advice would you give to a young researcher who is keen to start working on topics related to alternating transition systems and logics?

Answer: One important piece of advice to young researchers is to question the orthodoxy. Sometimes it is necessary to learn everything that is known about a topic but then take a step back, look at the bigger picture, reexamine some of the fundamental assumptions behind the established ways of thinking, change the models that everyone has been using, and go beyond the incremental improvement of previous results. This is particularly true in formal methods, where no single model or approach fits everything. And young researchers stand a much better chance of having a really fresh new thought than those who have been at it for many years.

Tuesday, June 01, 2021

Frank P. Ramsey on research and publication rates

Spurred by the excellent 1978 radio programme 'Better than the Stars' by D. H. Mellor about Frank Ramsey and by the Philosophy Bites interview with Cheryl Misak on Frank Ramsey and Ludwig Wittgenstein, I started reading Cheryl Misak's biography of Frank Ramsey. (FWIW, I strongly recommend the radio programme, the podcast and the book.)

The following quote from pages 169-170 of Cheryl Misak's book describes Ramsey's views on publications and research, as stated in a letter to his father Arthur:

Arthur tried a different tack, suggesting that Frank was going to be in trouble for wasting all this time on analysis, rather than on his career. On 24 September, in what seems to be his last letter home before he left Vienna, Frank wrote:
I don’t see how there can be any such inquisition into my conduct in Vienna as you seem to want to guard against.... No one can suppose that you can’t research for six months without having a paper ready by the end. If everyone wrote a paper every six months the amount of trivial literature would swell beyond all bounds. Given time I shall produce a good paper. But if I hurry it will be ill written and unintelligible and unconvincing.
It seems to me perfectly proper to spend a scholarship being analysed, as it is likely to make me cleverer in the future, and discoveries of importance are made by remarkable people not by remarkable diligence.
While it may not be persuasive that psychoanalysis makes one cleverer, Frank was prescient that the number of journal articles would eventually swell, and he was right that diligence isn't enough to produce discoveries of importance.
Of course, academia has changed since those times and I do value "remarkable diligence". However, I will try to remember Ramsey's words the next time someone proposes to make academic hirings and promotions conditional on having a certain number of papers or citations or whatever per year on average.

Wednesday, May 26, 2021

Course on Ethics and Accountability in Computer Science

About one year ago, I became aware of the course "Ethics and Accountability in Computer Science" designed and taught by Rodrigo Ferreira and Moshe Y. Vardi at Rice University. The goals and high-level structure of the course, as well as its context, are described in an informative and thoughtful paper by Moshe and Rodrigo that appears in the Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), which I strongly recommend. You can also read a journal paper reporting on one of their course assignments, which was designed to make students focus on practising "deep attention" in the sense of artist and academic Jenny Odell.

I was smitten by the underlying tenet for their course, namely that "social justice is the single most important issue confronting computer science students today." That tenet is also very much in line with a reflection on the impact that digital technology has on social justice that the Scientific Advisory Board of the Gran Sasso Science Institute asked the Computer Science group at that institute to undertake. Therefore, after having invited Rodrigo to deliver a webinar on the course at the ICE-TCS+GSSI webinar series, I decided that, as an experiment, I would offer a version of the Rice University course to master's students in computer science, language technology and software engineering at Reykjavik University during our spring semester 2021.

Mine was a foolhardy decision for a variety of reasons. However, I have always believed that computer scientists should strive to be the 21st century Renaissance men and women, and bridge the gap between C. P. Snow's "two cultures". Indeed, to quote Edward A. Lee freely, to my mind technologists ought to be amongst the greatest humanists of our age. Despite my other commitments, offering a version of the Ferreira-Vardi course at Reykjavik University felt like the right thing to do at this time. 

I taught the course over twelve weeks to a varied group of eight students, in cooperation with Claudio Pedica. Thanks to the constant support I received from Rodrigo Ferreira, I lived to tell the tale and I hope that I managed to do some justice to the truly excellent course that Moshe and Rodrigo put together. Having a dream team of students with a variety of cultural backgrounds made the course extremely interesting and a learning experience for me. I had to refresh my memory of the philosophy I studied at high school in Italy in a previous life, learn some modern moral philosophy I had not met at school (such as the work by Elizabeth Anscombe and Philippa Foot, amongst others), read a substantial amount of new material (some of it fresh off the press as the course was unfolding) and broaden my horizons. I could not have asked for a better intellectual experience, and the students in the course, from Denmark, France, Germany, Iceland and the Netherlands, taught me well, kept me on my toes and stimulated me to keep reading material related to ethics even now that the course is over.

Teaching the course reinforced my belief that a course on ethics and on the impact that our field has on social justice should be required for all students in computer science. If you plan to run such a course, I strongly recommend that you consider the Ferreira-Vardi course as a blueprint.

Friday, April 30, 2021

PolyConc: Online collaboration to improve on a result on the equational theory of CCS modulo bisimilarity

The aim of this post is to try and start an online collaboration to improve the solution to a problem in the equational logic of processes that I posed in a survey paper in 2003, namely

Can one obtain a finite axiomatisation of the parallel composition operator in bisimulation semantics by adding only one binary operator to the signature of (recursion, restriction, and relabelling free) CCS?

Valentina Castiglioni, Wan Fokkink, Anna Ingólfsdóttir, Bas Luttik and I published a partial, negative answer to the above question in a paper at CSL 2021. (See the arXiv version for details and for the historical context for the above question.) Our solution is based on three simplifying assumptions that are described in detail in Section 3 of the above-mentioned paper. We'd be very interested in hearing whether any member of the research community in process algebra, universal algebra and equational logic can relax or remove any of our simplifying assumptions. In particular, one can start with assumptions 3 and 2. 
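For readers who are new to the problem, here is a sketch of the standard background (my gloss, not part of the CSL 2021 paper): Moller proved that bisimilarity admits no finite equational axiomatisation over the original CCS signature, while Bergstra and Klop showed that finite axiomatisability can be recovered by adding auxiliary operators. In the communication-free setting, one auxiliary binary operator, the left merge, already suffices; its familiar axioms (written here with a hypothetical macro \lmerge for the left-merge symbol) are:

```latex
% Sketch of the Bergstra-Klop-style axioms for interleaving
% parallel composition via the auxiliary left-merge operator.
% Valid in the absence of communication; a is an action,
% 0 is the inactive process.
\newcommand{\lmerge}{\mathbin{\lfloor}}
\begin{align*}
x \parallel y        &= (x \lmerge y) + (y \lmerge x)\\
a.x \lmerge y        &= a.(x \parallel y)\\
(x + y) \lmerge z    &= (x \lmerge z) + (y \lmerge z)\\
\mathbf{0} \lmerge x &= \mathbf{0}
\end{align*}
```

In the presence of CCS communication, the classic solution adds a second auxiliary binary operator, the communication merge, which is precisely why the question above asks whether a single binary operator can do the job on its own.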

We would also welcome any comments and suggestions on whether some version of that problem can be solved using existing results from equational logic and universal algebra. In particular, are there any general results guaranteeing that, under certain conditions, the reduct of a finitely based algebra is also finitely based? Or, in the other direction, that if some algebra is not finitely based, then neither is its expansion with a new operator?

To start with, add any contributions you might have as comments to this post. If ever we make substantial enough progress on the above question, anyone who has played a positive role in extending our results will be a co-author of the resulting paper. 

Let PolyConc begin!

Sunday, February 21, 2021

Article by Sergey Kitaev and Anthony Mendes in Jeff Remmel's memory

Sergey Kitaev just shared with me an article he wrote with Anthony Mendes in Jeff Remmel's memory. Jeff Remmel was a distinguished mathematician with a very successful career in both logic and combinatorics. 

The short biography at the start of the article paints a vivid picture of Jeff Remmel's personality, and will be of interest and inspiration to many readers. His hiring as "an Assistant Professor in the Department of Mathematics at UC San Diego at age 25, without officially finishing his Ph.D. and without having published a single paper" was, in Jeff Remmel's own words, a "fluke that will never happen again."

I had the pleasure of making Jeff Remmel's acquaintance when he visited Sergey in Reykjavik and thoroughly enjoyed talking to him about a variety of subjects. He was truly a larger-than-life academic.

Monday, February 08, 2021

Whence do research collaborations (in TCS) arise?

About ten days ago, I gave a talk to my colleagues at the Department of Computer Science at Reykjavik University, introducing my personal (and admittedly very biased) view of the past, present and future of ICE-TCS

After my presentation, a colleague asked me how she could engage mathematicians and theoretical computer scientists in joint research. I gave her an answer off the top of my head, but it was clear that she was unconvinced and felt that I was avoiding answering her question. (For the record, I basically told her that she should knock on our door, discuss with us the problems she was interested in solving and hope that they are of interest to us. I feel that many research collaborations arise from serendipity and that there is no recipe that is guaranteed to work.)

The thought that she felt that I might have dodged her question prompted me to look back at my own research collaborations and how they came about. The rest of this post is the result of that quick-and-dirty reflection. Let me state right away that my list isn't meant to be exhaustive and that I won't mention many of the collaborations in which I have been lucky to be involved and that have played a crucial role in shaping my academic development.

Reading papers. One of my long-term research collaborations arose from reading a paper written by a colleague. His paper prompted my companion and me to ask ourselves whether we could prove a similar result to the one our colleague had shown in a different setting. We succeeded and sent him our paper. Subsequently, we invited him to visit us in Aalborg. That visit marked the start of a collaboration and friendship that has lasted for over 20 years.

Approaching a colleague via email for help in solving a problem. At some point, my companion and I were thinking about a research problem that had frustrated us for a while. I remembered reading a number of papers by a colleague on related topics, so I wrote to him, describing the problem, our attempts at solving it and where we had hit a brick wall. I asked him whether he would be interested in working with us on solving it. He did and that was one of the lucky breaks I have had in my research career. Once more, that collaboration offer via email led to mutual visits, other joint papers and, IMHO even more importantly, a long-term friendship that extended beyond work.

Available funding and building on one's mistakes. One day in 2009, an email in my mailbox alerted me to the availability of substantial funding for research collaboration between universities in country X and those located in Norway, Iceland and Liechtenstein. This opportunity was enticing, as I had never visited country X, so I asked myself: "Is there anyone there we might conceivably work with?'' Mulling over that question, I recalled that a colleague from country X had spotted an imprecision in a paper I had coauthored.

I wrote to him, we applied for that funding jointly and got it. That successful grant application provided the funds for many research visits involving several people in our research groups. Those visits resulted in joint papers, another successful grant application and a number of friendships.

Coffee breaks at conferences. I have at least two exhibits under this heading. The first belongs to a previous geological era (1991). I was attending a conference at CMU and asked a colleague what he was working on. He told me about a problem he was tackling, which I knew was also on the radar of a fellow researcher and on which I had started working independently. Eventually, after some email exchanges, that chat over coffee turned into a three-way collaboration that, thanks to my coauthors, produced one of my best papers.

Fast forward to 2017 and I'm in Rome to deliver an invited talk at a small conference. During the coffee break following my presentation, I was approached by a young researcher, with whom I had a number of pleasant conversations during the conference. Some time later, she sent me a draft paper dealing with a topic related to the content of my invited talk. I invited her to visit our research group in Reykjavik and to join the team working on a research project for which we had funding at the time. Those coffee-break conversations led to a collaboration and friendship that I hope will last for a long time. Meeting that colleague has been another of my lucky breaks.

Reading groups. Last, but by no means least, let me mention that my first research collaboration that did not involve my thesis supervisors arose when I read Gordon Plotkin's famous "Pisa Notes (On Domain Theory)" with a fellow PhD student. Reading that work led to our first joint paper in 1991 and a companionship that has lasted to this day. I heard Orna Kupferman give the following, tongue-in-cheek advice to young researchers: "Write papers with your twin-sister!" Mine might be: "Write papers with your companion in life!" 

Let me conclude by saying that serendipity and an actual friendship that extends beyond the confines of scientific work were the key aspects in my most pleasant and enduring collaborations. I apologise to the colleagues from whom I have learnt much over the years (former students and postdocs, as well as others) who were the prime movers in research collaborations I did not mention in this post. 

I guess that this note provides much more information than my colleague was expecting to receive, but I thought I should put it out for the benefit of the young researchers at Reykjavik University and at the Gran Sasso Science Institute, and of any reader I might have.

How did your research collaborations arise? If you have anything to add to what I wrote above, and I am sure you do, add your contributions as comments to this post.

Friday, February 05, 2021

Support research in the Foundations of Computing at the University of Leicester!

In an ideal world, university administrators would support the work of the top-class academics employed by their institution, especially if they attract students, have a high research standing within their communities and bring in substantial funding from competitive research funds. After all, to quote Isidor Isaac Rabi, "the faculty are the university" and the most valuable currency for an academic institution is reputation. 

Unfortunately, university administrations the world over repeatedly surprise me by making structural changes that affect some of their very best academics and actually reduce the reputation of their institutions in the eyes of the community at large.

The latest example comes from the University of Leicester, where, as stated here,

[the] University VC proposes to merge Informatics and Mathematics into a combined school focussed exclusively on AI, data science, computational modelling and "digitalisation". This includes the proposal to cease research in Foundations of Computer Science (FoCo) where research is "highly theoretical and not directly linked with applications", retaining staff only if the research they have published in the past (!) aligns well with the new desired focus on foundations of AI, computational modelling, data science and digitalisation. Staff have been given no opportunity to alter their research to fit with the proposed new direction. The plan is to make redundant (in the middle of a pandemic) all (up to 10) staff in foundations of computer science whose past research is deemed not to be a good enough fit with the new strategic priorities.

See also this statement by the University and College Union of the University of Leicester. 

I might be biased, but I find it inconceivable that one can think of building a world-class research programme in AI, data science and computational modelling without building on existing strengths in the Foundations of Computer Science and Mathematics. What my crystal ball tells me is that the strong Leicester academics who might be affected by the planned restructuring will find positions elsewhere and that the University of Leicester is shooting itself in the foot. Which high-profile academic would be enticed to join a university that has shown so little consideration for its existing areas of strength and where one's job might be in danger when the buzzwords du jour change, as they undoubtedly will? 

I encourage you to sign the petition in support of our Leicester colleagues. Kudos to Isobel Armstrong, FBA, for returning her honorary doctorate to the University of Leicester upon hearing of their plans!

Monday, February 01, 2021

Two PhD positions at the Department of Computer Science, Reykjavik University: Model-driven SE for blockchain and smart contracts

The Software and Emerging Technology Lab at the Department of Computer Science, Reykjavik University, is looking for two PhD candidates to work on an ongoing research project on the application of Model Driven Software Engineering principles, methodologies, technologies and abstractions to Blockchain and Smart Contracts. While both positions require strong software development skills and familiarity with the model-driven software engineering approach, the first position will focus on domain analysis and code generation, while the second position is concerned with contract safety and validity and requires knowledge in model verification and validation. The project is based in Iceland. It will be directed by Mohammad Hamdaqa in collaboration with Luca Aceto and Gísli Hjálmtýsson and in close collaboration with Polytechnique Montréal in Canada. The positions are fully funded and include a full tuition waiver and a salary in accordance with the Icelandic Research Fund guidelines. In particular, the funding covers full tuition as well as a stipend of 383,000 ISK per month before taxes for a minimum of three years.
If you are interested in applying, please send the documents below to the following email addresses:
  • A copy of your CV and research interests
  • A copy of all your transcripts
  • A sample publication
  • A research statement of at most one page describing your plans for research during your PhD
  • Your intended starting date / and if you need a visa
For more information about the position and the research topics, do not hesitate to send your enquiries to any of the project collaborators:
Luca Aceto (

Gísli Hjálmtýsson (

Informal inquiries about the project and the conditions of work are very welcome. We will start reviewing applications as soon as they arrive and will continue to accept applications until each position is filled. We strongly encourage interested applicants to send their applications as soon as possible and no later than 28 February 2021.