Thursday, December 29, 2022

Computer Science and Mathematics at the GSSI named "Excellent Department"

The Computer Science and Mathematics groups at the Gran Sasso Science Institute (GSSI) have been selected amongst the "excellent departments" in Italy, based on the outcome of the latest Italian research evaluation exercise and on a proposal submitted by those two groups. The proposal by the GSSI in Computer Science and Mathematics received a score of 29/30, the highest grade in those fields, shared with the proposals of the Normale di Pisa, a scientific powerhouse, and of the University of Pisa (Mathematics). The groups at the GSSI will receive approximately 7.3 million euros to support permanent faculty positions and to open new research laboratories.

This is fantastic news for Computer Science and Mathematics at the GSSI. I congratulate my GSSI colleagues for this achievement! Since the establishment of the GSSI and its international PhD school, our computer science colleagues there have been building a group with a flat hierarchy, which has collegiality as one of its core values and where everyone is a principal investigator and a "leader" from day one.

As my colleagues at the GSSI know well, hiring and promotion decisions are two of the key factors in improving the quality of any department or research group. I trust they will use this funding to hire the best people they can get their hands on and to let them work in a nurturing and hierarchy-free research environment. Hire the best people you can attract and support them in doing the best work they can!

I look forward to seeing the developments in the computer science group at the GSSI and hope to make a small contribution, if I can. If you are considering a relocation to Italy, I encourage you to consider the GSSI. On a purely personal note, I would love to see the group there become as successful as the Department of Computing Sciences at Bocconi University in attracting foreign academics.

Friday, December 09, 2022

Report on the formative research evaluation of the Department of Computer Science at Reykjavik University


I am pleased to share with you the report I received yesterday from the panel that carried out a formative research evaluation of the Department of Computer Science at Reykjavik University last month. (See below for some excerpts from the report.) The (IMHO, stellar) review panel consisted of Geraldine Fitzpatrick (TU Wien, Austria), Kim Guldstrand Larsen (Aalborg University, Denmark) and Michael Wooldridge (University of Oxford, United Kingdom).

Our evaluators have given us a lot of food for thought, identified several challenges for the department, and made many recommendations that we might follow to improve our research environment and work, as well as its impact. I trust that some of those remarks will be useful for the university as a whole.

Our next task as a department will be to do justice to the work of the review panel and build on it to improve our research environment and output.

I thank all my colleagues at the department, including postdocs and PhD students of course, whose creativity, drive, enthusiasm and research work have contributed to building a research environment that, in my admittedly very biased opinion, punches well above its weight. I am very proud of their work.

However, we have to keep our feet on the ground and realise that, as the challenges identified by the review panel indicate, we are just starting our journey.
 

Excerpts from the formative review report

"Overall we were pleased and impressed to find that a department which is very young in international terms has succeeded in establishing itself as an internationally competitive hub for Computer Science research. This is a noteworthy achievement by any measure, but is particularly impressive when considering the highly competitive culture of international computer science research, where world-class researchers are very highly-sought after and are able to demand highly lucrative packages.


We repeatedly heard that the department is a highly collegial environment, and has largely avoided the curse of factionalism that taints so many university departments.
 

We were impressed by the international links that the department has been able to establish, with many visitors who clearly contribute to the research culture of the department at all levels. We saw evidence that directly experiencing this culture has been instrumental in a number of hires and in attracting PhD students.
 

The self-evaluation report we were provided with gave a number of key performance indicators, such as volume of publications in internationally competitive journal and conference venues, research awards such as best-paper prizes, and the acquisition of research funding. We were pleased to note that, modulo some expected minor year-on-year variations, all of these measures seem to be on a positive upward trajectory.
 

We noted that much of the Department’s research portfolio is strongly interdisciplinary, and addresses key societal challenges with demonstrable national impact.
 

Finally, we noted that the Department does well in terms of diversity at faculty level, with an increased number of female staff. Other aspects of diversity are less clear, though this perhaps represents Iceland’s racial demographic."

With my ICE-TCS glasses on, I was delighted to read the panel's opinion on our centre:

"We were truly impressed by ICE-TCS that in a short span of time (inaugurated in 2005) has established itself as a world-class center within Theoretical Computer Science (TCS). In particular, we find that the center has been extremely successful combining Track A and Track B of TCS with notable research contributions within and recognitions from the sub-fields of Concurrency Theory, Logic, Programming Languages, Combinatorics and Algorithms."

As a centre, we will strive to improve following the panel's recommendations and to develop a crisp, overarching research vision for the coming few years, which may help us keep spreading the TCS gospel in Iceland and attract talent to the country.

Monday, November 28, 2022

The World Dynamics Project

Our colleagues Pierluigi Crescenzi, Emanuele Natale and Paulo Bruno Serafim have been doing some work on what they call the World Dynamics project, whose goal is to provide a modern framework for studying models of sustainable development, based on cutting-edge techniques from software engineering and machine learning. 

The first outcome of their work is a Julia library that allows scientists to use and adapt different world models, from Meadows et al.'s World3 to recent proposals, in an easy way.

IMHO, this is a fascinating and timely research effort. I encourage readers of this blog to try the current version of the Julia library, which is still under development. It would be great if this library contributed to "an open, interdisciplinary, and consistent comparative approach to scientific model development" and I hope that global policy makers on environmental and economic issues will use similar tools in the near future.
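For readers who have never played with this kind of model, here is a back-of-the-envelope sketch, in Python rather than Julia, of what a "world model" boils down to computationally: a handful of stocks advanced over time by simple flow equations. The stocks, equations and parameters below are invented for illustration and have nothing to do with the actual library or with World3.

    # A toy stock-and-flow "world model": two stocks (population, resources)
    # advanced with Euler steps. Purely illustrative; the parameters are made up.
    def simulate(years=200, dt=0.5):
        population, resources = 1.0, 1.0             # normalised initial stocks
        history = []
        t = 0.0
        while t < years:
            growth = 0.03 * population * resources   # growth slows as resources deplete
            consumption = 0.02 * population          # consumption grows with population
            population += dt * growth
            resources = max(0.0, resources - dt * consumption)
            history.append((t, population, resources))
            t += dt
        return history

    if __name__ == "__main__":
        for t, pop, res in simulate()[::80]:
            print(f"year {t:6.1f}  population {pop:5.2f}  resources {res:5.2f}")

Real world models have dozens of interacting stocks and carefully calibrated tables, which is precisely why a well-engineered, reusable library for exploring and comparing them is so valuable.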

Thanks to Emanuele, Paulo and Pierluigi for their work. I'll be following its future development with great interest.

If you speak Italian, I strongly recommend this podcast, in the GSSI-SISSA Sidecar series, in which Pierluigi discusses economic growth with Michele Boldrin.

Saturday, November 12, 2022

Two faculty positions in Computer Science at Reykjavik University

My department has advertised two full-time, permanent faculty positions at any rank. Theoretical Computer Science is not the department's highest priority in hiring at this moment in time, but it is mentioned as one of the areas of interest, alongside Artificial Intelligence, Cybersecurity, Data Science and Machine Learning, and Software Engineering. Do consider applying if you are theoretically minded, your work has, or has the potential to have, impact on any of those fields, and you'd like to join our academic family and relocate to Iceland. The call is below and at https://jobs.50skills.com/ru/en/16728, where the application form can be found.

Two faculty positions in Computer Science at Reykjavik University

The Department of Computer Science at Reykjavik University invites applications for two full-time, permanent faculty positions at any rank in the fields of Artificial Intelligence, Cybersecurity, Data Science and Machine Learning, Software Engineering, and Theoretical Computer Science. For one of the positions, we will give preferential treatment to excellent applicants in Software Engineering, broadly construed. However, the primary evaluation criterion is scientific quality. Outstanding candidates in other areas of Computer Science are encouraged to apply as well. See https://jobs.50skills.com/ru/en/16728 for the link to the application form.  

We are looking for energetic, highly qualified academics with a proven international research record and excellent potential in their field of study. We particularly welcome applications from researchers who have a strong network of research collaborators, can strengthen internal collaborations within the department, have the proclivity to improve their academic environment and its culture, and have the drive and potential to flourish in our environment. The Department of Computer Science at Reykjavik University is characterised by a flat hierarchical structure and every faculty member is expected to act like a principal investigator regardless of their level of employment.  
 
Apart from developing their research career, the successful applicants will play a full part in the teaching and administrative activities of the department, teaching courses and supervising students at both graduate and undergraduate level. Applicants with a demonstrated history of excellence in teaching are preferred.
 
Salary and rank are commensurate with experience. Successful applicants receive a relocation budget, some seed research funding in the first two years of their employment and support for one PhD student. Among other benefits, Reykjavik University offers its research staff the option to take research semesters (sabbaticals) after every three years of satisfactory teaching and research activity and provides some additional financial support during those semesters.
 
The positions are open until filled, with intended starting date in August 2023. Later starting dates can be negotiated, but preference will be given to candidates who can take up their position in August 2023. The deadline for applications is January 27, 2023. The review of the applications will begin in late January 2023 and will continue until the positions are filled.
 
A PhD in Computer Science or a related field is required. Applications should be submitted through the university’s online application submission system and should include the following documents:

  • a cover letter specifying whether the candidate is applying for appointment as an assistant, associate, or full professor,
  • a CV with a full list of publications, 
  • links to three to five major publications, 
  • a research statement, 
  • a teaching statement, 
  • supporting material regarding excellence in teaching, if available, and 
  • any other relevant information the applicant wishes to supply.
Please arrange to have three letters of recommendation sent directly to mannaudur@ru.is (subject “Faculty Positions in CS”) with a copy to Professor Luca Aceto (luca@ru.is), Chair of the Department of Computer Science. Informal communication and discussions on any aspect related to the positions are encouraged, and interested candidates are welcome to contact the chair of the search committee, Associate Professor María Óskarsdóttir (mariaoskars@ru.is), for further information.
 
Department of Computer Science at Reykjavik University
 
The Department of Computer Science at Reykjavik University is research-intensive and carries out research-based teaching in all its degree programmes. It offers undergraduate and graduate programmes in Computer Science and Software Engineering, a combined undergraduate programme in Discrete Mathematics and Computer Science, two graduate programmes in Data Science and one in Artificial Intelligence and Language Technology. From the autumn semester of 2023, the department will also offer an MSc in Digital Health. At the time of writing, it is home to 26 faculty members, seven of whom are women, five postdoctoral researchers, and 32 PhD students, representing altogether over 20 different countries. In 2022, the department had 740 students registered for its BSc and MSc programmes.

The department provides an excellent working environment in which a motivated academic can have an impact at all levels and has a career-development framework that encourages and supports independence in academic endeavours.  

The department is home to several research centres producing high-quality collaborative research in areas such as artificial intelligence, data science, financial technology, information systems, language technology, software systems, and theoretical computer science, among others; for more information on those research centres, see https://en.ru.is/research/.
 
For further information about the Department of Computer Science at Reykjavik University and its activities, see http://en.ru.is/scs/.
 
Reykjavík University
 
In the Times Higher Education rankings for 2023, Reykjavik University is ranked among the 350 best universities worldwide, first among Icelandic universities, and 18th among Nordic ones. Moreover, it was ranked 12th amongst the best small universities in the Times Higher Education rankings 2022, when it was in first place, along with eight other universities, for the average number of citations per faculty member, and 53rd amongst all universities established fewer than 50 years ago.
 
Iceland

Iceland is well known for its breathtaking natural beauty, with volcanoes, hot springs, lava fields and glaciers offering a dramatic landscape. It consistently ranks as one of the best places in the world to live. It offers a high quality of life and is one of the safest places in the world, with high gender equality and strong healthcare and social-support systems. It was ranked second in the 2021 UN World Happiness Report, which measures well-being across various life factors. Reykjavik is a vibrant and cosmopolitan city, which provides an ideal environment for combining cultural and family activities with an active lifestyle.

Monday, September 19, 2022

Dean of the School of Technology at Reykjavik University: Call for applications

Reykjavik University is looking for a new dean of the School of Technology, which comprises the Department of Applied Engineering, the Department of Computer Science, and the Department of Engineering. 

If you have a strong academic career, a vision of how our school can improve its standing and impact, and would enjoy living in Iceland, I encourage you to consider this opportunity! See the ad at the link below for more information: 

https://jobs.50skills.com/ru/en/15613 

Spread the news through your network and encourage excellent candidates to apply.

Wednesday, August 10, 2022

CONCUR through time: A data- and graph-mining analysis

The 33rd edition of the International Conference on Concurrency Theory (CONCUR) will be held in Warsaw, Poland, in the period 16–19 September 2022. The first CONCUR conference dates back to 1990 and was one of the conferences organized as part of the two-year ESPRIT Basic Research Action 3006 with the same name. The CONCUR community has run the conference ever since and established the IFIP WG 1.8 “Concurrency Theory” in 2005 under Technical Committee TC1 (Foundations of Computer Science) of IFIP.

In light of the well-established nature of the CONCUR conference, and spurred by a data- and graph-mining comparative analysis carried out by Pierluigi Crescenzi and three of his students to celebrate the 50th anniversary of ICALP, Pierluigi and I undertook a similar study for the CONCUR conference using some, by now classic, tools from network science. Our goal was to try and understand the evolution of the CONCUR conference throughout its history, the ebb and flow in the popularity of some research areas in concurrency theory, and the centrality of CONCUR authors, as measured by several metrics from network science, amongst other topics.

Our article, available here, reports on our findings. We hope that members of the CONCUR community will enjoy reading it and playing with the web-based resources that accompany this piece. It goes without saying that the data analysis we present has to be taken with a huge pinch of salt and is only meant to provide an overview of the evolution of CONCUR and to be food for thought for the concurrency theory community.

Tuesday, August 09, 2022

Interview with Franck Cassez and Kim G. Larsen, CONCUR 2022 ToT Award Recipients

This post is devoted to an interview with Kim G. Larsen, who received the CONCUR 2022 Test-of-Time Award for the paper The Impressive Power of Stopwatches (available also here), which appeared at CONCUR 2000 and was co-authored with Franck Cassez. On behalf of the concurrency theory community, I thank Kim for taking the time to answer my questions. I trust that readers of this blog will enjoy reading Kim's answers as much as I did.

Luca: You receive the CONCUR ToT Award 2022 for your paper The Impressive Power of Stopwatches, which appeared at CONCUR 2000. In that article, you showed that timed automata enriched with stopwatches and unobservable time delays have the same expressive power as linear hybrid automata. Could you briefly explain to our readers what timed automata with stopwatches are? Could you also tell us how you came to study the question addressed in your award-winning article? Which of the results in your paper did you find most surprising or challenging?

Kim: Well, in timed automata all clocks grow with rate 1 in all locations of the automaton. Thus you can tell the amount of time that has elapsed since a particular clock was last reset, e.g. due to an external event of interest. A stopwatch is a real-valued variable similar to a regular clock. In contrast to a clock, a stopwatch will in certain locations grow with rate 1 and in other locations grow with rate 0, i.e. it is stopped. As such, a stopwatch gives you information about the accumulated time spent in certain parts of the automaton.
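Note: to make the difference concrete, here is a tiny illustrative simulation of mine; it has nothing to do with the paper's formalism or with UPPAAL's notation. A clock advances in every location visited along a run, while a stopwatch advances only in the locations where it is running, so it records accumulated execution time rather than total elapsed time.

    # Illustrative only: a run alternating between a location where a task
    # executes and one where it is preempted. The clock measures total elapsed
    # time; the stopwatch measures accumulated execution time.
    run = [("exec", 2.0, True), ("preempted", 1.5, False), ("exec", 3.0, True)]

    clock = 0.0      # rate 1 in every location
    stopwatch = 0.0  # rate 1 only where running, rate 0 otherwise
    for location, dwell, running in run:
        clock += dwell
        if running:
            stopwatch += dwell
        print(f"after {location:>9}: clock={clock:4.1f}  stopwatch={stopwatch:4.1f}")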

In modelling schedulability problems for real-time systems, the use of stopwatches is crucial in order to adequately capture preemption. I definitely believe that it was our shared interest in schedulability that brought us to study timed automata with stopwatches. We knew from earlier results by Alur et al. that properties such as reachability were undecidable. But what could we do about this? And how much expressive power would the addition of stopwatches provide?

In the paper we certainly put the most emphasis on the latter question, in that we showed that stopwatch automata and linear hybrid automata accept the same class of timed languages, and this was at least for me the most surprising and challenging result. However, focusing on impact, I think the approximate zone-based method that we apply in the paper has been extremely important from the point of view of having our verification tool UPPAAL taken up at large by the embedded systems community. It has been really interesting to see how well the over-approximation method actually works.

Luca: In your article, you showed that linear hybrid automata and stopwatch automata accept the same class of timed languages. Would this result still hold if all delays were observable? Do the two models have the same expressive power with respect to finer notions of equivalence such as timed bisimilarity, say? Did you, or any other colleague, study that problem, assuming that it is an interesting one?

Kim:  These are definitely very interesting questions, and should be studied.  As for finer notions of equivalences – e.g. timed bisimilarity – I believe that our translation could be shown to be correct up to some timed variant of chunk-by-chunk simulation introduced by Anders Gammelgaard in his Licentiat Thesis from Aarhus University in 1991.  That could be a good starting point.


Luca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper?
Which of your subsequent results on timed and hybrid automata do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Kim: Looking at DBLP, I see that I have some 28 papers containing the word “scheduling”. For sure stopwatches will have been used in one way or another in these. One thing that we never really examined thoroughly is how well the approximate zone-based method works when applied to linear hybrid automata via the translation to stopwatch automata. This would definitely be interesting to find out.

This was the first joint publication between me and Franck. I fully enjoyed the collaboration on all the next 10 joint papers. The most significant ones are probably the paper at CONCUR 2005, where we presented the symbolic on-the-fly algorithms for the synthesis of timed games and the UPPAAL TIGA branch, and, later, the work in the European project GASICS with Jean-Francois Raskin, where we used TIGA in the synthesis of optimal and robust control of a hydraulic system.

Franck: Using the result in our paper, we can analyse scheduling problems where tasks can be stopped and restarted, using real-time model-checking and a tool like UPPAAL.


To do so, we build a network of stopwatch automata modelling the set of tasks and a scheduling policy, and reduce schedulability to a safety verification problem: avoid reaching states where tasks do not meet their deadlines. Because we over-approximate the state space, our analysis may yield some false positives and may wrongly declare a set of tasks non-schedulable because the over-approximation is too coarse. 
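Note: the following sketch is mine and is not a UPPAAL model; it illustrates, in the simplest possible setting, the idea of checking schedulability as a safety property: simulate fixed-priority preemptive scheduling of periodic tasks over a bounded horizon and flag any state in which a job misses its deadline. The task parameters are invented for the example; in the stopwatch-automata encoding, the same check becomes reachability of a "deadline missed" location.

    # Two periodic tasks, highest priority first: (period, execution time),
    # with the deadline of each job equal to the period. Purely illustrative.
    tasks = [(5, 2), (8, 3)]
    horizon = 40                       # a common multiple of the periods
    remaining = [0] * len(tasks)       # outstanding execution time per task
    misses = 0

    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:   # previous job unfinished: deadline missed
                    print(f"task {i} misses its deadline at time {t}")
                    misses += 1
                remaining[i] = wcet    # release a new job
        for i in range(len(tasks)):    # run the highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                break

    print("no deadline miss on this horizon" if misses == 0 else f"{misses} deadline miss(es)")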

In the period 2003–2005, in cooperation with Francois Laroussinie we tried to identify some classes of stopwatch automata for which the over-approximation does not generate false positives.  We never managed to find an interesting subclass. 

This may look like a serious problem in terms of applicability of our result, but in practice, it does not matter too much. Most of the time, we are interested in the schedulability of a specific set of tasks (e.g. controlling a plant, a car, etc.) and for these instances, we can use our result: if we have false positives, we can refine the models of the tasks and the scheduler and rule them out. Hopefully, after a few iterations of refinement, we can prove that the set of tasks is schedulable.

The subsequent result of mine on timed and hybrid automata that I probably like best is the one we obtained on solving optimal reachability in timed automata.
We had a paper at FSTTCS in 2004 presenting the theoretical results, and a companion paper at GDV 2004 with an implementation using HyTech, a tool for analysing hybrid automata. 

I like these results because we ended up with a rather simple proof, after 3-4 years working on this hard problem. 

Luca:  Could you tell us how you started your collaboration on the award-winning paper? I recall that Franck was a regular visitor to our department at Aalborg University for some time, but I can't recall how his collaboration with the Uppaal group started.  

Kim: I am not quite sure I remember how and when I first met Franck. For some time we had already been working substantially with French researchers, in particular from LSV Cachan (Francois Laroussinie and Patricia Bouyer). I have the feeling that there were quite some strong links between Nantes (where Franck was) and LSV on timed systems in those days. Also, Nantes organized the PhD school MOVEP five times in the period 1994-2002, and I was lecturing there in one of those years, meeting Olivier Roux and Franck, who were the organizers. Funnily enough, this year we are organizing MOVEP in Aalborg. Anyway, at some point Franck became a regular visitor to Aalborg, often for long periods of time – playing on the city's squash team when he was not working.

Franck: As Kim mentioned, I was in Nantes at that time, but I was working with Francois Laroussinie who was in Cachan. Francois had spent some time in Aalborg working with Kim and his group and he helped organise a mini workshop with Kim in 1999, in Nantes. That’s when Kim invited me to spend some time in Aalborg, and I visited Aalborg University for the first time from October 1999 until December 1999. This is when we worked on the stopwatch automata paper. We wanted to use UPPAAL to verify systems beyond timed automata. 

I visited Kim and his group almost every year from 1999 until 2007, when I moved to Australia. There were always lots of visitors at Aalborg University and I was very fortunate to be there and learn from the Masters. 

I always felt at home at Aalborg University, and loved all my visits there. The only downside was that I never managed to defeat Kim at badminton. I thought it was a gear issue, but Kim gave me his racket (I still have it) and the score did not change much.


Luca: What are the research topics that you find most interesting right now?
Is there any specific problem in your current field of interest that you'd like to see solved?

Kim: Currently I am spending quite some time on marrying symbolic synthesis with reinforcement learning for Timed Markov Decision Processes in order to achieve optimal as well as safe strategies for Cyber-Physical Systems.


Luca: Both Franck and you have a very strong track record in developing theoretical results and in applying them to real-life problems.
In my, admittedly biased, opinion, your work exemplifies Ben Schneiderman's Twin-Win Model (https://www.pnas.org/doi/pdf/10.1073/pnas.1802918115), which propounds the pursuit of "the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination." Could you say a few words on your research philosophy?

Kim: I completely subscribe to this. Several early theoretical findings – such as the paper on stopwatch automata – have been key in our sustainable transfer to industry.

Franck: Kim has been a mentor to me for a number of years now, and I certainly learned this approach/philosophy from him and his group. 
 

We always started from a concrete problem, e.g. scheduling tasks/checking schedulability, and, to validate the solutions, built a tool to demonstrate applicability. The next step was to improve the tool to solve larger and larger problems.


UPPAAL is a fantastic example of this philosophy: the reachability problem for timed automata is PSPACE-complete. That would deter a number of people from trying to build tools to solve this problem. But with smart abstractions, algorithms and data structures, and constant improvement over a number of years, UPPAAL can analyse very large and complex systems. It is amazing to see how UPPAAL is used in several areas, from traffic control to planning and to precisely guiding a needle for an injection.


Luca: What advice would you give to a young researcher who is keen to start working on topics related to formal methods?

Kim: Come to Aalborg, and participate in this year's MOVEP.

Friday, July 29, 2022

Davide Sangiorgi's Interview with James Leifer, CONCUR 2022 ToT Award Recipient

I am pleased to post Davide Sangiorgi's interview with CONCUR 2022 Test-of-Time Award recipient James Leifer, who will receive the award for the paper
"Deriving Bisimulation Congruences for Reactive Systems" co-authored with the late Robin Milner.

Thanks to James for painting a rich picture of the scientific and social context within which the work on that paper was done and to Davide for conducting the interview. I trust that readers of this blog will enjoy reading it as much as I did.

Davide: How did the work presented in your CONCUR ToT paper come about?

James: I was introduced to Robin Milner by my undergraduate advisor Bernard Sufrin around 1994. Thanks to that meeting, I started with Robin at Cambridge in 1995 as a fresh Ph.D. student. Robin had recently moved from Edinburgh and had a wonderful research group, including, at various times, Peter Sewell, Adriana Compagnoni, Benjamin Pierce, and Philippa Gardner. There were also many colleagues working or visiting Cambridge interested in process calculi: Davide Sangiorgi, Andy Gordon, Luca Cardelli, Martín Abadi,... It was an exciting atmosphere! I was particularly close to Peter Sewell, with whom I discussed the ideas here extensively and who was generous with his guidance.

There was a trend in the community at the time of building complex process calculi (for encryption, Ambients, etc.) where the free syntax would be quotiented by a structural congruence to "stir the soup" and allow different parts of a tree to float together; reaction rules (unlabelled transitions) then would permit those agglomerated bits to react, to transform into something new.

Robin wanted to come up with a generalised framework, which he called Action Calculi, for modelling this style of process calculi.  His framework would describe graph-like "soups" of atoms linked together by arcs representing binding and sharing; moreover the atoms could contain subgraphs inside of them for freezing activity (as in prefixing in the pi-calculus), with the possibility of boundary crossing arcs (similarly to how nu-bound names in pi-calculus can be used in deeply nested subterms).  

Robin had an amazing talent for drawing beautiful graphs! He would "move" the nodes around on the chalkboard and reveal how a subgraph was in fact a redex (the LHS of an unlabelled transition). In the initial phases of my Ph.D. I just tried to understand these graphs: they were so natural to draw on the blackboard! And yet, they were also so uncomfortable to use when written out in linear tree- and list-like syntax, with so many distinct concrete representations for the same graph.

Putting aside the beauty of these graphs, what was the benefit of this framework? If one could manage to embed a process calculus in Action Calculi, using the graph structure and fancy binding and nesting to represent the quotiented syntax, what then? We dreamt about a proposition along the following lines: if you represent your syntax (quotiented by your structural congruence) in Action Calculi graphs, and you represent your reaction rules as Action Calculi graph rewrites, then we will give you a congruential bisimulation for free!

Compared to CCS for example, many of the rich new process calculi lacked labelled transitions systems. In CCS, there was a clean, simple notion of labelled transitions and, moreover, bisimulation over those labelled transitions yielded a congruence: for all processes P and Q, and all process contexts C[-], if P ~ Q, then C[P] ~ C[Q]. This is a key quality for a bisimulation to possess, since it allows modular reasoning about pieces of a process, something that's so much harder in a concurrent world than in a sequential one.

Returning to Action Calculi, we set out to make good on the dream that everyone gets a congruential bisimulation for free! Our idea was to find a general method to derive labelled transitions systems from the unlabelled transitions and then to prove that bisimulation built from those labelled transitions would be a congruence.

The idea was often discussed at that time that there was a duality whereby a process undergoing a labelled transition could be thought of as the environment providing a complementary context inducing the process to react. In the early labelled transition system in pi-calculus for example, I recall hearing that P undergoing the input labelled transition xy could be thought of as the environment outputting payload y on channel x to enable a tau transition with P.

So I tried to formalise this notion that labelled transitions are environmental contexts enabling reaction, i.e. defining P ---C[-]---> P' to mean C[P] ------> P' provided that C[-] was somehow "minimal", i.e. contained nothing superfluous beyond what was necessary to trigger the reaction. We wanted to get a rigorous definition of that intuitive idea. There was a long and difficult period (about 12 months) wandering through the weeds trying to define minimal contexts for Action Calculi graphs (in terms of minimal nodes and minimal arcs), but it was hugely complex, frustrating, and ugly and we seemed no closer to the original goal of achieving congruential bisimulation with these labelled transitions systems.

Eventually I stepped back from Action Calculi and started to work on a more theoretical definition of "minimal context" and we took inspiration from category theory.  Robin had always viewed Action Calculi graphs as categorical arrows between objects (where the objects represented interfaces for plugging together arcs). At the time, there was much discussion of category theory in the air (for game theory); I certainly didn't understand most of it but found it interesting and inspiring.

If we imagine that processes and process-contexts are just categorical arrows (where the objects are arities) then context composition is arrow composition. Now, assuming we have a reaction rule R ------> R', we can define labelled transitions P ---C[-]---> P' as follows: there exists a context D such that C[P] = D[R] and P' = D[R']. The first equality is a commuting diagram and Robin and I thought that we could formalise minimality by something like a categorical pushout! But that wasn't quite right as C and D are not the minimum pair (compared to all other candidates), but a minimal pair: there may be many incomparable minimal pairs all of which are witnesses of legitimate labelled transitions.  There was again a long period of frustration eventually resolved when I reinvented "relative pushouts" (in place of pushouts). They are a simple notion in slice categories but I didn't know that until later...
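Note: spelled out in symbols, the derived labelled transitions described above read as follows (my paraphrase of the construction, with "RPO" abbreviating the relative-pushout condition that makes the pair of contexts minimal), for a reaction rule $R \longrightarrow R'$:

\[
P \xrightarrow{\;C[-]\;} P'
\quad\Longleftrightarrow\quad
\exists D.\;\; C[P] = D[R],\;\; P' = D[R'],\;\; \text{and } (C, D) \text{ is an RPO over } (P, R).
\]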

Having found a reasonable definition of "minimal", I worked excitedly on bisimulation, trying to get a proof of congruence: P ~ Q implies E[P] ~ E[Q]. For weeks, I was considering the labelled transitions of E[P] ---F[-]---> and all the ways that could arise. The most interesting case is when a part of P, a part of E, and F all "conspire" together to generate a reaction. From that I was able to derive a labelled transition of P by manipulating relative pushouts, which by hypothesis yielded a labelled transition of Q, and then, via a sort of "pushout pasting", a labelled transition E[Q] ---F[-]--->. It was a wonderful moment of elation when I pasted all the diagrams together on Robin's board and we realised that we had the congruence property for our synthesised labels!

We looked back again at Action Calculi, using the notion of relative pushouts to guide us (instead of the arbitrary approach we had considered before) and we further looked at other kinds of process calculi syntax to see how relative pushouts could work there...  Returning to the original motivation to make Action Calculi a universal framework with congruential bisimulation for free, I'm not convinced of its utility. But it was the challenge that led us to the journey of the relative pushout work, which I think is beautiful.

Davide: What influence did this work have in the rest of your career? How much of your subsequent work built on it?

James: It was thanks to this work that I visited INRIA Rocquencourt to discuss process calculi with Jean-Jacques Lévy and Georges Gonthier. They kindly invited me to spend a year as postdoc in 2001 after I finished my thesis with Robin, and I ended up staying in INRIA ever since. I didn't work on bisimulation again as a research topic, but stayed interested in concurrency and distribution for a long time, working with Peter Sewell et al on distributed language design with module migration and rebinding, and with Cédric Fournet et al on compiler design for automatically synthesising cryptographic protocols for high level sessions specifications.

Davide: Could you tell us about your interactions with Robin Milner? What was it like to work with him? What lessons did you learn from him?

James: I was tremendously inspired by Robin.

He would stand at his huge blackboard, his large hands covered in chalk, his bicycle clips glinting on his trousers, and he would stalk up and down the blackboard --- thinking and moving.  There was something theatrical and artistic about it: his thinking was done in physical movement and his drawings were dynamic as the representations of his ideas evolved across the board.

I loved his drawings. They would start simple, a circle for a node, a box for a subgraph, etc. and then develop more and more detail corresponding to his intuition. (It reminded me of descriptions I had read of Richard Feynman drawing quantum interactions.)

Sometimes I recall being frustrated because I couldn't read into his formulas everything that he wanted to convey (and we would then switch back to drawings) or I would be worried that there was an inconsistency creeping in or I just couldn't keep up, so the board sessions could be a roller coaster ride at times!

Robin worked tremendously hard and consistently. He would write out and rewrite out his ideas, regularly circulating hand written documents. He would refine over and over his diagrams. Behind his achievements there was an impressive consistency of effort.

He had a lot of confidence to carry on when the sledding was hard. He had such a strong intuition of what ought to be possible, that he was able to sustain years of effort to get there.

He was generous with praise, with credit, with acknowledgement of others' ideas. He was generous in sharing his own ideas and seemed delighted when others would pick them up and carry them forward. I've always admired his openness and lack of jealousy in sharing ideas.

In his personal life, he seemed to have real compatibility with Lucy (his wife), who also kept him grounded. I still laugh when I remember once working with him at his dining room table and Lucy announcing, "Robin, enough of the mathematics. It's time to mow the lawn!"

I visited Oxford for Lucy's funeral and recall Robin putting a brave face on his future plans; I returned a few weeks later when Robin passed away himself. I miss him greatly. 

Davide: What research topics are you most interested in right now? How do you see your work develop in the future?

James: I've been interested in a totally different area, namely healthcare, for many years. I'm fascinated by how patients, and information about them, flows through the complex human and machine interactions in hospital. When looking at how these flows work, and how they don't, it's possible to see where errors arise, where blockages happen, where there are informational and visual deficits that make the job of doctors and nurses difficult. I like to think visually in terms of graphs (incrementally adding detail) and physically moving through the space where the action happens --- all inspired by Robin!

Tuesday, July 05, 2022

ICALP and the EATCS turn 50

These days, our colleagues at IRIF are hosting ICALP 2022 in Paris. This is the 49th edition of the ICALP conference, which turns 50 this year, since its first instalment was held in 1972. ICALP was the first conference of the, then newly founded, European Association for Theoretical Computer Science (EATCS). The rest is history, and I let any readers this post might have draw their own conclusions on the role that the EATCS and ICALP have played in supporting the development of theoretical computer science. (Admittedly, my opinions on both the EATCS and ICALP are very biased.)

The scientific programme of ICALP 2022 is mouthwatering as usual, thanks to the work done by the authors of submitted papers, Mikołaj Bojańczyk and David Woodruff (PC chairs), and their PCs. I encourage everyone to read the papers that are being presented at the conference.

The main purpose of this post, however, is to alert the readers of this blog that ICALP 2022 also hosts an exhibition to celebrate EATCS/ICALP at 50 and theoretical computer science at large. If you are in Paris, you can attend the exhibition in person. Otherwise, you can visit it virtually here. (See also the posters in one PDF file.)

I had the honour to take part in the preparation of the material for that exhibition, which was led by Sandrine Cadet and Sylvain Schmitz. I learnt a lot from all the other colleagues in the committee for the exhibition. 

As part of that work, I asked Pierluigi Crescenzi whether he'd be willing to carry out a graph- and data-mining analysis of ICALP vis-à-vis other major conferences in theoretical computer science based on DBLP data. Pierluigi's work went well beyond the call of duty and is summarised in this presentation. I trust that you'll find the results of the analysis by Pierluigi and three of his students at the Gran Sasso Science Institute very interesting. If you have any suggestions for expanding that analysis further, please write them in the comments section.

Let me close by wishing the EATCS and ICALP a happy 50th birthday, and a great scientific and social event to all the colleagues who are attending ICALP 2022.

Tuesday, June 21, 2022

Interview with Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga, CONCUR 2022 ToT Award Recipients

In this instalment of the Process Algebra Diary, Mickael Randour and I joined forces to interview Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga, who are some of the recipients of the CONCUR 2022 Test-of-Time award. We hope that you'll enjoy reading the very inspiring and insightful answers provided by the above-mentioned colleagues to our questions. 

Note: In what follows, "Luca A." refers to me, whereas "Luca" is Luca de Alfaro.

Luca A. and Mickael: You receive the CONCUR ToT Award 2022 for your paper  "The Element of Surprise in Timed Games", which appeared at CONCUR 2003. In that article, you studied concurrent, two-player timed games. A key contribution of your paper is the definition of an elegant timed game model, allowing both the representation of moves that can take the opponent by surprise, as they are played “faster”, and the definition of natural concepts of winning conditions for the two players — ensuring that players can win only by playing according to a physically meaningful strategy. In our opinion, this is a great example of how novel concepts and definitions can advance a research field. Could you tell us more about the origin of your model?


All: Mariëlle and Marco were postdocs with Luca at UCSC in that period, Rupak was a student of Tom's, and we were all in close touch, meeting very often to work together. We had all worked a lot on games, and an extension to timed games was a natural one for us to consider.


In untimed games, players propose a move, and the moves jointly determine the next game state. In these games there is no notion of real-time.  We wanted to study games in which players could decide not only the moves, but also the instant in time when to play them.


In timed automata, there is only one “player” (the automaton), which can take either a transition, or a time step.  The natural generalization would be a game in which players could propose either a move, or a time step.


Yet, we were unsatisfied with this model. It seemed to us that it was different to say “Let me wait 14 seconds and reconvene.  Then, let me play my King of Spades” or “Let me play my King of Spades in 14 seconds”. In the first, by stopping after 14 seconds, the player is providing a warning that the card might be played. In the second, there is no such warning.  In other words, if players propose either a move or a time-step, they cannot take the adversary by surprise with a move at an unanticipated instant.  We wanted a model that could capture this element of surprise.


To capture the element of surprise, we came up with a model in which players propose both a move and the delay with which it is played. After this natural insight, the difficulty was to find the appropriate winning condition, so that a player could not win by stopping time. 
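Note: here is a toy sketch of mine of how a single round might be resolved when each player proposes both a delay and a move: the shorter delay acts first and can take the opponent by surprise. This is a drastic simplification that deliberately ignores the winning conditions and the time-divergence issues discussed next.

    import random

    # One round of a toy timed game: each player proposes a (delay, action) pair.
    def resolve(move1, move2):
        (d1, a1), (d2, a2) = move1, move2
        if d1 < d2:
            return d1, "player 1", a1            # player 1 acts first: a surprise for player 2
        if d2 < d1:
            return d2, "player 2", a2
        player, action = random.choice([("player 1", a1), ("player 2", a2)])
        return d1, player, action                # simultaneous proposals: resolved nondeterministically

    # Player 2's "King of Spades in 14 seconds" is preempted by a move at 3.5 seconds.
    print(resolve((3.5, "block"), (14.0, "King of Spades")))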


Tom: Besides the infinite state space (region construction etc.), a second issue that is specific to timed systems is the divergence of time. Technically, divergence is a built-in Büchi condition ("there are infinitely many clock ticks"), so all safety and reachability questions about timed systems are really co-Büchi and Büchi questions, respectively.  This observation had been part of my work on timed systems since the early 1990s, but it has particularly subtle consequences for timed games, where no player (and no collaboration of players) should have the power to prevent time from diverging.  This had to be kept in mind during the exploration of the modeling space.


All: We came up with many possible winning conditions, and for each we identified some undesirable property, except for the one that we published.  This is in fact an aspect that did not receive enough attention in the paper; we presented the chosen winning condition, but we did not discuss in full detail why several other conditions that might have seemed plausible did not work.


In the process of analyzing the winning conditions, we came up with many interesting games, which form the basis of many results, such as the result on the lack of determinization, on the need for memory in reachability games (even when clock values are part of the state), and, most famously since it gave the paper its title, on the power of surprise.


After this fun ride came the hard work, where we had to figure out how to solve these games. We had worked on symbolic approaches to games before, and we followed that approach here, but many complex technical adaptations were required. When we look at the paper from a distance in time, it offers a combination of a natural game model and a fairly sophisticated solution algorithm.


Luca A. and Mickael: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? If so, which of your subsequent results on (timed) games do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?


Luca: Marco and I built Ticc, which was meant to be a tool for timed interface theories, based largely on the insights in this paper.  The idea was to be able to check the compatibility of real-time systems, and automatically infer the requirements that enable two system components to work well together – to be compatible in time.  We thought this would be useful for hardware or embedded systems, and especially for control systems, and in fact the application is important: there is now much successful work on the compositionality of StateFlow/Simulink models.


We used MTBDDs as the symbolic engine, and Marco and I invented a language for describing the components and wrote, by pair programming, some absolutely beautiful OCaml code that compiled real-time component models into MTBDDs (perhaps the nicest code I have ever written). The problem was that we were too optimistic in our approach to state explosion, and we were never able to study any system of realistic size.


After this, I became interested in games more in an economic setting, and from there I veered into incentive systems, and from there to reputation systems and to a three-year period in which I applied reputation systems in practice in industry, thus losing somewhat touch with formal methods work.

Marco: I’ve kept working on games since the award-winning paper, in one way or another. The closest I’ve come to the timed game setting has been with controller synthesis games for hybrid automata. In a series of papers, we had fun designing and implementing symbolic algorithms that manipulate polyhedra to compute the winning region of a linear hybrid game. The experience gained on timed games helped me recognize the many subtleties arising in games played in real time on a continuous state-space.

Mariëlle: I have been working on games for test case generation: One player represents the tester, which chooses inputs to test; the other player represents the System-under-Test, and chooses the outputs of the system. Strategy synthesis algorithms can then compute strategies for the tester that maximize all kinds of objectives, e.g. reaching certain states, test coverage, etc.


A result that I really like is that we were able to show a very close correspondence between the existing testing frameworks and game theoretic frameworks: Specifications act as game arenas; test cases are exactly game strategies, and the conformance relation used in testing (namely ioco) coincides with game refinement (i.e. alternating refinement). 


Rupak: In an interesting way, the first paper on games I read was the one by Maler, Pnueli and Sifakis (STACS 95) that had both fixpoint algorithms and timed games (without “surprise”). So the problem of symbolic solutions to games and their applications in synthesis followed me throughout my career. I moved to finding controllers for games with more general (non-linear) dynamics, where we worked on abstraction techniques. We also realized some new ways to look at restricted classes of adversaries. I was always fortunate to have very good collaborators who kept my interest alive with new insights. Very recently, I have gotten interested in games from a more economic perspective, where players can try to signal each other or persuade each other about private information but it’s too early to tell where this will lead.


Luca A. and Mickael: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?


Mariëlle: Throughout my academic life, I have been working on stochastic analysis --- with Luca and Marco, we worked on stochastic games a lot. First only on theory, but later also on industrial applications, especially in the railroad and high-tech domains. At some point in time, I realized that my work was actually centred around analysing failure probabilities and risk. That is how I moved into risk analysis; the official title of the chair I hold is Risk Management for High Tech Systems.


The nice thing is: this sells much better than Formal Methods! Almost nobody knows what Formal Methods are, and if they know, people think “yes, those difficult people who urge us to specify everything mathematically”. For risk management, this is completely different: everybody understands that this is an important area.

Luca: I am currently working on computational ecology, on ML for networks, and on fairness in data and ML.  In computational ecology, we are working on the role of habitat and territory for species viability. We use ML techniques to write “differentiable algorithms”, where we can compute the effect of each input – such as the kind of vegetation in each square-kilometer of territory – on the output.  If all goes well, this will enable us to efficiently compute which regions should be prioritized for protection and habitat conservation.


In networks, we have been able to show that reinforcement learning can yield tremendous throughput gains in wireless protocols, and we are now starting to work on routing and congestion control.


And in fairness and ML, we have worked on the automatic detection of anomalous data subgroups (something that can be useful in model diagnostics), and we are now working on the spontaneous inception of discriminatory behavior in agent systems.


While these do not really constitute a coherent research effort, I can certainly say that I am having a grand tour of CS – the kind of joy ride one can afford with tenure!


Rupak: I have veered between practical and theoretical problems. I am working on charting the decidability frontier for infinite-state model checking problems (most recently, for asynchronous programs and context-bounded reachability). I am also working on applying formal methods to the world of cyber-physical systems --- mostly games and synthesis. Finally, I have become very interested in applying formal methods to large-scale industrial systems through a collaboration with Amazon Web Services. There is still a large gap between what is theoretically understood and what is practically applicable to these systems, and the problems are a mix of technical and social ones.


Luca A. and Mickael: You have a very strong track record in developing theoretical results and in applying them to real-life problems. In our, admittedly biased, opinion, your work exemplifies Ben Schneiderman's Twin-Win Model, which propounds the pursuit of "the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination." Could you say a few words on your research philosophy? How do you see the interplay between basic and applied research?


Luca: This is very kind for you to say, and a bit funny to hear, because certainly when I was young I had a particular talent for getting lost in useless theoretical problems.  


I think two things played in my favor. One is that I am curious. The other is that I have a practical streak: I still love writing code and tinkering with “things”, from IoT to biology to web and more. This tinkering was at the basis of many of the works I did. My work on reputation systems started when I created a wiki on cooking; people were vandalizing it, and I started to think about game theory and incentives for collaboration, which led to my writing much of the code for Wikipedia analysis and, at Google, for Maps edits analysis. My work on networks started with me tinkering with simple reinforcement-learning schemes that might work, and writing the actual code. On the flip side, my curiosity too often got the better of me, so that I have been unable to pay continuous and devoted attention to a single research field. I am not a specialist in any single thing I do or have done. I am always learning the ropes of something I don’t quite know yet how to do.


My applied streak probably gave me some insight into which problems might be of more practical relevance, and my frequent field changes have allowed me to bring new perspectives to old problems. There were not many people using RL for wireless networks, and there are not many who write ML and GPU code and also avidly read about conservation biology.

Rupak: I must say that Tom and Luca were very strong influencers for me in my research: both in problem selection and in appreciating the joy of research. I remember one comment of Tom, paraphrased as “Life is short. We should write papers that get read.” I spent countless hours in Luca’s office and learnt a lot of things about research, coffee, the ideal way to make pasta, and so on.

Marco: It was an absolute privilege to be part of the group that wrote that paper (my 4th overall, according to DBLP). I’d like to thank my coauthors, and Luca in particular, for guiding me during those crucially formative years.

Mariëlle: I fully agree!


Luca A. and Mickael: Several of you have high-profile leadership roles at your institutions. What advice would you give to a colleague who is about to take up the role of department chair, director of a research centre, dean or president of a university? How can one build a strong research culture, stay research active and live to tell the tale?


Luca: My colleagues may have better advice; my productivity certainly decreased when I was department chair, and is lower even now that I am the vice-chair.  
When I was young, I was ambitious enough to think that my scientific work would have the largest impact among the things I was doing.  But I soon realized that some of the greatest impact was on others: on my collaborators, on the students I advised, who went on to build great careers and stayed friends, and on all the students I was teaching.  This awareness serves to motivate and guide me in my administrative work. The CS department at UCSC is one of the ten largest in the number of students we graduate, and the time I spend on improving its organization and the quality of the education it delivers is surely very impactful.  My advice to colleagues is to consider their service not as an impediment to research, but as one of the most impactful things they do.


My way of staying alive is to fence off some days that I dedicate only to research (barring unavoidable emergencies), and to have collaborators who give me such joy in working together that they brighten and energize my whole day.


Luca A. and Mickael: Finally, what advice would you give to a young researcher who is keen to start working on topics related to concurrency theory today?
 

Luca: Oh, that sounds very interesting!  And, may I show you this very interesting thing we are doing in JAX to model bird dispersal? We feed in this climate and vegetation data, and then we…


Just kidding.  Just kidding.  If I come to CONCUR I promise not to lead any of the concurrency yearlings astray.  At least I will try.


My main advice would be this: work on principles that allow correct-by-design development.  If you look at programming languages and software engineering, the progress in software productivity has not happened because people have become better at writing and debugging code written in machine language or C. It has happened because of the development of languages and software principles that make it easier to build large systems that are correct by construction.
We need the same kind of principles, (modeling) languages, and ideas to build correct concurrent systems.  Verification alone is not enough. Work on design tools, ideas to guide design, and design languages.


Tom: In concurrency theory we define formalisms and study their properties. Most papers do the studying, not the defining: they take a formalism that was defined previously, by themselves or by someone else, and study a property of that formalism, usually to answer a question that is inspired by some practical motivation. To me, this omits the most fun part of the exercise: the defining part. The point I am trying to make is not that we need more formalisms, but that, if one wishes to study a specific question, it is best to study the question on the simplest possible formalism that exhibits exactly the features that make the question meaningful. To do this, one often has to define that formalism. In other words, the formalism should follow the question, not the other way around. This principle has served me well again and again and led to formalisms such as timed games, which try to capture the essence needed to study the power of timing in strategic games played on graphs. So my advice to a young researcher in concurrency theory is: choose your formalism wisely and don't be afraid to define it.

Rupak: Problems can be judged by different measures. Some are practically justified (“Is this practically relevant in the near future?”) and some are justified by the foundations they build (“Does this avenue provide new insights and tools?”). Different communities place different values on the two. But both kinds of work are important, and one should recognize that one set of values is not universally better than the other.

Mariëlle: As Michael Jordan puts it: Just play. Have fun. Enjoy the game.

Monday, May 30, 2022

Orna Kupferman's Interview with Christel Baier, Holger Hermanns and Joost-Pieter Katoen, CONCUR 2022 ToT Award Recipients

I am delighted to post Orna Kupferman's interview with CONCUR 2022 Test-of-Time Award recipients Christel Baier, Holger Hermanns and Joost-Pieter Katoen.


Thanks to Christel, Holger and Joost-Pieter for their answers (labelled BHK in what follows) and to Orna (Q below) for conducting the interview. Enjoy and watch this space for upcoming interviews with the other award recipients!
 

Q: You receive the CONCUR Test-of-Time Award 2022 for your paper "Approximate symbolic model checking of continuous-time Markov chains," which appeared at CONCUR 1999. In that article, you combine three different challenges: symbolic algorithms, real-time systems, and probabilistic systems. Could you briefly explain to our readers what the main challenge in such a combination is?

BHK: The main challenge is to provide a fixed-point characterization of time-bounded reachability probabilities: the probability of reaching a given target state within a given deadline. Almost all works in the field up to 1999 treated discrete-time probabilistic models and focused on "just" reachability probabilities: what is the probability of eventually ending up in a given target state? This can be characterized as the unique solution of a linear equation system. The question at stake was: how to incorporate a real-valued deadline d? The main insight was to split the problem into staying a certain amount of time, say x, in the current state and using the remaining d-x time to reach the target from its successor state. This yields a Volterra integral equation system; indeed, time-bounded reachability probabilities are unique solutions of such equation systems. In the CONCUR'99 paper we suggested using symbolic data structures to do the numerical integration; later we found out that much more efficient techniques can be applied.
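For concreteness, here is one standard way to write this characterization, in notation that follows later textbook presentations rather than the original paper. For a CTMC with state space S, rate matrix R, exit rates E(s) = \sum_{s'} R(s,s') and target set B, the time-bounded reachability probability Prob(s,d) satisfies

\[
\mathrm{Prob}(s,d) \;=\;
\begin{cases}
1 & \text{if } s \in B,\\[4pt]
\displaystyle\int_{0}^{d} \sum_{s' \in S} \mathbf{R}(s,s')\, e^{-E(s)\,x}\, \mathrm{Prob}(s',\, d-x)\, \mathrm{d}x & \text{otherwise,}
\end{cases}
\]

where R(s,s') e^{-E(s)x} is the density of moving to s' after residing x time units in s. The equation formalizes exactly the "stay for x time units, then use the remaining d-x" insight described above.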

Q: Could you tell us how you started your collaboration on the award-winning paper? In particular, as the paper combines three different challenges, is it the case that each of you brought different expertise to the research?

BHK: Christel and Joost-Pieter were both in Birmingham, where a meeting of a collaboration project between German and British research groups on stochastic systems and process algebra took place. There, the first ideas on model checking continuous-time Markov chains (CTMCs) arose, especially for time-bounded reachability: with stochastic process algebras there were means to model CTMCs in a compositional manner, but verification was lacking. Back in Germany, Holger suggested including a steady-state operator, as a counterpart to the transient properties that can be expressed using timed reachability probabilities. We then also developed the symbolic data structure to support the verification of the entire logic.

Q: Your contribution included a generalization of BDDs (binary decision diagrams) to MTDDs (multi-terminal decision diagrams), which allow both Boolean and real-valued variables. What do you think about the current state of symbolic algorithms, in particular the choice between SAT-based methods and methods that are based on decision diagrams?

BHK: BDD-based techniques entered probabilistic model checking in the mid-1990s for discrete-time models such as Markov chains. Our paper was one of the first, perhaps even the first, that proposed to use BDD structures for real-time stochastic processes. Nowadays, SAT- and, in particular, SMT-based techniques belong to the standard machinery in probabilistic model checking. SMT techniques are used, for example, in bisimulation minimization at the language level, counterexample generation, and parameter synthesis. This includes both linear and non-linear theories. BDD techniques are still used, mostly in combination with sparse representations, but it is fair to say that SMT is becoming more and more relevant.
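As a toy illustration of how an SMT query enters such analyses, consider asking a solver such as Z3 whether a transition probability p of a small parametric Markov chain can be chosen so that a reachability probability meets a threshold. The model below is a hypothetical two-state example of mine, not something taken from the paper or from any specific tool; it only sketches the shape of a parameter-synthesis-style query.

    # Hypothetical parametric Markov chain: from s0 we reach the target with
    # probability p and otherwise move to s1, which retries with probability
    # q = 0.5 and fails otherwise.  The reachability probability x0 from s0
    # satisfies the fixed-point equation x0 = p + (1 - p) * q * x0.
    from z3 import Real, Solver, And, sat

    p, q, x0 = Real('p'), Real('q'), Real('x0')

    s = Solver()
    s.add(q == 0.5)                      # fixed retry probability
    s.add(And(p > 0, p < 1))             # p is the parameter to synthesise
    s.add(x0 == p + (1 - p) * q * x0)    # reachability-probability equation
    s.add(x0 >= 0.8)                     # the requirement to meet

    if s.check() == sat:
        m = s.model()
        print('feasible, e.g. p =', m[p], 'gives reachability', m[x0])
    else:
        print('no parameter value meets the requirement')

Real parameter-synthesis tools handle vastly larger models and richer constraints, but the basic shape of the query, namely a set of (possibly non-linear) arithmetic constraints plus a satisfiability check, is the same.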

Q: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

BHK: This depends a bit on whom you ask! Christel's recent work is about cause-effect reasoning and notions of responsibility in the verification context. This ties in with the research interests of Holger, who looks at the foundations of perspicuous software systems. This research is rooted in the observation that the explosion of opportunities for software-driven innovations comes with an implosion of human opportunities and capabilities to understand and control these innovations. Joost-Pieter focuses on pushing the borders of automation in weakest-precondition reasoning about probabilistic programs. This involves loop-invariant synthesis, probabilistic termination proofs, the development of deductive verifiers, and so forth. The challenges include coming up with good techniques for synthesizing quantitative loop invariants, or even complete probabilistic programs.

Q: What advice would you give to a young researcher who is keen to start working on topics related to symbolic algorithms, real-time systems, and probabilistic systems?

BHK: Try to keep it smart and simple.

Friday, May 06, 2022

HALG 2022: Call for participation

I am posting this call for participation on behalf of Keren Censor-Hillel, PC chair for HALG 2022. I expect that many colleagues from the Track A community will attend that event and enjoy its mouth-watering scientific programme.

 

7th Highlights of Algorithms conference (HALG 2022)
The London School of Economics and Political Science, June 1-3, 2022
https://www.lse.ac.uk/HALG-2022


The Highlights of Algorithms conference is a forum for presenting the highlights of recent developments in algorithms and for discussing potential further advances in this area. The conference will provide a broad picture of the latest research in algorithms through a series of invited talks, as well as the possibility for all researchers and students to present their recent results through a series of short talks and poster presentations. Attending the Highlights of Algorithms conference will also be an opportunity for networking and meeting leading researchers in algorithms.

For local information, visa information, or information about registration, please contact Tugkan Batu (t.batu@lse.ac.uk).
 

PROGRAM

A detailed schedule and a list of all accepted short contributions are available at https://www.lse.ac.uk/HALG-2022/programme/Programme

REGISTRATION

https://www.lse.ac.uk/HALG-2022/registration/Registration

Early registration (by 20th May 2022)

Students: £100
Non-students: £150

Late registration (from 21st May 2022)
Students: £175
Non-students: £225

Registration includes the lunches provided, coffee breaks, and the conference reception.

There are some funds from conference sponsors to subsidise student registration fees. Students can apply for a fee waiver by sending an email to Enfale Farooq (e.farooq@lse.ac.uk) by 15th May 2022. Students presenting a contributed talk will be given priority in the allocation of these funds. Applicants will be notified of the outcome by 17th May 2022.

INVITED SPEAKERS

Survey speakers:

Amir Abboud (Weizmann Institute of Science) 
Julia Chuzhoy (Toyota Technological Institute at Chicago)
Martin Grohe (RWTH Aachen University)
Anna Karlin (University of Washington)
Richard Peng (Georgia Institute of Technology)
Thatchaphol Saranurak (University of Michigan)

Invited talks:

Peyman Afshani (Aarhus University)  
Soheil Behnezhad (Stanford University)  
Sayan Bhattacharya (University of Warwick)
Guy Blelloch (Carnegie Mellon University)
Greg Bodwin (University of Michigan)
Mahsa Eftekhari (University of California, Davis)
John Kallaugher (Sandia National Laboratories)
William Kuszmaul (Massachusetts Institute of Technology)
Jason Li (Carnegie Mellon University)
Joseph Mitchell (SUNY, Stony Brook)
Shay Moran (Technion)
Merav Parter (Weizmann Institute of Science)
Aviad Rubinstein (Stanford University)
Rahul Savani (University of Liverpool)
Mehtaab Sawhney (Massachusetts Institute of Technology)
Jakub Tetek (University of Copenhagen)
Vera Traub (ETH Zurich)
Jan Vondrak (Stanford University)
Yelena Yuditsky (Université Libre de Bruxelles) 

Saturday, March 12, 2022

FOCS 2021 Test-of-Time Award winners (and one deserving paper that missed out)

As members of the TCS community will most likely know, FOCS established Test-of-Time Awards starting with its 2021 edition, to celebrate contributions published at that conference 30, 20 and 10 years earlier. The first list of selected winners is, as one might have expected, stellar:

  • Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, Mario Szegedy:
    Approximating Clique is Almost NP-Complete.
    FOCS 1991
  • David Zuckerman:
    Simulating BPP Using a General Weak Random Source.
    FOCS 1991
  • Serge A. Plotkin, David B. Shmoys, Éva Tardos:
    Fast Approximation Algorithms for Fractional Packing and Covering Problems.
    FOCS 1991
  • Ran Canetti:
    Universally Composable Security: A New Paradigm for Cryptographic Protocols.
    FOCS 2001
  • Boaz Barak:
    How to Go Beyond the Black-Box Simulation Barrier.
    FOCS 2001
  • Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, Andrew Chi-Chih Yao:
    Informational Complexity and the Direct Sum Problem for Simultaneous Message Complexity.
    FOCS 2001
  • Zvika Brakerski, Vinod Vaikuntanathan:
    Efficient Fully Homomorphic Encryption from (Standard) LWE.
    FOCS 2011

FWIW, I offer my belated congratulations to all the award recipients, whose work has had, and continues to have, a profound influence on the "Volume A" TCS community. 

Apart from celebrating their achievement, the purpose of this post is to highlight a paper from FOCS 1991 that missed out on the Test-of-Time Award, but that, IMHO, would have fully deserved it. 

I am fully aware that the number of deserving papers/scientists is typically larger, if not much larger, than the number of available awards. Awards are a scarce resource! My goal with this post is simply to remind our community (and especially its younger members) of a seminal contribution that they might want to read or re-read.

The paper in question is "Tree automata, mu-calculus and determinacy" by Allen Emerson and Charanjit S. Jutla, which appeared at FOCS 1991. (Emerson shared the 2007 A.M. Turing Award for the invention of model checking, and Jutla went on to do path-breaking work in cryptography.) That paper is absolutely fundamental for the mu-calculus, but also for automata theory and verification in general. It introduced many ideas and results that became the basis for extensive research.

As a first contribution, the article introduced parity games and proved their fundamental properties. The parity condition was a missing link in automata theory on infinite objects, and it made the whole theory much simpler than that proposed in earlier work. Technically, the parity condition is both universal and positional. Universal means that tree automata with parity conditions are as expressive as those with Rabin or Muller conditions. Positional means that, in the acceptance game, if a player has a winning strategy then she has one that depends only on the current position and not on the history of the play so far. This is a huge technical advance for all automata-theoretic constructions and for the analysis of infinite-duration games. It allows one, for instance, to avoid the complicated arguments of Gurevich and Harrington in their seminal STOC 1982 article, which were already a huge simplification of Rabin's original argument from 1969 proving, among much else, the decidability of the monadic second-order theory of the infinite binary tree. In passing, let me remark that Rabin has gone on record saying that "I consider this to be the most difficult research I have ever done." See this interview in CACM.

The second main contribution of that paper is the discovery of the relation between parity games and the mu-calculus. The authors show how a mu-calculus model-checking problem can be reduced to solving a parity game and, conversely, how the set of winning positions in a parity game can be described by a mu-calculus formula. This result is the birth of the "model-checking via games" approach. It also shows that deciding the winner of a parity game is in both NP and co-NP. As a corollary, the mu-calculus model-checking problem is exactly as hard as solving parity games. It is still not known whether the problem is in PTIME. A recent advance from STOC'17 gives a quasi-polynomial-time algorithm. (See this blog post for a discussion of that result, which received the STOC 2017 best paper award and was immediately followed up by a flurry of related papers.)
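To give a flavour of the second direction, here is one standard way to describe the winning region of player 0 in a max-parity game with priorities 0, ..., d; the notation follows later presentations of the result rather than the original paper, with P_i holding at vertices of priority i and V_0, V_1 marking the vertices owned by the two players:

\[
W_0 \;=\; \theta_d X_d.\ \theta_{d-1} X_{d-1}.\ \cdots\ \theta_0 X_0.\
\bigvee_{i=0}^{d} \Bigl( P_i \wedge \bigl( (V_0 \wedge \Diamond X_i) \vee (V_1 \wedge \Box X_i) \bigr) \Bigr),
\]

where \theta_i is \nu when i is even and \mu when i is odd, and the modalities \Diamond and \Box are interpreted over the edges of the game graph. The alternation of fixpoints mirrors the priority order: the most significant (highest) priority is bound outermost, even priorities correspond to greatest fixpoints and odd ones to least fixpoints.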

Finally, the paper also shows how to prove Rabin's complementation lemma, which is the most difficult step in his celebrated aforementioned decidability result, with the help of parity conditions. The proof is radically simpler than previous approaches. The paper presents this contribution most prominently, but in fact the conceptual and technical contributions appearing later in the paper have turned out to be the most important ones for the community.

Overall, the above-mentioned paper by Emerson and Jutla is a truly seminal contribution that has stood the test of time, has sown the seeds for much research over the last thirty years (as partly witnessed by the over 1,130 citations it has received so far) and is still stimulating advances at the cutting edge of theoretical computer science that bridge the Volume A-Volume B divide. 

I encourage everyone to read it!

Acknowledgement: I have learnt much of the content of this post from Igor Walukiewicz. The responsibility for any infelicity is mine alone.

Sunday, March 06, 2022

HALG 2022: Call For Submissions of Short Contributed Presentations

On behalf of Keren Censor-Hillel, PC chair for HALG 2022, I am happy to post the call for submissions of short contributed presentations for that event. I encourage all members of the algorithms community to submit their contributed presentations. HALG has rapidly become a meeting point for that community in a relaxed workshop-style setting.

The 7th Highlights of Algorithms conference (HALG 2022) 

London, June 1-3, 2022 

https://www.lse.ac.uk/HALG-2022

The Highlights of Algorithms conference is a forum for presenting the highlights of recent developments in algorithms and for discussing potential further advances in this area. The conference will provide a broad picture of the latest research in algorithms through a series of invited talks, as well as the possibility for all researchers and students to present their recent results through a series of short talks and poster presentations. Attending the Highlights of Algorithms conference will also be an opportunity for networking and meeting leading researchers in algorithms.

Call For Submissions of Short Contributed Presentations: The HALG 2022 conference seeks submissions for contributed presentations. Each presentation is expected to consist of a poster and a short talk (serving as an invitation to the poster). There will be no conference proceedings; hence, presenting work already published at (or submitted to) another venue or journal is welcome.

If you would like to present your results at HALG 2022, please submit the abstract, the paper and the name of the speaker via EasyChair: https://easychair.org/conferences/?conf=halg2022

The abstract should include (when relevant) information on where the results have been published/accepted (e.g., at a conference) and where they are publicly available (e.g., on arXiv). All submissions will be reviewed by the program committee, giving priority to work accepted or published in 2021 or later.

Submissions deadline: March 27, 2022. 

Acceptance/rejection notifications will be sent in early April.

Friday, March 04, 2022

Combinatorial Exploration: An algorithmic framework to automate the proof of results in combinatorics

Are you interested in combinatorial mathematics? If so, I am happy to welcome you to the future! 

I am proud to share with you a new, 99-page preprint by three colleagues from my department (Christian Bean, Émile Nadeau and Henning Ulfarsson) and three of their collaborators (Michael Albert, Anders Claesson and Jay Pantone) that is the result of years of work and has led to the birth of Combinatorial Exploration. Combinatorial Exploration is an algorithmic framework that can prove results that have so far required the ingenuity of human combinatorialists. More specifically, it can study, at the press of a button, the structure of combinatorial objects and derive their counting sequences and generating functions. The applicability and power of Combinatorial Exploration are witnessed by the applications to the domain of permutation patterns given in the paper. My colleagues use it to re-derive hundreds of results in the literature in a uniform manner and to prove many new ones. See the Permutation Pattern Avoidance Library (PermPAL) and Section 2.4 of the article for a more comprehensive list of notable results. The paper also gives three additional proofs of concept, showing examples of how Combinatorial Exploration can prove results in the domains of alternating sign matrices, polyominoes, and set partitions. Last, but by no means least, the GitHub repository at https://github.com/PermutaTriangle/comb_spec_searcher contains the open-source Python framework for Combinatorial Exploration, and the one at https://github.com/PermutaTriangle/Tilings contains the code needed to apply it to the field of permutation patterns.

 "Det er svært at spå, især om fremtiden", as Niels Bohr and Piet Hein famously said. However, let me stick my neck out and predict that this work will have substantial impact and will be the basis for exciting future work. 

Congratulations to my colleagues! With my department chair hat on, I am very proud to see work of this quality stem from my department, and I am humbled by what my colleagues have achieved already. As an interested observer, I am very excited to see what their algorithms will be able to prove in the future. For the moment, let's all enjoy what they have done so far.