Tuesday, June 21, 2022

Interview with Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga, CONCUR 2022 ToT Award Recipients

In this instalment of the Process Algebra Diary, Mickael Randour and I joined forces to interview Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga, who are some of the recipients of the CONCUR 2022 Test-of-Time award. We hope that you'll enjoy reading the very inspiring and insightful answers provided by the above-mentioned colleagues to our questions. 

Note: In what follows, "Luca A." refers to me, whereas "Luca" is Luca de Alfaro.

Luca A. and Mickael: You receive the CONCUR ToT Award 2022 for your paper  "The Element of Surprise in Timed Games", which appeared at CONCUR 2003. In that article, you studied concurrent, two-player timed games. A key contribution of your paper is the definition of an elegant timed game model, allowing both the representation of moves that can take the opponent by surprise, as they are played “faster”, and the definition of natural concepts of winning conditions for the two players — ensuring that players can win only by playing according to a physically meaningful strategy. In our opinion, this is a great example of how novel concepts and definitions can advance a research field. Could you tell us more about the origin of your model?

All: Mariëlle and Marco were postdocs with Luca at UCSC in that period, Rupak was a student of Tom's, and we were all in close touch, meeting very often to work together.  We had all worked extensively on games, and an extension to timed games was natural for us to consider.

In untimed games, players propose a move, and the moves jointly determine the next game state. In these games there is no notion of real time.  We wanted to study games in which players could decide not only the moves, but also the instant in time when to play them.

In timed automata, there is only one “player” (the automaton), which can take either a transition, or a time step.  The natural generalization would be a game in which players could propose either a move, or a time step.

Yet, we were unsatisfied with this model. It seemed to us that it was different to say “Let me wait 14 seconds and reconvene.  Then, let me play my King of Spades” or “Let me play my King of Spades in 14 seconds”. In the first, by stopping after 14 seconds, the player is providing a warning that the card might be played. In the second, there is no such warning.  In other words, if players propose either a move or a time-step, they cannot take the adversary by surprise with a move at an unanticipated instant.  We wanted a model that could capture this element of surprise.

To capture the element of surprise, we came up with a model in which players propose both a move and the delay with which it is played. After this natural insight, the difficulty was to find the appropriate winning condition, so that a player could not win by stopping time. 
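
The resulting joint-move structure can be sketched in a few lines of Python. This is only a toy illustration under our own naming (the paper's actual model also handles blocking, time divergence, and the winning conditions discussed in this interview): each player proposes a (delay, move) pair, and the proposal with the shorter delay wins the race.

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Proposal:
    delay: float   # how long the player intends to wait
    action: str    # the move played once the delay has elapsed


def resolve(p1, p2):
    """Joint-move resolution: the proposal with the shorter delay wins the
    race; ties are broken nondeterministically. Returns (winner, proposal)."""
    if p1.delay < p2.delay:
        return 1, p1
    if p2.delay < p1.delay:
        return 2, p2
    return random.choice([(1, p1), (2, p2)])


# Player 2 moves "faster" and takes player 1 by surprise:
winner, executed = resolve(Proposal(14.0, "king_of_spades"),
                           Proposal(3.0, "block"))
# winner == 2, executed.action == "block"
```

Because each proposal commits to a delay *and* a move, neither player learns anything before the faster move lands; this is exactly the element of surprise that the move-or-time-step model cannot express.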

Tom: Besides the infinite state space (region construction etc.), a second issue that is specific to timed systems is the divergence of time. Technically, divergence is a built-in Büchi condition ("there are infinitely many clock ticks"), so all safety and reachability questions about timed systems are really co-Büchi and Büchi questions, respectively.  This observation had been part of my work on timed systems since the early 1990s, but it has particularly subtle consequences for timed games, where no player (and no collaboration of players) should have the power to prevent time from diverging.  This had to be kept in mind during the exploration of the modeling space.

All: We came up with many possible winning conditions, and for each we identified some undesirable property, except for the one that we published.  This is in fact an aspect that did not receive enough attention in the paper; we presented the chosen winning condition, but we did not discuss in full detail why several other conditions that might have seemed plausible did not work.

In the process of analyzing the winning conditions, we came up with many interesting games, which form the basis of many results, such as the result on the lack of determinization, on the need for memory in reachability games (even when clock values are part of the state) and, most famously, as it gave the title to the paper, on the power of surprise.

After this fun ride came the hard work, where we had to figure out how to solve these games. We had worked on symbolic approaches to games before, and we followed that approach here, but many complex technical adaptations were required. When we look at the paper from a distance in time, it shows this combination of a natural game model with a fairly sophisticated solution algorithm.

Luca A. and Mickael: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? If so, which of your subsequent results on (timed) games do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Luca: Marco and I built Ticc, which was meant to be a tool for timed interface theories, based largely on the insights in this paper.  The idea was to be able to check the compatibility of real-time systems, and automatically infer the requirements that enable two system components to work well together – to be compatible in time.  We thought this would be useful for hardware or embedded systems, and especially for control systems, and in fact the application is important: there is now much successful work on the compositionality of StateFlow/Simulink models.

We used MTBDDs as the symbolic engine; Marco and I invented a language for describing the components, and we pair-programmed some absolutely beautiful OCaml code that compiled real-time component models into MTBDDs (perhaps the nicest code I have ever written). The problem was that we were too optimistic in our approach to state explosion, and we were never able to study any system of realistic size.

After this, I became interested in games more in an economic setting, and from there I veered into incentive systems, and from there to reputation systems and to a three-year period in which I applied reputation systems in practice in industry, thus somewhat losing touch with formal-methods work.

Marco: I’ve kept working on games since the award-winning paper, in one way or another. The closest I’ve come to the timed game setting has been with controller synthesis games for hybrid automata. In a series of papers, we had fun designing and implementing symbolic algorithms that manipulate polyhedra to compute the winning region of a linear hybrid game. The experience gained on timed games helped me recognize the many subtleties arising in games played in real time on a continuous state-space.

Mariëlle: I have been working on games for test-case generation: one player represents the tester, who chooses inputs to test; the other player represents the System-under-Test, and chooses the outputs of the system. Strategy synthesis algorithms can then compute strategies for the tester that maximize all kinds of objectives, e.g., reaching certain states, achieving test coverage, etc. 

A result that I really like is that we were able to show a very close correspondence between the existing testing frameworks and game theoretic frameworks: Specifications act as game arenas; test cases are exactly game strategies, and the conformance relation used in testing (namely ioco) coincides with game refinement (i.e. alternating refinement). 

Rupak: Interestingly, the first paper on games I read was the one by Maler, Pnueli and Sifakis (STACS 95), which had both fixpoint algorithms and timed games (without “surprise”). So the problem of symbolic solutions to games and their applications in synthesis followed me throughout my career. I moved on to finding controllers for games with more general (non-linear) dynamics, where we worked on abstraction techniques. We also found some new ways to look at restricted classes of adversaries. I was always fortunate to have very good collaborators who kept my interest alive with new insights. Very recently, I have become interested in games from a more economic perspective, where players can try to signal or persuade each other about private information, but it’s too early to tell where this will lead.

Luca A. and Mickael: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

Mariëlle: Throughout my academic life, I have been working on stochastic analysis --- with Luca and Marco, we worked a lot on stochastic games. First only on theory, but later also on industrial applications, especially in the railroad and high-tech domains. At some point in time, I realized that my work was actually centred around analysing failure probabilities and risk. That is how I moved into risk analysis; the official title of the chair I hold is Risk Management for High Tech Systems. 

The nice thing is: this sells much better than Formal Methods! Almost nobody knows what Formal Methods are, and if they know, people think “yes, those difficult people who urge us to specify everything mathematically”. For risk management, this is completely different: everybody understands that this is an important area.

Luca: I am currently working on computational ecology, on ML for networks, and on fairness in data and ML.  In computational ecology, we are working on the role of habitat and territory for species viability. We use ML techniques to write “differentiable algorithms”, where we can compute the effect of each input – such as the kind of vegetation in each square-kilometer of territory – on the output.  If all goes well, this will enable us to efficiently compute which regions should be prioritized for protection and habitat conservation.

In networks, we have been able to show that reinforcement learning can yield tremendous throughput gains in wireless protocols, and we are now starting to work on routing and congestion control.

And in fairness and ML, we have worked on the automatic detection of anomalous data subgroups (something that can be useful in model diagnostics), and we are now working on the spontaneous inception of discriminatory behavior in agent systems.

While these do not really constitute a coherent research effort, I can certainly say that I am having a grand tour of CS – the kind of joy ride one can afford with tenure!

Rupak: I have veered between practical and theoretical problems. I am working on charting the decidability frontier for infinite-state model checking problems (most recently, for asynchronous programs and context-bounded reachability). I am also working on applying formal methods to the world of cyber-physical systems --- mostly games and synthesis. Finally, I have become very interested in applying formal methods to large-scale industrial systems through a collaboration with Amazon Web Services. There is still a large gap between what is theoretically understood and what is practically applicable to these systems, and the problems are a mix of technical and social.

Luca A. and Mickael: You have a very strong track record in developing theoretical results and in applying them to real-life problems. In our, admittedly biased, opinion, your work exemplifies Ben Schneiderman's Twin-Win Model, which propounds the pursuit of "the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination." Could you say a few words on your research philosophy? How do you see the interplay between basic and applied research?

Luca: This is very kind of you to say, and a bit funny to hear, because when I was young I certainly had a particular talent for getting lost in useless theoretical problems.  

I think two things played in my favor.  One is that I am curious.  The other is that I have a practical streak: I still love writing code and tinkering with “things”, from IoT to biology to the web and more.  This tinkering was at the basis of many of the works I did.  My work on reputation systems started when I created a wiki on cooking; people were vandalizing it, and I started to think about game theory and incentives for collaboration, which led to my writing much of the code for Wikipedia analysis and, at Google, for Maps edits analysis.  My work on networks started with me tinkering with simple reinforcement-learning schemes that might work, and writing the actual code. On the flip side, my curiosity too often had the better of me, so that I have been unable to pay continuous and devoted attention to a single research field.  I am not a specialist in any single thing I do or have done.  I am always learning the ropes of something I don’t quite know yet how to do.

My applied streak probably gave me some insight into which problems might be of more practical relevance, and my frequent field changes have allowed me to bring new perspectives to old problems.  There were not many people using RL for wireless networks, and there are not many who write ML and GPU code and also avidly read about conservation biology.

Rupak: I must say that Tom and Luca were very strong influences on me in my research: both in problem selection and in appreciating the joy of research. I remember one comment of Tom’s, paraphrased as “Life is short. We should write papers that get read.” I spent countless hours in Luca’s office and learnt a lot about research, coffee, the ideal way to make pasta, and so on.

Marco: It was an absolute privilege to be part of the group that wrote that paper (my 4th overall, according to DBLP). I’d like to thank my coauthors, and Luca in particular, for guiding me during those crucially formative years.

Mariëlle: I fully agree!

Luca A. and Mickael: Several of you have high-profile leadership roles at your institutions. What advice would you give to a colleague who is about to take up the role of department chair, director of a research centre, dean or president of a university? How can one build a strong research culture, stay research active and live to tell the tale?

Luca: My colleagues may have better advice; my productivity certainly decreased when I was department chair, and is lower even now that I am the vice-chair.

When I was young, I was ambitious enough to think that my scientific work would have the largest impact among the things I was doing.  But I soon realized that some of the greatest impact was on others: on my collaborators, on the students I advised, who went on to build great careers and stayed friends, and on all the students I was teaching.  This awareness serves to motivate and guide me in my administrative work. The CS department at UCSC is one of the ten largest in the number of students we graduate, and the time I spend on improving its organization and the quality of the education it delivers is surely very impactful.  My advice to colleagues is to consider their service not as an impediment to research, but as one of the most impactful things they do.

My way of staying alive is to fence off some days that I only dedicate to research (aside from some unavoidable emergency), and also, to have collaborators that give me such joy in working together that they brighten and energize my whole day. 

Luca A. and Mickael: Finally, what advice would you give to a young researcher who is keen to start working on topics related to concurrency theory today?

Luca: Oh that sounds very interesting!  And, may I show you this very interesting thing we are doing in Jax to model bird dispersal? We feed in this climate and vegetation data, and then we…

Just kidding.  Just kidding.  If I come to CONCUR I promise not to lead any of the concurrency yearlings astray.  At least I will try.

My main advice would be this: work on principles that allow correct-by-design development.  If you look at programming languages and software engineering, the progress in software productivity has not happened because people have become better at writing and debugging code written in machine language or C. It has happened because of the development of languages and software principles that make it easier to build large systems that are correct by construction.

We need the same kind of principles, (modeling) languages, and ideas to build correct concurrent systems.  Verification alone is not enough. Work on design tools, ideas to guide design, and design languages.

Tom: In concurrency theory we define formalisms and study their properties. Most papers do the studying, not the defining: they take a formalism that was defined previously, by themselves or by someone else, and study a property of that formalism, usually to answer a question that is inspired by some practical motivation. To me, this omits the most fun part of the exercise, the defining part. The point I am trying to make is not that we need more formalisms, but that, if one wishes to study a specific question, it is best to study the question on the simplest possible formalism that exhibits exactly the features that make the question meaningful. To do this, one often has to define that formalism. In other words, the formalism should follow the question, not the other way around. This principle has served me well again and again and led to formalisms such as timed games, which try to capture the essence needed to study the power of timing in strategic games played on graphs. So my advice to a young researcher in concurrency theory is: choose your formalism wisely and don't be afraid to define it.

Rupak: Problems have different measures. Some are practically justified (“Is this practically relevant in the near future?”) and some are justified by the foundations they build (“Does this avenue provide new insights and tools?”). Different communities place different values on the two. But both kinds of work are important and one should recognize that one set of values is not universally better than the other.

Mariëlle: As Michael Jordan puts it: Just play. Have fun. Enjoy the game.

Monday, May 30, 2022

Orna Kupferman's Interview with Christel Baier, Holger Hermanns and Joost-Pieter Katoen, CONCUR 2022 ToT Award Recipients

I am delighted to post Orna Kupferman's interview with CONCUR 2022 Test-of-Time Award recipients Christel Baier, Holger Hermanns and Joost-Pieter Katoen.

Thanks to Christel, Holger and Joost-Pieter for their answers (labelled BHK in what follows) and to Orna (Q below) for conducting the interview. Enjoy and watch this space for upcoming interviews with the other award recipients!

Q: You receive the CONCUR Test-of-Time Award 2022 for your paper "Approximate symbolic model checking of continuous-time Markov chains," which appeared at CONCUR 1999. In that article, you combine three different challenges: symbolic algorithms, real-time systems, and probabilistic systems. Could you briefly explain to our readers what the main challenge in such a combination is?

BHK: The main challenge is to provide a fixed-point characterization of time-bounded reachability probabilities: the probability to reach a given target state within a given deadline. Almost all works in the field up to 1999 treated discrete-time probabilistic models and focused on "just" reachability probabilities: what is the probability to eventually end up in a given target state? This can be characterized as a unique solution of a linear equation system. The question at stake was: how to incorporate a real-valued deadline d? The main insight was
to split the problem into staying a certain amount of time, say x, in the current state and using the remaining d-x time to reach the target from the successor state. This yields a system of Volterra integral equations; indeed, time-bounded reachability probabilities are the unique solutions of such equation systems. In the CONCUR'99 paper we suggested using symbolic data structures to do the numerical integration; later we found out that much more efficient techniques can be applied.
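
In symbols (our notation, not necessarily the paper's exact formulation): writing R(s,s') for the rate from s to s', E(s) for the exit rate of s, B for the target set, and F(s,d) for the probability of reaching B from s within deadline d, the unbounded problem is a linear equation system, while the time-bounded one is a Volterra integral equation system:

```latex
% Unbounded reachability: a linear equation system
x_s = 1 \;\; (s \in B), \qquad
x_s = \sum_{s'} \frac{R(s,s')}{E(s)}\, x_{s'} \;\; (s \notin B)

% Time-bounded reachability: a Volterra integral equation system,
% splitting on the time x spent in the current state s
F(s,d) = 1 \;\; (s \in B), \qquad
F(s,d) = \int_0^{d} \sum_{s'} R(s,s')\, e^{-E(s)\,x}\, F(s',\, d-x)\, \mathrm{d}x \;\; (s \notin B)
```

The integrand is the density of leaving s for s' after exactly x time units, multiplied by the probability of reaching B from s' within the remaining d-x time units, mirroring the splitting argument described above.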

Q: Could you tell us how you started your collaboration on the award-winning paper? In particular, as the paper combines three different challenges, is it the case that each of you has brought to the research different expertise?

BHK: Christel and Joost-Pieter were both in Birmingham, where a meeting of a
collaboration project between German and British research groups on stochastic systems and process algebra took place. There the first ideas of model checking continuous-time Markov chains arose, especially for time-bounded reachability: stochastic process algebras provided the means to model CTMCs in a compositional manner, but verification support was lacking. Back in Germany, Holger suggested including a steady-state operator, the counterpart of the transient properties that can be expressed using timed reachability probabilities. We then also developed the symbolic data structure to support the verification of the entire logic.

Q: Your contribution included a generalization of BDDs (binary decision diagrams) to MTBDDs (multi-terminal binary decision diagrams), which allow both Boolean and real-valued variables. What do you think about the current state of symbolic algorithms, in particular the choice between SAT-based methods and methods based on decision diagrams?

BHK: BDD-based techniques entered probabilistic model checking in the mid-1990s for discrete-time models such as Markov chains. Our paper was one of the first, perhaps even the first, that proposed to use BDD structures for real-time stochastic processes. Nowadays, SAT and, in particular, SMT-based techniques belong to the standard machinery in probabilistic model checking. SMT techniques are used, e.g., in bisimulation minimization at the language level, counterexample generation, and parameter synthesis. This includes both linear and non-linear theories. BDD techniques are still used, mostly in combination with sparse representations, but it is fair to say that SMT is becoming more and more relevant.

Q: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you'd like to see solved?

BHK: This depends a bit on whom you ask! Christel's recent work is about cause-effect reasoning and notions of responsibility in the verification context. This ties into the research interests of Holger, who looks at the foundations of perspicuous software systems. This research is rooted in the observation that the explosion of opportunities for software-driven innovations comes with an implosion of human opportunities and capabilities to understand and control these innovations. Joost-Pieter focuses on pushing the borders of automation in weakest-precondition reasoning about probabilistic programs. This involves loop-invariant synthesis, probabilistic termination proofs, the development of deductive verifiers, and so forth. Challenges include coming up with good techniques for synthesizing quantitative loop invariants, or even complete probabilistic programs.

Q: What advice would you give to a young researcher who is keen to start working on topics related to symbolic algorithms, real-time systems, and probabilistic systems?

BHK: Try to keep it smart and simple.

Friday, May 06, 2022

HALG 2022: Call for participation

I am posting this call for participation on behalf of Keren Censor-Hillel, PC chair for HALG 2022. I expect that many colleagues from the Track A community will attend that event and enjoy its mouth-watering scientific programme.


7th Highlights of Algorithms conference (HALG 2022)
The London School of Economics and Political Science, June 1-3, 2022

The Highlights of Algorithms conference is a forum for presenting the highlights of recent developments in algorithms and for discussing potential further advances in this area. The conference will provide a broad picture of the latest research in algorithms through a series of invited talks, as well as the possibility for all researchers and students to present their recent results through a series of short talks and poster presentations. Attending the Highlights of Algorithms conference will also be an opportunity for networking and meeting leading researchers in algorithms.

For local information, visa information, or information about registration, please contact Tugkan Batu (t.batu@lse.ac.uk).


A detailed schedule and a list of all accepted short contributions are available at https://www.lse.ac.uk/HALG-2022/programme/Programme



Early registration (by 20th May 2022)

Students: £100
Non-students: £150

Late registration (from 21st May 2022)
Students: £175
Non-students: £225

Registration includes the lunches provided, coffee breaks, and the conference reception.

There are some funds from conference sponsors to subsidise student registration fees. Students can apply for a fee waiver by sending an email to Enfale Farooq (e.farooq@lse.ac.uk) by 15th May 2022. Those students presenting a contributed talk will be given priority in allocation of these funds. The applicants will be notified of the outcome by 17th May 2022.


Survey speakers:

Amir Abboud (Weizmann Institute of Science) 
Julia Chuzhoy (Toyota Technological Institute at Chicago)
Martin Grohe (RWTH Aachen University)
Anna Karlin (University of Washington)
Richard Peng (Georgia Institute of Technology)
Thatchaphol Saranurak (University of Michigan)

Invited talks:

Peyman Afshani (Aarhus University)  
Soheil Behnezhad (Stanford University)  
Sayan Bhattacharya (University of Warwick)
Guy Blelloch (Carnegie Mellon University)
Greg Bodwin (University of Michigan)
Mahsa Eftekhari (University of California, Davis)
John Kallaugher (Sandia National Laboratories)
William Kuszmaul (Massachusetts Institute of Technology)
Jason Li (Carnegie Mellon University)
Joseph Mitchell (SUNY, Stony Brook)
Shay Moran (Technion)
Merav Parter (Weizmann Institute of Science)
Aviad Rubinstein (Stanford University)
Rahul Savani (University of Liverpool)
Mehtaab Sawhney (Massachusetts Institute of Technology)
Jakub Tetek (University of Copenhagen)
Vera Traub (ETH Zurich)
Jan Vondrak (Stanford University)
Yelena Yuditsky (Université Libre de Bruxelles) 

Saturday, March 12, 2022

FOCS 2021 Test-of-Time Award winners (and one deserving paper that missed out)

As members of the TCS community will most likely know, FOCS established Test-of-Time Awards from its 2021 edition to celebrate contributions published at that conference 30, 20 and 10 years before. The first list of selected winners is, as one might have expected, stellar:

  • Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, Mario Szegedy:
    Approximating Clique is Almost NP-Complete.
    FOCS 1991
  • David Zuckerman:
    Simulating BPP Using a General Weak Random Source.
    FOCS 1991
  • Serge A. Plotkin, David B. Shmoys, Éva Tardos:
    Fast Approximation Algorithms for Fractional Packing and Covering Problems.
    FOCS 1991
  • Ran Canetti:
    Universally Composable Security: A New Paradigm for Cryptographic Protocols.
    FOCS 2001
  • Boaz Barak:
    How to Go Beyond the Black-Box Simulation Barrier.
    FOCS 2001
  • Amit Chakrabarti, Yaoyun Shi, Anthony Wirth, Andrew Chi-Chih Yao:
    Informational Complexity and the Direct Sum Problem for Simultaneous Message Complexity.
    FOCS 2001
  • Zvika Brakerski, Vinod Vaikuntanathan:
    Efficient Fully Homomorphic Encryption from (Standard) LWE.
    FOCS 2011

FWIW, I offer my belated congratulations to all the award recipients, whose work has had, and continues to have, a profound influence on the "Volume A" TCS community. 

Apart from celebrating their achievement, the purpose of this post is to highlight a paper from FOCS 1991 that missed out on the Test-of-Time Award, but that, IMHO, would have fully deserved it. 

I am fully aware that the number of deserving papers/scientists is typically larger, if not much larger, than the number of available awards. Awards are a scarce resource! My goal with this post is simply to remind our community (and especially its younger members) of a seminal contribution that they might want to read or re-read.

The paper in question is "Tree automata, mu-calculus and determinacy" by Allen Emerson and Charanjit S. Jutla, which appeared at FOCS 1991. (Emerson shared the 2007 A.M. Turing Award for the invention of model checking, and Jutla went on to do path-breaking work in cryptography.) That paper is absolutely fundamental for the mu-calculus, but also for automata theory, and verification in general. It introduced many ideas and results that became the basis for extensive research.

As a first contribution, the article introduced parity games and proved their fundamental properties. The parity condition was a missing link in automata theory on infinite objects. It made the whole theory much simpler than that proposed in earlier work. Technically, the parity condition is both universal and positional. Universal means that tree automata with parity conditions are as expressive as those with Rabin or Muller conditions. Positional means that, in the acceptance game, if a player has a winning strategy then she has one depending only on the current position and not on the history of the play so far. This is a huge technical advance for all automata-theoretic constructions and for the analysis of infinite-duration games. It allows one, for instance, to avoid the complicated arguments of Gurevich and Harrington in their seminal STOC 1982 article, which were already a huge simplification of Rabin's original argument from 1969 proving the decidability of the monadic second-order theory of the infinite binary tree and much more. In passing, let me remark that Rabin has gone on record saying that "I consider this to be the most difficult research I have ever done." See this interview in CACM. 

The second main contribution of that paper is the discovery of the relation between parity games and the mu-calculus. The authors show how a mu-calculus model-checking problem can be reduced to solving a parity game and, conversely, how the set of winning positions in a parity game can be described by a mu-calculus formula. This result is the birth of the "model-checking via games" approach. It also shows that determining the winner of a parity game is in both NP and co-NP. As a corollary, the model-checking problem is as complex as solving games. It is still not known whether the problem is in PTIME. A recent advance from STOC'17 gives a quasi-polynomial-time algorithm. (See this blog post for a discussion of that result, which received the STOC 2017 best paper award and was immediately followed up by a flurry of related papers.) 
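
Positional determinacy is also what makes the simple recursive algorithms for solving parity games work. As an illustration (this is the classic recursive procedure commonly attributed to McNaughton and Zielonka, not Emerson and Jutla's own construction), here is a minimal Python solver computing the positional winning regions of the two players:

```python
def attractor(nodes, edges, owner, target, player):
    """States from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [w for w in edges[v] if w in nodes]
            if owner[v] == player:
                can_enter = any(w in attr for w in succs)
            else:
                can_enter = bool(succs) and all(w in attr for w in succs)
            if can_enter:
                attr.add(v)
                changed = True
    return attr


def zielonka(nodes, edges, owner, priority):
    """Winning regions (W0, W1): player 0 wins a play iff the highest
    priority seen infinitely often is even."""
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                                  # player favored by priority d
    top = {v for v in nodes if priority[v] == d}
    a = attractor(nodes, edges, owner, top, p)
    w = zielonka(nodes - a, edges, owner, priority)
    if not w[1 - p]:                           # opponent wins nowhere below
        return (nodes, set()) if p == 0 else (set(), nodes)
    b = attractor(nodes, edges, owner, w[1 - p], 1 - p)
    w2 = zielonka(nodes - b, edges, owner, priority)
    if p == 0:
        return w2[0], w2[1] | b
    return w2[0] | b, w2[1]


# Three-state example: player 1 wins from 0 and 1 by forcing the odd
# cycle 0 -> 1 -> 0; player 0 wins from 2 (even self-loop).
edges = {0: [1], 1: [0, 2], 2: [2]}
owner = {0: 0, 1: 1, 2: 0}
priority = {0: 1, 1: 0, 2: 2}
w0, w1 = zielonka({0, 1, 2}, edges, owner, priority)
# w0 == {2}, w1 == {0, 1}
```

The recursion peels off one priority level per call and runs in exponential time in the worst case, consistent with the fact that a polynomial-time algorithm is still not known.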

Finally, the paper also shows how to prove Rabin's complementation lemma, the most difficult step in his celebrated aforementioned decidability result, with the help of parity conditions. The proof is radically simpler than previous approaches. The paper presents this contribution most prominently, but actually the conceptual and technical contributions appearing later in the paper turned out to be the most important ones for the community. 

Overall, the above-mentioned paper by Emerson and Jutla is a truly seminal contribution that has stood the test of time, has sown the seeds for much research over the last thirty years (as partly witnessed by the over 1,130 citations it has received so far) and is still stimulating advances at the cutting edge of theoretical computer science that bridge the Volume A-Volume B divide. 

I encourage everyone to read it!

Acknowledgement: I have learnt much of the content of this post from Igor Walukiewicz. The responsibility for any infelicity is mine alone.

Sunday, March 06, 2022

HALG 2022: Call For Submissions of Short Contributed Presentations

On behalf of Keren Censor-Hillel, PC chair for HALG 2022, I am happy to post the call for submissions of short contributed presentations for that event. I encourage all members of the algorithms community to submit their contributed presentations. HALG has rapidly become a meeting point for that community in a relaxed workshop-style setting.

The 7th Highlights of Algorithms conference (HALG 2022) 

London, June 1-3, 2022 


The Highlights of Algorithms conference is a forum for presenting the highlights of recent developments in algorithms and for discussing potential further advances in this area. The conference will provide a broad picture of the latest research in algorithms through a series of invited talks, as well as the possibility for all researchers and students to present their recent results through a series of short talks and poster presentations. Attending the Highlights of Algorithms conference will also be an opportunity for networking and meeting leading researchers in algorithms.  

Call For Submissions of Short Contributed Presentations: The HALG 2022 conference seeks submissions for contributed presentations. Each presentation is expected to consist of a poster and a short talk (serving as an invitation to the poster). There will be no conference proceedings, hence presenting work already published at a different venue or journal (or to be submitted there) is welcome. 

If you would like to present your results at HALG 2022, please submit the details — the abstract, the paper and the speaker of the talk — via EasyChair: https://easychair.org/conferences/?conf=halg2022 

The abstract should include (when relevant) information on where the results have been published/accepted (e.g., at a conference), and where they are publicly available (e.g., on arXiv). All submissions will be reviewed by the program committee, giving priority to work accepted or published in 2021 or later. 

Submissions deadline: March 27, 2022. 

Acceptance/rejection notifications will be sent in early April.

Friday, March 04, 2022

Combinatorial Exploration: An algorithmic framework to automate the proof of results in combinatorics

Are you interested in combinatorial mathematics? If so, I am happy to welcome you to the future! 

I am proud to share with you a new, 99-page preprint by three colleagues from my department (Christian Bean, Émile Nadeau and Henning Ulfarsson) and three of their collaborators (Michael Albert, Anders Claesson and Jay Pantone) that is the result of years of work that led to the birth of Combinatorial Exploration. Combinatorial Exploration is an algorithmic framework that can prove results that have so far required the ingenuity of human combinatorialists. More specifically, it can study, at the press of a button, the structure of combinatorial objects and derive their counting sequences and generating functions. The applicability and power of Combinatorial Exploration are witnessed by the applications to the domain of permutation patterns given in the paper. My colleagues use it to re-derive hundreds of results in the literature in a uniform manner and to prove many new ones. See the Permutation Pattern Avoidance Library (PermPAL) and Section 2.4 of the article for a more comprehensive list of notable results. The paper also gives three additional proofs-of-concept, showing examples of how Combinatorial Exploration can prove results in the domains of alternating sign matrices, polyominoes, and set partitions. Last, but by no means least, the GitHub repository at https://github.com/PermutaTriangle/comb_spec_searcher contains the open-source Python framework for Combinatorial Exploration, and the one at https://github.com/PermutaTriangle/Tilings contains the code needed to apply it to the field of permutation patterns. 

 "Det er svært at spå, især om fremtiden" ("It is difficult to make predictions, especially about the future"), as Niels Bohr and Piet Hein famously said. However, let me stick my neck out and predict that this work will have substantial impact and will be the basis for exciting future work. 

 Congratulations to my colleagues! With my department chair hat on, I am very proud to see work of this quality stem from my department and humbled by what my colleagues have achieved already. As an interested observer, I am very excited to see what their algorithms will be able to prove in the future. For the moment, let's all enjoy what they have done already.