Tuesday, May 31, 2016

Scott Smolka's advice to the young theoretical computer scientist

As the fourth installment of the series in which Fellows of the EATCS provide their advice to the budding TCS researcher, I am posting the advice from Scott Smolka. Enjoy!

Advice I would give to a student interested in TCS

Not surprisingly, it all starts with the basics: automata theory, formal languages, algorithms, complexity theory, programming languages and semantics.

Advice I would give a young researcher in TCS

Go to conferences and establish connections with more established TCS researchers. Seek to work with them and see if you can arrange visits to their home institutions for a few months.

A short description of a research topic that excites me at this moment in time (and possibly why)

Bird flocking and V-formation are topics I find very exciting. Previous approaches to this problem focused on models of dynamic behavior based on simple rules such as: Separation (avoid crowding neighbors), Alignment (steer towards average heading of neighbors), and Cohesion (steer towards average position of neighbors). My collaborators and I are instead treating this as a problem of Optimal Control, where the fitness function takes into account Velocity Matching (alignment), Upwash Benefit (birds in a flock moving into the upwash region of the bird(s) in front of them), and Clear View (birds in the flock having unobstructed views). What’s interesting about this problem is that it is inherently distributed in nature (a bird can only communicate with its nearest neighbors), and one can argue that our approach more closely mimics the neurological process birds use to achieve these formations.
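(Editor's note: to make the shape of such a fitness function concrete, here is a minimal illustrative sketch in Python. The weights, the geometry of the upwash region and the viewing cone are invented for illustration; this is not the model from Scott's papers.)

    import numpy as np

    def velocity_matching_cost(vel):
        # Penalize deviation of each bird's velocity from the flock average.
        avg = vel.mean(axis=0)
        return float(np.sum(np.linalg.norm(vel - avg, axis=1) ** 2))

    def upwash_benefit(pos, lateral=1.0, tol=0.3):
        # Reward a bird positioned behind and laterally offset from another
        # bird, roughly where the leader's upwash region would be.
        benefit = 0.0
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    dx, dy = pos[i] - pos[j]
                    if dx < 0 and abs(abs(dy) - lateral) < tol:
                        benefit += 1.0
        return benefit

    def clear_view_cost(pos, half_angle=0.3):
        # Penalize a bird whose forward cone of vision contains another bird.
        cost = 0.0
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    dx, dy = pos[j] - pos[i]
                    if dx > 0 and abs(np.arctan2(dy, dx)) < half_angle:
                        cost += 1.0
        return cost

    def fitness(pos, vel, w=(1.0, 1.0, 1.0)):
        # pos, vel: arrays of shape (n, 2). Lower is better; an optimal
        # controller would choose accelerations minimizing this cost over a
        # horizon, with each bird seeing only its nearest neighbours.
        return (w[0] * velocity_matching_cost(vel)
                - w[1] * upwash_benefit(pos)
                + w[2] * clear_view_cost(pos))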

Thursday, May 26, 2016

Sad news: Zoltán Ésik passed away yesterday

Our good colleague and friend Zoltán Ésik passed away suddenly yesterday afternoon in the hotel room where he was staying with his wife during a visit to our group at Reykjavik University. He had delivered a survey talk at Reykjavik University on the Equational Logic of Fixed Point Operations on Tuesday and we were making plans for the coming days.


Zoltán was a scientist of the highest calibre; he was one of the prime movers in the monumental development of Iteration Theories, amongst many other achievements. He had served the EATCS as a member of its Council for many years, had recently been named as one of the 2016 EATCS Fellows and he had been a member of the Presburger Award Committee for the last two years.

There is much more to be said about his scientific achievements and warm personality, but this will have to wait for better times.

R.I.P. Zoltán. 

Wednesday, May 25, 2016

Giuseppe Italiano's advice to young researchers in TCS

As the third installment of the series in which Fellows of the EATCS provide their advice to the budding TCS researcher, I am posting the advice from Giuseppe Italiano. Happy reading!

The advice I would give to a student interested in TCS

There’s a great quote by Thomas Huxley: “Try to learn something about everything and everything about something.” When working through your PhD, you might end up focusing on a narrow topic so that you will fully understand it. That’s really great! But one of the wonderful things about Theoretical Computer Science is that you will still have the opportunity to learn the big picture!

The advice I would give a young researcher in TCS

Keep working on the problems you love, but don’t be afraid to learn things outside of your own area. One good way to learn things outside your area is to attend talks (and even conferences) outside your research interests. You should always do that!

A short description of a research topic that excites me at this moment in time (and possibly why)

I am really excited by recent results on conditional lower bounds, sparked by the work of Virginia Vassilevska Williams et al. It is fascinating to see how a computational complexity conjecture such as SETH (Strong Exponential Time Hypothesis) had such an impact on the hardness results for many well-known basic problems. (Editor's note: You might be interested in the slides for the presentations given at this workshop that was co-located with STOC 2015.)

Tuesday, May 24, 2016

David Harel's advice to the young theoretical computer scientist


As the second installment of the series in which Fellows of the EATCS provide their advice to the budding TCS researcher, I am posting the advice from David Harel. Enjoy!

Advice I would give to a student interested in TCS:

If you are already enrolled in a computer science program, then unless you feel you are of absolutely stellar theoretical quality and the real world and its problems do not attract you at all, I’d recommend that you spend at least 2/3 of your course efforts on a variety of topics related to TCS but not “theory for the sake of theory”. Take lots of courses on languages, verification, AI, databases, systems, hardware, etc. But clearly don’t shy away from pure mathematics. Being well-versed in a variety of topics in mathematics can only do you good in the future. If you are still able to choose a study program, go for a combination: TCS combined with software and systems engineering, for example, or bioinformatics/systems biology. I feel that computer science (not just programming, but the deep underlying ideas of CS and systems) will play a role in the science of the 21st century (which will be the century of the life sciences) similar to that played by mathematics in the science of the 20th century (which was the century of the physical sciences).


Advice I would give a young researcher in TCS:

Much of the above is relevant to young researchers too. Here I would add the following two things. First, if you are doing pure theory, then spend at least 1/3 of your time on problems that are simpler than the real hard one you are trying to solve. You might indeed succeed in settling the P=NP? problem, or the question of whether PTIME on general finite structures is r.e., but you might not. Nevertheless, in the latter case you’ll at least have all kinds of excellent, if less spectacular, results under your belt. Second, if you are doing research that is expected to be of some practical value, go talk to the actual people “out there”: engineers, programmers, system designers, etc. Consult for them, or just sit with them and see their problems first-hand. There is nothing better for good theoretical or conceptual research that may have practical value than dirtying your hands in the trenches.


A short description of a research topic that excites me at this moment in time (and possibly why):

I haven’t done any pure TCS for 25 years, although in the work my group and I do on languages and software engineering there is quite a bit of theory too, as is the case in our work on biological modeling. However, for many years, I’ve had a small but nagging itch for trying to make progress on the problem of artificial olfaction: the ability to record and remotely produce faithful renditions of arbitrary odors. This is still a far-from-solved issue, and is the holy grail of the world of olfaction. Addressing it involves chemistry, biology, psychophysics, engineering, mathematics and algorithmics (and is a great topic for young TCS researchers!). More recently, I’ve been thinking about the question of how to test the validity of a candidate olfactory reproduction system, so that we have an agreed-upon criterion of success for when such systems are developed. It is a kind of common-sense question, but one that appears to be very interesting, and not unlike Turing’s 1950 quest for testing AI, even though such systems were nowhere in sight at the time. In the present case, trying to compare testing artificial olfaction to testing the viability of sight and sound reproduction will not work, for many reasons. After struggling with this for quite a while, I now have a proposal for such a test, which is under review.

Friday, May 20, 2016

ICALP 2016 early registration deadline approaching


The organizers of ICALP 2016  have asked me to distribute the message below. I encourage you to attend the conference, which is going to be a veritable celebration of TCS research. 

The EARLY REGISTRATION period for ICALP 2016 ends on May 31, 2016.

The 43rd International Colloquium on Automata, Languages and Programming (ICALP 2016) will be held in Rome, Italy, from July 12th to 15th, 2016 (http://www.easyconferences.eu/icalp2016/index.html).

The list of ACCEPTED PAPERS is here (http://www.easyconferences.eu/icalp2016/accepted.html).

The conference's INVITED SPEAKERS are:
- Subhash Khot (New York University, USA)
- Marta Z. Kwiatkowska (Oxford University, UK)
- Xavier Leroy (INRIA, Paris, France)
- Devavrat Shah (MIT, USA)

The following 2016 AWARD RECIPIENTS will give talks during the conference:
- Steve Brookes (Carnegie Mellon, USA) and Peter O'Hearn (UCL, UK) -- Gödel Prize
- Dexter Kozen (Cornell, USA) -- EATCS Award
- Mark Braverman (Princeton, USA) -- Presburger Award


The PROGRAM of the conference is available here (http://www.easyconferences.eu/icalp2016/index.html). 

Thursday, May 19, 2016

Yuri Gurevich's advice to the young theoretical computer scientist

I have always enjoyed reading articles, interviews, blog posts and books in which top-class scientists share their experience with, and provide advice to, young researchers. In fact, despite not being young any more, alas, I feel that I invariably learn something new by reading those pieces, which, at the very least, remind me of the things that I should be doing, and that perhaps I am not doing, to uphold high standards in my job.

Based on my partiality for scientific advice and stories, it is not overly surprising that I was struck by the thought that it would be interesting to ask the EATCS Fellows for
  • the advice they would give to a student interested in TCS,
  • the advice they would give to a young researcher in TCS and
  • a short description of a research topic that excites them at this moment in time (and possibly why).
The EATCS Fellows are model citizens of the TCS community, have varied work experiences and backgrounds, and span a wide spectrum of research areas. One can learn much about our field of science and about academic life in general by reading their thoughts.

I am collecting the answers to the above-listed questions I have received from some of the current EATCS Fellows in a contribution that will appear in the June issue of the Bulletin of the EATCS. Over the next few days, as a sneak preview for that article, I'll post the advice from some of the fellows on this blog. Enjoy it!

Advice from Yuri Gurevich (Microsoft Research)

Advice I would give to a student interested in TCS

Attending math seminars (mostly in my past), I noticed a discord. Experts in areas like complex analysis or PDEs (partial differential equations) typically presume that everybody knows Fourier transforms, differential forms, etc., while logicians tend to remind the audience of basic definitions (like what’s first-order logic) and theorems (e.g. the compactness theorem). Many talented mathematicians didn’t take logic in their college years, and they need those reminders. How come? Why don’t they effortlessly internalize those definitions and theorems once and for all? This is not because those definitions and theorems are particularly hard (they are not) but because they are radically different from what they know. It is easier to learn radically different things — whether it is logic or PDEs or AI — in your student years. Open your mind and use this opportunity!


Advice I would give a young researcher in TCS

As the development of physics caused a parallel development of physics-applied mathematics, so the development of computer science and engineering causes a parallel development of theoretical computer science. TCS is an applied science. Applications justify it and give it value. I would counsel taking applications seriously and honestly. Not only immediate applications, but also applications down the line. Of course, like in mathematics, there are TCS issues of intrinsic value. And there were cases when the purest mathematics eventually was proven valuable and applied. But in most cases, potential applications not only justify research but also provide guidance of sorts. Almost any subject can be developed in innumerable ways. But which of those ways are valuable? The application guidance is indispensable.

I mentioned computer engineering above for a reason. Computer science is different from natural sciences like physics, chemistry and biology. Computers are artifacts, not “naturefacts.” Hence the importance of computer science and engineering as a single area of which computer science is an integral part.

A short description of a research topic that excites me at this moment in time (and possibly why)

Right now, the topics that excite me most are quantum mechanics and quantum computing. I wish I could say that this is the result of a natural development of my research. But this isn’t so. During my long career, I moved several times from one area to another. Typically it was natural; e.g. the theory of abstract state machines developed in academia brought me to industry. But the move to quanta was spontaneous. There was an opportunity (they started a new quantum group at the Microsoft Redmond campus a couple of years ago), and I jumped upon it. I always wanted to understand quantum theory, but occasional reading would not help, as my physics background was poor to nonexistent and I hadn’t been exposed much to the mathematics of quantum theory. In a sense I am back to being a student and discovering a new world of immense beauty and mystery, except that I do not have the luxury of having time to study things systematically. But that is fine. Life is full of challenges. That makes it interesting.


Friday, May 13, 2016

Interview with Rajeev Alur and David Dill, recipients of the 2016 Alonzo Church Award

As announced earlier today, Rajeev Alur (University of Pennsylvania, USA) and David L. Dill (Stanford University, USA) are the recipients of the 2016 Alonzo Church Award for Outstanding Contributions to Logic and Computation for their work on timed automata, a decidable model of real-time systems that combines beautiful, new and deep theory with widespread practical impact.

In order to celebrate the award of the Alonzo Church Award to this hugely influential work and to give the theoretical computer science community at large a glimpse of the history of the ideas that led to it, I interviewed Rajeev Alur (abbreviated to RA in what follows) and David Dill (referred to as DD in the text below) via email. The interview will also appear in the June issue of the Bulletin of the EATCS.

LA: You are receiving the 2016 Alonzo Church Award for Outstanding Contributions to Logic and Computation for your invention of timed automata, which, as far as I know, was the first decidable formalism for reasoning about the ubiquitous class of real-time systems.  Could you briefly describe the history of the ideas that led to the development of the formalism of timed automata, what were the main inspirations and motivations for its invention and how timed automata advanced the state of the art?

DD: The interesting part of the story for me is the inspiration that led to the question in the first place and the initial results.  Things worked out so magically that I'm still amazed, so I hope that the story will be interesting to others.  I also learned a lot of lessons about research, which I hope I can convey here.

I am not a logician and don't consider myself a theorist.  I generally want to do research that has practical impact, but I like being able to apply theory to help with that.  And, in this case, I ended up creating some theory as part of that process.

My PhD research under Ed Clarke at CMU was on using finite automata for formal verification of speed-independent circuits. I started working on them using CTL model checking, and then decided during my thesis work to abandon model checking and use finite automata (I am grateful that Ed accepted this change in direction, since his primary research agenda was CTL model checking). Speed-independent circuits are digital circuits with no clocks, which are supposed to meet their specifications for all possible gate delays. Ed had recently co-invented CTL model checking and was exploring speed-independent circuits as an application, because speed-independent circuits are among the simplest concurrent systems with bugs. But speed independence is a very conservative model, because engineers often know some constraints on delays, even if they can't specify exact values for the delays. A circuit designer (Prof. Chuck Seitz of Caltech) had some ways of designing speed-independent circuits that relied on timing constraints without precisely specified delays, and asked me whether I could prove such circuits correct.

I looked at the literature, and there were a number of people who had used finite automata with a special alphabet symbol representing clock ticks, and they would simply count the clock ticks using the states of the automaton (what was later called a discrete time model).  But I wasn't comfortable with that, because asynchronous circuits don't have clocks!  I felt that events in circuits happened in continuous time, which might be different from discrete time -- but I didn't know.

While I was working on my PhD, I sat down to try to work out a method to verify timed asynchronous circuits.  The research was painless compared to a lot of my other PhD work.  I imagined that each gate had a timer that was set to a real value between constant delay bounds, and that these timers decreased continually with time until they reached 0, at which point the gate output would change. Very quickly, I came up with an algorithm that kept track of the set of all possible timer values for various discrete states, and, magically, the analysis of the regions could be easily solved using the all-pairs shortest paths problem.  I think I just got lucky and happened to think about it the right way, so it was easy to do. (Later, I learned that Vaughan Pratt had observed that certain systems of linear inequalities could be solved using shortest paths, and that a similar algorithm was used in a kind of timed Petri nets -- but I didn't know that at the time).
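(Editor's note: what Dave describes is essentially the difference-bound-matrix representation that later became standard in timed-automata tools. Below is a minimal, hedged Python sketch of the tightening step via all-pairs shortest paths; the encoding is one common convention, not necessarily the one used in the CAV '89 paper.)

    import math

    def tighten(dbm):
        # dbm[i][j] is an upper bound on x_i - x_j, where index 0 stands for
        # the constant 0 and math.inf means "unconstrained". Floyd-Warshall
        # computes the tightest implied bounds; the constraint set is
        # unsatisfiable iff some diagonal entry becomes negative.
        n = len(dbm)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    via_k = dbm[i][k] + dbm[k][j]
                    if via_k < dbm[i][j]:
                        dbm[i][j] = via_k
        return all(dbm[i][i] >= 0 for i in range(n))

    # Example: 1 <= x1 <= 2 and x1 - x2 <= 0, with x2 >= 0 otherwise free.
    INF = math.inf
    dbm = [[0, -1, 0],      # row 0: bounds on 0 - x_j
           [2, 0, 0],       # row 1: bounds on x1 - x_j
           [INF, INF, 0]]   # row 2: bounds on x2 - x_j
    assert tighten(dbm)     # the constraints are jointly satisfiable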

I left this method out of my PhD thesis, because my thesis seemed coherent and the timed model didn't seem to mesh with the other material. Then, in my first few years at Stanford, I pulled it out again and tried to prove that it was actually sound.  Getting the details right was harder than working out the original idea.  It took several weeks. The paper eventually appeared in CAV '89.

Verification with linear time (as opposed to branching time) models seemed beautiful to me because all the problems reduced to standard closure properties and decision properties that had been solved for finite automata: closure under intersection, complementation, and projection, and the decidability of language emptiness.  Implicit in my CAV paper were closure under intersection, projection, and the decidability of emptiness for timed regular languages, but I couldn't prove closure under complementation.  I felt very strongly that timed regular languages were closed under complementation and would work the same way as conventional finite automata.

Then Rajeev Alur walked into my office.  He was a second year student who was ready to do some research, and his research advisor, Zohar Manna, was out of the country for a while.  I explained timed automata to Rajeev and asked whether he could resolve the question of whether they were closed under complementation.  Rajeev very quickly came up with a nicer definition of timed automata, with clocks that counted up and predicates, and an "untiming" construction for deciding language emptiness that had lower complexity than mine.   I remember it took him several months to prove that, in fact, timed automata were not closed under complementation and that, surprisingly, the universality problem was undecidable.  He proved several other amazing results, and then we wrote the "Timed Automata" paper.   The paper was rejected twice from conferences with pretty harsh reviews (I think FOCS was one conference -- I don't remember the other one) and was eventually accepted at ICALP.

I'm not sure why it was so poorly received.  I don't remember changing it much when we resubmitted it.  I think there is a problem when you formulate a new problem and solve it, but reviewers don't understand why the problem is important.  If the solution is a bit difficult, they look for reasons to reject the paper.  If you attack a well-known problem so everyone knows why you want to solve it, it's sometimes easier to sell the paper (but maybe it won't have as much impact). Or maybe we just had some bad luck -- everyone gets bad reviews, and I've unfortunately written some bad reviews myself.

RA: I joined Stanford University in 1987 as a graduate student primarily interested in logic and the theory of computation. Model checking, temporal logics, and automata over infinite strings were hot topics then, and extending these formalisms to reasoning about real-time systems was a natural research direction.

For me, the key starting point was Dave's wonderful paper titled "Timing assumptions and verification of finite-state concurrent systems" that appeared in CAV 1989. Dave has already explained how he arrived at the idea of difference bounds matrices that appeared originally in his paper, and I will explain two new ideas in our follow-up work aimed at generalizing the results. Dave's original paper modeled timing constraints by introducing timers each of which could be set to a value chosen nondeterministically from a bounded interval, decreased with time, and triggered an event upon hitting 0. We changed the model by introducing clock variables each of which could be reset to 0, increased with time, and could be tested on guards of transitions. In retrospect, this led to a simpler model, and more importantly, led naturally to many generalizations such as hybrid automata. The second idea was the definition of region equivalence that defines a finite quotient over the infinite set of clock values. This proved to be a versatile tool and forms the basis of many decidability results, and in particular, algorithms for model checking of branching-time temporal logics.
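(Editor's note: to give a feel for region equivalence, here is a small Python sketch written for this post rather than taken from the paper. Two clock valuations are region-equivalent when they agree on the integer parts of clocks up to the maximal constant M, on which of those clocks have zero fractional part, and on the ordering, including ties, of the remaining fractional parts; the function computes a finite signature with exactly that property.)

    import math
    from fractions import Fraction

    def region_signature(valuation, M):
        # valuation maps clock names to exact nonnegative Fractions; M is
        # the largest constant any clock is compared against in the guards.
        ints, fracs = {}, {}
        for c, v in valuation.items():
            if v > M:
                ints[c] = 'aboveM'   # beyond M, the exact value is irrelevant
            else:
                ints[c] = math.floor(v)
                fracs[c] = v - math.floor(v)
        # Which relevant clocks sit exactly on an integer ...
        zero = frozenset(c for c, f in fracs.items() if f == 0)
        # ... and the ordering (with ties) of the nonzero fractional parts.
        by_frac = {}
        for c, f in fracs.items():
            if f > 0:
                by_frac.setdefault(f, set()).add(c)
        order = tuple(frozenset(by_frac[f]) for f in sorted(by_frac))
        return (tuple(sorted(ints.items())), zero, order)

    # With M = 2, the valuations (x = 1/5, y = 13/10) and (x = 1/2, y = 8/5)
    # lie in the same region: integer parts 0 and 1, no integral clocks, and
    # in both cases x's fractional part precedes y's.
    v1 = {'x': Fraction(1, 5), 'y': Fraction(13, 10)}
    v2 = {'x': Fraction(1, 2), 'y': Fraction(8, 5)}
    assert region_signature(v1, 2) == region_signature(v2, 2)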

LA: According to Google Scholar, the journal paper for which you receive the Alonzo Church Award, which was published in 1994, has so far received over 6,500 citations, whereas the ICALP 1990 paper on which it was based has been cited over 1,250 times. A Google Scholar query also reveals that at least 135 publications citing your landmark TCS paper have more than 135 citations themselves. Moreover, the most successful and widely used tools for modelling and verification of real-time systems are based on timed automata. When did it dawn on you that you had succeeded in finding a very good model for real-time systems and one that would have a lot of impact? Did you imagine that your model would generate such a large amount of follow-up work?

DD: It will be really interesting to hear what Rajeev says about this.

At the time, I just felt happy that we had a nice theory.  I was looking around for other uses besides asynchronous circuits to which timed automata would be applicable, and, based on the name, "real-time systems" seemed like a good candidate.  But I didn't know much about the practical side of that area, and timed automata didn't seem to be directly applicable to many of the problems they worried about (there were a *lot* of papers on scheduling!).

It took a very long time before I was confident that timed automata were useful for anything, including real-time systems. So, for me, it was based on feedback from other people.  I got some grant money to work on the problem, which was a good sign.  People from control theory and real-time systems invited me to come and talk to them about it. It may be that timed model checking (which is very closely related) has had more practical impact than timed automata themselves. Other people built tools that were better than anything I created.

After a few years, the area got too hard for me. Others, especially Rajeev, were doing such a good job that I didn't feel that my efforts were needed.  So, I moved on to new topics in formal verification and asynchronous circuit design. Now I only have a fuzzy idea about the impact of timed automata, embarrassingly.

RA: The initial reception to our work was not enthusiastic. In fact, the paper was rejected twice, once from FOCS and once from STACS, before it was accepted at ICALP 1990. There was also a vigorous argument, advocated by a number of prominent researchers, that modeling time as a discrete and bounded counter was sufficient. By the mid-1990s, though, the model started gaining acceptance: in theory conferences such as CONCUR, ICALP, and LICS a steady stream of papers studying the complexity of various decision problems on timed automata and their extensions started, and a number of implementations such as KRONOS and UPPAAL were developed. However, I could never have imagined so much follow-up work, and I feel very grateful to all the researchers who have contributed to this effort.

LA: What is the result of yours on timed automata you are most proud of? And what are your favourite results amongst those achieved by others on timed automata?

DD: I'm most proud of coming up with the question and the initial (not so hard) results. To me, the question seemed so completely obvious that I couldn't believe it hadn't been answered.  I contributed to some of the later results, but Rajeev took off like a rocket and I couldn't keep up with him.  At some point, there was so much work in the area that I didn't feel I had much to add.  I know there are a lot of amazing results that I haven't studied.

I like the fact that timed automata formed a paradigm that others followed with hybrid automata and so on.  The idea of extending finite automata with other gadgets, like clocks, and analyzing the resulting state space seems to have influenced people as much as the results on timed automata.

RA: There are lots of strong and unexpected results that I like, and it is hard to choose. But since you asked, let me point out two which are closely related to our original paper. The paper by Henzinger et al. titled "Symbolic model checking of real-time systems" (LICS 1992) introduced the idea of associating invariants with states. I think this is a much cleaner way of expressing upper bounds on delays, and in fact, it is now part of the standard definition of timed automata. In our original paper, we had proved undecidability of timed language equivalence. Cerans showed in 1992 the decidability of timed bisimulation, which I did not expect and which involves a clever use of region equivalence on the product of two timed automata.

LA: Twenty-five years have passed since the invention of timed automata, and the literature on variations on timed automata, as well as logics and analysis techniques for them, is huge. Do you expect any further developments related to the theory and applications of timed automata in the coming years? What advice would you give a PhD student who is interested in working on topics related to timed automata today?

DD: I'm sorry, but I haven't actively pursued the area and I just don't know.  If I were giving advice to a younger version of me, knowing my strengths and weaknesses, I would actually advise that person to go and find a new problem that hasn't been worked on much.  Maybe you'll start a new field, and you'll get to solve the first problems in the area before it gets too hard.  What has worked for me was looking at an application problem, trying to find a clean way to formalize it, and then looking at the problems stemming from that formalization. It's an amazing fact that there are fundamental questions that haven't been addressed at all. Starting with a practical problem and then thinking about what theory would be helpful to solve it is a good way to come up with those questions.  Also, watch for cases where existing theory doesn't exactly fit.  Instead of trying to pound nails with a wrench, imagine a more appropriate, but simple, theory and invent that.

RA: I feel that theoretical problems as well as verification tools have been extensively investigated. Given that, my advice would be to first focus on an application. When one tries to apply an existing technique/tool to a real-world problem, there is invariably a mismatch and that insight can suggest a research idea. The emerging area of cyber-physical systems is a rich source of applications. For instance, formal modeling and analysis of medical devices can be a fruitful direction.

LA: You have both been involved in inter-disciplinary research with colleagues from biology and control theory. What general lessons have you learned from those experiences? What advice would you give a young researcher who'd like to pursue that kind of research?

DD: On reflection, my research in formal verification was not as collaborative as it could have been.  But in computational biology, if you don't have a wet lab, you really have to collaborate with other people!  Collaboration can be rewarding but also frustrating. My advice would be: learn as much about the other area as you can, so you can at least talk the same language as your collaborators. And prepare to invest lots of time communicating and understanding what motivates them -- and make sure they're willing to do the same; otherwise, it's a collaboration that is not going to work out.  What keeps you working together is mutual benefit.

Most collaborations don't work out.  If you have a good collaborator, be grateful.  And, sometimes, you may want to choose among research directions based on which ones have the best collaborators.

LA: David, Rajeev, many thanks for taking the time to share your views with the members of the TCS community and congratulations on receiving the first Alonzo Church Award!

Rajeev Alur and David Dill receive the 2016 Alonzo Church Award for Outstanding Contributions to Logic and Computation

I am happy to inform the readers of this blog that the 2016 Alonzo Church Award for Outstanding Contributions to Logic and Computation is given to Rajeev Alur and David Dill for their invention of timed automata, a decidable model of real-time systems, which combines a novel, elegant, deep theory with widespread practical impact. Congratulations to David and Rajeev!

David and Rajeev are honoured, in particular, for their paper:

Rajeev Alur and David Dill: A theory of timed automata. Theoretical Computer Science 126(2):183–235, 1994. Fundamental study.

That paper proposes a variation on classical finite automata to model the behaviour of real-time systems. The resulting notion of timed automata provides a strikingly simple, and yet powerful, way to annotate state-transition graphs with timing constraints using finitely many real-valued clocks. The fundamental study by Alur and Dill developed the model of timed automata as a formalism for accepting languages of timed words, studied the model from the perspective of formal language theory by considering closure properties and decision problems for the full model and some of its sub-classes, mapped the decidability boundary for the considered decision problems, and – as the main tool – introduced a fundamental abstraction, the so-called region graph, that has since found application in virtually every decidability result for models of real-time and hybrid systems, and that is at the heart of coarser abstractions that are embodied in the verification engines of several industrial-strength tools for automatic verification of real-time systems.

In the twenty-plus years since their invention, timed automata have become the standard model for the analysis of continuous-time systems, underlying thousands of papers, dozens of tools, and several textbooks. This is a remarkable achievement for any formalism. It is all the more remarkable that this achievement was accompanied by an elegant theory that was already put down, in its most essential elements, in the very first paper by Alur and Dill, where they identified timed automata as one of the cases where finite-state reasoning can be extended to infinite-state systems.

As a sign of the impact of the work by David and Rajeev, it suffices to note that, according to Google Scholar, the journal paper for which they receive the Alonzo Church Award, which was published in 1994, has so far received over 6,500 citations, whereas the ICALP 1990 paper on which it was based has been cited over 1,250 times. Moreover, a Google Scholar query also reveals that at least 135 publications citing their landmark TCS paper have more than 135 citations themselves.

Alur and Dill will receive the award at the 31st Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), which will be held on July 5-8, 2016, at Columbia University, New York City, USA.

The Alonzo Church Award for Outstanding Contributions to Logic and Computation was established in 2015 by the ACM Special Interest Group for Logic and Computation (SIGLOG), the European Association for Theoretical Computer Science (EATCS), the European Association for Computer Science Logic (EACSL), and the Kurt Gödel Society (KGS). The award is for an outstanding contribution represented by a paper or small group of papers within the past 25 years; this is the first such award. The Award Committee consisted of Catuscia Palamidessi, Gordon Plotkin, Wolfgang Thomas, and Moshe Vardi (chair).

Monday, May 09, 2016

Interview with Stephen Brookes and Peter W. O’Hearn, recipients of the 2016 Gödel Prize


[Photos: Peter O'Hearn and Steve Brookes]
As announced earlier today, Stephen Brookes (Carnegie Mellon University, USA) and Peter W. O’Hearn (Facebook and University College London, UK) are the recipients of the 2016 Gödel Prize for their invention of Concurrent Separation Logic.

In order to celebrate the award of the Gödel prize to their path-breaking work and to allow the research community in theoretical computer science at large to appreciate the origin of the ideas that led to their invention of Concurrent Separation Logic, I interviewed Steve (abbreviated to SB in what follows) and Peter (referred to as PO in the text below) via email. Here is the interview, which will appear in the June issue of the Bulletin of the EATCS. (The interview in PDF is here.) Enjoy it!

Thanks to Peter and Steve for sharing their recollections and knowledge with the theoretical-computer-science community, and congratulations on the 2016 Gödel Prize!

LA: You are receiving the Gödel Prize 2016 for your invention of Concurrent Separation Logic (CSL), which, quoting from the prize citation, is "a revolutionary advance over  previous proof systems for verifying properties of systems software,  which commonly involve both pointer manipulation and shared-memory concurrency." Could you briefly describe the history of the ideas that led to the invention of CSL, what were the main inspirations and motivations for its invention and how CSL advanced the state of the art?

PO: After John Reynolds and I and others had done the initial work on separation logic for sequential programs it made sense to consider concurrency, just based on the idea of using the logic to keep track of the separation of resources used by different processes or threads. In the summer of 2001 I devised initial proof rules to do just that, by adapting an approach of Hoare to reasoning about concurrency from Hoare logic to separation logic. 

This first step seemed straightforward enough, but then it hit me that we could use the logic to track dynamically changing partitions rather than the static partitioning in Hoare’s approach. I used a little program, the pointer-transferring buffer, to explore this idea, and I made a program proof in which the fact that something was allocated seemed to move from one process to another… during the proof. The pointer itself was copied, but the fact that it was allocated (and hence the right to dereference it) was given up by the sending process. I began talking about “knowledge” or “ownership” transferring from one process to another. The striking thing was that there was no explicit concept of ownership in the logic or proof rules, but that this transfer seemed to be encoded in the way that the proofs of the processes worked. I circulated an unpublished note on proving the pointer-transferring buffer in August of 2001, and then a longer note in January 2002; these documents got a lot of attention from people working on separation logic.

It quickly became clear that quite a few concurrent programs would have much simpler proofs than before. Modular proofs were provided of semaphore programs, of a toy memory manager, and programs with interacting resources. I got up a huge head of steam because it seemed as if the logic could explain the way that synchronisation had been used in the fundamental works on concurrent programming by Dijkstra, Hoare and Brinch Hansen. For example, in the paper that essentially founded concurrent programming, Dijkstra in 1965  (Co-operating Sequential Processes, still the most important paper in concurrency) had explained that the point of synchronisation was to enable programmers to avoid minute considerations of timing, to simplify reasoning. Brinch Hansen had hammered into me the importance of speed independence and resource separation for simplifying thinking about concurrent processes when I was his colleague at Syracuse in the 1990s, and what he said to me seemed to be mirrored in the proofs in this early concurrent separation logic. And although I had never written a paper on concurrency, I was opinionated: I thought that work in the theory of concurrency was often missing the mark because while it could describe semaphores and other synchronisation primitives, it did not explain their significance because it did not connect back to simplifying reasoning; which was their whole point. Hoare had made important steps in various points in his work, but it seemed as if we could go much further armed with the separating conjunction. 

So, years of thinking about these issues seemed poised to come together at once in this concurrent separation logic. But, for all my opinionated excitement, I ran into a blocker of a problem: I wasn’t able to prove soundness of my proof rules. And it was the very feature which gave rise to the unexpected power, the ownership or knowledge transfer, that made soundness non-obvious. I worked hard on this problem for several months in the second half of 2001 and early in 2002, and got nowhere. Reynolds, Yang, and Calcagno, all semantics experts, also looked at the problem. Finally, I admitted to myself that it was technically beyond my expertise in concurrency theory and that I needed help. Luckily, I knew where to turn. Steve Brookes and I had never worked together before, but knew one another from way back. He was the external examiner of my 1991 PhD thesis, I was well aware of his fundamental work with Hoare and Roscoe on the foundations of CSP, and he had recently produced striking results on full abstraction for shared-memory parallelism, what I thought of as the most impressive theoretical results in semantics of concurrency at the time. What is more, Steve had built up a powerful repertoire of techniques for proving properties of concurrent programming languages. So, sometime in 2002, I picked up the phone and gave him a call: “Steve, I have a problem!”.

SB: As Peter said, he called me out of the blue. I knew Peter well, having served as the external examiner on his PhD thesis, and I had kept in contact with him after he took up his first academic positions at Syracuse and at Queen Mary (University of London). We were in fairly regular email contact, but he didn’t normally phone me from England; I knew this must be important. I sat in my office at Carnegie Mellon University in Pittsburgh, intrigued and fascinated to hear his ideas and excited by the challenge he was offering me. I think he also spoke by phone at a different time to John Reynolds, whose office was right next to mine. We all agreed that the best plan was for Peter to come to Pittsburgh and give a talk, after which we would do some brainstorming.

That was how it started, from my viewpoint. Peter came to CMU in March of 2002 and gave a talk on his new logic. Peter was proposing a subtle and clever combination of key ideas from separation logic and a much earlier Owicki-Gries logic for shared-memory concurrency, itself based on ideas appearing in a classic paper of Tony Hoare (“Towards a theory of parallel programming”). Superficially the new logic was both very simple and also very perplexing. The Hoare-style logic has a simple rule for parallel composition that combines pre- and post-conditions using conventional conjunction, and a rule for conditional critical regions (essentially, binary semaphores) that makes use of “resource invariants”. The inference rules ensure that a provable program obeys what has come to be known as a “rely-guarantee” discipline: each process assumes that whenever it acquires a semaphore the relevant invariant holds, and guarantees that the invariant holds again when releasing the semaphore. However, the earlier logic is not sound for programs using pointers, because of the possibility of aliasing. On the other hand separation logic was originally formulated for reasoning about sequential programs using pointers, but lacked rules for shared-memory concurrency and (until then) it was not clear how to generalize. Peter introduced a radically simple and elegant way to combine the two: an overly simplistic summary is that he allows separation logic formulas as pre- and post-conditions (and resource invariants) and strategically replaces certain occurrences of conjunction in the Hoare-Owicki-Gries rules with the separating conjunction operator from separation logic. Of course this naive characterization glosses over some profound issues: in Peter’s approach the invariants and pre- and post-conditions are not really talking about the “global” state, but serve to specify the “footprint” of a program component, just the piece of state on which that component acts. I think this was the first time I saw use of terms such as “footprint” and “ownership transfer”, notions which now seem very intuitive but as yet had no formal semantic counterpart.
(Until this point, I think it is fair to say, semantic models for concurrent languages had typically dealt entirely with global state, and it was conventional wisdom that it was already hard enough to find a decent semantics for simple shared-memory programs, let alone try to incorporate mutable state, heap cells, allocation and deallocation.) My challenge was to develop a semantic model robust and flexible enough to handle the combination of concurrency and pointers, in which such notions as ownership transfer could be properly formalized: to build a foundation on which to establish soundness of Peter’s proposal. Furthermore, prior semantic models for concurrent languages had pretty much ignored race conditions, typically by assuming that assignments were executed atomically so that races never happen. While that worked well for “simple” shared-memory it was clearly an insufficiently sophisticated way to cope with mutable state and the more localized view of state that is so fundamental in Peter’s approach.
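(Editor's note: for readers who have not seen them, the two rules Steve describes can be rendered roughly as follows, with the usual side conditions on variables omitted; this is a textbook-style paraphrase in LaTeX, not a quotation from the papers.)

    \frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
         {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}
    \qquad
    \frac{\{(P * R_r) \land B\}\; C\; \{Q * R_r\}}
         {\{P\}\; \mathbf{with}\ r\ \mathbf{when}\ B\ \mathbf{do}\ C\; \{Q\}}

In the parallel rule the separating conjunction * says that the two threads act on disjoint portions of the heap; in the region rule the resource invariant R_r is held by the system while no thread is inside a region for r, and ownership of the heap it describes transfers to a thread on entry and back on exit.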

During this visit we basically barricaded ourselves in a room with Reynolds and Calcagno, dropping all other distractions and tossing ideas around in an attempt to lay out a groundplan, exploring the apparent benefits of the new logic while probing for limitations and possibly even counterexamples. John’s wife Mary recalls very intense discussions at the Reynolds’s household, walls covered with sticky paper, only breaking for meals. Peter enunciated what became known as the Separation Principle: “at all stages, the global heap is partitioned into disjoint parts…”, another way to articulate the idea of “ownership transfer”. Again it is easy to express these notions informally, but it turned out to be surprisingly tricky to encapsulate them semantically, and Peter was right to be cautious.

I remember marveling at the ease with which Peter was able to articulate, with simple-looking little programs like a one-place buffer, the kinds of problems that arise when concurrent threads manipulate the heap. And the logic seemed elegantly suited for reasoning about the correctness of such programs, with the decided advantage that provable programs are guaranteed to be free of data races. (This was also true of the earlier Hoare-Owicki-Gries logic, but only in the much more limited setting of “simple” shared-memory without pointers.)

The use of separation in the logic neatly embodies a kind of disciplined use of resources in programs. Yet it was by no means clear how to formalize these ideas, and Peter was quite forthright about his unwillingness to publicize the ideas until it had been demonstrated that it all made sense. He had circulated an “unpublished manuscript” with the title “Notes on separation logic for shared-variable concurrency”, dated January 2002. Meanwhile I was excited to have a challenging problem to work on, and intense interactions continued between Peter, John and me as our understanding evolved.

I had plenty of experience in developing denotational semantic models for (simple) shared-memory programs and also for communication-based languages such as CSP. Just like the earliest denotational accounts of concurrency (dating back to David Park in the late 70’s), the most widespread and generally most adaptable approach was to use traces (or sequences of actions) of some kind; I had introduced “transition traces”, built from steps that represent (finite sequences of) atomic operations by a process, with gaps allowing for state changes made by the “environment”. I think that transition trace semantics is what Peter was referring to when he mentioned full abstraction, but I should emphasize that this model was tailored specifically to simple shared-memory and I could not see an obvious way to adapt it to incorporate pointers. I did try! My more recent focus had been on “action traces”, in which steps represent atomic actions but the details concerning state (when an action is enabled, and what state change it causes) are kept apart: a process denotes a set of action traces, and one can then plug in a model of state, give an interpretation of actions as state transformers, and be able to reason rigorously about program execution. It seemed to me that action traces offered the best basis for expansion to incorporate pointers: by augmenting the “alphabet” of actions to include heap lookup, update, allocation and deallocation. It should also be quite natural to treat semaphore operations (for acquiring and releasing) as actions. It took until some time in 2003 for me to iron out the technical details and be able to explain the key definitions and results clearly enough to be ready for publication.

In retrospect I regard this period of a couple of years as probably the most exciting and stimulating sustained research in which I have been involved. I am immensely grateful to have had the opportunity to work with Peter on this project. And throughout all of this I was strongly influenced by John Reynolds, whose good taste, deep originality of thought and sheer intellectual inquisitiveness served to keep me focussed. Echoing Peter’s reluctance to publish without being sure, I always said to myself that I wouldn’t be sure until I could convince John (and Peter!). John and I had lunch together almost every day, and I camped out in his office whenever I had an idea that needed a sounding board. He prompted me to always seek clearer explanations, isolate the key concepts and make the right definitions, and strive for generality. I remember in particular that at one point, after several weeks trying to figure out how to extend action traces, I told John that I might be able to give a semantics in which (only) provable programs would be definable. My naive idea was to modify the way that parallel composition gets modelled to keep track of the preservation of resource invariants explicitly and treat any violation as a “disaster”. I recall John upbraiding me gently about this plan; in his view, one should develop a semantics that is “agnostic” about any intended logic, and instead built to express computational properties of programs in general terms. He did agree that we needed a semantics that reported the potential for race conditions, an idea that is echoed in his own work. Having found such a semantics, it ought then to be possible to show using the semantics that programs provable in the logic are indeed race-free. In essence, that’s what happened. But it didn’t happen overnight, and the path from start to end involved a few detours and dead-ends before finding a robust solution.

There was a crucial role in this played by the concept of “precise” assertion. I will hand back to Peter to make a few remarks on that.

PO: Yes, a memorable moment came when John poked his head into my office one day: “want some bad news?”. He showed me an example where the proof rule for critical regions together with the usual Hoare proof rule for using conjunction in the postcondition led to an inconsistency. This problem had nothing to do with concurrency per se, but rather was about the potential indeterminacy of the “angel” affecting ownership transfer. (We had been attempting to get a handle on ownership transfer by couching it in adversarial terms, involving an “angel” making program choices and a “demon” trying to invalidate invariants or induce race conditions.) The problem also arose in a sequential setting, in proof rules for modules that I was working on with John and Yang (which we eventually published as “Separation and Information Hiding” in POPL ’04). In any case, I quickly proposed a concept of “precise” assertion, one that unambiguously picks out an area of storage, as a way to get around the problems caused by the indeterminacy of the angel, and this concept is used in the resource invariants in concurrent separation logic.

This indicates what a subtle problem we were up against. Some experienced semantics researchers had been working on soundness of CSL and of information hiding and there lurked a problem that none of us had spotted. We were lucky that John Reynolds was not only keeping us honest by tracking our progress, but thinking deeply about these problems himself. Concurrent separation logic owes a lot to John’s brilliance.

That being said, this problem with the ownership angel was not the biggest hurdle Steve had to get over. Setting up the concurrency semantics and identifying the right properties to prove to get inductions to go through was just hard. Once he had done it, others were able to generalize and sometimes (arguably) simplify his proof method; e.g., in work of Calcagno, Yang, Gotsman, Parkinson, Vafeiadis, Nanevski, Sergey and others, and some of it drops the precision requirement (but also the conjunction rule). But, from my perspective, Steve solving this problem for the first time was difficult, and important. I like to say: he saved my logic!

LA: I noticed that you each published short versions of your papers at the same conference, CONCUR 2004, and that the long versions then appeared in the same journal issue of TCS in 2007. How did that come about?

SB: My email to Peter and John, announcing that I had found the “right” semantics and that the concurrency rules can be shown to be sound when resource invariants are chosen to be precise, dates from early June 2003. (I also showed that the rules remain sound when invariants are chosen to be “supported”, so that in any state with a sub-heap satisfying an invariant there is a unique minimal sub-heap with that property.) This was obviously very exciting and I headed over to England to spend some time in London with Peter. Peter organized informal discussion meetings (called “yak sessions” to distinguish them from full-blown seminars) and Peter and I both gave presentations there on CSL. Philippa Gardner witnessed these presentations, and she was so enthusiastic that she decided (as organizer of the CONCUR 2004 conference the following year) to invite us to give back-to-back hour-long tutorials at that meeting. That’s how the original short versions of our papers ended up at the same conference. We felt honoured to be asked, effectively, to give invited tutorial lectures on as-yet unpublished work! And thanks to Philippa for her part in encouraging this scenario.

We published the full, journal-length versions together in TCS as tributes to John on his 70th birthday. The Festschrift volume appeared as a book, under the TCS imprimatur, edited by Peter along with Olivier Danvy and Phil Wadler, in 2007. There’s an amusing story that Peter reminded me of. One day Philippa was walking with me, John and Peter, some time before the CONCUR short versions were finalized. John was berating Peter for what he perceived perhaps as dilatoriness in not submitting long versions to POPL or some other conference instead of preparing the journal-length versions; he couldn’t understand why it was taking us so long to get around to it. What he did not realize was that the plans were already afoot for his birthday festschrift, and we had already agreed to publish there. After all, what better way to acknowledge the profound influence John had on the work. Philippa leaned over to Peter and whispered that we couldn’t tell him because those (long) papers are for his “bloody festschrift”.

LA:  In your opinion and experience, how important are modularity and compositionality in proving properties of programs and systems?

SB: I think the benefits of modularity — in particular, allowing more “localized” reasoning by considering each component largely in isolation from the others — have been touted right from the early days. For example Dijkstra was in favor of “loose coupling”, and argued that the art of parallel program design was to ensure that processes can be regarded almost as independent, except for the (ideally small number of) places where they synchronize. The claim was, and continues to be, that this would improve our ability to manage the sheer complexity caused by interactions between concurrent threads. For similar reasons, and dating back to Hoare logic (1969!), we seek compositional proof systems for proving program properties. In a compositional proof system one can derive correctness properties of a program by reasoning about program components individually, and the inference rules show how the properties of components determine the behaviour of the whole program. In short, compositional means “syntax-directed”, and again a major desire is to exploit syntax-directed analysis to tame the combinatorial explosion. But to design a compositional logic for a specific programming language, for use in establishing a specific class of program behaviour, you need to start with a suitably chosen assertion language — one for which compositional reasoning like this is even possible. This is particularly difficult for concurrent programs, since it has long been known that Hoare-style partial correctness assertions about a multi-threaded program cannot be deduced simply on the basis of partial correctness properties of individual threads. (A partial correctness assertion of the form {p} c {q} says that every terminating execution of c from an initial state satisfying p ends in a state satisfying q.)

In the early Hoare-style logics for shared-memory programs (Hoare-Owicki-Gries, as mentioned earlier) assertions look just like conventional partial correctness but are interpreted semantically as expressing a much more sophisticated property, allowing for the program to be running in an “environment” of other processes. It has become common to describe this interpretation in terms of “rely/guarantee”. I think one of the key ingredients in my semantic model is that it allows formalization of the kind of ownership transfer discipline inherent in Peter’s inference rules. Turning this on its head, you could say that the semantic model supports compositional reasoning about programs whose correctness is justified by appeal to Peter’s separation principle and the ownership discipline.
    
 I’ll let Peter step in here too, as I’m sure he feels strongly about compositionality as a virtue. Maybe he can say something about monitors as well, which I know were a strong motivating factor in how he came up with the inference rules.

PO: Generally speaking, when proving a program it is possible in principle to construct a global proof, one that talks about the global state of the system. But global proofs tend to be much harder to maintain when the program is changed. And programs are constantly being changed in the real world: the world won’t accept prove-it-and-forget-it proof efforts; verification needs to be active and move with the programmers. This, even more than efficiency considerations in constructing proofs at the outset, is the strongest reason for wanting modularity.

Separation logic has just provided a theory that often matches the intuitive modularity that comes up in data-structure designs. The degree of modularity in the proofs that have been done has been surprising. For instance, when I was thinking about proof rules for CSL my first idea was to axiomatize monitors (class-like abstractions for concurrency due to Brinch Hansen and Hoare), because I thought that low-level primitives like semaphores were too unstructured and that modular proofs would be impossible. Of course I knew that monitors could simulate semaphores, but I didn’t expect to find nice proofs of semaphore programs. It therefore came as a bit of a shock when I was able to provide very local, independent reasoning about semaphores in some nontrivial examples.
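The flavour of such proofs can be conveyed by the critical-region rule, sketched here in a simplified form in the spirit of the original rules (side conditions omitted): each resource r carries a resource invariant R_r, which a process borrows on entry to a region and must re-establish on exit, so ownership of assertions transfers between processes and resources.

```latex
% Simplified critical-region rule: R_r is the resource invariant for
% resource r. Inside the region the process may use R_r; on exit it
% must give R_r back, so ownership transfers in and out.
\frac{\{(p * R_r) \land b\}\; c \;\{q * R_r\}}
     {\{p\}\; \mathtt{with}\ r\ \mathtt{when}\ b\ \mathtt{do}\ c \;\{q\}}
```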

But, while modularity is important, you should be careful not to take it too far. For instance, sometimes multiple resources participate together in the implementation of a data abstraction, like the use of several locks in hand-over-hand locking on linked lists. Vafeiadis and Parkinson have some lovely work on RGSep, a descendant of the original concurrent separation logic, in which they show how you can describe the effects of operations in a very local way, but where the description sometimes involves two locks and a bit of a linked list, rather than only one lock; you would just be causing yourself trouble if you tried to formulate everything in terms of independent reasoning about the individual locks in this case. So I like to think that you should make your specifications and proofs as modular as is natural, but not more so; logic should not block making specs+proofs modular, but neither should it force you to shoehorn your descriptions into a fixed granularity of modularity.

Some of the work that follows on from mine and Steve’s work is very flexible in the degree of abstraction that can be given to the way that state is composed and decomposed. For instance, work of Nanevski, Sergey, Dreyer, Birkedal and others is all based on proof methods which allow the granularity of modularity or separation to be chosen, by specifying a partial commutative monoid of composition (in place of the standard separation logic monoid of heaplets and disjoint union).
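To make the idea tangible, here is a minimal sketch of a partial commutative monoid of resources, with the standard heaplet model as one instance. The trait and names are our own, chosen purely for illustration; they are not drawn from any of the papers mentioned.

```rust
use std::collections::BTreeMap;

/// A minimal sketch of a partial commutative monoid (PCM) of resources;
/// the trait and names are illustrative, not from any particular paper.
trait Pcm: Sized {
    /// The empty resource.
    fn unit() -> Self;
    /// Partial composition: `None` when the two resources are incompatible.
    fn join(&self, other: &Self) -> Option<Self>;
}

/// The "standard model": heaplets (finite maps from addresses to values)
/// composed by disjoint union.
#[derive(Clone, PartialEq, Debug)]
struct Heaplet(BTreeMap<u64, i64>);

impl Pcm for Heaplet {
    fn unit() -> Self {
        Heaplet(BTreeMap::new())
    }

    fn join(&self, other: &Self) -> Option<Self> {
        let mut combined = self.0.clone();
        for (addr, val) in &other.0 {
            // Overlapping addresses make the composition undefined.
            if combined.insert(*addr, *val).is_some() {
                return None;
            }
        }
        Some(Heaplet(combined))
    }
}
```

Non-standard models then arise by choosing a different carrier and a different join (fractional permissions, histories, and so on) while the proof rules stay the same.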

LA: What are some of the main developments since your papers appeared?

PO: First let me mention developments in theory. To me, the most surprising has been the demonstration that the most basic principles of concurrent separation logic, particularly independent reasoning about threads using the separating conjunction, cover a much broader range of situations than we ever expected. There have been proofs of fine-grained locking and non-blocking concurrency, and of cases that involve interference and general graph structures, cases that might originally have been thought of as bad ones for separation logic.

SB: We should also mention generalizations based on permissions (for example, Boyland’s account of fractional permissions). In particular, this path led to versions of CSL capable of dealing naturally with concurrent reads, and to some very elegant program proofs (due to Peter with Calcagno and Bornat) in which the correctness of the program depends on a permissive form of ownership transfer.
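The key law, in the style of Boyland’s fractional permissions, can be stated as follows (a standard formulation, not quoted from the papers): a points-to assertion carries a fraction z with 0 < z ≤ 1; full permission (z = 1) allows writing, any positive fraction allows reading, and permissions split and recombine via the separating conjunction.

```latex
% Splitting and recombining a points-to assertion via fractions:
x \overset{z_1 + z_2}{\mapsto} v
  \;\Longleftrightarrow\;
  x \overset{z_1}{\mapsto} v \,*\, x \overset{z_2}{\mapsto} v
  \qquad (z_1, z_2 > 0,\;\; z_1 + z_2 \le 1)
```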

PO: Interestingly, the unexpected power of this is based on what you might call “non-standard models” of separation logic; I mean this by analogy with the usual situation in logic, where a theory (e.g. reals, or integers) has an intended model, but then admits additional non-standard models of the same axioms. The proof theory can then accomplish unexpected things when applied to the non-standard models. The standard model of separation logic is the original model based on splitting portions of the heap, or heaplets. There are lots of other models stemming from the “resource semantics” of bunched logic invented by David Pym (based on having a partial commutative monoid of possible worlds). The surprise is that some of these non-standard models involve composing highly intertwined structures and interfering processes. Gardner coined the phrase “fiction of separation” to describe this phenomenon in the non-standard models.

For example, in their POPL’13 work on Views, Dinsdale-Young, Parkinson and colleagues show that a simple abstract version of concurrent separation logic can embed many other techniques for reasoning about concurrency, including type systems and even the classic rely-guarantee method, which was invented for the purpose of reasoning about interference. Work of Nanevski and Sergey on their Fine-Grained Concurrent Separation Logic shows how one of the sources of non-modularity in the classic Owicki-Gries approach to concurrency, the treatment of auxiliary variables, can be addressed by a suitable non-standard model of separation. They also obtain great mileage out of a model that interprets the separating conjunction in terms of a composition of histories, thus combining temporal and spatial aspects of reasoning. Finally, Hoare and others have been pursuing a very general theory of “Concurrent Kleene Algebra”, which encompasses message passing as well as shared-variable concurrency, and its associated program logic is again a very general form of CSL.

Although these works technically use a simple abstract version of CSL, they conceptually go well beyond the original, because the non-standard models have meanings so far removed from the standard models. And they represent technically significant advances as well. For instance, the Views work has a new and very flexible proof of soundness, which is needed, I think, to cover the concurrency in the non-standard models. There is a lot happening in this space, and I have left out other very good work by Birkedal, Dreyer, Raad, Feng, Shao and others; a number of competing logics are being advanced, and there is just too much good work to mention it all here. It will take some time to see these developments shake out, but already it is clear that the principles of CSL apply much more broadly than one could have guessed at the time of mine and Steve’s papers.

Second, I would like to mention that a surprising practical development has been the degree of progress in tools for mechanized verification. The first implementation of CSL actually preceded the publication of mine and Steve’s papers. Cristiano Calcagno included CSL in his earliest prototypes of Smallfoot, the first separation logic verification tool, around 2002. But, since then, many tools have appeared for verification with CSL and relatives.
Examples of the state of the art include the VeriFast tool of Jacobs et al. and the implementation in Coq of the aforementioned Fine-Grained CSL: in both tools there are examples of mechanized proofs of nontrivial concurrent programs, including hand-over-hand locking on linked lists, lock-free queues, and a concurrent spanning-tree construction. It is also possible to verify custom synchronisation primitives, such as locks implemented using compare-and-swap, and then to use the specifications of the primitives in verifications of client code without having to look at the lock internals.
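To give a sense of the kind of primitive involved, here is a minimal compare-and-swap spin lock, written in Rust for illustration; this is our own sketch, not an example taken from either tool.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// A minimal compare-and-swap spin lock, sketched for illustration.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    /// Spin until we atomically flip `locked` from false to true.
    pub fn acquire(&self) {
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    /// Release the lock; the Release ordering publishes the critical
    /// section's writes to the next acquirer.
    pub fn release(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```

A CSL-style specification of acquire and release would record the transfer of the lock’s protected resource to and from the calling thread, and client proofs would then use only that specification, never the lock internals.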


LA: What are some of the current directions of interest, or future problems, for research?

PO: One important direction in pure theory is unification: bringing all of the developments on non-standard models together under a common theory. I sense that there is good and possibly deep theoretical work to be done there.

SB: I agree. And I think the time is ripe for a systematic attempt to develop a denotational framework capable of supporting such a unification. My own recent research into the foundations of weak memory concurrency is an attempt to start in this direction, and seems like a natural generalization of the action-trace semantics that we used to formalize CSL. The main idea here is to go from traces (essentially, linearly ordered sets of actions) to a more general partial-order setting. Similar themes are also appearing in the work of others, and we are starting to see papers proposing the use of pomsets (my own work, building on early ideas of Vaughan Pratt) and event structures (originally introduced by Winskel) in semantic accounts of weak memory. I think this is exciting, and I believe it would be a valuable service to cast these developments into a uniform framework, to use such a framework to establish soundness of new CSL-style logics for weak memory, and to explore the relationships between such logics.
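As a rough illustration of the shift from traces to partial orders, here is a minimal sketch of a pomset datatype; the representation and names are our own, for illustration only. Parallel composition takes a disjoint union of events, while sequential composition additionally orders every event of the first argument before every event of the second.

```rust
/// A minimal sketch of a pomset (partially ordered multiset of actions).
#[derive(Clone, Debug)]
#[allow(dead_code)]
enum Action {
    Read(u64, i64),  // address, value read
    Write(u64, i64), // address, value written
}

#[derive(Clone, Debug)]
struct Pomset {
    events: Vec<Action>,        // event i is labelled events[i]
    order: Vec<(usize, usize)>, // (i, j): event i must precede event j
}

#[allow(dead_code)]
impl Pomset {
    /// Parallel composition: disjoint union of events; no ordering is
    /// added between the two sides.
    fn par(self, other: Pomset) -> Pomset {
        let n = self.events.len();
        let mut events = self.events;
        events.extend(other.events);
        let mut order = self.order;
        order.extend(other.order.into_iter().map(|(i, j)| (i + n, j + n)));
        Pomset { events, order }
    }

    /// Sequential composition: as for `par`, but additionally every event
    /// of the first pomset precedes every event of the second.
    fn seq(self, other: Pomset) -> Pomset {
        let n = self.events.len();
        let m = other.events.len();
        let mut p = self.par(other);
        for i in 0..n {
            for j in 0..m {
                p.order.push((i, n + j));
            }
        }
        p
    }
}
```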

PO: As I said above, there has been tremendous progress in mechanized verification of concurrent programs, but there has been less in automatic program analysis. With program analysis we would like to give programmers feedback without requiring annotations, say by trying to prove specific integrity properties (such as memory safety or race freedom); annotations can help the analysis along, but are not needed to get started, and this greatly eases broad deployment. A number of prototype concurrency analyses based on CSL have been developed, but there has been much more work applying sequential separation logic to program analysis. For example, the Infer program analyser, which is in production at Facebook, uses sequential but not concurrent separation logic. Making advanced program analysis for concurrency that brings value to programmers in the real world is, in the main, an open problem. And not an easy one.

I would finally like to mention language design. There have been experimental type systems which incorporate ideas from CSL into a programming language, such as the Mezzo language and Asynchronous Liquid Separation Types. Related ideas can be found earlier in Cyclone, and more recently in the ownership typing that happens in the Rust language. It seems as if there is a lot of room for experimentation and innovation in this space.
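For a small taste of the connection, here is a standard Rust snippet (our own example, not taken from any of the systems just named): the mutex hands a thread exclusive ownership of the protected counter for the duration of the critical section, a discipline the type system enforces much as CSL’s proof rules would.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    // The Mutex owns the counter; a thread can touch it only by
    // acquiring the lock, which grants temporary exclusive ownership.
    let counter = Mutex::new(0);
    thread::scope(|s| {
        for _ in 0..2 {
            s.spawn(|| {
                *counter.lock().unwrap() += 1;
            });
        }
    });
    assert_eq!(*counter.lock().unwrap(), 2);
}
```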

Stephen Brookes and Peter W. O’Hearn receive the 2016 Gödel Prize

2016 GÖDEL PRIZE RECOGNIZES MAJOR ADVANCES IN VERIFICATION OF CONCURRENT PROGRAMS

The Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory (SIGACT) and the European Association for Theoretical Computer Science (EATCS) have announced that Stephen Brookes and Peter W. O’Hearn are the recipients of the 2016 Gödel Prize for their invention of Concurrent Separation Logic. The prize will be presented at the 43rd International Colloquium on Automata, Languages and Programming (ICALP 2016), to be held July 12-15 in Rome, Italy. Congratulations to Peter and Steve!

In computer science, concurrency is the decomposition of programs, algorithms or problems into units whose order is not determined; concurrent program units might occur in arbitrary order, even overlapping in time. Such a decomposition is the basis for the parallel execution of programs or algorithms, which can considerably enhance the overall speed of execution on multicore and multiprocessor systems. However, concurrent execution can make programs difficult to understand and get right, because different execution orders can lead to different results, and because different threads of execution can interfere with one another. Computer scientists use logic to write, and to reason about, specifications of concurrency in computer software systems.
In their separate papers, Brookes’ “A Semantics for Concurrent Separation Logic” and O’Hearn’s “Resources, Concurrency and Logical Reasoning,” the 2016 Gödel Prize recipients introduced and advanced the idea of Concurrent Separation Logic (CSL), which has had a far-reaching impact in both theoretical and practical realms. O'Hearn's paper introduces CSL and is very much about fluency with the logic – how to reason with it – rather than its meta-theory. The latter is the concern of Brookes' paper, which demonstrates the soundness of the logic via a clever new model; this was essential for CSL to be widely accepted and applied.
In the theoretical realm, almost all research papers developing concurrent program logics in the last decade are based on CSL, including work on permissions; on refinement and atomicity; on adaptations to assembly languages and weak memory models; on higher-order variants; and on logics for termination of concurrent programs. In the practical realm, the beauty of CSL lies in its similarity to the programming idioms commonly used by working engineers. The fact that the logic matches the common programming idioms has the effect of greatly simplifying proofs. CSL’s simplicity and structure also facilitate automation. As a result, numerous tools and techniques in the research community are based on it, and it is attracting attention in companies such as Facebook, Microsoft and Amazon.
Stephen Brookes is Professor of Computer Science at Carnegie Mellon University. A long-term aim of his research, culminating in his foundational work on Concurrent Separation Logic, has been to facilitate the design and analysis of correct concurrent programs. He has recently begun work on semantic models for weak memory concurrency. His work on Concurrent Separation Logic was partially funded by the National Science Foundation (NSF). He obtained his BA degree in Mathematics and his PhD in Computer Science, both from Oxford University.
Peter W. O’Hearn is an Engineering Manager at Facebook and a Professor of Computer Science at University College London. He cofounded a verification startup, Monoidics, which was acquired by Facebook in 2013. O’Hearn has also held academic positions at Queen Mary University of London and at Syracuse University. He is a past recipient of a Royal Society Wolfson Research Merit Award and a Most Influential POPL Paper award, and held a Royal Academy of Engineering/Microsoft Research Chair. He obtained his BS in Computer Science from Dalhousie University, Nova Scotia, and his MS and PhD degrees in Computer Science from Queen’s University, Kingston, Canada.
The Gödel Prize is named in honour of Kurt Gödel, who was born in Austria-Hungary (now the Czech Republic) in 1906. Gödel's work has had immense impact upon scientific and philosophical thinking in the 20th century. The award is presented annually by ACM’s Special Interest Group on Algorithms and Computation Theory (SIGACT) (http://sigact.acm.org) and the European Association for Theoretical Computer Science (EATCS) (http://eatcs.org/). It recognizes major contributions to mathematical logic and the foundations of computer science and includes an award of $5,000.