Friday, April 03, 2020

An interview with Nancy Lynch and Roberto Segala, CONCUR Test-of-Time Award recipients

This post is devoted to the second interview with the colleagues who were selected for the first edition of the CONCUR Test-of-Time Award. (See here for the interview with Davide Sangiorgi.) I asked Nancy Lynch (MIT, USA) and Roberto Segala (University of Verona, Italy) a few questions via email, and I copy their answers below. Let me thank Nancy and Roberto for their answers. I trust that readers of this blog will find them interesting and inspiring.

Luca: You receive one of the two CONCUR ToT Awards for the period 1992-1995 for your paper "Probabilistic simulations for probabilistic processes", presented at CONCUR 1994. Could you tell us briefly what spurred you to develop the “simple” probabilistic automaton model introduced in that paper and how the ideas underlying that notion came about?

Roberto: We were studying randomized distributed algorithms and we noted that several algorithms suffered from subtle errors. We were looking for a formal, rigorous way to study the correctness of those algorithms and in particular we were trying to extend the hierarchical approach used for I/O automata. We knew that there was already an extensive literature within concurrency theory, but we were not able to use the theory to express even the simple idea that you may choose internally between flipping a fair or a biased coin; the existing approaches were extending labeled transition systems by adding probabilities to the arcs, losing the ability to distinguish between probabilistic and nondeterministic choice. The fact that the axioms for nondeterministic choice were not valid in most probabilistic calculi was further evidence of the problem. Starting from the observation that in the non-probabilistic world a step of a distributed algorithm corresponds to a transition of an I/O automaton, we tried to extend the notion of transition so that the process of flipping a coin could be represented by the "new" entity. This led to the idea of "replacing points with measures" within ordinary automata and/or ordinary labeled transition systems. The idea worked very well for our analysis of randomized distributed algorithms, but we also wanted to validate it by checking whether it would simplify the concurrency-theoretical approach to probability. We decided to use simulation relations for our test, and indeed it worked and led to the CONCUR 1994 paper.
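
To illustrate the idea (a minimal sketch of my own, not code from the paper, with all names hypothetical), the internal choice between flipping a fair or a biased coin becomes a nondeterministic choice between two transitions, each of which ends in a probability distribution over states rather than in a single target state:

    from fractions import Fraction

    # Transitions of a "simple" probabilistic automaton:
    # state -> list of (action, distribution over target states).
    # Nondeterminism is the choice among the listed transitions;
    # probability lives only inside each single transition
    # ("points replaced by measures").
    transitions = {
        "start": [
            ("flip_fair",   {"heads": Fraction(1, 2),  "tails": Fraction(1, 2)}),
            ("flip_biased", {"heads": Fraction(9, 10), "tails": Fraction(1, 10)}),
        ],
        "heads": [],
        "tails": [],
    }

    # Sanity check: every transition ends in a genuine probability measure.
    for state, trans in transitions.items():
        for action, dist in trans:
            assert sum(dist.values()) == 1, (state, action)
            assert all(target in transitions for target in dist)

Attaching probabilities to single arcs instead, as the earlier extensions of labeled transition systems did, would conflate the internal choice of which coin to flip with the probabilistic outcome of the flip, which is precisely the distinction Roberto describes losing.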

Nancy: For background for this paper, we should recall that, at the time, there was a divergence in styles of modeling for distributed systems between different concurrency theory research communities. The style I had been using since the late 70s for modeling distributed algorithms involved interactive state machines, such as I/O automata, which are based on a set-theoretic foundation. Such models are good for describing distributed algorithms, and for analyzing them for both correctness and costs. On the other hand, the predominant style in the European concurrency community was more syntactic, based on logical languages like PCTL or various process algebras. These were good for formal verification, and for studying the expressive power of different formalisms, but not great for analyzing complicated distributed algorithms. Our models were in the general spirit of the Labeled Transition System (LTS) models previously studied in Europe. When Roberto came to MIT, he and I set out to extend prior modeling work for distributed systems to the probabilistic setting. To do this, we considered both the set-theoretic and logical approaches. We needed to bridge the gap between them, which led us to the ideas in this paper.

Luca: How much of your later work has built on your CONCUR 1994 paper? What follow-up results of yours are you most proud of and why?

Roberto: Besides using the results of the paper for the analysis of several randomized distributed algorithms, we worked jointly as well as independently on the study of the theory and on the analysis of security protocols. In collaboration with Frits Vaandrager, we were able to discover different ways to analyze security in a hierarchical and compositional way. Furthermore, since in simple probabilistic automata probability and nondeterminism are well separated, it was easy to include computational complexity into the analysis of security protocols. This is work I did in collaboration with Andrea Turrini. The clear separation between probability and nondeterminism turned out to extend our approach to real-time models, leading to notions like Probabilistic Timed Automata and to several of the elements behind the first sketches of the probabilistic model checker PRISM in collaboration with the group of Marta Kwiatkowska.

Nancy: Roberto and I continued working on probabilistic models, in the set-theoretic style, for several years.  As Roberto notes, Frits Vaandrager also became a major collaborator, and we wrote several more papers following this direction.
The last of those papers provides a kind of compositional foundation for combining security protocols. The key idea was to weaken the power of the adversary, making it more "oblivious". In other related work, my students and I have worked extensively on probabilistic distributed algorithms since then. The models are similar to those developed in this early paper. Examples include wireless network algorithms and, more recently, biologically-inspired algorithms, such as insect colony algorithms and neural network algorithms. I can't pinpoint a specific result that relies heavily on the 1994 paper, but that paper certainly provided inspiration and foundation for the later work.

Luca: Did you imagine at the time that the CONCUR 1994 paper and its journal version would have so much impact? In your opinion, what is the most interesting or unexpected use in the literature of the notions and techniques you developed in your award-winning paper?

Roberto: We knew that the change of view proposed in the paper would simplify the study and extension of several other concepts within concurrency theory, but we did not have any specific expectations. The collaboration on the PRISM project and on Probabilistic Timed Automata made it more evident that there was a connection with Markov Decision Processes, which led to a fruitful cross-fertilization between artificial intelligence, model checking, and concurrency theory. It was a long exchange of ideas between a world interested in existential quantification, that is, finding an optimal policy to achieve a goal, and one interested in universal quantification, that is, making sure that under any scheduler, policy, or adversary, a system behaves correctly. The consequences of this exchange were many and unexpected.
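
In the language of Markov Decision Processes, the contrast Roberto describes can be put schematically as follows (my notation, not taken from the paper); writing $\sigma$ for the schedulers/policies of an MDP $M$:

\[
\text{planning/AI:}\quad \sup_{\sigma} \Pr\nolimits^{\sigma}_{M}(\Diamond\,\mathit{goal})
\qquad\text{versus}\qquad
\text{verification:}\quad \inf_{\sigma} \Pr\nolimits^{\sigma}_{M}(\varphi) \;\geq\; p .
\]

The first asks for some scheduler that reaches a goal with maximal probability; the second asks that every scheduler keeps the probability of a correctness property $\varphi$ above a threshold $p$.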

Nancy: No, I did not anticipate that it would have broad impact, though of course I thought the ideas were important. But in general, I am bad at predicting what will appeal to others and inspire them to further work.

Luca: The journal version of your paper appears in the Nordic Journal of Computing, which is not a prime venue. Did you ever regret not publishing that work somewhere else? What is your take on the trend of using the perceived quality of a publication venue to evaluate research quality?

Roberto: Within semantics I know of a few technical reports that are much more influential than most journal papers; I do not think that the venue of a paper should be given much importance. On the other hand, venue as an evaluation tool is more objective, allows an evaluator to consider many papers without reading them, and protects the evaluator from any type of claim against their decisions, which sometimes may even have legal consequences. Needless to say, I do not agree with any of the ideas above. Nor do I agree with the other emerging trend of counting papers to evaluate quality: one good paper is much better than ten mediocre ones. What worries me the most is that young researchers may have a hard time concentrating on quality if they have to focus on venue and quantity. I feel I was lucky not to have to worry about these issues when we submitted our paper for the special issue of the Nordic Journal of Computing.

Nancy: I certainly never regretted the venue for this paper. In general, I haven't paid too much attention to the choice of publication venue. The main thing is to reach the audience you want to reach, which can be done through prestigious journals, less prestigious journals, conferences, or even arXiv technical reports and some publicity. It’s good to get feedback from referees, though. For evaluations, say for hiring or promotion, I think it’s fair to take many factors into account in evaluating research quality. Venues, numbers of citations, special factors about different fields… all can be discussed by hiring and promotion committees. But I hope that those committees also take the time to actually read and consider the work itself.

Luca: The last thirty years have seen a huge amount of work on probabilistic models of computation and on the development of proof techniques and tools based on them. What advice would you give to a young researcher interested in working on probabilistic models in concurrency today?

Roberto: My first piece of advice would be to keep things simple and to focus on key ideas, possibly under the guidance of multiple application scenarios. When we study probabilistic concurrency, especially in contexts where the external world is governed by non-discrete laws, the interplay between different features of the model may become overwhelmingly complex. Of course it is a nice exercise to dive into all the details and see where they lead, or to extend known results to the new scenarios, but how about checking whether some simple changes to one of the building blocks could make life easier? Unfortunately, working on building blocks is risky. So, my second piece of advice is to dive into details and write papers, but take some breaks and think about whether there may be nicer ways to solve the same problems.

Nancy: Pick some particular application domain, and make sure that the models and techniques you develop work well for that application. Don't work on the theory in a vacuum. The models will turn out much better! Perhaps simpler, as Roberto says.

Luca: What are the research topics that currently excite you the most?

Roberto: Difficult question. There are many things I like to look at, but at the moment I am very curious about how quantum computing can fit in a nice and elementary way into a concurrency framework.

Nancy: Probabilistic algorithms, especially those that are flexible (work in different environments), robust to failures and noise, and adaptive to changes. This includes such topics as wireless network algorithms, robot swarms, and biologically-inspired algorithms.

Acknowledgements: Many thanks to Ilaria Castellani, who pointed out some typos in the original version of this text.  

Thursday, April 02, 2020

An interview with Davide Sangiorgi, CONCUR Test-of-Time Award recipient

The International Conference on Concurrency Theory (CONCUR) and the IFIP 1.8 Working Group on Concurrency Theory are happy to announce the first edition of the CONCUR Test-of-Time Award. The purpose of the award is to recognize important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. All papers published in CONCUR between 1990 and 1995 were eligible.

The award winners for the CONCUR ToT Awards 2020 may be found here, together with the citations for the awards. They were selected by a jury composed of Jos Baeten, Patricia Bouyer-Decitre, Holger Hermanns, Alexandra Silva and myself.

This post is the first of a series in which I interview the recipients of the CONCUR ToT Awards 2020. I asked Davide Sangiorgi (University of Bologna, Italy) a small number of questions via email and I report his answers below in a slightly edited form. Let me thank Davide for his willingness to take part in an interview and for his inspiring answers, which I hope will be of interest to readers of this blog and will inspire young researchers to take up work on Concurrency Theory.

In what follows, LA stands for Luca Aceto and DS for Davide Sangiorgi.

LA: You receive one of the two CONCUR ToT Awards for the period 1992-1995 for your paper "A Theory of Bisimulation for the pi-Calculus", presented at CONCUR 1993. Could you tell us briefly what spurred you to develop open bisimilarity and how the ideas underlying that notion came about?

DS: I developed the paper on open bisimulation during 1992 and 1993. I was in Robin Milner's group, in Edinburgh. We were studying and questioning basic aspects of the theory of the pi-calculus. One such aspect was the definition of equality on processes; thus a very fundamental aspect, underlying the whole theory. The equality had to be a form of bisimulation, in the same way as it was for CCS. The two forms of bisimilarity that were around for the pi-calculus, late and early bisimilarity, are not congruence relations. In both cases, the input clause of bisimilarity uses a universal quantification on the possible instantiations of the bound name. As a consequence, neither bisimilarity is preserved by the input prefix (forbidding substitutions in the input clause would make things worse, as congruence would fail for parallel composition). Therefore, one has to introduce separately the induced congruence, by universally quantifying the bisimilarities over all name substitutions. In other words, the two bisimilarities are not fully substitutive ('equal' terms cannot be replaced, one for the other, in an arbitrary context). On the other hand, the congruences induced by the bisimilarities are not themselves bisimilarities. Hence in this case 'equal' terms, after some actions, need not be 'equal' anymore. Thus, for instance, such relations do not support dynamic modifications of the context surrounding related terms.
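
To see the failure concretely, here is the standard textbook counterexample in my own rendering (trailing nil processes omitted):

\[
P \;=\; \bar{x}z \mid y(w), \qquad
Q \;=\; \bar{x}z.\,y(w) \;+\; y(w).\,\bar{x}z .
\]

$P$ and $Q$ are both early and late bisimilar, since each merely interleaves its two prefixes. However, under the substitution $\{x/y\}$ the two prefixes of $P$ can communicate, so $P\{x/y\}$ has a $\tau$-transition that $Q\{x/y\}$ cannot match. Consequently $a(y).P$ and $a(y).Q$ are not bisimilar: the input at $a$ may instantiate $y$ with $x$.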

This situation was not fully satisfactory. The same could be said for the algebraic theory: there were proof systems for the two bisimilarities (of course, on finitary processes) but, because of the above congruence issue, there were no axiomatisations.  (In those years I was also working with Joachim Parrow on axiomatisations of these relations.)

The universal quantification on substitutions in the input clause of the bisimilarities, and in the definitions of the induced congruences, was also unsatisfactory because it could make checking equalities cumbersome.

All these were motivations for looking at possible variations of the definition of bisimulation. The specific hints towards open bisimulation came from thinking about two key facets of the pi-calculus model that were somehow neglected in the definitions of early and late bisimilarities. The first facet has to do with the pi-calculus rejection of the separation between channels and variables ('channel' here meaning a 'constant identifier'). In the pi-calculus, there is only one syntactic category, that of names, with no formal distinction between channels and variables. This contributes to the elegance of the model and its theory. However, both in early and in late bisimilarity, the free names of processes are treated as channels, whereas the bound names of inputs are treated as variables because of their immediate instantiation in the bisimilarity clause. There was somehow a discrepancy between the syntax and the semantics.

The second facet of the pi-calculus that contributed to the definition of open bisimilarity is the lack of the mismatch operator: the pi-calculus, at least in its original proposal, has a match operator to test equality between names, but not the dual mismatch, to test for inequality. Mismatching had been excluded to preserve a monotonicity property on transitions, intuitively asserting that substitutions may only increase the action capabilities of a process. (Both facets above represented major differences between the pi-calculus and its closest ancestor, Engberg and Nielsen's Extended CCS.) Thus I started playing with the idea of avoiding the name instantiation in the input clause and, instead, allowing, at any moment, arbitrary instantiations (i.e., substitutions) of the names of the processes, the latter being justified by the above monotonicity property of transitions. Adding the requirement of being a congruence then led to the definition of open bisimilarity.
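
Glossing over the 'distinctions' needed to handle restricted names, the resulting clause can be stated roughly as follows (a schematic rendering of mine, not the paper's exact definition): a symmetric relation $\mathcal{R}$ on processes is an open bisimulation if, whenever $P \mathrel{\mathcal{R}} Q$, then for every substitution $\sigma$,

\[
P\sigma \xrightarrow{\;\alpha\;} P' \ \text{implies}\ Q\sigma \xrightarrow{\;\alpha\;} Q' \ \text{for some } Q' \ \text{with}\ P' \mathrel{\mathcal{R}} Q' .
\]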

Still, I was not sure that such a bisimulation could be interesting and robust. Two further developments helped here. One was the axiomatisation (over recursion-free terms). It was a pure axiomatisation, it was simple, and it came with a completeness proof that leads to the construction of canonical and minimal (in some syntactic sense) representatives for the equivalence classes of the bisimilarity. For other bisimilarities, or related congruences, obtaining canonical representatives seems hard; at best such representatives are parametrised by a set of free names, and even in these cases minimality is not guaranteed.

The other development has to do with a symbolic or "efficient" characterisation of the bisimilarity. The initial definition of open bisimulation makes heavy use of substitutions. In the symbolic characterisation, substitutions are performed only when needed (for instance, the unification of two names a and b is required if there is an input at a and an output at b that can interact), somehow echoing the call-by-need style of functional languages. Such a characterisation seemed promising for automated or semi-automated verification.

LA: How much of your later work has built on your CONCUR 1993 paper? What results of yours are you most proud of and why?

DS: The most basic idea in open bisimulation is to avoid the instantiation of the bound name of an input, possibly making such a bound name a free name of the derivative term. The use of substitutions, elsewhere in the definition, is necessary to obtain a congruence relation for the pi-calculus. I was surprised to discover, in the following years, that such substitutions are not necessary in two relevant subsets of the pi-calculus. I called the variant of open bisimulation without substitutions ground bisimulation (I think the name came from Robin). One subset is Honda and Boudol's Asynchronous pi-calculus, whose main constraint is to make outputs floating particles that do not trigger the activation of a continuation (other limitations concern sum and matching). The other subset is the Internal (or Private) pi-calculus, in which only private (i.e., restricted) names may be transmitted. I designed the Internal pi-calculus with ground bisimilarity in mind. People seem to have found this calculus useful in many ways, partly because of its expressiveness combined with its simple theory (in many aspects similar to that of CCS), partly because it allows one to limit or control aliasing between names, which can be useful for carrying out proofs about behavioural properties of processes, or for designing and reasoning about type systems, or for representing the calculus in logical systems.
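
The distinguishing feature of ground bisimulation is its input clause which, roughly stated and glossing over freshness side-conditions (again my schematic rendering), requires no instantiation of the bound name: whenever $P \mathrel{\mathcal{R}} Q$,

\[
P \xrightarrow{\;a(x)\;} P' \ \text{implies}\ Q \xrightarrow{\;a(x)\;} Q' \ \text{for some } Q' \ \text{with}\ P' \mathrel{\mathcal{R}} Q' ,
\]

where the bound name $x$ is kept as a fresh free name of the derivatives instead of being instantiated with all possible names.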

Sometimes, the possibility of using ground bisimulation can considerably simplify proofs of equalities of terms. For instance, in my works on comparisons between the pi-calculus and the lambda-calculus, when I had to translate the latter into the former I have always used one of the above subcalculi (sometimes even combining them, e.g., the Asynchronous Internal pi-calculus), precisely to be able to use ground bisimilarity.

I consider both ground bisimilarity and the Internal pi-calculus spin-offs of the work on open bisimilarity.

While working on open bisimilarity for the pi-calculus, in a different paper, I applied the idea of open bisimilarity to the lambda-calculus. I kept the name 'open', but the bisimulation is really 'ground', as there are no substitutions involved. I remember Robin encouraging me to keep the name 'open' because it conveyed well the idea of setting a bisimulation on open terms, rather than on closed terms as usually done. In open bisimulation for the lambda-calculus, a lambda-abstraction lambda x. M yields an action with label lambda x that should be matched (modulo alpha-conversion) by the same action of a bisimilar term. (Of course, additional bisimulation clauses are needed when a free variable is found in evaluation position.) In contrast, in the ordinary bisimulation for the lambda-calculus, Abramsky's applicative bisimilarity, the bound variable of an abstraction has to be instantiated with all closed terms, which is heavy. In general, open bisimilarity is finer than applicative bisimilarity and contextual equivalence (the reference equivalence in the lambda-calculus), but they often coincide in examples of concrete interest. Moreover, open bisimilarity does coincide with contextual equivalence in appropriate extensions of the lambda-calculus. In short, open bisimilarity offers us a technique for reasoning on higher-order languages using 'first-order' tools, somewhat as game semantics does.
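
In rough, schematic form (my rendering, not the exact definition from that paper), the two key clauses of open, or normal-form, bisimulation for a call-by-name lambda-calculus read: whenever $M \mathrel{\mathcal{R}} N$,

\[
M \Downarrow \lambda x.\,M' \ \text{implies}\ N \Downarrow \lambda x.\,N' \ \text{with}\ M' \mathrel{\mathcal{R}} N' ,
\qquad
M \Downarrow x\,M_1 \cdots M_k \ \text{implies}\ N \Downarrow x\,N_1 \cdots N_k \ \text{with}\ M_i \mathrel{\mathcal{R}} N_i \ \text{for all } i ,
\]

where the second clause covers the case of a free variable in evaluation position.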

This line of work about open bisimilarity in higher-order languages has been very fruitful, and is still studied a lot, for various forms of higher-order languages, sometimes under the name of 'normal-form' bisimulation.

LA: In your opinion, what is the most interesting or unexpected use in the literature of the notions and techniques you developed in your award-winning paper?

DS: I mentioned the hope, when working on open bisimilarity, that its symbolic "efficient" characterisation could be useful for automated or semi-automated tools for reasoning about behavioural properties. Shortly after introducing open bisimulation, Björn Victor and Faron Moller, both in Edinburgh at the time, exploited it to design the Mobility Workbench. I also worked on an algorithm and a prototype tool for on-the-fly checking, with Marco Pistore.

However, the most surprising applications in this direction have arrived later, when the 'lazy' instantiation of bound names of open bisimilarity has been applied to languages richer than the pi-calculus. For instance, Chaki, Rajamani, and Rehof have used open similarity in their methods and tools for model checking distributed message-passing software. Others have applied open bisimilarity to languages for security and cryptography, like the spi-calculus and the applied pi-calculus. These include S. Briais, U. Nestmann and colleagues at EPFL and Berlin. R. Horne, A. Tiu and colleagues in Singapore and Luxembourg have pushed significantly in this direction, with verification techniques and tools. For instance, very recently they have discovered a privacy vulnerability in e-passports.

Similarly, Yuxin Deng and colleagues have applied the idea of open bisimulation to quantum processes, with analogous motivations: avoiding the universal quantification in the instantiation of variables, which is algorithmically unfeasible in the quantum setting, as quantum states constitute a continuum.

Another line of work that I found interesting and surprising concerns abstract frameworks for concurrency, including logical frameworks. Here forms of open bisimulation are often the 'natural' bisimulation that comes up. These frameworks may be based, for instance, on coalgebras and category theory (e.g., works by M. Fiore and S. Staton, N. Ghani, K. Yemane, and B. Victor), on category theory for reactive systems (e.g., works by F. Bonchi, B. König and U. Montanari), on nominal SOS rule formats (e.g., works by M. Cimini, M. R. Mousavi, and M. A. Reniers), or on higher-order logic languages (e.g., works by A. Tiu, G. Nadathur, and D. Miller).

Some works have pushed the idea underlying open bisimulation, namely avoiding the instantiation of bound names in interactions with an external observer, one step further: such bound names are not instantiated even in interactions internal to the processes. The substitutions produced by the interactions are added to the calculus, producing particles sometimes called fusions. This mechanism resembles the explicit substitutions of the lambda-calculus, but it goes beyond that; for instance, the addition of fusions leads to modifications of input and output prefixes that produce pleasant syntactic and semantic symmetry properties. Various people have worked on this, including B. Victor, J. Parrow, Y. Fu, C. Laneve, P. Gardner and L. Wischik.

I should also mention the recent work by K. Y. Ahn, R. Horne, and A. Tiu on logical interpretations of open bisimilarity. Remarkably, they explain the difference between the original (late and early) formulations of bisimilarity in the pi-calculus and open bisimilarity as the difference between intuitionistic and classical versions of modal logics.

Apologies for not mentioning everybody!

LA: To your mind, how has the focus of CONCUR changed since its first edition in 1990?

DS: I remember that in the early years of CONCUR there was tremendous excitement about the conference. A forum for grouping the (fast-growing) community had been long awaited. In Edinburgh, every year after the conference, the people with an interest in concurrency would meet (multiple times!) and discuss the contents of the proceedings. Several of us would attend the conference every year. Attending the conference was very useful and interesting: one was sure to meet a lot of people, hear about excellent papers, and have lively discussions. We would try to go even without a paper in the proceedings. I vividly remember the 1993 edition, where I presented the paper on open bisimulation. It had been organised by Eike Best in Hildesheim, Germany. It was an enjoyable and exciting week; I met and got to know a number of people in our community, and learned a lot. (How sad and unwise that the Computer Science department in Hildesheim, so strong in concurrency at the time, was shut down a few years later.) Over the years, the CONCUR community has kept increasing in size. The conference has substantially broadened its scope, rightly including new emerging topics. Perhaps it is more difficult than in the past to (more or less) understand most of the presented papers, both because of the diversity of the topics and because of their technical specialisation. On the other hand, there are now satellite events covering a number of areas. Hence there are always plenty of interesting presentations and talks to attend (this was certainly the case for me at the last edition, in Amsterdam!). I should also mention here the activity of the IFIP WG 1.8 on Concurrency Theory, currently chaired by Ilaria Castellani, which in many ways supports and promotes CONCUR.

The quality of the papers at CONCUR is still very high. This is very important. As a community we should strive to maintain, and possibly even increase, the excellence and prestige of the conference, first of all, by submitting our best papers to the conference. CONCUR must be a reference conference in Computer Science, which is essential for injecting new people and energy into the community.

Acknowledgements: Many thanks to Ilaria Castellani, who pointed out a number of typos in the original version of this text. 

Wednesday, April 01, 2020

Magnus M. Halldorsson: EATCS Fellow 2020

As announced here, the EATCS Fellows vintage 2020 are
  • Pierpaolo Degano, Universita di Pisa, Italy: for his contributions to concurrency theory and its applications to security and biological systems. 
  • Mohammad Taghi Hajiaghayi, University of Maryland, USA: for his contributions to the theory of algorithms, in particular algorithmic graph theory, game theory, and distributed computing. 
  • Magnus Mar Halldorsson, Reykjavik University, Iceland: for his contributions to the theory of approximation and graph algorithms as well as to the study of wireless algorithmics. 
Congratulations to all of them! However, I trust that Mohammad and Pierpaolo will forgive me if I devote this post to celebrating Magnus, his work and his contributions to the TCS community.

So, why was Magnus chosen as one of the EATCS Fellows 2020? Here are some reasons why.

Magnús has made seminal contributions to the theory of approximation and graph algorithms as well as to the study of wireless algorithmics. His research career and contributions so far can be roughly divided into two phases. The first phase spans the time from the beginning of his career until roughly ten years ago. During that time, Magnús made significant contributions to approximation algorithms for maximum independent set and graph colouring, amongst many other problems. In the second phase, which started a bit more than ten years ago, he has worked on the algorithmics of realistic models for wireless computation. I think that it is fair to say that Magnús is currently the expert on wireless algorithmics based on the SINR model.
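
For readers who have not met it, the SINR (signal-to-interference-plus-noise ratio) model deems a transmission from a sender s to a receiver r successful roughly when

\[
\mathrm{SINR}(s,r) \;=\; \frac{P_s \cdot d(s,r)^{-\alpha}}{N \;+\; \sum_{t \neq s} P_t \cdot d(t,r)^{-\alpha}} \;\geq\; \beta ,
\]

where the P's are transmission powers, d measures distance, alpha is the path-loss exponent, N is the ambient noise, beta is a hardware-dependent threshold, and the sum ranges over the other simultaneous transmitters. (This is the textbook formulation; notation varies across papers.) Scheduling as many simultaneous successful transmissions as possible under this constraint is what gives the problems their combinatorial flavour, as explained next.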

These two phases are not at all disjoint. Indeed, the typical problems studied in the SINR model, such as determining the capacity of wireless networks or how to schedule messages in such networks, can be seen as independent set and colouring problems, respectively, and his experience with those problems in graph algorithmics certainly helped Magnús in obtaining breakthrough results in wireless algorithmics. Throughout his career, Magnús has also made significant contributions to the computation of independent sets and colourings in restricted computational models, such as the online model of computation and the data-streaming model. His sustained research productivity, both in quality and in quantity, is all the more remarkable since it has largely been achieved working in the difficult research environment in Iceland, where he was largely isolated until the establishment of the Icelandic Centre of Excellence in Theoretical Computer Science (ICE-TCS) in 2005. (Magnus has been the scientific director of the centre for 15 years.)

In addition to his seminal research achievements, Magnús has served the theoretical computer science community by sitting on prestigious award committees, organizing conferences and workshops in Iceland and elsewhere, serving on steering committees, and acting as an inspiring mentor for young researchers in computer science who have come to Iceland explicitly to work with him. By way of example, Magnús is a member of the steering committees of SIROCCO (which he chairs), ALGOSENSORS, and SWAT, the Scandinavian Symposium and Workshops on Algorithm Theory (which he also chairs). He was a member of the Council of the EATCS and has organized the best-attended ICALP conference to date (ICALP 2008). Amongst many other such duties, he was PC chair for Track C of ICALP 2015 and for ESA 2011.

Magnús was also one of the initiators and first editors of “A compendium of NP optimization problems”, which is a catalog of approximability results for NP optimization problems and has been a useful resource for researchers in that field for a long time.

Summing up, Magnús is a true stalwart of the algorithmics research community, and a great example for many of us. In my, admittedly biased, opinion, he richly deserves the recognition of being named an EATCS Fellow. I have no doubt that he will continue to lead by example in the coming years.



Monday, December 02, 2019

Faculty positions at the Department of Computer Science, Reykjavik University

This job ad could be of interest to you or to someone you know. Feel free to spread it as you see fit. Thanks!
 
The Department of Computer Science at Reykjavik University invites applications for several full-time faculty positions.

We are looking for energetic, highly qualified academics who are eager to develop their own research programs, strengthen existing research within the department and build bridges with industry. Of particular interest are candidates in the areas of machine learning, data science, computer security and software systems, broadly construed, but exceptionally qualified candidates from all areas of computer science are encouraged to apply.

Candidates should have a proven international research record that is commensurate with the level of the position for which they apply. The successful applicants will play a full part in the teaching and administrative activities of the department; in particular, they will teach courses and supervise students at both graduate and undergraduate level. Applicants having a demonstrated history of excellence in teaching are preferred. A PhD in computer science or a related field is required.

Salary and rank are commensurate with experience. An appointment at assistant-professor level is permanent track, with the expectation that a successful candidate will qualify for promotion to associate professor within six years.

The positions are open until filled, with the earliest available starting date in August 2020. Later starting dates can be negotiated.

The review of the applications will begin on Monday, 17 February 2020, and will continue until the positions are filled.

See http://radningar.hr.is/storf/viewjobonweb.aspx?jobid=3558 for further information on how to apply for the position and on the required documents.

About the department, Reykjavik University and Iceland

The Department of Computer Science at Reykjavik University has about 650 full-time-equivalent students and 22 faculty members. It provides an excellent working environment that encourages and supports independence in academic endeavours, and in which a motivated academic can have impact at all levels. The department offers undergraduate and graduate programs in computer science and software engineering, as well as a combined-degree undergraduate program in discrete mathematics and computer science. The doctoral program within the department received its ministerial accreditation in 2009 and has been active ever since.

The department is home to several research centres producing high-quality collaborative research in areas such as artificial intelligence, financial technology, language technology, software systems, and theoretical computer science, among others; for more information on those research centres, see https://en.ru.is/research/units/.

On the Times Higher Education rankings for 2020, Reykjavík University is ranked in first place along with six other universities for the average number of citations per faculty. Overall, according to that list, RU is ranked among the 300 best universities world-wide, 52nd out of all universities established fewer than 50 years ago, 14th among universities with fewer than 5000 students, and first among Icelandic universities.

Iceland is well known for its breathtaking natural beauty, with volcanoes, geysers, hot springs, lava fields and glaciers offering a dramatic landscape. It is consistently ranked as one of the best places in the world to live. It offers a high quality of life, is one of the safest places in the world, has high gender equality, and strong health-care and social-support systems. It is in fourth position in the 2019 UN World Happiness Report, which ranks the world's countries by their happiness level.

For further information about the Department of Computer Science at Reykjavik University and its activities, see http://en.ru.is/scs/.

Sunday, November 03, 2019

Call for opinions: Length of papers in conference proceedings in TCS

As current chair of the editorial board of LIPIcs, Leibniz International Proceedings in Informatics, I have been looking at some data about the length of the papers published in the series. The average length of the articles published in LIPIcs in 2019 so far is 15.8 pages, including front matter and bibliography. However, three of the published papers are over 40 pages long, and three conferences have an average article length above 20 pages (22, 23.9 and 28.4 pages, respectively).

I have seen that some conferences in TCS have no limit on the length of the submitted papers. A colleague whose opinions I hold in high esteem wrote to me saying:
Some people in the "conference name removed" community strongly feel there should be no page limit. My opinion may not be as strong as some, but I believe these people have a point.
Whether we like it or not, most papers in the TOC community only appear in conferences. Many of those conferences have no page limit for submissions, and encourage authors to include all proofs while making sure that the key ideas are presented in the first 10 pages or so. Scientific progress is hindered when authors are then forced to take out parts of their writeups for the conference proceedings in order for their papers to fit within the page limits. One may wish that they'd publish a full version of their paper in a journal subsequently, but most of them won't bother. The net effect is that there is no actual paper with all the details, which is no good.
I share the point made in the last sentence, but I am somewhat bothered by the fact that full versions of papers in TCS are increasingly not being submitted to journals. I am probably very old fashioned, but I feel that it is desirable to see our published results vetted by a journal-strength review process. I still view conferences as means for the rapid dissemination of results and as a meeting point for their community of reference, and I consider journals as the media for final archival publication of mature pieces of research. However, with my LIPIcs EB chair hat on and out of personal interest, I am keen to hear your opinion on whether it is good for the TCS community to publish only in conferences and to publish conference papers without page limits, bearing in mind that, quoting from the FOCS 2019 call for papers:
Although there is no bound on the length of a submission, material other than the title page, references, and the first ten pages will be read at the committee’s discretion.
I'd be grateful if you could post your thoughts on this matter in the comment section. Let's focus on what is best for the dissemination of science, even though long articles are more expensive than those whose average length is around 15 pages.

I look forward to hearing your opinions. Thanks in advance!

Tuesday, October 15, 2019

Facebook Testing and Verification Research Award to two CS@GSSI students

It gives me great pleasure to inform you that the project "Static prediction of test flakiness" submitted by Breno Alexandro Ferreira de Miranda (Federal University of Pernambuco, Brazil), Antonia Bertolino (Istituto di Scienza e Tecnologie dell’Informazione – CNR), Emilio Cruciani (Gran Sasso Science Institute) and Roberto Verdecchia (Gran Sasso Science Institute) has been selected for a Facebook Testing and Verification research award. Congratulations to everyone and to CS@GSSI as a whole!

The winners have been selected from a pool of over 100 proposals and are listed on the Facebook Research web page announcing the awards.

Quoting from that web page: 

“We are excited that, once again, we received over 100 proposals, and of such high quality,” says Mark Harman, Research Scientist and Facebook TAV co-chair. “We are really excited to see this research develop, and to see the deployment of advanced research in industry.”

“Despite doubling the funding available this year, there were many excellent proposals that we were sadly unable to fund this round. Such was the quality of the proposals we received,” says Mark. “We are very grateful to the scientific research community for engaging so enthusiastically with this call for proposals.”

The award will be presented at the upcoming Facebook Testing and Verification Symposium.

Tuesday, October 01, 2019

PhD position at TU/e on product line engineering in multidisciplinary cyber-physical systems

The Model Driven Software Engineering section at Eindhoven University of Technology (TU/e) is searching for a candidate for a fully-funded PhD position on product line engineering in the multidisciplinary context of cyber-physical systems to collaborate with the high-tech company ASML in the context of the EU ECSEL project Arrowhead Tools.

See here for the details of the position.

TU/e is a dynamic, research-intensive university in the heart of Europe, and in the Brainport region, a leading European technology region, and a centre for innovation and hi-tech industry. TU/e is consistently ranked within the top-100 positions in several world rankings for its research and quality of education.