Monday, January 21, 2008

Concurrency Column for the February Issue of the BEATCS

I have just posted the concurrency column that will appear in the February 2008 issue of the Bulletin of the EATCS. This installment of the concurrency column is devoted to a double bill, as it offers the following two contributions:
The first piece is a survey devoted to spatial logics contributed by Luis Caires, one of the prime movers behind the development of this exciting class of specification logics. Since the original work of Pnueli, (temporal) logics have been a prime formalism for the description of the behaviour of concurrent systems. Spatial logics are specification logics for describing the behaviour as well as the spatial structure of concurrent systems. Despite being only a fairly recent addition to the family of specification formalisms for concurrent computation, spatial logics are already the subject of a large literature reporting on a substantial body of non-trivial results. Luis Caires’s survey gives us a highly readable and welcome bird’s-eye view of this fast-moving subject.

The second contribution is a strategic report on applying concurrency research in industry. This non-technical article is one of the outcomes of the Workshop on Applying Concurrency Research in Industry, colocated with CONCUR 2007 in Lisbon, which I co-organized on behalf of IFIP WG1.8 “Concurrency Theory”. The essay tries to distil the contents of the presentations delivered at the workshop, and of the ensuing discussion, for the benefit of the concurrency community as a whole. I thank all the participants in the workshop for their contribution to the event. Special thanks go to the invited speakers (Vincent Danos, Hubert Garavel, Jan Friso Groote and Kim G. Larsen) for making the trip to Lisbon to share their considerable experience on this topic with us, and to Rance Cleaveland, Joost-Pieter Katoen, Moshe Vardi and Frits Vaandrager for providing their answers to the questions that the audience asked the invited speakers at the workshop.

I hope that you will enjoy these contributions, and that you will feel enticed to contribute to the ongoing discussion within the concurrency theory community.

Last, but not least, do contact me if you'd like to contribute a piece to the column!

Wednesday, January 16, 2008

Using Model Checkers in "Intro to OS" Courses

The advent of mature model-checking tools has made algorithmic model-based verification much more accessible to the average computer scientist/engineer. So much so that we can now teach first-year students to use model-checking tools by sweeping essentially all of the theory behind them under the carpet. See, for example, the very recent experience report:

Roelof Hamberg and Frits Vaandrager. Using Model Checkers in an Introductory Course on Operating Systems. Technical Report ICIS-R07031,
ICIS, Radboud University Nijmegen, December 2007.

I strongly recommend this report to anybody with even a passing interest in formal methods. It is well written, content-rich, and presents material that can inspire many of us in the lecture room. I myself wish that the paper had been posted last August, when I was planning (at the last minute) a second-year course on Operating Systems. I would have used the authors' tips and teaching material then. Not to mention great quotes like:

“Programs are not released without being tested; why should algorithms be published without being model checked?” (Leslie Lamport)

Last November, in a sudden moment of inspiration, I used the Uppaal model checker in a one-week intensive course on operating systems for engineering students, and the results were very encouraging.

Frits and his coauthor have done all of us a great service by making their material available on the web, and by sharing their experience with the rest of us. Let's expose our students to easy-to-use model checkers like Uppaal from the first year of their studies in CS and Engineering. This will also increase the impact of research in formal methods. Several of the students taking my one-week course wrote in their course evaluation that they were glad to have been exposed to Uppaal, and that they think they will use the tool again in their future studies on, e.g., control systems. This indicates that, once students have seen how useful model checkers are, they will be enticed to use them later on when facing similar problems. Last, but not least, the mathematically inclined students may be motivated to carry out independent studies and (under)graduate research in formal methods and other areas of theoretical computer science.
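For readers who have never played with such tools, here is a minimal, purely illustrative sketch of the idea behind explicit-state model checking, written in Python rather than in Uppaal's modelling language (the toy protocol and the property are made up for the occasion): it exhaustively explores the reachable states of two unsynchronized processes that each increment a shared counter, and reports which final counter values are possible.

```python
from collections import deque

# A toy "model checker": exhaustive exploration of the state space of two
# processes that each execute  load; increment; store  on a shared counter,
# with no locking.  Safety question: can an increment be lost, so that the
# final value of the counter is 1 instead of 2?

def successors(state):
    # state = (shared, (pc1, reg1), (pc2, reg2)); pc 3 means "terminated"
    shared, p1, p2 = state
    procs = [p1, p2]
    for i, (pc, reg) in enumerate(procs):
        if pc == 0:              # load the shared value into the local register
            nxt, new_shared = (1, shared), shared
        elif pc == 1:            # increment the local register
            nxt, new_shared = (2, reg + 1), shared
        elif pc == 2:            # store the local register back to shared
            nxt, new_shared = (3, reg), reg
        else:
            continue             # this process has terminated
        new_procs = list(procs)
        new_procs[i] = nxt
        yield (new_shared, new_procs[0], new_procs[1])

def reachable(initial):
    # standard breadth-first exploration of the reachable state space
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

init = (0, (0, 0), (0, 0))
finals = {s[0] for s in reachable(init) if s[1][0] == 3 and s[2][0] == 3}
print(sorted(finals))            # prints [1, 2]: the lost update is reachable
```

Exhaustive exploration of this kind is, in essence, what a tool like Uppaal automates for us, together with the data structures and reduction techniques needed to tame state explosion, which first-year students can happily ignore at first.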

Quoting from the concluding section of the paper:

“Why should algorithms be explained without the use of a model checker?”

Indeed, why not? It will be a good day for the construction of reliable software systems when our students routinely simulate and analyze concurrent algorithms using model checkers. For the moment, let's share our teaching experiences following the example set by Frits and his coworker, and let's make model checking and logic permeate our undergraduate education as much as possible.

Monday, January 14, 2008

Author Contributions, Redux

Some time ago, I pointed out an article with a very detailed account of the authors' contributions. Here is another similar case I have just seen, via Galileo. (Look at the acknowledgements.) This makes me wonder whether it is standard practice in the medical literature to list what each of the authors contributed to a paper. Is it?

Friday, January 11, 2008

Knuth and Me (Guest Post by Sergey Kitaev)

Guest post by Sergey Kitaev, a very good colleague of mine from the combinatorics group at Reykjavík University.

To show the significance of Donald Knuth in my life, it would probably be enough to point out that in 1969 he introduced the area of “permutation patterns”, which is the field of my main research interest in combinatorics and one I deal with almost every day. However, while thinking on the subject, what comes to mind is my personal communication with Knuth at the Mittag-Leffler Institute at the beginning of 2005. In particular, after attending my talk on partially ordered generalized patterns, Knuth decided to include a result of mine in Volume 4 of “The Art of Computer Programming.” It is remarkable that Knuth has been collecting information for this volume for over 40 years! However, this was not what I was offered Knuth’s famous $1.28 reward for. Unlike most other rewards, this one was not directly related to mathematics – I let Knuth know the middle name of Alexandr Kostochka, which he recorded in his name database both in English and in Russian (Knuth can write in Russian, which he learned in college, and I find this impressive). In any case, stupidly enough, I refused to take the check from Knuth, which, as I realized later on, would have made a nice souvenir; I simply told Knuth that it was a great pleasure for me to be helpful to him …


Addendum. A (permutation) pattern is a permutation of a totally ordered set. An occurrence of a pattern P in a permutation p is a subsequence of letters of p whose relative order is the same as that of the letters in P. As an example, the permutation 461352 has three occurrences of the pattern 321, namely the subsequences 432, 632 and 652.


The initial motivation for studying pattern-avoiding permutations came from connections with container data types in computer science. In 1969 Don Knuth pioneered this work by showing that the stack-sortable permutations are exactly the 231-avoiding permutations.
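For readers who like to experiment, here is a small illustrative Python sketch (my own, not taken from any of the works mentioned above) that checks pattern containment by brute force and tests Knuth's characterization exhaustively on permutations of length six.

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """Does perm contain an occurrence of pattern (in the classical sense)?"""
    k = len(pattern)
    for positions in combinations(range(len(perm)), k):
        word = [perm[i] for i in positions]
        # the relative order of the chosen letters must match that of the pattern
        if all((word[i] < word[j]) == (pattern[i] < pattern[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def stack_sortable(perm):
    """One greedy pass of stack sorting; the permutation is stack-sortable
    exactly when the resulting output sequence is increasing."""
    stack, output = [], []
    for x in perm:
        while stack and stack[-1] < x:
            output.append(stack.pop())
        stack.append(x)
    while stack:
        output.append(stack.pop())
    return output == sorted(output)

# the example from the addendum: 461352 contains the pattern 321
print(contains((4, 6, 1, 3, 5, 2), (3, 2, 1)))          # True

# Knuth's theorem, checked exhaustively for length 6:
# the stack-sortable permutations are exactly the 231-avoiding ones
assert all(stack_sortable(p) == (not contains(p, (2, 3, 1)))
           for p in permutations(range(1, 7)))
```

The brute-force containment test is exponential in the length of the pattern, which is perfectly fine for toy experiments; deciding pattern containment is NP-complete in general.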

A Free Journal-Ranking Tool

The latest issue of Nature features a news item reporting on a freely-available tool that can be used to generate citation statistics for papers, journals and countries. The SCImago Journal & Country Rank is a portal that includes journal and country scientific indicators developed from the information contained in the Scopus® database. This platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago, a Spanish data-mining and visualization group. This indicator is based on Google PageRank. This tool is a competitor to Thomson's Web of Science, and covers more journals (15,000 as opposed to 9,000) and 20-45% more records than the Web of Science.

The availability of this tool, as well as of Google Scholar of course, puts Thomson under some pressure. I think that this is welcome pressure. To see why, you might wish to read this editorial. Basically, the "impact factor" is one of the Gods of modern-day academia, together with "leadership" and a few other criteria not necessarily related to scholarship. It has "a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and salary bonuses." However, as claimed in the editorial, "members of the community seem to have little understanding of how impact factors are determined, and, to our knowledge, no one has independently audited the underlying data to validate their reliability." This is obviously undesirable.

I think that, for better or for worse, impact-factor-based evaluation of our research output is here to stay. However, when making decisions based on impact factor, citations and what not, I hope that deans, employers, funding agencies and rectors will consult several different sources and compare the results that they get. Moreover, I do hope that good, old-fashioned evaluation of the quality of one's work will not disappear altogether to be replaced by purely quantitative indicators.

For the moment, let's play with our new toy. In case you are interested, here are the rankings of countries in computer science: all subjects, computational theory and mathematics, TCS (but as a subcategory of mathematics), logic (as a subcategory of mathematics) and mathematics as a whole.
Draw your own conclusions.

Thursday, January 10, 2008

The Theoretic Centre of Computer Science

The latest issue of SIGACT News features a very entertaining piece entitled The Theoretic Center of Computer Science by Michael Kuhn and Roger Wattenhofer. This article is a "printed version of a frivolous PODC 2007 business meeting talk, held by the second author". It speculates on the central conferences and researchers in computer science, with emphasis on theory, and makes for excellent after-lunch reading. I recommend it to the readers of this blog.

In this post, I'd like to offer a couple of comments on the list of most central authors in computer science. First of all, how do Michael Kuhn and Roger Wattenhofer determine how central an author is? Here is the relevant excerpt from the paper.

Analogously to the construction of the Erdos number, we base our method on the co-author-graph. We then create the induced subgraph for each region of interest (PODC, STOC/FOCS/SODA, and computer science). For PODC, for example, this graph would only contain authors that have at least one PODC paper, and so on.

Other than in the construction of the Erdos number, we do not rely on shortest paths, but rather on the PageRank idea: We start several short random walks at different nodes of the graph, and count how often each author gets visited. This idea is then extended to time dependent centrality, by starting the random walks only at authors that have published in the last five years.
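As an aside, the random-walk idea described in this excerpt is easy to prototype. The sketch below (my own illustration in Python, on a made-up toy graph rather than the DBLP data, with the number and length of the walks chosen arbitrarily) estimates centrality by starting short random walks at every node and counting how often each node is visited.

```python
import random
from collections import Counter

def random_walk_centrality(graph, walks_per_node=100, walk_length=5, seed=0):
    """Estimate the centrality of each node of an undirected co-author graph
    by counting visits of short random walks started at every node."""
    rng = random.Random(seed)
    visits = Counter()
    for start in graph:
        for _ in range(walks_per_node):
            node = start
            for _ in range(walk_length):
                visits[node] += 1
                neighbours = graph[node]
                if not neighbours:
                    break
                node = rng.choice(neighbours)
    total = sum(visits.values())
    return {v: visits[v] / total for v in graph}

# a toy co-author graph with hypothetical authors A-F (adjacency lists)
toy = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C", "E"],
    "E": ["D", "F"],
    "F": ["E"],
}
for author, score in sorted(random_walk_centrality(toy).items(),
                            key=lambda kv: -kv[1]):
    print(f"{author}: {score:.3f}")
```

Nothing deep here, of course: on the real DBLP data the interest lies in the scale of the graph and in the choice of the induced subgraph, not in these few lines of code.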

What are the results of this approach? Table 1 on page 62 of the paper gives the most central authors for computer science as a whole. Alberto Sangiovanni-Vincentelli (another Italian expat) tops the all-time list, and Noga Alon is the runner-up. Without doubt, the list contains only major players, and concurrency theorists will be happy to see Moshe Vardi at #7 on the list of most "central" authors in computer science. Some analysis of this table is provided in the paper. Here are a couple of quick thoughts and questions from yours truly.
  • Computer scientists working in the area of databases feature prominently in the all-time most central authors in CS. Indeed, a look at the list of researchers with the largest number of entries in DBLP, the data set used by the authors, shows that some of the most prolific authors in CS work in that area. Could it be that people publish more and have more coauthors in the field of databases, broadly construed, than in other areas of CS?
  • The list of most central authors does not include any Turing award winner, as far as I can tell at first sight. Does this mean that Turing award winners are not central? Of course not! To my mind, this just means that an analysis of the collaboration graph favours prolific authors with lots of coauthors. Consider, by way of example, two giants of CS research like Tony Hoare and Robin Milner, who are not even in the list of top 1000 CS authors. (Neither are Stephen Cook, Don Knuth, Gordon Plotkin or Leslie Valiant to name but a few giants, by the way :-)) By modern standards, their "number of papers" is not outstanding. However, their ideas and writings have had, and still have, enormous impact on computer science research. All of what I have done myself, for instance, is built on their original work, which has kept many computer scientists busy for about thirty years. (Of course, Robin and Tony are not responsible for the noise I have generated myself :-)) These tables are a fun read, and they do tell us something worthwhile about the players in our subject. However, as the authors point out themselves, "Without doubt the future will teach our evaluations a lesson, ultimately revealing in which direction computer science evolves, and maybe even discover the most influential computer scientist. After all, research is not about how many papers we write, or how many citations they get, but rather, what the best contributions are."
  • I think that the right-hand side of Table 1 is strongly influenced by the phenomenon of name ambiguity. Consider, for instance, Wei Wang, who is ranked as #2. When I saw that Wei Wang has 86 DBLP entries for 2006, I asked myself the question: "Who is Wei Wang"? This Google search returns over 1.8 million entries! It seems to me that Wei Wang is a (very large) disciple of Bourbaki or Lothaire :-)
  • Table 2 on page 62 is a veritable who's who in the FOCS/STOC/SODA branch of TCS. It has a truly amazing number of Israeli computer scientists (six out of ten on the "all-time" list, and four out of ten on the "last-five-years" list, I believe).
Enjoy!

Tuesday, January 08, 2008

AMS Prizes 2008

Via the Geomblog, I see that the list of AMS prizes for 2008 is out.

First of all, let me join the chorus of congratulations for Shlomo Hoory, Nati Linial and Avi Wigderson, who have been awarded the Levi L. Conant Prize for the best expository article in the Notices or the Bulletin of the AMS in the last 5 years. They received this prize, which is a great recognition for TCS research within the mathematical community, for their article Expander graphs and their applications in the Bulletin of the AMS.

As Nati Linial says in his response to the prize:

I believe that the full potential impact of combinatorics on the rest of mathematics is only starting to reveal itself and the study of expander graphs can give us some idea of the true power of these connections.

Combinatorial research is also honoured with the Leroy P. Steele Prize for Seminal Contribution to Research going to Endre Szemeredi for the paper On sets of integers containing no k elements in arithmetic progression, Acta Arithmetica XXVII (1975), 199–245. I love these words in Szemeredi's response:

This award could not have occurred were it not for the fundamental work of other mathematicians who developed the field of additive combinatorics and established its relations with many other areas. Without them my theorem is only a fairly strong result, but no “seminal contribution to research”.

Calling his theorem a "fairly strong result" is really faint praise, but I like the fact that Szemeredi points out that a result becomes a seminal contribution to research when it is used by other researchers to obtain fruits that were considered beforehand too high on the tree of knowledge to be picked.

The prize booklet makes for some interesting reading. Wearing my Italian expatriate's glasses, I note in particular the two awards to Italian mathematicians (Alberto Bressan and Enrico Bombieri), both of whom work in the US.

I look forward to seeing who will be the recipient of the EATCS award for 2008.

Monday, January 07, 2008

The Dangers of Blogging

In this entertaining TED talk, Yossi Vardi issues a word of warning for the male bloggers out there, and addresses the "local warming" problem.

I know that this post is somewhat different in nature from my typical ones, but it's the first day of term and a little after-lunch entertainment was called for :-)

Enjoy!

Friday, January 04, 2008

Science Magazine's Breakthroughs of the Year 2007

Anders Claesson just alerted me to the fact that Solving Checkers has been listed by Science magazine in tenth position in the list of breakthroughs of the year 2007. See here.

My colleague Yngvi Björnsson from the School of Computer Science at Reykjavík University was a member of the team behind this breakthrough. Congratulations to Yngvi and his colleagues at Alberta.

Sunday, December 30, 2007

A Good Example from Canada

I recently became aware of the Canada Research Chairs programme. That programme has been running since the year 2000, and aims at establishing 2000 research professorships—the so-called Canada Research Chairs—in universities across Canada by 2008. The Canada Research Chairs programme invests $300 million a year to attract and retain some of the world's most accomplished and promising minds.

I encourage the readers of this blog to have a look at the web site for the programme. There is a lot of interesting material there, and I cannot help but think that many countries would be well served by setting up a similar programme to attract the best possible scientists in all disciplines. Now, this is something well worth lobbying for in the coming year, isn't it?

If you do not have time to look at the web site I linked to above, here is my executive summary of the programme.

  • Each eligible degree-granting institution in Canada receives an allocation of Chairs. For each Chair, a university nominates a researcher whose work complements its strategic research plan and who meets the program's high standards.

    Three members of a college of reviewers, composed of experts from around the world, assess each nomination and recommend whether to support it.

  • Universities are allocated Chairs in proportion to the amount of research grant funding they have received from the three federal granting agencies (NSERC, CIHR, and SSHRC) in the three years prior to the year of the allocation.

  • There are two types of Canada Research Chair:

    Tier 1 Chairs, tenable for seven years and renewable, are for outstanding researchers acknowledged by their peers as world leaders in their fields. For each Tier 1 Chair, the university receives $200,000 annually for seven years.

    Tier 2 Chairs, tenable for five years and renewable once, are for exceptional emerging researchers, acknowledged by their peers as having the potential to lead in their field. For each Tier 2 Chair, the university receives $100,000 annually for five years.

  • As you can see, there is a strong financial incentive to attract people to these endowed chairs!
  • Chairholders are also eligible for infrastructure support from the Canada Foundation for Innovation (CFI) to help acquire state-of-the-art equipment essential to their work.

  • If an institution's performance decreases relative to other institutions to the extent that the next recalculation of Chair allocations results in that institution's allocation being reduced, the Chairs Secretariat will reclaim, as appropriate, one or more of its unoccupied Chairs. Should all of the institution's Chairs be occupied, the secretariat will negotiate with the university on how best to reclaim the lost Chair(s).

Of course, the success of a programme like this one should be measured by the quality of the people who take up the chairs. (Italy has a similar programme already in place. You can read about it in a short article in Nature, with commentaries in the blog posts "The Runaway Brains" and "Brain Drain and Brain Gain".) You can look up the chairholders in all disciplines here. A quick browse through the names of the Canada Research chairholders in Information Technology and Mathematics makes me pretty sure that you'll find outstanding people in your area of interest.

Wouldn't it be great if we could convince our own ministries for education, university and research to set up a Research Chairs programme along the Canadian lines? Let's see what the new year will bring, but I am not holding my breath. I am already doing that while waiting for the results of my pending research grant applications.

I wish a happy and productive 2008 to all readers of this blog.


Thursday, December 20, 2007

Workshop on Women in TCS

The department of CS at Princeton University is hosting a "Women in Theory" student workshop in Princeton on June 14-18, 2008. See http://www.cs.princeton.edu/theory/index.php/Main/WIT08 for more details and list of confirmed speakers. (Via in theory.)

I am happy to see an initiative like this. In fact, I believe that we should have more events that highlight the achievements of women in TCS and that offer prospective students role models that they can look up to.

During my student days in Pisa, I thought that it was very natural for CS classes to be attended by roughly an equal number of men and women. I also followed a good number of classes where lecturers or TAs were female members of staff. Back then, I never thought that computer science was a male-dominated subject, and I had no reason to think so. Only much later did I realize that what I thought was the norm was, in fact, an exception and, by the time I taught a class in Aalborg that was being attended by only one female student out of about 50 registered students, the lack of women in CS was not a surprise to me any more.

It is still my impression that Italy has a fairly substantial number of female (theoretical) computer scientists---at least compared to countries in Northern Europe. People often ask me for the reasons behind this phenomenon, and I am always at a loss to try and explain it.

Do any of you have a good explanation why countries like France and Italy seem to suffer less than others from the lack of women in subjects like CS? Could it be that things are getting worse there too?

Saturday, December 15, 2007

Computer Scientist: 21st Century Renaissance Man

The period from January till May each year is when faculty at the School of Computer Science at Reykjavík University try to make a determined effort to entice students to study our lovely discipline. (I know, we should do so all year round, but somehow our good intentions do not become good deeds on a regular basis by themselves :-()

As part of our 2008 campaign, I have coauthored a short essay entitled Computer Scientist: 21st Century Renaissance Man. There is nothing particularly new in it, but I hope that it is readable and that it carries the message that CS is much more than most laypeople believe it is. Maybe some of you will find it useful for your own PR campaigns. Feel free to use it, if you think it may help.

You can draw some more inspiration from the items in our suggested-reading list. More essays of general interest may be found here. See also the excellent survey collection at Theory Matters.


Friday, December 14, 2007

Positions in Computer Science and Applied Maths at Reykjavík University

Some readers of this blog might be interested in the following job announcement. We are particularly interested in applicants in Computer Security, System Dependability, and related areas within the field of computer science. Moreover, Software Engineering is intended in a broad sense and we welcome applications from people working in, e.g., formal development techniques, model-based software development, and testing and verification.

In order to implement its ambitious strategy in research and teaching, the School of Computer Science at Reykjavik University seeks to hire faculty members for new academic positions. The following links point to pages with more detailed information about the vacant positions.

Applied Mathematics: http://hr.is/?PageID=6595
Computer Science: http://hr.is/?PageID=6596
Software Engineering: http://hr.is/?PageID=6608

In all cases, position levels can range from assistant professor to full professor, depending on the qualifications of the applicant. Salary level is negotiable and relocation assistance is offered. The positions are available immediately, but later starting dates can be negotiated.

Informal communication and discussions are encouraged, and interested candidates are welcome to contact the Dean of the School of Computer Science, Dr. Ari K. Jónsson (ari@ru.is), for further information.


Wednesday, December 12, 2007

Registration for ICALP 2008 is Open

The registration page for ICALP 2008 and affiliated events went live yesterday evening at http://www.ru.is/icalp08/registration.html.

We have done our best to keep the registration fees as competitive as we possibly could. The prices on the registration form are in ISK, but, by way of example, the regular early fee for a non-ICALP participant to a one-day workshop is around 77 euros, which drops to roughly 56 euros for somebody who also registers for ICALP. The early registration fee for ICALP is a little below 380 euros (including the excursion).

If you know that you will attend ICALP 2008, as you should :-), I strongly encourage you to book your flights and accommodation early. July is prime holiday time in Iceland, and you are more likely to get a good deal on your flights if you book as early as you possibly can.

Let me end, in the style of Numb3rs, by noting that ICALP 2008 is

13 workshops
8 days
5 invited speakers
3 tracks
2 prize awards
1 conference.

Thanks to my co-organizer Magnús M. Halldórsson for pointing out this "ICALP for fun" countdown and that this is going to be a Fibonacci-like ICALP!

Ranking of Excellent European Graduate Programmes in Natural Sciences

This report of the Centre for Higher Education Development (CHE), which was released about a week ago, may be of interest to readers of this blog. The CHE is a think tank for higher education. Based on international comparisons, they develop models for the modernization of higher education systems and institutions.

Their report develops a Ranking of Excellent European Graduate Programmes in Natural Sciences (viz. biology, chemistry, mathematics and physics), which is intended as an orientation guide for undergraduates, helping them find their way around European Higher Education while at the same time helping them to choose a suitable university for their graduate studies: Master’s and PhD.

At first sight, the report looks very well done, and for an Italian expatriate like me it is good to see that Italian institutions are doing rather well. I'd like to see a similar analysis carried out for programmes in computer science.

Let me try and provide a few remarks on the CHE report. In passing, I'll also offer some personal conclusions related to what the findings of this report may mean for a country like Iceland, which has great ambitions despite its tiny population.

Let me start by focusing on one message from the report that I find most important here (quoted from page 14 of the report). While reading the following text, bear in mind that "subject areas" refers to biology, chemistry, mathematics and physics, and not to a huge array of different disciplines.

"Another interesting finding is the fact that most institutions (33) are selected in only one subject area, 15 in two subject areas, 4 in three and also only 4 in all subject areas. If, even in the relatively closely connected academic fields of the natural sciences and mathematics, only 14% of the very top institutions in one geographic region are featuring three or all four subject areas, this can indeed be taken as an argument against institutionwide rankings."

Even though I enjoy reading the results of university-wide rankings, I believe that what should concern students choosing where to pursue their studies and funding agencies determining where to invest their research funds in specific disciplines is not the overall ranking of a university, but rather its excellence in the specific topic of interest. For this reason, I agree with the finding that subject-specific rankings are much more informative than institution-wide ones.

Of course, in general, a university that scores highly institution-wide won't have any very weak department. However, there may be, and indeed there are, universities that have peaks of true excellence in specific areas, even though they may not be world-beaters in many areas. If I were a prospective PhD student, I would prefer going to study in a department which is known to be top-class in the specific area of my interest rather than going to university X just because it has a globally good reputation. Quoting from the report:

"Prospective doctoral students are possibly less interested in the general performance of a faculty or department than in a specific research group. They usually have very clear ideas about the specialised topic on which they are focusing. Thus, it might be of some value for a student searching for a biology doctoral programme specialising in insects to know that the faculty at University A is excellent in its research output in this domain. However, it might be much more interesting for this individual to
learn that he could delve into honeybee studies in Würzburg's bee group. Or, a student in astrophysics might be attracted less by the overall performance of the Physics Department at the University of Copenhagen than by its research group focusing on dark matter and cosmology."

So my first conclusion is:

Conclusion 1. Our business as academic institutions is reputation. It is better to be known in a few selected areas than to be unknown in many. Icelandic universities should prioritize and place more resources in those areas where they can maintain or build a strong reputation internationally. The competition is growing stronger by the day; nobody stands still and we will need many more resources in the future just to maintain our present standing where we have one.

The second point that I'd like to pick out from the report is the minimum entry requirement for even entering the evaluation. The 3000 ISI publications from an institution over the evaluation period are indeed a very tall order for any Icelandic institution at this moment in time. Sometimes we pat ourselves on the back and tell each other how well we are doing, and for very good reasons. However, we should never lose sight of the "big picture". A very good practice for any scientist is to remain humble, to know that there is a lot one does not know, and to keep in mind that there are very many strong scientists and departments out there.

As Socrates famously put it, "A wise man is one who knows he does not know." In this setting, I would translate this statement into something like this:

Conclusion 2. A wise rector/dean/head of department is one who knows that her university/faculty/department will need to improve its research quality and output considerably just to maintain its present status, no matter what its present strength is. The only way to do so is to hire the best possible researchers, to give them the best possible working environment and the freedom to follow their research interests. Research output will need to be considered when distributing research money to ensure that the most funding goes where the highest "interests" (read "quality publications in internationally recognized outlets") will be generated.

Two of the indicators considered by the CHE Ranking are
  • the percentage of international and female staff within the group of staff with a doctorate and
  • the percentage of female and international doctoral and master's students.

I am afraid that, despite our snow queens, we score badly on both of these fronts. Ranking measurements aside, it is of paramount importance for science in Iceland to nurture female talent and to seek actively to hire the best available female applicants. Mind you, I am against hiring female applicants just because of their gender. What I am saying is that our departments should have search committees who actively nurture connections with the best possible female applicants for positions and that outstanding female applicants should be given precedence when they are at least as good as the competition. Here the ministry could also chip in with some financial incentives to universities to hire outstanding female applicants. (I won't turn this into a conclusion, though.)

One may wonder whether some research groups from Icelandic universities can make it into the big league. The answer to this natural question that emerges from the CHE report is, I believe, positive. Look at the bottom of page 12 in the report. There you will read:

"Looking at table 2, the United Kingdom not only attains the largest number of gold medals but also the largest number of medals in total within the excellence group. Switzerland, with only three universities in this group, is in third place concerning gold medals and holds the largest relative percentage of gold medals: 16 out of 22 medals in the whole."

This is an outstanding, and not unexpected, performance of Swiss institutions. In fact, ETH Zurich is one of only four universities with gold medals in all of the subjects in the excellence group (the others being Imperial College, the University of Cambridge and the University of Utrecht)! How can Switzerland achieve this outstanding level of academic achievement? Rather than trying to answer this question myself, I will rely on higher authority and freely quote a few excerpts from an interview with the Italian mathematician Alfio Quarteroni (professor at the École Polytechnique Fédérale de Lausanne (EPFL) and at the Politecnico di Milano) published in this book.

  • Switzerland has only two federal universities (ETH Zurich and EPFL).
  • These are two truly international institutions. To wit, about 70% of their professors are foreigners, and so are about 65% of the PhD students and about 33% of their undergraduates.
  • Each of the few and carefully chosen full professors in those institutions has the financial resources to build her own research team. For instance, Quarteroni's team has about 20 members. (As a curiosity, they helped build Alinghi, the boat that has won the last two editions of the America's Cup.)
  • Quarteroni roughly says: "EPFL offers an outstanding environment and quality of working conditions that I have not found elsewhere. I have worked at the University of Minnesota, at Paris VI and, for shorter periods, in about 50 universities and research centres throughout the world, including NASA at Langley; well, on the basis of my personal experience, Lausanne is the place where I have been able to realize my goals in the simplest, fastest and most efficient way."

Conclusion 3. I let you draw your own conclusions as to what we need to do here in Iceland in order to approach the lofty heights that those Swiss institutions as well as several universities in Finland, The Netherlands, and Sweden have managed to attain. The above opinions of Quarteroni's raise many questions which I hope Icelandic university administrators will be willing to answer.


RU in the Guardian Education

Colin Stirling pointed out this article in the Guardian Education to me yesterday. This article is going to generate a little publicity for my current institution (Reykjavík University) and we can certainly do with that! It is also certainly true that Icelandic (academic) institutions feature more women in leading positions than elsewhere, and that their wages are comparable to those of equally qualified male colleagues. However, I am not so sure that Iceland is a particularly good example of a country that attracts good numbers of female students and members of staff in science and technology. We still have far too few women enrolling in computer science degrees, for instance, and my department employs one female professor and two female assistant professors. (Yes, we now have English web pages!) We will need to work very hard to try and change this situation, and this is one of our tasks for the future as far as recruiting is concerned both at student and staff level.

But enough grumping, let's enjoy our five minutes of fame in the British media, even though, as my rector Svafa Groenfeldt wrote to me,

"The interview was ok but as always the journalists take a bit of an artistic license when they quote what we said :-)"

Tuesday, December 11, 2007

A Cancellation Theorem for BCCSP

Wan Fokkink, Anna Ingolfsdottir and I have recently completed the paper A Cancellation Theorem for BCCSP, which is now available from the web page where Anna and I collect our papers.

The aim of this paper is to prove a cancellation result for the language BCCSP modulo all the classic semantics in van Glabbeek's linear time-branching time spectrum. The statement of this result is as follows.

Theorem. Let t and u be BCCSP terms that do not contain the variable x as a summand. Let <= be a preorder in van Glabbeek's spectrum. If t+x <= u+x then t <= u.

Apart from having some intrinsic interest, this cancellation result plays a crucial role in the study of the cover equations, in the sense of Fokkink and Nain, that characterize the studied semantics.

Fokkink and Nain proved the instance of the above theorem for failures semantics, with the aim of obtaining an ω-completeness result for this semantics; their proof is rather delicate. To the best of our knowledge, failures semantics has so far been the only semantics in the spectrum for which the above result has been published. In our paper, we provide a proof of the above-mentioned property for all of the other semantics in the linear time-branching time spectrum. Despite the naturalness of the statement, which appears obvious, these proofs are far from trivial (at least for yours truly), and quite technical. I myself was really surprised by the amount of work we needed to do to prove such an "obvious" statement. (In case you wonder, the "obvious" statement is false in general. Consider, for instance, an algebra with carrier {0,1} and where the sum of any two elements is 1. Then the inequation y+x <= z+x holds, but y <= z obviously does not.)
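For what it's worth, that counterexample is easy to check mechanically. Here is a tiny Python sketch (mine, not from the paper) that enumerates all valuations over the two-element algebra; I interpret <= as the usual order on {0, 1}, which is one concrete way of making the point precise.

```python
from itertools import product

carrier = (0, 1)

def plus(a, b):
    # in this algebra, the sum of any two elements is 1
    return 1

def leq(a, b):
    # read <= as the usual order on {0, 1}
    return a <= b

# The inequation  y + x <= z + x  is valid: it holds under every valuation,
# since both sides always evaluate to 1.
assert all(leq(plus(y, x), plus(z, x))
           for x, y, z in product(carrier, repeat=3))

# But cancellation fails:  y <= z  is not valid (take y = 1 and z = 0).
assert not all(leq(y, z) for y, z in product(carrier, repeat=2))

print("y+x <= z+x is valid, yet y <= z is not: cancellation fails in this algebra")
```

The content of the paper is, of course, that no such pathology can arise for the preorders in van Glabbeek's spectrum, and establishing that takes genuine work.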

I hope that some of the readers of this blog will find the paper worth reading. The techniques used in the proof of the cancellation theorem may also have some independent interest.

Sunday, December 09, 2007

Accepted Papers at FOSSACS 2008

The list of accepted papers for FOSSACS 2008 is out. I was in the PC for the conference, and I have to say that this year the number of very good submissions was substantially higher than the number of slots. (And this without counting the papers with which I had to declare a conflict of interests.)

Looking at the accepted papers, it is striking how many of them have French authors. In fact, France was the country with the largest number of authors of submitted papers this year (67 to be precise), and the acceptance ratio for papers co-authored by French authors was high. By way of comparison, the country that had the second largest number of authors was the US with 29. Germany, Italy and the UK were roughly on a par with the US.

French TCS is hot, judging by these figures. LSV alone contributes at least five papers to FOSSACS 2008. This is quite a way of celebrating their tenth anniversary.

Thursday, December 06, 2007

ACM Fellows - 2007

I just had a look at the list of ACM Fellows for 2007. It has been a good year for TCS, and even concurrency theory and computer-aided verification are well represented in the list. Congratulations to Rajeev Alur, Lance Fortnow, Georg Gottlob, Rajeev Motwani, Amir Pnueli and all the other new fellows.

On the subject of awards, yesterday I saw the call for nominations for the first CAV award. I have at least one person that I'd really like to nominate, but since I would prefer that person to be in cool Reykjavík for ICALP 2008 instead of sultry Princeton for CAV 2008, I guess I'll have to wait for one year before sending in my nomination :-) Who would you nominate for such an award? Note that
The cited contribution(s) must have been made not more recently than five years ago and not over twenty years ago. In addition, the contribution(s) should not yet have received recognition via a major award, such as the ACM Turing or Kanellakis Awards.
Unfortunately, the CAV organizers decided to fix the dates for their conference so that they coincide with the dates for ICALP 2008, which were known long before theirs. Time will tell whether this was an inspired decision on their part.

Wednesday, December 05, 2007

ICALP 2008: Second Call for Papers

The second call for papers for ICALP 2008 has just been posted to several mailing lists. The chances are that your mailboxes will be flooded by a good number of copies of the call for papers, but, in case this does not happen, you have been notified via this post :-) (In fact, I was quite amazed to discover that this modest blog is featured in the Theory of Computing Blog Aggregator, and so the odd post of mine might even be read by a CS theorist or two.)

In fact, submissions to ICALP 2008 are already open! To submit, please follow this link. Registration for the conference and affiliated workshops will be open very soon, allowing prospective participants to book flights and accommodation at a decent price. Iceland is a hot tourist destination in July.

Tuesday, December 04, 2007

ICALP 2008: Co-located Events

The list of events that will be co-located with ICALP 2008 (6-13 July, Reykjavík, Iceland) is now available. We received a record number of workshop proposals, and the number of affiliated events testifies to the interest in visiting the land that hosts me right now.

A second call for papers for ICALP 2008 will be posted very soon. Sharpen your pencils, and submit your best paper to the conference!

Sunday, December 02, 2007

10 Years of Verification in Cachan: Part I

On November 26 and 27, Anna and I were in Paris for 10 Years of Verification in Cachan, a two-day workshop organized by the Laboratoire Spécification & Vérification (LSV) of the ENS Cachan to celebrate its 10th anniversary. The workshop was centered around two special award ceremonies honouring two very good colleagues and friends of ours: Patricia Bouyer received CNRS's 2007 Bronze Medal for Computer Science (Monday 26th November), and Kim G. Larsen became Doctor Honoris Causa at ENS Cachan (Tuesday 27th November).

Despite being very busy and getting nowhere fast, we felt that we really ought to make the trip to Paris for this event, to which the organizers had kindly invited us to contribute talks. We were visiting professors at LSV in May 1998, when that laboratory was in its infancy, and we have very fond memories, both scientifically and socially, from that stay. In fact, our connection with LSV started informally long before that time, when François Laroussinie and I shared an office in Aalborg for about a year starting in the autumn of 1994 (when BRICS started its activities).

Today, there is little doubt that LSV is one of the hotbeds of TCS research in the Paris area, where there is really no shortage of talent and of extremely strong CS departments covering the whole gamut of TCS research. According to Philippe Schnoebelen, the present director of LSV, the centre now has about 40 members, and since its inception it has graduated 33 PhD students, seven of whom have been hired by CNRS. This is just one of the many indications of the success achieved by LSV over the first ten years of its existence. To my mind, another definite sign of impact is the number of former members of the laboratory who have taken up high-profile academic positions elsewhere. Here is what I could find on the LSV web site.
The workshop was a really enjoyable event. We had a great time, listened to some excellent talks, met a lot of colleagues and friends, and tasted some very good food. The organization was simply superb, thanks to the sterling efforts of Philippe Schnoebelen, Thomas Chatain, and Stéphanie Delaune.

Apart from my presentation, which ended the event, the two-day workshop featured talks by André Arnold, Pierre Wolper, Kim G. Larsen, Claude Kirchner, Marta Kwiatkowska, Michel Bidoit, Patricia Bouyer, Anna Ingólfsdóttir, Wolfgang Thomas, and Colin Stirling. This was really an embarrassment of riches, and I learned a lot from all of the talks---even from the two delivered in French :-) The quality of the presentations by these colleagues was invariably high, and the talks offered very accessible introductions to several areas of research covered by the members of LSV.

It would take way too long to report on all of the presentations. In this post and in subsequent ones, I'll therefore limit myself to recalling a few opinions and trivia that I heard at the workshop.

In his talk 25 years of automata and verification, Pierre Wolper went on record as saying that "Complexity, as traditionally measured, is not very relevant in verification." To wit, in his talk he pointed out that verification algorithms with high worst-case complexity turn out to perform well in practice and are widely used. Prime examples are automata-theoretic algorithms for LTL model checking as well as those implemented in the tool MONA, which implements decision procedures for the Weak Second-order Theory of One or Two successors (aka WS1S/WS2S)---a theory that is not elementary-recursive, as shown by Albert Meyer in this seminal paper. On the other hand, he presented an example from his own work yielding efficient automata-theoretic algorithms for CTL model checking, which are not used in practice. His conclusion was that one cannot trust complexity results. (I myself have mixed feelings about this opinion of Wolper's. Maybe I'll devote a post to this topic when life is less hectic. In the meantime, I'd love to hear your opinion.)

Pierre Wolper also said that his LICS 1986 paper An Automata-Theoretic Approach to Automatic Program Verification (Preliminary Report) with Moshe Vardi, which eventually won the LICS Test-of-Time Award and formed the basis for their paper that won the Gödel prize in 2000, had been rejected by two or three conferences before being accepted at LICS 1986! He also recalled the following sequence of reactions by Gerald Holzmann to their automata-theoretic algorithms for LTL model checking:
  1. "It must be wrong!"
  2. "It is impractical!"
  3. "It does not fit into SPIN!"
Eventually, the algorithm was implemented in SPIN, showing the power of elegant ideas in practice.

At the end of his talk, Pierre called for renewed efforts in implementing automata. I keep telling my students that automata are the most basic computational model in computer science, and I cannot help but share Pierre's call to arms.

I hope to report on a few other talks from the workshop when I have managed to catch up with a few of the items on my to-do list.

Tuesday, November 20, 2007

Acknowledging Author Contributions to Papers

Yesterday, I had a quick look at a news item published on the Italian on-line science magazine Galileo. (Warning: It's in Italian.) The news item describes work on computational biology carried out by Niko Beerenwinkel (ETH Zürich) and co-workers. The authors develop a new mathematical model for the somatic evolution of colorectal cancers and use it to derive an analytical approximation for the expected waiting time to the cancer phenotype.

This all sounds very interesting, but what caught my eye as a casual observer was the following information on page 19 of the resulting paper.

Author Contributions
K.W.K., V.E.V., and B.V. performed experiments; N.B., T.A., D.D., A.T., and M.A.N. developed the mathematical model; and N.B., T.A., D.D., B.V., and M.A.N. wrote the paper.

Have any of you ever written anything similar on any of your papers? I wonder what similar piece of text might accompany a TCS paper. Will we ever see a TCS article stating anything like the following piece of text?

X and Y proved Lemmas 1-4 and Theorem 2, Z proved Theorem 1, contributed ideas to the proof of Lemma 3 and found an error in the original proof of Theorem 1. Author W wrote the paper and was the main driving force in previous joint work that led to the development of this paper.

And what about papers that also report on implementations? I can't imagine how detailed the description of author contributions would have to be then.

I feel relieved to live in a research environment where we collaborate and mainly use alphabetical order amongst the authors---and not just because my surname is Aceto :-) I personally like to work by closely following the Hardy-Littlewood Rules for collaboration, whose spirit is very much at odds with the identification of author contributions.

Friday, November 16, 2007

A Wonderful Read On Gödel's Theorem

I recently finished reading the book Gödel's Theorem: An Incomplete Guide to its Use and Abuse by Torkel Franzén. This is a book I would recommend to all of the readers of this blog. (Not "This is a book I would recommend to all of the readers of this book", as I had written originally. Thanks to Luca Trevisan for spotting this embarrassing statement of mine. Haste generates monsters, alas :-() It is accessible yet precise, and it sets the record straight on what Gödel's theorem implies and what it does not imply. While reading it I quickly learned that, despite my previous reading of texts on Gödel's theorem, I suffered from some of the misunderstandings that Franzén discusses in his lovely book.

To quote from the overview chapter in the book:

The aim of the present addition to the literature on Gödel’s theorem is to set out the content, scope and limits of the incompleteness theorem in such a way as to allow a reader with no knowledge of formal logic to form a sober and soundly based opinion of these various arguments and reflections invoking the theorem. To this end, a number of commonly occurring such arguments and reflections will be presented, in an attempt to counteract common misconceptions and clarify the issues. The formulations of these arguments and reflections, when not attributed to any specific author, are adaptations of statements found on the net, representative of many such reflections.

I think that the book succeeds beautifully in doing just that, and will repay re-reading.

Torkel Franzén also maintained the page Gödel on the net, from which the book eventually originated.

You might have noticed the use of the past tense in the previous sentence. Unfortunately, I just discovered that the author passed away in 2006. He left some books that, like this one, have received praise from many sources. I certainly enjoyed reading this book, and I am feeling tempted to open my purse and purchase a copy of this volume for my personal library.

Addendum (17 November 2007): I encourage any reader who wants to catch a glimpse of what this book is about, and of the writing style of the author, to have a look at this short article Torkel Franzén wrote for the Notices of the AMS (volume 53, number 4). Here is an example of one of the many things I personally learned from the book.

Let us say that a formal system has an arithmetical component if it is possible to interpret some of its statements as statements about the natural numbers, in such a way that the system proves some basic principles of arithmetic having to do with summation and multiplication. Given this, we can produce (using Barkley Rosser’s strengthening of Gödel’s theorem in conjunction with the proof of the Matiyasevich-Davis-Robinson-Putnam theorem about the representability of recursively enumerable sets by Diophantine equations) a particular statement of the form “The Diophantine equation p(x1, ..., xn) = 0 has no solution” which is undecidable in the theory, provided it is consistent.

Franzén remarks that

... it is mathematically a very striking fact that any sufficiently strong consistent formal system is incomplete with respect to this class of statements ...

However, no famous arithmetical conjecture has been shown to be undecidable in ZFC. ... From a logician's point of view, it would be immensely interesting if some famous arithmetical conjecture turned out to be undecidable in ZFC, but it seems too much to hope for.

Wikipedia has a short list of mathematical statements that are independent of ZFC on this page. You may find it interesting.

Saharon Shelah's result to the effect that the Whitehead problem is undecidable in ZFC is discussed in this paper by Eklof. Shelah has also shown that the problem is independent of ZFC+CH.

There is also an interesting post on a combinatorial problem that is independent of ZFC on the complexity blog. That post also includes several pointers to pieces that are well worth reading.

Thursday, November 15, 2007

Rejecta Mathematica

Silvio Capobianco pointed out to me a new mathematical journal, whose existence he discovered by looking at this opinion of Doron Zeilberger's. The journal is called Rejecta Mathematica, and its description reads

Rejecta Mathematica is a new, open access, online journal that publishes only papers that have been rejected from peer-reviewed journals (or conferences with comparable review standards) in the mathematical sciences.
A unique aspect of Rejecta Mathematica is that each paper will be published together with an open letter from the authors discussing the paper's original review process, disclosing any known flaws in the paper and stating the case for the paper's value to the community.

Is this a joke? This and other natural questions are discussed here. In case you wonder who the editors of this journal are, look here.

I am looking forward to seeing what papers will appear in the first volume of the journal :-)

Tuesday, November 13, 2007

My Web Page Has Moved

My directories in Aalborg are about to be closed down, so after three years I have finally needed to move my web pages to Reykjavík University. I have been loath to do so until now because Aalborg was ensuring adequate visibility on the web for my pages. My web page used to come out as the first link in a Google search for "Luca Aceto" and was one of the top hits in a Google search for "Aceto" (where I am competing with the vinegar producers :-)).

My fears have now materialized. This search does not find my RU web page on the first page, which is not good. I'll have to explain to our web administrator that if Google can't find my home page, I basically don't exist.

Luckily, however, this blog is still visible and from there one can find my home page in a rather roundabout way :-)

Monday, November 12, 2007

The Origins of Bisimulation and Coinduction

I recently finished reading the paper On the Origins of Bisimulation, Coinduction, and Fixed Points by Davide Sangiorgi.

I had the pleasure of hearing Davide give a talk on the topic of this survey paper during a workshop I co-organized in August 2005 in Bertinoro, and I had been waiting for the paper to appear in print ever since. The wait has been amply rewarded by this truly excellent piece of scholarship. I suggest that any reader of this blog grab a copy of this paper and read it. I myself learned something new from it and enjoyed reading it immensely.

In this well-organized paper, Davide takes us on a historical journey tracing back the origins of the ideas of bisimulation and coinduction through the three fields where they emerged independently: Computer Science, Modal Logic and Set Theory. He highlights the role played by some key figures in the development of bisimulation and coinduction, but he does not forget to mention the interesting contributions of lesser-known players. Each section of the paper ends with a discussion of the ideas it presents and speculates on the reasons why some authors came very close to defining bisimulation, but did not take the extra step needed to discover/invent bisimilarity.

The level of scholarship in the paper is outstanding. We need to thank Davide for making the effort to find appropriate sources, to read them so carefully and to present his findings in such a well-organized survey paper.

I trust that the writing of this paper took a lot of work and I hope that our community will recognize the importance of writing articles like this one. Shouldn't journals accept historical papers of this quality? I believe they should!

Thanks again to Davide for making the effort to write this piece. I firmly believe that we should have more articles like this one and hope that this paper will enjoy a wide readership throughout the TCS community as a whole.

Friday, November 09, 2007

BEATCS Is Open Access

Vladimiro Sassone, the editor in chief of the Bulletin of the EATCS, has posted the appended message on several theory-related mailing lists. I hope that my readers will enjoy reading the Bulletin and will feel enticed to contribute to it by sending in survey articles for its columns, technical articles, conference and workshop reports, position papers, puzzles, obituaries etc.

I have always been a staunch supporter of an open-access Bulletin. A professional society like the EATCS must support the TCS research community in all possible ways. Having an open-access publication like the Bulletin is one of the contributions that the EATCS can make. Compare with SIGACT News, whose access is restricted to SIGACT members. Is this the best way to serve the TCS community?

I also believe that an open-access Bulletin of the EATCS will entice more colleagues to contribute to the Bulletin, since their contributions will stand a better chance of being read and appreciated by anyone within the TCS community. This will also increase the quality of the contributions.


Support the EATCS and the open-access Bulletin by becoming a member of the EATCS! Becoming a member is easy, and it only costs €30 for a period of one year. (In fact, it is only €25 if you are also a member of SIGACT.)
To quote from the classic Nike ad: "Just do it".

----------------------------------

Dear Colleagues,

Since 2003 all issues of the Bulletin of the EATCS, the European
Association for Theoretical Computer Science, have been produced
entirely electronically and made available on the web for members only.

The EATCS is now piloting the idea of Open Access for the Bulletin,
in the spirit of best serving its research community. So, until
further notice the volumes from no 79 onwards of the Bulletin of the
EATCS will be available from

http://www.eatcs.org/publications/bulletin.html

With best regards (and apologies for cross-posting)
\vs




Friday, November 02, 2007

Joining Our "User Group"

In case you are using the book Reactive Systems: Modelling, Specification and Verification for one of your courses, I'd love to hear from you! I'll then mention your course on the list of classes that use the book and you'll be informed once we make supplementary material available on the book's website (or via other channels).

I look forward to hearing from you.

Thursday, November 01, 2007

Submissions for ICALP 2008 Now Open

The submission page for ICALP 2008 is up. See here. (The submission site for track C will be available soon.) Be sure to submit your best paper to the conference!

The deadline for workshop proposals passed yesterday, and I can already say that we will be able to offer a very varied and large selection of workshops. Watch this space and be the first to know which workshops will be co-located with the conference :-)

Thursday, October 25, 2007

Does Presenting Your Results Simply Damage Your Early Career Development?

Over the last couple of days, I have been peeking at Nicole Immorlica's enjoyable live report from FOCS 2007. This comment on a post of hers, amongst others, really got me thinking.

There is another factor going on here, which is that younger people often feel the need to "impress" the audience (both those in and outside their area) with how difficult their result is. For better or for worse, this is to some extent essential for grad students hoping to get faculty jobs, and assistant profs looking ahead to their tenure letters.

Once someone gets tenure, they can decide (depending on their personality) whether to keep up this ruse, or whether to aim to present results as simply as possible.

I agree that there is a general tendency amongst researchers at an early stage of their career to show off the hardness of their results, both in their talks and in their papers. (I, for one, would write all of my early papers differently now.) However, I am not at all convinced that this is essential to foster one's early career, as claimed by the commentator.

When it comes to talks, as in other job-related things, I tend to be a follower of Paul Halmos. (See this compilation of his thoughts that recently appeared in the Notices of the AMS.)

...the purpose of a public lecture is to inform, but to do so in a manner that makes it possible for the audience to absorb the information. An attractive presentation with no content is worthless, to be sure, but a lump of indigestible information is worth no more...

Less is more, said the great architect Mies van der Rohe, and if all lecturers remember that adage, all audiences would be both wiser and happier.

Have you ever disliked a lecture because it was too elementary? I am sure that there are people who would answer yes to that question, but not many. Every time I have asked the question, the person who answered said no, and then looked a little surprised at hearing the answer. A public lecture should be simple and elementary; it should not be complicated and technical.

I like to think that this is the message we should pass on to young researchers from the very beginning of their research careers. Building a reputation as a clear speaker, who can make difficult ideas and results accessible to her/his audience, can only help in one's career.

Isn't it going to be a huge bonus to be remembered as a person who gave a highly accessible and entertaining talk at FOCS/ICALP/LICS/SODA/STOC 2nnn? Why don't we tell our young colleagues that a talk at a generalist conference is essentially a public lecture? The result would be better talks at conferences, a happier audience, and an increased reputation for the speakers.

Wednesday, October 24, 2007

Call For Nominations: EATCS Award

The call for nominations for the EATCS Award 2008 has been published (pdf). The EATCS Award committee consists of Catuscia Palamidessi, David Peleg (chair), and Emo Welzl. Nominations should be sent to David Peleg by December 1, 2007. Further details on the award, including the stellar list of previous awardees, are available here.

Be sure to nominate your favourite TCS researcher! The award will be delivered at ICALP'08 in Reykjavík.

Monday, October 22, 2007

Call For Nominations For The Gödel Prize 2008

The 2008 Gödel Prize for outstanding journal articles in the area of theoretical computer science, co-sponsored by EATCS and ACM SIGACT, will be awarded at ICALP 2008, which I am co-organizing in Reykjavík with Magnús M. Halldórsson and Anna Ingolfsdottir. The call for nominations, with deadline 31 January 2008, is now available here.

Make sure that you nominate your favourite paper in TCS for the award! One of the authors of the award-winning paper will attend the conference and will deliver a talk.

Sunday, October 14, 2007

Deadline Approaching: Call for Workshops for ICALP 2008

The deadline for workshop proposals for ICALP 2008 is 31 October 2007. See the call for workshops for details on how to submit a proposal.

Friday, October 12, 2007

NordSec 2007

Over the last couple of days, ICE-TCS and the School of Computer Science at Reykjavík University hosted NordSec 2007, the 12th Nordic Workshop on Secure IT-systems. The workshop featured invited talks by Cedric Fournet (Microsoft Research, Cambridge, UK) and Greg Morrisett (Harvard University, USA). I attended a few talks at the workshop (in between visits to my office to take care of typical daily chores and make sure that my inbox did not overflow), and chaired a session with short presentations.

I found the event interesting and enjoyable. In particular, the two invited talks were excellent. Cedric Fournet presented some joint work with Andrew D. Gordon and Sergio Maffeis reported in the paper A Type Discipline for Authorization in Distributed Systems. In his talk, Cedric addressed the following two key questions.
  1. How can we express good authorization policies?
  2. How can we enforce good authorization policies?
Cedric's tenet in the talk was that logics are good languages for expressing policies, and that type systems can be used to enforce good policies at compile time. He also showed the audience what can be guaranteed at compile time when parts of the system are compromised, and the role that "robust safety" (safety despite compromised principals) plays in reasoning about processes.

Greg Morrisett delivered an inspiring talk on the Cyclone project. Cyclone is a safe dialect of C. It is designed so that pure Cyclone programs are not vulnerable to a wide class of bugs that plague C programs, such as buffer overflows. (Greg said early on in his talk that the legacy of C is, if you allow me to phrase one of his slides as a regular expression, (Buffer overrun)* and that he hates C!) Greg's talk was a great ad for some of the topics I am about to cover in my course on semantics for programming languages, and highlighted how good time-honoured theory in new clothes can help improve the safety of a beast like C.
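
To make the bug class concrete, here is a small C sketch of my own (an illustration, not code from Greg's talk or from Cyclone itself). The unsafe routine is the classic unchecked copy that makes buffer overruns possible; the checked variant carries the buffer size alongside the pointer and refuses out-of-bounds writes, which is roughly the discipline that Cyclone's fat pointers and inserted checks impose automatically.

#include <stdio.h>
#include <string.h>

/* Classic C pitfall: the callee has no idea how big 'dst' really is,
 * so a long input silently writes past the end of the buffer. */
static void unsafe_copy(char *dst, const char *src)
{
    strcpy(dst, src);                  /* no bounds information available */
}

/* The "safe dialect" idea in miniature: carry the size together with the
 * pointer and refuse out-of-bounds writes.  Cyclone arranges this
 * automatically (fat pointers plus checks); here it is done by hand. */
static int checked_copy(char *dst, size_t dst_size, const char *src)
{
    size_t needed = strlen(src) + 1;   /* include the terminating '\0' */
    if (needed > dst_size)
        return -1;                     /* reject instead of overrunning */
    memcpy(dst, src, needed);
    return 0;
}

int main(void)
{
    char buf[8];
    const char *input = "way-too-long-for-eight-bytes";

    unsafe_copy(buf, "ok");            /* happens to fit: safety by luck only */
    /* unsafe_copy(buf, input);           would overrun buf: (Buffer overrun)* */

    if (checked_copy(buf, sizeof buf, input) != 0)
        printf("rejected: %zu bytes do not fit in a %zu-byte buffer\n",
               strlen(input) + 1, sizeof buf);
    return 0;
}

Of course, Cyclone does a great deal more than this (regions, NULL checks, restrictions on pointer arithmetic), but even this toy version shows why carrying bounds with pointers removes an entire class of bugs.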

I think that his talk gave each member of the audience something to take home, and that's one of the main secrets of a successful invited address. I just wish that more of my colleagues had been there to listen.

CONCUR 2009

Breaking news! I am pleased to announce that CONCUR 2009 will be held in Bologna, and will be organized by Mario Bravetti and Gianluigi Zavattaro. This will be the second Italian CONCUR after Pisa 1996.

Congratulations to Bologna, which hosts one of the strongest CS departments in Italy and is a hotbed of concurrency-theoretic research, and to CONCUR for visiting a lovely city.

Monday, October 08, 2007

Prize Time

It is the time of the year when prize announcements are out fast and furious.

The ink has not yet dried on the news announcing that the 2007 Nobel prize for medicine has been awarded to Mario R. Capecchi, Sir Martin J. Evans and Oliver Smithies "for their discoveries of principles for introducing specific gene modifications in mice by the use of embryonic stem cells". (I notice that Mario Capecchi is yet another Italian expatriate who, like Fermi and Segre before him, has become a US citizen.)

In mathematics/CS, László Lovász has received the Bolyai Prize, and Ben Green, Hershel Smith Professor of Mathematics at Cambridge University (UK), will receive the 2007 SASTRA Ramanujan Prize "for his phenomenal contributions to number theory by himself and in collaboration with Terence Tao, that have changed the face of combinatorial additive number theory."

Tomorrow is the turn of the Nobel prize for physics. It'll be interesting to see who receives it. I hope for an award to an Italian in Italy :-)

Sunday, October 07, 2007

Frits Vaandrager's Answers

I have received a contribution from Frits Vaandrager to the discussion arising from the CONCUR workshop. I am very pleased to post it here.
Enjoy!


# What is the role/importance of real-time in modelling? Does industry want dense-time or discrete-time models?

When modelling embedded systems and/or protocols, real-time aspects have to be dealt with very frequently. If one prefers not to use a real-time model checker, one can often encode real-time constraints in finite-state model checkers. For instance, in an STTT paper from 2002, Brinksma, Mader and Fehnker analyzed a PLC control schedule using SPIN, and obtained results that were (at that time) competitive with Uppaal. Their trick was to advance the discrete clock variable not one by one, but directly to the point of the next event.
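
The clock-advancing trick is easy to illustrate outside any model checker. The toy C program below is my own sketch, not the Promela model from the paper: three hypothetical periodic tasks are explored once by ticking the discrete clock one unit at a time, and once by jumping the clock straight to the earliest pending deadline. Both explorations see exactly the same event times within the horizon, but the jumping version takes far fewer steps, which is precisely what makes the encoding competitive.

#include <stdio.h>

#define NTASKS 3

/* Toy periodic tasks: task i fires every period[i] time units. */
static const int period[NTASKS] = { 7, 12, 30 };

/* Naive exploration: advance the clock one unit per step. */
static long ticks_naive(int horizon)
{
    long steps = 0;
    int next[NTASKS];
    for (int i = 0; i < NTASKS; i++) next[i] = period[i];

    for (int now = 0; now < horizon; now++) {            /* tick one unit at a time */
        for (int i = 0; i < NTASKS; i++)
            if (next[i] == now) next[i] += period[i];    /* event fires */
        steps++;
    }
    return steps;
}

/* Brinksma/Mader/Fehnker-style trick: jump the clock directly to the
 * time of the next event instead of ticking unit by unit. */
static long ticks_jumping(int horizon)
{
    long steps = 0;
    int next[NTASKS];
    for (int i = 0; i < NTASKS; i++) next[i] = period[i];

    for (;;) {
        int soonest = next[0];                           /* earliest pending deadline */
        for (int i = 1; i < NTASKS; i++)
            if (next[i] < soonest) soonest = next[i];
        if (soonest >= horizon)
            break;                                       /* outside the horizon */
        for (int i = 0; i < NTASKS; i++)                 /* fire all events due now */
            if (next[i] == soonest) next[i] += period[i];
        steps++;                                         /* one jump == one step */
    }
    return steps;
}

int main(void)
{
    int horizon = 10000;
    printf("unit ticks : %ld steps\n", ticks_naive(horizon));
    printf("event jumps: %ld steps\n", ticks_jumping(horizon));
    return 0;
}

In a real SPIN model the same idea is, of course, encoded in the transition relation of the process that manages the clock; the made-up periods above are only there to make the two step counts comparable.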

In his PhD thesis, Rajeev Alur has made a good case for dense-time models. In the case of open systems with components that have different clocks, dense time is conceptually the right thing to do. But in practice one often gets away with discrete-time models.

Industry has no preference for dense-time or discrete-time: any approach that solves their problems is fine.


# How does one involve small- and medium-size companies in collaborations with concurrency theoreticians/practitioners? Does "company size" matter?

The maturity of model checkers has increased enormously over recent years, and as a result it has become much easier to apply them, even for non-experts. Let me give two examples:

- Allen B. Downey has written "The Little Book of Semaphores", a nice book in which he presents solutions to dozens of concurrency problems, ranging from classical ones (like the barbershop) to tricky and obscure problems (like the sushi bar). This year I asked a class of first-year computer science students with just a few hours of model-checking experience to pick a problem from Downey's book (a different problem for each pair of students) and to model and analyze Downey's solution for it using Uppaal. As a result, my students spotted several mistakes in Downey's book, mistakes which have been confirmed by the author.

- Matthijs Mekking, a student interested in implementing protocols and with no particular expertise in formal methods, just completed an MSc thesis project at NLnet Labs on the SHIM6 protocol, a proposed IETF standard for multi-homing. At some point he started to model the protocol using Uppaal and became enthusiastic. He managed to find several nontrivial mistakes in the standard and greatly impressed the protocol designers. His results directly influenced the new version of the standard. (See http://www.ita.cs.ru.nl/publications/papers/fvaan/SHIM6/)

SMEs usually don't have verification specialists in house, so they need some advice on modelling and analysis. But with today's technology it is possible to get results rather quickly, and they can do part of the work themselves. The situation will further improve when MSc and PhD graduates with a background in model checking get jobs at these companies. My group is collaborating with several SMEs, and the verification problems they provide us with are often interesting from an academic perspective.

Unlike SMEs, large companies are able to invest in long-term research.

# Is there any need for stochastic and probabilistic modelling in applications? More pointedly, have you met an example that you could not model because your tool does not support stochastic or probabilistic phenomena?

Yes!! There is a great need for stochastic and probabilistic modelling and analysis techniques, and I would for instance welcome a tool that combines the functionality of Uppaal and PRISM.

The Zeroconf protocol, for instance, is full of probabilistic features that we could not model using Uppaal. If we really want to make an impact in the area of wireless sensor networks, mobile ad-hoc networks, and P2P networks, we need to extend model checkers with probabilities, since the design of these networks can only be properly understood if we take probabilities into account.

# How can we, as a community, foster the building of industrial-strength tools based on sound theories?

Theory for the sake of theory is good and important, but IMHO the concurrency community has too much of it. In order to really push model checking technology into industry, the performance and functionality of these tools must be further improved by several orders of magnitude. This can only be achieved by a combined and focused effort of a large team of researchers, and is also a major academic challenge.

In order to test new ideas one needs prototype tools to play with. However, I believe it is not healthy that almost every research group on model checking has its own tool. Only a few groups manage to keep their model checking tool in the air for more than a decade. Developing an industrial-strength model checking tool requires a huge effort. I think academic groups have to join forces if they want to build (and maintain!) industrial-strength tools. Uppaal has been highly successful in part due to the continued efforts of teams in Aalborg, Uppsala, Nijmegen, Twente and elsewhere. So why don't the CADP and muCRL teams join forces? Why isn't it possible to establish stronger ties between Uppaal and IF? Why are there so many probabilistic model checkers?

By combining efforts we can be much more effective.

# What has concurrency theory offered industry so far? What are the next steps that the concurrency community should take in order to increase the impact of its research in an industrial setting? And what are future promising application areas for concurrency research?

Concurrency theory's main contributions so far are very powerful modelling languages and concepts, and, of course, model checking.

A major challenge ahead of us is to combine features. In model checking, people have proposed symmetry reduction, partial-order reduction, CEGAR, etc. What about the combination of all these features? How can we prove it is sound? Can we combine probabilistic choice, hybrid aspects, real time, and hierarchy as in statecharts? I believe it will be possible to define such combinations, but I expect we will need theorem provers and proof assistants to help us manage the resulting complexity.

I am aware of applications in other areas, but I believe computer system engineering will remain the most important application area for concurrency theory in the coming decade.

Friday, October 05, 2007

Joost-Pieter Katoen On The Need for Probabilistic and Stochastic Modelling

I am posting this message on behalf of Joost-Pieter Katoen, who sent me his reaction to one of the questions posed to the panel members during our workshop at CONCUR 2007. Enjoy!

I'd like to answer the question on the need for stochastic and probabilistic modeling (and analysis). Some concrete examples of case studies provided by industry for which probabilistic aspects are very important are listed below. The importance of explicitly modeling random effects stands or falls with the kind of property to be established, of course, so I am definitely not claiming that these examples cannot (and should not) be modeled by techniques that do not support random phenomena.

1. Leader election in IEEE 1394: in case of a conflict (two nodes pretend to be a leader), the contending nodes send a "be my parent" message and randomly wait either a short or a long time. What is the optimal policy to resolve the contention as fast as possible? (This turns out to be a slightly biased coin; a toy simulation of this question is sketched right after this list.)

2. In the Ametist EU-project, the German industrial partner Axxom generated schedules for a lacquer production plant. While doing so, they abstracted from many details that the lacquer producer supplies, such as the average fraction of time a resource is not operational, the fraction of (operational) time the resource can be used because the necessary human support is present, and so forth. In their abstraction they scheduled tasks conservatively, and they were interested in whether they could improve their schedules while reducing the probability of missing the deadline. Clearly, stochastic modeling is needed here, and indeed it has been carried out using a stochastic process algebra.

3. Hubert and Holger should be able to say much more about a recent project they are pursuing with a French company on the validation of multiprocessor multi-threaded architectures. I do not know exactly what they are investigating, but they use stochastic process algebras to model!
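
As a purely illustrative aside (my addition, not part of Joost-Pieter's message), the coin-bias question from the first example can already be explored with a back-of-the-envelope Monte Carlo simulation. The sketch below uses a deliberately crude abstraction: the two contending nodes repeatedly and independently choose a short or a long wait, contention is resolved only when they choose differently, and every round costs the longer of the two chosen delays. The delay values 1 and 2 are made up for illustration; the real IEEE 1394 timing parameters and the published analyses are considerably more refined.

#include <stdio.h>
#include <stdlib.h>

/* Crude abstraction of IEEE 1394 root contention: two nodes repeatedly
 * and independently choose a short or a long wait; contention is resolved
 * as soon as they choose differently, and every round (including the last)
 * costs the longer of the two chosen delays.  The delays are made up. */
#define SHORT_DELAY 1.0
#define LONG_DELAY  2.0

/* One contention game: returns the total time until resolution,
 * where p is the probability of choosing the short wait. */
static double play_once(double p)
{
    double total = 0.0;
    for (;;) {
        int a_short = (double)rand() / RAND_MAX < p;
        int b_short = (double)rand() / RAND_MAX < p;
        total += (a_short && b_short) ? SHORT_DELAY : LONG_DELAY;
        if (a_short != b_short)        /* different choices: resolved */
            return total;
    }
}

int main(void)
{
    const int runs = 200000;
    srand(1394);                       /* fixed seed for reproducibility */

    /* Sweep the coin bias and estimate the expected resolution time. */
    for (double p = 0.40; p <= 0.701; p += 0.05) {
        double sum = 0.0;
        for (int i = 0; i < runs; i++)
            sum += play_once(p);
        printf("P(short) = %.2f  ->  average time %.3f\n", p, sum / runs);
    }
    return 0;
}

Even under this toy cost model, a coin slightly biased towards the short wait comes out marginally ahead of the fair coin, which is the flavour of the answer mentioned in the first example above.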

Finally, let me say (as Moshe is also indicating) that the interest in probabilistic modeling is growing steadily. To give an example, the European Space Agency (ESA) is currently considering using probabilistic modeling and analysis in the context of AADL, an architecture specification language in which an important ingredient is the failure rates of components.

All in all, it is fair to say that there is a quest for probabilistic techniques!

Joost-Pieter Katoen

CNRS Bronze Medal To Concurrency Theorist

I learned last night that Patricia Bouyer (CNRS researcher at LSV, ENS Cachan) has received a CNRS bronze medal 2007 in section 7. The CNRS bronze medal is awarded for outstanding achievement by a junior researcher.

Congratulations to Patricia, with whom I was lucky enough to work on this paper. At that time, Patricia was a freshly minted MSc student, but her talent was already abundantly clear. I recall that we were trying to show that a property language was sufficiently expressive to describe all of the properties of timed automata that could be checked by reachability analysis in the context of a test process. Patricia constructed a test process which had no counterpart in the property language, and this led to a strengthening of the property language, which we eventually were able to prove expressively complete.

From those early days, Patricia has gone from strength to strength, as witnessed by her impressive output. I believe that there is much more to come from her.

Tuesday, October 02, 2007

Reverse Age Discrimination

Every now and then, I rant on this blog about Italian academia. I do this because of the strong ties I have with my country (or, in typical Italian fashion, I should say "well, actually, with my region, Abruzzo" or even "with my home town, Pescara" :-)), and because I am saddened by the frustration I sense in the Italian colleagues I meet at conferences or exchange emails with.

The latest installment of the saga of Italian academia I read is the commentary Reverse Age Discrimination, written for Nature Physics by Francesco Sylos Labini and Stefano Zapperi, two physicists based in Rome. Their piece paints a rather bleak picture for young Italian scientists and reports on what they call "Lo Tsunami dell'Università Italiana" (The Tsunami of the Italian University). I encourage you to read their opinions and to look at the statistics they report on here.

The figures are amazing. Italian universities have an unusually large fraction of old professors. In Italy, 41% of the university professors are aged 60 or older and only 4% are below 40. If we consider full professors only, we discover that overall in Italy more than 47% are aged 60 or older, but for physicists this percentage reaches a hefty 64%!

Add to this picture that Italian universities have a very irregular recruitment flow, with sudden bursts of hiring based on seniority rather than on merit, and you will see that the only option left for several talented young scientists is to move to other countries. The commentary reports that this year the French National Research Council (CNRS) selected seven young researchers (CR2) in theoretical physics, four of whom were Italians! If we look at CS, let me go on record once more as saying that the Italians in Paris would form a very strong TCS department by any measure.

Let me close by quoting from the last paragraph in the article.

To conclude, we acknowledge that some older scientists are active and productive, and that European countries should do something to keep them engaged. .... The broad question of how to use the experience of older faculty without hindering the advancement of the younger generation remains an important challenge.

I hope that this is a challenge that Italy's academic establishment will face very soon. The future of Italian science depends on it.