The scientific program for the conference was preceded by short presentations by the rector of the University of Latvia and by Rusins Freivalds. The rector welcomed the ICALP participants, gave us some interesting information about the University of Latvia and wished us long coffee breaks. Rusins discussed the unity of science, and reminded us that, even at the time of the iron curtain, there was *one* Computer Science.

The conference proper was kicked off on Monday, 8 July, by an invited talk delivered by the Liverpool-bound Paul Spirakis. Paul's talk was entitled *A Guided Tour in Random Intersection Graphs*. Random intersection graphs are random graphs in which there is a universe *M* of labels and each vertex selects a random subset of *M*. Two vertices are connected if, and only if, their corresponding subsets of labels intersect. Random intersection graphs were introduced by M. Karoński, E.R. Scheinerman and K.B. Singer-Cohen and have several applications, as well as a rich theory. Paul's talk provided a survey of the main results obtained by his research group on combinatorial problems over random intersection graphs, such as independent sets, Hamiltonian cycles, colouring, maximum cliques, expansion and random walks. Paul closed the talk by saying that this model is an excellent area of study for PhD students in TCS.
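The model is simple enough to sketch in a few lines. The following is a minimal, hand-rolled sampler for the binomial variant of the model (the function name and parameters are my own; each of the *n* vertices picks each of the *m* labels independently with probability *p*):

```python
import random

def random_intersection_graph(n, m, p, seed=None):
    """Sample a binomial random intersection graph G(n, m, p):
    each of n vertices picks each of m labels independently with
    probability p; two vertices are adjacent if and only if their
    label sets intersect."""
    rng = random.Random(seed)
    labels = [{l for l in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(u, v)
             for u in range(n) for v in range(u + 1, n)
             if labels[u] & labels[v]}
    return labels, edges
```

Varying *p* against *n* and *m* is what gives the model its rich theory: for suitable parameter ranges the resulting graph is close to an ordinary Erdős–Rényi random graph, while for others the label structure creates many cliques.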

Tuesday's invited address was delivered by Daniel Marx. Daniel's talk had three chapters (his words) and was entitled *The Square Root Phenomenon in Planar Graphs*. Its starting point was the observation that most of the classic NP-hard problems remain NP-hard when restricted to planar graphs, and only exponential-time algorithms are known for the exact solution of these planar problems. However, in many cases, the exponential-time algorithms on planar graphs are significantly faster than the algorithms for general graphs: for various problems on planar graphs, one often sees a square root appearing in the exponent of the running time of the best algorithms for their solution. Daniel told his audience that, by now, we have a good understanding of why this square root appears: most of these algorithms rely on the notion of treewidth and its relation to grid minors in planar graphs. Daniel also argued that, under the Exponential Time Hypothesis, one can show that these algorithms are essentially best possible, and therefore the square root must appear in the running time.

In passing, Daniel also contributed one paper to ICALP Track A and one to ICALP Track B!

Susanne Albers delivered the invited talk on Wednesday, 10 July, on *Recent Advances for a Classical Scheduling Problem*. In her talk, Susanne revisited the classic on-line makespan minimization problem, which has been studied since the 1960s. After presenting the classic results on this problem, starting from Graham's 1966 List algorithm and its competitive analysis, she surveyed recent research on settings in which an online algorithm is given extra information or power while processing a job sequence.
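Graham's List rule is a one-liner worth recalling: assign each arriving job to a currently least-loaded machine. A minimal sketch (my own implementation, using a heap of machine loads) is:

```python
import heapq

def list_schedule(jobs, m):
    """Graham's List algorithm (1966): assign each job, as it arrives,
    to a machine with the currently smallest load. This online rule is
    (2 - 1/m)-competitive for makespan minimization on m identical
    machines. Returns the machine assigned to each job and the makespan."""
    machines = [(0.0, i) for i in range(m)]  # (load, machine id)
    heapq.heapify(machines)
    assignment = []
    for job in jobs:
        load, i = heapq.heappop(machines)  # least-loaded machine
        assignment.append(i)
        heapq.heappush(machines, (load + job, i))
    return assignment, max(load for load, _ in machines)
```

For instance, on two machines the job sequence 1, 1, 2 is the classic bad case: List produces makespan 3 while the optimum is 2, matching the 2 − 1/m bound.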

The scientific programme on Thursday, 11 July, started with an invited talk by Orna Kupferman, who gave the only invited address that could be readily classified as belonging to ICALP Track B. Orna's presentation dealt with *Formalizing and Reasoning about Quality*. Traditionally, formal approaches to the verification of reactive systems are boolean in nature: either a system satisfies its specification or it doesn't. In case a system does not meet its specification, one expects a good verification framework to provide a counter-example, that is, a reason why the system is not correct. Orna and her co-authors have generalized formal specification and verification methods to address the *quality* of systems. In her talk, Orna introduced the linear temporal logic LTL[F], where F is a set of arbitrary functions over the interval [0, 1]. Formulae in LTL[F] are interpreted over computations consisting of sequences of atomic propositions. The satisfaction value of an LTL[F] formula is a number between 0 and 1 that describes how well a computation satisfies the formula. The logic generalizes traditional LTL with the functions in F; examples of functions that might be in F are the maximum or minimum of the satisfaction values of subformulas (these are the quantitative counterparts of boolean OR and AND, respectively), their product and their average. Orna then showed us how to generalize classic decision problems in formal methods, such as satisfiability, model checking and synthesis, to search and optimization problems in the quantitative setting. This is achieved by means of an extension of the automata-theoretic approach to LTL to the setting of LTL[F].
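The quantitative flavour of the semantics can be conveyed with a toy evaluator. The sketch below is my own encoding, restricted to finite traces (LTL[F] proper is defined over infinite computations); it shows how max, min, product and average combine satisfaction values of subformulas into a value in [0, 1]:

```python
def sat(formula, trace, pos=0):
    """Toy satisfaction-value semantics in the spirit of LTL[F]:
    a formula denotes a value in [0, 1] rather than true/false.
    Formulas are nested tuples: ('ap', name), ('min', f, g),
    ('max', f, g), ('mult', f, g), ('avg', f, g), ('next', f),
    ('eventually', f). A trace is a list of dicts mapping atomic
    propositions to 0 or 1."""
    op = formula[0]
    if op == 'ap':
        return float(trace[pos][formula[1]])
    if op == 'min':   # quantitative counterpart of AND
        return min(sat(formula[1], trace, pos), sat(formula[2], trace, pos))
    if op == 'max':   # quantitative counterpart of OR
        return max(sat(formula[1], trace, pos), sat(formula[2], trace, pos))
    if op == 'mult':
        return sat(formula[1], trace, pos) * sat(formula[2], trace, pos)
    if op == 'avg':
        return (sat(formula[1], trace, pos) + sat(formula[2], trace, pos)) / 2
    if op == 'next':
        return sat(formula[1], trace, pos + 1)
    if op == 'eventually':  # best satisfaction value over all suffixes
        return max(sat(formula[1], trace, p) for p in range(pos, len(trace)))
    raise ValueError(f"unknown operator: {op}")
```

For example, on a state where p holds and q does not, the average of p and q evaluates to 0.5: the computation "half satisfies" the requirement, which is exactly the kind of graded verdict a boolean semantics cannot express.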

Before the conference dinner, Jon Kleinberg delivered a special EATCS lecture to celebrate the 40th ICALP. Jon gave an inspiring and very articulate talk entitled *Algorithms, Networks, and Social Phenomena*. The talk discussed the development of computational models for systems involving social networks and large human audiences. In particular, Jon focused on how information spreads through such systems, and the ways in which this spread is affected by the underlying network structure. Jon said a few times that, despite having so much data at our disposal, we still do not understand human behaviour. However, IMHO, the work by Jon and his coworkers is shedding some light on some aspects of our behaviour.

Peter Widmayer delivered the last invited talk on Friday, 12 July. His presentation was entitled *To Be Uncertain Is Uncomfortable, But to Be Certain Is Ridiculous*, and was accessible and well paced. The starting point of Peter's talk was a "Socratic dialogue" between a statistical physicist and himself. Traditionally, in combinatorial optimization one assumes that an input instance is given with absolute certainty. The goal is then to find an optimum solution for the given instance. In contrast, as the statistical physicist would argue, input data are in reality uncertain, noisy and inaccurate. As a consequence, an optimum solution to a combinatorial optimization problem might not be meaningful in practice. (For example, the shortest path to our work place we computed yesterday evening might not be usable this morning because of changed traffic conditions.) Peter advocated the development of algorithms that find "meaningful" solutions in the presence of uncertain inputs, proposed an approach towards reaching this goal, and argued that it leads to good solutions in the real world.

Videos of these talks (with the exception of Kleinberg's talk, which, as far as I know, was not recorded) will be available in due course.
