Friday, July 11, 2014

July 9, 2014 - July 10, 2014 ICALP (guest post by Clément Canonne)

Here is the second guest post from ICALP 2014 kindly written by Clément Canonne. Enjoy it! (This  post is also available in PDF. Any typo or typesetting issue with the HTML version is my responsibility.)

Fast Algorithms for Constructing Maximum Entropy Summary Trees (Howard Karloff)

This talk [1] revolved around a question. A simple question, really. “How to draw a rooted tree?” Not any tree, though: a big one. A huge, gigantic n-node tree T, with n » ⌊2^22.891112⌋ [2], on a tiny sheet of paper (or a screen, for that matter). Most of the time, the approach is either to
  • draw the whole thing, get a binocular and hope something can be seen at all;
  • go for interaction, and draw/zoom on parts of the graph on-the-fly.
Here, none of this: T is T, magnificent n-node weighted tree; and we are given a parameter k ≤ n, to be thought of as k « n; the task is to draw a k-node summary tree, the “best summary” of T. (At this point – a demo! From the Mathematical Genealogy Project: the tree being made of the academic descendants of an obscure mathematician, someone named Gauss. That’s a hell of a tree.)
The key to their approach is not to draw only k carefully chosen nodes of T and dismiss the others, but instead to allow the summary to combine subsets of T’s nodes into a big meta-node named “others” (†).
Sure. But what is a “good summary” anyway? Quite naturally, the authors went for entropy – a tree is good if, assigning probabilities to its nodes in the straightforward fashion (for a node v of the summary, ω_v ∝ the total weight of the original nodes of T lying in the subtree rooted at v), the entropy

H = − ∑_v ω_v log ω_v

is big. That is, if the summary is as balanced as possible, or, phrased differently, remains very informative.
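A minimal sketch (in Python, with a helper name of my own; the paper’s algorithms compute these weights inside a dynamic program) of this entropy computation from the unnormalized weights:

```python
from math import log2

def summary_entropy(weights):
    """Entropy of a candidate summary, given the total (unnormalized) weight
    of original nodes gathered under each of its k nodes."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * log2(p) for p in probs)

# A perfectly balanced 4-node summary is maximally informative (2 bits);
# a lopsided one much less so.
print(summary_entropy([1, 1, 1, 1]))
print(summary_entropy([97, 1, 1, 1]))
```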

So, what’s the problem? The problem is exactly (†), when it comes to computing such an optimal k-node summary tree. Indeed, if we allow an algorithm to combine an arbitrary subset of a node v’s children into a metanode others_v (and we do!), there is the slight problem of having exponentially many possibilities – and that could turn out pretty bad. An intuitive idea [3] would be to sort v’s children by non-decreasing weights, and combine together a prefix (the first, say, ℓ of them) – they shouldn’t be too important anyway. Sadly, it does not work – there are counterexamples, even for n = 7 and k = 4.

In a previous work [4], Kenneth Shirley and Howard Karloff managed to give a pseudopolynomial algorithm (polynomial in the total weight), and a polynomial-time (additive) approximation one. This work came in for the kill, removing the “pseudo” to give
  • an O(k^2 n + n log n)-time algorithm for computing an optimal k-node summary tree;
  • a (better) O(n + poly(k,1∕ε))-time approximation algorithm
as well as a faster (albeit non-provably optimal) greedy algorithm. The crux of the approach? Prefixes don’t work, alright. But extended prefixes do! Instead of (after sorting a node v’s children) merging the children {1,…,ℓ}, it is sufficient to consider merging {1,…,ℓ} together with a single extra child ℓ′, for some ℓ′ ≥ ℓ + 1 (yes, it is surprising to me too).
And voilà! Polynomial time.

Testing Equivalence of Polynomials under Shifts (Rafael Mendes de Oliveira)

And now, some testing [5]! Rafael has a device: a blackbox computing an n-variate polynomial P ∈ F[X1,…,Xn] (for some fixed field F). And another one – computing Q ∈ F[X1,…,Xn]. On inputs x ∈ Fn, these two devices allow him to compute P(x), Q(x) in constant time.

Rafael also has two coauthors, Zeev Dvir and Amir Shpilka, and a question:
Question (Shift Equivalence Testing (SET)). Given a bound on their degree, decide if P and Q are morally the same polynomial. That is, if there is a shift vector a such that P(⋅ + a) = Q(⋅) (and if so, please, find one.)
This is arguably a natural generalization of the infamous Polynomial Identity Testing (PIT) problem, where the goal is to decide whether P is the all-zero polynomial or not. PIT does not have any known efficient deterministic algorithm, but a very simple polynomial-time randomized one; and the task of improving this state of affairs has ramifications in many areas.
So SET must be difficult, since it not only generalizes the question, but also asks for an explicit witness – this vector a. After giving some background on what is known about related questions (is P “shift-equivalent” to a polynomial that can be represented with very few non-zero coefficients, for instance), Rafael gave the main result of their paper:
Theorem. There is an efficient (randomized) algorithm for SET. Furthermore, the only randomized part comes from solving instances of PIT.
In other words, in spite of what it seems, SET is not much harder than PIT; and any improvement on the use of randomness for PIT directly translates to the corresponding improvement for SET!
Roughly speaking, the idea of the proof is to find the shift a “one step at a time”, by successively coming up with candidates a_d, a_{d−1}, …, a_1. How? Find a_d that matches the degree-d parts of P(⋅ + a_d) and Q(⋅) – this is just a linear system to solve, along with a call to a randomized subroutine for PIT. Then find a_{d−1} that matches both the degree-d and degree-(d − 1) parts of P(⋅ + a_{d−1}) and Q(⋅) – this is no longer linear, alas. But, and that’s the trick, one can plug the known value of a_d into the quadratic system to make it collapse to a linear system, which has the same solutions as the quadratic one!
And so on, and so forth: after computing a_{k+1}, one just has to solve a linear system to get a_k, using the nice properties of a_{k+1} (simplifications “trickle down”), before calling PIT to make sure the result is correct so far. And at the end of the day, either a_1 is a good shift vector, or there is none – and a final call to the PIT subroutine lets us know which of the two cases holds.
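To get a feel for the “solve, then substitute” idea, here is a toy univariate version (my own simplification: with one variable, matching the degree-(d−1) coefficient already pins down the shift, and verification replaces the PIT calls):

```python
from fractions import Fraction
from math import comb

def shift_poly(p, a):
    """Coefficients (lowest degree first) of P(x + a), given those of P."""
    out = [Fraction(0)] * len(p)
    for i, c in enumerate(p):          # expand c * (x + a)^i by the binomial theorem
        for j in range(i + 1):
            out[j] += c * comb(i, j) * a ** (i - j)
    return out

def find_shift(p, q):
    """Return a with P(x + a) = Q(x), or None (univariate toy version)."""
    d = len(p) - 1
    if len(p) != len(q) or d < 1 or p[-1] != q[-1]:
        return None
    # coeff of x^(d-1) in P(x+a) is p_{d-1} + d * a * p_d: one linear equation in a
    a = Fraction(q[-2] - p[-2], d * p[-1])
    return a if shift_poly(p, a) == q else None
```

For instance, P = x^2 and Q = (x + 3)^2 = x^2 + 6x + 9 give back the shift a = 3.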
Hop.
  • [1] Fast Algorithms for Constructing Maximum Entropy Summary Trees, R. Cole and H. Karloff. ICALP, 2014.
  • [2] This is a prime number. Check it out.
  • [3] Rule of thumb: when I write “intuitive”, it is a trap.
  • [4] Maximum Entropy Summary Trees, K. Shirley and H. Karloff, Computer Graphics Forum. 2013.
  • [5] Testing Equivalence of Polynomials under Shifts, Z. Dvir, R. M. de Oliveira, and A. Shpilka. ICALP, 2014.

Thursday, July 10, 2014

Day 3 of ICALP 2014 (guest post by Andrew Winslow)

Here is the third guest post by Andrew Winslow  from ICALP 2014. Enjoy it!

Invited Talk: Sanjeev Arora

Sanjeev gave a survey of recent work by himself and others on developing provable performance bounds for many problems across unsupervised learning. The talk itself was entitled "Overcoming Intractability for Unsupervised Learning", where the "unsupervised" refers (loosely) to being given a pile of raw data and asked to extract some relevant statistical structure, e.g. given a collection of newspaper articles, what topics do they cover? The foothold for these problems is assuming that the data comes from some (unknown) distribution, and many of the problems relate to learning something about that distribution.

Almost all of the problems covered are NP-hard, and the first part of the talk was an explanation of how this is overcome. In Sanjeev's words, "NP-hardness is not the right way to think about this". He explained that the starting point is determining whether the set of interesting/realistic/natural instances lies in a tractable subset, by looking for realistic assumptions that lead to tractable instances.
As an example, he described the topic modelling problem, where one is given a corpus of text (e.g. all New York Times articles) and the goal is to describe each article as a linear combination of topics, where the topics are not given in advance (i.e. unsupervised learning). A geometric view of this problem is being given a set of points in a convex polytope (articles) and being asked to find the vertices of the polytope (topics). The problem is NP-hard in general, but can be made tractable by assuming that the data is separable: the vertices are included in the set of points given. Then a polynomial-time algorithm can be obtained: for each point, test whether it lies on the convex hull; if not, it can be removed, and otherwise it must be a vertex.
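Under the separability assumption the "topics" are literally the hull vertices, so in two dimensions the whole problem collapses to a convex-hull computation. A minimal illustration (Andrew's monotone chain is my choice of subroutine here, not what one would use in high dimension):

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o): > 0 for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_vertices(points):
    """Vertices of the convex hull of 2D points (monotone chain), in ccw order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

# "articles" strictly inside the polytope are discarded; the "topics" remain
docs = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.7)]
print(hull_vertices(docs))
```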

He also spoke about making progress on other machine learning techniques, including dictionary learning (given complex objects built from a small, simple dictionary, compute the dictionary) and approaches like deep learning (many-level neural networks). Dictionary learning and deep learning are both thought to take place in the human visual cortex, giving good evidence that their worst-case intractability is not a barrier to their effective use. For the case of deep learning, Sanjeev pointed out that learning the network is effectively trying to learn a TC^0 circuit (ouch!), and yet deep networks have been used successfully by machine learning people to achieve high accuracy rates on large corpora.

Amir Abboud, Virginia Vassilevska Williams and Oren Weimann. Consequences of Faster Alignment of Sequences.

Amir gave this talk on conditional lower bounds for local alignment of a pair of strings: given a function defining a score for every pair of symbols, find a pair of substrings, one from each input string, whose alignment maximizes the score. Like other string problems, including longest common subsequence, the best known algorithm for local alignment takes quadratic time and has resisted improvements in the exponent to, say, O(n^1.99).
Amir was quick to point out local alignment is hugely important in genomic sequence analysis (heuristics like BLAST are hugely popular) and that quadratic is too slow in these cases, since n can be several billion. Abboud, Williams, and Weimann prove that a O(n^{2-ε})-time algorithm for local alignment implies the following results:
  • 3SUM solved in O(n^{2-δ}) time for some δ, refuting the weak form of the 3SUM conjecture.
  • CNF-SAT solvable in O((2-δ)^n) time, refuting the strong exponential time hypothesis.
  • Max-4-Clique solvable in O(n^{4-δ}) time for some δ, resolving a long-standing open problem of improving the best-known algorithm for Max-4-Clique.
They also prove similar results for other problems, including edit distance with gap penalties, and longest common subsequence with wildcards (input strings are allowed to have wildcard characters that match any character). The proofs follow the usual approach of using an O(n^{2-ε})-time local alignment algorithm to produce faster algorithms for the other problems, and the reductions Amir showed were very clean.

Amihood Amir, Timothy M. Chan, Moshe Lewenstein and Noa Lewenstein. On Hardness of Jumbled Indexing.

This work has a similar flavor to the Abboud, Williams, Weimann paper, using the 3SUM conjecture to develop lower bounds for string problems. Unlike that paper, this work gives lower bounds for data structures answering queries. The main problem considered is jumbled, a.k.a. histogram, indexing: given a string T with alphabet of size k, preprocess T to answer queries of the form "which substrings of T have the given character occurrence counts?", i.e. which substrings have the given histogram. For many pattern matching problems on strings, suffix trees/arrays are used to achieve linear-time queries.
For histogram indexing, naive approaches yield either O(1) preprocessing and O(n) query time (scanning the string using a window whose size is the sum of the entries in the histogram) or O(kn^2) preprocessing time and O(1) query time (precompute the histograms of all substrings and look up the answer). The contribution of Amir, Chan, Lewenstein, and Lewenstein is to prove that, assuming the 3SUM conjecture, these results are the best possible.
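The O(n)-per-query window scan is short enough to spell out (a sketch; `jumbled_query` is my name for it):

```python
from collections import Counter

def jumbled_query(t, hist):
    """Start indices of substrings of t whose histogram equals `hist`.
    One O(n) sliding-window pass: the window size is fixed by the histogram."""
    m = sum(hist.values())
    if m == 0 or m > len(t):
        return []
    window = Counter(t[:m])
    matches = []
    for i in range(len(t) - m + 1):
        if i > 0:                      # slide: drop t[i-1], add t[i+m-1]
            window[t[i - 1]] -= 1
            if window[t[i - 1]] == 0:
                del window[t[i - 1]]
            window[t[i + m - 1]] += 1
        if window == hist:
            matches.append(i)
    return matches

print(jumbled_query("aabcab", Counter("ab")))   # windows matching {a:1, b:1}
```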
For a fixed alphabet of size k at least 3, they prove that achieving O(n^{2-2/k}) preprocessing and O(n^{1-1/k}) query time simultaneously is 3SUM-hard. For alphabets of non-constant size, they prove that achieving O(n^{2-ε}) preprocessing and O(n^{1-δ}) query time is 3SUM-hard. He also mentioned that unpublished work with Timothy Chan achieves a positive result in the form of a data structure that achieves O(n^{2 - 2/k + O(1)}) preprocessing and O(n^{1 - 1/k + O(1)}) query time simultaneously for fixed alphabets.
Moshe did a good job of mixing the problems, motivation, and technical details in a way that kept the talk interesting. Related to the reductions in this work, which use a reduction from convolution 3SUM (testing whether there exist two indices i, j, of the input array such that A[i] + A[j] = A[i+j]), he also recommended that people read Patrascu's paper that proves this special form of 3SUM is 3SUM-hard.

John Iacono and Özgür Özkan. Why Some Heaps Support Constant-Amortized-Time Decrease-Key Operations, and Others Do Not

John's hand-drawn slides caught the attention of some people from the very beginning, and combined with his deadpan delivery and promise to "keep it light", kept the audience quietly chuckling through the talk. The talk started with a chat about heaps. All good heaps have O(log(n))-time insert and extract-min operations. Decrease-key is also an operation of heaps; some heaps are better at it. There are three families of heaps with three different running times for decrease-key:
  • Fibonacci and "improvements" that achieve O(1) time.
  • Pairing heaps that achieve O(2^{2 sqrt(loglog(n))}) time.
  • Elmasry/sort heaps that achieve O(loglog(n)) time.
All of these heaps pair small trees into bigger trees. What distinguishes them is how they do this:
  • Pairing heaps: repeatedly pair trees recursively. Repeat until one tree remains.
  • Elmasry/sort heaps: break roots into blocks of size log(n), chain all roots of a block together.
  • Fibonacci heaps: pair trees whose roots have the same number of children.
Note that Fibonacci heaps pair based on tree size, not key value, and use an augmentation of loglog(n) bits in each root. So it seems that augmented bits = fast? Fredman considered this in 1999, proving that O(1) decrease-key implies Omega(loglog(n)) bits for some classes of heaps, namely pairing heaps. But the lower bound requires the assumption that every time a pair of roots have their values compared, their trees are combined.
The contribution of Iacono and Özkan is a new lower bound without the requirement that trees are combined when compared, but with a new restriction: the heap must pay pointer-model costs. This result yields a lower bound of Omega(loglog(n)) for pairing and sort heaps, but says nothing about Fibonacci heaps, because they cheat: they do not pay pointer-model costs. A pointer-model version would build a tree above the array, which would have height loglog(n) and give decrease-key a similar running time. The actual proof is adversarial, and considers a whole family of distinct heaps that the adversary forces to grow larger than the total number of possible heaps, a contradiction.
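For concreteness, here is a toy pairing heap, showing the two-pass pairing in extract-min and the "cut and meld" decrease-key that the lower bounds are about (a sketch only; a real implementation would use linked child lists rather than the O(degree) list removal below):

```python
class Node:
    def __init__(self, key):
        self.key, self.children, self.parent = key, [], None

def meld(a, b):
    """Link two heap-ordered trees: the larger root becomes a child of the smaller."""
    if a is None or b is None:
        return a or b
    if b.key < a.key:
        a, b = b, a
    b.parent = a
    a.children.append(b)
    return a

class PairingHeap:
    def __init__(self):
        self.root = None

    def insert(self, key):
        node = Node(key)
        self.root = meld(self.root, node)
        return node

    def extract_min(self):
        smallest = self.root.key
        kids = self.root.children
        for c in kids:
            c.parent = None
        # two-pass pairing: pair up neighbours, then meld right to left
        paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]
        self.root = None
        for tree in reversed(paired):
            self.root = meld(self.root, tree)
        if self.root:
            self.root.parent = None
        return smallest

    def decrease_key(self, node, new_key):
        node.key = new_key
        if node.parent is not None:      # cut the subtree and meld it with the root
            node.parent.children.remove(node)
            node.parent = None
            self.root = meld(self.root, node)
```

A short run: inserting 5, 3, 8, 1, 7, decreasing the 8 to 0, then repeatedly extracting yields the keys in sorted order.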

ICALP Day 2 (guest post by Andrew Winslow)

Andrew Winslow was one of the colleagues who kindly answered my call for guest bloggers from ICALP 2014. Here is his guest post on the second conference day. Enjoy it!

Invited Talk: Victor Kuncak

Victor gave a talk entitled "Verifying and Synthesizing Software with Recursive Functions" about work with his students, including Ruzica Piskac, Phillipe Suter, Eva Darulova, Ettienne Kneuss, and Andrej Spielmann. The work focused on a number of problems in the area of turning a specification into a program that meets it. He did a great job of breaking out four types of problems they're working on:
  • Have specification C and program P:
      – run-time: assertion checking (check P meets C);
      – compile-time: prove correctness (prove P always meets C).
  • Have specification C only:
      – run-time: constraint programming (carry out execution according to C);
      – compile-time: program synthesis (create P meeting C).
Their work is done in Scala and implemented in the Leon system. Out of the four types of problems, Victor indicated that the first two (the first row of the table) are easier. Of course, easier doesn't mean easy, and he briefly covered a few of the ideas (induction, arithmetic/algebraic reasoning, SAT solvers, integer linear programs) used to achieve good empirical performance -- a few seconds to prove correctness of programs of 100 lines of code or more, and counterexamples in the case of incorrect code.
Constraint programming was next, and he gave the example of programming red-black trees via constraints: using as constraints the five properties a red-black tree must have. Insertion, for instance, would be encoded as a combination of the five properties, plus the fact that the new tree must have the elements of the old tree plus the inserted item. Although the performance was not great, he stressed that the applications of constraint programming are more along the lines of prototyping new data structures, test case generation, etc. One cool trick was using the fact that a solution is a counterexample to the negation of the constraints... reducing the problem to one in program verification.
Finally, he covered the most extreme problem: writing code based on a specification. Here he had some really nice examples, generating code for converting dates and times, simple data structures like sorted lists, etc. in just a few seconds. Many ideas were used, but I was especially impressed by his mention of their work on floating point representations, synthesizing programs that avoided rounding errors.

Andreas Björklund and Thore Husfeldt, Shortest Two Disjoint Paths in Polynomial Time

Andreas gave the talk for this Best Paper award winner in Track A. The problem considered is a very simple one: given an undirected graph and two pairs of vertices (s1, t1) and (s2, t2), find a pair of disjoint paths whose total length is minimized. Several problems near to this one are NP-hard (e.g. minimizing the maximum of the two path lengths), so placing the problem into P is the major thrust of the paper -- the algorithms given run in O(n^11) and O(n^22) time for vertex- and edge-disjoint paths, respectively.
The approach of the algorithm is almost entirely matrix-based, focusing on solving a cycle cover problem on the input graph with self-loops added to each vertex... except that doesn't quite work. Andreas did a great job of leading us on a journey where obstacle after obstacle to the approach was eliminated. Many of the issues relate to the complexity of computing the permanent... indeed one of the major technical contributions is proving that computing the permanent of a matrix over a certain type of ring is in P. In the end, the necessary permanent computation is replaced with a polynomial number of determinant computations, giving the large (but polynomial) running time.
In perhaps the most natural course of events, someone asked after the talk whether the technique might work for the shortest three disjoint paths. So natural was the question that a followup slide for the question was already prepared. The conclusion was "probably not", but Andreas posed a first problem to try in this direction: given an undirected graph, compute the parity of the number of cycle covers having c cycles, with c divisible by 4.

Mitsuru Kusumoto and Yuichi Yoshida, Testing Forest-Isomorphism in the Adjacency List Model

Mitsuru started by introducing the notion of property testing on graphs: computing whether the graph has a given property, answering correctly at least 2/3rds of the time while using only a small number of queries on the graph. In this setting, even the representation of a graph as an adjacency matrix or list matters, as checking whether a given edge is present takes too long with an adjacency list representation.
Kusumoto and Yoshida considered the problem of testing whether two forests are isomorphic. It was known that in general graphs this problem requires Omega(sqrt(n)) queries to the graphs, and only O(1) if the graphs have bounded degree. For forests, they achieve an algorithm that uses only O(polylog(n)) queries and prove Omega(sqrt(log(n))) queries are needed. The algorithm works by first performing a special type of partitioning of the input graph, then testing whether each subgraph is isomorphic. The partitioning is done in a way that ensures that if the two input graphs are significantly far away from being isomorphic, then some pair of their subgraphs in their partitions are too.

Rom Aschner, Matthew J. Katz, Bounded-Angle Spanning Tree: Modeling Networks with Angular Constraints

Rom gave a nicely visual talk about a problem in wireless network design in the plane. The input is a set of points, and the output is a (Euclidean) minimum spanning tree (MST) with an additional constraint: at each point, the smallest angle spanning all of the incident edges is at most some fixed constant b (a b-MST). Another way to think about this constraint is that the largest angle between any pair of adjacent incoming edges must be at least 2*π - b.
For b < π / 3, a b-MST may not even exist: three points at the vertices of an equilateral triangle require at least one of the points to have two incoming edges with interior angle π / 3. For b >= 8 π / 5, the b-MST is simply the minimum spanning tree, as the upper degree bound of 5 on every node in an MST ensures the smallest angle spanning all of the edges is at most 2 π - 2 π / 5 = 8 π / 5. Finally, for b < π, there exists a simple family of examples (n-1 points placed collinearly ε distance apart and a final point at distance 1 from the shared line) that causes the MST to have total weight ε * (n - 1) + 1 while any b-MST has total weight about 1 * (n - 1) = n - 1 (an interior collinear point cannot keep both of its tiny edges, as they span an angle of exactly π). So the non-trivial b are those in the interval [π / 3, 8 π / 5).
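The wedge condition at a single point is easy to state in code: the edges at p fit in a wedge of angle 2π minus the largest gap between consecutive edge directions (a sketch, with function names of my own choosing):

```python
from math import atan2, pi

def wedge_angle(p, neighbors):
    """Smallest angle of a wedge at p containing all edges from p to `neighbors`."""
    angles = sorted(atan2(qy - p[1], qx - p[0]) for (qx, qy) in neighbors)
    if len(angles) <= 1:
        return 0.0
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * pi - (angles[-1] - angles[0]))   # the wrap-around gap
    return 2 * pi - max(gaps)

# two perpendicular edges fit in a quarter-plane wedge (~ pi/2);
# two opposite collinear edges need a wedge of exactly pi
print(wedge_angle((0, 0), [(1, 0), (0, 1)]))
print(wedge_angle((0, 0), [(1, 0), (-1, 0)]))
```

Checking a candidate tree against a bound b then amounts to verifying `wedge_angle(p, neighbors_of_p) <= b` at every point p.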
For these, Aschner and Katz prove both NP-hardness and approximation results. They prove that finding the b-MST is NP-hard for b = π and b = 2 π / 3, using reductions from Hamiltonian path in square and hexagonal grid graphs, respectively. They also achieve constant-factor approximation algorithms for b = π, 2 π / 3, π / 2. The approximation algorithms use a common sequence of steps:
  1. Compute the MST.
  2. Turn the MST into a Hamiltonian cycle via a tour around the boundary of the MST.
  3. Decompose the points into consecutive groups along the cycle.
  4. For each group, compute a set of edges connecting the points and minimizing total edge length.
  5. Connect adjacent groups using the shortest edge possible.
Step 4 requires a careful selection of group sizes: larger groups give greater potential for shorter collections of edges, but connecting all the points of a too-large group within the angle constraint may not be possible. The resulting approximation ratios are 16 and lower, and can be further reduced by a neat pigeonhole argument regarding shifting the groupings along the tour.

Canal Tour and Conference Dinner

The day concluded with a tour through the canals of Copenhagen and dinner at the Odd Fellow Palace. The 2014 Gödel Prize was awarded to Ronald Fagin, Amnon Lotem, and Moni Naor. Moni gave a great historical and technical talk on the work for which they received the award, including the threshold algorithm. Ronald Fagin also spoke briefly about Michael Franklin, the "missing fourth author" of the work, and his role in facilitating the successful collaboration.

Wednesday, July 09, 2014

July 8, 2014 — ICALP Review (guest post by Clément Canonne)

Here is a first guest post from ICALP 2014 kindly written by Clément Canonne. Enjoy it! (This  post is also available in PDF. Any typo or typesetting issue with the HTML version is my responsibility.)

 

On DNF approximators for monotone Boolean functions (Eric Blais)

In this talk, Eric Blais presented joint work with Johan Håstad, Rocco Servedio and Li-Yang Tan, concerned with Boolean functions. More precisely, "the simplest representations of functions": DNF (Disjunctive Normal Form) formulas.
For a little bit of background, recall that a Boolean function f : {0,1}^n → {0,1} defined on the hypercube [2] is a DNF if it can be written as

f = T_1 ∨ T_2 ∨ ⋯ ∨ T_s, where each term T_i is an AND of literals,

that is, as an OR of AND’s. One can also see such functions as being exactly those taking value 1 on a "union of subcubes" (if One prefers. I will not argue with One).

A nice property of DNF formulas is that they are arguably amongst the simplest of all representations of Boolean functions; while formulas of depth ≥ 3 are a nightmare, DNFs have been extensively studied, and by now "Everything is known about them". Well, almost everything.

Indeed, amidst other facts, we have that

Theorem 1 (Folklore). Every Boolean function can be computed by a DNF of size 2^{n-1}.

Theorem 2 (Lupanov ’61). This is tight (PARITY_n needs that much).

Theorem 3 (Korshunov '81, Kuznetsov '83). A random Boolean function can be computed by DNFs of size Θ(2^n / log n) (and requires that much).

So... are we done yet? The mere presence of Eric in the auditorium was a clear hint that all was not settled. And as it turns out, while the picture is well understood for exact computation of Boolean functions by DNFs, what about approximate representation of a function? That is, what about the size required to approximate a Boolean function by a DNF, if one allows error ε (as a fraction of the inputs)?

This leads to the notion of DNF approximator complexity; and here again some results – much more recent results:

Theorem 4 (Blais–Tan ’13). Every Boolean function can be approximated by a DNF of size O(2^n / log n). Furthermore, our old friend PARITY_n only needs DNF size O(2^{(1-2ε)n}).

That’s way better than 2n-1. So, again – are we done here? And, again... not quite. This brings us to the main point of the paper, namely: what about monotone functions? Can they be computed more efficiently? Approximated more efficiently? (Recall that a Boolean function f is monotone if x ≼ y implies f(x) ≤ f(y), where ≼ is the coordinate-wise partial order on bit-strings.)
As a starter: no.

Theorem 5 (Folklore). Every monotone Boolean function can be computed by a DNF of size O(2^n / n^{1/2}) (using the subcubes rooted on each min-term); and again, this is tight (for MAJ_n, for instance).
Furthermore, and quite intuitively, using negations does not buy you anything to compute a monotone function (and why should it, indeed?):

Theorem 6 (Quine ’54). To compute monotone Boolean functions, monotone DNFs are the best amongst DNFs.
Not surprising, I suppose? Well... it’s a whole new game when one (one, again!) asks only for approximations; and that’s the gist of the paper presented here. First of all, drastic savings in the size of the formulas!

Theorem 7 (Blais–Håstad–Servedio–Tan ’14). Every monotone Boolean function can be approximated by a DNF of size O(2^n / 2^{Ω(n^{1/2})}).
Eric gave a high-level view of the proof: again, it works by considering the subcubes rooted on each min-term, but in two steps:
  • Regularity lemma: the world would be much simpler if all subcubes were rooted on the same level of the hypercube; so first, reduce to this case (writing f = f_1 + ⋯ + f_k, where each f_i has this property);
  • then approximate each f_i independently, using a probabilistic argument (via random sampling) to prove there exists a good approximator for each f_i, and stitch them together.
And they also show it is tight, this time with the majority function MAJ_n. The proof goes by a counting argument and concentration of measure on the hypercube (every or almost every input is on the middle "belt" of the hypercube; but each subcube thus has to be rooted there, and each cannot cover too much... so many are needed).
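As a side illustration, the min-term/subcube construction behind Theorem 5 can be brute-forced for tiny n (my code, exponential on purpose):

```python
from itertools import product

def min_terms(f, n):
    """Minimal true points of a monotone f: {0,1}^n -> {0,1} (brute force)."""
    ones = [x for x in product((0, 1), repeat=n) if f(x)]
    strictly_below = lambda a, b: a != b and all(u <= v for u, v in zip(a, b))
    return [x for x in ones if not any(strictly_below(y, x) for y in ones)]

def dnf(terms, x):
    """Evaluate the DNF whose AND-terms are the subcubes above each min-term."""
    return int(any(all(xi >= mi for xi, mi in zip(x, m)) for m in terms))

maj3 = lambda x: int(sum(x) >= 2)
terms = min_terms(maj3, 3)
print(sorted(terms))   # the three weight-2 points of the cube
```

The resulting DNF agrees with maj3 on all 8 inputs, exactly as the folklore construction promises.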

So, approximation does buy us a lot. But clearly, using negations shouldn’t, should it? Why would allowing non-monotone DNF’s to approximate monotone functions ever help? (Hint: it does.) (Yep.)

Theorem 8 (Blais–Håstad–Servedio–Tan ’14). For every n, there exist ε_n and f : {0,1}^{6n} → {0,1} such that
  • f can be ε_n-approximated by DNFs of size O(n);
  • any monotone DNF ε_n-approximating f must have size Ω(n^2).
(Take that, intuition!)

The upshot: exact computation and approximate computation have intrinsically very different properties!

Eric then concluded with an open question: namely, how to improve/better understand the gap between approximating functions with monotone DNF vs. approximating them with general DNF’s (the currently known gap in the size being quite huge – almost exponential). Additionally, how to get a separation as in the mind-boggling theorem above, but changing the quantifiers – that is, for a constant ε independent of n?

Also, can someone fix my intuition? I think it’s broken.

Day 1 of ICALP 2014 (guest post by Andrew Winslow)


Andrew Winslow was one of the colleagues who kindly answered my call for guest bloggers from ICALP 2014. Here is his guest post on the first conference day. Enjoy it!

Conference Welcome

Local organizing committee chair Thore Husfeldt kicked off ICALP by mentioning a few stats about IT University of Copenhagen (ITU), where the conference is located:
  • Started in 1999(!)
  • 2000 students.
  • 10 theory CS faculty (mostly working in "track B" areas in ICALP parlance)
  • 1 building.
He also noted that ITU is separate from two other universities in Copenhagen hosting TCS conferences this month: Copenhagen University (hosting SEA last week) and Technical University of Denmark (hosting SWAT last week).

Invited Talk: Claire Mathieu

Claire presented work entitled "Homophily and the Emergence of a Glass Ceiling Effect in Social Networks", joint work with Chen Avin, Zvi Lotker, Barbara Keller, David Peleg, and Yvonne-Ann Pignolet. She started the talk by asking why there's only one female Danish scientist (Inge Lehmann) listed on the Wikipedia page for famous Danish scientists out of about 60 entries -- why so rare? Looking at the DBLP database as a social network with co-authorship edges gives similarly bleak results: the induced subgraphs of men and women in the top 1000 nodes show a large, well-connected graph and a disconnected, sparse graph, respectively.
Claire and co-authors developed a formal model of growing co-authorship social networks, including assumptions about what a "glass ceiling effect" means for these networks, and proved that three properties form a minimal sufficient set to imply the effect. These properties are:
  • Preferential attachment: a "rich get richer" effect where incoming students choose to work with an advisor with probability proportional to the number of co-authors the advisor has.
  • Minority: an incoming student is female with probability r < 0.5.
  • Homophily: students stick with an advisor of the same gender with probability 1, otherwise find a new advisor with probability p < 1.
Incoming students are iteratively added to the graph to form growing random graphs, repeatedly choosing new advisors according to homophily and preferential attachment. The glass ceiling effect is defined by looking at all nodes with degree k, where k = k(n) is chosen so that the number of such nodes goes to infinity with n, and examining the ratio of women to men in this set. If the expected ratio of female over male nodes goes to 0 as the number of nodes goes to infinity, then the system is said to have a glass ceiling effect for the parameters r, p chosen. They are able to prove that the above three properties imply the glass ceiling effect, and removing any one of them does not. The model was also verified experimentally using the DBLP database.
As a final twist, Claire considered modifying the model to capture the "men are intrinsically better" hypothesis, giving new male students two advisors rather than one. This was proven not to cause the glass ceiling effect. I was really struck by the meta-ness of the talk and how natural the problem seems. In the formal study of (social) networks, isn't the most natural network of study the one that directly impacts our ability to formally study networks?
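A toy simulation of the three ingredients (my own parameter choices and simplifications, in particular a crude accept/reject reading of the homophily rule, not the authors' exact process):

```python
import random

def grow(n, r, p, seed=42):
    """Grow a toy advisor graph: node i has gender genders[i] and degree[i] links."""
    rng = random.Random(seed)
    genders, degree = ["F", "M"], [1, 1]          # seed the graph with one of each
    while len(genders) < n:
        g = "F" if rng.random() < r else "M"      # minority: female w.p. r < 1/2
        while True:
            # preferential attachment: advisor chosen proportionally to degree
            adv = rng.choices(range(len(degree)), weights=degree)[0]
            # homophily: always keep a same-gender advisor,
            # accept a cross-gender one only with probability 1 - p
            if genders[adv] == g or rng.random() >= p:
                break
        genders.append(g)
        degree.append(1)
        degree[adv] += 1
    return genders, degree

genders, degree = grow(2000, r=0.3, p=0.7)
top = sorted(range(2000), key=degree.__getitem__, reverse=True)[:100]
print(sum(genders[i] == "F" for i in top) / 100)  # fraction of women at high degree
```

Playing with r and p (and comparing the printed fraction to r itself) is an informal way to see the under-representation at the top that the paper makes precise.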

György Dósa and Jiří Sgall. Optimal Analysis of Best Fit Bin Packing.

Dósa and Sgall complete the analysis of two classic (online) algorithms for the standard bin packing problem: given a collection of items specified by numbers in [0, 1], assign these items to the fewest number of bins where each bin has an item total of at most 1. The two algorithms, first-fit and best-fit, process the items one-by-one, placing each item permanently into an existing bin if possible, otherwise placing the item into a newly created bin. First-fit and best-fit differ only in the criterion by which they place items into existing bins, using the oldest bin (first-fit) or the bin that would have the least remaining room after the item is placed into it (best-fit). It was previously shown that for every instance with optimal solution using OPT bins, first-fit and best-fit use at most ceil(1.7*OPT) bins, and there exist instances where both use 1.7*OPT bins. Dósa and Sgall improved the upper bound to 1.7*OPT bins, dropping the ceiling and completing the analysis of the competitive ratio of these algorithms. Jiří commented that he felt that such classic algorithms deserved a complete analysis and I agree; the talk had an air of finality, having put the 40-year-old gap to rest.
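Both algorithms fit in a few lines; a sketch using exact rationals to dodge floating-point issues:

```python
from fractions import Fraction

def pack(items, best=True):
    """Online bin packing. best=True: best-fit (tightest feasible bin);
    best=False: first-fit (oldest feasible bin). Returns the number of bins."""
    bins = []                                   # remaining capacity of each bin
    for item in items:
        feasible = [i for i, room in enumerate(bins) if room >= item]
        if feasible:
            i = min(feasible, key=lambda i: bins[i]) if best else feasible[0]
            bins[i] -= item
        else:
            bins.append(Fraction(1) - item)     # open a fresh unit-capacity bin
    return len(bins)

# an instance where the two rules genuinely differ
items = [Fraction(k, 10) for k in (4, 7, 3, 6)]
print(pack(items, best=True), pack(items, best=False))   # 2 vs. 3 bins
```

On this instance best-fit squeezes the 0.3 next to the 0.7, leaving room for the 0.6, while first-fit wastes that slot and opens a third bin.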

Hossein Esfandiari, Mohammadtaghi Hajiaghayi, Reza Khani, Vahid Liaghat, Hamid Mahini and Harald Räcke. Online Stochastic Reordering Buffer Scheduling.

Vahid Liaghat gave this talk on a scheduling problem where items have types from a fixed universe and enter a buffer of size k. One can then process all items of a type, flushing them from the buffer and causing new items to take their place. The cost of processing a sequence of items is either the number of processing steps used (block operation model) or the number of times the processed type changes (standard model). It was known that an O(loglog(k))-competitive algorithm is possible in the adversarial/worst-case setting. This work studies the situation where the input is not adversarial, but is randomly sampled from some set. Esfandiari et al. achieve an average-case O(1)-competitive algorithm for both the block operation and standard models in the case where an adversary designs an input that is then randomly permuted. The analysis uses several lemmas relating the optimal solutions for various buffer sizes, as well as threshold algorithms: algorithms that process a type as soon as its number of occurrences exceeds a specific threshold.
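The threshold idea is easy to illustrate. The following is my own toy variant in the block-operation model (the paper's threshold algorithms and their analysis are considerably more refined): flush a type once it hits a fixed threshold t, and fall back to flushing the most frequent type when the buffer fills up first.

```python
from collections import Counter

def threshold_schedule(items, k, t):
    """Toy threshold scheduler: process all occurrences of a type as soon
    as its count in the k-slot buffer reaches t; if the buffer fills
    first, flush the most frequent type instead. Returns the number of
    processing steps (block-operation cost)."""
    buf = Counter()               # type -> count currently buffered
    size = 0
    steps = 0
    for x in items:
        buf[x] += 1
        size += 1
        if buf[x] >= t:           # threshold reached: flush this type
            size -= buf.pop(x)
            steps += 1
        elif size >= k:           # buffer full: flush most frequent type
            y, c = buf.most_common(1)[0]
            del buf[y]
            size -= c
            steps += 1
    return steps + len(buf)       # flush whatever remains at the end
```

For instance, six identical items with k = 4 and t = 3 are flushed in two steps of three items each.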

Erik D. Demaine, Yamming Huang, Chung-Shou Liao and Kunihiko Sadakane. Canadians Should Travel Randomly.

This paper attracted a few comments related to its curious title, which comes from the "Canadian traveller problem" under study, introduced by Papadimitriou and Yannakakis in 1991. The problem is a shortest path problem with partial information: some of the edges of the graph are blocked (by intense Canadian snowfall) and cannot be traversed, but this is only discovered once the traveler has visited an endpoint of the blocked edge. The parameterized version of the problem has k blocked edges, and the goal is to find an algorithm with a good competitive ratio compared to the shortest path that avoids all blocked edges. For deterministic algorithms, tight 2k+1 upper and lower bounds on the ratio are known, while for randomized algorithms, only a lower bound of k+1 (and the implied upper bound of 2k+1) was known.
Demaine, Huang, Liao, and Sadakane improve the best-known randomized algorithm in two ways, giving a polynomial-time algorithm that achieves a ratio of (2 - o(1))k + 1, and a pseudo-polynomial-time algorithm (parameterized by k, the number of blocked edges) that achieves a (1 + sqrt(2)/2)k + 1 ratio. The idea behind both algorithms is to repeatedly compute a collection of nearly shortest paths, traverse each until blocked, and repeat the process with the new information if none of the paths leads to the destination. A careful choice of the number of paths used then yields the bound. The main difference between the two algorithms is exactly which short paths are used; the pseudo-polynomial-time algorithm uses a list of shortest paths, whereas the polynomial-time algorithm only finds paths within a constant factor of the shortest.
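The basic "walk a shortest path until blocked, then replan" loop that underlies these strategies can be sketched as follows. This is the deterministic baseline, my own sketch rather than the paper's algorithm; the randomized algorithms hedge over several near-shortest paths instead of recomputing a single one.

```python
import heapq

def dijkstra(adj, s, t):
    """Shortest path from s to t in adj = {u: {v: weight}}.
    Returns (distance, path) or None if t is unreachable."""
    dist, prev, pq = {s: 0}, {}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            path = [t]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None

def travel(adj, s, t, blocked):
    """Canadian traveller baseline: walk a recomputed shortest path,
    learning a blocked edge only on reaching one of its endpoints.
    Returns the total distance walked, or None if t becomes unreachable."""
    known = {u: dict(nbrs) for u, nbrs in adj.items()}
    pos, walked = s, 0
    while pos != t:
        res = dijkstra(known, pos, t)
        if res is None:
            return None
        _, path = res
        for u, v in zip(path, path[1:]):
            if (u, v) in blocked or (v, u) in blocked:
                known[u].pop(v, None)   # learn the blockage, then replan
                known[v].pop(u, None)
                break
            walked += known[u][v]
            pos = v
    return walked
```

Each replanning round removes at least one edge from the known graph, so the loop terminates; the competitive analysis then bounds the walked distance against the offline shortest path.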

Dmitry Gavinsky and Shachar Lovett. En Route to the log-rank Conjecture: New Reductions and Equivalent Formulations.

Dmitry started by reviewing two-party communication problems, where the goal is to compute a combined Boolean function f of inputs x and y held by the two parties while passing the fewest number of bits between them. One can consider the matrix defined by the two inputs, with an entry for every (x, y) pair, the communication matrix of f. A c-bit communication protocol partitions this matrix into at most 2^c rectangular blocks of entries with the same value, and the rank of the matrix is at most the number of blocks in any such partition, i.e. the communication complexity (in bits) is bounded from below by log(rank(f)). The log-rank conjecture is that polylog(rank(f)) is also an upper bound on the communication complexity of computing f.
Part of this work by Gavinsky and Lovett is to provide a number of equivalent versions of the problem that may provide a way to make progress on the log-rank conjecture. For instance, they show that proving the log-rank conjecture for the randomized communication complexity of low-rank functions implies the log-rank conjecture for the deterministic version, as does proving the conjecture for the information cost of f, the minimum amount of information that each party gives to the other party in the worst case.
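As a small illustration (mine, not from the paper) of the log(rank) lower bound, one can build a communication matrix and compute its rank exactly over the rationals. For the equality function on n bits the matrix is the 2^n x 2^n identity, so log2(rank) = n, and the trivial protocol in which one party sends its whole input matches this bound.

```python
from fractions import Fraction
from itertools import product

def comm_matrix(f, n):
    """Communication matrix M[x][y] = f(x, y) over all n-bit inputs."""
    inputs = list(product((0, 1), repeat=n))
    return [[f(x, y) for y in inputs] for x in inputs]

def rank(M):
    """Matrix rank over the rationals, by exact Gaussian elimination."""
    A = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue                      # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                m = A[i][c] / A[r][c]
                A[i] = [a - m * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# Equality on n bits: rank(M) = 2^n, so the communication complexity
# is at least log2(rank) = n bits.
EQ = lambda x, y: int(x == y)
```

Note that the bound can be loose: the one-bit AND function has a rank-1 matrix, so log(rank) = 0, yet one bit of communication is clearly needed.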

EATCS General Assembly

The day ended with the EATCS general assembly, led by Luca Aceto, with appearances by the many people involved in ICALP and EATCS. ICALP 2015 will be in Kyoto, Japan, and some information about travel, accommodation, and Kyoto was presented by Kazuo Iwama; ICALP 2016 will be in Rome, Italy. Some statistics about ICALP 2014 were given by Elias Koutsoupias: the number of submissions is up from previous years, the acceptance rate is roughly steady at 30%, and the vast bulk of the submissions were to Track A.
As one final thought from the day, the organization of the conference so far has been top-notch. Many of the talks are online (including Claire Mathieu's), the sessions start and end on time, sound and projection issues are non-existent, and the food/drink is tasty (and rotates every break...).

Monday, July 07, 2014

In Copenhagen for ICALP 2014

It is good to be back in lovely Copenhagen and breathe the atmosphere of the Danish capital city.

One of the first things I saw when arriving in Kongens Nytorv was a smiling guy showing a sign to cyclists and car drivers that read "Husk; du er smuk" (Remember; you are beautiful).

The Carsten Dahl Trio opened the Copenhagen Jazz Festival events at the legendary Jazzhus Montmartre (which can host 150 people) with a lunch concert to launch their new recording.

The ICALP welcome reception was an opportunity to meet colleagues, thank the organizing committee for all the work they have done in preparing the conference and to enjoy the band De Græd, which created a jazzy and relaxed ambience.

The conference kicks off tomorrow at 6:30am GMT with the invited talk Homophily and the Glass Ceiling Effect in Social Networks by  Claire Mathieu (ENS Paris). The talk will be streamed live as will all the other invited talks and most of the other plenary events

Watch this space for some reports on ICALP 2014. 

Friday, July 04, 2014

Music composed for the ICALP 2014 Gödel Prize ceremony gets coverage in Nature!

The piece of music composed for the ICALP 2014 Gödel Prize ceremony gets coverage in Nature! See also the video presenting the composition (in Danish, but with English sub-titles).



Thanks to Thore Husfeldt for adding this further twist to the award ceremony for the Gödel Prize! I am looking forward to the event very much. 

Sunday, June 29, 2014

Issue 113 of the Bulletin of the EATCS is out

Issue 113 of the Bulletin of the EATCS is now available online and can be downloaded as a single PDF file. As usual, the bulletin is freely accessible for everyone.

In the current issue, you will find very interesting survey articles on, for instance, the complexity of valued constraint satisfaction, kernelization and contextual semantics. I strongly recommend the first installment of the Concurrency Column edited by Nobuko Yoshida on recreational formal methods.  The piece by Frits Vaandrager and Freek Verbeek tells us how to design vacuum cleaning trajectories using Uppaal, SAT solvers and theorem provers.

I hope that you will enjoy this issue of the BEATCS. At the EATCS general assembly, which will be held at ICALP 2014 in Copenhagen, we will discuss the developments of the BEATCS. Your input to the discussion is very valuable.


Sunday, June 15, 2014

Sad news: Berthold Vöcking passed away

Berthold Vöcking, who was a professor for Algorithms and Complexity at RWTH Aachen,  passed away on 11th June after a long illness. He was only 47.

Berthold was a leading figure in algorithmics research in Europe, was an invited speaker at ICALP 2012 and was one of the authors of the best paper for Track C at this year's ICALP, where we will remember him during the award session.

Our thoughts go to his family.

Addendum: For some information on Berthold's work, do read the post Sad News: Berthold Vöcking on Michael Mitzenmacher's blog.

Monday, June 09, 2014

Another computer scientist becomes rector of an Italian university

I just heard that Michele Bugliesi has been elected as the new rector of the Universita' Ca' Foscari in Venice. (Italian speaking  readers may wish to look here for details.) Michele is yet another (theoretical) computer scientist who becomes rector of an Italian university. This is a sign of recognition for our discipline as a whole, as well as for the Italian computer science community.

I am sure that Michele will do a great job and I wish him the best of luck in his new, and challenging, role.

Wednesday, June 04, 2014

Call for guest posts from ICALP 2014

If you are going to ICALP 2014 and you are willing to contribute guests posts on that conference on this blog, drop me a line. It would be good to have reports on the three tracks, on the invited talks and on the award-related events, as well as on the conference as a whole.


Tuesday, June 03, 2014

Helga Guðmundsdóttir lands one of Google’s 2014 Anita Borg Scholarships

My colleagues and I at the  School of Computer Science at Reykjavik University are thrilled by the recent news that Helga Guðmundsdóttir,  one of our master students and one of the very best BSc students we have ever had, has been selected as one of Google’s 2014 Anita Borg Scholarship recipients. Congratulations to Helga for this achievement!

Apart from being an outstanding student, who has already done some research in algorithms for wireless networks and has had successful internships at the Fraunhofer Center for Experimental Software Engineering (University of Maryland) and Cornell University, Helga is also a role model for our female students and fantastic ambassador for computer science. She is one of the founders of /sys/tur, the association of female computer scientists at Reykjavik University. I was amazed by how active the association is in promoting computer science at all our outreach events.

I don't need to look at my crystal ball to forecast a bright future for Helga and that she will go from strength to strength whatever she decides to do. 




Saturday, May 31, 2014

PhD positions in Computer Science at IMT Lucca (Italy)

As readers of this blog might have noticed already, IMT Lucca is one of the institutions in Italy that is close to my heart. I am therefore happy to advertise their ongoing call for applications for their PhD positions in Computer Science. Do encourage your students to apply: the terms of employment for PhD students are excellent and so are the facilities. The town of Lucca is lovely and has a very high quality of life. IMT is still relatively small, but it is a very ambitious graduate school. 

If your students or you are interested, IMT will be holding a (free) interactive webinar on June 11 for any students interested in the program. 

=========================================================
PhD positions in Computer Science at IMT Lucca (Italy)
- Deadline July 14, 2014 -
=========================================================

The Institute for Advanced Studies IMT Lucca - Italy
announces multiple PhD scholarships (approx. €13,600/year), which also include accommodation and full board.

Deadline for application is July 14th, 2014 at 18:00 Italian time.

IMT Lucca (Italy) is a research university within the Italian public higher education system. IMT's mission is to establish itself as a research center that promotes cutting-edge research in key areas, structuring its PhD program in close connection with research, to attract top students, researchers and scholars through competitive international selections, and to contribute to technological innovation, economic growth and social development.

The three year doctoral program, which is taught entirely in English, is articulated in curricula. The 8 curricula currently offered are field-specific, although in many instances they share a common scientific background. The curriculum in Computer Science is coordinated by Rocco De Nicola and focuses on key aspects of current research in the theory and applications of informatics, such as open-endedness, autonomy, security, concurrency, cost-effectiveness, quality of services, and dependability.

The main goal of the curriculum is to develop models, algorithms, and verification methods for modern distributed systems. The doctoral students enrolled in this curriculum will carry out cutting-edge research on the fundamentals and applications of architectures and languages for modern distributed systems, including global and cloud computing systems, web systems and services, and mobile systems. They will also acquire professional skills in the application of computer technologies to massively distributed systems, working in close collaboration with the SysMA research unit of IMT (http://sysma.lab.imtlucca.it/). Graduates from the curriculum are qualified to work in universities, public and industrial research centers, and to take on professional roles and high-profile tasks and responsibilities in both private companies and public institutions.

PhD students, besides receiving a research scholarship, are offered on-campus housing on a newly restored and fully integrated San Francesco Complex in the historical center of the beautiful Tuscan city of Lucca, and daily access to the canteen. Students also get the opportunity to spend research periods abroad during the program, with the possibility of receiving additional financing through the Erasmus+ program.

Please note that students who are expected to obtain the required degree by October 31, 2014 will also be considered; they must still apply by July 14. Further details, along with the online application form, can be found at:


Be sure to sign up for our free interactive webinar (http://brightrecruits.com/webinars/) scheduled for June 11th 2014.


Thursday, May 08, 2014

Best paper awards at ICALP 2014

The EATCS is proud to announce that the program committees of the three tracks of ICALP 2014 have selected the following papers for the best paper awards:


Track A:
Andreas Björklund and Thore Husfeldt, Shortest Two Disjoint Paths in Polynomial Time
Track B:
Track C:
Oliver Göbel, Martin Hoefer, Thomas Kesselheim, Thomas Schleiden and Berthold Voecking, Online Independent Set Beyond the Worst-Case: Secretaries, Prophets, and Periods

The best student paper awards are given to papers that are solely authored by students. This year these awards will go to:
Congratulations to the award recipients and many thanks to the PC chairs and their PCs for their sterling work!

It is interesting to see that the best papers for 2014 are all from Europe, whereas two of the three best student papers are from the US. Having worked in Northern Europe for the best part of 20 years, I am happy to see that the best papers for Track A have Scandinavian authors.

I hope that you will enjoy reading the award-receiving papers.

Monday, May 05, 2014

CCC: request for volunteers, sponsoring & organizational meeting

Dieter van Melkebeek asked me to post the following message. See also here I encourage the members of the CCC community to reply by May 8. 

As you may know, after a discussion period and a recent poll (see http://computationalcomplexity.org/forum), the steering committee of the Conference on Computational Complexity (CCC) is seriously considering the option of becoming an independent conference. This will only be possible if
(i) enough people are willing to make a commitment and help with this endeavor, and
(ii) we can gather the required startup funds.

VOLUNTEERING

Among other things we need volunteers for the following:
 - setting up the conference: creating a non-profit organization, other potential legal issues, insurance, banking, sponsoring, proceedings, registration, record keeping;
 - serving on the executive committee of the independent conference;
 - local organization for conferences in the near future.
If you are willing to help, please let us know and be as specific as possible.

START-UP FUNDS

We'd like to get an idea of how much funding we'd be able to collect. We're considering two types:
 - Funds that would be used to bootstrap the conference and are intended to be returned within three years. If you would be willing to contribute to this fund, please tell us how much.
 - Permanent donations. If you know of contacts for possible donors (labs, institutions), please let us know.

ORGANIZATIONAL MEETING

If we decide to become independent, we'd set up an organizational meeting the day before and/or after CCC'14 in Vancouver, i.e., Tuesday 6/10 and/or Saturday 6/14. If you are volunteering as above, please include in your response the following:
 - Would you be able to attend on Tuesday 6/10: Y/N
 - Would you be able to attend on Saturday 6/14: Y/N

Please send your response by May 8 to dieter@cs.wisc.edu.

Thanks for your consideration!

PhD Scholarships at the Gran Sasso Science Institute

The Gran Sasso Science Institute is a new international PhD school and a centre for advanced studies in physics, mathematics, computer science and social sciences. It is located in L'Aquila (Abruzzo, Italy) and enrolled its first cohort of PhD students this autumn. The first group of PhD students includes eight students in computer science and eight in mathematics. The institute has just advertised ten PhD fellowships, which might be of interest to your students or you. Please advertise them as you see fit.

The institute's organization mirrors the one of other international PhD schools in Italy and has strong ties with IMT Lucca. In particular, PhD students at GSSI receive a PhD stipend, free lodging and food vouchers and pay no tuition fees.

I arrived here on Thursday night to deliver a course. My first impression is positive: the staff at the institute are friendly and ready to assist guest professors and students, and the facilities seem good. I will try to write more on GSSI at a later time.

As an expat from Abruzzo myself (albeit from Pescara, a rival city of L'Aquila since it became the prominent centre in the region), I am happy to give my tiny contribution to the development of the GSSI, which I hope will become a permanent institute after the ministerial evaluation in three years' time.

------------ CALL for PhD Applications ----------------------------

10 fellowships at GSSI  L'AQUILA Italy, for the PhD program in
                      Computer Science
In collaboration with IMT - Institute for Advanced Studies - Lucca

Core Topics:
- Foundations of (Modern) Networks,
- Specification and Analysis of Concurrent Reactive Systems,
- Software Systems and Services.

Download the call for applications at

Online applications 
                                   Deadline June 22, 2014


Scholarships
GSSI awards scholarships for 3 years. 
The yearly amount of the scholarship is €16,159.91 gross.

Facilities and benefits
All PhD students will receive free accommodation, a waiver of tuition fees, and free luncheon vouchers.

Further Information

For additional information, please contact: info@gssi.infn.it

Friday, May 02, 2014

Gödel Prize 2014 to Ronald Fagin, Amnon Lotem, and Moni Naor for Optimal Aggregation Algorithms for Middleware

Ronald Fagin, Amnon Lotem, and Moni Naor will receive the 2014 Gödel Prize for their paper Optimal Aggregation Algorithms for Middleware (http://researcher.watson.ibm.com/researcher/files/us-fagin/jcss03.pdf), which introduced the powerful “threshold algorithm” that is widely used in applications and systems that demand optimal results for gathering multi-sourced information. The award, which recognizes outstanding papers in theoretical computer science, is presented by the European Association for Theoretical Computer Science (EATCS) and ACM’s Special Interest Group on Algorithms and Computation Theory (SIGACT). The ceremony takes place at the International Colloquium on Automata, Languages, and Programming (ICALP) http://icalp2014.itu.dk/ July 7-11, in Copenhagen, Denmark.

The prize-winning paper provides a framework to design and analyze algorithms where aggregation of information from multiple data sources is needed, such as in information retrieval and machine learning. In these situations, the threshold algorithm offers a very efficient method for producing a single unified list of the “top k” results from the combined data sources. The threshold algorithm’s elegant mathematical properties and simplicity are particularly suitable for use in middleware, software that is often used to augment computer operating systems that support complex, distributed applications. The authors also introduced the notion of instance optimality, an extremely strong guarantee of performance, and showed that the threshold algorithm is instance optimal. The paper’s groundbreaking results have built a foundation for much follow-on research. 
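The threshold algorithm itself fits in a few lines. The following is a compact sketch of my own with sum as the aggregation function; real middleware performs genuine sorted and random accesses against the data sources, which I replace here with in-memory lists and dictionaries, and I assume every object has a score in every list.

```python
import heapq

def threshold_algorithm(lists, k, agg=sum):
    """Top-k aggregation in the spirit of the Fagin-Lotem-Naor
    threshold algorithm. lists: one descending-sorted list of
    (object, score) pairs per source. Returns the k (object, aggregate
    score) pairs with the highest aggregate scores."""
    index = [dict(lst) for lst in lists]   # stand-in for random access
    top = {}                               # object -> aggregate score
    for depth in range(max(len(l) for l in lists)):
        bottom = []                        # last score seen in each list
        for lst, idx in zip(lists, index):
            if depth < len(lst):
                obj, score = lst[depth]    # sorted access
                bottom.append(score)
                if obj not in top:         # random access to other lists
                    top[obj] = agg(d[obj] for d in index)
            else:
                bottom.append(lst[-1][1])  # list exhausted: reuse last score
        threshold = agg(bottom)
        best = heapq.nlargest(k, top.items(), key=lambda kv: kv[1])
        if len(best) >= k and best[-1][1] >= threshold:
            return best                    # no unseen object can beat the top k
    return heapq.nlargest(k, top.items(), key=lambda kv: kv[1])
```

The stopping rule is the heart of the algorithm: once k seen objects aggregate to at least the threshold computed from the frontier of the sorted lists, no unseen object can overtake them, so the algorithm halts early — this is what the instance optimality guarantee formalizes.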
Congratulations to the award recipients and many thanks to the Gödel Prize Committee for this year!

Saturday, April 19, 2014

ACM SIGLOG chartered

After a journey of many years, ACM SIGLOG is now officially chartered and will play an important role in spreading the gospel of the intimate connections between logic and computation and in their further development. Congratulations to all the people who worked very hard over the last seven years to make this happen!

Prakash Panangaden is the first SIGLOG  chair and together with his team is working on getting the web site up and running and the SIGLOG Newsletter in shape for the first issue. 

As a member of the research community and as current president of the European Association for Theoretical Computer Science, I wish SIGLOG the best of luck and look forward to a fruitful cooperation between the EATCS and SIGLOG. 

For the moment I would like to encourage my readers to join SIGLOG officially.

The link is:

 https://campus.acm.org/public/qj/gensigqj/siglist/gensigqj_siglist.cfm


One can join SIGLOG without joining the ACM.

Wednesday, April 16, 2014

Accepted papers for ICALP 2014

The list of accepted papers for ICALP 2014 is now available here. The PC chairs told me that the quality of the submissions was very good and the scientific programme for the conference looks mouth-watering. Apart from the contributed papers and the excellent invited speakers, there will also be three award presentations. The conference dinner also promises to be a cultural highlight and the Copenhagen Jazz Festival will be in full swing.

There is going to be a lot going on in Copenhagen for TCS buffs this summer, which makes it one of the two places to be. (The other is Vienna.) 

I hope to see many of you at what will be a memorable ICALP. Thanks to Thore Husfeldt and his team for all the sterling work they are doing!

Monday, April 14, 2014

First EATCS Young Researcher School on Automata, Logic and Games


In the year 2013, the European Association for Theoretical Computer Science (EATCS) established a series of Young Researcher Schools on TCS topics. The first such school is devoted to Automata, Logic and Games, and it will be held in Telc, Czech Republic, from July 27 to August 1, 2014.
The programme consists of five Basic Tutorials (four hours each) devoted to fundamental subjects, and eight Advanced Lectures (2 hours each) focussed on recent results and specific topics.

Basic Tutorials
---------------

- Probabilistic Model-Checking
  Christel Baier (Dresden)

- Logic and Databases
  Phokion G. Kolaitis (Santa Cruz)

- Games and Synthesis
  Nir Piterman (Leicester)

- Logic and Automata
  Wolfgang Thomas (Aachen)

- Timed Automata
  Wang Yi (Uppsala)


Advanced Lectures
-----------------

- The Unbounding Quantifier
  Mikolaj Bojanczyk (Warsaw)

- Robustness in Timed Systems
  Patricia Bouyer-Decitre (Cachan)

- A Logic-Based Approach to Cloud Computing
  Jan Van den Bussche (Hasselt)

- Regular Automata and Monadic Theory
  Didier Caucal (Paris)

- Stochastic model checking - A continuous time perspective
  Holger Hermanns (Saarbruecken)

- Synthesis of Recursive Programs
  Martin Lange (Kassel)

- Infinite-State Probabilistic Systems
  Richard Mayr (Edinburgh)

- Prophetic Automata
  Thomas Wilke (Kiel)
See http://eatcs-school.fi.muni.cz/ for further details.

Friday, April 11, 2014

Adam W. Marcus, Daniel A. Spielman and Nikhil Srivastava to receive the Pólya Prize 2014

According to the web site for the prize, Adam W. Marcus, Daniel A. Spielman and Nikhil Srivastava will receive the Pólya Prize 2014.

The George Pólya Prize, established in 1969, is given every two years, alternately in two categories: (1) for a notable application of combinatorial theory; (2) for a notable contribution in another area of interest to George Pólya such as approximation theory, complex analysis, number theory, orthogonal polynomials, probability theory, or mathematical discovery and learning.

There does not seem to be a press release on the SIAM web site, but this should not prevent us from congratulating Adam, Dan and Nikhil for receiving this award. Congrats!

Sunday, March 23, 2014

Contemplate Ltd

These are days in which one of the outcomes of the work academics do, and one of the measures of its impact, is commercial exploitation of research and development carried out within the four walls of the "Ivory Tower". Universities the world over have different approaches towards commercial exploitation of research funding and IP rights and mine is no exception, apart from being a bit late in this game.

A discussion of IP policy issues might be the topic for a future post. Here I will just limit myself to pointing out an excellent example of a spin-off company from the University of Edinburgh, Contemplate Ltd, whose CTO and founder is Don Sannella. (Disclaimer: I have absolutely no connection with Contemplate myself!)

Contemplate was founded to commercialise research on static analysis done at the University of Edinburgh. Its product, ThreadSafe, pinpoints and helps to diagnose the most common and pernicious Java concurrency bugs. Concurrency is essential for high performance and low latency, but concurrent programming is hard to do right and therefore the use of automatic analysis tools should play an important role in the development of parallel software.

ThreadSafe is available as an easy-to-use Eclipse plug-in, which relates bug reports directly to the source code, and also as a SonarQube plugin (for team working). By the end of March it will also be available as a command-line tool that generates an HTML report (for use with build tools).

This InfoQ article shows ThreadSafe in action finding concurrency errors in open source applications including Apache JMeter and K9Mail that are not caught by any other Java static analysis tool. 

Free two-week trials are available from www.contemplateltd.com/threadsafe. Moreover, I understand that the tool will soon be offered with monthly and annual subscriptions.

If you develop concurrent software with Java, I encourage you to try the tool.

Saturday, March 15, 2014

EATCS-IPEC Nerode Prize 2014

The EATCS and IPEC are proud to announce that the Nerode Prize 2014 for outstanding papers in the area of multivariate algorithmics will be awarded to the following two papers:
  • "On problems without polynomial kernels", Hans L. Bodlaender, Rodney G. Downey, Michael R. Fellows, Danny Hermelin, Journal of Computer and System Sciences, 2009 and
  • "Infeasibility of instance compression and succinct PCPs for NP", Lance Fortnow, Rahul Santhanam, Journal of Computer and System Sciences, 2011.
Congratulations to all the award recipients!
 
The presentation of the prize will take place at IPEC 2014, which this year will be organized as part of ALGO 2014 (Wroclaw, Poland, 10-12 September 2014).

The Prize is named in honour of Anil Nerode, in recognition of his major contributions to mathematical logic, theory of automata, computability and complexity theory.

The Nerode Prize 2014 Committee consists of Georg Gottlob (University of Oxford, UK), Jan Arne Telle (University of Bergen, Norway), and Peter Widmayer (ETH Zurich, Switzerland; chair).

Thursday, March 13, 2014

ACM TALG Editor-in-Chief Search: Call for Nominations

Anne Condon asked me to distribute this call. Do consider nominating colleagues for this prestigious position.

Nominations are now open for the next editor-in-chief of the ACM Transactions on Algorithms. ACM TALG (web page: http://talg.acm.org/) publishes original research of the highest quality dealing with algorithms. It is a peer-reviewed journal, appearing quarterly. Specific areas of computation covered by the journal are listed at http://talg.acm.org/Aims.html.

We are looking for a well-established person with a strong record of research achievements and service, and with a vision for the future of the field. The term of appointment is three years, to begin late summer 2014, with the possibility of  renewal for a second term. The editor-in-chief is responsible for faithfully executing the editorial charter of the journal yet should be proactive in
adapting the journal and its charter to changes in the field. A description of the duties of the EiC and the evaluation criteria can be found at http://www.acm.org/publications/policies/evaluation.

Professor Valerie King of the University of Victoria, Victoria, BC, Canada is Chair of the Search Committee. All nominees, including self-nominees, should send a CV and a Vision Statement for TALG (at least one page), with subject header “EiC nomination” to: val@uvic.ca

The deadline for nominations is Friday, May 16, 2014 at 11:59 p.m. (PST).

Wednesday, March 12, 2014

February issue of the BEATCS on line

The 112th issue of the EATCS Bulletin is now available online for everyone at http://www.eatcs.org/beatcs/index.php/beatcs/issue/view/14

If you prefer, you can download a PDF of the printed version of the bulletin from http://www.eatcs.org/images/bulletin/beatcs112.pdf

Thanks to Efi and Ioannis from our secretariat in Greece for all the work they have done on this issue, and to the colleagues who contributed pieces to this issue of the BEATCS.

Readers of TCS blogs might notice that this issue of the BEATCS includes revised versions of two essays by Sanjeev Arora and Boaz Barak that had appeared earlier in Windows on Theory. Thanks to both Boaz and Sanjeev for agreeing to publish their pieces in the BEATCS!

The EATCS Bulletin has been open access for a few years now and we are happy to make it accessible also to interested readers who are not members of the association.

As you can read in my Letter from the President published in this issue, the EATCS has been taking several steps in support of young researchers, in addition to the instruments we had already in place. If you'd like to support these initiatives and forthcoming ones, consider joining the EATCS. The membership fee for one year (two years for young researchers) is 30€ and we offer a joint membership discount with SIGACT.

Tuesday, March 11, 2014

9th SCANDINAVIAN LOGIC SYMPOSIUM: Call for submissions

Various topics in logic in computer science are of interest for this meeting. Consider submitting an abstract!

9th SCANDINAVIAN LOGIC SYMPOSIUM
25-27 August 2014, University of Tampere, Finland
http://www.sis.uta.fi/SLS2014/


SECOND ANNOUNCEMENT AND CALL FOR SUBMISSIONS


The 9th Scandinavian Logic Symposium will be held at the Museum Centre
Vapriikki in Tampere, Finland, during 25-27 August 2014, under the auspices
of the Scandinavian Logic Society (SLS, http://scandinavianlogic.org/).

As with previous editions of the Symposium, its primary aims are to
reflect the current activities in logic in the Nordic countries and to
provide a local meeting forum for their logical communities, broadly
conceived. In addition, it warmly welcomes the participation of
logicians from all over the world.


SCOPE AND TOPICS
The scope of SLS 2014 is broad, ranging over the areas of
Mathematical and Philosophical Logic, as well as Logical Methods in
Computer Science, Artificial Intelligence, Linguistics, etc. Major
topics include (but are not limited to): Proof Theory, Constructivism,
Model Theory, Set Theory, Computability Theory, Algebra and Logic,
Categorical Logic, Logic and Computer Science, Logic and Linguistics,
Logic in AI and Multi-Agent Systems, Logics of Games, Modal and other
non-classical Logics, Philosophical Logic.




     INVITED SPEAKERS:
     Mai Gehrke (LIAFA, Paris)
     Volker Halbach (University of Oxford)
     Asger Törnquist (University of Copenhagen)
     Jouko Väänänen (University of Helsinki and University of Amsterdam)
     Thomas Ågotnes (University of Bergen)




     PROGRAM COMMITTEE
     Co-chairs: Sara Negri (University of Helsinki)
     and Valentin Goranko (Technical University of Denmark)


     Members:
     Luca Aceto (Reykjavik University)
     Lars Birkedal (Aarhus University)
     Patrick Blackburn (Roskilde University)
     Patricia Blanchette  (University of Notre Dame, US)
     Thierry Coquand (University of Gothenburg)
     Ali Enayat (University of Gothenburg)
     Øystein Linnebo (University of Oslo and Birkbeck College London)
     Kerkko Luosto (University of Tampere)
     Dag Normann (University of Oslo)
     Gabriel Sandu (University of Helsinki)
     Arild Waaler (University of Oslo)
     Dag Westerståhl (University of Stockholm)




     ORGANISING COMMITTEE
     Chair: Lauri Hella  (University of Tampere)


     Members:
     Kerkko Luosto (University of Tampere)
     Antti Kuusisto (University of Wroclaw)
     Jonni Virtema (University of Tampere)
     Jevgeni Haigora (University of Tampere)



     SUBMISSIONS
     Abstracts of contributed talks, in PDF format, not exceeding one A4
     (11pt) page, should be submitted by  April 25, 2014, through
     EasyChair: https://www.easychair.org/conferences/?conf=sls2014
     Abstracts should be typeset following the format of a LaTeX style
     file that will be posted on the conference website on March 16.



     IMPORTANT DATES:
     Submission deadline: April 25, 2014
     Notification: May 16, 2014
     Final programme: July 25, 2014

LOCATION
Museum Centre Vapriikki is situated on the banks of the Tammerkoski
rapids. It is within walking distance of the centre of Tampere and the
railway station.


     ACCOMMODATION
     There are several hotels within walking distance of the conference
     venue. The organizers will provide a list of some alternatives on the
     conference website.


REGISTRATION
Details concerning registration will be posted soon on the conference
website.



     ENQUIRIES: Write email to scandinavianlogicsymposium@gmail.com

Friday, March 07, 2014

David Woodruff receives the Presburger Award 2014


The EATCS is proud to announce that the Presburger Award 2014 Committee has chosen David Woodruff (IBM Almaden Research Center) as the recipient of the Presburger Award 2014. Congratulations to David!

Since 2010, the Presburger Award has been given each year to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929. The Presburger Award 2014 is sponsored by CWI Amsterdam and will be presented at ICALP 2014 in Copenhagen, Denmark.

David Woodruff, born in 1980, has made important contributions to the theory of data streams, both by creating new algorithms and by demonstrating that certain algorithms cannot exist. His work has an impact on other fields, including compressed sensing, machine learning, and numerical linear algebra. In the area of data streams, he resolved the Distinct Elements Problem, simultaneously optimizing the amount of memory used, the time needed to process each new entity, and the time needed to report an estimate of the number of distinct elements in the stream. In the area of machine learning, he used his previous results on data streams to design sub-linear algorithms for linear classification and minimum enclosing ball problems. In numerical linear algebra, he developed the first algorithms for low-rank approximation and regression that run in time proportional to the number of non-zero entries of the input matrix. His work also resulted in 17 patents related to data streams and their applications.
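To give a flavour of the Distinct Elements Problem mentioned in the citation: the goal is to estimate the number of distinct items in a long stream using memory far smaller than the number of items. The following is only an illustrative toy sketch (a K-Minimum-Values estimator), not Woodruff's optimal algorithm; all names in it are my own.

```python
import hashlib
import heapq

def _hash01(item) -> float:
    """Hash an item to a pseudo-uniform value in [0, 1)."""
    digest = hashlib.sha256(str(item).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def kmv_estimate(stream, k=256):
    """Estimate the number of distinct elements in a stream.

    K-Minimum-Values idea: keep only the k smallest distinct hash
    values ever seen. If hashes are uniform in [0, 1) and the k-th
    smallest is v, then distinct count is roughly (k - 1) / v.
    Memory used is O(k), independent of the stream length.
    """
    heap = []        # max-heap (via negation) of the k smallest hashes
    members = set()  # current heap contents, for O(1) duplicate checks
    for item in stream:
        v = _hash01(item)
        if v in members:
            continue
        if len(heap) < k:
            heapq.heappush(heap, -v)
            members.add(v)
        elif v < -heap[0]:
            evicted = -heapq.heappushpop(heap, -v)  # drop old k-th smallest
            members.remove(evicted)
            members.add(v)
    if len(heap) < k:
        return len(heap)         # fewer than k distinct items: exact count
    return (k - 1) / (-heap[0])  # -heap[0] is the k-th smallest hash
```

The estimator's relative error shrinks like 1/√k, so larger sketches trade memory for accuracy; the optimal algorithms in the citation achieve much stronger simultaneous guarantees on memory, update time, and query time.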

The 2014 Presburger Award Committee consisted of
Antonin Kucera (Brno), chair
Claire Mathieu (ENS Paris)
Peter Widmayer (Zurich)

Wednesday, March 05, 2014

EATCS Fellows class of 2014 named

The EATCS has recognized ten of its members for their outstanding contributions to theoretical computer science by naming them as the first recipients of an EATCS fellowship.

The EATCS Fellows for 2014 are:
  • Susanne Albers (Technische Universität München, Germany) for "her contributions to the design and analysis of algorithms, especially online algorithms, approximation algorithms, algorithmic game theory and algorithm engineering";
  • Giorgio Ausiello (Università di Roma "La Sapienza", Italy) for "the impact of his scientific work in the field of algorithms and computational complexity and for his service to the scientific community";
  • the late Wilfried Brauer (Technische Universität München, Germany) for "outstanding contributions to the foundation and organization of the European TCS community";
  • Herbert Edelsbrunner (Institute of Science and Technology Austria and Duke University, USA) for "his tremendous impact on the field of computational geometry";
  • Mike Fellows (Charles Darwin University, Australia) for "his role in founding the field of parameterized complexity theory, which has become a major subfield of research in theoretical computer science, and for being a leader in computer science education";
  • Yuri Gurevich (Microsoft Research, USA) for "his development of abstract state machines and for outstanding contributions to algebra, logic, game theory, complexity theory and  software engineering";
  • Monika Henzinger (University of Vienna, Austria) for "being one of the pioneers of web algorithms, algorithms that deal with problems of the world wide web";
  • Jean-Eric Pin (LIAFA, CNRS and University Paris Diderot, France) for "outstanding contributions to the algebraic theory of automata and languages in connection with logic, topology, and combinatorics and service to the European TCS community";
  • Paul Spirakis (University of Liverpool, UK, and University of Patras, Greece) for "seminal papers on Random Graphs and Population Protocols, Algorithmic Game Theory, as well as Robust Parallel Distributed Computing";
  • Wolfgang Thomas (RWTH Aachen University, Germany) for "foundational  contributions to the development of automata theory as a framework for modelling, analyzing, verifying and synthesizing information processing systems."
The aforementioned members of the EATCS were selected by the EATCS Fellow Selection Committee, after examining the nominations received from our research community. The EATCS Fellow Selection Committee for 2014 consisted of
  • Rocco De Nicola (IMT Lucca, Italy),
  • Paul Goldberg (Oxford, UK),
  • Anca Muscholl (Bordeaux, France; chair),
  • Dorothea Wagner (Karlsruhe, Germany) and
  • Roger Wattenhofer (ETH Zurich, CH)
The EATCS Fellows Program was established by the association last year to recognize outstanding EATCS members for their scientific achievements in the field of Theoretical Computer Science. The EATCS is very proud to have the above-mentioned members of the organization as its first fellows. Congratulations to all of them!