Friday, July 17, 2015

ICALP/LICS 2015: Christos Papadimitriou on 40 years of TCS

As part of a session devoted to the 40th anniversary of TCS (the journal) and just before the conference dinner, Christos Papadimitriou delivered a talk entitled 40 Years of TCS (the field). The talk was very interesting and inspirational. Since I trust that its content might be of interest to some of the readers of this blog, and to the community in general, I'll devote this post to summarizing the main points made by Christos in his presentation.

Christos started off by saying that in the early 1970s, ICALP was right on the money since TCS was roughly equivalent to Languages, Automata and Programs. At that time, 
  • P vs NP was, as Christos put it, an "obscure recent conjecture";
  • Compilers and the “semantics problem” were the topics that any young researcher who wanted to have impact and secure fame should work on;
  • Relational databases were on the horizon; and
  • TCS owned CS, in the sense that the responsibility and main mission of TCS researchers was to outfit all areas of CS with rigour and the power of mathematics, and that TCS researchers of that time knew all of CS and exercised complete intellectual hegemony over the rest of CS.
However, in those times CS was a second class citizen in the society of scientific fields.

Over the next twenty years, CS was an insecure and introverted field. However, with crucial help from TCS, it made progress on some great and difficult engineering problems such as compilers, databases, chips, OS, networks, software and parallelism, to name but a few. Along the way, fast algorithms for some tough engineering problems were developed, and the great importance of P vs NP became clearer as CS flooded the planet with IT. In Christos' opinion, CS grew tremendously in prestige because of these three reasons (that is, tough engineering problems, flooding the planet, P vs NP).

During those twenty years, TCS gave rigorous machinery for the engineering problems and scouted for depth.

That was also the time during which the split between TCS A and TCS B research occurred in North America. In his talk, Christos recounted the story of STOC 1985 in Providence, Rhode Island, during which all the accepted Vol. B papers were put in a single session (in which  his students Cosmadakis and Kanellakis had a paper) and an excursion to Newport Beach was organized at the same time!  After that, he said that Vol. B papers were basically not submitted any more to STOC/FOCS. 

Then the internet happened and deeply changed CS, which became extroverted, a natural science and a social science. TCS changed too, as it moved towards mathematics and opened up to the other sciences by applying the algorithmic lens to study their fundamental problems. In his talk, Christos mentioned the following examples of areas to which the algorithmic lens has been applied:
  • Statistical mechanics and phase transitions
  • Quantum computation
  • Economics and game theory
  • Evolution
  • The study of the brain. 
In his talk, he focussed on the study of evolution and the brain.

The theory of evolution is a brilliant theory for which a deluge of data is available. However, most important questions remain unanswered: Why is there so much genetic diversity? What is the role of sex/recombination? Is Evolution optimizing anything?

Christos pointed out a surprising connection between evolution and game theory that he explored in this paper with Chastain, Livnat and U. Vazirani. The short story is that the evolution of a population is a repeated game between genes, the strategies of each gene are its alleles, the common utility is the organism's fitness, and the genes update their probabilities of play through multiplicative updates.
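To give a flavour of the multiplicative-update dynamics mentioned above, here is a minimal toy sketch (not the actual model of the Chastain–Livnat–Papadimitriou–Vazirani paper, which concerns coordination games between genes under weak selection): a single hypothetical gene maintains a probability distribution over its alleles and reweights each allele multiplicatively by its observed fitness contribution.

```python
def multiplicative_update(freqs, payoffs, eta=0.1):
    """One round of a multiplicative-weights update on allele frequencies.

    freqs:   current probability of each allele (sums to 1)
    payoffs: fitness contribution of each allele in this generation
    eta:     learning rate (selection strength in this toy model)
    Returns the renormalized allele frequencies.
    """
    reweighted = [f * (1.0 + eta * p) for f, p in zip(freqs, payoffs)]
    total = sum(reweighted)
    return [w / total for w in reweighted]

# Toy run: one gene with three alleles, started uniform; allele 2 is
# consistently the fittest, so its frequency grows over the generations.
freqs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    freqs = multiplicative_update(freqs, payoffs=[0.9, 1.0, 1.1])
print([round(f, 3) for f in freqs])
```

In the actual game-theoretic story each gene runs such an update simultaneously, with the shared payoff being the fitness of the whole organism; the toy above only illustrates the update rule itself.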

Moving to the study of the brain, Christos started by pointing out some great disconnects in our knowledge of the brain:
  • Babies vs computers (why is it that babies can easily achieve cognitive tasks that are very hard for computers? Addendum: According to Clement's recollection, Christos also asked why is it that "computers can easily achieve cognitive tasks that are very hard for babies"?)
  • Clever algorithms vs what happens in cortex
  • Understanding Brain anatomy and function vs understanding the emergence of the Mind
  • No known “neurally plausible” algorithm solving a nontrivial problem
  • Plus, disruptive insights such as downward traffic, reciprocity and clustering, prediction.
He forcefully stated that computer scientists should work on those problems. Indeed, Valiant has done work on the brain since 1995, pointing out that theory is essential and proposing the neuroidal model and vicinal algorithms. In the paper Cortical Learning via Prediction with Santosh Vempala (to appear at COLT 2015), Christos has studied a version of Valiant's Join primitive, called the Predictive Join (Pjoin), and proved that, in a precise technical sense, it solves the unsupervised learning problem.

Christos said that the PJoin and the PLink primitives seem to be useful for learning grammar and so a natural next step in his research is to tackle the problem of the emergence of language. His hypothesis is that language evolved in order  to exploit the strengths of the brain.

Christos closed his talk with a tantalizing question: What if the Mind emerges from rudimentary primitives, massive hardware and our brain's interaction with the very interesting environment provided by our planet?

This was a fitting finale to a wonderful story, devoted to a fascinating subject (TCS) and recounted by one of the true intellectual giants of our discipline.

Disclaimer: Any infelicity and inaccuracy in this report is solely my responsibility.

4 comments:

vz nuri said...

its really great to have a few such broadminded scientists in (T)CS like papadimitriou who can see the forest for the trees, instead of the narrow-focused specialization that sometimes inundates the field (like other scientific fields). also while widespread, recognition and exploration of the "algorithmic lens" still seems in early days with vast terra incognita remaining. more on the algorithmic lens

vz nuri said...

oh, one other item/ feedback. re point "No known “neurally plausible” algorithm solving a nontrivial problem". actually this seems to be superseded by recent research mainly into image recognition/ vision processing. algorithmic models are increasingly biologically plausible and there is fairly solid theory about how the mammalian vision processing system works wrt algorithm perspectives eg "sparse coding," "feature detection" etc. some of this is captured in new/ cutting edge "deep learning" approaches even recently generating big headlines. agreed these approaches are not attempting to have high "biological fidelity/ verisimilitude" but they at least have high "biological similarity". part of the difficulty here is the compartmentalization of scientific fields. biologists study biology, machine learning theorists study machine learning. there are some rare crosscutting scientists/ work. again papadimitriou is a case study here eg wrt his interest in "computational evolutionary theory".

Clement C. said...

Nice summary — a small nitpick, however: if I remember correctly (this may be a dubious assumption, however) the talk, the "Babies vs. Computer disconnect" was two-way:
"babies can easily achieve cognitive tasks that are very hard for computers"
AND
"computers can easily achieve cognitive tasks that are very hard for babies"

Luca Aceto said...

Thanks Clement. I mentioned your recollection in the post since I might have missed the other half of the disconnect. However, it seems to me that computers are capable of cognitive tasks that are very hard for humans of all ages :-)