An anonymous comment to my previous post on ICALP 2008 asked
"I'm very curious what Valiant talked about regarding Evolution, Intelligence and the Human Brain. In particular, whether he said anything concrete."

Here are some recollections I have from Leslie Valiant's talk. I apologize if I have misinterpreted the main messages of his presentation.
Leslie started his talk by recalling that the first conference he ever attended was ALP 1972 in Paris, the meeting that would later become the first ICALP conference. He also quoted an excerpt, written by Maurice Nivat, from the preface of the proceedings of that conference, which I copied verbatim:
“If I were asked, in this preface, to explain how each of these forty five papers contributes to the solution of the main problems in Computer Science, I would be very embarrassed.”

I leave it to you to interpret this quote :-)
The bulk of Leslie's presentation was supposed to deal with three topics that interest him right now, namely:
- Evolution.
- Intelligence.
- Human Brain.
Regarding evolution, Leslie Valiant remarked that in the 19th century Darwin and Wallace posited an algorithmic mechanism for evolution: “random variation and natural selection.” Up to this day, there are no serious rivals to the Darwin/Wallace theory, but computer simulations of “random variation and selection” offer rather unimpressive results in evolving complex mechanisms.
Leslie mentioned the "quantitative mystery" behind evolution:
- How can such wonderful DNA strings, 3 billion characters long, be found in only 3 billion years?
- Does evolving complex circuits take exponential time?
In this context, he mentioned the following paper: V. Feldman, Evolvability from learning algorithms, Proc. 40th ACM Symposium on Theory of Computing (2008).
Regarding intelligence, Leslie started by asking:

"What are people good at that computers are bad at?"

His answer was "common-sense knowledge". He then gave the example of word completion (which he used in some recent experiments) and discussed the concept of robust logic, which he introduced about eight years ago. In his own words:
"Robust Logic gives a common semantics for learning and reasoning, polynomial-time algorithms for both, and soundness and completeness for reasoning."

Leslie described a recent experiment he carried out with Michael in which they developed a system to analyze about 500,000 sentences from the Wall Street Journal. They used a parser to find the structure of the sentences, learned rules to predict erased words, and investigated whether chaining these rules improves on baseline methods at predicting missing words. The results were a little better than those obtained with the baseline methods.
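To make the erased-word task concrete, here is a minimal toy sketch (not Valiant and Michael's actual system, whose learned rules and parser-based features are far richer): given sentences with one word removed, a "learned rule" predicts the word from its immediately preceding word using bigram counts, while the baseline ignores context and always guesses the globally most frequent word. The tiny training corpus and all function names are my own illustrative inventions.

```python
# Toy erased-word prediction: bigram "rules" vs. a frequency baseline.
# This is an illustrative sketch, not the system described in the talk.
from collections import Counter, defaultdict

# Hypothetical miniature training corpus.
train = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran on the mat",
]

unigrams = Counter()                # overall word counts
bigrams = defaultdict(Counter)     # bigrams[prev][word] = count
for sentence in train:
    words = sentence.split()
    unigrams.update(words)
    for prev, word in zip(words, words[1:]):
        bigrams[prev][word] += 1

def baseline_guess(_prev):
    # Baseline: ignore context, return the most frequent word overall.
    return unigrams.most_common(1)[0][0]

def rule_guess(prev):
    # "Learned rule": if we have seen the preceding word, predict the
    # word that most often follows it; otherwise fall back to the baseline.
    if prev in bigrams:
        return bigrams[prev].most_common(1)[0][0]
    return baseline_guess(prev)

# Predict the erased word in "a dog sat on the ___".
print(baseline_guess("the"))  # -> "the" (most frequent word overall)
print(rule_guess("the"))      # -> "mat" ("the mat" occurs twice in training)
```

Chaining, in this picture, would mean feeding the output of one learned rule into the condition of another; the experiment asked whether such chains beat the flat baseline, and the answer was: only slightly.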
Leslie concluded by saying that we need good "teaching material" for systems like the one he and Michael developed.