*Prompted by a reference to it in a recent CACM editorial by Moshe Vardi, I belatedly read the very interesting piece on logic activities in Europe that Yuri Gurevich wrote in 1992. I was struck by the idea that it might be interesting to ask some selected colleagues to contribute (short) opinion pieces to the Bulletin of the EATCS reflecting on the points raised by Yuri in that article twenty years later.*
*In order to whet your appetite, I post below Moshe's contribution. Thanks to Moshe for taking the time to write this piece and for allowing me to share it on this blog. Enjoy it!*
**On the Two Sides of the Atlantic in Logic and Computation**

Moshe Vardi
Rice University

The key issue raised in that editorial was the split between Volume-A-type and Volume-B-type research in Theoretical Computer Science (TCS), referring to the 1990 *Handbook of Theoretical Computer Science*, with Jan van Leeuwen as editor. The handbook consisted of Volume A, focusing on algorithms and complexity, and Volume B, focusing on formal models and semantics. In other words, Volume A is the theory of algorithms, while Volume B is the theory of systems (hardware and software). North American TCS tends to be quite heavily focused on Volume A, while European TCS tends to encompass both Volume A and Volume B. The ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) is, de facto, a special-interest group for Volume-A TCS.

I pointed out in my editorial that this division did not exist prior to the 1980s. In fact, the tables of contents of the proceedings of the two premier North American TCS conferences, the IEEE Symposium on Foundations of Computer Science (FOCS) and the ACM Symposium on Theory of Computing (STOC), from the 1970s reveal a surprisingly (from today's perspective) high level of Volume-B content. In the 1980s, the level of TCS activity in North America grew beyond the capacity of two annual single-track three-day conferences, which led to the launch of what were then known as "satellite conferences." Shedding the "satellite" topics allowed FOCS and STOC to specialize and develop a narrower focus on TCS. But this narrower focus has in turn influenced what is considered TCS in North America. In contrast, the European Association for Theoretical Computer Science (EATCS) expanded the scope of its flagship conference, the International Colloquium on Automata, Languages, and Programming (ICALP), by reorganizing the conference along several tracks. In 2015, ICALP consisted of three tracks: Track A: Algorithms, Complexity and Games; Track B: Logic, Semantics, Automata and Theory of Programming; and Track C: Foundations of Networked Computation: Models, Algorithms and Information Management. The reorganization along tracks allowed EATCS to broaden its scope rather than narrow it, as SIGACT did.

But the reality is that if one zooms in on Volume-B research, one finds the Volume-A/Volume-B dichotomy again, also reflected in the range of topics of the Symposium on Logic in Computer Science (LICS), the flagship conference of the ACM Special Interest Group on Logic and Computation (SIGLOG). Sub-volume A of Volume-B research is concerned with connections between logic, algorithms, and computational complexity. Descriptive complexity theory, for example, aims at bridging computational complexity and logic by studying the expressive power needed to describe problems in given complexity classes. A celebrated result in this area is Fagin's Theorem, which relates NP to existential second-order logic. Model checking, as another example, studies the evaluation of logical formalisms, including various temporal logics, over finitely represented structures. Automata theory often provides tools to bridge between logic and algorithms. The Büchi–Elgot–Trakhtenbrot Theorem, for example, provides automata-theoretic tools for solving the satisfiability problem for monadic second-order logic on finite words.
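To give a concrete flavor of Fagin's Theorem, here is a standard textbook illustration (not from the original essay): graph 3-colorability, an NP-complete problem, is captured by an existential second-order sentence over graphs with edge relation $E$.

```latex
% Fagin's Theorem: a property of finite structures is in NP iff it is
% definable in existential second-order logic.  For example, a graph
% is 3-colorable iff it satisfies
\exists R\,\exists G\,\exists B\;\Big[
  \forall x\,\big(R(x) \lor G(x) \lor B(x)\big)
  \;\land\;
  \forall x\,\forall y\,\Big(E(x,y) \rightarrow
    \neg\big(R(x)\land R(y)\big) \land
    \neg\big(G(x)\land G(y)\big) \land
    \neg\big(B(x)\land B(y)\big)\Big)\Big]
% The second-order quantifiers "guess" three color classes; the
% first-order part checks that every vertex is colored and that no
% edge joins two vertices sharing a class.
```

The pattern is general: the second-order existential quantifiers play the role of an NP machine's guess, and the first-order matrix plays the role of its polynomial-time verification.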

Sub-volume B of Volume-B research, in contrast, is concerned with semantic and methodological foundations for programming and programming languages. Domain theory, for example, studies special kinds of partially ordered sets called domains. Domain theory is used to specify denotational semantics, especially for functional programming languages. Category theory, as another example, formalizes mathematical structure and its concepts in terms of a collection of objects and of arrows (also called morphisms). Category theory provides powerful modeling idioms and has deep connections to types in programming languages. Concurrency theory studies formalisms for modeling and analyzing concurrent systems. Proof theory is of major interest in Sub-volume B of Volume-B research, and a distinguished result is the Curry–Howard Correspondence, which provides a direct relationship between types in computer programs and formal proofs in certain logics.
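A small illustration of the Curry–Howard Correspondence (my sketch, not from the essay): in a language like Lean, the very same lambda term can be read as a proof of an implication and as an ordinary program, because propositions are types and proofs are programs.

```lean
-- Transitivity of implication, proved by chaining the two hypotheses;
-- the proof term is just function composition.
theorem imp_trans (A B C : Prop) (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)

-- The identical term, read as a polymorphic program on data types:
def comp (α β γ : Type) (f : α → β) (g : β → γ) : α → γ :=
  fun a => g (f a)
```

The correspondence runs deep: conjunction corresponds to pair types, disjunction to sum types, and proof normalization to program evaluation.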

The split between Sub-volumes A and B within Volume-B research can perhaps be traced to the standard division of mathematical logic into several branches: computability theory, model theory, proof theory, and set theory. (See the 1977 *Handbook of Mathematical Logic*, with Jon Barwise as editor.) While set theory has no clear computer-science counterpart, Sub-volume A of Volume-B research can be traced to computability theory and model theory, while Sub-volume B of Volume-B research can be traced to proof theory. Indeed, a scientific discipline, as it grows and matures, inevitably grows branches, which gradually grow apart from each other. As scientists are forced to go deeper, it becomes gradually impossible for them to keep track of developments in more than a very small number of branches. In fact, different branches develop their own specialized languages, impeding communication between branches.

It is often at the interfaces between branches, however, that the most exciting developments occur. Consider Artificial Intelligence, for example. Since the establishment of the field in the late 1950s, logic has played a key role as the fundamental formalism for describing reasoning. Ultimately, however, logical tools were not fully adequate to capture the common-sense reasoning that characterizes human reasoning. In the 21st century, probabilistic and statistical approaches have become dominant, for example, in machine learning. Synthesizing the logical and probabilistic approaches is a new frontier, where I expect to see many exciting developments in the next few years.

Finally, while 25 years ago computing research took place mostly in North America and Western Europe, computing research has since globalized. The Atlantic Ocean is no longer as dominant as it used to be. I look forward to the day when we will write about “The Two Sides of the Pacific/Indian Ocean in Logic and Computation.”