Friday, October 13, 2006

Teaching Reductions

This semester I am teaching Introduction to the Theory of Computation at Reykjavík University, a course taken by third-year BSc students and MSc students alike. I like teaching this course (based on Michael Sipser's book of the same title) very much, and I try to convey to the students the intellectual excitement I feel about the theory of automata, computability and complexity.

After last Wednesday's lecture, however, I was totally drained of energy. Why? During Tuesday's lecture I introduced the concept of reduction, and used it to prove the undecidability of the emptiness, equality and regularity problems for Turing machines. The day after, I started the lecture with a warm-up peer-instruction session, asking the students to argue that it is undecidable whether a Turing machine accepts the language {Alan, Turing}. The reaction was summarized by the blank stare I got from most of my audience. Even when we worked through the solution together, I still had the feeling that many of them were not comfortable with the notion of reduction. I had to work very hard to clarify it as best I could, and it is not yet clear to me whether I succeeded.
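For the record, the intended solution reduces the acceptance problem to the problem at hand. The sketch below, in Python for concreteness, is one way of presenting the construction; it assumes, purely for illustration, that machines are represented as Python functions returning "accept" or "reject" (or looping forever).

    def make_M_prime(M, w):
        # The reduction: from the pair (M, w) it builds the machine M'.
        # Note that M is never run here; it is only captured for later use.
        def M_prime(x):
            if x not in {"Alan", "Turing"}:
                return "reject"
            # Run M on the fixed string w; the input x plays no further role.
            return M(w)
        return M_prime

If M accepts w, then M' accepts exactly the strings Alan and Turing; if M rejects w or loops on it, M' accepts nothing. So M' accepts the language {Alan, Turing} if, and only if, M accepts w, and a decider for the new problem would yield a decider for the acceptance problem, which does not exist.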

Looking back on my previous experiences teaching this course, or variations on it, in Aalborg, this is not surprising. The notion of reduction is the bread-and-butter of computability and complexity, and one of the cornerstones of algorithmic thinking. Once students have mastered it, the rest of the material in my introductory courses becomes, I would venture to say, relatively easy. It is one of those powerful concepts that opens up new vistas, and, despite being very natural and ubiquitous in the theory of computing, requires some intellectual maturity to be understood fully.

A typical pitfall I have experienced over the years is that many students think that the pre-processing algorithm converting, say, the question "Does a Turing machine M accept the string w?" into an equivalent one of the form "Does the Turing machine M' accept the empty language?" actually runs M on input w simply because, while producing the code for M', it writes the line of code

Run M on input w.

Here I usually use the analogy with a compiler to try and dispel their doubts. (Basically, I tell them to view the pre-processing algorithm implementing the reduction as a compiler between inputs to two different problems. The pre-processing algorithm outputs a piece of code, but never runs it.) It is not up to me to say whether this works, but it seems to me that most of the students "crack the code", and start thinking naturally in terms of reductions between algorithmic problems.
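To make the analogy concrete, one can even write the reduction down as a program that manipulates text. The following sketch is hypothetical Python, with run standing in for a universal simulator (it appears only inside a string): the reduction builds the source code of M' as a piece of text, so the contentious line "Run M on input w" is written, but never executed, by the reduction itself.

    def reduction_to_emptiness(M_code, w):
        # Compiler view: translate an instance (M, w) of the acceptance
        # problem into (the source code of) an instance M' of the
        # emptiness problem.  M_code is the textual description of M.
        source_of_M_prime = (
            "def M_prime(x):\n"
            "    # Ignore x and run M on input w; this line is executed\n"
            "    # only if someone later chooses to run M_prime.\n"
            f"    return run({M_code!r}, {w!r})\n"
        )
        return source_of_M_prime  # a piece of text: M itself is never run

Note that the reduction halts on every input (M, w), whether or not M halts on w, precisely because all it does is assemble text. The machine M' it describes accepts the empty language if, and only if, M does not accept w, so a decider for emptiness would yield one for acceptance.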

Is it just my personal experience, or are reductions hard to grasp for many of our students? I'd be happy to hear how you go about teaching reductions in your classes on the theory of computation, and what analogies/metaphors you use to help your students understand such a fundamental notion.
