June 25, 2005

Turning to Turing

Can computers think? This question has fascinated scientists for over half a century, since the first computers were born. Computers are good at processing data, solving complicated problems, doing repetitive tasks -- far better at these things than humans are. But can they think?

Then again, what do we mean by "thinking"? A computer can add up a row of figures in an instant. Does that constitute thinking? After all, there's something disappointingly trivial about such a task. Thinking, to most people, means knowledge, understanding, intelligence, creativity: there's none of that in just adding numbers. But trying to make computers exhibit those traits gave us a whole new discipline: Artificial Intelligence (AI) -- the science of making computers think, or behave in human ways.

But how can we tell that a computer is thinking? Here's one way: if it acts as a person acts when thinking, we might say the computer is itself thinking. Suppose you ask a computer the same questions you might ask a person, and it gives you answers indistinguishable from the person's answers. If it did this consistently, you would likely say the machine is thinking.

In 1950, Alan Turing made exactly this argument in a remarkable paper called "Computing Machinery and Intelligence". Thus was born what we now know as the Turing Test.

In the Test, a questioner puts a series of questions to a human volunteer and to a computer. Questions and answers are transferred back and forth via a keyboard and a screen, so the only information available to the questioner is what shows up on her screen. The computer is programmed to give answers that are as human-like as possible; the volunteer, we presume, will be human-like anyway.

The questioner must now decide: which of the two is the human?

If over a series of these tests, the questioner is consistently unable to identify the human volunteer, the computer passes the Test.
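If you like a concrete rendering of "consistently unable to identify", here is a toy sketch in Python. The scoring rule and the crude 60% threshold are my own inventions, purely for illustration -- Turing specified nothing of the sort -- but the idea is that the machine passes when, over many rounds, the questioner picks out the human no more reliably than a coin flip.

    import random

    def passes_test(questioner_identifies_human, rounds=100):
        # questioner_identifies_human() returns True when the questioner
        # correctly picks out the human volunteer in one round.
        correct = sum(questioner_identifies_human() for _ in range(rounds))
        # Pure guessing gets about 50% right; the machine passes if the
        # questioner cannot reliably beat that.
        return correct / rounds < 0.6

    # A questioner reduced to coin-flipping -- the machine passes:
    print(passes_test(lambda: random.random() < 0.5))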

Passing the Turing Test has become a kind of Holy Grail for AI. Its practitioners believe that it is a strong, even sufficient, indication of a computer's intelligence and ability to think. And it's hard to disagree. For this -- questioning, discussion, conversation -- is precisely how we decide that a fellow human is intelligent. Shouldn't we apply the same standard to a computer?

Pursuing their Holy Grail, AI researchers have produced a number of interesting systems. Two early ones simulated psychotherapists. This sample conversation (quoted by Roger Penrose in his The Emperor's New Mind) best shows their behaviour:
    Computer: How can I help you?
    Patient: I don't know.
    C: Try to tell me something about it.
    P: No.
    C: Can you elaborate?
    P: I am sick and tired today.
    C: Do you have any idea why?
    P: Yes, I'm old.
    C: Please go on.
    P: I seem to have lost interest in everything.
    C: Tell me more about such feelings.
    P: I have some idea why I feel this way too.
    C: You have.
    P: It seems that I miss having a family.
    C: What makes you feel you miss having a family?

What do you think: is there some definite, almost spooky understanding happening here? If you said yes, you're in good company. K. M. Colby and Joseph Weizenbaum, who separately wrote the two programmes, reported that many people swore the computer really understood them. So well, too, that some preferred to unload all their problems on these electronic therapists rather than on human ones.

Yet there's no understanding going on at all. As any halfway decent programmer will tell you, these systems are simply following some ordinary rules about what to say.
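To see just how ordinary those rules can be, here is a minimal sketch in Python (a language, needless to say, that the original programmes were not written in). The patterns and stock phrases are my own, loosely modelled on the transcript above; there is nothing more to this "therapist" than keyword matching.

    import re

    # A handful of keyword rules, a response template for each, and a
    # stock phrase when nothing matches. That's the whole trick.
    RULES = [
        (r"\bI am (.+)", "Do you have any idea why you are {0}?"),
        (r"\bI miss (.+)", "What makes you feel you miss {0}?"),
        (r"\bI (?:seem to )?have (.+)", "Tell me more about such feelings."),
        (r"\bno\b", "Can you elaborate?"),
        (r"\byes\b", "Please go on."),
    ]

    def respond(utterance):
        text = utterance.strip().rstrip(".!?")
        for pattern, template in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Try to tell me something about it."

    print(respond("It seems that I miss having a family."))
    # -> What makes you feel you miss having a family?

Feed it the patient's lines above and it reproduces most of the computer's side of the conversation: matched keywords in, canned sympathy out.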

In the '70s, Roger Schank produced a programme that could understand stories like this one:
    A man walked into a restaurant and ordered a dosa. When it came, it was burned to a crisp. He was so furious that he walked out without paying or leaving a tip.

Asked "Did the man eat the dosa?", Schank's system correctly infers and answers: "No".

Impressive, you think? Many have argued that, in the limited sense that such systems answered simple questions about simple contexts just as a human would, they had already passed the Turing Test. But can we truly say there's thinking going on? That the computers understand Schank's stories, or a patient's neurotic woes?

In a thought experiment (now famous as the Chinese Room) that is still controversial among AI-ers, John Searle argued "no". He imagined himself simulating Schank's computer, like this.

Lock Searle in a room. Pass him Schank's story and the questions, now written in Chinese, which Searle doesn't follow. Also give him instructions in English on exactly what to do to process the story. He follows the instructions, and thus carries out the steps of Schank's programme. Eventually he hands over the answers to the questions, also in Chinese.

Fine? But -- this is a critical "but" -- in what sense has Searle, sitting in this room, understood the stories, especially since he understands no Chinese at all?

With the identical reasoning, in what sense has Schank's computer understood the dosa story?

Some AI-ers would have it that intelligence is embodied in Schank's algorithm itself. They say that the mind works, in essence, like an enormously complicated algorithm. Others, like Searle, believe that intelligence cannot be simulated by computers carrying out the steps of an algorithm, however sophisticated. Intelligence, they say, means a certain consciousness that algorithms just don't have.

The debate between these two views rages on. But that's not such a bad thing. After all, AI's fondest hope is to produce a better understanding of intelligence itself. Sure, we might one day build a computer that passes the Turing Test, that thinks. But when that happens, the real victory will be what AI has taught us about our own minds.

Think of that, if you will.

10 comments:

Anonymous said...

I feel that there is a long way to go before computers start to "think". I have tried to understand the present-day "learning" algorithms; most of them are very similar to regression. Of course, they are interesting to study for the mathematics involved!

sameer said...

I think I agree with you. Have you read Thomas Nagel's "What Is It Like to Be a Bat?" It's a great read.

When Deep Blue beat Kasparov, some AI people went mad. I saw one game where Deep Blue played this incredibly beautiful, intuitive move -- not one a computer would ordinarily play. If Kasparov had played it, he would have just FELT it was a good move; Deep Blue could calculate far enough... But of course Deep Blue doesn't understand chess.

Anonymous said...

Computers can process inputs, yes. With concepts like neural networks, we can even program them to learn responses to inputs, and to carry out complex regressions to guess responses to previously unknown sets of inputs too. Your Chinese story illustrates this point... thinking isn't about input/output sets. A human's response in real-life (as opposed to trial) situations is based on inputs, prior learning... and that so-far-unquantified factor of "feelings" or "attitude" or "mood" or whatever you may call it.

Suresh Venkatasubramanian said...

"AI's fondest hope is to produce a better understanding of intelligence itself. Sure, we might one day build a computer that passes the Turing Test, that thinks. But when that happens, the real victory will be what AI has taught us about our own minds."

The interesting point here is that, to date, approaches to AI based on "modelling humans" -- ideas like genetic algorithms and neural nets -- are good general tools, but they invariably perform worse than specialized methods that really don't do anything that humans might (do we solve second-order partial differential equations in our heads? Maybe, maybe not).

It is almost a truism of AI that the minute an area of problem solving becomes amenable to computation, it moves out of AI. Fields like robotics, vision, and search were AI perennials, but the more technical and concrete they became, the less they were seen as part of AI itself, and more as computer science and signal processing.

It is also human nature to discount all aspects of human intelligence that can be automated. Chess playing, learning, language skills, etc. are all things that were once perceived to be core to the idea of what constitutes human intelligence.

Overall, I would identify myself as a 'weak AI' person. It might be possible one day to create computers that can simulate human intelligence in many ways. I don't, however, believe that such a machine would represent an accurate model of how humans function, modulo stunning new advances in neuroscience.

daemon said...

And then there's the Reverse Test: can a machine interrogator tell a machine from a human being, from their answers? Turing proposed his test partly to avoid the ambiguous terms "think" and "machine". Were a computer to consistently beat the Turing Test, we would come up with more refinements like the reverse one, to maintain the distinction between us and it. Cogito, ergo sum, I think :), is too deeply ingrained in us to be discarded.

The Tobacconist said...

I can't tell you enough how enriching working in this field is. It is amazing how much it tells you about yourself and how much your appreciation for the trivial grows.

As far as AI goes, I feel some of the progress that people in the '50s anticipated was unrealistic. They were excited by the work they were undertaking, and I think they grossly underestimated the difficulty of the problems we'd run up against.

Many AI researchers (the reasonable ones) strongly believe we are still at the very beginning and there is a long, long way to go. Sometimes AI researchers get caught up in the philosophy. There is no denying its significance, but there is also the need not to get bogged down by it.

Over the last couple of decades, a lot of fields have branched out, as someone has pointed out. I believe this is good, because we are going at each one of them with a microscope and really nailing the issues.

From my little experience, what I see are tremendous strides being made in technology. However, we are only too aware of the shortcomings.

This area is simply brilliant. It ought to be pursued by a greater number of people than it is now. In India in particular students still haven't caught on to the beauty of the field. I really hope more engineers take it up. I am sure we can contribute significantly.

I am really glad you chose to write this. I love reading your science pieces.

Anurag said...

BTW, if you download XEmacs, it comes with an inbuilt psychoanalyst game, which is quite fun. :))

Anurag said...

After posting the comment, I went and played a game with Psychotherapist, which was hilarious. Check out my blog for the transcript. :))

Anonymous said...

I find some optimistic conclusions about Artificial Intelligence -- and the devotion to that nomenclature, not the field of study itself -- quite disturbing.

One of my close friends has been talking recently about exploring the logical generatives of pursuing the theory of "artificial stupidity". I like that approach.

We'll understand our friends better that way.

Will computers ever be able to think? I think that is a strange and unnecessary question, considering that we haven't been able to build one single clock that tells the correct time, or is able to give the same time to everyone on the planet, or even to define time.

We don't know what information is.

We don't even know what makes us human beings do things -- whether we thought about the repercussions before we did them, or just did them out of some higher compulsion.

We do not even know how different thought is from action.

Anonymous said...

I know this comment is a little late, but I found your article very interesting and I think it really hits the nail on the head with some issues.