Up@dawn 2.0

Wednesday, April 17, 2013

(16-1) Turing and Searle

April 16, 2013

Group 1 discussed the Turing test and whether computers will ever think.

You're sitting in a room. There is a door into the room with a letterbox. Every now and then a piece of card with a squiggle shape drawn on it comes through the door and drops on your doormat. Your task is to look up the squiggle in a book that is on the table in the room. Each squiggle is paired with another symbol in the book. You have to find your squiggle in the book, look at the symbol it is paired with, and then find a bit of card with a symbol that matches it from a pack in the room. You then carefully push that bit of card out through your letterbox. That's it. You do this for a while and wonder what's going on. This is the Chinese Room thought experiment, the invention of the American philosopher John Searle (born 1932).
It's an imaginary situation designed to show that a computer can't really think even if it seems to. In order to see what's going on here you need to understand the Turing Test. Alan Turing (1912–54) was an outstanding Cambridge mathematician who helped to invent the modern computer. His number-crunching machines built during the Second World War at Bletchley Park in England cracked the ‘Enigma’ codes used by German submarine commanders. The Allies could then intercept messages and know what the Nazis were planning. Intrigued by the idea that one day
computers might do more than crack codes, and could be genuinely intelligent, in 1950 he suggested a test that any such computer would have to pass. This has come to be known as the Turing Test for artificial intelligence but he originally called it the Imitation Game. It comes from his belief that what's interesting about the brain isn't that it has the consistency of cold porridge. Its function matters more than the way it wobbles when removed from the head, or the fact that it is grey. Computers may be hard and made from electronic components, but they can still do many things brains do. When we judge whether a person is intelligent or not we do that based on the answers they give to questions rather than opening up their brains to look at how the neurons join up.
So it's only fair that when we judge computers we focus on external evidence rather than on how they are constructed. We should look at inputs and outputs, not the blood and nerves or the wiring and transistors inside. Here's what Turing suggested. A tester is in one room, typing a conversation on to a screen. The tester doesn't know whether he or she is having a conversation with another person in a different room via the screen – or with a computer generating its own answers. If during the conversation the tester can't tell whether there is a computer or a human being responding, the computer passes the Turing Test. If a computer passes that test then it is reasonable to say that it is intelligent – not just in a metaphorical way, but in the way that a human being can be.
What Searle's Chinese Room example – the scenario with the squiggles on bits of card – is meant to show is that even if a computer passed Turing's test for artificial intelligence that wouldn't prove that it genuinely understood anything. Remember you are in this room with strange symbols coming through the letterbox and are passing other symbols back out through the letterbox, and you are guided by a rulebook. This is a meaningless task for you, and you have no idea why you are doing it. But without your realizing it, you are answering questions in Chinese. You only speak English and know no Chinese at all. But the signs coming in are questions in Chinese, and the signs you give out are plausible answers to those questions. The Chinese Room with you in it wins the Imitation Game. You give answers that would fool someone outside into thinking that you really understand what you are talking about.
 So, this suggests, a computer that passes the Turing Test isn't necessarily intelligent, since from within the room you don't have any sense of what's being discussed at all. Searle thinks that computers are like someone in the Chinese Room: they don't really have intelligence and can't really think. All they do is shuffle symbols around following rules that their makers have programmed into them. The processes they use are built into the software. But that is very different from truly understanding something or having genuine intelligence. Another way of putting this is that the people who program the computer give it a syntax: that is, they provide rules about the correct order in which to process the symbols. But they don't provide it with a semantics: they don't give meanings to the symbols. Human beings mean things when they speak – their thoughts relate in various ways to the world. Computers that seem to mean things are only imitating human thought, a bit like parrots. Although a parrot can mimic speech, it never really understands what it is saying. Similarly, according to Searle, computers don't really understand or think about anything: you can't get semantics from syntax alone.

Warburton, Nigel (2011-10-25). A Little History of Philosophy (pp. 234-237). Yale University Press. Kindle Edition.
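The rulebook in Searle's thought experiment can be sketched as a simple lookup table: symbols in, symbols out, with no understanding anywhere in the process. The following toy sketch uses an invented mapping (the phrases and replies are illustrative assumptions, not a real conversation system) to show how the room produces plausible answers purely by syntax:

```python
# A toy sketch of Searle's Chinese Room as pure symbol manipulation.
# The "rulebook" is a hypothetical lookup table: each incoming symbol
# (a question in Chinese) is paired with an outgoing symbol (a reply).
rulebook = {
    "你好吗": "我很好",           # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",    # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(incoming_card: str) -> str:
    """Find the incoming card in the rulebook and push the paired card
    back out through the letterbox. No step involves understanding what
    either symbol means."""
    return rulebook.get(incoming_card, "不知道")  # default reply: "don't know"

print(chinese_room("你好吗"))  # prints: 我很好
```

The point of the sketch is exactly Searle's: the program has a syntax (rules for which symbol follows which) but no semantics, so even a convincing reply proves nothing about understanding.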

So what do you think? Do computers actually have intelligence? And will there ever be a time when computers are able to think for themselves, and if so, when?
