THE TEN BIG QUESTIONS Artificial Intelligence
Machines such as computers can calculate, but calculating is not the same as thinking. A chess computer running a program like Deep Blue, the program that defeated Kasparov, calculates moves at an amazing speed, but it does not think. It does not have beliefs or desires. It is not conscious.

The mathematician Alan Turing once proposed a test which could be used to settle the question of whether a computer met the criteria for being an 'artificial intelligence', that is to say, a test which would determine whether the computer was not only able to calculate but also to think, as we do. The idea of the test is childishly simple. Suppose you log onto a chat line. You have a conversation with an individual who calls himself 'Daniel'. How do you know that Daniel is not a machine programmed with set responses to the words that you type on the screen? If Daniel were able to hold up an intelligent conversation, responding appropriately to whatever questions you asked, then you would conclude that Daniel was capable of thinking. What Turing said was that it is irrelevant whether Daniel is a human being or a machine. A machine that can do what Daniel does is intelligent, is capable of thinking, by definition.

Your question, however, is how it is possible that a machine like 'Daniel', a machine with genuine 'artificial intelligence', could ever exist. Is that a technical question or a philosophical question? Daniel's 'brain' is not made of biological tissue but of metal, silicon and plastic. But why should biological tissue be the only material capable of producing thoughts? On the other hand, perhaps you are just as puzzled (as I am) by the question of how a person can think!

Daniel does not have a body, as we have. He sits motionless on a desk. His only action is to spew out words when words are fed in. For me, that is a serious, perhaps fatal objection, which is why I am not finally convinced by the 'Turing test'.
I share the view held by a number of researchers in this area that a genuine 'artificial intelligence' would have to possess something analogous to arms and legs, eyes and ears. It would have to be an agent, and not merely a language user.

Geoffrey Klempner
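The contrast Klempner draws, between a machine with set responses and one that can respond appropriately to anything, can be made concrete with a toy sketch. This is purely illustrative; the name `daniel` and the response table are invented for the example, and no real chatbot works this crudely:

```python
# A machine "programmed with set responses to the words that you type":
# a fixed lookup table plus a canned fallback. Hypothetical and minimal.

CANNED_RESPONSES = {
    "hello": "Hello! How are you today?",
    "what is your name?": "My name is Daniel.",
    "are you a machine?": "Why do you ask? Do I seem like one?",
}

def daniel(message: str) -> str:
    """Return a set response if the input matches, else deflect."""
    return CANNED_RESPONSES.get(
        message.strip().lower(),
        "That's interesting. Tell me more.",  # fixed fallback
    )

print(daniel("Hello"))                 # a matched, set response
print(daniel("What did I just say?"))  # unmatched: the fallback gives it away
```

The point of Turing's test is that no finite table like this survives open-ended questioning: a question that refers back to the conversation itself ("What did I just say?") drops straight into the fallback, which is exactly the kind of failure a persistent questioner would expose.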
Let me acquaint you with an incontrovertible fact: the mind is not a computer. Strictly speaking, that's the end of the evaluation. However, it is probably not unimportant to add that claims to the contrary shift the burden of proof onto the shoulders of those who make them; and to date no-one has put together an even half-way convincing case. I'll put a couple of ideas into your head further below, but let me finish the present train of thought first.

Fifty years ago, similar notions were touted about the mind being like a telephone exchange, and fifty years before that it was a more sophisticated variety of electric motor. And so on. You can see from this that whatever the present pinnacle of human invention happens to be immediately suggests itself as an analogue of the mind. But the mind (need I remind you?) is neither an electric motor nor a telephone exchange (nor even remotely like either of these); and the case for computers is not a jot better or more plausible. For not only is the mind not a computer, but it is not software either; and calling the brain and its contents "hardware" is just as far-fetched.

Now I'm not saying all these negative things to scare you away (fat chance!). Nor is it my intention to belittle the truly worthwhile research into artificial intelligence done by hundreds and perhaps thousands of earnest scientists; after all, the benefits are all around us, and they are meaningful. But one has to draw a line somewhere between research and the broadcasting of ideas that, inadequate as they are, still harbour the possibility of dehumanising the idea of human intelligence, and which do so from a basis of utterly inadequate evidence. That's my point; I'm not the only one making it; and I wish people would start to listen before it is too late. There is a sinister joke around about computers: the greatest danger for us is not that computers might begin to think like us, but that we will begin to think like computers.
In the end, the truth about computational theories of the mind is that neurologists and biologists, i.e. the people who study life "in the raw", not on a computer screen, would hardly lend their support to them. These people know too much about bodies and brains and nerves to be taken in by fancy electronic gizmos. So to end, I'll toss a couple of ideas your way that you might really like to think about.

1. The brain is made up of cells. Cells are living things, just like you and me; and this means they are neither logic gates nor chips nor wires. They are living things that make a living out of constructing bodies and brains and lungs and skin. Now being alive means, of course, that they are vulnerable to disease, to shortages of food, to tiredness and all the other problems that beset life forms; and eventually they grow old and die. Ask yourself: what happens to a computer when an algorithm looks for a memory address and can't find it? HANG UP! In the brain, some 10,000 cells die every day. Would you like to write a program for a computer where you are not allowed to define a certain cell as memory X, for fear that it might be dead tomorrow? With the rate of fatality I've just noted, how many times do you suppose your brain/computer would hang up on an average day? Do you think you or any of us would be around today to discuss this problem?

2. Okay, you might want to answer, surely there's got to be a way. After all, the brain does work as a sort of information processing device, even if our terminology is a bit naïve. Now I might be inclined to accept that point, but again with considerable reservations. For it is a commonly accepted theory that the brain, as an intelligence device, works by parallel distributed processing. Computers, as you know, work by digital processing.
These two methods are as different as chalk and cheese; and while we understand a great deal about the way the brain does its parallel processing, it is not a theory that is easily portable into machines. With all our sophistication, we are stuck on the problem that the only truly parallel distributed intelligence system is, in fact, the brain. What we can simulate by means of batteries of CPUs strung together is a very poor substitute, and in any case a bit of a cheat. But this being the case, we are back where we started. If the brain is a parallel processing information device, if the only truly parallel processing system known to us is the brain, and if the brain is part of the biological partition of the universe, then there is no warrant for holding that computationalism as an account of the mind has a hope in Hades of being a true account.

I don't know what age you are, and so it is a little difficult to recommend something for you to read. A great deal of the literature devoted to these subjects is a damned hard slog unless you have some prior training, for without it you're just as likely to be bowled over by one or another argument and left without that proper resource we call "independent thinking". Still, you might like to sample, on the "pro" side, books like After Thought, written by supercomputer expert James Bailey, or Paul Churchland's The Engine of Reason, the Seat of the Soul; and if you want a mind-spinning yarn, try Consciousness Explained by Daniel Dennett. But especially in regard to the last-named, be warned: this is a chrome-coated phantasmagoria that takes one hell of an effort to keep at arm's length. On the "contra" side, John Searle has written two smallish but very accessible books, and his The Rediscovery of the Mind is on the way to becoming a classic for the "no" case. The well-known mathematician Ian Stewart has written several books in which the mind is a prominent "character" (e.g. The Collapse of Chaos and Figments of Reality, with co-author Jack Cohen).

But if you choose not to read any of these, inform yourself at least about the reasons behind the pro and con arguments; and for this you can do no better than Gerald Edelman's popular account of his decades of research into the brain itself: Bright Air, Brilliant Fire. This book, I think, should be mandatory reading for anyone who wants to be in possession of a sound opinion on matters related to the human brain, including the question of how credible (or not) computationalism is as a theory of the mind.

Jürgen Lawrenz
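Lawrenz's first point, that a fixed memory address fails hard when its cell dies while the brain tolerates daily cell loss, can be sketched in a few lines. This is a minimal illustration, not a model of real neural coding; the store names and the redundancy scheme (plain majority vote over copies) are invented for the example:

```python
# Toy contrast: a fixed-address store versus a redundant, distributed one.
# Illustrative only; no claim about how brains actually encode memories.
import random
from collections import Counter

random.seed(0)

# 1. Fixed-address store: the value lives in exactly one cell ("memory X").
fixed_store = {"memory_x": 42}
fixed_store.pop("memory_x")           # the one cell "dies"
value = fixed_store.get("memory_x")   # lookup fails: the HANG UP
print(value)                          # None

# 2. Distributed store: the same value held redundantly in 100 cells.
distributed_store = {f"cell_{i}": 42 for i in range(100)}
for _ in range(10):                   # ten cells die at random
    distributed_store.pop(random.choice(list(distributed_store)))

# A majority vote over the surviving cells still recovers the value.
recovered, _count = Counter(distributed_store.values()).most_common(1)[0]
print(recovered)                      # 42
```

The design point is simply that redundancy makes individual cell death survivable, which is one reason the single-address picture of memory transfers so poorly from conventional computers to brains.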
When we answer the questions set in Pathways with only the web site as silent witness, we are in a similar position to that occupied by the questioner and responder in Alan Turing's famous test for determining machine intelligence. In this test, if the answers sent back to the questioner cannot be distinguished from the answers a human would send, then we may conclude that the machine is responding as intelligently as a human would, so that we could not be sure whether the sender is a living human or a non-living machine. In the game we are playing, we can't be sure whether the questioner or the answer provider is taking the position of the machine, or whether in fact there are only machines communicating. I am pretty certain that if I am a machine then I am indistinguishable from every other human machine.

Furthermore, if I thought I was responding to a non-human machine, the kind of answer I would give would be different from the kind of answer I would give if I thought the receiver was human and I also had some idea of the context in which the question was being asked. I could, with no concern for the consequences of the discussion, pursue an analysis of the concept of 'being alive' if the analysis were only a word game. But if the philosopher-player believes there could be life-changing consequences for the questioner-player, there should be constraints on the scope of the answer provided. What follows from these considerations is that philosophical investigations that take on practical issues ought to work on a principle of non-indifference with respect to the learning and actions that may follow from the interchange.

Suppose that 'Peter' is a pseudonym for a woman in the UK who is currently seeking the right to die because she does not want to continue her life in a severely reduced and dependent form. She is alive but not independently alive.
Should the correct philosophical Turing response be to elicit neutrally from the questioner her understanding of the phrase 'to be alive': what it was before her present condition, what it is now, and what she thinks it will be? Given the questioner's position, would it also, for the sake of logical completeness, be the correct response to offer interpretations of the key phrase not included in her perspective, and to suggest that she might choose not to reject some of those meanings? For a person in this position, the question we are thinking about is clearly very heavily weighted with both issues of fact and issues of value, so that a logico-linguistic approach to the analysis of the question may provide only one route into unfolding the complexities of the issue, or into changing minds.

If the questioner has full mental competence, as in fact the individual in question does, then Descartes might try to persuade her that she is no less alive in her present position than she was before, given the belief that the essence of being is thinking. If, though, thinking were not a source of satisfaction, either because it was not something she particularly excelled in or practised very often, or because it was not something she would place in her list of preferences, then an approach that would complement the previous one would be to elicit from the questioner the satisfactions and non-satisfactions of being alive. The Turing P-game has now altered, so that it is not simply a matter of discrete question, response, evaluation, decision and closure, but more a continuum of interchange in which information, ideas, questions, learning and teaching flow in both directions.
But if the elicitation of knowledge in the context of a philosophical investigation has the form of a directed interview, then the philosopher should have some idea about the directions in which the interview can be steered, and those in which it should not be, in the context of a philosophical inquiry rather than a legal, medical or any other kind of inquiry. The practical philosopher should be aware of what has, in the history of ideas, been considered a source of satisfaction for individuals in being alive; but they can also take an alternative, more general approach of considering individual satisfaction to be an indivisible part of dualistic satisfaction. Decisions relating to being alive, or to choosing not to be alive, then become inseparable from how the satisfactions and non-satisfactions of others are affected: whether as classes of individuals such as relatives (husbands, wives, children or parents), others in similar positions now or in the future, or others in the abstract, as represented by 'the patient' in medicine, 'the defendant' in law, the social services 'client', the 'child' in family law, or the therapy client.

In answering this question, I felt it necessary first to talk about the logical delicacy of philosophical investigations conducted blind, and I suggested that the philosopher-player should work within the ground rules of non-indifference in such contexts. Secondly, I have suggested that where the questioner-player may make life-changing decisions as a consequence of the game, the philosopher-player can use a variety of techniques of practical philosophy, such as the elicitation and interpretation techniques sketched above.
Finally, it also seems to me that, given the tendency of most individuals in the position of question producer, answer provider or both to make, at some time and under stress or material or political circumstances, two kinds of cognitive mistake of scale, which can be characterised as over-generalisation and under-discrimination, all philosophical investigations should be conducted within the constraints of non-indifference to consequences.

Reference: Andrew Hodges, Alan Turing (London, 1985).

Neil Buckland
This site is brought to you by Pathways to Philosophy, the world-leading distance learning program run by the International Society for Philosophers. More answers to philosophical questions can be found at Ask a Philosopher and the PhiloSophos Knowledge Base. Webmaster Geoffrey Klempner