Up@dawn 2.0

Tuesday, November 27, 2018

H2 Nov. 29th Contemporary Issues



1st Issue

Watch this clip about the 2014 movie Transcendence for some context for our discussion!

Described by some as "possibly the most challenging and pervasive source of problems in the whole of philosophy" (Blackburn 74), consciousness is one of those subjects that tends to give the ancient quest for wisdom its impenetrable quality. Talk about consciousness, self, mind, soul, etc. is often––but not always unnecessarily––complicated. Hopefully, our conversation will be more "down to earth," thanks to our use of artificial intelligence as an anchoring focus. Our central question is: Can artificial intelligence (A.I.) ever be conscious?

Before we really get going, a bit more should be said about what these terms mean. What is A.I. and what is consciousness? Let’s take them in that order. Although A.I. is itself a young but rapidly growing field of science, we’ll concern ourselves just with its creations: "machines that can do the kinds of thing that humans can do" (Blackburn 25). Of course, this is a very general definition and includes everything from calculators to the digital assistant on your smartphone, as well as a supercomputer capable of performing a much wider range of human activities. The latter end of this spectrum, often called “strong A.I.,” is where the potential for consciousness, at least theoretically, resides.

As for consciousness, I like Simon Blackburn’s language here too: consciousness is the "theater where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed” (74). If "theater" is too loaded a word, then maybe subjectivity is better. To put it another way, consciousness is that “first-person” awareness of not only sights, sounds, and smells but also complex thoughts and feelings. More could be said about consciousness. However, since a significant part of our discussion involves reasoning our way through understandings of consciousness, taking too much for granted in this initial definition would be begging the question. (FYI, although I try to remain clear with terms, there are instances below where I’m speaking of “soul” or “mind” because this is the language of the source or philosopher, but what’s being said can be applied to consciousness also.)

Check out this Crash Course Philosophy video, if you’d like more clarification:

One more thing! Besides clarifying terms, there are also some fundamental starting positions we should identify. Doing this now will illuminate what ideas are open to us down the road and, hopefully, keep us consistent. I’ve decided to use a baseball field to map out these positions. For inspiration, I'm indebted both to our own Dr. Oliver--particularly his love for all things baseball--and to Dr. Patrick Grim, from whom I took this idea (FYI, Grim got the idea from John Haugeland).


Materialism:
If you play the materialism position, then that means you see the universe, and everything in it, as made up of only physical things. These physical things may be different in various ways (size, shape, etc.), and some of these physical things may be presently unknown, but they’re all—things known and yet to be known—physical. This position also means that apparently immaterial things, such as the mental states involved in consciousness, depend on physical things for their existence, e.g. happiness is the result of certain physical, neurological processes.

Idealism:
If you play the idealism position, then that means you see the universe, and everything in it, as a product of thought, consciousness, or perception. Physical reality, for all its convincing appearance, isn’t really there, certainly not in the sense of existing independently, that is, apart from mind(s). This can be a slippery idea to grasp so let’s have an example. George Berkeley is a well-known representative of idealism—remember him? Berkeley thought that “everything you experience and think about . . . only exists in the mind” (Warburton 90), a counter-intuitive idea he expressed in the Latin phrase “esse est percipi” (“to be is to be perceived”). According to this position, minds are what really exist. Bodies, including the brains in them? Nope.


Of course, Berkeley and other western thinkers don't have a monopoly on idealism. This is a common metaphysical position among Eastern philosophical and religious traditions. A fascinating example from Hinduism is the account of the creation of the universe by the deity Brahma, who sits atop a lotus flower that grows from the navel of Vishnu, another deity sleeping on a cosmic serpent. The universe, and everything in it, is Brahma's projection of the sleeping Vishnu's dream.
Watch comparative mythologist Joseph Campbell talk about it here and/or read the story yourself here!

Dualism:
If you play the dualism position, then that means you see the universe as a combination of two distinctly different kinds of things, some things being physical and other things being non-physical. Although dualism tries to bring together the strengths of idealism and materialism, an especially troublesome aspect of this position is how to explain that two distinctly different kinds of things can influence one another. How can a non-physical entity affect anything in a physical entity? Despite this difficulty, dualism does allow its "players" to talk about minds/souls/spirits/etc. and bodies as somehow joined together. Traditional Christian views of human nature sit comfortably in this position.

I want to wrap up this talk about materialism, idealism, and dualism by giving an example of how knowing your position makes a difference. Take myself. I'm a materialist, which means that I see the universe and everything in it as physical in nature. There's nothing immaterial or spiritual behind reality. It's physical all the way down. Now consider this thought experiment: there is a robot that is physically identical to me. Our one difference is that I was conceived naturally and it was built in a robot factory. Is it possible that the robot is conscious? Because I am a materialist, I must affirm this possibility if I am to avoid contradicting myself. If the robot is physically just like me, all the way down, and it is my physical reality that produces my consciousness (as with everything else in the universe), then the robot can be/is also conscious. If I am finally unwilling to affirm this possibility, maybe because I think consciousness must require something "extra," that's okay––but then I am not a materialist.


Where you choose to play is up to you. And you’re free to choose another position if after thinking you find your current position untenable. The point is that you have rationally defensible reasons for choosing a position and that, once you’re there, you don’t hop around arbitrarily or merely when it’s convenient for you to do so. To return to the earlier example of Berkeley, as ridiculous as his conclusions may seem, we can’t easily accuse him of being inconsistent. If nothing else, Berkeley demonstrates a strong commitment to follow a chosen path wherever it leads.

Alright, now I think we’re ready to really step into thoughts about consciousness and A.I. As promised, I’m going to supply some thoughts from various philosophical sources that I’ve identified as relevant to our topic. To be sure, there are many more to explore. I’ve put these sources into two groups in order to give a balanced perspective. The first three provide ideas that I think can support negative responses to our central question, while the remaining three are useful for affirmative responses. Keep in mind that just because I’ve grouped these sources together doesn’t mean that they totally agree with one another. Within the first group, for instance, there’s a lot that separates Plato from Descartes!

Negative Sources

Plato:
Warburton talked about how Plato's ideal society is organized, from top to bottom, into three parts: philosophers, soldiers, and workers. Warburton didn’t say much about how Plato's soul (that is, a soul as he understood it) is likewise divided into three parts: rational, spirited, and appetites. These parts can be further explained this way:

"First is the motivation for goodness and truth: the reason. Second is the drive toward action: the spirited (will). Third is the desire for pleasure of the body: the appetites. The will is neutral and inclined to follow reason, but it can be pulled in either direction" (Price 51).

Later, Price says that "Plato argued that the soul exists before it enters the body and will continue to exist after the body dies. The body is bound to the physical world . . . . But the soul is immortal. Because the rational soul is not physical, it can survey the world of Forms. Each soul will pass through many lifetimes" (Price 54).

Check out this creative video about Plato’s understanding of the soul:

Now, it's possible to accept parts rather than the whole of Plato's thinking. Besides the issue of immortality, which is important to many for religious reasons, Plato's division of the soul into different parts might have some value—even for secularists. There are other ways of understanding consciousness, perspectives that tend to stress the conscious self as a unitary and undivided entity. What we get with Plato is obviously not a unity but a conflict, an inner struggle. Some say that this better accounts for our common subjective experiences, and they point to a similar but expanded understanding later articulated in Freud's own tripartite psychology of the self (id, ego, and superego).

There's a scene in the movie Transcendence when Max, a scientist friend of the protagonists and one who's been quite skeptical about whether or not computer "Will" is really Will, says that he doesn’t think a supercomputer can have human consciousness. It’s not for lack of computational power but rather because of any computer’s inability to hold something together that is fragmentary and conflicted. Humans are shot through with contradictions, Max says. Speaking about his own contradictions, the American poet Walt Whitman wrote, "I am large. I contain multitudes." Can computers do that?

Expressing what he sees as Shakespeare's philosophy of the mind, Colin McGinn writes, "The psyche is more like a seething pit of warring factions than an orderly and unified progression of logically consequential thoughts. There is much more (and also much less) to the mind than rational calculations and transparent reasons . . ." (159).

Descartes:
If for nothing else, maybe you remember Descartes for his beyond-doubt belief in the existence of his mind. Our existence is most fully and certainly known through our being res cogitans ("a thinking thing"). Like Plato––but with some real differences we'll not bother with––Descartes is a dualist. He thinks "that your mind is separate from the body and interacts with it" (Warburton 66). What is importantly distinctive about Cartesian dualism is its attempt to create a compromise between the highly mechanical physics that was increasingly all the rage during Descartes' time and cherished ideas like "free will, moral responsibility, and God" (176). A geometer himself, Descartes accepted the mechanistic and deterministic worldview, while also wanting to place clear limits on it. Arguing for a distinct, independent consciousness is one way of protecting humans from the determinism of a mechanistic universe and preserving the immortality of the soul. In a universe full of cold mechanisms, Descartes wanted to show that humans are more than just machines.

This neat and artistic video puts Descartes' mind-body idea well:

Something quite relevant to our discussion is Descartes’ talk about animals. Bertrand Russell puts it this way: “animals [Descartes] regarded as automata, governed entirely by the laws of physics, and devoid of feeling or consciousness. Men are different: they have a soul . . .” (561). Some people think of this as a way of saying that animals are entirely driven by instinct. Their behavior is based on very shallow ground, a mere stimulus-response way of acting.

If we think especially of this last point, then perhaps we’ll see how an understanding of consciousness can depend on more than just the “right” behavior or responses. To put it as a question: is consciousness based on mental states (think of the subjective experiences described at the beginning), or is it based only on “a propensity or disposition to act or behave in a certain way,” the latter being the position of philosophical behaviorism (Moore and Bruder 238)? (I'll say more about behaviorism and other philosophical theories of mind below.) For example, let’s say there is an A.I. whose behavior is identical to your own, and that this identical behavior is the result of pre-determined, pre-set, programmed responses: an input-output execution. I think a behaviorist would say that such an A.I. is no less and no more conscious than its human counterpart. I think Descartes, and next-up Searle, would disagree. A machine with a stimulus-response way of behaving, no matter how complex and human-like, is still just a machine.

 . . . Then again, maybe humans just are very complex machines! If you find this thought interesting, check out this video:

John Searle:
This brings us to the final source in this group: Searle and the Chinese Room. Perhaps you remember this from one of our recent classes. Searle was a critic of Alan Turing, who thought an A.I. could be said to be doing the equivalent of human thinking if it could convince the real humans with whom it interacted. Searle's thought experiment, in which an English-speaking person is given Chinese queries they do not understand themselves but to which they are able (via a guidebook) to provide corresponding responses, pushes back against Turing's confidence in A.I. Warburton writes, "Searle thinks that computers are like someone in the Chinese Room: they don't really have intelligence and can't really think. All they do is shuffle symbols around following rules that their makers have programmed into them. The processes they use are built into the software. But that is very different from truly understanding something or having genuine intelligence" (236). Simply put, "Understanding involves more than just giving the right answers" (237).


Here's an animated explanation of the Chinese Room thought experiment:
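If it helps to see the point in code, here's a minimal sketch of the Room as pure symbol manipulation (the tiny phrasebook below is invented for illustration). Nothing in the program represents what any symbol means; it only matches inputs to outputs, which is exactly Searle's point about rule-following without understanding:

```python
# A toy Chinese Room: the "rulebook" pairs input symbols with output
# symbols, and the program answers by lookup alone. (These few phrasebook
# entries are invented placeholders for illustration.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你是谁？": "我是一个房间。",
}

def chinese_room(query: str) -> str:
    """Return the rulebook's matching response, or a stock fallback.
    The function 'answers' without any grasp of what the symbols mean."""
    return RULEBOOK.get(query, "请再说一遍。")
```

However long we make the rulebook, the lookup never becomes understanding: that, at least, is Searle's claim.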


Affirmative Sources

Aristotle:
I can't shake the feeling that by including Aristotle among the affirmative sources I'm stretching my interpretive creativity. After all, Aristotle is a student of Plato. He's also a dualist and talks a great deal about "soul." How can any of what he says steer us toward affirming the possibility of consciousness in A.I.?

Well, here's where being wary of commonly grouped-together philosophers can really come in handy. If you've been listening to Dr. Oliver, then you probably remember that Aristotle and Plato were quite different in many ways. Although Aristotle did talk a lot about "soul," he meant something totally other than what Plato described. Anthony Gottlieb writes, "[Aristotle] did not think of the soul as some sort of ghostly substance temporarily occupying the body, as in the Orphic type of belief adopted by Plato. Instead he thought of it as whichever arrangement of physical characteristics makes the body alive and capable of perception and thought. . . . to have a soul was to have a body that was organized so that it worked in certain ways" (230-231). In Aristotle's view, all organisms have soul, including plants, animals, and humans. It's the thing that gives their body integrity, the power behind activities like nutrition, growth, reproduction, and movement.


Watch this for a decent overview of Aristotle's theory of soul:

What I think Aristotle's wide attribution of soul may open up is talk of consciousness as something already widely held by much more than human beings. Consciousness need not be a zero-sum situation, where the more things that have consciousness, the less consciousness human beings have. There's a view called "panpsychism," which holds that "all parts of matter involve consciousness" or, more broadly, that the "world, or nature, produces living creatures, and accordingly ought to be thought of as itself as an alive and animated organism, literally describable as possessing reason, emotion, and a 'world-soul'" (Blackburn 266). A lot to unpack in that quote, I know, but I wonder if following Aristotle's theory of soul allows us to expand consciousness to include non-human entities. A possible related question for our discussion on A.I. is: if an A.I. has "soul," somewhat along the lines of Aristotle's view (that is, if it is able to eat, grow, reproduce, and move), then is it conscious? This kind of possibility is brought up in the 1995 classic anime Ghost in the Shell and the more recent sci-fi movie Blade Runner 2049.


Check out this video for an overview of panpsychism:


Thomas Hobbes:
Unlike his contemporary, Descartes, Hobbes was a hardcore materialist, "believing that humans were simply physical beings. There is no such thing as the soul: we are simply bodies, which are ultimately complex machines" (Warburton 60). Hobbes thought that humans, brains and all, are rather like sophisticated clocks, the mechanisms of which he was quite familiar with. There's no exception to this: "thoughts, will power, and emotions," as well as "hatred and love," are all physical realities subject to the laws of motion (Price 168).

If you suspect that such thinking wouldn't allow for free will, you're right. Hobbes denied the possibility of free will (although he did allow for free acts, which are free because they are without external restraint). So, if our understanding of consciousness makes use of Hobbesian thinking, then an A.I. being without free will would be a non-issue. A robot doesn't need to have free will in order to be conscious. Neither A.I. nor human beings have free will. We're all physical bodies subject to the laws of motion.



Check out this video for an explanation of determinism:



Alan Turing:
And now, friends, we've come full circle in our sources. I'll conclude by returning to the Turing test and Turing's own confidence in the eventual ability of computers to convince humans that they are really thinking. So far, Turing's confidence has yet to pay off. Moreover, there are real criticisms about the perceived craftiness in programming by which computers are able to be so convincing. This hasn't put an end to the philosophical behaviorism I described above, though. In fact, behaviorism has given way to functionalism, a theory of mind that sees mental states (or consciousness) as a "triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behavior" (Blackburn 144). According to this, the mind certainly involves complex states and processes, but each of these can be broken down into smaller and much simpler processes. Furthermore, these underlying processes could be different--that is, different from our own--while still yielding the same outcomes, e.g. beliefs, desires, etc. Moore and Bruder give a couple of ways this might be applied:

"For example, there may be beings in a far distant galaxy whose brains and nervous systems are radically different from our own but who nevertheless have thoughts and beliefs and desires and motives and other mental states. This is not a terribly far-fetched possibility. Now if there are such beings, it's quite possible that when they believe something, what goes on in their 'brains' and 'nervous systems' may not be the same thing at all as what goes on in ours when we believe something. (They might not even have what we would call brains!)

. . . And some day thinking robots may be created (at least physicalists must admit that this is theoretically possible) with 'brains' made out of silicon and plastic. Though these robots will think, in all probability somewhat different physical processes will be involved when they do than are involved when we think" (243).
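The multiple-realizability idea in the passage above can be sketched in code (the agent classes and their internals here are invented for illustration): two "minds" with completely different inner workings occupy the same functional role, because what matters to the functionalist is the pattern of inputs, state changes, and outputs, not the substrate.

```python
class NeuronAgent:
    """Realizes a 'belief' as an activation level (a toy one-fact mind)."""
    def __init__(self):
        self._activation = 0.0
        self._fact = None

    def learn(self, fact: str):
        self._activation = 1.0
        self._fact = fact

    def believes(self, fact: str) -> bool:
        return self._activation > 0.5 and self._fact == fact


class SiliconAgent:
    """Realizes a 'belief' as set membership -- a different substrate."""
    def __init__(self):
        self._memory = set()

    def learn(self, fact: str):
        self._memory.add(fact)

    def believes(self, fact: str) -> bool:
        return fact in self._memory


# Functionally identical: same stimulus in, same behavioral output out,
# despite entirely different internal processes.
for agent in (NeuronAgent(), SiliconAgent()):
    agent.learn("water is wet")
    assert agent.believes("water is wet")
```

On a functionalist reading, both agents are in "the same" mental state here, just as the aliens and silicon robots in the Moore and Bruder quote would be.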


Watch Hilary Putnam talk about functionalism in this video:

Bibliography

Blackburn, Simon. Oxford Dictionary of Philosophy. Oxford University Press, 2005.

Gottlieb, Anthony. The Dream of Reason: A History of Philosophy from the Greeks to the Renaissance. New York: W. W. Norton and Company, 2000.

McGinn, Colin. Shakespeare's Philosophy: Discovering the Meaning behind the Plays. New York: HarperCollins Publishers, 2005.

Moore, Brooke Noel, and Kenneth Bruder. Philosophy: The Power of Ideas. Mountain View, California: Mayfield Publishing Company, 1995.

Russell, Bertrand. The History of Western Philosophy. New York: Simon and Schuster, 1972.

J.G., Nov.2018
4 comments:

  1. Fascinating stuff, Jamil. Can't wait!

    I mentioned watching CSPAN/BookTV last weekend... one of the segments featured Virtual Reality pioneer Jaron Lanier, expressing skepticism about AI. His views are cogently presented in several books, including "You Are Not A Gadget"-

    "But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?

    People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species' bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.

    The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it's to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.”
    ― Jaron Lanier, You Are Not a Gadget

    1. And: “I fear that we are beginning to design ourselves to suit digital models of us, and I worry about a leaching of empathy and humanity in that process.”

      But also: “The most important thing about a technology is how it changes people.”

    2. Ah, great quotes! Interesting point about the Turing test too!

  2. A fascinating 60 Minutes story on AI, facial recognition, privacy etc., aired January 13, 2019. https://www.cbsnews.com/news/60-minutes-ai-facial-and-emotional-recognition-how-one-man-is-advancing-artificial-intelligence/

