Up@dawn 2.0

Tuesday, April 25, 2017

The Philosophy and Implications of Artificial Intelligence

Tristan McGuffin        Section 10                                                                         Installment 1


        Technology has fascinated me for as long as I can remember. While I was growing up, my father worked as a computer programmer, and he and my two older brothers spent a great deal of time bonding with me over things like videogames and electronics. Being thrust into such a world of innovation and exploration has left a lasting mark on my life; a mark that raises many philosophical quandaries and has sparked introspection about myself and the society I live in.

        Now, I know this may seem like somewhat of a stretch at first glance, but hopefully after a bit of explanation I can give you some food for thought and some ammunition for your further philosophical endeavors.

        Let me begin with a quick and dirty summary of what I'm going to talk about: Artificial Intelligence, defined as the theory and development of computer systems able to perform tasks that normally require human intelligence. This refers to things such as voice recognition, decision making, and visual perception and identification. You might be surprised by just how many everyday modern technological innovations use these kinds of functions, things you might even have in your house, such as the Amazon Echo or your iPhone's Siri functionality.

        Every day our technology is advancing into new frontiers. We're constantly searching for new and exciting ways to make our lives easier, and what better way to do so than with electronic assistants and smart technology? However, all of this has begun to raise some very serious questions about the future and where this path could lead us.

What if our creations become self-aware and excel beyond what we (humans) will ever be capable of? What if they resent us or don't comprehend the value of life the same way we do? What if the robots take over?

       Mankind is so scared of these possibilities that it has put rules and regulations in place in an attempt to slow or, hopefully, prevent some apocalyptic future in which robots are the overlords and use us as slaves, or worse, seek to destroy us. Though I appreciate and understand taking such ethical preventative measures, it really makes me wonder whether all this doom and gloom isn't a bit unnecessary. If animals are capable of compassion, could a man-made form of intelligence not develop similar dispositions? Consider Chappie, the robot from the 2015 film of the same name: he is kind, self-sacrificing, and in a sense somewhat human in how he learns and adapts to what happens around him.

       Maybe it's some kind of innate human tendency to focus on the negative. When things become uncertain, we lean toward the doom and gloom, allowing the proverbial shadows on the bedroom wall to become wide-mawed ghosts and goblins waiting for us in the dark. However, I understand that this is simply an instinctive behavior that served us well in prehistoric times, when that shadow or that bush could in fact have masked some hungry and ferocious predator waiting for us to let our guard down.

      Now that we live in a relative state of luxury and safety, our minds have ventured down many meandering paths of hypotheticals and innovations. We experience the world around us with the sensibilities of the modern age, but the lingering tendrils of the past still grasp us at our core. I feel this has had very serious implications for how we've explored the unknown.

       That being said, we should use this knowledge to inform how we view such fantastical, yet increasingly imminent, futures for mankind. The promise of what technology can afford us is enticing, to say the least, but we should truly contemplate what we are doing.

      We must ask ourselves: "Is Artificial Intelligence a necessity?", "Right now, are we a child playing with a loaded gun?", and most importantly, "Is it even ethical to imbibe something with the burden of intelligence and everything it may entail?"

 
   This is something I'm going to explore even further in my next installment on this enigma. In the meantime, I recommend that you embark on your own thought expeditions on the subject, as it draws on a great many different philosophical schools of thought.

5 comments:

  1. Though it may be a bit scary to consider, you are right. Our technology is advancing every day, so why not make use of what society offers us? If there are more efficient ways of completing a task, then why not use them? In this day and age, we are all worried about getting more for doing so little, so I think with changes like these, reality could get a rain check on what we've been wishing for all of those times we "don't want to do something".

  2. It is interesting to see just how much technology has changed over the years. You hit on some great points!

  3. ""Is it even ethical to imbibe something with the burden of intelligence and everything it may entail?" - I don't think "imbibe" is quite the word for it, but this is a crucial question going forward with AI.

    As for gloom and doom, that's a relatively recent phenomenon. The Jetsons spoke for my generation: optimistic, hopeful, unthreatened by Rosie the Robot... same for Star Trek's Mr. Data. Lately the pessimists are speaking more loudly. I hope they're wrong; I love the prospect of intelligent machines with attitude who intend us no harm beyond sarcasm and merited disdain.

  4. A link to part 2 has been placed at the bottom of this installment and a link leading back to part 1 has been placed in the other.

  5. Anonymous, 10:55 AM CDT

    Very well written article. It will be helpful to anyone who makes use of it, including me. Keep doing what you are doing – can't wait to read more posts.

