Up@dawn 2.0

Tuesday, November 19, 2013

the peripatetic nameless 6 (17-1)

Once again our little group went on a peripatetic romp, so I have no idea what the other groups talked about. As for us, we talked on a range of topics, from Sacks' take on artificial intelligence to Ayer's verification principle, the redundancy theory, and fear and superstition. If you give a robot the ability to experience the world and make connections from its experiences, would you be able to call it human? I personally believe that the robot would not be considered human, but alive. Although not alive in the sense of you, me, or my cat, it would be alive in that it is aware of itself. In order for that robot to become human, it would need to be able to express and read emotions. If you are a fan of J.J. Abrams, then you might know that he has a new T.V. series on FOX called Almost Human. If not, here is a brief synopsis of the show: in the future all cops are required to have a robot partner, but there is one cop who doesn't like robots, so he is stuck with one that has what is called a "synthetic soul." Basically, he was designed to feel emotions, like Data from Star Trek and David from A.I.

13 comments:

  1. Ricky (16-3)3:15 PM CST

    I agree with you because technically humans have a certain build and make-up that makes us human, which a robot would lack. I also agree with you that the robot would be alive, but it still wouldn't be classified as a human, regardless of whether it has its own consciousness.

    ReplyDelete
    Replies
    1. I still believe that humanity requires a soul. I know this is normally seen as a religious belief (and it is), but I think that it can still be seen as a philosophical conclusion as well. I don't want to go too in-depth here, because I want to write a book (or at least a thesis) about what I have to say, but I feel that a closer examination of the presence of ideals in the world can lead one to a better and more complete understanding of the divine origin of man.

      Delete
  2. (16-1) Talking about artificial intelligence is interesting. On one hand, if AI goes to the extent of having feelings just like humans, it would seem wrong to treat it as anything other than another human being, but ultimately it's just a program. I wouldn't even go as far as to call it alive. Putting feelings into AI could be interesting, but it would just complicate things in the long run.

    ReplyDelete
  3. I also agree that a robot cannot be considered human. Many things are required to be considered human. Also, I do not think it is alive, in that it is not a living organism.

    ReplyDelete
  4. Austin Duke8:14 PM CST

    (16-1) I think artificial intelligence could be very helpful, as it would allow us to use machines to perform tasks too dangerous for humans. But if they could truly think, act, and feel as humans do, it would be unethical to use the machines for such dangerous tasks.
    FQ: Who is considered the most famous philosopher of the 20th century? (Jean-Paul Sartre)
    DQ: Do you think that human life is meaningless if each person doesn't give himself or herself a purpose?
    http://www.youtube.com/watch?v=wq3B5prBsK0

    ReplyDelete
  5. I think it's fun to imagine androids who learn to feel or become humanlike, but I'm not sure it could ever happen. Our emotions and sense of self are probably an effect of evolution and the desire for survival. Computers are made in such a different way from humans, I don't see how they would ever come to have that same sense of self. I read somewhere that the biggest drawback of current computers compared to humans was that they can only process one thing at a time, so I'm pretty sure we'd have to wait for something like quantum computing that can handle things simultaneously before any real artificial intelligence becomes plausible. I'm sure if artificial intelligence did develop that far, there would be all kinds of philosophy going on about their sentience, and their status as "beings."

    ReplyDelete
  6. 16-2
    I don't think we will experience androids that have full human sentience any time soon, definitely not in our lifetime. Quantum computers provide little advantage over today's machines. There are some things they can calculate with an advantage, but overall they will still be slower than regular binary computers. So from this standpoint I'd say talking about machines with human sentience would be meaningless. haha

    ReplyDelete
  7. FQ-What was the name of the book Sartre wrote? (Being and Nothingness)
    DQ- Would you guys read an 800-page book about Sartre's idea of states of being?
    Comment- I loved our walk and discussion
    Link-http://www.youtube.com/watch?v=xJEDTgLjJyQ This is a summary of his book

    ReplyDelete
  8. Robots intelligent enough to be aware of themselves are another thing that scares me about the direction science is going. Doesn't anyone remember Skynet? Joking aside, some science is taking humanity to heights just lofty enough to destroy ourselves in just about every way possible.

    ReplyDelete
  9. Andrew 16-110:31 AM CST

    The redundancy theory seems too basic to even be a theory. It's as though if something can't be explained, it simply exists.

    ReplyDelete
  10. I don't think that a robot could be considered human because it's a man-made object designed to look and act like a human. Robots are not able to think for themselves, express human emotions, or have a conscience. I don't even consider them alive because they're not any form of species, just machines that can be turned on and off.

    ReplyDelete
  11. Our group also went on a peripatetic walk. It is a very fun thing to do instead of discussing in the class.

    ReplyDelete