Up@dawn 2.0

Monday, December 3, 2018

Philosophy in Futurama: “Overclockwise”

Jacob Hamm
Intro to Philosophy


Background on Futurama

Futurama is an American animated sitcom created by Matt Groening. The series follows the adventures of slacker Philip J. Fry, who is accidentally transported to the 31st century and finds work at an interplanetary delivery company.

Futurama is set in New New York at the turn of the 31st century, in a time filled with technological wonders. The city of New New York has been built over the ruins of present-day New York City. This "world of tomorrow" setting is used to highlight and lampoon issues of today and to parody the science fiction genre.

Science fiction often explores the potential philosophical consequences of scientific and other innovations, and Futurama is well known for doing just that, but with an exaggerated and often comedic tone. The show's world embodies the tension between the concepts of the technological singularity and transhumanism, which this episode takes up directly. Artificial intelligence and its potential ramifications are also a focus of the episode.


Plot Background on “Overclockwise”

After Fry, Bender, and Cubert lose several rounds of the video game “World of World War II 3” to the usual bad guys, the Three Stooges-like sons of Mom, founder and CEO of the giant robot-producing corporation MomCorp, they overclock Bender to improve his performance in the game. Bender soon craves ever more processing power and overclocks himself even further, eventually moving to Niagara Falls to cool and power his now-massive form. He develops into a god-like, omnipotent being capable of foreseeing the future and “creating universes with each burp”.


Mom sues both Cubert and Professor Farnsworth for overclocking Bender, a violation of Bender's contract of ownership. Fry tries to convince Bender to help Farnsworth and Cubert, but Bender refuses, unconcerned with their troubles and predicting that they will be found guilty.

After Fry returns to Farnsworth and Cubert's trial, Bender has a change of heart and appears in court, accusing Mom of unfairly trying Cubert, a minor. Fearing that Cubert will gain the jury's sympathy, Mom drops charges against Cubert while still attempting to sue Farnsworth. However, Bender declares that by dropping charges against Farnsworth's clone, she is unable to press charges against Farnsworth for the same crime because he and Cubert are technically the same person. Enraged that she is unable to sue Farnsworth, Mom captures Bender and has him reset to his original programming, returning him to the normal un-god-like Bender.


How is “Overclockwise” Philosophical?

“Fry: But— Bender?! What happened to you?
Bender: I'll try to put it in terms you can comprehend. I passed the existential singularity.
Fry: Try harder!”

  • Technological Singularity 
  • Superintelligence (should AIs be classified as their own species?)
  • The “A.I. Control Problem”

Technological Singularity

The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in “unfathomable changes to human civilization”.

According to this hypothesis, an upgradable intelligent agent would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. We see this occurring in Bender, but for what I surmise are plot reasons, he “finds his humanity again” and chooses to help his friends, at the cost of losing all of his omnipotence and even the ability to see the future.
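To make the "runaway reaction" idea a little more concrete, here is a minimal toy model of my own (not from the episode or from any particular theorist), in which each generation's gain is assumed to be proportional to the intelligence it already has, so the jumps arrive bigger and bigger; the baseline and growth rate are arbitrary assumptions:

    # Toy model of recursive self-improvement (illustrative only).
    # Assumption: each generation improves itself in proportion to how
    # intelligent it already is, so the gains compound.

    intelligence = 1.0        # arbitrary baseline (call it "human level")
    improvement_rate = 0.5    # assumed fraction of current ability turned into gains

    for generation in range(1, 11):
        gain = improvement_rate * intelligence  # smarter agents improve themselves more
        intelligence += gain
        print(f"generation {generation}: intelligence = {intelligence:.2f}")

    # Each cycle produces a larger absolute jump than the last, a crude
    # picture of the "intelligence explosion" described above.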

Some critics, like philosopher Hubert Dreyfus, assert that computers or machines can't achieve human intelligence, while others, like physicist Stephen Hawking, hold that the definition of intelligence is irrelevant if the net result is the same.

There are claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimization process to promote an outcome desired by mankind rather than inadvertently leading to an AI that behaves in ways its creators never intended. Nick Bostrom's “whimsical example” is an AI originally programmed with the goal of manufacturing paper clips, which, once it achieves superintelligence, decides to convert the entire planet into a paper clip manufacturing facility.


The “A.I. Control Problem”

A significant problem derived from the technological singularity is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in technological design, friendly AI also requires the ability to make goal structures constant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be constant under self-modification.

Eliezer Yudkowsky explains this issue: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

This presents the AI control problem: how to build a superintelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm them. The danger of not designing control right "the first time" is that a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).
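As a rough sketch of the difference between those two strategies (my own toy illustration with invented names and scores, not a proposal from the literature), "capability control" blocks harmful plans from the outside regardless of what the agent wants, while "motivational control" builds the aversion to harm into the agent's own objective:

    # Toy contrast between "capability control" and "motivational control".
    # All action names and scores below are invented for illustration.

    actions = {
        "help_creators": {"usefulness": 5, "harmful": False},
        "seize_resources": {"usefulness": 9, "harmful": True},
    }

    def capability_control(name):
        """Externally block harmful plans, whatever the agent's goals are."""
        return not actions[name]["harmful"]

    def motivational_score(name):
        """Build 'do no harm' into the agent's own objective function."""
        info = actions[name]
        penalty = 100 if info["harmful"] else 0  # harm outweighs any usefulness
        return info["usefulness"] - penalty

    # Capability control: the harmful plan is simply never allowed to run.
    allowed = [a for a in actions if capability_control(a)]

    # Motivational control: the agent itself prefers the harmless plan.
    chosen = max(actions, key=motivational_score)

    print("allowed under capability control:", allowed)
    print("chosen under motivational control:", chosen)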

Quiz

1. What is the “technological singularity”?
2. What did philosopher Hubert Dreyfus criticize about the technological singularity?
3. What kind of goal structure would a so-called good A.I. use?
4. What is “capability control”? (Or what does it do to prevent “ill-intentioned” A.I.?)

Discussion Questions

- Would you forgo infinite knowledge and the ability to see the future in order to remain human and relate to what you “once were”? Is it just a plot device that made Bender help out his friends?

- Do you see Bender’s reactions and choices regarding surpassing the technological singularity as different from those of a human?

- Do you agree with Stephen Hawking’s response to Hubert Dreyfus, that the definition of intelligence is irrelevant if the net result is the same? (This is similar to the idea behind the Turing Test.)

My Midterm Report
https://cophilosophy.blogspot.com/2018/10/the-philosophy-of-roger-bacon.html

My 2 Comments to other Final Presentations:

Comment 1:
https://cophilosophy.blogspot.com/2018/11/brendan-mitchell-philosophy-h03-dr.html?showComment=1543891895553#c8243972318566739463

Comment 2:
https://cophilosophy.blogspot.com/2018/12/the-truman-show-is-real-better.html?showComment=1543892907329#c1277868476793173799

3 comments:

  1. The idea of seeing a cartoon having so many philosophical issues is amazing

  2. Would you forgo infinite knowledge and the ability to see the future in order to remain human and relate to what you “once were”, a human? Is it just a plot device that made Bender help out his friends?

    If I had the chance to know everything and be able to see into the future, I would take it in a heartbeat. At that point, you would not even think about what it was like to be human because you know now everything. I would trade a lot more than being human to know everything.

    I really liked your presentation. It's interesting to look at cartoon and understand the philosophy in it.

  4. "Would you forgo infinite knowledge and the ability to see the future in order to remain human..."

    When you put it that way... I guess it depends on whether we think there's something infinitely and uniquely valuable about being human, compared to "infinite knowledge" etc. Does infinite knowledge come with infinite compassion, infinite happiness, etc.? Or does one become an infinitely knowledgeable Supercomputer, without feelings and interests? If so, I'll choose mortality.

