Saturday, April 21, 2012

Us from Computers (part 7) Agalmatophilia -- look it up!


[Jump to part 1, 2, 3, 4, 5, 6, 7, 8]       


     Remember Aristotle’s view of the different types of “souls”?  Well, in his view, which does seem popular with writers like Ovid (43 BCE – 17 CE), the dividing line between life (actuality) and everything else (potentiality) was animation, or movement.  So a stone statue that moves, breathes, and walks was thought to be alive.  The fact that the statue later gives birth to a son, well, that just proves it is clearly functionally equivalent to a human.

     Now, the astute reader might comment that clearly this was a work of fiction, a parable likely meant to teach a moral that is somehow lost on modern observers.  To that I say, not so!  Remember that we look upon Homer’s Odyssey in the same light as we look upon Ovid.  The Greeks and Romans did not share our views.  They taught it as history, as real and as true as Christians insist the Bible is true, and more factual than historians declare their own accounts of past doings to be.

     So, when Ovid sat down to record what was likely an older tale of a statue come to life through the blessings of Venus, well, there would have been little doubt that the statue was truly alive, by Aristotle’s definition, and in possession of at least an adaptive soul, if not a fully rational soul, a mind, as well.  This would have been true even if the statue’s “flesh” had never changed from ivory.


     When clever artisans filled their temples with automatons that moved, spoke, and re-enacted scenes from their writings, the ancient peoples didn’t think they were seeing mere works of mathematics and engineering but living entities, the same way Shelley’s characters thought they were seeing Dr. Frankenstein’s “monster”, and the way futurists think we will see “living” computers.  The major difference between the three views is the definition and requirements of sentient life.

     If we want to be moral people who act in an ethical manner, we had better spend some time hashing over this issue if we really think we are close to strong AI, even if it is a few lifetimes away.  An ethical 21st-century person wouldn’t dream of actively promoting the enslavement of a less developed culture, as the 18th-century Western powers did.  If functionalism is true, there is no difference between enslaving a sentient computer made of silicon and one made of neurons.

     Of course, if functionalism is true, then our answer to how to deal with a non-human person is already preprogrammed into us as part of the finite number of responses we could ever have.  So to find our answer, we simply need to invoke the right mental state, feed in the right input, allow the machinery of the brain to change to a new state, and spit out its response.  If functionalism is true, and we’ve done that, we will have the only answer to the question we could ever have.  It’s a finite system.
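     The finite system described above can be sketched as a deterministic finite-state machine: a lookup table mapping the current state and input to a new state and a response. This is only an illustration of the idea, not anyone's actual model of a mind; every name and entry in the table is invented for the example.

```python
# A toy deterministic finite-state "mind": given the same state
# and the same stimulus, it can only ever produce one response.
# All states, stimuli, and responses here are illustrative.

def respond(state, stimulus, table):
    """Look up the (state, stimulus) pair; return (new_state, output)."""
    return table[(state, stimulus)]

TABLE = {
    ("calm",     "question"): ("thinking", "let me consider"),
    ("calm",     "insult"):   ("angry",    "that was rude"),
    ("thinking", "question"): ("thinking", "still considering"),
    ("thinking", "insult"):   ("angry",    "that was rude"),
    ("angry",    "question"): ("calm",     "fine, what is it"),
    ("angry",    "insult"):   ("angry",    "that was rude"),
}

# Invoke a mental state, feed in an input, and the machinery
# transitions to a new state and spits out its one possible response.
new_state, output = respond("calm", "question", TABLE)
```

     Because the table is finite and the lookup is deterministic, re-running the machine from the same state with the same input yields the identical answer every time, which is precisely the point the paragraph above is making.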

     As the divine command theory does with a believer’s ethics, the functionalist brain follows a list of rules in its deterministic way, producing the exact same pattern every time.  It is the only thing that can be expected, because physics.  But is that all we are?  Is that all we can do?  Are we only functionally equivalent to an over-complicated Newtonian desk toy?

     No, we are not.


