Let us first look at
Turing’s machines. Turing held that any reasonably complex machine
could, with the right set of rules, be mistaken for a human mind during a
five-minute game. The rules of the game are simple: two people and one machine
are set down in such a way that one of the people cannot see the computer or
the other person. That person is the interrogator, who must, through
questioning, determine which is the human and which is the computer.
Turing’s actual prediction was that an average interrogator would have no
more than a 70% chance of making the right identification after five minutes
of questioning. To Turing, fooling interrogators that reliably was the same as
being functionally equivalent to a human, or at least reasonably so, enough to
declare that machine/program combination a thinking machine. One point of
interest is that Turing was only talking about digital computers, which has
been a point of contention among philosophers examining his arguments for some
time. Semantics aside, if his test of intelligence is a good one, then it
should apply to anything we think might be intelligent.
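Turing framed his threshold in terms of the interrogator’s chance of a correct identification. Purely as an illustration of that bookkeeping (the function name and the trial numbers below are my own invention, not Turing’s), the pass/fail arithmetic might be sketched like this:

```python
def passes_turing_threshold(correct_identifications, trials, threshold=0.70):
    """Turing's prediction: an average interrogator would have no more than
    a 70% chance of making the right identification after five minutes.
    On that reading, the machine 'passes' once the interrogator's hit rate
    falls to the threshold or below."""
    return correct_identifications / trials <= threshold

# Hypothetical trial results (illustration only): interrogators guessed
# correctly in 65 of 100 five-minute games.
print(passes_turing_threshold(65, 100))  # True: a 65% hit rate is under 70%
```

The point of the sketch is only that the test is statistical: no single conversation settles anything; the machine “thinks,” on Turing’s criterion, when interrogators as a population can no longer reliably pick it out.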
We should be able to
run his game with dolphins, elephants, chimpanzees, gorillas, and even unknown
unknowns like aliens. Of course, the tests would have to be modified for unique
biology or engineering limitations. If a computer sounded like a Dalek, or the
species being interrogated lacked human-like vocal cords, a human interrogator
would know right off the bat that they weren’t talking to a human.
Nonetheless, a properly calibrated and administered test should at least give
us reason to think that the object of the test was somehow thinking. But is
the process of thinking enough? Going back to Warren’s list of personhood
criteria, we can see that passing the Turing test doesn’t by itself meet any
of the characteristics of personhood. It might, though, give us some reason to
pause the next time we feel like throwing our “smart” phone across the
room. If an engineer gave the phone a way to detect physical damage to itself,
it would meet number one; with a little more tinkering, Siri might reach
number four; and it can already zip through any number of logic puzzles that
stump humans on a regular basis. Think about it: are we enslaving cell phones?
Again, this is not really a new problem. Take the case of Koko, the signing
gorilla. Koko clearly has the ability to feel pain, the ability to solve a
variety of puzzles, a wide range of behaviors that are neither genetically
fixed nor externally controlled, a vocabulary of 1,000 human words in
American Sign Language, and strong indications of self-awareness. Does
that make Koko a non-human person?
If we are going to claim that we are not biased, then we have to acknowledge
the possibility that both the “really smart” phone and Koko could be
persons in their own right. What legal rights would that morally obligate us
to extend to non-human persons? We already recognize the legal personhood of
corporations, so why is it so hard for us to see that beings already on earth
might be due the same consideration? Again, a post for another day.
However, merely passing a Turing test, or being a Turing machine, does not by
itself grant personhood. It only gives us a comparison between machine and
human conversational intelligence. Why, then, did Putnam think that everything
we call a mind is just a Turing machine? Because it seemed like a good analogy
for how a deterministic mind might work.
1) If a complex
enough device can run a sufficiently complex algorithm, then the device is
thinking, or at least showing signs of intelligence.
2) Formal
logic, our best model of human intelligence, operates on a set of rules.
3) A complex device
can run those same rules and produce the same results a human mind would
produce.
4) So, from
premises one, two, and three, we should soon expect to see thinking devices
that show human-type intelligence.
Putnam then generalized
that argument.
5) Human minds are
reducible to Turing machines.
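The formal device premises one through three lean on can be made concrete. The toy below is my own example, not anything from Putnam, but it is a complete Turing machine in miniature: a tape, a read/write head, a current state, and a table of rules, here wired up to increment a binary number.

```python
# A minimal Turing machine sketch: everything the machine "knows" lives in
# the rule table; the loop below just applies rules until the halt state.

def run_turing_machine(tape, rules, state="start", halt="halt"):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay). Blank cells read '_'."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rule table for binary increment: scan right to the end of the number,
# then carry 1s into 0s moving left until a 0 (or blank) becomes 1.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}

print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

Putnam’s claim, in these terms, is that a mind differs from this machine only in the size of its rule table, not in kind, which is exactly the point the rest of this post pushes back on.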
Like all analogies, however, this one can only go so far. As I see it, it
breaks down at premise two. Formal logic is still plagued by inconsistencies
with human intellect. It in no way captures the full scope of a human mind,
which allows us to be rational, emotional, creative, intuitive, and at times
completely unpredictable. Even if we had the optimal computer system, as
complex as a human brain, running the greatest program ever written, the
program would still be based on a flawed logic system.
An example of what I mean: consider the
Gödel sentence G, “this statement is not provable.” If a mechanical
system of logic proves the statement, it thereby makes the statement false,
and the system has proved a falsehood; but if the system cannot prove it, then
the statement is true, just unprovable within the system.
Standing outside the formal system, we can see that it is a semantically true
statement. We have an intuition about logic that none of our systems have
ever completely captured. Computers are limited by their programs, which
are based on our formal logic systems.
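Put slightly more formally (writing $\mathrm{Prov}$ for the system’s provability predicate and $\ulcorner G \urcorner$ for the code of $G$), Gödel constructed a sentence with the property

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner)
```

If the system proves $G$, it proves a sentence asserting its own unprovability, so the system has proved something false; if the system is sound and so never does that, then $G$ is unprovable, and therefore what $G$ asserts holds: $G$ is true but unprovable within the system.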
So, what separates us from computers? Take the best logic system out there,
find a way to make it into a computer program, run it on the optimal
supercomputer, and then somehow scoop out a mind from one of our clay bodies
(if that even makes sense). You don’t have a human mind; like Dr.
Frankenstein, you have a flawed copy, one that lacks some significant portion
of what allowed us to write the logic system in the first place: the ability
to disregard formalized rules.
We make these rules to try to capture how we think, but no matter how far we
progress, there is some kind of flaw in our system, at least at present.
Usually, it is that our minds can reason past something that our models of our
minds cannot. These models are nothing more than tools we use, and they do
evolve; but, as with computers, we evolve alongside our tools. Our culture
changes, our mindsets adapt, the scope of our influence expands, and the reach
of our intellect goes beyond what it was before.
Using the tools we already have generally means following rules. Making a new
tool, or thinking about how an existing tool could do something different, is
an act of true creativity. It is an act that defies the rules. It is an act
humans excel at, but when we try to express how the process was done, our
words normally fail us. The tools of language, culture, science, and art
always seem inadequate to capture the moment when something new first takes
form in imagination. As William Hazlitt put it, “Rules and models destroy
genius and art.”
If we didn’t have
the ability to ignore the rules, we’d never have broken away from
divine command theory. Our science would have never progressed past
Aristotle. Our artwork would never have moved beyond formalism. Our
literature would never have allowed Shelley to write The Modern
Prometheus. Take that away from us, and we cease to be human.