[Jump to part 1, 2, 3, 4, 5, 6, 7, 8]
At this point a good number of computer geeks are likely either screaming at their screens or scratching their heads, wondering what the hell Tarski had to do with computers. Perhaps it would make more sense to them if I were talking about George Boole, the 19th-century logician who invented what would become Boolean algebra. Maybe I should point out that Boolean algebra became the basis for most computer programming languages and is what Turing was referring to when he talked about algorithms in a thinking machine.
Or possibly, I should talk about Yuri Gurevich, and his Abstract State Machines. After all, he applied a form of Tarski’s modeling to computer languages that we’re using today. Indeed, they would all be good selections of logicians and computer scientists to talk about in this discussion.
The one thing that Tarski did, that none of them did, is overcome a major, gaping weakness in modeling logic: the fact that logic was incapable of symbolizing, of properly interpreting, the semantics, the meanings of the words we use in arguments with ease all the time. What he did was allow others, like Gurevich, to more closely model the way we think, but inside a digital computer. Before Tarski’s work, Turing’s ideas had no possibility of being true, because the Boolean algebra he was using couldn’t process categorical arguments like syllogisms, one of the oldest logical argument forms, first recorded by Aristotle.
Statements like, “all A’s are B’s”, “no A’s are B’s”, “some A’s are B’s”, and “not all A’s are B’s”, had no meaning to any computer that Turing could have made. But humans make statements like that all the time, “all politicians are corrupt”, “no math major has a normal social life”, “some musicians are famous”, “not all Christians are criminals”, for example. That was a serious block to the existence of true strong artificial intelligence. With Tarski’s addition, we’re well on our way now.
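Tarski’s move, roughly, was to evaluate statements like these against a model: a domain of things, plus sets picking out which things fall under each term. A computer can then check a categorical statement mechanically. Here is a minimal sketch of that idea; the domain and the sets (`musicians`, `famous`) are invented for illustration, not anything from the original literature.

```python
# Tarski-style evaluation of categorical statements over a finite model.
# The domain and predicate sets below are made up purely as an example.

domain = {"alice", "bob", "carol"}
musicians = {"alice", "bob"}   # things falling under the term "musician"
famous = {"alice"}             # things falling under the term "famous"

def all_are(A, B):
    """'All A's are B's' is true when every member of A is in B."""
    return all(x in B for x in A)

def no_are(A, B):
    """'No A's are B's' is true when no member of A is in B."""
    return not any(x in B for x in A)

def some_are(A, B):
    """'Some A's are B's' is true when at least one member of A is in B."""
    return any(x in B for x in A)

def not_all_are(A, B):
    """'Not all A's are B's' is the denial of 'all A's are B's'."""
    return not all_are(A, B)

print(some_are(musicians, famous))     # "some musicians are famous" -> True
print(all_are(musicians, famous))      # "all musicians are famous"  -> False
```

Nothing here is beyond a few set-membership checks, which is exactly the point: once meaning is cashed out as truth-in-a-model, the machine has something it can compute.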
So there we have it: the cosmological model of a deterministic universe, the psychological theory that suggests brain states, the logical language and the mechanical/electrical engineering needed to make machines that think in a similar fashion as we do, and Putnam thinking about it all. So what did he conclude? That we, and in fact anything that has a mind, are really just Turing machines. If the machine is in one particular state (like default, awaiting a command) and we feed in a given input (like “load memebase.com”), it will drop into a different state (fetching “memebase”) and dispense one output (“herp derp”). His thought was that the human brain has a limited number of states and potential inputs that it could receive, so it would, of course, have a finite number of possible outputs, just like Turing’s “monsters”.
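The state-input-output picture above can be sketched as a toy transition table, in the spirit of Putnam’s machine-state idea. The states, inputs, and outputs below are just the ones from the example in the paragraph, wired up arbitrarily; this is an illustration of the shape of the claim, not a serious model of anything.

```python
# A toy finite-state machine: (current state, input) -> (next state, output).
# States, inputs, and outputs are invented for illustration.

transitions = {
    ("default", "load memebase.com"): ("fetching", "herp derp"),
    ("fetching", "done"):             ("default", "awaiting command"),
}

def step(state, inp):
    """Apply one transition: consume an input, move state, emit an output."""
    return transitions[(state, inp)]

state, output = step("default", "load memebase.com")
print(state, output)   # fetching herp derp
```

The functionalist claim is that a mind is, at bottom, a (very large) table like this: finitely many states, finitely many inputs, hence finitely many possible outputs.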
In his view (which I’m still oversimplifying for want of time and impetus to rewrite everything he said on this), the only thing that really separated humans from computers is that one is a bioelectric calculator, and the other is an electro-mechanical calculator. Both are just as deterministic as the rest of the universe, both can only do what they are programmed to do, and both have a finite capacity of preprogrammed states resulting in a finite number of responses either could give.
At this point, I’d be remiss if I did not inform you, dear readers, that this is not the only version of Functionalism, but it is the one that I’m talking about right now. The reason I am is that this version gave rise to a number of popular concepts that ripped through our culture. That concept was that science was finally working on replacing one of the last religious things left untouched, or at least only unsatisfyingly molested; science was about to find a way to scoop out our consciousness, our personalities, from our weak clay bodies, our rotting flesh, and place them in shiny new computer-puppeted robot bodies.
Think of the possibilities. What if we never had to die? What if great minds could be preserved in some kind of digital format for as long as we wanted? It would be immortality in bodies free from all the pains of life. Never tiring, never going hungry, never falling ill, never growing old or weakening without the chance for a full overhaul. The best part: we would escape Pascal’s ridiculous wager, since we wouldn’t have to wait until after we died to know that we’d be living for all eternity.
No immaterial mind means we don’t need a supernatural event for our personalities to keep living, and because we have a much better understanding of meshing science and art, we can make them look and feel … not horrible (like Dr. Frankenstein’s creation). Of course, popular opinion of these concepts immediately went in two diametrically opposed directions: this was the worst dystopia anyone could imagine, or this was the best thing ever. Over time, the former, more emotionally based opinion has slowly been mellowing into open skepticism and naysaying, leaving those that are actively pursuing this idea, in fiction and in reality, basically free to do their work. (There are too many variations on this theme to link anything meaningfully inclusive here, so just Google it if you want to know more.)
No doubt, this line of thinking has brought us some wonderful advancements in both the computer and medical industries. Many current futurists are predicting that we’ll have strong AI within the next few decades and that we’ll merge with our computers. The question is: is having “smart” technologies as our tools, and then blurring the line between the functions of humans and tools, really the same thing as being Turing machines with interchangeable parts and software, like the machines we built?
There is no doubt that the future of strong AI, an artificial “mind” that truly has what we’d call general intelligence, one that is aware both of itself and its surroundings, that is reasoning, that is self-motivated, that is capable of true communication, and that can form an identity (see Warren’s list), will be a serious challenge to human morals. Who actually knows how far off we are from realizing that potential? However, we’d be mistaken to think this is a new issue in humanity, or a question only for the future.
[Jump to part 1, 2, 3, 4, 5, 6, 7, 8]