Can computers think? This essay presents arguments both for and against artificial intelligence. While AI has been the subject of many bad '80s movies and countless science fiction novels, it is worth considering the possibility of computers that can truly think.
Is it possible for computers to have complex thoughts and emotions like Homo sapiens? This paper will seek to answer that question and look at attempts being made to create artificial intelligence (AI). Before we can investigate whether computers can think, we must establish what thinking is. Examining the three main theories of thought is like examining three religions: none offers enough evidence to rule out the possibility of the others being true. The three main theories are:

1. Thought does not exist. Enough said.
2. Thought does exist, but is contained wholly in the brain. In other words, the physical material of the brain is itself capable of what we identify as thought.
3. Thought is the result of mystical phenomena involving the soul and other unprovable ideas.

As neither the reader nor the writer is a scientist, we will simply say that thought is what we experience as humans. So, what is intelligence? The most compelling argument is that intelligence is the ability to adapt to an environment. A desktop computer, for example, can go to a specific website address; however, if the address were to change, it would not know how to find the new one (or even that it should).
Intelligence is the ability to perform a task while taking into consideration the circumstances surrounding it. Can computers think? This issue is contested as hotly among scientists as the advantages of Superman over Batman are among pre-pubescent boys. On one hand, there are scientists who say, as philosopher John Searle does, that programs are all syntax and no semantics. Put another way, a computer cannot actually achieve thought because it merely follows rules that tell it how to shift symbols without ever understanding the meaning of those symbols. On the other side of the debate are the advocates of pandemonium, a model explained by Robert Wright in Time: our brains subconsciously generate competing theories about the world, and only the winning theory becomes part of consciousness. Is that a nearby fly or a distant airplane on the edge of your vision? Is that a baby crying or a cat meowing? By the time we become aware of such images and sounds, these debates have usually been resolved via a winner-take-all struggle.
The winning theory, the one that best matches the data, has wrested control of our neurons and thus our perceptual field. On this view, since our thoughts are built from previous experience, computers could eventually learn to think. The event that brought this debate under public scrutiny was Garry Kasparov, the reigning world chess champion, competing in a six-game chess match against Deep Blue, an IBM supercomputer with 32 microprocessors. Kasparov eventually won (4-2), but the match raised a legitimate question: if a computer can beat the world chess champion at his own game (a game regarded as the ultimate thinking man's game), is there any doubt of AI's legitimacy? Even Kasparov remarked, "I could feel – I could smell – a new kind of intelligence across the table." But eventually everyone, including Kasparov, realized that what amounts to nothing more than brute force, while impressive, is not thought.
Deep Blue could consider 200 million moves a second, but it lacked the intuition that good human players have. Fred Guterl, writing in Discover, explains that studies have shown that in a typical position, a strong human player considers on average only two moves. In other words, the player is choosing between two candidate moves that they intuitively recognize, based on prior experience, as contributing to the goals of the position. Seeking to go beyond the brute force of Deep Blue, in separate projects, are M.I.T. professor Rodney Brooks and computer scientist Douglas Lenat. The two share a desire to conquer AI, but that is where their similarities end. Brooks is working on an AI being nicknamed Cog, which has cameras for eyes and eight 32-bit microprocessors for a brain, and which will soon have a skin-like membrane.
Brooks is allowing Cog to learn about the world the way a baby does.
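Looking back at Wright's pandemonium account, the winner-take-all selection among competing perceptual hypotheses can be sketched as a toy program. The hypotheses, feature sets, and scoring rule below are invented for illustration; they are not Wright's (or anyone's) actual model of perception.

```python
# Toy "pandemonium" model: competing hypotheses score themselves against
# incoming sensory data, and only the best-matching one reaches
# "consciousness". All names and features here are illustrative.

def winner_take_all(observation, hypotheses):
    """Return the label of the hypothesis whose features best match the data."""
    def match_score(hypothesis):
        # Score = number of observed features the hypothesis predicts.
        return len(observation & hypothesis["features"])
    return max(hypotheses, key=match_score)["label"]

# Competing interpretations of an ambiguous sound (hypothetical features).
hypotheses = [
    {"label": "baby crying", "features": {"loud", "wailing", "human"}},
    {"label": "cat meowing", "features": {"wailing", "feline", "quiet"}},
]

observed = {"loud", "wailing", "human"}
print(winner_take_all(observed, hypotheses))  # -> baby crying
```

The point of the sketch is only that the "debate" is settled mechanically: whichever hypothesis fits the data best takes over, and the losers never surface at all.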