
Artificial Intelligence Concepts: The Future of AI


Kevin: Hello, Dr. Ayala, Dr. Ruiz, and Mr. Skep Ticks. Welcome to “The Future of AI.” It is my understanding that each of you is a proponent of a different concept. Dr. Ayala, you’re a strong proponent of connectionism, while Dr. Ruiz is a strong supporter of symbol manipulation. Mr. Skep Ticks is a skeptic of both concepts and believes that AIs cannot be intelligent; intelligence, he believes, can only be simulated by such systems, never created. Having all of you seated in front of me brings about the unique opportunity to ask whether any of you believe that a machine can achieve intelligence.

Ayala: In order to answer this question, one needs a bit of context. For example, what is good old-fashioned AI?

The action of a Turing machine is determined by the machine’s state. Programming a Turing machine to manipulate symbols is very similar to how human minds process information: human intelligence is based on symbolic computation. On close inspection, humans are computers made of different material, and the brain can function as a manipulator of symbols. This can also be seen through the idea of multiple realizability. Take a watch as an example. A watch is able to tell time, correct?

Ticks: Yes, a watch is able to tell time.

Ruiz: If the time it reads can be taken as a state the watch is in, then one cannot assume that its particular structure is the only way to reach that state. The watch on your wrist is structured differently from mine but is still able to tell the time and be in the same state as my own watch. Even a digital watch can be in the same state of telling the time as an analog clock. By the same reasoning, there can be more than one way to realize intelligence, and the human mind is not the only formula. AIs can be intelligent in their own right; they aren’t mere simulations of intelligence.

Ticks: That’s all interesting, but have you heard of Searle’s Chinese Room thought experiment? It basically implies that genuine intelligence in AI is impossible to achieve. The idea is that, since computer programs are purely symbolic, they aren’t capable of genuinely understanding the semantics of the objects their symbols stand for.
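To make the symbol-manipulation point above concrete, here is a minimal sketch in Python of a Turing machine whose every action is fully determined by its current state and the symbol under its head. The function name and the rule table are hypothetical illustrations, not part of the essay's argument; the example machine simply flips every bit on its tape.

```python
def run_turing_machine(tape, rules, start_state="scan", halt_state="halt", blank="_"):
    """Run a one-tape Turing machine and return the final tape contents.

    Each step is pure symbol manipulation: the pair (current state, symbol read)
    is looked up in the rule table, which dictates what to write, which way to
    move the head, and which state to enter next.
    """
    tape = list(tape)
    head = 0
    state = start_state
    while state != halt_state:
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)  # extend the tape with a blank cell when needed
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)


# Hypothetical rule table: in state "scan", flip 0 <-> 1 and move right;
# halt when the blank symbol is reached.
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("01011", rules))  # prints "10100"
```

Nothing in the machine "knows" what the bits mean; it only follows state-indexed rules, which is exactly the property the Chinese Room objection later seizes on.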
