The following column appeared in the DEC Professional in 1993 and holds up. It's about a classic computerized response system that one could bust in just a few questions. These sorts of bots are still floating around, trying to convince people that they are not inane computer programs. Nothing has really improved since 1993.
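Programs of this era almost always worked by simple keyword matching: scan the user's input for a trigger phrase, echo part of it back inside a canned template, and fall back to a stock evasion when nothing matches. Here is a minimal sketch of that approach; the rules and responses below are invented for illustration and are not taken from any actual program such as the PC Professor.

```python
import re

# Invented ELIZA-style rules: (trigger pattern, response template).
# The {0} placeholder echoes back whatever the pattern captured.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bcomputer\b", re.I), "Do computers worry you?"),
]

# Stock evasion used when no trigger matches -- the telltale sign
# that gives these bots away after a few probing questions.
DEFAULT = "Tell me more."

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The weakness Dvorak exploits below follows directly from this design: ask anything outside the rule list, or refer back to an earlier exchange, and the program can only emit its canned fallback, since it keeps no model of the conversation at all.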
The Thinking Computer
by John C. Dvorak
We know computers can’t think, but when will it be possible for a computer to make someone believe that it can think? This is the goal of something called the Turing test. A person is put in front of a computer to exchange tales, quips and comments with the machine. The person has to decide whether the computer is really chatting or whether another person is communicating via the computer console. If you can’t say for sure whether it’s a person or a computer, then the program/computer passes the Turing test.
This is kind of the goal of a yearly competition held at the Boston Computer Museum. Dubbed the Loebner Prize, it pits computer against person. The competition works something like this. A bunch of computers are in a room. Some are running AI programs designed to fool a group of judges who go from machine to machine. The other machines are “fronts” for real people who type responses from another room. The judge decides whether it’s a person or a computer. The program that consistently gets picked as a person wins the prize. Last year the award went to “The PC Professor,” written by Joseph Weintraub (Thinking Software, Woodside, NY). While this program is a good attempt at faking out a naive computer user, it cannot fool a sophisticated user familiar with the shortcomings of a computer. In fact, it just proves that we have a long way to go before computers can come close to mimicking humans adequately. To prove my point I had a chat with the PC Professor.
You be the judge. Here’s the conversation: