Most who follow Artificial Intelligence (AI) have heard of the Turing Test. The test was first proposed by Alan Turing, the brilliant British mathematician, as a test for machine intelligence. His simple test has human judges using a terminal to communicate with either another human sitting at a terminal or an intelligent computer. If the judges can’t distinguish between the human and the computer, Turing claimed, then the machine must be intelligent. What many do not know is that this test has an associated prize, the Loebner Prize. Each year a bronze medal is awarded for the best entry, but to date no silver medal has been awarded, which would require an entry to actually fool a majority of the judges.
What is interesting to me is that Turing’s test is focused on communication. The unstated assumption is that if a computer can communicate like a human, it is intelligent like a human. So why focus on communication? Why not object recognition, design, or planning? At the time the Turing Test was published in 1950, the philosophical community was focused on language. Ludwig Wittgenstein’s Tractatus Logico-Philosophicus (1921) formed the foundation for the “logical positivists”, a dominant school of thought that had intellectuals focused on language and logic. More to the point, the positivists believed that language could (or should) be reduced to logic at a mathematical, or perhaps more accurately a computational, level. The Turing test therefore naturally associated computation with language and language with cognition. It was also highly convenient (and not coincidental) that early computers were (and remain) symbol processing machines. Each word typed into Turing’s terminal is a symbol. Traditional AI holds that a thinking system is a symbol processing system. This is often called the “symbol system hypothesis”.
So why hasn’t a silver medal been claimed for the Loebner Prize? Why are we not easily fooled by machines? The reason is twofold: knowledge and shared awareness. It isn’t the ability to communicate that is lacking; it is the shared knowledge, understanding, and awareness. Machines’ ability to communicate has outpaced their knowledge. Apple’s Siri, for instance, offers users the ability to converse with it. It is a highly communicative interface. The communication breaks down, however, due to Siri’s lack of knowledge and its lack of situational awareness. This problem is not limited to machines.
I’m sure you can recall an experience where you went to a meeting outside your field of expertise. I first experienced this about 12 years ago at a Navy event planning meeting. The language was such that I could understand most of the words and the sentence structure but not what anyone was saying. In other words, the sentences made sense at one level but not at a deeper cognitive level. Had I joined the discussion at that time, it would have been clear to everyone that I had no clue what was going on. I did not yet have a shared understanding of what was being discussed. I had not used or observed the systems, nor had I experienced the flow of the training exercise. I could not ground what I was hearing in experiences from the real world. Specialized expertise carries with it a shared understanding that makes it difficult for someone outside the discipline to follow. The speakers were not concerned with making the discussion accessible to a general audience, because the majority of the audience had the required knowledge.
For a computer program to win the Loebner silver medal, it will need an understanding shared in common with human beings. It will need to be grounded in experiences and knowledge from the real world. It turns out that even the least educated among us have a vast store of experiential knowledge that we use to interact with those around us. We expect members of our society to know about tables and chairs, mobile phones and dishwashers.
At Discovery Machine we are building mental models that provide our intelligent agents with understanding. These mental models of complex systems enable our agents to communicate within specific areas of expertise. This specialized understanding and communication may not win the Loebner Prize, but it has great utility in creating coaches and training aids for these disciplines. Communication within these specialized, restricted domains is greatly improved through the use of these models.
Again we turn to Wittgenstein, this time to his later work. In Philosophical Investigations (1953) Wittgenstein moves beyond the purely logical into what is perhaps a more realistic view of human communication. Communication is shared understanding through concepts. These concepts are mental representations related by what Wittgenstein called “family resemblances”. Reality is not found in language but rather in the shared understanding that language requires. The future of intelligent systems may ride on their ability to communicate. Such communication is knowledge-rich and situationally aware. Perhaps in the end intelligent systems will be able to say: I communicate, therefore I am.