15 September 2020, Iris Proff
Photo: © Arco Mul
Since its very beginning, AI has been a field characterized by inflated hopes and big disappointments. Often, it seemed like there were no problems that machines could not solve and that big breakthroughs were just around the corner. But it was never long until a major drawback of the current approach became apparent, funding dwindled, and the field started moving in another direction.
The AI undergraduate programme at the UvA was launched in 1992, as one of the first of its kind in the Netherlands and the world. At the time, AI was a vastly different field from what it is today. It was not centered around the self-learning algorithms that make many current AI applications so powerful. Artificial neural networks existed, but they were not widely used due to a lack of computing power and data. Instead, AI rested on logical reasoning and search algorithms.
In 1992, much of the field revolved around so-called expert systems, which were considered the first truly useful AI applications. An expert system tried to capture the knowledge about a certain topic (say, diagnosing infectious diseases) in a long series of if-then clauses. The user could query the system, and it would produce an answer mimicking the decision-making process of human experts.
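The core mechanism can be sketched in a few lines: a rule base of if-then clauses and a loop that keeps applying them until no new conclusions follow (forward chaining). The rules below are hypothetical toy examples, only loosely inspired by the medical expert systems of the era, not taken from any real system.

```python
# Minimal forward-chaining rule engine. Each rule pairs a set of
# conditions with a conclusion; all rule names here are made up.
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_meningitis"}, "recommend_lumbar_puncture"),
]

def infer(facts):
    """Repeatedly apply the if-then rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "stiff_neck"})))
```

A real expert system added much more machinery (certainty factors, an explanation facility, a dialogue with the user), but the brittleness Veltman describes below is already visible here: the system knows nothing outside its hand-written rules.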
During the 1990s, however, criticism of the approach arose, and funding and interest started shrinking. The systems couldn’t live up to their promise, as they didn’t possess common sense and couldn’t generalize. By the end of the decade, expert systems had pretty much disappeared. They gave way to new AI approaches applying statistics and learning, rather than hard-coded rules.
Frank, at the beginning of the 90s you were asked to develop a teaching programme in artificial intelligence at the University of Amsterdam. How come the university asked you?
Frank Veltman: It’s a mystery why they asked me! There had already been attempts to start an AI programme, but the researchers involved could not come to an agreement. I think they thought that I, as a relative outsider, might succeed where others, who were much more at the centre of AI, had failed.
Were you doing research into AI at the time?
I was leader of a big European project involving seven universities and over 70 researchers. We wanted to create a speech-to-speech translation system: you talk English to it, and what comes out is French. The programme would take your speech, turn it into a common logical language, think about what it meant and return the message in another language.
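The pipeline Veltman describes is the classic interlingua idea: map the source utterance into a shared logical representation, then generate the target language from that representation. The sketch below is a drastically simplified, hypothetical illustration of that architecture; the vocabulary tables and predicate names are invented, and the real project of course handled speech, syntax, and semantics far beyond word-by-word lookup.

```python
# Toy interlingua pipeline: English -> logical form -> French.
# All entries are hypothetical; a real system parsed full sentences
# into structured logical formulas, not word lists.
EN_TO_LOGIC = {"the": "DEF", "cat": "cat'", "sleeps": "sleep'"}
LOGIC_TO_FR = {"DEF": "le", "cat'": "chat", "sleep'": "dort"}

def to_logic(english_sentence):
    """Analysis step: map source words to interlingua symbols."""
    return [EN_TO_LOGIC[w] for w in english_sentence.split()]

def to_french(logical_form):
    """Generation step: realize the interlingua in the target language."""
    return " ".join(LOGIC_TO_FR[s] for s in logical_form)

print(to_french(to_logic("the cat sleeps")))  # prints "le chat dort"
```

The appeal of the design was that adding a new language only required mappings to and from the shared logical language, not a separate system for every language pair.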
In 1990, we claimed that within 20 years, automatic speech-to-speech translation would have become reality. Today, it indeed exists, but it works in a totally different way than what we imagined then!
Today’s translation systems all rely on neural networks. You didn’t use any neural networks at the time?
Not at all! Remko Scha, who was part of the project, was already using a method called data-oriented parsing. He analyzed text corpora using statistics. This statistical information helped to choose between different possible interpretations. Our approach was thus not purely based on logic, but the ultimate goal was still to find a common logical representation.
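The role of statistics here can be illustrated with a toy example: when a sentence has several possible readings, prefer the one seen most often in a corpus. This is only a hypothetical stand-in for the idea, not Scha's actual data-oriented parsing algorithm, which combined fragments of annotated parse trees.

```python
from collections import Counter

# Hypothetical corpus annotations: for an ambiguous pattern like
# "saw the man with the telescope", each entry records which structural
# reading an annotator chose (the labels are invented for illustration).
corpus_observations = [
    "attach_to_verb", "attach_to_verb", "attach_to_noun",
    "attach_to_verb", "attach_to_noun", "attach_to_verb",
]

def choose_reading(candidates, observations):
    """Pick the candidate reading that occurs most often in the corpus."""
    counts = Counter(observations)
    return max(candidates, key=lambda reading: counts[reading])

best = choose_reading(["attach_to_verb", "attach_to_noun"], corpus_observations)
print(best)  # prints "attach_to_verb"
```

The point of the hybrid design Veltman mentions is visible even in this caricature: the candidate readings come from a grammar or logic, while the corpus statistics decide between them.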
Which other research groups at UvA were doing AI at the time?
Research in AI at UvA has always been fragmented. At the humanities faculty, one group was doing computational linguistics. A group in psychology researched how to interview experts so that you can put their expertise into an expert system. They firmly believed in Prolog, a logical programming language, and they used it for everything. And in Computer Science there was a strong robotics group.
But next to that, there were smaller groups spread around the faculties. My group in philosophy was doing natural language analysis. In medicine, people were developing medical expert systems, and in law, one group was building expert systems to act as artificial judges and issue fines automatically. Then there was a phonetics group, who built computer models to produce speech, and a small group in psychology working with neural networks who tried to build models of the brain.
All of these groups gave their own programming and AI courses – which were not very good, because the groups were quite limited in their expertise. In the end, you had about 20 places in the university where you could learn programming. Our idea was to bring it all together.
That doesn’t sound like an easy task.
No, it took a few years to create a decent AI study programme. The first time the programme was officially evaluated, it received a lot of criticism. I was happy with the criticism, because it was the only way to convince everybody to start working together.
You managed to unify the AI teaching. But what about the research?
Around 1995, there was an attempt to bring all AI research together in one place. But the idea was never seriously picked up at a higher administrative level, and some of the groups did not want to give up their relative autonomy.
What was the original focus of the AI teaching programme?
Utrecht University had a very successful programme called Cognitive AI. It involved a lot of philosophy, and their ambition was to do computational modelling of cognitive processes. Personally, I would have liked to do that too, but we had to do something different. Our approach was more practical, focused on robotics. At the time, you could specialize in three directions: expert systems, logic & language, and robotics. If you look at the programme today, it has changed completely. Expert systems, for instance, have almost disappeared. Same goes for non-monotonic reasoning, which I used to teach in the Master’s. That’s too bad – but I understand.
It all gave way to neural networks?
Yes. Even in 1992, we had a course in pattern recognition. But it was always a mix of logical methods and neural nets. Now, the success lies in deep learning.
Do you think there is still a role for logical methods in AI today?
Yes, but not the central role that researchers originally envisioned. Back in those days, many people thought that our brain does nothing but make logical derivations. I never agreed: I think a logical derivation is the result of some brain process, rather than the process itself.
Still, for some tasks you need to specify at a high level what the system needs to learn. And for that you need logic. For instance, if you want the system to learn meaning, it’s good if you can specify that meaning in a logical system. Google Translate and the like are great, but it’s not as if these programmes understand a word.
How do you know that they don’t understand?
That’s a very good question of course!
Thank you for your time, Frank!