09 July 2020, Iris Proff
© Arco Mul
“I think I have the right personality to be a researcher,” says Dieuwke Hupkes confidently, one week after defending her PhD on neural networks and natural language processing. “I like working on my own, I like thinking about the same problem for a long time, and I really, really want to know the answers to the questions I am asking in my research.”
The young researcher stands out. Not only is she one of the few female researchers in the male-dominated field of AI; the questions she has been posing for the past five years are also somewhat unusual. Many people who apply neural networks see them first and foremost as computational tools for solving difficult problems, such as translating sentences from one language to another or controlling a self-driving car.
Dieuwke, however, is not interested in building the best-performing artificial neural networks. Instead, she studies them as models of human language processing. In doing so, she wants to answer a question that has been haunting her since she was a kid: How do human brains process language? And how come they do this effortlessly, even though language is so complex?
Neural networks as models of the brain
For a model to be useful to understand how humans process language, it needs to have three properties.
- First, the model must resemble the human brain in some way. Even though neural networks differ from the brain in many ways, they share fundamental operating principles: they consist of simple units (“neurons”) that pass signals to each other, transform input into output, and change their structure as they learn.
- Second, our model of choice should be able to process language. Neural networks are not yet on par with human language processing. However, recent studies and systems like Google Translate show that neural networks capture many aspects of natural language.
- Finally, there needs to be a way for us to interpret what the model is doing. A common criticism of artificial neural networks is that they are ‘black boxes’: they are not programmed to solve a certain task, but are only given a procedure for learning and find their own solution.
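The operating principles named in the first point can be sketched in a few lines of code. This is a bare illustrative toy (the classic perceptron rule, not any model from the dissertation): a single unit that weights its inputs, produces an output, and adjusts its weights from its errors, here learning the logical AND function.

```python
# A minimal artificial "neuron": weighted inputs -> output, with
# weights adjusted from errors. An illustrative sketch of the operating
# principles above, not a model anyone would actually deploy.

def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0          # simple threshold activation

# Learn the logical AND function with the perceptron update rule.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):                   # repeated passes over the data
    for x, target in data:
        error = target - neuron(x, weights, bias)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([neuron(x, weights, bias) for x, _ in data])  # [0, 0, 0, 1]
```

The point is not the task (AND is trivial) but the mechanism: the unit is never told the solution, only how to update itself, which is the property that makes trained networks hard to read off by inspection.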
It is this last point that Dieuwke focused her efforts on over the course of her PhD. Like a neuroscientist developing methods to investigate what happens in the brain, she developed methods to investigate what happens in the hidden layers of an artificial neural network. What information do different parts of the network represent?
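One family of such methods trains a small "probe" classifier on a network's hidden states to test whether a given piece of information can be read out of them. The sketch below is a toy under stated assumptions, not Dieuwke's actual pipeline: the hidden states are synthesised so that one dimension encodes a binary property, and a linear probe trained on the states recovers it.

```python
# Toy probe sketch (an assumption-laden illustration, not the method
# from the dissertation): synthesise "hidden states" in which dimension
# 3 encodes a binary property, then train a linear probe to read it out.

import random

random.seed(0)
DIM, N = 8, 200

def fake_hidden_state(label):
    h = [random.gauss(0, 1) for _ in range(DIM)]
    h[3] = 2.0 if label else -2.0     # the property is "stored" in unit 3
    return h

labels = [random.random() < 0.5 for _ in range(N)]
data = [(fake_hidden_state(lab), lab) for lab in labels]

# Train a perceptron-style linear probe on the hidden states.
w, b, lr = [0.0] * DIM, 0.0, 0.1
for _ in range(10):
    for h, label in data:
        pred = sum(x * wi for x, wi in zip(h, w)) + b > 0
        err = int(label) - int(pred)
        w = [wi + lr * err * x for wi, x in zip(w, h)]
        b += lr * err

acc = sum((sum(x * wi for x, wi in zip(h, w)) + b > 0) == label
          for h, label in data) / N
print(acc)  # close to 1.0: the probe reads the property off unit 3
```

If the probe succeeds, the information is present in the hidden states; where the learned weights concentrate hints at which units carry it.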
Linking subjects and verbs
A quick example: To understand a sentence, we need to know which subject belongs to which verb. But subjects and verbs that belong together sometimes appear far apart in a sentence (like in this sentence). To track what belongs together, we need to keep in mind whether the subject of a sentence is singular or plural. Dieuwke and her colleagues found that this information is stored locally in a neural network: two specialized “neurons” represent the grammatical number of the subject. If the researchers disabled one of these units, the neural network could no longer process long-distance dependencies.
Since there are only two such units, this suggests that the network cannot represent a long-distance dependency nested inside another long-distance dependency. In a follow-up study, the researchers tested whether the same holds for humans. Indeed, they found that humans and neural networks made similar errors. This illustrates that artificial neural networks, as simplified as they are, can provide useful hypotheses about how humans process language.
Moments of doubt
Even though Dieuwke was able to follow her passions and develop her own approaches, her life as a PhD student was not without difficulties. As a PhD candidate, she reflects, you focus on an extremely specific set of problems – so specific you might be inclined to think they are meaningless. The odds are high that the results of all your efforts will not change the world.
“In science, many people need to do basically the same thing and only some of them will find something useful. If I end up being one of the researchers exploring a ‘dead end’, I can live with that,” Dieuwke concludes. “Someone has to explore these paths, too.”
The tempting world of AI in industry
Dieuwke is now starting a position as a postdoctoral researcher at the ELLIS Society, which aims to foster European AI research. One of her future ambitions is to bridge the gap between industry and academia in AI.
For many graduates in AI, industry has more attractive career choices to offer than academia. Employees at Google or Facebook don’t need to worry about funding and temporary contracts, they don’t need to spend their time on teaching and bureaucracy, and they have access to the most powerful computing facilities in the world. During an internship at Facebook, Dieuwke could run analyses in a few days that would have taken months at university.
“I don’t think it’s necessarily a problem that industry is guiding the way in AI. But I think it’s a problem that industry is currently not giving back much in terms of education”, Dieuwke states. “It would be great if people in industry could contribute more to the training of new researchers. They could for instance teach courses at university or fund more PhD positions. They have the money and expertise to do that!”
Read Dieuwke Hupkes’ dissertation titled “Hierarchy and interpretability in neural models of language processing” here.