Yes, the machines might take over one day – but that’s (probably) still a long way away. In reality, human intelligence may be the key to developing artificial intelligence (AI).
Hopkins’ own cognitive scientist, Tal Linzen, works at the intersection of human intelligence and AI, using AI to test hypotheses about human intelligence and then using human intelligence to improve AI technologies.
Linzen was originally interested in traditional linguistics, drawn in by the quirks of individual languages. During his PhD he became interested in the cognitive side of linguistics, and a stint in Paris drew him into his current area, computational linguistics. Now at Hopkins, Linzen leads the Computational Psycholinguistics Lab.
One thread of research in the Lab is language acquisition. A central puzzle is how we learn to form questions.
In English, questions are formed by moving a word to the beginning of a sentence. For example, the sentence “the dog is hungry” becomes “is the dog hungry?” simply by moving “is” to the front. More complex sentences, however, aren’t so straightforward.
Linzen gave the example of “the dog that can jump is hungry,” and mused about how to form its question version.
“How do you know if you’re supposed to move ‘can’ to the beginning of the sentence – ‘can the dog that jump is hungry?’ – or ‘is’ – ‘is the dog that can jump hungry?’” Linzen said. “You know that it’s the second question that’s the correct one, but how do you know that?”
Children aren’t often exposed to these kinds of complex questions while growing up, so it isn’t clear how they learn the rule so easily. One theory is that children pick up the rule from other parts of the language they do hear; another is that the knowledge is innate, something humans are born with, like sight.
Testing these hypotheses is where artificial intelligence comes in.
“We can write computer programs that have that assumption in them… and then we compare how well each of them can learn that phenomenon,” Linzen said.
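To make the ambiguity concrete, here is a toy sketch in Python, not code from the Lab: the auxiliary list, the crude handling of relative clauses and the tiny test set are all invented for illustration. It spells out the two competing rules and checks which one produces the question an English speaker actually accepts.

```python
# Toy sketch of the two competing question-formation rules. The auxiliary
# list and the crude relative-clause handling are invented for illustration.

AUXILIARIES = {"is", "are", "can", "will"}

def front_first_auxiliary(words):
    """Linear rule: move whichever auxiliary appears first to the front."""
    for i, w in enumerate(words):
        if w in AUXILIARIES:
            return [w] + words[:i] + words[i + 1:]
    return words

def front_main_clause_auxiliary(words):
    """Structure-sensitive rule: skip an auxiliary inside a relative clause
    (here, crudely, one that follows 'that') and front the main clause's."""
    in_relative = False
    for i, w in enumerate(words):
        if w == "that":
            in_relative = True
        elif w in AUXILIARIES:
            if not in_relative:
                return [w] + words[:i] + words[i + 1:]
            # this auxiliary belongs to the relative clause; keep scanning
        elif in_relative:
            # a plain verb such as 'jump' closes the relative clause here
            in_relative = False
    return words

# Declarative sentences paired with the question an English speaker accepts.
test_pairs = [
    ("the dog is hungry", "is the dog hungry"),
    ("the dog that can jump is hungry", "is the dog that can jump hungry"),
]

for rule in (front_first_auxiliary, front_main_clause_auxiliary):
    matches = 0
    for declarative, accepted in test_pairs:
        produced = " ".join(rule(declarative.split()))
        print(f"{rule.__name__}: {produced}?")
        matches += produced == accepted
    print(f"  agrees with speakers on {matches}/{len(test_pairs)} sentences\n")
```

On the simple sentence the two rules agree, but on Linzen’s complex example the linear rule produces “can the dog that jump is hungry?” while only the structure-sensitive rule produces the question speakers accept.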
Besides studying language acquisition, Linzen also works on improving language technologies. Current systems are fairly rudimentary: they can be given commands, but the commands have to be phrased quite specifically to be understood.
“I think one of the most exciting directions of language technologies is to figure out how to make your conversation with those systems more natural,” Linzen said.
This process begins with studying human language comprehension, in particular how predictable each word in a sentence is. The Lab measures this by tracking how quickly study participants read individual words, either with eye tracking, which follows eye movements during reading, or through internet surveys in which a participant reads a sentence word by word, pressing the space bar after each word. The idea is that the longer a word takes to read, the less predictable it was.
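The article leaves the math implicit, but a standard way to quantify a word’s predictability in this line of work is its surprisal under a language model: the negative log of its probability given the preceding words. Below is a minimal sketch using a toy bigram model; the miniature corpus is invented for illustration.

```python
import math
from collections import Counter, defaultdict

# A tiny invented corpus standing in for a language model's training data.
corpus = [
    "the dog is hungry",
    "the dog is sleepy",
    "the cat is hungry",
]

# Count bigrams to estimate P(word | previous word).
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, word in zip(words, words[1:]):
        bigram_counts[prev][word] += 1

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), for bigrams seen in the corpus.
    The higher the surprisal, the less predictable the word and, on this
    theory, the longer it should take to read."""
    counts = bigram_counts[prev]
    return -math.log2(counts[word] / sum(counts.values()))

print(surprisal("dog", "is"))     # 0.0 bits: 'is' always follows 'dog' here
print(surprisal("is", "sleepy"))  # ~1.58 bits: 'sleepy' follows 'is' only 1 time in 3
```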
Linzen also uses made-up languages to understand the predictability of words. “You construct a language that is extremely simple, has maybe 25 words and two grammar rules,” Linzen said. “You have people learn it and see how difficult each of the rules are to learn or what makes it hard to learn.”
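For a sense of what such a miniature language might look like, here is a hedged sketch: the nonsense vocabulary and the two rules below are invented, since the article specifies only a vocabulary of about 25 words and two grammar rules.

```python
import random

# An invented miniature language: 25 nonsense words in three categories,
# plus two grammar rules for combining them.
nouns     = ["blicket", "dax", "wug", "toma", "fep", "gorp", "zav", "kiki", "modi"]
verbs     = ["glorks", "pilks", "norps", "frems", "zibs", "tams", "vushes", "rends"]
modifiers = ["lu", "ka", "ni", "po", "se", "ro", "mib", "ged"]

def noun_phrase():
    # Rule 1: a noun may optionally be preceded by a modifier.
    noun = random.choice(nouns)
    if random.random() < 0.5:
        return random.choice(modifiers) + " " + noun
    return noun

def sentence():
    # Rule 2: a sentence is noun phrase + verb + noun phrase.
    return " ".join([noun_phrase(), random.choice(verbs), noun_phrase()])

# A few training items a participant might be asked to learn.
for _ in range(5):
    print(sentence())
```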
The Lab then uses what these experiments reveal about human language comprehension to build artificial intelligence that handles human language better.
Linzen also works on understanding the capabilities of current artificial intelligence. One popular approach, the neural network, is a rather simple way to mimic aspects of human cognition using little more than addition and multiplication. Neural networks have come to be widely used in artificial intelligence, but they are much simpler than the machinery traditionally thought necessary for language comprehension.
“It’s been surprising to me how good [neural networks] are, and that calls into question a lot of what we thought about the difficulty of learning certain things about language. It looks easier than we thought in some ways,” Linzen said.
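To make the “addition and multiplication” description concrete, here is a minimal sketch of a single neural-network layer; the numbers are arbitrary, and real language systems learn their weights from data and stack many such layers.

```python
# A single neural-network layer really is just multiplication and addition,
# followed by a simple squashing step. The weights below are arbitrary;
# real systems learn them from data and stack many such layers.

def layer(inputs, weights, biases):
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        # multiply each input by a weight, add the products together, add a bias
        total = sum(x * w for x, w in zip(inputs, neuron_weights)) + bias
        # a simple nonlinearity (ReLU): keep positive values, zero out the rest
        outputs.append(max(0.0, total))
    return outputs

# Three input values feeding two "neurons".
inputs  = [0.5, -1.0, 2.0]
weights = [[0.2, 0.8, -0.5], [1.0, 0.1, 0.3]]
biases  = [0.1, -0.2]
print(layer(inputs, weights, biases))  # [0.0, 0.8]
```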
Outside of the Computational Psycholinguistics Lab, Linzen teaches the course Introduction to Computational Cognitive Science. He says he enjoys teaching a class directly related to what he studies, as well as the chance to interact closely with students.
“I like hearing from people who don’t know what the ‘correct answer’ is,” Linzen said.
He emphasized that ‘correct’ merely means the currently accepted answer.
“Sometimes it’s surprising. It’s really fun to get a different, fresh perspective on something that you think you already know.”
Linzen also believes it is important for students to be introduced to the intersection of human and artificial intelligence, since it is a huge part of both cognitive science and computational science.
“Students who are interested in developing artificial intelligence should consider some classes here [in the Cognitive Sciences Department]. It will give you an interesting perspective,” Linzen said.
The human brain still remains far superior to artificial intelligence, but work like that in Linzen’s Computational Psycholinguistics Lab keeps making AI technologies smarter and smarter. Still, it can safely be said that we’re a long way away from the machine uprising.