by Morten H. Christiansen, Cornell University and Pablo Contreras Kallens, Cornell University [This article first appeared in The Conversation, republished with permission]
Unlike the carefully scripted dialogue found in most books and movies, the language of everyday interactions tends to be messy and incomplete, full of false starts, interruptions, and people talking over each other. Whether it’s casual conversations between friends, quarrels between siblings, or formal discussions in a conference room, genuine conversation is chaotic. It seems miraculous that anyone can learn a language at all given the haphazard nature of linguistic experience.
For this reason, many linguists, including Noam Chomsky, a founder of modern linguistics, believe that language learners need some sort of glue to master the unruly nature of everyday language. And that glue is grammar: a system of rules for generating grammatical sentences.
Children need to have a grammar model hard-wired into their brains to help them overcome the limitations of their linguistic experience – or so it is believed.
This template, for example, may contain a “super-rule” that dictates how new elements are added to existing sentences. Children then only have to learn whether their first language is one, like English, where the verb precedes the object (as in “I eat sushi”), or a language like Japanese, where the verb follows the object (in Japanese, the same sentence is structured as “I sushi eat”).
But new insights into language learning are coming from an unlikely source: artificial intelligence. A new breed of large AI language models can write newspaper articles, poetry, and computer code, and answer questions truthfully, after being exposed to vast amounts of language input. And, most amazingly, they do it all without the help of grammar.
Grammatical language without grammar
Even if their choice of words is sometimes strange or absurd, or contains racist, sexist, and other harmful biases, one thing is very clear: the overwhelming majority of the output from these AI language models is grammatically correct. And yet, there are no hard-wired grammar templates or rules involved – they rely solely on linguistic experience, however messy.
GPT-3, arguably the best known of these models, is a gigantic deep learning neural network with 175 billion parameters. It was trained to predict the next word in a sentence given what happened before on hundreds of billions of words from the internet, books and Wikipedia. When it made a bad prediction, its parameters were adjusted using a machine learning algorithm.
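To make the training idea concrete, here is a minimal sketch of next-word prediction in Python. It uses a simple bigram word-count model as an illustrative stand-in – GPT-3 actually learns 175 billion parameters via gradient descent over a neural network, and the toy corpus below is invented for the example:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the internet-scale training data.
corpus = ("i eat sushi . i eat rice . you eat sushi . "
          "i like rice . you like sushi .").split()

# For each word, count which words follow it. These counts play the
# role of the model's "parameters": each training pair adjusts them,
# loosely analogous to how a bad prediction adjusts GPT-3's weights.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Predict the most likely next word given the previous word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("i"))    # the word that most often follows "i"
print(predict_next("eat"))  # the word that most often follows "eat"
```

Unlike this one-word-of-context sketch, GPT-3 conditions its prediction on a long window of preceding words, which is what lets it produce coherent paragraphs rather than just plausible word pairs.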
Remarkably, GPT-3 can generate believable text in response to prompts such as “A summary of the latest ‘Fast and Furious’ movie is…” or “Write a poem in the style of Emily Dickinson”. Plus, GPT-3 can solve SAT-level analogy problems, answer reading comprehension questions, and even work through simple arithmetic problems – all from learning to predict the next word.
Comparison of AI models and human brains
The similarity to human language does not end there, however. Research published in Nature Neuroscience has demonstrated that these artificial deep learning networks appear to use the same computational principles as the human brain. The research group, led by neuroscientist Uri Hasson, first compared how well GPT-2 – a “little brother” of GPT-3 – and humans could predict the next word in a story from the podcast “This American Life”: people and the AI predicted the exact same word nearly 50% of the time.
The researchers recorded the brain activity of volunteers while they listened to the story. The best explanation for the activation patterns they observed was that people’s brains – like GPT-2 – were not just using the previous word or two to make predictions, but were relying on the accumulated context of up to 100 previous words. Altogether, the authors conclude: “Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning.”
One possible concern is that these new AI language models are powered by lots of input: GPT-3 was trained on a linguistic experience equivalent to 20,000 human years. But a preliminary study that has yet to be peer-reviewed found that GPT-2 can still model human next-word predictions and brain activations, even when trained on just 100 million words. That’s well within the amount of language input an average child might hear in the first 10 years of life.
We are not suggesting that GPT-3 or GPT-2 learn language exactly like children do. Indeed, these AI models do not appear to understand much, if anything, of what they are saying, whereas understanding is fundamental to human language use. Still, what these models demonstrate is that a learner – albeit a silicon one – can learn language well enough from mere exposure to produce perfectly good grammatical sentences, and to do so in a way that resembles human brain processing.
Rethinking language learning
For years, many linguists believed that learning language was impossible without a built-in grammar template. The new AI models show otherwise: they demonstrate that the ability to produce grammatical language can be learned from linguistic experience alone. Likewise, we suggest that children do not need an innate grammar to learn language.
“Children should be seen, not heard” goes the old adage, but the latest AI language models suggest nothing could be further from the truth. Instead, children should be engaged in back-and-forth conversation as much as possible to help them develop their language skills. Linguistic experience – not grammar – is essential to becoming a proficient language user.
Morten H. Christiansen, Professor of Psychology, Cornell University and Pablo Contreras Kallens, Ph.D. Student in Psychology, Cornell University
This article is republished from The Conversation under a Creative Commons license. Read the original article.