July 27, 2024

New Study Shows that AI Can Learn Language Like a Child

Recent research by a team at New York University (NYU) demonstrates that artificial intelligence (AI) systems can learn and use human language from input far closer to what children actually receive. Today's AI systems, such as GPT-4, learn from massive amounts of language input, often trillions of words, whereas children hear only a few million words per year. This data gap has led researchers to question how relevant AI advances really are to human learning and development.

To bridge this gap, the NYU researchers trained a multimodal AI system on video recordings captured by a head-mounted camera worn by a single child from the age of six months to the child's second birthday. The recordings covered only around 1% of the child's waking hours. The researchers aimed to determine whether the model could learn words and concepts from this limited input, which resembles a child's everyday experience.

The results of the study, published in the journal Science, revealed that the neural network learned a substantial number of words and concepts from the small fraction of the child's daily experience captured in the videos. This suggests that relatively limited, child-like input can be sufficient for genuine word learning in AI systems.

According to Wai Keen Vong, a research scientist at NYU’s Center for Data Science and the first author of the paper, this study is the first to demonstrate that a neural network trained on realistic input from a single child can learn to associate words with their visual counterparts.

Brenden Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology and the senior author of the paper, suggests that using AI models to study language learning in children allows researchers to address longstanding debates about the factors influencing word acquisition. The findings suggest that children may not require language-specific biases or innate knowledge, but may instead rely largely on associative learning.

The NYU researchers analyzed over 60 hours of video footage captured from the child's perspective, containing approximately a quarter of a million word instances. The footage covered a range of everyday activities, such as mealtimes, book reading, and playtime. The researchers then trained a multimodal neural network with two separate modules: a vision encoder that processes video frames and a language encoder that processes transcribed child-directed speech, trained together so that words become associated with the scenes in which they were heard.
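To make that two-module design more concrete, the following is a minimal, illustrative sketch in Python (PyTorch) of a dual-encoder model that links video frames to co-occurring utterances through a contrastive objective. All names here (SimpleVisionEncoder, SimpleTextEncoder, contrastive_loss) and the specific layer choices are assumptions made for illustration; this is not the authors' published code, and the model described in the Science paper uses its own encoders and training details.

# Illustrative sketch only, not the authors' published implementation:
# a generic dual-encoder model trained with a contrastive objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleVisionEncoder(nn.Module):
    """Maps a video frame to a shared embedding space."""
    def __init__(self, embed_dim=512):
        super().__init__()
        # A small convolutional stack stands in for a full vision backbone.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, frames):                    # frames: (batch, 3, H, W)
        feats = self.conv(frames).flatten(1)      # (batch, 64)
        return F.normalize(self.proj(feats), dim=-1)

class SimpleTextEncoder(nn.Module):
    """Maps a tokenized utterance to the same embedding space."""
    def __init__(self, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        # Mean-pool word embeddings over the utterance, ignoring padding.
        mask = (token_ids != 0).unsqueeze(-1).float()
        pooled = (self.embed(token_ids) * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return F.normalize(self.proj(pooled), dim=-1)

def contrastive_loss(frame_emb, text_emb, temperature=0.07):
    """Pull co-occurring frame/utterance pairs together, push mismatches apart."""
    logits = frame_emb @ text_emb.t() / temperature        # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: frames-to-text and text-to-frames.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# One hypothetical training step on a batch of (frame, utterance) pairs.
vision, text = SimpleVisionEncoder(), SimpleTextEncoder()
optimizer = torch.optim.Adam(list(vision.parameters()) + list(text.parameters()), lr=1e-4)
frames = torch.randn(8, 3, 64, 64)                # stand-in video frames
tokens = torch.randint(1, 10000, (8, 12))         # stand-in tokenized utterances
optimizer.zero_grad()
loss = contrastive_loss(vision(frames), text(tokens))
loss.backward()
optimizer.step()

In a setup like this, each transcribed utterance is paired with the frames recorded around the moment it was spoken; training pulls matching frame and utterance embeddings together and pushes mismatched pairs apart, which is one way a model can come to associate words with their visual referents without any built-in linguistic knowledge.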

The study highlights the potential of AI systems to learn language and acquire concepts from input that mirrors the experiences of children. By combining algorithmic advances with this kind of realistic, real-life input, researchers can gain new insights into early language development, potentially reshaping our understanding of this critical area of human development.

Overall, this research demonstrates the promising possibilities of AI in language learning and provides a unique perspective on the factors influencing language acquisition in children.

*Note:
1. Source: Coherent Market Insights, Public sources, Desk research
2. We have leveraged AI tools to mine information and compile it