Machine learning analyzes toddlers’ eye movements to ascertain their age

Using deep learning to tease out factors indicative of age-related variability in the way toddlers gaze at visual stimuli, researchers at the University of Minnesota have shown that the technology can accurately distinguish 18-month-olds from 30-month-olds.

The team, led by Kirsten Dalrymple, PhD, of the school’s Institute of Child Development, reported the findings online April 18 in Scientific Reports.

Dalrymple and colleagues recruited 37 children aged 18 months and 36 children aged 30 months. The researchers had the participants sit on a parent’s lap and view images presented on a monitor.

The parents were instructed not to point at or talk about the images.

While the toddlers viewed the monitor, the researchers tracked and recorded their eye movements using dedicated eye-tracking software.

Next, using a linear support-vector machine classifier on features extracted by their deep-learning model, the team looked for a decision boundary with a maximum margin separating the two groups of toddlers.
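The general approach can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical example using scikit-learn: it trains a linear support-vector machine to find a maximum-margin boundary between two groups, standing in for the study's classifier. The feature vectors are randomly generated stand-ins for the deep-learning features; the dimensions, sample sizes, and library choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a linear SVM separating two age groups by
# maximum margin. Feature extraction by a deep-learning model is
# simulated with random vectors; this is NOT the study's real data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated feature vectors (e.g., 128-dim gaze embeddings) for
# 37 18-month-olds and 36 30-month-olds, mirroring the sample sizes.
X_young = rng.normal(loc=0.0, scale=1.0, size=(37, 128))
X_old = rng.normal(loc=0.5, scale=1.0, size=(36, 128))
X = np.vstack([X_young, X_old])
y = np.array([0] * 37 + [1] * 36)  # 0 = 18 months, 1 = 30 months

# A linear SVM fits the hyperplane with the widest margin between classes.
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validation is used here so the reported accuracy reflects performance on held-out children rather than the training set, which is the standard way such classification accuracies are estimated.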

On analysis, they found their deep-learning classification system accurately distinguished between the two groups 83% of the time.

By comparison, their baseline model, which used similar techniques but had no deep learning component, had an accuracy of 68%.

“Our results demonstrate that machine learning is an effective tool for understanding how looking patterns vary according to age, providing insight into how toddlers allocate attention and how that changes with development,” the authors concluded. “This sensitivity for detecting differences in exploratory gaze behavior in toddlers highlights the utility of machine learning for characterizing a variety of developmental capacities.”

Interestingly, analysis of the eye movements and fixations further showed that 18-month-olds tend to focus keenly on faces, while 30-month-olds attend more to the objects that other people are looking at.

“This difference between age groups could reflect changes in cognitive representation of others’ intentionality,” the authors wrote.

The full study is available online for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
