Researchers have found that AI speech recognition models can decode a dog’s mood, breed, age and sex by analyzing its barks.
Researchers from the University of Michigan and Mexico’s National Institute of Astrophysics, Optics and Electronics published a study showing that AI models originally trained on human speech can be applied to dog barks.
They fed the Wav2Vec2 speech recognition model a dataset of barks and other noises from 74 dogs of varying breeds. The model generated acoustic representations of the dog noises that could be used to determine how the animals were feeling.
The bark-infused Wav2Vec2 model was 70% accurate in determining the dog’s mood, as well as its breed, age and sex, outperforming other models trained on related data.
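The approach described above, a speech model with a classification head predicting labels from raw audio, can be sketched with the Hugging Face `transformers` library. This is a minimal illustration, not the study's actual code: the mood labels are hypothetical, and a small randomly initialized Wav2Vec2 stands in for the human-speech-pretrained checkpoint the researchers fine-tuned.

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

# Hypothetical mood labels for illustration; the study also predicted
# breed, age and sex (typically via separate classification heads).
LABELS = ["playful", "aggressive", "fearful", "neutral"]

# Small randomly initialized config as a stand-in for a pretrained
# human-speech checkpoint (which would normally be downloaded and
# fine-tuned on the bark dataset).
config = Wav2Vec2Config(
    num_labels=len(LABELS),
    num_hidden_layers=2,
    hidden_size=64,            # divisible by attention heads and pos-conv groups
    num_attention_heads=2,
    intermediate_size=128,
)
model = Wav2Vec2ForSequenceClassification(config)
model.eval()

# One second of fake audio at 16 kHz standing in for a recorded bark.
waveform = torch.randn(1, 16000)

with torch.no_grad():
    logits = model(input_values=waveform).logits  # shape: (1, num_labels)

predicted_mood = LABELS[logits.argmax(dim=-1).item()]
print(predicted_mood)
```

In the real pipeline, the pretrained weights (rather than a random model) provide the acoustic representations, and the classification head is fine-tuned on the labeled bark recordings.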
“Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author of the study. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”
Models like Wav2Vec2 are designed to distinguish nuances in human vocals, including tone, pitch and accent.
However, such systems don’t exist for dogs due to a lack of quality training data.
To gather the training data, the researchers exposed dogs to “several stimuli,” such as repeatedly ringing a doorbell or speaking affectionately to them, and recorded any noises the dogs made; the recordings were then fed into the model for analysis.
The researchers found that by repurposing a model originally designed to analyze human speech, they were able to overcome the data challenge.
“These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”
“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the University of Michigan’s AI laboratory director.
The research could benefit biologists and animal behaviorists, the researchers suggested, including helping humans respond to the emotional and physical needs of dogs.
“Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations,” Mihalcea said.