Dog whisperers can now join the list of professionals whose jobs are at risk of being stolen by artificial intelligence (AI), as it may have just entered the world of animal communication. Using machine learning software, researchers were able to successfully decode the meaning of dogs’ vocalizations, paving the way for new technologies that may help us better understand our four-legged companions.
The authors of the study, which has not yet been peer-reviewed, recorded the barks, growls, howls, and whimpers of 74 pet dogs as they were exposed to a variety of scenarios designed to trigger certain responses. These included everything from playing with their favorite toys to witnessing the researchers pretending to attack the dogs’ owners.
From these recordings, the study authors identified 14 different types of dog vocalization, such as “positive squeals” during gameplay, “sadness/anxiety barking” and “very aggressive barking at a stranger.” An AI model called Wav2Vec2 – which was originally designed for human speech recognition – was then trained on these hound sounds before being put through its paces with a number of challenges.
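To illustrate the setup described above, here is a minimal sketch (not the authors' code) of how a Wav2Vec2-style model can be pointed at a 14-way vocalization classification task using the Hugging Face Transformers library. The dataset, labels, and tiny configuration are all stand-ins: a real experiment would fine-tune pretrained weights (e.g. `facebook/wav2vec2-base`) on labeled audio clips rather than use a small randomly initialized model.

```python
# Hedged sketch, not the study's implementation: a Wav2Vec2 classifier head
# over raw audio, with 14 output classes matching the 14 vocalization types
# the researchers identified. A tiny random-init config is used here so the
# sketch runs without downloading pretrained weights.
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

NUM_CLASSES = 14  # e.g. "positive squeal", "sadness/anxiety barking", ...

config = Wav2Vec2Config(
    hidden_size=32,          # deliberately tiny; real models use 768+
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=NUM_CLASSES,
)
model = Wav2Vec2ForSequenceClassification(config)
model.eval()

# One batch of four 1-second clips at 16 kHz (random stand-in audio).
waveforms = torch.randn(4, 16000)
with torch.no_grad():
    logits = model(input_values=waveforms).logits

print(logits.shape)  # one score per clip per vocalization class
```

In practice, fine-tuning would start from `Wav2Vec2ForSequenceClassification.from_pretrained(...)` with human-speech weights, which is the "pre-training on human speech" advantage the article goes on to describe.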
The first of these involved picking out individual dogs based on their vocalizations. Funnily enough, when the AI was pre-trained on human speech before being introduced to pooch talk, it was able to successfully identify specific dogs in 50 percent of trials, while models trained only on canine sounds achieved a 24 percent success rate.
This is pretty significant, as it suggests that familiarity with human speech can help an AI to get to grips with the complexities of non-human communication, which means we don’t have to start from scratch when it comes to building a model for talking to animals.
“Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations,” explained study author Rada Mihalcea in a statement.
For its next trick, the model was able to distinguish between different dog breeds with varying levels of success. More than half of the dogs in the study were chihuahuas, and the software was able to correctly identify these lap dogs from their bark on around 75 percent of occasions.
Finally, the model was challenged to interpret the meaning of the animals’ vocalizations by matching them to one of the 14 types of dog sound listed by the researchers. When pre-trained on human speech, the AI achieved a success rate of 62.2 percent, although certain categories of sound were easier to decipher than others.
For instance, the model was able to correctly identify 90.7 percent of negative grunts but only 45.26 percent of negative squeals.
“There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch,” said Mihalcea.
“By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” she said.
The study is currently awaiting peer review and is available as a preprint on arXiv.