Google has created a new artificial intelligence model that could reveal how dolphins communicate with each other, with the hope that in time we will learn to understand what they are saying and "speak dolphin" back to them.
The model, Google DeepMind's DolphinGemma, has been trained on the world's largest collection of dolphin sounds, including clicks, whistles and other vocalisations that the Wild Dolphin Project has recorded over several years.
Dr Denise Herzing, founder and research director of the Wild Dolphin Project, told The Telegraph: "We do not know if animals have words.
"Dolphins can recognise themselves in the mirror, they use tools, they're smart but language is still the last barrier so feeding dolphin sounds into an AI model will give us a really good look if there are patterns, subtleties that humans can't pick out.
"The goal would someday be to 'speak dolphin'".
The AI model will also try to spot sequences in the sounds that could be linked to behaviour, which could help researchers understand what is being communicated.
Dr Thad Starner, a Google DeepMind scientist, added: "The model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication, a task previously requiring immense human effort.
"We're just beginning to understand the patterns within the sounds."