The Challenges of Animal Translation

Artificial intelligence may help us decode animalese. But how much will we really be able to understand?
Two dolphins swimming while a human figure holds a gigantic earpiece to listen. Illustration by Jakov Jakovljević

Disney’s 2019 remake of its 1994 classic “The Lion King” was a box-office success, grossing more than one and a half billion dollars. But it was also, in some ways, a failed experiment. The film’s photo-realistic, computer-generated animals spoke with the rich, complex voices of actors such as Donald Glover and Chiwetel Ejiofor—and many viewers found it hard to reconcile those intonations with the feline gazes on the screen. In giving such persuasively nonhuman animals human personalities and thoughts, the film created a kind of cognitive dissonance. It had been easier to imagine the interiority of the stylized beasts in the original film.

Disney’s filmmakers had stumbled onto an issue that has long fascinated philosophers and zoologists: the gap between animal minds and our own. The dream of bridging that divide, perhaps by speaking with and understanding animals, goes back to antiquity. Solomon was said to have possessed a ring that gave him the power to converse with beasts—a legend that furnished the title of the ethologist Konrad Lorenz’s pioneering book on animal psychology, “King Solomon’s Ring,” from 1949. Many animal lovers look upon the prospect of such communication with hope: they think that, if only we could converse with other creatures, we might be inspired to protect and conserve them properly. But others warn that, whenever we attempt to communicate with animals, we risk projecting our ideas and preconceptions onto them. We might do this simply through the act of translation: any human language constrains the repertoire of things that can be said, or perhaps even thought, for those using it.

In 1974, the philosopher Thomas Nagel published a seminal paper called “What Is It Like to Be a Bat?” Bat life, Nagel argued, is so profoundly different from human life that we can never truly know the answer to that question. Our understandings are shaped by our human concepts; the only way to know what it is like to be a bat is to be a bat, and to have bat concepts. Even if we don’t or can’t know exactly what it’s like to be a bat, we can have some understanding of how bat minds work; we can understand that bat life is lived aloft, sometimes upside down, and partly through echolocation. Still, in Nagel’s view, something is left out: the experience itself. As the philosopher Ludwig Wittgenstein famously put it, if a lion could talk, we could not understand him—our human minds would not share the sensory and conceptual landscape that lion-talk would express.

Today, animal-translation technologies are being developed that use the same “machine learning” approach that is applied to human languages in services such as Google Translate. These systems use neural networks to analyze vast numbers of example sentences, inferring from them general principles of grammar and usage, and then apply those patterns in order to translate sentences they have never seen before. Denise Herzing, the founder and research director of the nonprofit Wild Dolphin Project, which studies dolphins in the Atlantic, is now using similar algorithms, coupled with underwater keyboards and computers, to try to decode dolphin communications. “It may be that our mobile technology will be the same technology that helps us communicate with another species,” Herzing said, in a 2013 TED talk. An even more ambitious initiative, called the Interspecies Internet—founded by the musician Peter Gabriel; the M.I.T. professor Neil Gershenfeld; the “father of the Internet,” Vint Cerf; and the cognitive psychologist and marine-mammal scientist Diana Reiss—seeks to use new technologies to connect intelligent species such as dolphins, elephants, and great apes to one another and to us. “Computer technology is finally allowing us to see inside the world of animals in ways that are showing us that they are complex sentient beings that deserve our understanding and respect,” Con Slobodchikoff, an animal behaviorist who is a professor emeritus at Northern Arizona University, said.
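To make the approach concrete, here is a minimal sketch of that kind of learned translation, using the open-source Hugging Face transformers library. The library, the model (“t5-small”), and the English-to-French task are illustrative choices only, not the tools that Google Translate or Herzing’s team actually use.

```python
# A minimal sketch of learned translation: a model trained on vast numbers of
# example sentences generalizes to sentences it has never seen.
# Assumes the "transformers" package is installed; "t5-small" is one small,
# freely available pretrained model, chosen purely for illustration.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")

# The model has almost certainly never seen this exact sentence; it translates
# by applying patterns absorbed during training.
result = translator("If a lion could talk, we could not understand him.")
print(result[0]["translation_text"])
```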

But other researchers, following Nagel, doubt that genuine translation can be easily achieved between species that don’t share the same basic perceptual and cognitive processes. “You can’t just have a Skype conversation,” Marcelo Magnasco, a physicist and dolphin researcher at the Rockefeller University in New York, told me. “We will need to understand what it is to be a dolphin.”

Not all animal minds are equally different. It stands to reason that, mentally speaking, we have more in common with other primates than with octopuses and squid: the last common ancestor we share with chimpanzees lived six to eight million years ago, whereas our last common ancestor with octopuses lived in the Precambrian seas, some six hundred million years ago. In “The Descent of Man,” Charles Darwin argued that there is “no fundamental difference between man and the higher mammals in their mental faculties,” and that the emotions of those animals were directly comparable to the ones that we experience. Darwin may have been indulging the Victorian tendency toward anthropomorphization, but it isn’t hard to discern a shared mentality with some of our animal cousins. Oliver Sacks expressed this memorably in his description of an encounter with an orangutan at the Toronto Zoo:

She stared into my eyes, and I into hers, like lovers gazing into each other’s eyes, with just the pane of glass between us. I put my left hand against the window, and she immediately put her right hand over mine. Their affinity was obvious—we could both see how similar they were. I found this astounding, wonderful; it gave me an intense feeling of kinship and closeness as I had never had before with any animal. . . . Then we pulled our faces away from the glass, and she went back to her baby.

“I have had and loved dogs and other animals,” Sacks wrote, “but I have never known such an instant, mutual recognition and sense of kinship as I had with this fellow primate.”

It shouldn’t be surprising, then, that great apes can learn to communicate with humans in very sophisticated ways. Koko, a gorilla who lived in a preserve in the Santa Cruz Mountains until her death in 2018, learned many words in a modified version of American Sign Language; a bonobo named Kanzi, who was studied by the primatologist Sue Savage-Rumbaugh beginning in the nineteen-eighties, can understand complex human commands and communicate using keyboard symbols. The vocabulary used by Koko, Kanzi, and other apes can be not just concrete but emotional. The animals don’t just ask for this or that object; they can learn to convey sadness, for instance, through hand gestures mimicking the flowing of tears.

And yet, even with our nearest animal relatives, we risk overinterpreting what we see. “Research on animal behavior pretty much has to start from an anthropomorphic stance,” the neuroscientist Joseph LeDoux has written. “We study things that matter to us.” This may mean that we are too ready to attribute meaning and understanding to the animals with whom we communicate. In 2015, when Koko was filmed relaying what appeared to be a sign-language message to the United Nations climate-change conference (“Koko sorry. Koko cry. Time hurry. Fix Earth! Help Earth!”), several linguists and primate researchers argued that she could not possibly understand a concept like climate change, and that the display must have been largely scripted. Kanzi, despite his large vocabulary, may not have a grasp of grammar—to some, this suggests that he is just associating sounds with objects or actions, and not truly using language.

With species to which we are more distantly connected, it becomes even harder to establish common ground. Slobodchikoff, who is an expert on the communications of prairie dogs, said that “the time has come for people to understand that what we recognize as reality is not necessarily what other animal species recognize.” For example, Slobodchikoff said, “bees and some birds see in the ultraviolet range of the visual spectrum, but we don’t. Bats, dolphins, dogs, and cats hear sounds in the ultrasonic range, but we don’t.” Dogs have vastly more smelling capacity than we humans do. Although Slobodchikoff insists that animals “have language, perceive time, have emotions, think, and plan,” he also argues that “each animal species has key differences that make it unique.” We are as distinct as we are alike.

Those differences are dictated by many factors, including physiology and brain structure, the nature of the animal’s environment, and the kinds of experience that environment supplies. The resulting divergences can be daunting. The last shared ancestor of humans and dolphins, for instance, lived an estimated ninety-five million years ago. Today, dolphins might as well live on a different planet—a gravity-free world that’s typically blue-green in all directions, with no shadows or smells and a vast and alien soundscape. What concepts are needed to navigate such a place? Whatever they may be, dolphins communicate them through sequences of whistles made by nasal-tissue vibration, which many dolphin researchers see as a proto-language—what you might jokingly call “dolphish.” “Dolphins vocalize profusely when they’re all together, and the vocalizations are extremely complex,” Magnasco, the dolphin researcher, said. “They appear to be conversing with one another.” Each dolphin in a pod has a signature whistle, which with a little poetic license could be called its name; dolphins use these whistles to summon one another, and to tell other members of their pod where they are if they lose visual contact.

This much we know. But Magnasco doesn’t think that anyone has achieved a basic understanding of dolphish. “I’m not yet confident that I know what is the signal, what is the variation, what is the intention,” he said. “You need an extremely large body of data to do that, and it’s unclear that we have enough yet.” Still, there are hints that it might be possible. In 2013, Herzing and her team at the Wild Dolphin Project used an underwater system called Cetacean Hearing and Telemetry (CHAT), which pairs wearable hardware with pattern-recognition software designed to identify meaningful signals in dolphin whistles. The system picked out a sound within a dolphin pod that the researchers had earlier trained the dolphins to associate with sargassum seaweed—a clumpy, floaty plant that dolphins sometimes play with. The dolphins may have assimilated the new “word,” and begun using it in the wild.
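Herzing’s CHAT system is far more sophisticated, but a toy sketch conveys the basic idea of flagging a known whistle in a recording: slide a stored template over the incoming signal and look for a strong correlation peak. Everything below, from the sample rate to the synthetic “whistle” and the threshold, is invented for illustration and is not how CHAT itself works.

```python
# Toy whistle detection by normalized cross-correlation (illustrative only).
import numpy as np

def detect_template(signal, template, threshold=0.5):
    """Return sample offsets where the stored template closely matches the signal."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for start in range(len(signal) - len(template) + 1):
        window = signal[start:start + len(template)]
        w = (window - window.mean()) / (window.std() + 1e-12)
        score = float(np.dot(w, t)) / len(template)  # Pearson correlation, -1 to 1
        if score > threshold:
            hits.append(start)
    return hits

# Synthetic example: an upward-sweeping "whistle" buried in noise.
rate = 8000
t_axis = np.linspace(0, 0.2, int(rate * 0.2))
whistle = np.sin(2 * np.pi * (1000 + 2000 * t_axis) * t_axis)
recording = 0.3 * np.random.randn(rate * 2)
recording[6000:6000 + len(whistle)] += whistle

print(detect_template(recording, whistle))  # prints offsets near sample 6000
```

Real systems work on far messier data, but the underlying task is the same: deciding which stretches of sound count as instances of a signal the software already knows.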

And yet, in an important sense, dolphish may be more than a language. Dolphins don’t just make whistles—they also employ body language and a variety of sounds, including clicks, which they use for sonar echolocation. From the acoustic reflections created by the clicks, a dolphin can form a mental picture of an object’s size, shape, and density. Dolphins can interpret one another’s sonar signals. “They are able to see shapes of things when they passively eavesdrop on someone else’s clicks,” Magnasco said. Using sound alone, they can see what another sees.

With cephalopods such as octopuses and squid, the gap widens further. Our common ancestor with them is thought to be a flatworm with only the most rudimentary of nervous systems; octopus brains are essentially a separate evolutionary experiment in developing intelligence. An octopus has around five hundred million neurons in its body—in the same range as a dog—but they are spread around, mostly in the arms, where they form clusters called ganglia, connected to one another. Even the brain in the center of the body is bizarre, because the creature’s esophagus, through which food is ingested, runs right through the middle of it. Some researchers hold that, with this distributed nervous system, cephalopods might host a “community of minds.” It isn’t clear, for instance, whether it’s the brain or the arms that “decide” what the arms do.

“An octopus mind is nothing like a primate mind, nor indeed like a dog’s, elephant’s or bat’s mind,” the evolutionary ethologist Phyllis Lee, of the University of Stirling, in Scotland, has written. According to the Australian philosopher of mind Peter Godfrey-Smith, cephalopods are “probably the closest we will come to meeting an intelligent alien.” Some researchers still hesitate to attribute “mind” to octopuses at all—and yet their behavior is often indicative of memory, problem-solving, cunning, personality, and even, some argue, sentience. They figure out how to unscrew jars, how to sabotage laboratory lights with jets of water (they may not like brightness), how to escape from their tanks just when their human wardens aren’t looking. They appear to gather items sometimes not for any obvious use but simply because they find them interesting. Some octopuses in captivity have been known to take what seems to be a dislike to individuals, squirting them with water at every opportunity. “They talk to you, reach out to you,” Michael Kuba, a marine biologist who has worked at the Okinawa Institute of Science and Technology, in Japan, told me. “But only to people they know.”

Octopuses seem to have designs of their own, which may subvert ours. Their agendas are often unfathomable. “When I first saw octopuses play,” Jennifer Mather, a professor at the University of Lethbridge, in Alberta, Canada, who specializes in cephalopod behavior, said, “I realized that we only saw it as play because it looked like our play.” She instead describes such behavior as motivated by exploration and led by the question “What can I do with this object?” (And yet an octopus might not even have an “I.”) Ultimately, Mather said, it’s hard to know for sure what the actions mean, because we don’t know where each one starts and finishes; we have no lexicon for translation.

Traditional efforts in animal cognition have attempted to build such a lexicon. Researchers have devised systems of symbols that animals can use by touching or pointing. In the nineteen-eighties, Reiss developed an underwater keyboard for dolphins; the animals quickly figured out, without instruction, how to request a body rub or a ball. Reiss also used mirrors to explore dolphin self-recognition: the animals not only appeared to recognize themselves (a sign, some researchers think, of a degree of consciousness) but also seemed to “play” with their reflections (by spinning, for example). Between 2016 and 2019, at the National Aquarium in Baltimore, Reiss and Magnasco collaborated on studies that used an eight-foot underwater touch screen fitted with dolphin-friendly interactive apps, including a version of Whac-A-Mole in which fish move across the display.

Using such systems, it’s possible to ask animals about their preferences among two or more alternatives—the same approach that child psychologists often take in trying to understand the reasoning of preverbal infants. Roger Payne, a whale-song expert—he co-discovered the songs of humpback whales, in the late nineteen-sixties—has explained how groups of alternatives might be used to pose ever-more-specific inquiries. “We might try asking dolphins direct questions,” he said, at a workshop of the Interspecies Internet project, at M.I.T., in 2019. “Do dolphins fear boats? Are sharks scary? Which of the following sharks is most scary? Is your mother afraid of sharks?” We might find out whether dolphins regularly lie to one another, as humans do, he said. “I would be surprised if they didn’t.”

The challenge, of course, is that it’s humans posing the questions and determining the choice of answers. But that’s changing. “The exciting thing about artificial intelligence and computer technology is that we are beginning to be able to decipher animal languages and animal cognition on terms that are meaningful to the animals, and not on our terms,” Slobodchikoff told me. Today’s machine-learning systems analyze data and look for correlations with startling efficiency; often, they find statistical connections that human analysts miss. They can, for example, deduce the “shape” of a language space, which depicts where words and concepts sit in relation to one another (the relationship between “king” and “man” in this space mirrors the relationship between “queen” and “woman”); these conceptual spaces turn out to be surprisingly similar for different languages—presumably because they are all representations of the same external world. Remarkably, the same sort of conceptual mapping will work not just for languages but for images. Researchers at Google have developed an A.I. system that can translate from an “image map” to a language map. After being trained to label a wide variety of images, it can be given an image that it has never seen before—of a dog, say—and make a good, sometimes even excellent guess at the word for what it has been shown. Given enough training data, these A.I. algorithms can extract semantic meaning from a range of non-linguistic inputs.
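The “shape” of such a space can be seen with off-the-shelf tools. Here is a small demonstration using pretrained GloVe word vectors loaded through the gensim library, one convenient choice among many and not the system the Google researchers describe. The king-and-queen offset emerges from nothing but statistics over text, and distances in the space track conceptual kinship.

```python
# Exploring a pretrained word-embedding space (GloVe vectors via gensim).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

# The offset from "man" to "king" roughly matches the offset from "woman" to "queen":
# king - man + woman is closest to queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Distances reflect conceptual relatedness.
print(vectors.similarity("dolphin", "whale"))    # relatively high
print(vectors.similarity("dolphin", "granite"))  # relatively low
```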

Britt Selvitelle, a computer scientist who worked on the team that created Twitter, is a founding member of the Earth Species Project, a San Francisco-based organization that is developing such A.I. approaches to animal communication. “We’re working on decoding the first nonhuman language,” he said, at the M.I.T. workshop—a goal that he thinks can be reached in five to ten years. In theory, a machine-learning system is particularly well suited to the problem of translating animalese. The loose correspondences between human and animal words and concepts may not matter to an A.I.; neither will the fact that animal ideas may be expressed not as vocalizations but as gestures, sequences of movements, or changes in skin texture. A neural network makes no assumptions about the nature of the input data; as long as there is some aspect of an animal’s behavioral repertoire that represents or expresses something that our languages can also express—a type of species, a warning, a spatial direction—then the algorithm has a chance of spotting it. “We’re really asking people to remove their human glasses, as much as possible,” Selvitelle said. One Earth Species Project collaboration, called Whale-X, aims to collect and analyze all communications among a pod of whales over an entire season.
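One building block that such approaches can use is the alignment of two vector spaces. The sketch below illustrates that step in toy form, rotating one simulated space so that it lines up with another, given a handful of anchor pairs; the data are made up, and this is not the Earth Species Project’s actual pipeline.

```python
# Toy alignment of two vector spaces with orthogonal Procrustes.
# The "animal" vectors are simulated: a hidden rotation of the "human" vectors
# plus a little noise. Nothing here is real data.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
dim, n_anchors = 10, 50

human = rng.normal(size=(n_anchors, dim))                # stand-in concept vectors
hidden_rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
animal = human @ hidden_rotation + 0.01 * rng.normal(size=(n_anchors, dim))

# Recover the mapping between the two spaces from the anchor pairs alone.
mapping, _ = orthogonal_procrustes(human, animal)

# A new concept vector can now be projected into the simulated "animal" space.
new_concept = rng.normal(size=(1, dim))
recovered = new_concept @ mapping
print(np.allclose(recovered, new_concept @ hidden_rotation, atol=0.1))  # True here
```

In practice, of course, the hard part is producing meaningful vector representations of animal signals in the first place, which is what efforts like Whale-X are meant to supply.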

It will be hard for the Whale-X team to tag and track the individual whales. But Magnasco told me that he is also skeptical of the approach on a conceptual level. Even if the data can be gathered and analyzed, he said, it’s not obvious that we’ll arrive at a word-for-word translation from whale terms to human ones, particularly without a deeper understanding of whale behavior. “If there is a vocabulary that has to do with their living environment, there is a massive amount of our vocabulary that just won’t make sense to them,” he said. In trying to import language-translation techniques to other species, the Earth Species Project might be “postulating an inherent similarity that we have no reason to assume.”

Many human languages seem to converge on a small list of omnipresent concepts formulated as individual words. Perhaps the most widely used lists of such words were derived in the mid-twentieth century by Morris Swadesh, an American linguist. The canonical Swadesh lists have from one hundred to two hundred and fifteen items. They contain personal pronouns, body parts, common animals such as “bird” and “dog,” verbs such as “eat,” “see,” and “hear,” and objects and substances such as “sun,” “water,” “stone,” and “smoke.” Magnasco points out that most of the items on the Swadesh list could have no “dolphish” equivalents, even in principle, because they have no relevance to the dolphin’s world. Among those excluded, he argues, would be “common words from our terrestrial environment, like ‘dog,’ ‘louse,’ ‘tree,’ ‘leaf,’ ‘root,’ ‘bark,’ ‘horn,’ and ‘mountain’ ”; words from terrestrial-animal anatomy—“nose,” “claw,” “foot,” “knee,” “hand,” “neck,” “feather,” “hair”; words related directly to gravity, such as “walk,” “lie,” “stand,” “path,” and “swim”; and the colors red and yellow, which dolphins can’t see. Finally, there are “words that do not exist or lose meaning in an aquatic environment”: “water,” “drink,” “rain,” “earth,” “fire,” “burn,” “ash,” “dry,” and “wet.”

If we could speak to them, dolphins wouldn’t understand the metaphor of a glass being half full or half empty. But how much does that matter? We can be discouraged by the fact that concepts that are universal among humans have no place in the conceptual landscape of the dolphin; alternatively, we can be encouraged by the possibility that there might be any overlap at all. It’s incredible to think that people and dolphins might communicate about anything, even seaweed; it’s also striking to imagine dolphins shaking their heads, or the equivalent, over our inability to grasp concepts that seem obvious to them. It may be that the most interesting, revealing part of dolphish is precisely the part that lies outside our own lexicon—which is to say, outside our own minds. If, in fact, we find ourselves unable to fully reconstruct another creature’s mental world, it may be enough just to acknowledge the reality of what we can’t articulate.

Even basic communication may be of value in other ways. Some of our mistreatment of other species is obviously callous and selfish, as in factory farming, but some of it arises from a communications breakdown. Dogs are often surrendered to shelters, Slobodchikoff said, because people have trouble “reading and understanding the signals with which they are trying to communicate with us.” And, by changing what we believe about the minds of animals, even attempts at communication may affect how we think of them as legal entities. More than a hundred experts have signed a declaration urging a ban on octopus farming on the grounds that these “sentient and sophisticated” animals should not be kept in “sterile” and “monotonous” environments. Octopuses have long been denied the consideration and welfare protections that we give to vertebrates, but many marine biologists now agree that they should be seen as possessing minds. Organizations such as the Great Ape Project and the Nonhuman Rights Project are seeking to extend minimal legal rights to certain animals, such as great apes, elephants, dolphins, and whales.

In “King Solomon’s Ring,” Konrad Lorenz suggested that Solomon could communicate with animals not because he possessed a magical object but because he had the gift of observation. Lorenz “made the space to see and hear what other animals were doing,” Reiss told me. New technology may or may not help us to communicate with animals. But even the attempt at translation suggests a deepening of respect for them—and a willingness to free ourselves from our human preconceptions and prejudices.