Ideas | Flynn Coleman

Are we modeling AI on the wrong brain?


AT THE SEATTLE AQUARIUM in 2005, a giant Pacific octopus named Billye was given a herring-stuffed medication bottle. Billye and her octopus friends had previously been served their supper in jars with fastened lids. They had quickly learned to open them and routinely did so in under a minute. But biologists wanted to see what Billye would do when her meal was secured with a childproof cap, the kind that requires us to push down and turn simultaneously to open (it still sometimes takes me a few tries).

Billye took the bottle and quickly determined that this was no ordinary lid. In less than 55 minutes she figured it out and was soon enjoying her herring. With a little practice she got it down to five minutes.


Octopuses are cephalopods, mollusks distantly related to oysters. They have personalities, interact with their surroundings, and have expressions and memories. But it is their approach to solving problems that intrigues those looking for a model for machines.

Many believe that mimicking the human brain is the optimal way to create artificial intelligence. But scientists are struggling to do so, given the sheer intricacy of the human mind. Billye reminds us that there is a vast array of nonhuman life worthy of emulation.


Much of the excitement around state-of-the-art artificial intelligence research today is focused on deep learning, which performs machine learning through layers of artificial neural networks: webs of nodes modeled on the interconnections between neurons in the vertebrate cortex. While this science holds incredible promise, it also presents formidable challenges, given the enormous complexity of the human brain; some of these AI systems are arriving at conclusions that even their own designers cannot explain.
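For readers curious what "layers of nodes" means in practice, here is a minimal sketch in Python. The sizes and random numbers are illustrative stand-ins for real data and trained weights; it is a toy, not a description of any production system.

    import numpy as np

    # A toy "deep" network: each layer is a weighted sum of its inputs
    # passed through a nonlinearity, loosely echoing how a neuron sums
    # signals from its neighbors before firing. All sizes and values
    # here are illustrative assumptions.
    rng = np.random.default_rng(0)

    def layer(inputs, weights, biases):
        # One layer of artificial "neurons" with a ReLU activation.
        return np.maximum(0, inputs @ weights + biases)

    x = rng.random(4)                            # a 4-feature toy input
    w1, b1 = rng.random((4, 8)), rng.random(8)   # hidden layer: 8 nodes
    w2, b2 = rng.random((8, 2)), rng.random(2)   # output layer: 2 nodes

    hidden = layer(x, w1, b1)
    scores = hidden @ w2 + b2                    # raw output scores
    print(scores)

Real systems stack many such layers and tune millions of weights automatically, which is part of why their conclusions can be so hard to trace back to any single cause.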


Maybe this should be expected, since humans do not know exactly how we make decisions either. We do not fully understand how our own brains work, nor do we even have a universally accepted definition of human intelligence. We don't know exactly why we sleep or dream. We don't know how we process memories. We don't know whether we have free will, or what consciousness is (or who has it). And one of the main obstacles to creating nuanced intellectual performance in machines is our inability to code what we call "common sense."

Some scientists, however, oppose the obvious archetype, suggesting that trying to pattern synthetic intelligence predominantly on our own is unnecessarily anthropocentric. Our world holds a wondrous variety of sentient organisms on which machine intelligence could be modeled; why not think creatively beyond our species and try to engineer intelligent technology that reflects our world's prismatic diversity?

Roboticist Rodney Brooks thinks that nonhuman intelligence is what AI developers should be investigating. Brooks began studying insect intelligence in the 1980s and went on to build several businesses around the robots he developed (he co-invented the Roomba). When asked about his approach, Brooks said that it's "unfair to claim that an elephant has no intelligence worth studying just because it does not play chess."

The range of skill, ingenuity, and creativity of our biological brethren on this planet is astounding. But a fixation on humans as the preeminent metric of intelligence discounts other species' unique abilities. Perhaps the most humbling example is slime mold (Physarum polycephalum), a brainless, neuron-less organism (really a collective organism, or superorganism) that can make trade-offs, solve mazes, take risks, and remember where it has been. Some say slime mold could be the key to more efficient routing for self-driving cars, as sketched below.
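To see why a maze-solving blob interests route planners, consider that finding the shortest path through a network is the same problem a navigation system solves. The sketch below uses a simple breadth-first search as a crude stand-in for how the mold floods every corridor at once and then withdraws from the dead ends; the maze and the algorithm are illustrative assumptions of mine, not a model of Physarum itself.

    from collections import deque

    # A tiny maze: 0 = open corridor, 1 = wall. The "mold" spreads
    # outward from the start in all directions at once (breadth-first),
    # so the first route to reach the food is the shortest one.
    MAZE = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]

    def shortest_path(start, food):
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            (r, c), path = frontier.popleft()
            if (r, c) == food:
                return path                      # first arrival wins
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < 4 and 0 <= nc < 4
                        and MAZE[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    frontier.append(((nr, nc), path + [(nr, nc)]))

    print(shortest_path((0, 0), (3, 3)))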


Roboticists are intrigued by the swarm intelligence of termites, and in particular by stigmergy, a mechanism that lets termites and other creatures make collective decisions without directly communicating, simply by picking up on signs left behind in the environment. Computer scientist Radhika Nagpal has been studying the architectural feats of termites and the movement of schools of fish and flocks of birds. She thinks that we need to move away from a "human on top" mentality to design the next generation of robotics.
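Stigmergy is simple enough to demonstrate in a few lines of code. In the sketch below (a minimal toy of my own devising, with an arbitrary grid size, deposit amount, and evaporation rate), the agents never exchange a single message; each one only reads and writes marks in a shared environment, and yet the group converges on a common location.

    import random

    # Ten agents wander a ring of 20 cells. Each agent leaves a mark
    # where it steps and drifts toward stronger marks nearby; marks
    # slowly evaporate. No agent ever talks to another directly.
    WIDTH = 20
    trail = [0.0] * WIDTH
    agents = [random.randrange(WIDTH) for _ in range(10)]

    for _ in range(200):
        for i, pos in enumerate(agents):
            left = trail[(pos - 1) % WIDTH]
            right = trail[(pos + 1) % WIDTH]
            if random.random() < 0.2:            # occasional random step
                move = random.choice([-1, 1])
            else:                                # follow the stronger trail
                move = 1 if right >= left else -1
            agents[i] = (pos + move) % WIDTH
            trail[agents[i]] += 1.0              # leave a sign behind
        trail = [0.9 * t for t in trail]         # old signs fade

    # The agents cluster where the trail is strongest: a collective
    # decision made entirely through the environment.
    print(sorted(agents))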

Octopuses like Billye possess what is called distributed intelligence: two-thirds of their neurons reside in their eight arms, allowing the arms to carry out tasks independently and simultaneously. Researchers at Raytheon think that emulating octopuses' multifaceted intelligence is a better fit for the robots they are constructing for space exploration than a centralized, humanlike design. In his book "Other Minds," Peter Godfrey-Smith suggests that observing octopus intelligence is the closest we will ever get to studying alien intelligence. Taking cues from the vision of hawks, the dexterity of cats, or the sense of smell of bears can expand the technological horizons of what's possible.
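What might an octopus-style architecture look like in software? The sketch below is a deliberately simplified illustration of the principle, not anyone's actual robot design: a central "brain" sets one high-level goal, while eight "arm" threads each act on their own local readings, concurrently and without consulting one another.

    import threading
    import queue

    results = queue.Queue()

    def arm(arm_id, goal):
        # Each arm senses locally and decides locally; these numbers
        # are placeholders for real sensor readings and motor commands.
        local_reading = arm_id * 0.1
        effort = goal - local_reading
        results.put((arm_id, round(effort, 2)))

    goal = 1.0  # the central brain's only contribution: a shared goal
    arms = [threading.Thread(target=arm, args=(i, goal)) for i in range(8)]
    for t in arms:
        t.start()
    for t in arms:
        t.join()

    while not results.empty():
        print(results.get())  # eight independent, simultaneous outcomes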


Humans have long mimicked nature and nonhuman life for our inventions, from modeling X-ray machines on the reflective eyesight of lobsters, to creating an ultrasound cane for the visually impaired based on echolocation (the sensory mechanism of bats), to simulating the anatomy of sea lampreys to make tiny robots that could someday swim through our bodies detecting disease.

Much as humans had to let go of flying exactly the way birds fly before we could crack the code of flight, we must now look beyond the widely held belief that the human mind is the singular, unique model of intellect, and that replicating it is the only way artificial neural networks could truly be deemed intelligent. Being open to holding all beings in esteem, respecting their complexities and gifts, is foundational to building and valuing future intelligent machines.


Science continues to show us that we are not quite as sui generis as we may have thought; we are now discovering that certain attributes we assumed were reserved solely for humans, such as moral judgment, empathy, and emotions, are found across the spectrum of life on earth. In their book "Wild Justice," Jessica Pierce and Marc Bekoff argue that animals demonstrate nuanced emotions and moral behaviors, including fairness and empathy. The authors maintain that animals are social beings with a sense of social justice.


Simply put: Humans are not the lone species whose study can serve as a guide for future forms of AI. We are but one form of intelligent life. Other living creatures exhibit incredible intelligence in a mosaic of mesmerizing ways. Spiders spin strands of silk that carry them aloft, ballooning through the air. Chimpanzees mourn their dead; so do orcas and elephants, and elephants also have distinct personalities and can empathize and coordinate with one another. Crows create and use tools to gather food, and can solve puzzles. Birds can form long-term alliances and display "relationship intelligence." Bees can count and use dance to communicate complex information to the rest of their colonies. Pigeons have fantastic memories; they can recognize words, perceive space and time, and detect cancer in image scans.

Humans have much to learn from the acumen of bees and termites, elephants and parrots; but some of us remain uncomfortable with the idea of nonhumans having thoughts and emotions. Is it because acknowledging their agency seems to devalue our own? Our appreciation of animals has not kept pace with the scientific evidence, and nonhumans remain largely excluded from our notions of intelligence, justice, and rights.

As we strive to create machines that can think for themselves and possibly become self-aware, it's time to take a look in the mirror and ask ourselves not only what kind of AI we want to build, but also what kind of humans we want to be. Modeling AI on myriad forms of intelligence, drawing from the vast panoply of intelligent life, is not only a potential solution to the conundrum of how to construct a digital mind; it could also be a gateway to a more inclusive, peaceful existence, and to preserving the life that exists on our only home.


For those speculating about how we may treat synthetically intelligent beings in the future, looking at how we have bestowed rights on other nonhumans is instructive. Our historical treatment of animals, or in truth of any being we are able to convince ourselves is "other" or less than human, does not bode well for their treatment and acceptance. The word "robot" traces back to the Old Church Slavonic "rabota," meaning "forced labor": perhaps a prescient forecast that we may be prone to consider AI as nothing more than a tool to do our work and bidding. Creating a hierarchy of intelligence makes it easy to assign lesser dignities to other thinking things. Insisting on absolute human supremacy in all instances bodes ill for us in the Intelligent Machine Age.

I believe that our failure to model AI on the human mind may ultimately be our salvation. It will compel us to assign greater value to all types of intelligence and all living things, and to ask ourselves difficult questions about which beings are worthy of emulation, respect, and autonomy. This (possibly last) frontier of scientific invention may be our chance to embrace our human limitations, our weaknesses, the glitches and gaps in our systems, and to expand our worldview beyond ourselves. Being willing to admit that other species are brilliant could be the smartest thing we ever do.


Flynn Coleman is the author of “A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are,” available now from Counterpoint Press. Send comments to ideas@globe.com.