Ideas | Rina Diane Caballar

What complex technology can learn from simple ants

Illustration: Robyn Phelps for The Boston Globe

Researchers and scientists are constantly finding ways to train artificial intelligence to solve complex problems — from answering phones to driving cars — and they’re increasingly looking to nature for inspiration. Understanding animal behavior, it seems, can help intelligent systems discover optimal solutions.

That’s where ants come in.

Ants live a life without any central control. Their world is one where notions of leadership and order are nonexistent. Contrary to how ant colonies are depicted in fiction and film — having a hierarchical structure similar to human societies — ants aren’t given directions by other ants, nor do they have defined roles or dedicated tasks.

As scientists study how this behavior plays out in biological life, others have stolen some ant strategies and incorporated them into advanced technology — from algorithms that can streamline logistics networks to drone swarms and fleets of autonomous vehicles. All creatures find ways of tackling problems. Despite our human intelligence, we still have a lot to learn about those we live with.

One of the first researchers to study the mechanism behind insect interactions was French zoologist Pierre-Paul Grassé. In 1959, he introduced the concept of stigmergy, coined from the Greek words “stigma,” which means “mark,” and “ergon,” which means “work.” Grassé used this concept to explain the nest construction behavior of the Bellicositermes natalensis species of termites, where a building action performed by one termite serves as a stimulus for another, and the response of that termite becomes a stimulus for the next termite. This stimulus-response sequence allows termites to coordinate and organize tasks among themselves.

Ants exhibit similar stigmergic behavior. For instance, when searching for food, each ant marks its route from the nest to the food source by laying down pheromones. Then, ants use their antennae and keen sense of smell to follow the scent of pheromones left behind by other ants. According to Madeleine Beekman, head of the Social Insects Lab at the University of Sydney, this is a perfect example of positive feedback.

“Ants deposit pheromones as they walk between the nest and the food source. If the source is of high quality, the ants may deposit more pheromones. The higher the level of pheromones, the more likely other ants will follow the pheromone trail and add to it. Hence, the trail becomes stronger and attracts more ants,” explained Beekman.

Ants are also naturally inclined to follow the shortest path to a food source. “Because the pheromone is volatile, it disappears if it is not renewed quickly,” Beekman said. “So when not many ants follow the pheromone trail, the trail disappears. A colony will tend to establish a trail using the shortest path because the pheromone is less likely to evaporate when the path to a food source is short.”

This pheromone-laying and -following behavior was the inspiration for the ant colony optimization algorithm introduced by Marco Dorigo in his 1992 doctoral thesis. “There was something in the way ants solve the shortest path problem that I thought could be translated and used as inspiration to develop an algorithm,” said Dorigo, a co-director of IRIDIA, the artificial intelligence research laboratory of the Université Libre de Bruxelles in Brussels.

The algorithm uses artificial ants to solve optimization problems. Artificial ants reside in a virtual computing world, so the pheromones they deposit are actually numeric values. This sequence of artificial pheromone values is known as an artificial pheromone trail, and it is the only means of communication between artificial ants.

Artificial ants move through each step of the assigned problem, making probabilistic decisions based on pheromone concentrations, akin to the decision-making process of real ants. To illustrate how it works, Dorigo applied it to the classic traveling salesman problem: Given a list of cities and the distances between each pair of them, find the shortest possible route that visits every city exactly once and returns to the city of origin.
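
Here is a minimal sketch, in Python, of what that probabilistic decision might look like. It is not Dorigo's actual code: the function and variable names are illustrative assumptions, and weighting candidates by pheromone alone is a simplification (Dorigo's formulation also factors in the inverse of the distance to each candidate city). Each unvisited city is weighted by the amount of artificial pheromone on the edge leading to it, so strongly marked edges are picked more often while weaker ones still get explored occasionally.

```python
import random

def choose_next_city(current, unvisited, pheromone):
    """Pick the next city with probability proportional to pheromone.

    pheromone[i][j] is the artificial pheromone value on the edge between
    city i and city j. Heavily marked edges are chosen more often, but
    every candidate keeps a nonzero chance of being explored.
    """
    candidates = list(unvisited)
    weights = [pheromone[current][city] for city in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```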

To represent the problem, Dorigo used a graph composed of a set of edges and nodes, where the nodes are the cities and the edges are the connections between pairs of cities. “So finding a solution to the traveling salesman problem is equivalent to finding the shortest path on the graph connecting all the nodes and going back to the first node,” he said.

The artificial ants traverse the nodes of the graph, leaving pheromones on the edges. The amount of pheromones they leave is proportional to the quality of the solution. “Let’s say there are 100 nodes. One ant starts at node one, makes 100 jumps, then goes back to the initial node. Now the ant has produced one solution. It’s not very good, but it’s a solution,” said Dorigo.

Because there are no pheromones in the beginning, the artificial ants start moving at random. But during the next iteration, the ants don’t randomly choose where to go. “When they choose the next node to look for, they give preference to those edges that have more pheromones because these edges belong to a better solution. If you do this over and over, you end up having a good solution. It’s not an optimal one, but there are many tweaks to improve the algorithm,” Dorigo said.
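
Pulling those pieces together, the sketch below is an illustrative, stripped-down version of the idea rather than Dorigo's published algorithm; every name and parameter value in it is an assumption made for the example. Each artificial ant builds a complete tour using the pheromone-weighted choice shown earlier, ants then deposit pheromone on the edges of their tours in proportion to one over the tour's length, and, as the next paragraph describes, a fraction of every pheromone value evaporates each iteration.

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iters=200, evaporation=0.5, seed=0):
    """Illustrative ant colony optimization for the traveling salesman problem.

    dist is a symmetric matrix of distances between cities. Returns the
    best tour found (a list of city indices) and its length.
    """
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]  # start with a uniform trail
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            # Each ant builds a tour, choosing edges in proportion to pheromone.
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                current = tour[-1]
                candidates = list(unvisited)
                weights = [pheromone[current][c] for c in candidates]
                nxt = rng.choices(candidates, weights=weights, k=1)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length

        # Evaporation: a fixed fraction of every pheromone value disappears.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= 1.0 - evaporation

        # Deposit: shorter tours lay down more pheromone on their edges.
        for tour, length in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length

    return best_tour, best_len

# Example usage on a tiny, made-up distance matrix:
cities = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
tour, length = ant_colony_tsp(cities)
```

Run on a small instance like this, the colony gradually concentrates pheromone on the edges of short tours; the "tweaks" Dorigo mentions include weighting choices by distance and reserving extra pheromone deposits for the best tour found so far.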

Just as real pheromone trails fade when they aren't reinforced along the shortest path to a food source, artificial pheromone values evaporate over time. That's necessary to keep the algorithm from converging too quickly, enabling artificial ants to forget their search history and focus on more promising areas of the search space.

Putting theory into practice, the algorithm has been applied to real-world vehicle routing problems for delivery and distribution, as well as scheduling optimization problems. In the airline industry, Southwest Airlines modeled the foraging behavior of ants and applied it to their cargo routing and handling system, resulting in an 80 percent decrease in freight transfer rates and a 20 percent reduction in the workload of people who move cargo. The airline company has also used ant behavior as a basis for determining efficient ways of boarding a plane and assigning plane arrivals to airport gates.

For ant colonies, finding the shortest path is not only a means of optimization but also a method of self-organization and an example of swarm intelligence. “Ants are not very good at individually navigating their environment, and a lot of their navigation relies on following pheromone trails. Pheromones are a primary means of communication for ants, so a lot of their self-organized behaviors are mediated through pheromones,” said Simon Garnier, head of the New Jersey Institute of Technology’s Swarm Lab, an interdisciplinary research laboratory that studies the mechanisms underlying collective behaviors and swarm intelligence in natural and artificial systems.

Creating self-organized physical structures is another representation of swarm intelligence in ants. “One of the species we study in our lab are army ants found in Central and South America,” Garnier said. “These ants attach themselves to each other to form bridges and chains. They build structures to help them navigate through the environment and these structures are not formed based on pheromones, but on contact. It’s by touching each other that they communicate in this case.”

This collective behavior is instrumental in artificial intelligence and other emerging technologies. For instance, self-assembling robots can use the optimal construction of army ant living bridges to tailor their assembly to different shapes and structures. “Ants have bodies with a high degree of freedom. They can twist themselves around in different positions, making them more compliant. So the idea is to use the principle of self-assembly in these army ants to design swarms of robots that will essentially conform to any space in which we ask them to build something,” said Garnier.

Self-assembling robots could be used to temporarily reinforce portions of a collapsed building after an earthquake so rescuers can safely pull out people stuck under the building, for instance. Another application would be land or site surveillance using a swarm of drones. The idea, according to Garnier, is to allow this swarm of drones to accomplish a task autonomously — without central control or direction — similar to the self-organizing behavior of ants. “Instead of having one big plane, you have an army of robots — little drones to survey areas, regularly update information, and collaborate with each other to make sure they don’t miss any areas,” he explained.

Meanwhile, autonomous cars can apply navigation algorithms to take optimal routes, similar to what ants do when traffic begins to build up on the shortest path. “Ants never experience traffic jams because as soon as there are too many ants on one path, they will automatically redistribute themselves to other available paths,” said Garnier. Much like this natural redistribution, autonomous cars can take into account what’s happening around them and navigate accordingly. “Smart cars should be able to sense that there’s too much traffic on the main road so they can take peripheral roads. You can design them as a swarm to optimize their distribution in space and minimize the amount of pressure on the road network,” Garnier added.

Ants may be the smallest of beings, but the underlying logic they use for survival might be the future of technology.


Rina Diane Caballar is a freelance writer. You can follow her at @rinadianewrites.