John H. Holland, 86; advanced study of complex adaptive systems
NEW YORK — John Henry Holland, a computer scientist whose seminal work on genetic algorithms, or computer codes that mimic sexually reproducing organisms, proved crucial in the study of complex adaptive systems, a field he helped create, died on Aug. 9 at his home in Ann Arbor, Mich. He was 86.
The cause was cancer, his daughter Gretchen Sleamon said.
While a graduate student at the University of Michigan in 1953, Dr. Holland, wandering through the school’s mathematics library, chanced upon “The Genetical Theory of Natural Selection,” a 1930 book by the English statistician and evolutionary biologist R.A. Fisher. In one arresting example, Fisher described the seemingly random fluttering of a colony of butterflies as a dynamic information network that could be mapped mathematically.
“The fact that you could take calculus and differential equations and all the other things I had learned in my math classes to start a revolution in genetics — that was a real eye-opener,” Dr. Holland told M. Mitchell Waldrop, the author of “Complexity: The Emerging Science at the Edge of Order and Chaos” (1993). “Once I saw that, I knew I could never let it go.”
He began thinking about using computers to analyze complex adaptive systems: continually evolving aggregates that emerge through the spontaneous interaction of myriad agents and develop multiple organizational levels. Examples include the human brain, ant colonies, economies, and tropical rain forests.
“I look at big, buzzing complex systems and ask what mechanisms and properties seem central,” he told The New York Times in 1995.
One of his most innovative ideas was to develop computer codes, which he called genetic algorithms, that mimicked evolutionary processes by mating and mutating possible solutions; they in turn generated new solutions leading to an optimal result — a computer version of the survival of the fittest.
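The loop that paragraph describes can be sketched in a few lines of Python. This is a minimal illustration of a genetic algorithm in the generic modern form, not Holland's own formulation; the population size, mutation rate, and the toy "one-max" fitness function (count the 1-bits in a bit string) are all invented here for demonstration.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.01, seed=0):
    """Evolve bit strings toward higher fitness by repeatedly
    selecting, mating (crossover), and mutating candidate solutions."""
    rng = random.Random(seed)
    # Start from a random population of candidate solutions.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional ("survival of the fittest") parent selection;
        # the tiny constant keeps weights valid if every fitness is zero.
        weights = [fitness(ind) + 1e-9 for ind in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Single-point crossover: each pair of parents "mates" to
        # produce two offspring that mix the parents' genes.
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, length)
            offspring += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Mutation: flip each bit with small probability.
        pop = [[bit ^ (rng.random() < mutation_rate) for bit in ind]
               for ind in offspring]
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits; the optimum is all ones.
best = genetic_algorithm(sum)
print(sum(best))
```

Even this toy version shows the idea that impressed Holland's contemporaries: no individual solution is designed, yet good ones emerge from repeated mating, mutation, and selection.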
“John is rather unique in that he took ideas from evolutionary biology in order to transform search and optimization in computer science, and then he took what he discovered in computer science and allowed us to rethink evolutionary dynamics,” David Krakauer, the president of the Santa Fe Institute, a think tank for the study of complex systems, wrote in a memorial article. “This kind of rigorous translation between two communities of thought is a characteristic of very deep minds.”
In 1975 Dr. Holland presented his ideas in “Adaptation in Natural and Artificial Systems,” one of the most frequently cited works in the fields of artificial intelligence and evolutionary computing, with implications for fields as varied as psychology, neuroscience, economics, and linguistics. It led to the widespread acceptance of genetic algorithms as an optimization and search method in computer science beginning in the 1980s.
“Although there had been previous research on genetic algorithms and related evolutionary algorithms, John’s book was a turning point in the field,” said John E. Laird, a professor of engineering at the University of Michigan who works on artificial intelligence. “It marked the beginning of a sustained body of research on genetic algorithms that has continued for over 40 years.”
John Henry Holland was born in Fort Wayne, Ind., and grew up in several small Ohio towns where his father, Gustave, set up soybean-processing mills. His mother, the former Mildred Gfroerer, often worked as her husband’s accountant.
After graduating from high school in Ohio, he attended the Massachusetts Institute of Technology, where he worked on the first real-time computer, Whirlwind, and earned a physics degree in 1950.
For the next year and a half, he worked at IBM’s main research laboratory in Poughkeepsie, N.Y., on the company’s first commercial computer, the 701, called the Defense Calculator. As a way of testing the computer, he and his team leader, Nathaniel Rochester, devised a program to simulate Hebb’s theory of cell assemblies, which seeks to explain the way neurons in the brain self-organize during the learning process. Such neural-network simulations later became standard in artificial intelligence research.
At the same time, one of his colleagues, the electrical engineer Arthur Samuel, taught the computer to play checkers. Samuel showed that a computer program could improve its own performance: with time, the 701 adapted its tactics to the other player’s moves, becoming a better player. The lesson was not lost on Dr. Holland.
After earning a master’s degree in mathematics at the University of Michigan, he did doctoral work in the school’s new department of communication sciences and in 1959 received its first PhD in what would later be called computer science. His dissertation, “Cycles in Logical Nets,” described the changes caused when feedback was introduced into logical networks.
He spent his entire career at the University of Michigan, where he was a professor of electrical engineering and computer science, and a professor of psychology. He helped create the university’s cognitive science program in the 1970s and, in 1999, the Center for the Study of Complex Systems.
In the mid-1980s, he became a core participant in the Santa Fe Institute. Created by senior scientists at the Los Alamos National Laboratory for the interdisciplinary study of complex systems, it quickly became a clearinghouse for the most advanced ideas in the field. He went on to serve as a trustee and as a member of its science advisory board.
In addition to his daughter Gretchen, Dr. Holland leaves his other daughters, Alison Butler and Manja Holland; his sister, Shirley Ringgenberg; and four grandchildren. His two marriages ended in divorce.
He explored the nature of complex systems in the books “Hidden Order: How Adaptation Builds Complexity” (1995) and “Emergence: From Chaos to Order” (1998). In his 80s, he published “Signals and Boundaries: Building Blocks for Complex Adaptive Systems” (2012) and “Complexity: A Very Short Introduction” (2014). In 1992 he was named a MacArthur Fellow.
Dr. Holland often said that he picked up some of his best ideas by talking to people outside his field — linguists, musicians, and poets.
“My own idiosyncratic view is that the reason many scientists burn out early is that they dig very deep in one area and then they’ve gone as far as it’s humanly possible at that time and then can’t easily cross over into other areas,” he said in a 2006 interview. “I think at the heart of most creative science are well-thought-out metaphors, and cross-disciplinary work is a rich source of metaphor.”