The field of artificial intelligence faces a pivotal challenge: an “imitation barrier.” Whatever you hear about rapid progress in the technology, this barrier is preventing AI from realizing its full potential.
As of today, AI, even “generative AI,” uses a binary language of ones and zeros to produce seemingly creative outcomes. But these outputs are not truly original; they mimic styles and data provided by humans. This is the imitation barrier.
For example, large language models such as ChatGPT and GPT-4 transform previously learned words and knowledge into sentences that emulate text written by humans. Despite the facade of intelligence, there is a hitch. The resulting text is often biased, blatantly unethical, or simply incorrect because GPT-4 is akin to a mechanical parrot — able to mimic the right sounds but lacking the understanding and creative spark that fuels genuine thought, problem solving, and communication.
The imitation barrier becomes a significant hindrance in circumstances that demand innovative and clever solutions. Whether it is in business problem solving, managing battlefield conditions, or navigating complicated situations on roadways, current AI systems falter because not all scenarios can be preloaded into a database on which to draw. The imitation barrier can put people’s lives at risk.
But there’s a glimmer of hope — the imitation barrier can be broken.
Today’s generative AI systems are based on neural networks, a loose approximation of the human brain that philosopher Alexander Bain first proposed in 1873. But colleagues at Carnegie Mellon University’s School of Computer Science and I have proposed a new model for AI, the insights-knowledge object model. We think it offers a fresh perspective on what human cognition entails and how it can be represented in computers.
Part of our idea is that human thought and creativity are not merely brain functions but involve the entire body — its organs, hormones, systems, and, yes, neural connections. Researchers at The Silicon Valley Laboratory, a consulting firm I lead, are exploring this new perspective. One of their primary pursuits is the search for a creativity hormone — call it “hormone C” — that we hypothesize is the trigger for creative impulses in humans. Hormone C could be a single molecule, or it could turn out to be a “neural soup” comprising a mixture of hormones, such as dopamine, epinephrine, and serotonin, as well as certain vitamins and blood oxygen levels. Either way, we believe this biochemical impulse spurs the human body into creative action once an external or internal stimulus initiates the process. Whether the outcome is an artistic painting or an innovative solution to a problem, the body returns to a state of rest following the act of creation, awaiting the next stimulus.
This cyclical process is the focus of our study. When the physiology of creativity is fully understood, new mathematical models will be developed, opening the door for programming genuine computational creativity to break the imitation barrier.
Recall that when ChatGPT is asked a question, it responds with a reconfiguration or even a regurgitation of text already in its database. The result is a false creation, as no new insights are developed. With an approach inspired by human physiology, the next generation of AI has the potential to produce something much better. For example, one goal could be to develop a computer that can determine an ethical, humane, and creative solution to a seemingly no-win situation — like the “trolley problem” in dangerous autonomous driving scenarios.
If we put aside the 19th-century concept of a neural network, then we can be unbounded in the 21st century in our pursuit of machine inspiration. The prospect of developing true artificial creative intelligence promises to be worth the journey.
Rowland Chen is CEO of The Silicon Valley Laboratory, a consulting firm based in California.