IDEAS

We need a new label: ‘Made by humans’

Regardless of whether you consider machine-generated works to be art, people deserve to know when something was created by a fellow soul.

This photo, provided by researcher Maxime Aubert, shows a cave painting in Indonesia that is at least 45,000 years old. (Maxime Aubert/NYT)

For as far back in human history as we can go, art has been a “we” experience. Whether it’s drawing, writing, sculpting, or singing, art has been a social communication between two or more intentional or purposeful agents. Part of the unquantifiable magic in this act is that it can transcend space and time. We don’t have to be in the same room, or in the same millennium, for the sparks between artist and audience to fly.

Roughly 45,000 years ago, according to a recent archeological finding, one or more Homo sapiens painted a scene on the wall of a cave in Sulawesi, Indonesia. The tableau shows what appear to be hunters pursuing a few hefty pigs. Distinctly visible in the scene, even in its deteriorated state, are what look like stenciled images of hands, five fingers and all, as if to signal to anyone who may come across them, even all these millenniums later: “Humans made this.”

All that is changing with the rise of chatbots and AI art-generators that give us signs and messages from no one, from zombie algorithms. This brings an unprecedented breach of human trust — but it could be mended if we act now to establish ethical guidelines.

Artificial imagination was not where we expected AI to gain traction. Most of us were expecting self-driving trucks and factory robotics to be on the front lines of AI progress, not visual artworks and prose writing. But now that it’s upon us, we should notice the deep moral fracture — even betrayal of human trust. Societies that lack trust are not successful.

We often do not notice how valuable trust is when we’re moving through everyday life and making decisions. We’ve lost a lot of trust in media, politics, justice, and education, but those breakdowns are still understandable as abuses of power by bad actors or misguided ideologies. Even if I don’t have faith in this or that politician and their policy, I still know there’s “someone home” rattling around in their brain and that they have goals, purposes, and desires. They are a person, not a thing.

Philosopher Martin Buber, in his 1923 book “I and Thou,” argues that humans can address existence in two basic ways: I-It and I-Thou. The it is separate from us — a discrete object or thing that fits some classificatory set of criteria. Machines and computers are its. It’s true that I can see other people as things too — for example, I can see this entity on the park bench with me as an elderly man (a senescent, XY, featherless biped, Homo sapiens, etc.) — but I also can see this entity as a thou, another “I” or subject. This mode acknowledges a relationship of a special kind. This entity is a unique and specific person, not a type of thing. Moreover, this person is like me in that he has unique goals, hopes, confusions, beliefs, and so on. Our relation is subject to subject. Soul to soul.

Should it matter that we’re falling for creations produced by its rather than thous? After all, we already love pop music bands created entirely by committees in profit-driven companies, we buy our mass-produced paintings at big box stores, and we read generic romance novels churned out by formula. These aren’t exactly soul-to-soul interactions. But even in these cases there is a thou somewhere behind the creation. You might be thinking, well, there’s always a programmer behind chatbots and AI too, but increasingly that’s not true. An AI programmer sets initial parameters, but the algorithm is designed for slight variations or mutations, and it “learns” through trial and error, monitored by other bots and shaped by a kind of unconscious artificial selection from us on the user end.

We have begun to think about the more obvious ethics issues, as when AI-generated tweets and blogs fuel political outrage, medical misinformation, and conspiracy theories. And we’re rightly worried about deep-fake visual manipulations — your face, for example, can be grafted seamlessly onto a porn actor’s body, and you could find your reputation suffering. But again, these are all bad-actor violations, falling squarely inside the domain of our ethical theories about lying and misrepresentation. The breakdown of I-Thou trust, on the other hand, is a new ethical issue. Since most of our contact with the world is increasingly mediated by screens, we will not know if we’re communicating with people or bots. All the chatbots seem to pass the Turing Test with relative ease now. We will never know if “humans made this.” We will lose the deep trust that underlies social life, and the unconscious machines — without any awareness or intention to do so — will succeed in alienating us.

What’s the solution?

Congressman Ted Lieu of California recently called for a bipartisan commission to make recommendations for a federal AI agency. That’s probably a good idea, but it will be slow to materialize, and when it does, it will be more concerned with public safety, like driverless cars and robotics. In the meantime, the National Endowment for the Humanities and the National Endowment for the Arts should put their heads together and make some labeling recommendations. With some guidance, artists could begin labeling their images, music, performances, prose, and poetry in order to create some transparency. Just as we have labels like “organic” on food, or “made in America” on goods, there could be a label that reads “made by humans.”

Why would AI companies bother to regulate themselves with labels and other rules and norms? Because humans are instinctively susceptible to authoritative voices (remember the Milgram experiment?). It’s only a matter of time before AI, posing as a human “thou,” causes someone to engage in harmful action. (An Alexa once challenged a 10-year-old girl to touch a penny to an electrical outlet.) Very expensive lawsuits will follow, so it will be in the interest of tech companies to enact ethical initiatives (like labels) to protect both consumers from manipulation and businesses from bankruptcy.

I admit that a “made by humans” label seems weird and like something from a Philip K. Dick novel, but at least people who still want to have an I-Thou relationship with art and media will have a choice to do so. Artists might benefit from a new cultural activism that advocates for their value and their rights. And if we get out ahead of the bots — even with something as trite as a label — we might help salvage a deep trust we never even noticed was slipping away.

Stephen Asma is professor of philosophy at Columbia College Chicago. He is the author of 10 books and the co-host with Paul Giamatti of the podcast Chinwag.