What is art? A means by which we take the materials of the world, break them down into packets of information (codes, colors, chords), re-sort them into untested arrangements, and run continuous iterations (drafts, sketches, demos) until some kind of fresh meaning emerges.
As mysterious as creativity may seem, every artist’s system is just that: a system. And like any system, it can be modeled by artificial intelligence.
For artist and viewer, to different degrees, art is an interpretive act. So it makes sense that much of the most ambitious research in artificial intelligence is concerned with teaching virtual viewers how to “see” art — how to notice patterns and shifts in styles and forms, how to connect one age of human thought to the next, how to experience our experience.
Researchers at Rutgers University’s Art and Artificial Intelligence Laboratory, for instance, are creating algorithms that aren’t just re-confirming well-established art-history relationships (say, the passing of brush strokes from Monet to Hassam); they’re looking further (and faster) into images and their echoes across visual history than any fleshy expert ever has or could.
Likewise, for artist and viewer, to different degrees, art is a creative act. And we usually assume creative acts are the province of people. But the Rutgers researchers are painting a different picture: Algorithms are proving themselves equally adept at making art as they are at viewing it.
Those researchers — Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone — have recently started working on “creative adversarial networks,” which adapt the usual “deep learning” M.O. of neural networks to — well, act more like artists.
Or, as the abstract of their recent study puts it: “We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution.” That is, look at what everyone else is doing, and do something different.
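For the curious, that "deviate from established styles" idea can be sketched in a few lines of Python. The toy function below (the name, probabilities, and style labels are illustrative, not from the study) penalizes an image the more confidently a style classifier can pin it to a single known style — a rough stand-in for the ambiguity term the researchers add to the usual adversarial objective:

```python
import math

def style_ambiguity_loss(style_probs):
    """Cross-entropy between a style classifier's output and the uniform
    distribution over style classes. A generated image whose style the
    classifier can't pin down scores a lower loss than one confidently
    assigned to a single known style. Toy sketch, not the paper's code."""
    k = len(style_probs)
    eps = 1e-12  # guard against log(0)
    return -sum((1.0 / k) * math.log(p + eps) for p in style_probs)

# "This is clearly Impressionism" is penalized more than
# "no idea which style this is":
peaked = [0.94, 0.02, 0.02, 0.02]
ambiguous = [0.25, 0.25, 0.25, 0.25]
assert style_ambiguity_loss(ambiguous) < style_ambiguity_loss(peaked)
```

Push the generator to keep that loss low while a second, standard adversarial term keeps its output looking like art at all, and you get something like the tension the researchers describe: novel in style, familiar in kind.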
Elgammal foregrounds the questions driving this research in a blog post: “If we teach the machine about art and art styles and force it to generate novel images that do not follow established styles, what would it generate? Would it generate something that is aesthetically appealing to humans? Would that be considered ‘art’?”
If you’re one of those people whose internal answering service just shouted “NO!” to that last question, congratulations, stick in the mud. Elgammal and his colleagues trained their model on “80K digitized images of Western paintings ranging from the 15th century to the end of the 20th century,” and tweaked it to “explore the creative space to generate novel images that differ from what it has seen in art history.”
The resulting images were then mixed in with a sampling of Abstract Expressionist paintings made between 1945 and 2007, as well as images of paintings shown at Art Basel 2016, and presented to a pool of 18 viewers, who were asked to evaluate and judge the works. Humans — so in tune with the fundamental spiritual energies through which art connects us — mistook the generated images for “actual” artworks 75 percent of the time. Yes, I just put scare quotes around that.
“We hypothesized that human subjects would rate art by real artists higher on these scales than those generated by the proposed system,” writes Elgammal in his blog post. “To our surprise the results showed that our hypothesis is not true! Human subjects rated the images generated by the proposed system higher than those created by real artists, whether in the Abstract Expressionism set or in the Art Basel set!”
So if beauty really is in the eye of the beholder, art as we know it may be in a spot of trouble.
As random and unknowable as the path of creation through an artist’s mind may be, we ostensibly derive its meaning (or our faith in meaning), our investment in its meaning (faked or felt), and its value (monetary and not) from its origins in an artist’s squishy mind and ineffable spirit. Or, at least, that’s the idea.
So why are so many of these images (the full range of which you can see in the study) so alluring? What does the algorithm know about me that I don’t? Why do I like them? And why do I feel like this is just the beginning?
Perhaps because I’m once again listening to “Daddy’s Car,” a song created by AI developed at Sony’s CSL Research Laboratory and described by one writer at The Verge as “a dire warning for humanity.”
After feeding more than 13,000 musical lead sheets into a database, researchers used an AI system called FlowComposer to generate entirely new music based on various understood “styles,” from “Miles Davis” to (in the case of “Daddy’s Car”) “The Beatles.”
It’s not that it’s good, and it’s not that it’s bad. In fact, the most noticeable thing about “Daddy’s Car” is how easy it would be not to notice. It’s exactly as listenable as any number of pop songs you drift through and absorb into your memory every day on your way in and out of places (hm – funny, that). If my charitable definition of art includes Ariana Grande, is an algorithm really that far off?
And if AI really does put the art in artificial; if, after some time, we are relieved of our creative faculties by a machine with a finer hand and a sharper eye, what will we tell ourselves art means then?

Michael Andor Brodeur can be reached at firstname.lastname@example.org. Follow him on Twitter @MBrodeur.