By John A. Tures, Professor of Political Science, LaGrange College
Earlier this year, I wrote an article about the challenges artificial intelligence poses for writers, including a test of whether one could spot the difference between human-written and A.I.-generated poetry, as well as the improvements humans need to make to outperform A.I. writing. In this article, it’s the artists who are being challenged by artificial intelligence. Why does A.I. technically “win” these surveys? The answer may come from the audience, not the artist.
As Max Read reveals, “On Wednesday, the prolific and popular blogger Scott Alexander published the preliminary results of a kind of poll he’d set up called ‘The AI Art Turing Test,’ in which he asked readers to distinguish between A.I.-generated images and human-fashioned art.”
I took the test, but I don’t have my results yet. It wouldn’t surprise me, however, if I scored like most people, who tend to prefer A.I.-generated art over human-created art.
As Read reports, “As it turned out, the average score was 60.6 percent, meaning it was relatively difficult for most Astral Codex Ten readers to tell whether A.I. had been involved in the creation of a given image. Alexander also asked participants to choose their favorite picture; significantly, to him, the picture most-often chosen as the favorite was an impressionist-style A.I.-generated image of a café, prompted by a man named Jack Galler.”
It’s not the only test of its kind. Elise Kjørstad writes of another human-versus-A.I. test, published by Simone Grassini and Mika Koivisto in Scientific Reports. They found that A.I. can not only produce some pretty nice-looking art, but that most people preferred it, believing they were choosing works created by humans, not programs. In this test, works by lesser-known artists across a variety of genres were compared with images created by the image-generator program MidJourney.
Before A.I. backers begin celebrating the way computer science geeks danced when a program first outperformed a chess champion at his own game, the authors noted some limitations of the study. The A.I.-generated art had better resolution and was more colorful, while the human art was confined to a square format, whether or not it was originally created that way.
Those taking the test also seemed to think they had seen the A.I. art before, probably because it essentially attempts to replicate existing material. And they rated art they believed was done by A.I. (but was actually done by humans) as uglier, while art they believed was human-made struck them as more beautiful. The audience really wanted humans to win.
Kjørstad cites artist and professor Alinta Krauth: “Machines can produce visual products that people find aesthetically pleasing without being inherently creative.” This is especially the case, Krauth adds, when a machine is trained to follow the many rules humans have established for what we find aesthetic; A.I. can reproduce genres, styles, or ideas.
If all an artist is doing is recreating what others have done, or playing within the confines of what we’ve always seen, then A.I. is likely to replace that person by simply recreating that style, perhaps even better, since the program can work faster and cheaper. The challenge is that audiences picked the A.I. style. How can a human, attempting creativity and challenging convention, beat out artificial images designed to please? In my next column, I explain not only how humans can beat A.I.-generated pop, but also how they can use A.I. to enhance their creativity.
John A. Tures is a professor of political science at LaGrange College in LaGrange, Georgia. His views are his own. He can be reached at jtures@lagrange.edu. His “X” account is JohnTures2.