AI "Art": The Case Against Generative AI
In the last five years, generative AI has become increasingly capable of producing images, text, and music that resemble the work of human artists.
Just last month, we saw tools like OpenAI’s image model generate artwork that mimics Studio Ghibli’s signature aesthetic with startling precision. Social media users eagerly re-imagined family photos and internet memes in this “Ghiblified” style – soft colors, painterly skies. One user even managed to recreate the entire trailer of The Lord of the Rings: The Fellowship of the Ring in this art style.
It’s visually impressive, even charming. But beneath the novelty and entertainment value lies a question that has to be answered: does this AI-generated media qualify as “art”?
Defining Art
Britannica describes the creation of a work of art as follows:
[Artistic creation] is the bringing about of a new combination of elements in the medium (tones in music, words in literature, paints on canvas, and so on). The elements existed beforehand but not in the same combination; creation is the re-formation of these pre-existing materials (Hospers, 2022).
In this sense, generative AI does not create: it recombines statistically patterned data from massive training sets, often drawn from existing human-made artworks, without understanding, intention, or motivation.
In The New Yorker, American sci-fi author Ted Chiang argues that AI lacks the interior life and agency needed for art. “To be moved by art is to feel something that was felt by another,” he writes.
A large language model (LLM) such as the one behind ChatGPT doesn’t feel. It doesn’t know beauty, grief, or joy. It models probability distributions; that is, it predicts the most likely next word, pixel, or sound, like a sophisticated autocomplete keyboard.
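The “sophisticated autocomplete” point can be made concrete with a toy bigram model. The corpus and numbers below are invented for illustration – real LLMs use neural networks over vastly larger vocabularies and contexts – but the underlying idea is the same: given what came before, estimate a probability for each possible next token.

```python
# Toy next-word predictor: a bigram probability table built from raw
# counts. This illustrates next-token prediction in miniature; real
# language models do this at enormous scale with learned parameters.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word_distribution(prev):
    """Probability of each candidate next word, given the previous one."""
    following = counts[prev]
    total = sum(following.values())
    return {word: n / total for word, n in following.items()}

# "the" is followed by "cat" twice and "mat" once in the corpus,
# so "cat" gets probability 2/3 and "mat" gets 1/3.
print(next_word_distribution("the"))
print(next_word_distribution("cat"))
```

There is no intention anywhere in this table – only frequencies. Scaling it up changes the quality of the mimicry, not its nature.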
Can a Tool Be the Artist?
In this discussion, it’s important to distinguish between the “tool” and the “artist.” AI models are tools, much like a paintbrush or a DSLR camera. In our opinion, what makes art is not the tool itself, but the vision and intention behind it. And there’s the rub. When a user feeds an incredibly detailed prompt into an AI model, who is responsible for the creative vision? The person who typed the prompt? The developers who trained the model? The thousands of artists whose work was scraped from the internet without their knowledge?
We would argue that it is the latter – that copyright should rest with those whose work served as the “raw materials” that were remixed, in a sense, to create a new artifact.
In addition, generative AI can replicate styles and generate near-infinite variations. But replication, no matter how skillful, isn’t the same as creation. As Chayka (2023) notes, even if the resulting image is technically “new,” its stylistic DNA is copied from the work of real artists. We’re not talking about mere stylistic choices; images generated by these models are statistical reproductions of original work, whose authors and creators were not credited or compensated.
Human-made art is inseparable from human context. An artist paints not only with pigment but with intention, memory, cultural perspective, and emotional experience. AI, by contrast, doesn’t know what it’s making. It has no goals, no cultural memory, no emotional stakes.
Legal and Ethical “Loopholes”
Much of the training data for image generators like Midjourney, DALL·E, and Stable Diffusion comes from copyrighted human work—often scraped without consent or compensation.
This is bad, right? Unfortunately, it’s not a black-and-white issue: as it stands, generative AI operates in a legal grey area. Current copyright law protects specific works, not artistic styles. In Studio Ghibli’s case, there may be little legal precedent that would clearly establish the studio as a victim at all. Under existing laws, Studio Ghibli can copyright its characters and storyboards, but not the general watercolor feel or palette of its films.
According to Appel, Neelbauer, and Schweidel (2023), this loophole enables AI companies to harvest artistic data at scale while avoiding direct infringement claims. But lawsuits are emerging. Artists, including photographer Zhang Jingna, have begun to push back legally against the use of their work in training datasets. (CNA Insider, a Singapore-based YouTube channel, recently made a short documentary on Zhang Jingna’s experience with copyright infringement.)
The mounting legal cases against OpenAI and others (Panettieri, 2025) highlight the growing pressure for copyright reform in the age of generative systems.
So, Is It Art?
Ultimately, the answer may depend on how we define art: is it about the final output, or the process behind it?
If we value intention, context, and meaning, then AI-generated media does not meet the bar. It can mimic, remix, and surprise, but it doesn’t express. It can be used to make creative tools more accessible or inspire new forms of media, but on its own, it lacks authorship.
That doesn’t mean AI has no place in the creative world. It’s already changing workflows in animation, advertising, music production, and gaming. But it may be more accurate to call these outputs “content” or “designs” rather than works of art in the traditional sense.
It’s Not Art.
Using generative AI isn’t making art. It’s remixing artworks at scale, often without context, permission, or understanding. The outputs may be beautiful, sometimes even resonant, but they lack the intentionality and humanity that give real art its staying power.
AI in its various applications should be developed to free us from the drudgery of everyday busywork. Technology should be designed to automate repetitive, manual tasks: washing dishes, filling spreadsheets, sorting email. But we’re doing the opposite: robots were supposed to do the laundry so we could write poems, not write poems so we could do the laundry.
As AI tools proliferate, we face a choice. We can allow algorithms to dictate our creative future, reducing centuries (and even millennia) of artistic expression to statistical pattern libraries – or we can draw clearer lines around what creativity means and who should benefit from it.
Until then, let the machines do the dishes. The art should be ours to make.
References
Appel, G., Neelbauer, J., & Schweidel, D. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review. Retrieved April 13, 2025, from https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
Chayka, K. (2023, February 10). Is A.I. Art Stealing from Artists? The New Yorker. Retrieved April 11, 2025, from https://www.newyorker.com/culture/infinite-scroll/is-ai-art-stealing-from-artists
Chiang, T. (2024, August 31). Why A.I. Isn’t Going to Make Art. The New Yorker. Retrieved April 11, 2025, from https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art
Hospers, J. (2022, October 4). Philosophy of art: Art as expression. Encyclopædia Britannica. Retrieved April 13, 2025, from https://www.britannica.com/topic/philosophy-of-art/Art-as-expression
Panettieri, J. (2025, April 10). Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic, Nvidia, Perplexity, Intel and More. Sustainable Tech Partner for Green IT Service Providers. Retrieved April 11, 2025, from https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/
DALL·E Prompt: “Create an image header for this article, illustrating the salient points of the article. [Copy of the entire article] Include the title of the article (AI "Art": The Case Against Generative AI); make it in a style reminiscent of Hieronymus Bosch” (Minor changes were made using Canva.)