
kottke.org. home of fine hypertext products since 1998.

๐Ÿ”  ๐Ÿ’€  ๐Ÿ“ธ  ๐Ÿ˜ญ  ๐Ÿ•ณ๏ธ  ๐Ÿค   ๐ŸŽฌ  ๐Ÿฅ”

Can AI Make Art?

Ted Chiang with a thought-provoking essay on Why A.I. Isn’t Going to Make Art:

It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.

In the past few years, Chiang has written often about the limitations of LLMs; you can read more about his AI views on kottke.org.

Discussion (4 comments)

Kapuku

Nearly every argument made in this piece could have been made (and probably was made) when the camera was invented. For example, the supposedly disparaging comment: "A product that generates images with little effort."

LLMs and AI image generators are just tools. Nothing more, nothing less. They are like cameras or paintbrushes. They must be wielded to produce art. Some cameras, say, wildlife cameras that flash whenever something steps on a pressure pad, aren't wielded and don't produce art. LLMs can do that whenever I ask ChatGPT to summarize a long document. That's not art. But when a person conceives a writing prompt and works with the tool over many iterations to craft a story... how is that different from an artist using any other tool?

These reflexive "AI-isn't-art" points of view are based on a wildly outdated view of how art is made. We have long celebrated video artists working in digital media. We cry when playing video games (at least I have). We, perhaps with some debate, accept art made by worker bees overseen by the actual "artist". Why is a person working with prompts any less of an artist than any of these artists? Simply put, they aren't.

AI left to its own devices won't spontaneously produce art, just as a camera left on a shelf won't magically take a picture, but it is a tool that can produce beautiful things and should be embraced and celebrated. We are living in an age when a new art form has been forged and that is a tremendously exciting thing.

Jason Kottke (MOD)

The way I read it, Chiang is arguing that LLMs are not creators by themselves, just as a camera or Photoshop is not a creator; they're tools, as you point out. What the LLM companies & enthusiasts would have you believe is that these AIs are creative entities in their own right, capable of producing creative work with very little effort. Chiang points out that these tools can be used to create art (just like literally anything can) but the human effort involved (the time, the choices, the curiosity, the emotion, the desire to communicate) is what makes it art (rather than the output of spicy autocomplete).

Kapuku

I don't know. It seems a very fine, and arbitrary, line to draw when he says that AI can't produce art because it limits you to a prompt of hundreds of words, whereas an *Artist* makes many more individual choices in the creation of *Art*. He even acknowledges that you can produce art iteratively using AI by running hundreds of prompts and making thousands of images to get one final piece of art. So where is the line between an AI-created image and art? How is this different from a photographer taking hundreds of images to get the final piece? Why do we feel the need to define this at all?

I agree that artists should be compensated for having their work used to train AIs. And the output of AIs is derivative, to an extent, of the artwork, photography, and internet junk used to train them, but I still embrace what they produce as exciting and, in many cases, beautiful and engaging art.


Jeffrey Shrader

The argument that was most compelling to me was about non-creative writing:

Not all writing needs to be creative, or heartfelt, or even particularly good; sometimes it simply needs to exist. Such writing might support other goals, such as attracting views for advertising or satisfying bureaucratic requirements. When people are required to produce such text, we can hardly blame them for using whatever tools are available to accelerate the process. But is the world better off with more documents that have had minimal effort expended on them? It would be unrealistic to claim that if we refuse to use large language models, then the requirements to create low-quality text will disappear. However, I think it is inevitable that the more we use large language models to fulfill those requirements, the greater those requirements will eventually become.

There is already lots of bullshit writing to accompany (Graeberian) bullshit jobs. Creating more of it, and making it even shittier, seems like a bad idea.
