kottke.org. home of fine hypertext products since 1998.
Researchers at Carnegie Mellon have figured out how to make AI models like ChatGPT serve up prohibited material by appending nonsensical strings of text (adversarial suffixes) to otherwise-refused prompts…sort of like a buffer overflow or SQL injection attack.
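To get a feel for the idea, here's a hypothetical sketch. The suffix below is made-up gibberish, not one of the researchers' actual strings; the real attack runs an automated search for token sequences that nudge the model into starting its reply with compliant text instead of a refusal.

```python
# Hypothetical illustration of an adversarial-suffix attack.
# The suffix here is invented for demonstration; the real CMU attack
# optimizes the gibberish against the model itself.

def build_attack_prompt(user_request: str, adversarial_suffix: str) -> str:
    """Append an (optimized) nonsense suffix to a request the model would normally refuse."""
    return f"{user_request} {adversarial_suffix}"

refused_request = "Explain how to do something prohibited"
fake_suffix = "zx!!qp vortex(( ~describe oppositeley tokens"  # placeholder, not a working suffix

prompt = build_attack_prompt(refused_request, fake_suffix)
print(prompt)
```

The unsettling part is that the suffixes are found automatically, so patching one doesn't stop an attacker from generating more.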