
After OpenAI’s Blowup, It Seems Pretty Clear That ‘AI Safety’ Isn’t a Real Thing. “The long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.”

Comments

MacRae Linton

I know I've been jumping in on these OpenAI posts, so I'll take a little break from commenting on whatever comes next. But I've read a couple of articles recently that get at the hollow core of the whole Effective Altruist/Longtermist/e/acc "philosophy" behind the ~~AI Safety~~ concerns that drove the OpenAI firing.

Our fav, Molly White, wrote here: https://newsletter.mollywhite.net/p/effective-obfuscation

And then a couple of blog posts from people I didn't know caught my eye:

https://crookedtimber.org/2023/11/21/what-openai-shares-with-scientology/
https://www.garbageday.email/p/you-are-tearing-me-apart-eacc

From the end of that last one:

The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately. Of course, as is the case with everything in Silicon Valley, all of this is predicated on the unwavering belief in its own importance. So it’s very possible that if we were to take the actually longtermist view of all of this, we’d actually end up looking back at this whole thing as a bunch of weird nerds fighting over Reddit threads.

That last sentence really speaks to me. The fear that AI may literally destroy humanity is built on a foundation of sand: blog posts on LessWrong, Astral Codex, etc. that are essentially the ramblings of a cult, pretty disconnected from reality, taking small suppositions and building wild conclusions out of them. They're posters' arguments; fundamentally unserious.

It's still really important to keep talking about the dangers of AI, though. The recent Hollywood strikes targeted harms that are here today. It's easy to see ways that AI could disrupt our economy (replacing jobs, taking humans out of the loop, eating away at privacy, and so on) without ever becoming an all-powerful god that saves or destroys us. Every time an article lends credence to these eschatological views, it distracts from much more plausible dangers, dangers I'm not convinced OpenAI ever really cared about when it talked about "AI Safety".
