Superintelligence
Nick Bostrom has been thinking deeply about the philosophical implications of machine intelligence. You might recognize his name from previous kottke.org posts about how we underestimate the risk of human extinction and the possibility that we're living in a computer simulation, that sort of cheery stuff. He's collected some of his thoughts in a book called Superintelligence: Paths, Dangers, Strategies. Here's how Wikipedia summarizes it:
The book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists. As the fate of the gorillas now depends more on humans than on the actions of the gorillas themselves, so would the fate of humanity depend on the actions of the machine superintelligence. Absent careful pre-planning, the most likely outcome would be catastrophe.
Technological smartypants Elon Musk gave Bostrom’s book an alarming shout-out on Twitter the other day. A succinct summary of Bostrom’s argument from Musk:
Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable
Eep. I’m still hoping for a Her-style outcome for superintelligence…the machines just get bored with people and leave.