
Superintelligent AI, humanity’s final invention

posted by Jason Kottke   Feb 04, 2015

When Tim Urban recently began researching artificial intelligence, what he discovered affected him so much that he wrote a two-part deep dive, The AI Revolution: The Road to Superintelligence and Our Immortality or Extinction.

An AI system at a certain level — let’s say human village idiot — is programmed with the goal of improving its own intelligence. Once it does, it’s smarter — maybe at this point it’s at Einstein’s level — so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.
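The loop described above, in which each improvement makes the next improvement easier, is at its core compounding growth. Here's a toy sketch of that dynamic; the starting level, growth rate, and number of rounds are arbitrary illustrative numbers, not claims from Urban's post:

```python
def intelligence_explosion(start=1.0, rate=0.5, rounds=20):
    """Return the intelligence level after each self-improvement round.

    Assumes (purely for illustration) that each round's gain is
    proportional to current intelligence, so gains compound.
    """
    levels = [start]
    for _ in range(rounds):
        # A smarter system makes a bigger leap each round: geometric growth.
        levels.append(levels[-1] * (1 + rate))
    return levels

levels = intelligence_explosion()
# Growth looks modest early on, then each leap dwarfs everything before it.
print(f"round 1: {levels[1]:.1f}, round 10: {levels[10]:.1f}, round 20: {levels[20]:.1f}")
```

The point of the sketch is the shape of the curve, not the numbers: under any fixed proportional growth rate, most of the total gain arrives in the last few rounds, which is why the jump from AGI to ASI could feel sudden.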

There is some debate about how soon AI will reach human-level general intelligence — in a survey of hundreds of AI researchers, the median estimate for when we'd be more likely than not to have reached AGI was 2040. That's only 25 years from now, which doesn't sound that dramatic until you consider that many thinkers in this field believe the progression from AGI to ASI will happen very quickly. Like — this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ — we don’t have a word for an IQ of 12,952.

While I was reading this, I kept thinking about two other posts Urban wrote: The Fermi Paradox (in that human-built AI could be humanity's own Great Filter) and From 1,000,000 to Graham's Number (how the speed and intelligence of computers could fold in on themselves to become unimaginably large).
