kottke.org posts about artificial intelligence

Civilization is itself a thinking machine (Jul 21 2016)

In response to the question "What Do You Think About Machines That Think?", Brian Eno said that artificial intelligence has been with us for millennia and that understanding it is more a matter of managing our ignorance of how it works.

My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I'm permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they're still working at it now. All that human intelligence remains alive in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules-of-thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilisation.

Global Civilisation is something we humans created, though none of us really know how. It's out of the individual control of any of us -- a seething synergy of embodied intelligence that we're all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren't paralysed or terrorised by that fact -- we still live in it and make use of it. We feed it problems -- such as "I want some porridge" and it miraculously offers us solutions that we don't really understand. What does that remind you of?

Interesting perspective. There's lots more on this question in the book What to Think About Machines That Think, which includes thoughts from Virginia Heffernan, Freeman Dyson, Alison Gopnik, Kevin Kelly, and dozens of others.

Harry Potter and the Artificial Intelligence (Jul 11 2016)

Max Deutsch trained a neural network using the first four Harry Potter books and then asked it to write its own chapter.

"The Malfoys!" said Hermione.

Harry was watching him. He looked like Madame Maxime. When she strode up the wrong staircase to visit himself.

"I'm afraid I've definitely been suspended from power, no chance - indeed?" said Snape. He put his head back behind them and read groups as they crossed a corner and fluttered down onto their ink lamp, and picked up his spoon. The doorbell rang. It was a lot cleaner down in London.

Hermione yelled. The party must be thrown by Krum, of course.

Harry collected fingers once more, with Malfoy. "Why, didn't she never tell me. ..." She vanished. And then, Ron, Harry noticed, was nearly right.

"Now, be off," said Sirius, "I can't trace a new voice."

Rowling, your job is safe for now. Deutsch did the same thing with the Hamilton soundtrack...the result is not particularly good but that last line!
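As far as I know, Deutsch hasn't published exactly how his model was built, but the usual recipe for this kind of stunt is a character-level recurrent network: feed it the books one character at a time, train it to predict the next character, then sample from it. Here's a minimal sketch in PyTorch (the corpus filename is made up):

```python
# A minimal character-level language model sketch -- not Deutsch's
# actual code, just the general recipe these projects follow.
import torch
import torch.nn as nn

text = open("harry_potter_books_1-4.txt").read()   # hypothetical corpus file
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
data = torch.tensor([stoi[c] for c in text])

# Train on random 100-character windows: predict each next character.
for step in range(10000):
    i = torch.randint(0, len(data) - 101, (1,)).item()
    x, y = data[i:i+100].unsqueeze(0), data[i+1:i+101].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample a "new chapter" one character at a time.
x, state, out = data[:1].unsqueeze(0), None, []
for _ in range(2000):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(itos[x.item()])
print("".join(out))
```

The sampled text ends up grammatical-ish but unmoored, which is exactly the flavor of the chapter above.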

The world's first chatbot lawyer (Jun 29 2016)

AI chatbot lawyer sounds like an SNL skit, but the DoNotPay chatbot has successfully contested 160,000 parking tickets in London and New York.

Dubbed as "the world's first robot lawyer" by its 19-year-old creator, London-born second-year Stanford University student Joshua Browder, DoNotPay helps users contest parking tickets in an easy to use chat-like interface.

The program first works out whether an appeal is possible through a series of simple questions, such as were there clearly visible parking signs, and then guides users through the appeals process.

The results speak for themselves. In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.

Having spent a shitload of money on lawyering over the past few years, I can say there is definitely an opportunity for some automation there.

2001: A Picasso Odyssey (Jun 09 2016)

Bhautik Joshi took 2001: A Space Odyssey and ran it through a "deep neural networks based style transfer" with the paintings of Pablo Picasso.
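If you want to play along at home: this is presumably a descendant of the Gatys et al. style transfer technique, applied frame by frame. I don't know Joshi's exact pipeline, but a bare-bones single-image version looks something like this sketch (filenames are placeholders, and a real video pipeline has to worry about frame-to-frame consistency):

```python
# A bare-bones sketch of Gatys-style neural style transfer in PyTorch.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

load = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
def image(path):
    return load(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = image("2001_frame.png")        # one frame of the film
style = image("picasso_painting.png")    # the style source

def features(x, layers={1, 6, 11, 20, 29}):   # ReLU layers of VGG19
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):  # style is compared via Gram matrices of feature maps
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)
style_grams = [gram(f) for f in features(style)]
content_feats = features(content)

for step in range(300):
    feats = features(target)
    c_loss = F.mse_loss(feats[3], content_feats[3])   # match content deep in the net
    s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    loss = c_loss + 1e6 * s_loss
    opt.zero_grad(); loss.backward(); opt.step()
```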

See also Blade Runner in the style of van Gogh's Starry Night and Alice in a Neural Networks Wonderland.

Our creative, beautiful, unpredictable machines (Mar 11 2016)

I have been following with fascination the match between Google's Go-playing AI AlphaGo and top-tier player Lee Sedol and with even more fascination the human reaction to AlphaGo's success. Many humans seem unnerved not only by AlphaGo's early lead in the best-of-five match but especially by how the machine is playing in those games.

Then, with its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. "It's a creative move," Redmond said of AlphaGo's sudden change in tack. "It's something that I don't think I've seen in a top player's game."

When Lee Sedol returned to the match table, he took an unusually long time to respond, his game clock running down to an hour and 19 minutes, a full twenty minutes less than the time left on AlphaGo's clock. "He's having trouble dealing with a move he has never seen before," Redmond said. But he also suspected that the Korean grandmaster was feeling a certain "pleasure" after the machine's big move. "It's something new and unique he has to think about," Redmond explained. "This is a reason people become pros."

"A creative move." Let's think about that...a machine that is thinking creatively. Whaaaaaa... In fact, AlphaGo's first strong human opponent, Fan Hui, has credited the machine for making him a better player, a more beautiful player:

As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn't see before. And that makes him happy. "So beautiful," he says. "So beautiful."

Creative. Beautiful. Machine? What is going on here? Not even the creators of the machine know:

"Although we have programmed this machine to play, we have no idea what moves it will come up with," Graepel said. "Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands -- and much better than we, as Go players, could come up with."

Generally speaking,[1] until recently machines were predictable and more or less easily understood. That's central to the definition of a machine, you might say. You build them to do X, Y, & Z and that's what they do. A car built to do 0-60 in 4.2 seconds isn't suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it's even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things -- paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace -- and do those things creatively and better than people.

Unpredictable machines. Machines that act more like the weather than Newtonian gravity. That's going to take some getting used to. For one thing, we might have to stop shoving them around with hockey sticks. (thx, twitter folks)

Update: AlphaGo beat Lee in the third game of the match, in perhaps the most dominant fashion yet. The human disquiet persists...this time, it's David Ormerod:

Move after move was exchanged and it became apparent that Lee wasn't gaining enough profit from his attack.

By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White's powerful counter-attack.

I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.

Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.

One of the game's greatest virtuosos of the middle game had just been upstaged in black and white clarity.

AlphaGo's strength was simply remarkable and it was hard not to feel Lee's pain.

[1] Let's get the caveats out of the way here. Machines and their outputs aren't completely deterministic. Also, with AlphaGo, we are talking about a machine with a very limited capacity. It just plays one game. It can't make a better omelette than Jacques Pepin or flow like Nicki. But beating a top human player while showing creativity in a game like Go, which was considered to be uncrackable not that long ago, seems rather remarkable.

Lo and Behold, a film about "the connected world" by Werner Herzog (Jan 20 2016)

Well, holy shit...Werner Herzog has made a film called Lo and Behold about the online world and artificial intelligence.

Lo and Behold traces what Herzog describes as "one of the biggest revolutions we as humans are experiencing," from its most elevating accomplishments to its darkest corners. Featuring original interviews with cyberspace pioneers and prophets such as Elon Musk, Bob Kahn, and world-famous hacker Kevin Mitnick, the film travels through a series of interconnected episodes that reveal the ways in which the online world has transformed how virtually everything in the real world works, from business to education, space travel to healthcare, and the very heart of how we conduct our personal relationships.

From the trailer, it looks amazing. Gotta see this asap.

Update: Here's the official trailer for the film:

Have the monks stopped meditating? They all seem to be tweeting.

It's coming out in theaters and iTunes/Amazon on August 19th. Can't wait!

The One with Chicken Bob (Jan 19 2016)

Twitter user Andy Pandy fed the scripts for all the episodes of Friends into a neural network and had it generate new scenes. Here's what it came up with.

Neuro Friends

(via @buzz)

The ethical dilemma of self-driving cars (Dec 10 2015)

When people drive cars, collisions happen too quickly for any real deliberation; the outcome is essentially accidental. When self-driving cars eliminate driver error in these cases, decisions on how to crash become premeditated. The car can think quickly, "Shall I crash to the right? To the left? Straight ahead?" and do a cost/benefit analysis for each option before acting. This is the trolley problem.

How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty?
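To make the premeditation point concrete: strip away the machine learning and the crash response is just a cost function being minimized, written in advance by a person. A toy sketch, with every number and option invented for illustration:

```python
# A toy illustration of the premeditation problem: the crash response is
# an argmin over expected harm, computed ahead of time by whoever wrote
# the cost function. All values here are invented.
OPTIONS = {
    "swerve_left":    {"pedestrians_hit": 1, "occupant_risk": 0.1},
    "swerve_right":   {"pedestrians_hit": 5, "occupant_risk": 0.1},
    "brake_straight": {"pedestrians_hit": 0, "occupant_risk": 0.9},
}

def expected_harm(outcome, occupant_weight=1.0):
    # occupant_weight is the morally loaded dial: >1 protects the owner,
    # <1 sacrifices the owner to save others.
    return outcome["pedestrians_hit"] + occupant_weight * outcome["occupant_risk"]

choice = min(OPTIONS, key=lambda o: expected_harm(OPTIONS[o]))
print(choice)  # -> "brake_straight" with occupant_weight=1.0
```

Every question in the paragraph above is really a question about who gets to set `occupant_weight`.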

The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that prioritizes saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.[1]

Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it's not actually your choice, it's the potential Trump voter down the street who buys into Car Company X's advertising urging him to "protect himself" because he feels marginalized in a society that increasingly values diversity over "traditional American values". I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it'll all be ok?

[1] The bit about Uber is a joke but just barely. You could easily imagine a scenario in which a Samsung car might choose to hit an Apple car over another Samsung car in an accident, all other things being equal.

Taking a neural net out for a walk (Nov 23 2015)

Kyle McDonald hooked a neural network program up to a webcam and had it try to analyze what it was seeing in realtime as he walked around Amsterdam. See also a neural network tries to identify objects in the Star Trek: TNG intro. (via @mbostock)
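I don't know exactly which software McDonald wired together, but the same trick is easy to approximate: grab webcam frames in a loop and run each one through a pretrained ImageNet classifier. A rough sketch (the labels file is a placeholder for the 1,000 ImageNet class names):

```python
# Not McDonald's actual setup -- a rough equivalent: classify webcam
# frames in a loop with a pretrained ImageNet model and overlay guesses.
import cv2
import torch
from torchvision import models, transforms

model = models.mobilenet_v2(pretrained=True).eval()
labels = open("imagenet_classes.txt").read().splitlines()  # hypothetical file
prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(prep(rgb).unsqueeze(0))
    top = logits.softmax(1).topk(3)
    guesses = [labels[int(i)] for i in top.indices[0]]
    cv2.putText(frame, ", ".join(guesses), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("what the net sees", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```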

Alice in a Neural Networks Wonderland (Sep 17 2015)

Gene Kogan used some neural network software written by Justin Johnson to transfer the style of paintings by 17 artists to a scene from Disney's 1951 animated version of Alice in Wonderland. The artists include Sol LeWitt, Picasso, Munch, Georgia O'Keeffe, and van Gogh.

Neural Wonderland

The effect works amazingly well, like if you took Alice in Wonderland and a MoMA catalog and put them in a blender. (via prosthetic knowledge)

A neural network tries to identify objects in Star Trek: TNG intro (Aug 27 2015)

Ville-Matias Heikkilä pointed a neural network at the opening title sequence for Star Trek: The Next Generation to see how many objects it could identify.

But the system hadn't seen much space imagery before,[1] so it didn't do such a great job. For the red ringed planet, it guessed "HAIR SLIDE, CHOCOLATE SAUCE, WAFFLE IRON" and the Enterprise was initially "COMBINATION LOCK, ODOMETER, MAGNETIC COMPASS" before it finally made a halfway decent guess with "SUBMARINE, AIRCRAFT CARRIER, OCEAN LINER". (via prosthetic knowledge)

[1] If you're curious, here is some information on the training set used.

MarI/O (Jun 15 2015)

SethBling wrote a program made of neural networks and genetic algorithms called MarI/O that taught itself how to play Super Mario World. This six-minute video is a pretty easy-to-understand explanation of the concepts involved.
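For the curious: MarI/O is built on NEAT (NeuroEvolution of Augmenting Topologies), which evolves network topologies as well as weights. A drastically simplified sketch of the evolutionary loop is below, with a fixed topology and a dummy stand-in for the real fitness function (which for Mario is roughly "how far right did you get before dying or stalling"):

```python
# Much simplified from MarI/O: a genetic algorithm over the weights of a
# fixed-topology network. Real NEAT also mutates the topology itself.
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS, N_ACTIONS, POP = 16, 6, 50   # tiles seen, buttons, population size

def act(weights, inputs):
    # How a genome would drive the game: inputs in, button presses out.
    return (weights @ inputs) > 0

def evaluate(weights):
    # Stand-in fitness. In MarI/O this means actually playing the level
    # and scoring distance traveled; here, a dummy function.
    return -np.sum((weights - 0.5) ** 2)

population = [rng.normal(size=(N_ACTIONS, N_INPUTS)) for _ in range(POP)]
for generation in range(100):
    scored = sorted(population, key=evaluate, reverse=True)
    elite = scored[: POP // 5]          # keep the top 20%
    population = list(elite)
    while len(population) < POP:        # breed + mutate the rest
        a, b = rng.choice(len(elite), 2)
        mask = rng.random(elite[0].shape) < 0.5
        child = np.where(mask, elite[a], elite[b])        # crossover
        child = child + rng.normal(scale=0.1, size=child.shape) \
                        * (rng.random(child.shape) < 0.05)  # sparse mutation
        population.append(child)
    print(generation, evaluate(scored[0]))
```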

But here's the thing: as impressive as it is, MarI/O actually has very little idea how to play Super Mario World at all. Each time the program is presented with a new level, it has to learn how to play all over again. Which is what it's doing right now on Twitch. (via waxy)

The trolley problem (Apr 29 2015)

The trolley problem is an ethical and psychological thought experiment. In its most basic formulation, you're the driver of a runaway trolley about to hit and certainly kill five people on the track ahead, but you have the option of switching to a second track at the last minute, killing only a single person. What do you do?

The problem becomes stickier as you consider variations of the problem:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you -- your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

As driverless cars and other autonomous machines are increasingly on our minds, so too is the trolley problem. How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty? Is there a place in the car's system preferences panel to set the number of people? Where do we draw those lines and who gets to decide? Google? Tesla? Uber?[1] Congress? Captain Kirk?

If that all seems like a bit too much to ponder, Kyle York shared some lesser-known trolley problem variations at McSweeney's to lighten the mood.

There's an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley's path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it's in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.

Reeeeally makes you think, huh?

[1] If Uber gets to decide, the trolley problem's ethical concerns vanish. The car would simply hit whomever will spend less on Uber rides and deliveries in the future, weighted slightly for passenger rating. Of course, customers with a current subscription to Uber Safeguard would be given preference at different coverage levels of 1, 5, and 20+ ATPs (Alternately Targeted Persons).

Cognitive Cooking with Chef Watson (Apr 14 2015)

Chef Watson

Watson, IBM's evolving attempt at building a computer capable of AI, was originally constructed to excel at Jeopardy. Which it did, handily beating Jeopardy mega-champ Ken Jennings. Watson has since moved on to cooking and has just come out with a new cookbook, Cognitive Cooking with Chef Watson.

You don't have to be a culinary genius to be a great cook. But when it comes to thinking outside the box, even the best chefs can be limited by their personal experiences, the tastes and flavor combinations they already know. That's why IBM and the Institute of Culinary Education teamed up to develop a groundbreaking cognitive cooking technology that helps cooks everywhere discover and create delicious recipes, utilizing unusual ingredient combinations that man alone might never imagine.

In Cognitive Cooking with Chef Watson, IBM's unprecedented technology and ICE's culinary experts present more than 65 original recipes exploding with irresistible new flavors. Together, they have carefully crafted, evaluated and perfected each of these dishes for "pleasantness" (superb taste), "surprise" (innovativeness) and a "synergy" of mouthwatering ingredients that will delight any food lover.

Arcade Intelligence (Feb 26 2015)

Then something happens. By the three hundredth game, the A.I. has stopped missing the ball.

The New Yorker's Nicola Twilley on the computer program that learned how to play Breakout and other Atari games. All on its own. Artificial Intelligence Goes to the Arcade.
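The program in question is DeepMind's deep Q-network (DQN). The real thing learns from raw pixels with convolutional layers, a target network, and millions of frames, but the skeleton of Q-learning with a neural net fits in a page. A sketch below, using the classic gym API and CartPole standing in for Breakout to keep it tiny:

```python
# Skeleton of deep Q-learning, the technique behind the Atari results.
# Assumes the classic gym API (reset() -> obs; step() -> obs, r, done, info).
import random
from collections import deque

import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")   # stand-in for Breakout
obs_dim, n_actions = env.observation_space.shape[0], env.action_space.n

q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
replay, gamma, eps = deque(maxlen=10000), 0.99, 0.1

for episode in range(300):
    s, done = env.reset(), False
    while not done:
        # epsilon-greedy: mostly exploit the current Q estimates
        if random.random() < eps:
            a = env.action_space.sample()
        else:
            a = q(torch.tensor(s, dtype=torch.float32)).argmax().item()
        s2, r, done, _ = env.step(a)
        replay.append((s, a, r, s2, done))
        s = s2

        if len(replay) >= 64:
            batch = random.sample(replay, 64)
            S, A, R, S2, D = (torch.tensor(x, dtype=torch.float32)
                              for x in zip(*batch))
            # Bellman target: r + gamma * max_a' Q(s', a') for non-terminal s'
            with torch.no_grad():
                target = R + gamma * q(S2).max(1).values * (1 - D)
            pred = q(S).gather(1, A.long().unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
```

The "stopped missing the ball by game three hundred" moment is just this loop, run long enough for the Q estimates to get good.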

Superintelligent AI, humanity's final invention (Feb 04 2015)

When Tim Urban recently began researching artificial intelligence, what he discovered affected him so much that he wrote a deep two-part dive on The AI Revolution: The Road to Superintelligence and Our Immortality or Extinction.

An AI system at a certain level -- let's say human village idiot -- is programmed with the goal of improving its own intelligence. Once it does, it's smarter -- maybe at this point it's at Einstein's level -- so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it's the ultimate example of The Law of Accelerating Returns.
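The arithmetic behind that runaway is just compound growth where the growth feeds itself. A toy loop, with numbers that are purely illustrative, not a prediction:

```python
# A toy version of the "intelligence explosion" arithmetic: each round
# of self-improvement compounds on the last. Numbers are illustrative.
intelligence, step = 1.0, 0      # 1.0 = human village idiot, say
while intelligence < 170_000:    # Urban's "170,000 times more intelligent"
    intelligence *= 1.5          # each self-improvement round compounds
    step += 1
print(step)  # 30 -- the curve looks flat for a while, then goes vertical
```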

There is some debate about how soon AI will reach human-level general intelligence -- the median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040 -- that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like -- this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ -- we don't have a word for an IQ of 12,952.

While I was reading this, I kept thinking about two other posts Urban wrote: The Fermi Paradox (in that human-built AI could be humanity's own Great Filter) and From 1,000,000 to Graham's Number (how the speed and intelligence of computers could fold in on themselves to get unimaginably fast and powerful).

Superintelligence (Aug 12 2014)

Nick Bostrom has been thinking deeply about the philosophical implications of machine intelligence. You might recognize his name from previous kottke.org posts about the underestimation of human extinction and the possibility that we're living in a computer simulation, that sort of cheery stuff. He's collected some of his thoughts in a book called Superintelligence: Paths, Dangers, Strategies. Here's how Wikipedia summarizes it:

The book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists. As the fate of the gorillas now depends more on humans than on the actions of the gorillas themselves, so would the fate of humanity depend on the actions of the machine superintelligence. Absent careful pre-planning, the most likely outcome would be catastrophe.

Technological smartypants Elon Musk gave Bostrom's book an alarming shout-out on Twitter the other day. A succinct summary of Bostrom's argument from Musk:

Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

Eep. I'm still hoping for a Her-style outcome for superintelligence...the machines just get bored with people and leave.

Will technology help humans conquer the universe or kill us all? (Feb 27 2013)

Ross Andersen, whose interview with Nick Bostrom I linked to last week, has a marvelous new essay in Aeon about Bostrom and some of his colleagues and their views on the potential extinction of humanity. This bit of the essay is the most harrowing thing I've read in months:

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

'Let's say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,' Dewey told me. 'And let's say the Oracle AI has some goal it wants to achieve. Say you've designed it as a reinforcement learner, and you've put a button on the side of it, and when it gets an engineering problem right, you press the button and that's its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn't think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.'

'One day we might ask it how to cure a rare disease that we haven't beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it's actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it's going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage -- and then it would take that advantage and start doing what it wants to in the world.'

Read the whole thing, even if you have to watch goats yelling like people afterwards, just to cheer yourself back up.

Are we underestimating the risk of human extinction? (Feb 22 2013)

Nick Bostrom, a Swedish-born philosophy professor at Oxford, thinks that we're underestimating the risk of human extinction. The Atlantic's Ross Andersen interviewed Bostrom about his stance.

I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

While reading this, I got to thinking that maybe the reason we haven't observed any evidence of sentient extraterrestrial life is that at some point in the technology development timeline just past the "pumping out signals into space" point (where humans are now), a discovery is made that results in the destruction of a species. Something like a nanotech virus that's too fast and lethal to stop. And the same thing happens every single time it's discovered because it's too easy to discover and too powerful to stop.

Great algorithms steal (Mar 10 2010)

An interesting article about how composer and programmer David Cope found a unique solution for making computer-composed classical music sound as though it was composed by humans: he wrote algorithms that based new works on previously created works.

Finally, Cope's program could divine what made Bach sound like Bach and create music in that style. It broke rules just as Bach had broken them, and made the result sound musical. It was as if the software had somehow captured Bach's spirit -- and it performed just as well in producing new Mozart compositions and Shakespeare sonnets. One afternoon, a few years after he'd begun work on Emmy, Cope clicked a button and went out for a sandwich, and she spit out 5,000 beautiful, artificial Bach chorales, work that would've taken him several lifetimes to produce by hand.
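Cope's Emmy is a far more structured recombination system than anything this short, but the kernel of the idea (derive the statistics of existing works, then sample new ones) can be sketched in a few lines with a Markov chain over a toy corpus:

```python
# The crudest possible version of Cope's idea: learn transition
# statistics from existing works, then sample new sequences. Emmy's
# actual recombination is far more sophisticated than this.
import random
from collections import defaultdict

corpus = ["C4", "E4", "G4", "E4", "C4", "G4", "C5", "G4", "E4", "C4"]  # toy "Bach"

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)    # every note remembers what followed it

note = random.choice(corpus)
piece = [note]
for _ in range(20):
    note = random.choice(transitions[note] or corpus)
    piece.append(note)
print(" ".join(piece))          # a "new" piece in the corpus's style
```

Swap the toy corpus for Bach chorales, add rules for voice leading and phrase structure, and you start to see how a button press plus a sandwich break yields 5,000 chorales.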

Gosh it's going to get interesting when machines can do some real fundamental "human" things 10,000x faster and better than humans can.

Virtual society (Jul 15 2005)

Scientists are going to try and generate a society using a bunch of virtual agents in a virtual world.

Each agent will be capable of various simple tasks, like moving around and building simple structures, but will also have the ability to communicate and cooperate with its cohabitants. Through simple interaction, the researchers hope to watch these characters create their very own society from scratch.
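The article doesn't detail the agents' mechanics, but the genre is agent-based modeling, and a toy version is easy to sketch: agents wander a grid and pick up "words" from whoever they bump into. Everything below is invented for illustration:

```python
# A minimal agent-based sketch of the setup described: agents on a grid
# that move and share "words" with co-located neighbors. The real
# project involves vastly richer behavior; all details here are invented.
import random

SIZE, N_AGENTS, STEPS = 20, 30, 100

agents = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "words": {random.randint(0, 99)}} for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for a in agents:                    # wander one step on the torus
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % SIZE
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % SIZE
    for a in agents:                    # teach a word to anyone sharing a cell
        for b in agents:
            if a is not b and (a["x"], a["y"]) == (b["x"], b["y"]):
                b["words"].add(random.choice(tuple(a["words"])))

shared = set.intersection(*(a["words"] for a in agents))
print(f"words everyone knows after {STEPS} steps: {shared}")
```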

The World Series of Poker..........Robots (Jul 13 2005)

The World Series of Poker..........Robots. Robots make anything cooler.
