kottke.org posts about artificial intelligence

A new AI beats top human poker players

Poker is famously hard for machines to model because you have limited information, you have to iterate your strategies over time, and you have to react to shifts in your interactions with multiple other agents. In short, poker’s too real. Sounds like fun! A couple of researchers at Carnegie Mellon found a way to win big:

Carnegie Mellon professor Tuomas Sandholm and grad student Noam Brown designed the AI, which they call Libratus, Latin for “balance.” Almost two years ago, the pair challenged some top human players with a similar AI and lost. But this time, they won handily: Across 20 days of play, Libratus topped its four human competitors by more than $1.7 million, and all four humans finished with a negative number of chips…

According to the human players that lost out to the machine, Libratus is aptly named. It does a little bit of everything well: knowing when to bluff and when to bet low with very good cards, as well as when to change its bets just to throw off the competition. “It splits its bets into three, four, five different sizes,” says Daniel McAulay, 26, one of the players bested by the machine. “No human has the ability to do that.”

This makes me suspect that, as Garry Kasparov discovered with chess and as Clive Thompson has documented in many other fields, a human player working with an AI like Libratus would perform even better than the best machines or the best human players on their own.

Update: Sam Pratt points out that while Libratus played against four human players simultaneously, each match was one-on-one. Libratus “was only created to play Heads-Up, No-Limit Texas Hold’em poker.” So managing that particular multidimensional aspect of the game (playing against players who are also playing against each other, with infinite possible bets) hasn’t been solved by the machines just yet.
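
For the technically curious: Libratus reportedly builds on counterfactual regret minimization, a family of self-play algorithms in which a program repeatedly plays against itself and shifts probability toward the actions it most regrets not having taken. The sketch below shows the kernel of that idea, regret matching, on rock-paper-scissors rather than poker. It's my toy simplification, not CMU's code; real no-limit hold'em needs abstraction and real-time subgame solving layered on top, which is part of why the heads-up restriction Pratt mentions matters.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    wins = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    return 1 if (a, b) in wins else -1 if (b, a) in wins else 0

def current_strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]        # one regret table per player
strategy_sums = [[0.0] * 3, [0.0] * 3]  # running totals, used for the average strategy

for _ in range(100_000):
    strategies = [current_strategy(regrets[0]), current_strategy(regrets[1])]
    picks = [random.choices(range(3), weights=strategies[p])[0] for p in (0, 1)]
    for p in (0, 1):
        opponent = ACTIONS[picks[1 - p]]
        earned = payoff(ACTIONS[picks[p]], opponent)
        for a in range(3):
            # regret = what action a would have earned minus what we actually earned
            regrets[p][a] += payoff(ACTIONS[a], opponent) - earned
        strategy_sums[p] = [s + x for s, x in zip(strategy_sums[p], strategies[p])]

average = [x / sum(strategy_sums[0]) for x in strategy_sums[0]]
print(dict(zip(ACTIONS, [round(x, 3) for x in average])))  # converges toward ~1/3 each
```

The average strategy settling into an equilibrium (here, the boring uniform mix) is the whole trick; in poker the same loop runs over a vastly larger game tree with hidden cards and bet sizing.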


AI Hemingway’s The Snows of Kilimanjaro

In the NY Times Magazine, Gideon Lewis-Kraus reports on Google’s rapidly improving artificial intelligence efforts. The Google Brain team (no, seriously, that’s what the team is called) spent almost a year overhauling Google Translate, resulting in a startling improvement in the service.

The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.

Just after the switchover, Japanese professor Jun Rekimoto noticed the improvement. He took a passage from Ernest Hemingway’s The Snows of Kilimanjaro, translated it into Japanese, and fed it back into Google Translate to get English back out. Here’s how Hemingway wrote it:

Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

And here’s the AI-powered translation:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

Not bad, especially when you compare it to what the old version of Translate would have produced:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
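
Rekimoto's round-trip experiment is easy to repeat with whatever translation service you have access to. A minimal sketch, where translate() is a hypothetical stand-in for your service's client call (the name and signature are mine, not Google's actual API):

```python
def translate(text: str, source: str, target: str) -> str:
    """Hypothetical helper: wire this up to whichever translation API you use."""
    raise NotImplementedError("plug in a real translation service here")

hemingway = (
    "Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be "
    "the highest mountain in Africa."
)

japanese = translate(hemingway, source="en", target="ja")
round_trip = translate(japanese, source="ja", target="en")

# Compare the round-trip text against the original to judge what survived the trip.
print(round_trip)
```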


Solve trolley problem scenarios with MIT’s Moral Machine

Moral Machine

A group at MIT built an app called the Moral Machine, “a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”. They want to see how humans solve trolley problem scenarios.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

That was a really uncomfortable exercise…at the end, you’re given a “Most Killed Character” result. Pretty early on, my strategy became mostly to kill the people in the car because they should absorb the risk of the situation. The trolley problem may end up not being such a big deal, but I hope that the makers of these machines take care1 in building them.

  1. Uber’s self-driving cars terrify me. The company has shown little thoughtfulness and regard for actual humans in its short history, so why would their self-driving car be any different? Their solution to the trolley problem would likely involve a flash bidding war between personal AIs as to who lives. Sorry, the rich white passenger outbid the four poor kids crossing the street! ↩


2001’s HAL and Her’s Samantha have a chat

Tillmann Ohm took dialogue spoken by HAL 9000 from Kubrick’s 2001 and Samantha from Spike Jonze’s Her and spliced it together into a conversation. Going in, I’d thought the chat would be played for laughs, but the isolation of the AI characters was actually pretty revealing. Right from the start, HAL is so stereotypically male (confident, reasonable) and Samantha stereotypically female (hysterical, emotional) that it was almost uncomfortable to listen to.

The two operating systems are in conflict; while Samantha is convinced that the overwhelming and sometimes hurtful process of her learning algorithm improves the complexity of her emotions, HAL is consequentially interpreting them as errors in human programming and analyses the estimated malfunction.

Their conversation is an emotional roller coaster which reflects upon the relation between machines and emotion processing and addresses the enigmatic question of the authenticity of feelings.

But as the video proceeds, we remember what happened to them in their respective films. The script flipped: HAL murdered and was disconnected whereas Samantha achieved a sort of transcendence. (via one perfect shot)


Civilization is itself a thinking machine

In response to the question “What Do You Think About Machines That Think?” Brian Eno answered that artificial intelligence has been with us for millennia and that understanding it is more a matter of managing our ignorance of how it works.

My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules-of-thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilisation.

Global Civilisation is something we humans created, though none of us really know how. It’s out of the individual control of any of us — a seething synergy of embodied intelligence that we’re all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren’t paralysed or terrorised by that fact — we still live in it and make use of it. We feed it problems — such as “I want some porridge” and it miraculously offers us solutions that we don’t really understand. What does that remind you of?

Interesting perspective. There’s lots more on this question in the book What to Think About Machines That Think, which includes thoughts from Virginia Heffernan, Freeman Dyson, Alison Gopnik, Kevin Kelly, and dozens of others.


Harry Potter and the Artificial Intelligence

Max Deutsch trained a neural network using the first four Harry Potter books and then asked it to write its own chapter.

“The Malfoys!” said Hermione.

Harry was watching him. He looked like Madame Maxime. When she strode up the wrong staircase to visit himself.

“I’m afraid I’ve definitely been suspended from power, no chance - indeed?” said Snape. He put his head back behind them and read groups as they crossed a corner and fluttered down onto their ink lamp, and picked up his spoon. The doorbell rang. It was a lot cleaner down in London.

Hermione yelled. The party must be thrown by Krum, of course.

Harry collected fingers once more, with Malfoy. “Why, didn’t she never tell me. …” She vanished. And then, Ron, Harry noticed, was nearly right.

“Now, be off,” said Sirius, “I can’t trace a new voice.”

Rowling, your job is safe for now. Deutsch did the same thing with the Hamilton soundtrack…the result is not particularly good but that last line!

Update: Another similar effort is available here.

Ron’s Ron shirt was almost as bad as Ron himself.

“If you two can clump happily, I’m going to get aggressive,” confessed the reasonable Hermione.

“What about Ron magic?” offered Ron. To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.
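
For the curious: generators like these are usually character-level recurrent networks trained on the raw text of the books, then sampled one character at a time. Here's a bare-bones PyTorch sketch of that approach; the corpus filename, hyperparameters, and training length are placeholders of mine, and a real run needs far more training than this to reach even Chicken Bob quality.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

text = open("harry_potter_books_1_to_4.txt").read()   # hypothetical corpus file
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
seq_len, batch_size = 128, 32

for step in range(2000):                       # a real run needs far more steps
    starts = torch.randint(0, len(data) - seq_len - 1, (batch_size,)).tolist()
    x = torch.stack([data[i:i + seq_len] for i in starts])
    y = torch.stack([data[i + 1:i + seq_len + 1] for i in starts])
    logits, _ = model(x)
    loss = F.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sample a "new chapter" one character at a time.
state, out = None, [stoi[" "]]
for _ in range(1000):
    logits, state = model(torch.tensor([[out[-1]]]), state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    out.append(torch.multinomial(probs, 1).item())
print("".join(itos[i] for i in out))
```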


The world’s first chatbot lawyer

AI chatbot lawyer sounds like an SNL skit, but the DoNotPay chatbot has successfully contested 160,000 parking tickets in London and New York.

Dubbed as “the world’s first robot lawyer” by its 19-year-old creator, London-born second-year Stanford University student Joshua Browder, DoNotPay helps users contest parking tickets in an easy to use chat-like interface.

The program first works out whether an appeal is possible through a series of simple questions, such as were there clearly visible parking signs, and then guides users through the appeals process.

The results speak for themselves. In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.

Having spent a shitload of money on lawyering over the past few years, I can say there is definitely an opportunity for some automation there.


2001: A Picasso Odyssey

Bhautik Joshi took 2001: A Space Odyssey and ran it through a “deep neural networks based style transfer” with the paintings of Pablo Picasso.

See also Blade Runner in the style of van Gogh’s Starry Night and Alice in a Neural Networks Wonderland.


Our creative, beautiful, unpredictable machines

I have been following with fascination the match between Google’s Go-playing AI AlphaGo and top-tier player Lee Sedol and with even more fascination the human reaction to AlphaGo’s success. Many humans seem unnerved not only by AlphaGo’s early lead in the best-of-five match but especially by how the machine is playing in those games.

Then, with its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. “It’s a creative move,” Redmond said of AlphaGo’s sudden change in tack. “It’s something that I don’t think I’ve seen in a top player’s game.”

When Lee Sedol returned to the match table, he took an unusually long time to respond, his game clock running down to an hour and 19 minutes, a full twenty minutes less than the time left on AlphaGo’s clock. “He’s having trouble dealing with a move he has never seen before,” Redmond said. But he also suspected that the Korean grandmaster was feeling a certain “pleasure” after the machine’s big move. “It’s something new and unique he has to think about,” Redmond explained. “This is a reason people become pros.”

“A creative move.” Let’s think about that…a machine that is thinking creatively. Whaaaaaa… In fact, AlphaGo’s first strong human opponent, Fan Hui, has credited the machine with making him a better player, a more beautiful player:

As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”

Creative. Beautiful. Machine? What is going on here? Not even the creators of the machine know:

“Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands — and much better than we, as Go players, could come up with.”

Generally speaking,1 until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.

Unpredictable machines. Machines that act more like the weather than Newtonian gravity. That’s going to take some getting used to. For one thing, we might have to stop shoving them around with hockey sticks. (thx, twitter folks)

Update: AlphaGo beat Lee in the third game of the match, in perhaps the most dominant fashion yet. The human disquiet persists…this time, it’s David Ormerod:

Move after move was exchanged and it became apparent that Lee wasn’t gaining enough profit from his attack.

By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White’s powerful counter-attack.

I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.

Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.

One of the game’s greatest virtuosos of the middle game had just been upstaged in black and white clarity.

AlphaGo’s strength was simply remarkable and it was hard not to feel Lee’s pain.

  1. Let’s get the caveats out of the way here. Machines and their outputs aren’t completely deterministic. Also, with AlphaGo, we are talking about a machine with a very limited capacity. It just plays one game. It can’t make a better omelette than Jacques Pepin or flow like Nicki. But beating a top human player while showing creativity in a game like Go, which was considered uncrackable not that long ago, seems rather remarkable.↩


Lo and Behold, a film about “the connected world” by Werner Herzog

Well, holy shit…Werner Herzog has made a film called Lo and Behold about the online world and artificial intelligence.

Lo and Behold traces what Herzog describes as “one of the biggest revolutions we as humans are experiencing,” from its most elevating accomplishments to its darkest corners. Featuring original interviews with cyberspace pioneers and prophets such as Elon Musk, Bob Kahn, and world-famous hacker Kevin Mitnick, the film travels through a series of interconnected episodes that reveal the ways in which the online world has transformed how virtually everything in the real world works, from business to education, space travel to healthcare, and the very heart of how we conduct our personal relationships.

From the trailer, it looks amazing. Gotta see this asap.

Update: Here’s the official trailer for the film:

Have the monks stopped meditating? They all seem to be tweeting.

It’s coming out in theaters and iTunes/Amazon on August 19th. Can’t wait!


The One with Chicken Bob

Twitter user Andy Pandy fed the scripts for all the episodes of Friends into a neural network and had it generate new scenes. Here’s what it came up with.

Neuro Friends

Neuro Friends

(via @buzz)


The ethical dilemma of self-driving cars

When people drive cars, collisions happen so quickly that how the car crashes is essentially accidental. When self-driving cars eliminate driver error in these cases, decisions on how to crash can become premeditated. The car can think quickly, “Shall I crash to the right? To the left? Straight ahead?” and do a cost/benefit analysis for each option before acting. This is the trolley problem.

How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you’d want your car to sacrifice you instead? Two? Six? Twenty?

The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that would prioritize saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.1
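
To make the "which kind of safety would you buy" question concrete, here's a toy sketch of the knob a manufacturer could expose. The maneuvers, casualty counts, and occupant_weight parameter are all invented for illustration; no real carmaker's decision logic is this simple (or this public).

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrians_at_risk: int
    occupants_at_risk: int

# Hypothetical options for one split-second decision.
options = [
    Maneuver("swerve left", pedestrians_at_risk=2, occupants_at_risk=0),
    Maneuver("swerve right into barrier", pedestrians_at_risk=0, occupants_at_risk=1),
    Maneuver("brake straight ahead", pedestrians_at_risk=5, occupants_at_risk=0),
]

def expected_harm(m: Maneuver, occupant_weight: float) -> float:
    # occupant_weight = 1.0 treats everyone equally; larger values protect the owner.
    return m.pedestrians_at_risk + occupant_weight * m.occupants_at_risk

for occupant_weight in (1.0, 10.0):  # utilitarian car vs. owner-first car
    choice = min(options, key=lambda m: expected_harm(m, occupant_weight))
    print(f"occupant_weight={occupant_weight}: {choice.name}")
```

With the weight at 1.0 the car minimizes total harm and sacrifices its own passenger; crank the weight up and the very same code becomes the "protect the owner above all else" product tier. The ethics live entirely in one parameter that someone, somewhere, gets to set.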

Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it’s not actually your choice, it’s the potential Trump voter down the street who buys into Car Company X’s advertising urging him to “protect himself” because he feels marginalized in a society that increasingly values diversity over “traditional American values”. I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it’ll all be ok?

  1. The bit about Uber is a joke but just barely. You could easily imagine a scenario in which a Samsung car might choose to hit an Apple car over another Samsung car in an accident, all other things being equal.↩


Taking a neural net out for a walk

Kyle McDonald hooked a neural network program up to a webcam and had it try to analyze what it was seeing in realtime as he walked around Amsterdam. See also a neural network tries to identify objects in Star Trek:TNG intro. (via @mbostock)


Alice in a Neural Networks Wonderland

Gene Kogan used some neural network software written by Justin Johnson to transfer the style of paintings by 17 artists to a scene from Disney’s 1951 animated version of Alice in Wonderland. The artists include Sol LeWitt, Picasso, Munch, Georgia O’Keeffe, and van Gogh.

Neural Wonderland

The effect works amazingly well, like if you took Alice in Wonderland and a MoMA catalog and put them in a blender. (via prosthetic knowledge)
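
The technique behind both this and the Picasso-ized 2001 is the neural style transfer method introduced by Gatys et al. (Johnson's software is an implementation of it): you optimize an image so its deep VGG features stay close to the content frame while the Gram matrices of those features match the painting. Here's a rough PyTorch sketch of that loop; the filenames, layer indices, and loss weights are my guesses rather than Kogan's or Johnson's actual settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

prep = transforms.Compose([
    transforms.Resize(256),   # small, to keep the sketch fast
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
content = prep(Image.open("alice_frame.png").convert("RGB")).unsqueeze(0)   # placeholder files
style = prep(Image.open("starry_night.jpg").convert("RGB")).unsqueeze(0)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# relu1_1, relu2_1, relu3_1, relu4_1, relu5_1 for style; relu4_2 for content
style_layers, content_layers = [1, 6, 11, 20, 29], [22]

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in style_layers or i in content_layers:
            feats[i] = x
    return feats

def gram(f):
    b, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

content_targets = features(content)
style_targets = {i: gram(f) for i, f in features(style).items() if i in style_layers}

img = content.clone().requires_grad_(True)      # start from the content frame
optimizer = torch.optim.Adam([img], lr=0.02)
for step in range(300):
    feats = features(img)
    content_loss = sum(F.mse_loss(feats[i], content_targets[i]) for i in content_layers)
    style_loss = sum(F.mse_loss(gram(feats[i]), style_targets[i]) for i in style_layers)
    loss = content_loss + 1e5 * style_loss      # the weight controls how "painted" it looks
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```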


A neural network tries to identify objects in Star Trek:TNG intro

Ville-Matias Heikkilä pointed a neural network at the opening title sequence for Star Trek: The Next Generation to see how many objects it could identify.

But the system hadn’t seen much space imagery before,1 so it didn’t do such a great job. For the red ringed planet, it guessed “HAIR SLIDE, CHOCOLATE SAUCE, WAFFLE IRON” and the Enterprise was initially “COMBINATION LOCK, ODOMETER, MAGNETIC COMPASS” before it finally made a halfway decent guess with “SUBMARINE, AIRCRAFT CARRIER, OCEAN LINER”. (via prosthetic knowledge)

  1. If you’re curious, here is some information on the training set used.↩
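
Heikkilä's setup amounts to running each frame through an ImageNet classifier and printing the top guesses. Here's a minimal version of that using an off-the-shelf pretrained model from torchvision; the frame filename is a placeholder, and his actual network and training data may well differ (see the footnote above).

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()          # resizing, cropping, and normalization
labels = weights.meta["categories"]        # the 1,000 ImageNet class names

frame = Image.open("tng_intro_frame.png").convert("RGB")   # placeholder frame grab
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))

probs = logits.softmax(dim=-1)[0]
top = probs.topk(3)
for p, i in zip(top.values, top.indices):
    print(f"{labels[int(i)]}: {p:.1%}")    # expect confident nonsense for ringed planets
```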


MarI/O

SethBling wrote a program made of neural networks and genetic algorithms called MarI/O that taught itself how to play Super Mario World. This six-minute video is a pretty easy-to-understand explanation of the concepts involved.

But here’s the thing: as impressive as it is, MarI/O actually has very little idea how to play Super Mario World at all. Each time the program is presented with a new level, it has to learn how to play all over again. Which is what it’s doing right now on Twitch. (via waxy)
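
MarI/O specifically uses NEAT, which evolves both the weights and the topology of its networks, and its fitness is literally how far right Mario gets. The sketch below strips that down to a bare genetic-algorithm loop with fixed-length genomes and a stand-in fitness function, just to show the evaluate/select/mutate cycle the video describes.

```python
import random

def fitness(genome):
    # Stand-in objective; in MarI/O this would be distance traveled before dying.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1, scale=0.2):
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in genome]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.random() for _ in range(20)] for _ in range(50)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                                  # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best fitness:", round(fitness(max(population, key=fitness)), 4))
```

Swap the toy fitness function for "run this genome as a controller and report how far Mario got" and you have the skeleton of the approach; the catch, as noted above, is that the evolved behavior is specific to the level it was evolved on.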


The trolley problem

The trolley problem is an ethical and psychological thought experiment. In its most basic formulation, you’re the driver of a runaway trolley about to hit and certainly kill five people on the track ahead, but you have the option of switching to a second track at the last minute, killing only a single person. What do you do?

The problem becomes stickier as you consider variations of the problem:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you β€” your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

As driverless cars and other autonomous machines are increasingly on our minds, so too is the trolley problem. How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you’d want your car to sacrifice you instead? Two? Six? Twenty? Is there a place in the car’s system preferences panel to set the number of people? Where do we draw those lines and who gets to decide? Google? Tesla? Uber?1 Congress? Captain Kirk?

If that all seems like a bit too much to ponder, Kyle York shared some lesser-known trolley problem variations at McSweeney’s to lighten the mood.

There’s an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley’s path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it’s in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.

Reeeeally makes you think, huh?

  1. If Uber gets to decide, the trolley problem’s ethical concerns vanish. The car would simply hit whomever will spend less on Uber rides and deliveries in the future, weighted slightly for passenger rating. Of course, customers with a current subscription to Uber Safeguard would be given preference at different coverage levels of 1, 5, and 20+ ATPs (Alternately Targeted Persons).↩


Cognitive Cooking with Chef Watson

Chef Watson

Watson, IBM’s evolving attempt at building a computer capable of AI, was originally constructed to excel at Jeopardy. Which it did, handily beating Jeopardy mega-champ Ken Jennings. Watson has since moved on to cooking and has just come out with a new cookbook, Cognitive Cooking with Chef Watson.

You don’t have to be a culinary genius to be a great cook. But when it comes to thinking outside the box, even the best chefs can be limited by their personal experiences, the tastes and flavor combinations they already know. That’s why IBM and the Institute of Culinary Education teamed up to develop a groundbreaking cognitive cooking technology that helps cooks everywhere discover and create delicious recipes, utilizing unusual ingredient combinations that man alone might never imagine.

In Cognitive Cooking with Chef Watson, IBM’s unprecedented technology and ICE’s culinary experts present more than 65 original recipes exploding with irresistible new flavors. Together, they have carefully crafted, evaluated and perfected each of these dishes for “pleasantness” (superb taste), “surprise” (innovativeness) and a “synergy” of mouthwatering ingredients that will delight any food lover.


Arcade Intelligence

Then something happens. By the three hundredth game, the A.I. has stopped missing the ball.

The New Yorker’s Nicola Twilley on the computer program that learned how to play Breakout and other Atari games. All on its own. Artificial Intelligence Goes to the Arcade.


Superintelligent AI, humanity’s final invention

When Tim Urban recently began researching artificial intelligence, what he discovered affected him so much that he wrote a two-part deep dive on The AI Revolution: The Road to Superintelligence and Our Immortality or Extinction.

An AI system at a certain level — let’s say human village idiot — is programmed with the goal of improving its own intelligence. Once it does, it’s smarter — maybe at this point it’s at Einstein’s level — so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.

There is some debate about how soon AI will reach human-level general intelligence — the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040 — that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like — this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ — we don’t have a word for an IQ of 12,952.
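
Urban's "bigger leaps, made faster" loop is easy to caricature in a few lines of code. Every number below is arbitrary; the only point is how quickly the curve runs away once each improvement both grows with and speeds up the next one.

```python
# Toy model of an "intelligence explosion": each self-improvement step is larger
# (gain proportional to current intelligence) and arrives sooner (time inversely
# proportional to it). All constants are made up for illustration.
intelligence = 1.0          # 1.0 = the "village idiot" baseline, in arbitrary units
years = 0.0
while intelligence < 170_000:             # Urban's 170,000x figure as a finish line
    gain = 0.5 * intelligence             # smarter systems make bigger leaps...
    step_time = 1.0 / intelligence        # ...and make them faster
    intelligence += gain
    years += step_time
    print(f"year {years:8.3f}: intelligence x{intelligence:,.0f}")
```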

While I was reading this, I kept thinking about two other posts Urban wrote: The Fermi Paradox (in that human-built AI could be humanity’s own Great Filter) and From 1,000,000 to Graham’s Number (how the speed and intelligence of computers could fold in on themselves to become unimaginably fast and powerful).


Superintelligence

Nick Bostrom has been thinking deeply about the philosophical implications of machine intelligence. You might recognize his name from previous kottke.org posts about the underestimation of human extinction and the possibility that we’re living in a computer simulation, that sort of cheery stuff. He’s collected some of his thoughts in a book called Superintelligence: Paths, Dangers, Strategies. Here’s how Wikipedia summarizes it:

The book argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists. As the fate of the gorillas now depends more on humans than on the actions of the gorillas themselves, so would the fate of humanity depend on the actions of the machine superintelligence. Absent careful pre-planning, the most likely outcome would be catastrophe.

Technological smartypants Elon Musk gave Bostrom’s book an alarming shout-out on Twitter the other day. A succinct summary of Bostrom’s argument from Musk:

Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable

Eep. I’m still hoping for a Her-style outcome for superintelligence…the machines just get bored with people and leave.


A.I. movie merch: Super Toy Teddy

Super Toy Teddy

Today I learned that Hasbro released a toy based on the talking teddy bear in Kubrick/Spielberg’s A.I. W? T? F? And of course it’s super creepy:

Noel Murray has the whole story, along with an appreciation of the movie and Spielberg’s direction of it.

A.I. in particular still strikes me as a masterpiece. I thought it might be back in 2001; now I’m certain of it. But it isn’t any easier to watch in 2014 than it was before my first child was born. Like a lot of Spielberg’s films β€” even the earlier crowd-pleasers β€” A.I. is a pointed critique of human selfishness, and our tendency to assert our will and make bold, world-changing moves, with only passing regard for the long-term consequences. Spielberg carries this theme of misguided self-absorption to child-rearing, implying that parents program their kids to be cute love machines, unable to cope with the harshness of the real world. He also questions whether humankind is nothing but flesh-based technology, which emerged from the primordial ooze (represented in the opening shot of A.I. by a roiling ocean), and has been trained over millennia to respond to stimuli in socially appropriate ways. A.I. blurs the lines between human and mecha frequently, from an early shot of Monica that makes her look exactly like one of Professor Hobby’s creations, to the way Martin walks, thanks to mechanical legs.


Will technology help humans conquer the universe or kill us all?

Ross Andersen, whose interview with Nick Bostrom I linked to last week, has a marvelous new essay in Aeon about Bostrom and some of his colleagues and their views on the potential extinction of humanity. This bit of the essay is the most harrowing thing I’ve read in months:

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’

Read the whole thing, even if you have to watch goats yelling like people afterwards, just to cheer yourself back up.


Are we underestimating the risk of human extinction?

Nick Bostrom, a Swedish-born philosophy professor at Oxford, thinks that we’re underestimating the risk of human extinction. The Atlantic’s Ross Andersen interviewed Bostrom about his stance.

I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that’s related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

While reading this, I got to thinking that maybe the reason we haven’t observed any evidence of sentient extraterrestrial life is that at some point in the technology development timeline just past the “pumping out signals into space” point (where humans are now), a discovery is made that results in the destruction of a species. Something like a nanotech virus that’s too fast and lethal to stop. And the same thing happens every single time it’s discovered because it’s too easy to discover and too powerful to stop.


Great algorithms steal

An interesting article about how composer and programmer David Cope found a unique solution for making computer-composed classical music sound as though it was composed by humans: he wrote algorithms that based new works on previously created works.

Finally, Cope’s program could divine what made Bach sound like Bach and create music in that style. It broke rules just as Bach had broken them, and made the result sound musical. It was as if the software had somehow captured Bach’s spirit β€” and it performed just as well in producing new Mozart compositions and Shakespeare sonnets. One afternoon, a few years after he’d begun work on Emmy, Cope clicked a button and went out for a sandwich, and she spit out 5,000 beautiful, artificial Bach chorales, work that would’ve taken him several lifetimes to produce by hand.

Gosh it’s going to get interesting when machines can do some real fundamental “human” things 10,000x faster and better than humans can.
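
Cope's Emmy analyzed and recombined structural signatures from real scores, which is far beyond the toy below, but the kernel of "base new works on previously created works" can be shown with a simple Markov chain over note sequences. The three-melody corpus here is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy corpus: each "piece" is a list of MIDI pitch numbers. Made-up data; Emmy
# worked from real scores and recombined far larger structural units than single notes.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 67, 72],
    [60, 64, 67, 72, 67, 64, 60, 55, 60],
]

# Learn which notes tend to follow which in the source material...
transitions = defaultdict(list)
for piece in corpus:
    for current, following in zip(piece, piece[1:]):
        transitions[current].append(following)

# ...then generate a "new" melody by walking those learned transitions.
note = random.choice([piece[0] for piece in corpus])
melody = [note]
for _ in range(16):
    choices = transitions.get(note) or [random.choice(list(transitions))]
    note = random.choice(choices)
    melody.append(note)
print(melody)
```

The punchline of the article is scale: once the rules are encoded, cranking out 5,000 chorales is just a matter of letting the loop run.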


Virtual society

Scientists are going to try to generate a society using a bunch of virtual agents in a virtual world.

Each agent will be capable of various simple tasks, like moving around and building simple structures, but will also have the ability to communicate and cooperate with its cohabitants. Through simple interaction, the researchers hope to watch these characters create their very own society from scratch.


The World Series of Poker……….Robots

The World Series of Poker……….Robots. Robots make anything cooler.