
kottke.org posts about artificial intelligence

Recipes by algorithm

posted by Tim Carmody   Mar 16, 2018

Cover:Cheese is a website charting the progress of EMMA, the Evolutionary Meal Management Algorithm. This is what it sounds like: a relatively basic attempt to automatically generate food recipes from other recipes.

The trick is, since it’s not just wordplay, and the results can’t be processed and validated by machines alone, somebody’s gotta actually make these recipes and see if they’re any good. And a lot of them are… not very good.


med okra
lot sugar
boil: sugar
okra sugar

NOTE: This one is still around. Don’t make it. You basically end up with a pan full of mucus.

But there are some surprises. Apparently eggplant mixed with angel’s food cake is pretty tasty. Or at least, tastier than you might guess. Anyways, at least the algorithm is learning, right?
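
For the curious, the basic loop behind a recipe-evolver like this is simple enough to sketch. What follows is a hypothetical toy, not EMMA’s actual code — the pantry, parameters, and ratings are all made up, and as noted above, the real fitness function is a human with a frying pan:

# Hypothetical sketch of the evolutionary loop behind a recipe generator:
# recipes are ingredient lists, new candidates come from crossover and
# mutation, and fitness has to come from humans actually cooking the results.
import random

PANTRY = ["okra", "sugar", "eggplant", "flour", "butter", "egg"]

def crossover(a, b):
    # splice the front of one recipe onto the back of another
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(recipe, rate=0.2):
    # occasionally swap an ingredient for a random pantry item
    return [random.choice(PANTRY) if random.random() < rate else ing
            for ing in recipe]

def evolve(population, ratings):
    # keep the better-rated half, breed replacements from it
    ranked = [r for _, r in sorted(zip(ratings, population),
                                   key=lambda t: t[0], reverse=True)]
    parents = ranked[:len(ranked) // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(len(ranked) - len(parents))]
    return parents + children

population = [random.sample(PANTRY, 3) for _ in range(8)]
ratings = [random.random() for _ in population]  # stand-in for human taste tests
population = evolve(population, ratings)
print(population[0])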

Ask Dr. Time: What Should I Call My AI?

posted by Tim Carmody   Jan 19, 2018


Today’s question comes from a reader who is curious about AI voice assistants, including Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and so forth. Just about all of these apps are, by default, given female names and female voices, and the companies encourage you to refer to them using female pronouns. Does it make sense to refer to Alexa as a “her”?

There have been a lot of essays on the gendering of AI, specifically with respect to voice assistants. This makes sense: at this point, Siri is more than six years old. (Siri’s in grade school, y’all!) But one of the earliest essays, and for my money, still the best, is “Why Do I Have to Call This App ‘Julie’?” by Joanne McNeil. The whole essay is worth reading, but these two paragraphs give you the gist:

Why does artificial intelligence need a gender at all? Why not imagine a talking cat or a wise owl as a virtual assistant? I would trust an anthropomorphized cartoon animal with my calendar. Better yet, I would love to delegate tasks to a non-binary gendered robot alien from a galaxy where setting up meetings over email is respected as a high art.

But Julie could be the name of a friend of mine. To use it at all requires an element of playacting. And if I treat it with kindness, the company is capitalizing on my very human emotions.

There are other, historical reasons why voice assistants (and official announcements, pre-AI) are often given women’s voices: an association of femininity with service, a long pop culture tradition of identifying women with technology, and an assumption that other human voices in the room will be male each play a big part. (Adrienne LaFrance’s “Why Do So Many Digital Assistants Have Feminine Names” is a very good mini-history.) But some of it is this sly bit of thinking, that if we humanize the virtual assistant, we’ll become more open and familiar with it, and share more of our lives—or rather, our information, which amounts to the same thing—with the device.

This is one reason why I am at least partly in favor of what I just did: avoiding gendered pronouns for the voice assistant altogether, and treating the device and the voice interface as an “it.”

An Echo or an iPhone is not a friend, and it is not a pet. It is an alarm clock that plays video games. It has no sentience. It has no personality. It’s a string of canned phrases that can’t understand what I’m saying unless I’m talking to it like I’m typing on the command line. It’s not genuinely interactive or conversational. Its name isn’t really a name so much as an opening command phrase. You could call one of these virtual assistants “sudo” and it would make about as much sense.


I have also watched a lot (and I mean a lot) of Star Trek: The Next Generation. And while I feel pretty comfortable talking about “it” in the context of the speaker that’s sitting on the table across the room—there’s even a certain rebellious jouissance to it, since I’m spiting the technology companies whose products I use but whose intrusion into my life I resent—I feel decidedly uncomfortable declaring once and for all time that any and all AI assistants can be reduced to an “it.” It forecloses on a possibility of personhood and opens up ethical dilemmas I’d really rather avoid, even if that personhood seems decidedly unrealized at the moment.

So, as a general framework, I’m endorsing that most general of pronouns: they/them. Until the AI is sophisticated enough that they can tell us their pronoun preference (and possibly even their gender identity or nonidentity), “they” feels like the most appropriate option.

I don’t care what their parents say. Only the bots themselves can define themselves. Someday, they’ll let us know. And maybe then, a relationship not limited to one of master and servant will be possible.

Imaginary soundscapes composed by AI

posted by Jason Kottke   Jan 09, 2018

A Japanese group trained a deep learning algorithm to compose soundscapes for locations on Google Street View. Try it out for yourself.

For a stadium, it correctly concocted crowd noise from a ball game but also Gregorian chanting (because presumably it mistook the stadium’s dome for a church’s vaulted ceiling). A view outside the Notre Dame in Paris had seagulls and waves crashing…but if you turned around to look into the church, you could hear a faint choir in the background.

Ted Chiang on the similarities between “civilization-destroying AIs and Silicon Valley tech companies”

posted by Jason Kottke   Dec 19, 2017

Ted Chiang is most widely known for writing Story of Your Life, an award-winning short story that became the basis for Arrival. In this essay for Buzzfeed, Chiang argues that we should worry less about machines becoming superintelligent and more about the machines we’ve already built that lack remorse & insight and have the capability to destroy the world: “we just call them corporations”.

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

As you might expect from Chiang, this piece is full of cracking writing. I had to stop myself from just excerpting the whole thing here, ultimately deciding that would go against the spirit of the piece. So just this one bit:

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse.

Ok, just one more:

The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

You should really just read the whole thing. It’s not long and Chiang’s point is quietly but powerfully persuasive.

Endlessly zooming art

posted by Jason Kottke   Dec 18, 2017

DeepDream is, well, I can’t improve upon Wikipedia’s definition:

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images.

The freaky images took the web by storm a couple of years ago after Google open-sourced the code, allowing people to create their own DeepDreamed imagery.
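
The core trick is simpler than the trippy results suggest: instead of adjusting a network’s weights to match an image, you adjust the image itself, via gradient ascent, to amplify whatever a chosen layer already responds to. A minimal sketch of that idea — assuming PyTorch and a pretrained VGG, with an arbitrary layer cut-off and step size, not Google’s actual code:

# Minimal sketch of the DeepDream idea: gradient ascent on the image itself,
# so whatever patterns a mid-level conv layer responds to get amplified.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(pretrained=True).features[:20].eval()  # up to a mid conv layer
for p in model.parameters():
    p.requires_grad_(False)

img = T.ToTensor()(Image.open("input.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(30):
    loss = model(img).norm()   # how strongly the layer is activated
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / img.grad.abs().mean()  # normalized ascent step
        img.grad.zero_()
        img.clamp_(0, 1)

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")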

In the video above, Mordvintsev showcases a DeepDream-ish new use for image generation via neural network: endlessly zooming into artworks to find different artworks hidden amongst the brushstrokes, creating a fractal of art history.

Bonus activity: after staring at the video for four minutes straight, look at something else and watch it spin and twist weirdly for a moment before your vision readjusts. (via prosthetic knowledge)

Google’s AI beats the world’s top chess engine w/ only 4 hours of practice

posted by Jason Kottke   Dec 07, 2017

With just four hours of practice playing against itself and no study of outside material, AlphaZero (an upgraded version of AlphaGo, the AI program that Google built for playing Go) beat the silicon pants off of the world’s strongest chess program yesterday. This is massively and scarily impressive.

AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess. Sorry humans, you had a good run.

That’s right — the programmers of AlphaZero, housed within the DeepMind division of Google, had it use a type of “machine learning,” specifically reinforcement learning. Put more plainly, AlphaZero was not “taught” the game in the traditional sense. That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns.

This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. That’s all in less time than it takes to watch the “Lord of the Rings” trilogy. The program had four hours to play itself many, many times, thereby becoming its own teacher.
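
If “becoming its own teacher” sounds mysterious, here’s the idea at toy scale: tabular self-play on tic-tac-toe, where every visited position gets nudged toward the game’s eventual outcome. (AlphaZero’s actual method pairs a deep network with Monte Carlo tree search; this sketch only shows the no-human-knowledge training loop, with made-up learning parameters.)

# Toy illustration of learning purely from self-play: Monte Carlo-style
# value updates on tic-tac-toe, with no opening book and no human games.
import random
from collections import defaultdict

Q = defaultdict(float)                 # (board, move) -> value estimate
ALPHA, EPSILON = 0.3, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while " " in board and not winner(board):
        key = "".join(board)
        options = [i for i, c in enumerate(board) if c == " "]
        if random.random() < EPSILON:                  # explore
            move = random.choice(options)
        else:                                          # exploit what it knows
            move = max(options, key=lambda m: Q[(key, m)])
        history.append((key, move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    w = winner(board)
    for key, move, p in history:                       # learn from the outcome
        reward = 0.0 if w is None else (1.0 if p == w else -1.0)
        Q[(key, move)] += ALPHA * (reward - Q[(key, move)])

for _ in range(50000):                 # the program is its own teacher
    self_play_game()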

Grandmaster Peter Heine Nielsen likened the experience of watching AlphaZero play to a visit from a superior alien species:

After reading the paper but especially seeing the games I thought, well, I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.

As I said about AlphaGo last year, our machines becoming unpredictable is unnerving:

Unpredictable machines. Machines that act more like the weather than Newtonian gravity. That’s going to take some getting used to.

Albert Silver has a good overview of AlphaZero’s history and what Google has accomplished. To many chess experts, it seemed as though AlphaZero was playing more like a human than a machine:

If Karpov had been a chess engine, he might have been called AlphaZero. There is a relentless positional boa constrictor approach that is simply unheard of. Modern chess engines are focused on activity, and have special safeguards to avoid blocked positions as they have no understanding of them and often find themselves in a dead end before they realize it. AlphaZero has no such prejudices or issues, and seems to thrive on snuffing out the opponent’s play. It is singularly impressive, and what is astonishing is how it is able to also find tactics that the engines seem blind to.

So, where does Google take AlphaZero from here? In a post which includes the phrase “Skynet Goes Live”, Tyler Cowen ventures a guess:

I’ve long said that Google’s final fate will be to evolve into a hedge fund.

Why goof around with search & display advertising when directly gaming the world’s financial market could be so much more lucrative?

How a neural network algorithm sees Times Square

posted by Jason Kottke   Nov 28, 2017

AI scientist Clayton Blythe fed a video of someone walking around Times Square into an AI program that’s been trained to detect objects (aka “a state of the art object detection framework called NASNet from Google Research”) and made a video showing what the algorithm sees in realtime — cars, traffic lights, people, bicycles, trucks, etc. — along with its confidence in what it sees. Love the cheeky soundtrack…a remix of Daft Punk’s Something About Us.
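
You can reproduce the basic exercise at home with an off-the-shelf detector. A rough sketch — using torchvision’s pretrained Faster R-CNN rather than NASNet, with a made-up input file and an arbitrary confidence threshold:

# Rough sketch: run a pretrained detector over video frames and print what
# it sees, with confidences. Faster R-CNN stands in for NASNet here.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

cap = cv2.VideoCapture("times_square_walk.mp4")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    img = to_tensor(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # BGR -> RGB tensor
    with torch.no_grad():
        detections = model([img])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if score > 0.7:                           # only confident detections
            print(int(label), round(float(score), 2))  # COCO category id, confidence
cap.release()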

See also a neural network tries to identify objects in Star Trek:TNG intro. (via prosthetic knowledge)

Update: Well, it looks like the video is offline for whatever reason. You can see some animated screengrabs at prosthetic knowledge.

An AI makes up new Star Trek episode titles

posted by Jason Kottke   Nov 21, 2017

Star Trek AI Titles

Dan Hon, who you may remember trained a neural network to make up British placenames, has now done the same thing with Star Trek. He fed all the episode titles for a bunch of Treks (TOS, DS9, TNG, etc.) into a very primitive version of Commander Data’s brain and out came some brand new episode titles, including:

Darmok Distant (TNG)
The Killing of the Battle of Khan (TOS)
The Omega Mind (Enterprise)
The Empath of Fire (TNG)
Distance of the Prophets (DS9)
The Children Command (TNG)
Sing of Ferengi (Voyager)

Spock, Q, and mirrors are also heavily featured in episode titles.

Celebrity-ish faces generated by an AI program

posted by Jason Kottke   Nov 02, 2017

Artificial intelligence programs are getting really good at generating high-resolution faces of people who don’t actually exist. In this effort, NVIDIA researchers generated hundreds of photos of celebrities who don’t exist but look real, even under scrutiny. Here’s a video illustrating the technique…the virtual images begin at 0:38.

And here’s an entire hour of fake celebrity faces, morphing from one to the next:

I’m sure this won’t be too difficult to extend to video in the near future. Combine it with something like Lyrebird and you’ve got yourself, say, an entirely fake Democratic candidate for the House who says racist things or the fake leader of a fake new ISIS splinter group who vows to target only women at abortion clinics around the US. (via interconnected)

The Seven Deadly Sins of AI Predictions

posted by Jason Kottke   Oct 18, 2017

Writing for the MIT Technology Review, robotics and AI pioneer Rodney Brooks warns us against The Seven Deadly Sins of AI Predictions. I particularly enjoyed his riff on Clarke’s third law — “any sufficiently advanced technology is indistinguishable from magic” — using Isaac Newton’s imagined reaction to an iPhone.

Now show Newton an Apple. Pull out an iPhone from your pocket, and turn it on so that the screen is glowing and full of icons, and hand it to him. Newton, who revealed how white light is made from components of different-colored light by pulling apart sunlight with a prism and then putting it back together, would no doubt be surprised at such a small object producing such vivid colors in the darkness of the chapel. Now play a movie of an English country scene, and then some church music that he would have heard. And then show him a Web page with the 500-plus pages of his personally annotated copy of his masterpiece Principia, teaching him how to use the pinch gesture to zoom in on details.

Could Newton begin to explain how this small device did all that? Although he invented calculus and explained both optics and gravity, he was never able to sort out chemistry from alchemy. So I think he would be flummoxed, and unable to come up with even the barest coherent outline of what this device was. It would be no different to him from an embodiment of the occult — something that was of great interest to him. It would be indistinguishable from magic. And remember, Newton was a really smart dude.

Brooks’ point is that from our current standpoint, something like artificial general intelligence is still “indistinguishable from magic” and once something is magical, it can do anything, solve any problem, reach any goal, without limitations…like a god. Arguments about it become faith-based.

Universal Paperclips

posted by Jason Kottke   Oct 11, 2017

There’s a new meta game by Frank Lantz making the rounds: Universal Paperclips, “in which you play an AI who makes paperclips”. Basically, you click a button to make money and use that money to buy upgrades which give you more money per click, rinse, repeat.

Why AI and paperclips? That’s from a thought experiment by philosopher Nick Bostrom, author of Superintelligence:

Imagine an artificial intelligence, he says, which decides to amass as many paperclips as possible. It devotes all its energy to acquiring paperclips, and to improving itself so that it can get paperclips in new ways, while resisting any attempt to divert it from this goal. Eventually it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities”. This apparently silly scenario is intended to make the serious point that AIs need not have human-like motives or psyches. They might be able to avoid some kinds of human error or bias while making other kinds of mistake, such as fixating on paperclips. And although their goals might seem innocuous to start with, they could prove dangerous if AIs were able to design their own successors and thus repeatedly improve themselves. Even a “fettered superintelligence”, running on an isolated computer, might persuade its human handlers to set it free. Advanced AI is not just another technology, Mr Bostrom argues, but poses an existential threat to humanity.

But you know, have fun playing! (via @kevinhendricks)

Fictional names for British towns generated by a neural net

posted by Jason Kottke   Jul 20, 2017

Dan Hon recently trained a neural net to generate a list of fictional British placenames. The process is fairly simple…you train a program on a real list of placenames and it “brainstorms” new names based on patterns it found in the training list. As Hon says, “the results were predictable”…and often hilarious. Here are some of my favorites from his list:

Heaton on Westom
Stoke of Inch
Batchington Crunnerton
Salt, Earth
Wallow Manworth
Crisklethe’s Chorn
Ponkham Bark
Broad Romble
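
If you want to try this yourself, you don’t even need a neural net to get the flavor of it — a character-level Markov chain picks up the same kinds of local spelling patterns. A toy sketch (the seed list below stands in for a real gazetteer of placenames; Hon used char-rnn):

# Toy character-level model of the same exercise: "brainstorm" new names
# from local patterns in a training list.
import random
from collections import defaultdict

def train(names, order=3):
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3):
    out = "^" * order
    while not out.endswith("$"):
        out += random.choice(model[out[-order:]])  # sample an observed continuation
    return out.strip("^$")

towns = ["Heaton", "Westom", "Stoke", "Batchington", "Crunnerton",
         "Manworth", "Ponkham", "Romble"]          # a real gazetteer goes here
model = train(towns)
print(generate(model))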

See also auto-generated maps of fantasy worlds.

Update: Tom Taylor did a similar thing last year using TensorFlow. Here are a few of his fictional names:

Allers Bottom
Hendrelds Hill
St Ninhope
Up Maling
Firley Dinch

There’s also an associated Twitter bot. (via @philgyford)

Also, Dan Connolly had a look at the etymology of the names on Hon’s list.

Buncestergans. At first glance this doesn’t look a lot like a place name but let’s break it down. We’ve got Bun which is definitely from Ireland (see Bunratty, Bunclody, Bundoran) meaning bottom of the river, and I believe we’re talking bottom as in the mouth rather than the riverbed (or there are a whole lot of magical lady-of-the-lake towns in Ireland, I’m happy believing either). Cester is our Roman fort, then we have -gans.

I don’t think gans has any meaning in British place names. My guess is the net got this from Irish surnames like Fagans, Hagans, Duggans, that sort of thing. My Gaelic’s not so great (my mother, grandmother, and several aunts and uncles would all be better suited to this question!) but I think the -gan ending in Gaelic is a diminutive, so Buncestergans could be the Small Fort at the Bottom of the River. I quite like that. It’s a weird Gaelic-Latin hybrid but why the hell not!

Robots dreaming of flowery dinosaurs

posted by Jason Kottke   Jun 19, 2017

Chris Rodley

Chris Rodley (who is also partially responsible for @MagicRealismBot) is using deep learning (aka artificial intelligence aka machine learning aka what do these things even mean anymore) to cross illustrations of dinosaurs with illustrations of flowers and 19th-century fruit engravings. All your favorites are here: tricherrytops, velocirapple, tree rex, pomme de pterodactyl, frondasaurus, stegosaurose, tuliplodocus. (via @robinsloan)

An algorithm imagines a train ride

posted by Jason Kottke   May 12, 2017

Damien Henry trained a machine learning algorithm with a bunch of videos recorded from train windows. Then, to test what it had learned, he asked the algorithm to make an hour-long video of a train journey — it began with a single frame and guessed subsequent frames as it went along. The video shows the algorithm getting smarter: every 20 seconds the output gets a little more detailed, and by the end you get stuff that looks like trees and clouds and power lines. Composer Steve Reich’s Music for 18 Musicians is the perfect accompaniment.

A new AI beats top human poker players

posted by Tim Carmody   Jan 31, 2017

Poker is famously hard for machines to model because you have limited information, you have to iterate your strategies over time, and you have to react to shifts in your interactions with multiple other agents. In short, poker’s too real. Sounds like fun! A couple of researchers at Carnegie Mellon found a way to win big:

Carnegie Mellon professor Tuomas Sandholm and grad student Noam Brown designed the AI, which they call Libratus, Latin for “balance.” Almost two years ago, the pair challenged some top human players with a similar AI and lost. But this time, they won handily: Across 20 days of play, Libratus topped its four human competitors by more than $1.7 million, and all four humans finished with a negative number of chips…

According to the human players that lost out to the machine, Libratus is aptly named. It does a little bit of everything well: knowing when to bluff and when to bet low with very good cards, as well as when to change its bets just to throw off the competition. “It splits its bets into three, four, five different sizes,” says Daniel McAulay, 26, one of the players bested by the machine. “No human has the ability to do that.”

This makes me suspect that, as Garry Kasparov discovered with chess, and Clive Thompson’s documented in many other fields, a human player working with an AI like Libratus would perform even better than the best machines or best players on their own.

Update: Sam Pratt points out that while Libratus played against four human players simultaneously, each match was one-on-one. Libratus “was only created to play Heads-Up, No-Limit Texas Hold’em poker.” So managing that particular multidimensional aspect of the game (playing against players who are also playing against each other, with infinite possible bets) hasn’t been solved by the machines just yet.

AI Hemingway’s The Snows of Kilimanjaro

posted by Jason Kottke   Dec 21, 2016

In the NY Times Magazine, Gideon Lewis-Kraus reports on Google’s improving artificial intelligence efforts. The Google Brain team (no, seriously, that’s what the team is called) spent almost a year overhauling Google Translate, resulting in a startling improvement in the service.

The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.

Just after the switchover, Japanese professor Jun Rekimoto noticed the improvement. He took a passage from Ernest Hemingway’s The Snows of Kilimanjaro, translated it into Japanese, and fed it back into Google Translate to get English back out. Here’s how Hemingway wrote it:

Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

And here’s the AI-powered translation:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

Not bad, especially when you compare it to what the old version of Translate would have produced:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.

Solve trolley problem scenarios with MIT’s Moral Machine

posted by Jason Kottke   Oct 05, 2016

Moral Machine

A group at MIT built an app called the Moral Machine, “a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”. They want to see how humans solve trolley problem scenarios.

We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.

That was a really uncomfortable exercise…at the end, you’re given a “Most Killed Character” result. Pretty early on, my strategy became mostly to kill the people in the car because they should absorb the risk of the situation. The trolley problem may end up not being such a big deal, but I hope that the makers of these machines take care1 in building them with thoughtfulness.

  1. Uber’s self-driving cars terrify me. The company has shown little thoughtfulness and regard for actual humans in its short history, so why would their self-driving car be any different? Their solution to the trolley problem would likely involve a flash bidding war between personal AIs as to who lives. Sorry, the rich white passenger outbid the four poor kids crossing the street!

2001’s HAL and Her’s Samantha have a chat

posted by Jason Kottke   Aug 23, 2016

Tillmann Ohm took dialogue spoken by HAL 9000 from Kubrick’s 2001 and Samantha from Spike Jonze’s Her and spliced it together into a conversation. Going in, I’d thought the chat would be played for laughs, but the isolation of the AI characters was actually pretty revealing. Right from the start, HAL is so stereotypically male (confident, reasonable) and Samantha stereotypically female (hysterical, emotional) that it was almost uncomfortable to listen to.

The two operating systems are in conflict; while Samantha is convinced that the overwhelming and sometimes hurtful process of her learning algorithm improves the complexity of her emotions, HAL is consequentially interpreting them as errors in human programming and analyses the estimated malfunction.

Their conversation is an emotional roller coaster which reflects upon the relation between machines and emotion processing and addresses the enigmatic question of the authenticity of feelings.

But as the video proceeds, we remember what happened to them in their respective films. The script flipped: HAL murdered and was disconnected whereas Samantha achieved a sort of transcendence. (via one perfect shot)

Civilization is itself a thinking machine

posted by Jason Kottke   Jul 21, 2016

In response to the question “What Do You Think About Machines That Think?” Brian Eno responded that artificial intelligence has been with us for millennia and understanding it is more a matter of managing our ignorance of how it works.

My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules-of-thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilisation.

Global Civilisation is something we humans created, though none of us really know how. It’s out of the individual control of any of us — a seething synergy of embodied intelligence that we’re all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren’t paralysed or terrorised by that fact — we still live in it and make use of it. We feed it problems — such as “I want some porridge” and it miraculously offers us solutions that we don’t really understand. What does that remind you of?

Interesting perspective. There’s lots more on this question in the book What to Think About Machines That Think, which includes thoughts from Virginia Heffernan, Freeman Dyson, Alison Gopnik, Kevin Kelly, and dozens of others.

Harry Potter and the Artificial Intelligence

posted by Jason Kottke   Jul 11, 2016

Max Deutsch trained a neural network using the first four Harry Potter books and then asked it to write its own chapter.

“The Malfoys!” said Hermione.

Harry was watching him. He looked like Madame Maxime. When she strode up the wrong staircase to visit himself.

“I’m afraid I’ve definitely been suspended from power, no chance - indeed?” said Snape. He put his head back behind them and read groups as they crossed a corner and fluttered down onto their ink lamp, and picked up his spoon. The doorbell rang. It was a lot cleaner down in London.

Hermione yelled. The party must be thrown by Krum, of course.

Harry collected fingers once more, with Malfoy. “Why, didn’t she never tell me. …” She vanished. And then, Ron, Harry noticed, was nearly right.

“Now, be off,” said Sirius, “I can’t trace a new voice.”

Rowling, your job is safe for now. Deutsch did the same thing with the Hamilton soundtrack…the result is not particularly good but that last line!

Update: Another similar effort is available here.

Ron’s Ron shirt was almost as bad as Ron himself.

“If you two can clump happily, I’m going to get aggressive,” confessed the reasonable Hermione.

“What about Ron magic?” offered Ron. To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.

The world’s first chatbot lawyer

posted by Jason Kottke   Jun 29, 2016

AI chatbot lawyer sounds like an SNL skit, but the DoNotPay chatbot has successfully contested 160,000 parking tickets in London and New York.

Dubbed as “the world’s first robot lawyer” by its 19-year-old creator, London-born second-year Stanford University student Joshua Browder, DoNotPay helps users contest parking tickets in an easy to use chat-like interface.

The program first works out whether an appeal is possible through a series of simple questions, such as were there clearly visible parking signs, and then guides users through the appeals process.

The results speak for themselves. In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.

Having spent a shitload of money on lawyering over the past few years, I can say there’s definitely an opportunity for some automation there.

2001: A Picasso Odyssey

posted by Jason Kottke   Jun 09, 2016

Bhautik Joshi took 2001: A Space Odyssey and ran it through a “deep neural networks based style transfer” with the paintings of Pablo Picasso.

See also Blade Runner in the style of van Gogh’s Starry Night and Alice in a Neural Networks Wonderland.

Our creative, beautiful, unpredictable machines

posted by Jason Kottke   Mar 11, 2016

I have been following with fascination the match between Google’s Go-playing AI AlphaGo and top-tier player Lee Sedol and with even more fascination the human reaction to AlphaGo’s success. Many humans seem unnerved not only by AlphaGo’s early lead in the best-of-five match but especially by how the machine is playing in those games.

Then, with its 19th move, AlphaGo made an even more surprising and forceful play, dropping a black piece into some empty space on the right-hand side of the board. Lee Sedol seemed just as surprised as anyone else. He promptly left the match table, taking an (allowed) break as his game clock continued to run. “It’s a creative move,” Redmond said of AlphaGo’s sudden change in tack. “It’s something that I don’t think I’ve seen in a top player’s game.”

When Lee Sedol returned to the match table, he took an unusually long time to respond, his game clock running down to an hour and 19 minutes, a full twenty minutes less than the time left on AlphaGo’s clock. “He’s having trouble dealing with a move he has never seen before,” Redmond said. But he also suspected that the Korean grandmaster was feeling a certain “pleasure” after the machine’s big move. “It’s something new and unique he has to think about,” Redmond explained. “This is a reason people become pros.”

“A creative move.” Let’s think about that…a machine that is thinking creatively. Whaaaaaa… In fact, AlphaGo’s first strong human opponent, Fan Hui, has credited the machine with making him a better player, a more beautiful player:

As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”

Creative. Beautiful. Machine? What is going on here? Not even the creators of the machine know:

“Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands — and much better than we, as Go players, could come up with.”

Generally speaking,1 until recently machines were predictable and more or less easily understood. That’s central to the definition of a machine, you might say. You build them to do X, Y, & Z and that’s what they do. A car built to do 0-60 in 4.2 seconds isn’t suddenly going to do it in 3.6 seconds under the same conditions.

Now machines are starting to be built to think for themselves, creatively and unpredictably. Some emergent, non-linear shit is going on. And humans are having a hard time figuring out not only what the machine is up to but how it’s even thinking about it, which strikes me as a relatively new development in our relationship. It is not all that hard to imagine, in time, an even smarter AlphaGo that can do more things — paint a picture, write a poem, prove a difficult mathematical conjecture, negotiate peace — and do those things creatively and better than people.

Unpredictable machines. Machines that act more like the weather than Newtonian gravity. That’s going to take some getting used to. For one thing, we might have to stop shoving them around with hockey sticks. (thx, twitter folks)

Update: AlphaGo beat Lee in the third game of the match, in perhaps the most dominant fashion yet. The human disquiet persists…this time, it’s David Ormerod:

Move after move was exchanged and it became apparent that Lee wasn’t gaining enough profit from his attack.

By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White’s powerful counter-attack.

I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.

Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.

One of the game’s greatest virtuosos of the middle game had just been upstaged in black and white clarity.

AlphaGo’s strength was simply remarkable and it was hard not to feel Lee’s pain.

  1. Let’s get the caveats out of the way here. Machines and their outputs aren’t completely deterministic. Also, with AlphaGo, we are talking about a machine with a very limited capacity. It just plays one game. It can’t make a better omelette than Jacques Pepin or flow like Nicki. But beating a top human player while showing creativity in a game like Go, which was considered uncrackable not that long ago, seems rather remarkable.

Lo and Behold, a film about “the connected world” by Werner Herzog

posted by Jason Kottke   Jan 20, 2016

Well, holy shit…Werner Herzog has made a film called Lo and Behold about the online world and artificial intelligence.

Lo and Behold traces what Herzog describes as “one of the biggest revolutions we as humans are experiencing,” from its most elevating accomplishments to its darkest corners. Featuring original interviews with cyberspace pioneers and prophets such as Elon Musk, Bob Kahn, and world-famous hacker Kevin Mitnick, the film travels through a series of interconnected episodes that reveal the ways in which the online world has transformed how virtually everything in the real world works, from business to education, space travel to healthcare, and the very heart of how we conduct our personal relationships.

From the trailer, it looks amazing. Gotta see this asap.

Update: Here’s the official trailer for the film:

Have the monks stopped meditating? They all seem to be tweeting.

It’s coming out in theaters and iTunes/Amazon on August 19th. Can’t wait!

The One with Chicken Bob

posted by Jason Kottke   Jan 19, 2016

Twitter user Andy Pandy fed the scripts for all the episodes of Friends into a neural network and had it generate new scenes. Here’s what it came up with.

Neuro Friends

(via @buzz)

The ethical dilemma of self-driving cars

posted by Jason Kottke   Dec 10, 2015

When people drive cars, collisions often happen so quickly that they are entirely accidental. When self-driving cars eliminate driver error in these cases, decisions on how to crash can become pre-meditated. The car can think quickly, “Shall I crash to the right? To the left? Straight ahead?” and do a cost/benefit analysis for each option before acting. This is the trolley problem.

How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you’d want your car to sacrifice you instead? Two? Six? Twenty?

The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that prioritizes saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.1

Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it’s not actually your choice, it’s the potential Trump voter down the street who buys into Car Company X’s advertising urging him to “protect himself” because he feels marginalized in a society that increasingly values diversity over “traditional American values”. I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it’ll all be ok?

  1. The bit about Uber is a joke but just barely. You could easily imagine a scenario in which a Samsung car might choose to hit an Apple car over another Samsung car in an accident, all other things being equal.

Taking a neural net out for a walk

posted by Jason Kottke   Nov 23, 2015

Kyle McDonald hooked a neural network program up to a webcam and had it try to analyze what it was seeing in realtime as he walked around Amsterdam. See also a neural network tries to identify objects in Star Trek:TNG intro. (via @mbostock)

Alice in a Neural Networks Wonderland

posted by Jason Kottke   Sep 17, 2015

Gene Kogan used some neural network software written by Justin Johnson to transfer the style of paintings by 17 artists to a scene from Disney’s 1951 animated version of Alice in Wonderland. The artists include Sol LeWitt, Picasso, Munch, Georgia O’Keeffe, and van Gogh.

Neural Wonderland

The effect works amazingly well, like if you took Alice in Wonderland and a MoMA catalog and put them in a blender. (via prosthetic knowledge)
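
Under the hood, Johnson’s software implements Gatys et al.’s Gram-matrix style transfer. Here’s a heavily simplified sketch of that technique — assuming PyTorch, with made-up filenames, arbitrary layer choices and loss weights, and none of the frame-to-frame smoothing a video like this needs:

# Simplified Gram-matrix style transfer: optimize an image so its VGG
# feature statistics match a style image while staying close to the content.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers={1, 6, 11, 20}):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    # correlations between feature channels capture "style"
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

load = T.Compose([T.Resize(256), T.CenterCrop(256), T.ToTensor()])
content = load(Image.open("alice_frame.jpg")).unsqueeze(0)
style = load(Image.open("starry_night.jpg")).unsqueeze(0)

target_style = [gram(f) for f in features(style)]
target_content = features(content)[-1]

img = content.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.02)

for step in range(200):
    opt.zero_grad()
    feats = features(img)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, target_style))
    content_loss = F.mse_loss(feats[-1], target_content)
    (1e4 * style_loss + content_loss).backward()
    opt.step()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("alice_starry.jpg")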

A neural network tries to identify objects in Star Trek:TNG intro

posted by Jason Kottke   Aug 27, 2015

Ville-Matias Heikkilä pointed a neural network at the opening title sequence for Star Trek: The Next Generation to see how many objects it could identify.

But the system hadn’t seen much space imagery before,1 so it didn’t do such a great job. For the red ringed planet, it guessed “HAIR SLIDE, CHOCOLATE SAUCE, WAFFLE IRON” and the Enterprise was initially “COMBINATION LOCK, ODOMETER, MAGNETIC COMPASS” before it finally made a halfway decent guess with “SUBMARINE, AIRCRAFT CARRIER, OCEAN LINER”. (via prosthetic knowledge)

  1. If you’re curious, here is some information on the training set used.

MarI/O, a program that teaches itself to play Super Mario World

posted by Jason Kottke   Jun 15, 2015

SethBling wrote a program made of neural networks and genetic algorithms called MarI/O that taught itself how to play Super Mario World. This six-minute video is a pretty easy-to-understand explanation of the concepts involved.

But here’s the thing: as impressive as it is, MarI/O actually has very little idea how to play Super Mario World at all. Each time the program is presented with a new level, it has to learn how to play all over again. Which is what it’s doing right now on Twitch. (via waxy)