
kottke.org posts about Ted Chiang

The Neo-Luddite Movement

For the last few weeks, I’ve been listening to the audiobook of Brian Merchant’s history of the Luddite movement, Blood in the Machine: The Origins of the Rebellion Against Big Tech. In it, Merchant argues the Luddites were at their core a labor movement against capitalism and compares them to contemporary movements against big tech and media companies. Merchant writes in the Atlantic:

The first Luddites were artisans and cloth workers in England who, at the onset of the Industrial Revolution, protested the way factory owners used machinery to undercut their status and wages. Contrary to popular belief, they did not dislike technology; most were skilled technicians.

At the time, some entrepreneurs had started to deploy automated machines that unskilled workers — many of them children — could use to churn out cheap, low-quality goods. And while the price of garments fell and the industrial economy boomed, hundreds of thousands of working people fell into poverty. When petitioning Parliament and appealing to the industrialists for minimum wages and basic protections failed, many organized under the banner of a Robin Hood-like figure, Ned Ludd, and took up hammers to smash the industrialists’ machines. They became the Luddites.

He goes on to compare their actions to tech publication writers’ strikes, the SAG-AFTRA & WGA strikes, the Authors Guild lawsuit against AI companies, and a group of masked activists “coning” self-driving cars. All this reminds me of Ted Chiang’s quote about AI:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.


Raising Artificial Intelligences Like Children

Over the weekend, I listened to this podcast conversation between the psychologist & philosopher Alison Gopnik and writer Ted Chiang about using children’s learning as a model for developing AI systems. Around the 23-minute mark, Gopnik observes that care relationships (child care, elder care, etc.) are extremely important to people but are nearly invisible in economics. And then Chiang replies:

One of the ways that conventional economics sort of ignores care is that for every employee that you hire, there was an incredible amount of labor that went into that employee. That’s a person! And how do you make a person? Well, for one thing, you need several hundred thousand hours of effort to make a person. And every employee that any company hires is the product of hundreds of thousands of hours of effort. Which, companies… they don’t have to pay for that!

They are reaping the benefits of an incredible amount of labor. And if you imagine, in some weird kind of theoretical sense, if you had to actually pay for the raising of everyone that you would eventually employ, what would that look like?

It’s an interesting conversation throughout — recommended!

Chiang has written some of my favorite things on AI in recent months/years, including this line that’s become one of my guiding principles in thinking about AI: “I tend to think that most fears about A.I. are best understood as fears about capitalism.”


Imagining an Alternative to AI-Supercharged Capitalism

Expanding on his previous thoughts on the relationship between AI and capitalism — “I tend to think that most fears about A.I. are best understood as fears about capitalism” — Ted Chiang offers a useful metaphor for how to think about AI: as a management-consulting firm like McKinsey.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey — a consulting firm that works with ninety per cent of the Fortune 100 — and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

Good stuff — I especially enjoyed the mini You’re Wrong About on the Luddites — do read the whole thing.


The First Battleground of the Age of AI Is Art

In this final installment of Everything is a Remix, Kirby Ferguson offers his perspective on image generation with AI, how it compares to human creativity, and what its role will be in the future. In watching the part about the anxiety in the creative community about these image generators, I was reminded of what Ted Chiang has said about fears of technology actually being fears of capitalism.

It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world. But it’s like, in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us.

I agree with Ferguson that these AI image generators are, outside the capitalist context, useful and good for helping humans be creative and express themselves. Tools like Midjourney, DALL-E, and Stable Diffusion allow anyone to collaborate with every previous human artist that has ever existed, all at once. Like, just think about how powerful this is: normal people who have ideas but lack technical skills can now create imagery. Is it art? Perhaps not in most cases, but some of it will be. If the goal is to get more people to be able to more easily express and exercise their creativity, these image generators fulfill that in a big way. But that’s really scary — power always is.


Ted Chiang: “ChatGPT Is a Blurry JPEG of the Web”

This is a fantastic piece by writer Ted Chiang about large-language models like ChatGPT. He likens them to lossy compression algorithms:

What I’ve described sounds a lot like ChatGPT, or most any other large-language model. Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

Reframing the technology in that way turns out to be useful in thinking through some of its possibilities and limitations:

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.

Indeed, a useful criterion for gauging a large-language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either.
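Chiang’s photocopy-of-a-photocopy point is easy to see in miniature. Here’s a toy sketch of generational loss — entirely my own illustration, not JPEG itself and not anything from the article. The averaging “compressor” below is just a stand-in for the high-frequency detail that lossy codecs throw away; each round trip through it loses a little more:

```python
# Toy illustration of generational loss from repeated lossy compression.
# The "compressor" averages each sample with its neighbors, discarding
# fine detail -- the same kind of information JPEG rounds away.

def lossy_roundtrip(signal):
    """Compress and decompress: blur each sample with its neighbors."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append((left + signal[i] + right) / 3)
    return out

def detail(signal):
    """Rough measure of remaining detail: total neighbor-to-neighbor change."""
    return sum(abs(a - b) for a, b in zip(signal, signal[1:]))

# A "sharp" signal: alternating highs and lows.
signal = [float(i % 2) for i in range(32)]

for generation in range(5):
    print(f"generation {generation}: detail = {detail(signal):.3f}")
    signal = lossy_roundtrip(signal)
```

Run it and the detail measure shrinks every generation — the digital equivalent of the photocopy of a photocopy, and the reason (per Chiang’s prediction) you wouldn’t want a model training on its own lossy output.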

Chiang has previously spoken about how “most fears about A.I. are best understood as fears about capitalism”.

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.

Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now.

See also Why Computers Won’t Make Themselves Smarter. (via @irwin)


Ted Chiang: Fears of Technology Are Fears of Capitalism

Writer Ted Chiang (author of the fantastic Exhalation) was recently a guest on the Ezra Klein Show. The conversation ranged widely — I enjoyed his thoughts on superheroes — but his comments on capitalism and technology seem particularly relevant right now. From the transcript:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.

Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. Most of the things that we worry about under the mode of capitalism that the U.S. practices, that is going to put people out of work, that is going to make people’s lives harder, because corporations will see it as a way to increase their profits and reduce their costs. It’s not intrinsic to that technology. It’s not that technology fundamentally is about putting people out of work.

It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world. But it’s like, in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense?

And so, I feel like that is kind of the unexamined assumption in a lot of discussions about the inevitability of technological change and technologically-induced unemployment. Those are fundamentally about capitalism and the fact that we are sort of unable to question capitalism. We take it as an assumption that it will always exist and that we will never escape it. And that’s sort of the background radiation that we are all having to live with. But yeah, I’d like us to be able to separate an evaluation of the merits and drawbacks of technology from the framework of capitalism.

Echoing some of his other thoughts during the podcast, Chiang also wrote a piece for the New Yorker the other day about how the singularity will probably never come.


Anxiety Is the Dizziness of Freedom, A Short Story by Ted Chiang

From Ted Chiang’s collection of stories, Exhalation, comes Anxiety Is the Dizziness of Freedom, published online for the first time. In the story, devices called “prisms” allow people to talk to their alternate reality selves, but only for a limited time.

Every prism — the name was a near acronym of the original designation, “Plaga interworld signaling mechanism” — had two LEDs, one red and one blue. When a prism was activated, a quantum measurement was performed inside the device, with two possible outcomes of equal probability: one outcome was indicated by the red LED lighting up, while the other was indicated by the blue one. From that moment forward, the prism allowed information transfer between two branches of the universal wave function. In colloquial terms, the prism created two newly divergent timelines, one in which the red LED lit up and one in which the blue one did, and it allowed communication between the two.
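The mechanics are precise enough to toy with. Here’s a playful sketch of a prism — purely my own invention for illustration; the class, the message “budget” standing in for the story’s limited communication window, and all the names are mine, not Chiang’s:

```python
import random

class Prism:
    """Toy model: one 50/50 quantum event at activation, then a limited
    budget of messages between the two newly divergent branches."""

    def __init__(self, budget=5):
        self.led = None        # set at activation: "red" or "blue"
        self.budget = budget   # messages left before the link closes

    def activate(self):
        # The quantum measurement: two outcomes of equal probability.
        self.led = random.choice(["red", "blue"])
        return self.led

    def send(self, message):
        if self.led is None:
            raise RuntimeError("prism not activated")
        if self.budget <= 0:
            return None        # the window between branches has closed
        self.budget -= 1
        other = "blue" if self.led == "red" else "red"
        return f"[to the {other} branch] {message}"

prism = Prism(budget=2)
prism.activate()
print(prism.send("hello, other me"))
print(prism.send("still there?"))
print(prism.send("anyone?"))  # budget exhausted: None
```

A trivial model, obviously, but it captures the story’s central constraint: the divergence is permanent while the conversation is not.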

I read Exhalation several months ago; every story was fantastic, but this was one of my favorites.


The Implausible Covid-19 Movie

A few weeks ago, the Washington Post interviewed Scott Z. Burns, who wrote the screenplay for Contagion, Steven Soderbergh’s film about a bat-borne illness that starts a global pandemic. What’s most striking about the interview is how outlandish Burns finds certain aspects of the Covid-19 pandemic, so ridiculous in fact that people would find them implausible if this were a fictional story.

I would have never imagined that the movie needed a “bad guy” beyond the virus itself. It seems pretty basic that the plot should be humans united against the virus. If you were writing it now, you would have to take into account the blunders of a dishonest president and the political party that supports him. But any good studio executive would have probably told us that such a character was unbelievable and made the script more of a dark comedy than a thriller.

On Twitter, director Sarah Polley recently had a similar take.

This is the worst movie I have ever seen.

Unsurprising that this movie doesn’t work — the screenplay was a dog’s breakfast.

So much heavy handed foreshadowing. The apocalyptic footage from Wuhan, the super villain American president, the whistleblower dying, the Russia/China border closed while people still claimed it was just a flu, the warnings unheeded. Insulting to the audience’s intelligence.

And then — that most annoying of horror/disaster movie tropes — the hapless idiots walking into disaster after disaster, all of which the audience can see coming from a mile away.

The over the top details of world leaders and their wives falling ill, the far fetched idea that industrialized countries wouldn’t have proper protective gear for front line workers and ventilators. Pleeeeaaase. This movie needed a script doctor.

It’s interesting that there are certain boundaries in fiction related to the audience’s suspension of disbelief that are routinely ignored by reality. I’m also reminded of how Margaret Atwood approached The Handmaid’s Tale and The Testaments, using only elements that have historical precedent:

The television series has respected one of the axioms of the novel: no event is allowed into it that does not have a precedent in human history.

And yet some critics consider the events from the novels and TV show to be too much, over-the-top.

Update: Ted Chiang from a recent interview:

While there has been plenty of fiction written about pandemics, I think the biggest difference between those scenarios and our reality is how poorly our government has handled it. If your goal is to dramatize the threat posed by an unknown virus, there’s no advantage in depicting the officials responding as incompetent, because that minimizes the threat; it leads the reader to conclude that the virus wouldn’t be dangerous if competent people were on the job. A pandemic story like that would be similar to what’s known as an “idiot plot,” a plot that would be resolved very quickly if your protagonist weren’t an idiot. What we’re living through is only partly a disaster novel; it’s also — and perhaps mostly — a grotesque political satire.

I am currently blazing through Exhalation (Kindle), Chiang’s collection of science & technology fables. (via @jasondh)


A world that can’t learn from itself

From Umair Haque, a provocative question: Why Don’t Americans Understand How Poor Their Lives Are?

In London, Paris, Berlin, I hop on the train, head to the cafe — it’s the afternoon, and nobody’s gotten to work until 9am, and even then, maybe not until 10 — order a carefully made coffee and a newly baked croissant, do some writing, pick up some fresh groceries, maybe a meal or two, head home — now it’s 6 or 7, and everyone else has already gone home around 5 — and watch something interesting, maybe a documentary by an academic, the BBC’s Blue Planet, or a Swedish crime-noir. I think back on my day and remember the people smiling and laughing at the pubs and cafes.

In New York, Washington, Philadelphia, I do the same thing, but it is not the same experience at all. I take broken down public transport to the cafe — everybody’s been at work since 6 or 7 or 8, so they already look half-dead — order coffee and a croissant, both of which are fairly tasteless, do some writing, pick up some mass-produced groceries, full of toxins and colourings and GMOs, even if they are labelled “organic” and “fresh”, all forbidden in Europe, head home — people are still at work, though it’s 7 or 8 — and watch something bland and forgettable, reality porn, decline porn, police-state TV. I think back on my day and remember how I didn’t see a single genuine smile — only hard, grim faces, set against despair, like imagine living in Soviet Leningrad.

Haque places the blame on our inability as a society to look outward and learn from ourselves, from history, and from the rest of the world.

So just as Americans don’t get how bad their lives really are, comparatively speaking — which is to say how good they could be — so too Europeans don’t fully understand how good their lives are — and how bad, if they continue to follow in America’s footsteps, austerity by austerity, they could be. Both appear to be blind to one another’s mistakes and successes.

Reading it, I noticed a similarity to Ted Chiang’s essay on the unchecked capitalism of Silicon Valley (which I linked to this morning). Chiang notes that corporations lack insight:

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

Haque is saying that our societies lack insight as well…or at least the will to incorporate that insight into practice.


Ted Chiang on the similarities between “civilization-destroying AIs and Silicon Valley tech companies”

Ted Chiang is most widely known for writing Story of Your Life, an award-winning short story that became the basis for Arrival. In this essay for Buzzfeed, Chiang argues that we should worry less about machines becoming superintelligent and more about the machines we’ve already built that lack remorse & insight and have the capability to destroy the world: “we just call them corporations”.

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

As you might expect from Chiang, this piece is full of cracking writing. I had to stop myself from just excerpting the whole thing here, ultimately deciding that would go against the spirit of the whole thing. So just this one bit:

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse.

Ok, just one more:

The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

You should really just read the whole thing. It’s not long and Chiang’s point is quietly but powerfully persuasive.


Arrival: future communication, past perspective

In his newest video, Evan Puschak talks about Arrival, calling it “a response to bad movies”. Arrival was perhaps my favorite film of 2016, and I agree with him about how well-made this film is. There’s a top-to-bottom attention to craft on display, from how it looks to how it was cast (Amy Adams was the absolute perfect choice for the lead) to the integration of the theme with story to how expertly it was adapted from Ted Chiang’s Story of Your Life. The whole thing’s tight as a drum. If you happened to miss it, don’t watch this video (it gives the whole thing away) and go watch it instead…it’s available to rent/buy on Amazon.

Looking back through the archives, I’m realizing I never did a post about Arrival even though I collected some links about it. So, linkdump time!

Wired wrote about how the movie’s alien alphabet was developed.

Stephen Wolfram wrote about his involvement with the science of the film — his son Christopher wrote Mathematica code for some of the on-screen visuals. [1]

Science vs Cinema explored how well the movie represented actual science.

Screenwriter Eric Heisserer wrote about how he adapted Chiang’s short story for the screen.

Jordan Brower wrote a perceptive review/analysis that includes links to several other resources about the film.

Update: The director of photography for Arrival was Bradford Young, who shot Selma and is currently working on the Han Solo movie for Disney. Young did an interview with No Film School just before Arrival came out.

I’m from the South, so quilts are a big part of telling our story. Quilting is ancient, but in the South it’s a very particular translation of idea, time, and space. In my own practice as an image maker, I slowly began to be less concerned with precision and more concerned with feeling.

Quiltmakers are rigorous, but they’re a mixed media format. I think filmmaking should be a mixed media format. I’m just really honoring what quiltmakers do, which is tell a story by using varying texture within a specific framework to communicate an idea. For me, with digital technology, lenses do that the best. The chips don’t do it now — digital film stock is basically all captured the same, but the lenses are how you give the image its textural quality.

(thx, raafi)

Update: James Gleick, author of Time Travel, wrote about Arrival and Story of Your Life for The New York Review of Books.

What if the future is as real as the past? Physicists have been suggesting as much since Einstein. It’s all just the space-time continuum. “So in the future, the sister of the past,” thinks young Stephen Dedalus in Ulysses, “I may see myself as I sit here now but by reflection from that which then I shall be.” Twisty! What if you received knowledge of your own tragic future — as a gift, or perhaps a curse? What if your all-too-vivid sensation of free will is merely an illusion? These are the roads down which Chiang’s story leads us. When I first read it, I meant to discuss it in the book I was writing about time travel, but I could never manage that. It’s not a time-travel story in any literal sense. It’s a remarkable work of imagination, original and cerebral, and, I would have thought, unfilmable. I was wrong.

(via @fquist)

  [1] Christopher was 15 or 16 when he worked on the film. His LinkedIn profile states that he’s been a programmer for Wolfram (the company) since he was 13 and that in addition to his work on Arrival, he “implemented the primary cryptography functions in Mathematica”.