kottke.org. home of fine hypertext products since 1998.


kottke.org posts about computing

Brian Eno’s Oblique Strategies on HyperCard

After posting the video on the history of HyperCard the other day, I went down a bit of a HyperCard rabbit hole on the Internet Archive. There are a ton of HyperCard programs, manual & packaging scans, and other resources available on IA; among them:

I also found this version of Brian Eno’s Oblique Strategies:

You can see why people call HyperCard “the web before the web”…it’s all right there.

Also, don’t miss this comment from Keith Dawson (who you may remember from the pioneering tech newsletter Tasty Bits From the Technology Front) on how HyperCard was almost called Wildcard.

Soon after, I took a call from Apple. Would we be willing to give up the name Wildcard, or at least license it for their use on a new product? We discussed it. No.

Wild.


HyperCard Changed Everything

This video traces the history of Apple’s HyperCard from Vannevar Bush’s idea of the Memex to the Mother of All Demos to the Xerox PARC Alto to Bill Atkinson, the inventor of HyperCard, who said:

HyperCard is a software erector set. It lets people put things together without having to know how to solder.

There’s a ton of information about HyperCard at hypercard.org, including this HyperCard simulator that runs in your browser.


Playing Boards of Canada on a DEC PDP-1 from 1959

This is so so cool and an arrow-splitting bullseye in the middle of my wheelhouse: a short Boards of Canada tune played on a DEC PDP-1, one of the most significant machines in the history of computing.

Here’s a description of what’s going on, courtesy of @dryad.technology on Bluesky:

The PDP-1 doesn’t have sound, but it does have front-panel light bulbs for debugging, so they rewired the light bulb lines into speakers to create 4 square wave channels.
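The trick of driving speakers from debug lines means each "channel" can only be on or off — a square wave. A hypothetical Python sketch of mixing four such channels (purely illustrative; the sample rate and frequencies are my assumptions, not the PDP-1's):

```python
import math

SAMPLE_RATE = 44100  # samples per second (an assumption for this sketch)

def square_wave(freq_hz, t):
    """A square wave is just on/off: +1 or -1 depending on phase."""
    return 1.0 if math.sin(2 * math.pi * freq_hz * t) >= 0 else -1.0

def mix_channels(freqs, duration_s):
    """Sum several square-wave channels into one sample stream."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Average the channels so the mix stays in [-1, 1].
        samples.append(sum(square_wave(f, t) for f in freqs) / len(freqs))
    return samples

# Four channels, as in the PDP-1 hack: here, a simple chord.
chord = mix_channels([261.63, 329.63, 392.00, 523.25], 0.01)
```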

You can read more about The PDP-1: The Machine That Started Hacker Culture:

The bottom line is that the PDP-1 was really the first computer that encouraged users to sit down and play. While IBM machines did the boring but necessary work of business behind closed doors and tended by squads of servants, DEC’s machines found their way into labs and odd corners of institutions where curious folk sat in front of their terminals, fingers poised over keyboards while a simple but powerful phrase was uttered: “I wonder what happens if…” The DEC machines were the first computers that allowed the question, which is really at the heart of the hacker culture, to be answered in real time.

And every day is a good day to listen to Boards of Canada. Oh! And if you’re anywhere near Mountain View, the Computer History Museum has regular demos of the PDP-1 and will play the song if requested!

If anyone would like to see this live, we demo the PDP-1 at the Computer History Museum in Mountain View, CA on the first and third Saturdays of the month, 2:30 and 3:15p. Just ask, and we’ll be happy to play it!

(via @k4r1m.bsky.social)


I Made a Floppy Disk from Scratch

Polymatt decided he was going to make a 3.5” floppy disk from scratch — and actually did.

I’m not sure how many of you have actually cracked one of these things open and taken a look inside, but it’s actually a little bit more complex than I expected. Recreating a shell isn’t going to be the tough part. It’s actually this: recreating the media itself with some PET film and a bunch of chemicals. These disks are incredibly thin, and the magnetic film itself is measured in microns. It’s going to be quite the feat in order to figure out how to apply something that thin.

Fantastic. If you enjoyed the Building a Watch From Scratch in a Brooklyn Basement video, you will probably like this one:

Wanting to get the most out of my new machine, I wanted to look into purchasing what’s called a drag knife. It’s a tool that would go in where the bit is that would allow you to create very precise cuts on things like paper or film. And after realizing I’d have to pay over $150 for one of these things, I thought, maybe I could make one. So that’s what I did. For me, one of the most satisfying things is using a machine to make more tools or features for that machine.

I’m not saying I want to buy myself a CNC machine, but I’m not not saying it either. (via @ernie.tedium.co)


Nine Rules for Evaluating New Technology

In 1987, Wendell Berry wrote an essay called Why I Am Not Going to Buy a Computer. In it, he outlined his standards for adopting new technology in his work.

  1. The new tool should be cheaper than the one it replaces.
  2. It should be at least as small in scale as the one it replaces.
  3. It should do work that is clearly and demonstrably better than the one it replaces.
  4. It should use less energy than the one it replaces.
  5. If possible, it should use some form of solar energy, such as that of the body.
  6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.
  7. It should be purchasable and repairable as near to home as possible.
  8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.
  9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.

The whole essay is worth a read, especially now as contemporary society is struggling to evaluate and find the proper balance for technologies like social media, smartphones, and LLMs. (via the honest broker)


16-bit Intel 8088 Chip by Charles Bukowski

Today I learned that Charles Bukowski, “laureate of American lowlife”, wrote about the incompatibilities of early computing platforms in a poem called 16-bit Intel 8088 Chip:

16-bit Intel 8088 chip

with an Apple Macintosh
you can’t run Radio Shack programs
in its disc drive.
nor can a Commodore 64
drive read a file
you have created on an
IBM Personal Computer.
both Kaypro and Osborne computers use
the CP/M operating system
but can’t read each other’s
handwriting
for they format (write
on) discs in different
ways.
the Tandy 2000 runs MS-DOS but
can’t use most programs produced for
the IBM Personal Computer
unless certain
bits and bytes are
altered
but the wind still blows over
Savannah
and in the Spring
the turkey buzzard struts and
flounces before his
hens.

Lovely. And accurate. And somehow even maybe profound? (via sing, memory)


Want to Wear a Tiny Apollo Guidance Computer on Your Wrist?

two photographs of people wearing a wristwatch that resembles an Apollo Guidance Computer

A British company is selling a wristwatch that’s a shrunk-down replica of the Apollo Guidance Computer interface that the Apollo astronauts used to maneuver their spacecraft to the Moon and back. From Gizmodo:

The original AGCs were used by astronauts for guidance and navigation, which you cannot do with the watch — and no offense, but you probably don’t have a spacecraft anyway. But it does function in its own way. The watch has a built-in GPS, a digital display, and a working keyboard. It’s also programmable, built atop an open-source framework that is compatible with a number of coding environments including Arduino and Python. So if you have some features you’d like to run, it’s open to input.

You can pre-order the DSKY Moonwatch today. (via moss & fog)


A Navajo Weaving of an Intel Pentium Processor

a Navajo weaving of a Pentium chip next to a microscopic image of the actual chip

In 1994, a Navajo/Diné weaver named Marilou Schultz made a weaving of the microscopic pattern of an Intel Pentium processor. (In the image above, the weaving is on the left and the chip is on the right.)

The Pentium die photo below shows the patterns and structures on the surface of the fingernail-sized silicon die, over three million tiny transistors. The weaving is a remarkably accurate representation of the die, reproducing the processor’s complex designs. However, I noticed that the weaving was a mirror image of the physical Pentium die; I had to flip the rug image below to make them match. I asked Ms. Schultz if this was an artistic decision and she explained that she wove the rug to match the photograph. There is no specific front or back to a Navajo weaving because the design is similar on both sides, so the gallery picked an arbitrary side to display. Unfortunately, they picked the wrong side, resulting in a backward die image.

Schultz is working on a weaving of another chip, the Fairchild 9040, which was “built by Navajo workers at a plant on Navajo land”.

In December 1972, National Geographic highlighted the Shiprock plant as “weaving for the Space Age”, stating that the Fairchild plant was the tribe’s most successful economic project with Shiprock booming due to the 4.5-million-dollar annual payroll. The article states: “Though the plant runs happily today, it was at first a battleground of warring cultures.” A new manager, Paul Driscoll, realized that strict “white man’s rules” were counterproductive. For instance, many employees couldn’t phone in if they would be absent, as they didn’t have telephones. Another issue was the language barrier since many workers spoke only Navajo, not English. So when technical words didn’t exist in Navajo, substitutes were found: “aluminum” became “shiny metal”. Driscoll also realized that Fairchild needed to adapt to traditional nine-day religious ceremonies. Soon the monthly turnover rate dropped from 12% to under 1%, better than Fairchild’s other plants.

The whole piece is really interesting and demonstrates the deep rabbit hole awaiting the curious art viewer. (via waxy)


“I Created Clippy”

Illustrator Kevan Atteberry created the Clippy character that was introduced in Microsoft Office 97. There was a ton of backlash when the character was introduced, but as time has passed, many people have begun to think fondly of him.

He’s a guy that just wants to help, and he’s a little bit too helpful sometimes. And there’s something fun and vulnerable about that.


How NASA Writes Space-Proof Code

When you write some code and put it on a spacecraft headed into the far reaches of space, you need it to work, no matter what. Mistakes can mean loss of mission or even loss of life. In 2006, Gerard Holzmann of the NASA/JPL Laboratory for Reliable Software wrote a paper called The Power of 10: Rules for Developing Safety-Critical Code. The rules focus on testability, readability, and predictability:

  1. Avoid complex flow constructs, such as goto and recursion.
  2. All loops must have fixed bounds. This prevents runaway code.
  3. Avoid heap memory allocation.
  4. Restrict functions to a single printed page.
  5. Use a minimum of two runtime assertions per function.
  6. Restrict the scope of data to the smallest possible.
  7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
  8. Use the preprocessor sparingly.
  9. Limit pointer use to a single dereference, and do not use function pointers.
  10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.

All this might seem a little inside baseball if you’re not a software developer (I caught only about 75% of it — the video embedded above helped a lot), but the goal of the Power of 10 rules is to ensure that developers are working in such a way that their code does the same thing every time, can be tested completely, and is therefore more reliable.
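A few of the rules translate directly into code. Here's a sketch of rules 2, 5, and 7 (fixed loop bounds, runtime assertions, checked return values) — the originals target C, so take this Python-flavored version as illustration only:

```python
MAX_ITEMS = 64  # rule 2: every loop gets a fixed upper bound

def find_index(items, target):
    """Return the index of target in items, or None if absent."""
    assert items is not None          # rule 5: at least two
    assert len(items) <= MAX_ITEMS    #         assertions per function
    for i in range(min(len(items), MAX_ITEMS)):  # rule 2: bounded loop
        if items[i] == target:
            return i
    return None

# Rule 7: always check the return value instead of ignoring it.
idx = find_index([3, 1, 4, 1, 5], 4)
if idx is None:
    raise ValueError("target not found")
```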

Even here on Earth, perhaps more of our software should work this way. In 2011, NASA applied these rules in their analysis of unintended acceleration of Toyota vehicles and found 243 violations of 9 out of the 10 rules. Are the self-driving features found in today’s cars written with these rules in mind or can recursive, untestable code run off into infinities while it’s piloting people down the freeway at 70mph?

And what about AI? Anil Dash recently argued that today’s AI is unreasonable:

Amongst engineers, coders, technical architects, and product designers, one of the most important traits that a system can have is that one can reason about that system in a consistent and predictable way. Even “garbage in, garbage out” is an articulation of this principle — a system should be predictable enough in its operation that we can then rely on it when building other systems upon it.

This core concept of a system being reason-able is pervasive in the intellectual architecture of true technologies. Postel’s Law (“Be liberal in what you accept, and conservative in what you send.”) depends on reasonable-ness. The famous IETF keywords list, which offers a specific technical definition for terms like “MUST”, “MUST NOT”, “SHOULD”, and “SHOULD NOT”, assumes that a system will behave in a reasonable and predictable way, and the entire internet runs on specifications that sit on top of that assumption.

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging.

Into that world, let’s introduce bullshit. Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design.

I bet NASA will be very slow and careful in deciding to run AI systems on spacecraft — after all, they know how 2001: A Space Odyssey ends just as well as the rest of us do.


Early Computer Art in the 50s and 60s

a wavy black and white pattern generated by a computer

an intricate and colorful looping pattern

a computer drawing of a bunch of colorful squares stacked on top of each other

Artist Amy Goodchild recently published an engaging article about the earliest computer art from the 50s and 60s.

My original vision for this article was to cover the development of computer art from the 50’s to the 90’s, but it turns out there’s an abundance of things without even getting half way through that era. So in this article we’ll look at how Lovelace’s ideas for creativity with a computer first came to life in the 50’s and 60’s, and I’ll cover later decades in future articles.

I stray from computer art into electronic, kinetic and mechanical art because the lines are blurred, it contributes to the historical context, and also because there is some cool stuff to look at.

Cool stuff indeed — I’ve included some of my favorite pieces that Goodchild highlighted above. (via waxy)


The Lisa Personal Computer: Apple’s Influential Flop

The Apple Lisa was the more expensive and less popular precursor to the Macintosh; a recent piece at the Computer History Museum called Lisa “Apple’s most influential failure”.

Apple’s Macintosh line of computers today, known for bringing mouse-driven graphical user interfaces (GUIs) to the masses and transforming the way we use our computers, owes its existence to its immediate predecessor at Apple, the Lisa. Without the Lisa, there would have been no Macintosh — at least in the form we have it today — and perhaps there would have been no Microsoft Windows either.

The video above from Adi Robertson at The Verge is a good introduction to the Lisa and what made it so simultaneously groundbreaking and unpopular. From a companion article:

To look at the Lisa now is to see a system still figuring out the limits of its metaphor. One of its unique quirks, for instance, is a disregard for the logic of applications. You don’t open an app to start writing or composing a spreadsheet; you look at a set of pads with different types of documents and tear off a sheet of paper.

But the office metaphor had more concrete technical limits, too. One of the Lisa’s core principles was that it should let users multitask the way an assistant might, allowing for constant distractions as people moved between windows. It was a sophisticated idea that’s taken for granted on modern machines, but at the time, it pushed Apple’s engineering limits - and pushed the Lisa’s price dramatically upward.

And from 1983, a demo video from Apple on how the Lisa could be used in a business setting:

And a more characteristically Apple ad for the Lisa featuring a pre-stardom Kevin Costner:


Papercraft Models of Vintage Computers

a papercraft model of an original Apple Macintosh

a papercraft model of an IBM 5150 computer

a papercraft model of an Amiga 500 computer

Rocky Bergen makes papercraft models of vintage computers like the original Macintosh, Commodore 64, the IBM 5150, and TRS-80. The collection also includes a few gaming consoles and a boombox. And here’s the thing — you can download the patterns for each model for free and make your own at home. Neat!


A Demo of Pockit, a Tiny, Powerful, Modular Computer

Admission time: it’s been a long time since I considered myself any sort of gadget nerd, but I have to tell you that I watched much of this demo of Pockit with my jaw on the floor and my hand on my credit card. 12-year-old Jason would have run through a wall to be able to play with something like this. It does web browsing, streaming video, AI object detection, home automation, and just anything else you can think of. Reminded me of some combination of littleBits, Arduino, and Playdate. What a fun little device! (via craig mod)


Searching for Susy Thunder


A really entertaining and interesting piece by Claire Evans about Susan Thunder (aka Susy Thunder, aka Susan Headley), a pioneering phone phreaker and computer hacker who ran with the likes of Kevin Mitnick and then just quietly disappeared.

She was known, back then, as Susan Thunder. For someone in the business of deception, she stood out: she was unusually tall, wide-hipped, with a mane of light blonde hair and a wardrobe of jackets embroidered with band logos, spoils from an adolescence spent as an infamous rock groupie. Her backstage conquests had given her a taste for quaaludes and pharmaceutical-grade cocaine; they’d also given her the ability to sneak in anywhere.

Susan found her way into the hacker underground through the phone network. In the late 1970s, Los Angeles was a hotbed of telephone culture: you could dial-a-joke, dial-a-horoscope, even dial-a-prayer. Susan spent most of her days hanging around on 24-hour conference lines, socializing with obsessives with code names like Dan Dual Phase and Regina Watts Towers. Some called themselves phone phreakers and studied the Bell network inside out; like Susan’s groupie friends, they knew how to find all the back doors.

When the phone system went electric, the LA phreakers studied its interlinked networks with equal interest, meeting occasionally at a Shakey’s Pizza parlor in Hollywood to share what they’d learned: ways to skim free long-distance calls, void bills, and spy on one another. Eventually, some of them began to think of themselves as computer phreakers, and then hackers, as they graduated from the tables at Shakey’s to dedicated bulletin board systems, or BBSes.

Susan followed suit. Her specialty was social engineering. She was a master at manipulating people, and she wasn’t above using seduction to gain access to unauthorized information. Over the phone, she could convince anyone of anything. Her voice honey-sweet, she’d pose as a telephone operator, a clerk, or an overworked secretary: I’m sorry, my boss needs to change his password, can you help me out?

Via Evans’ Twitter account, some further reading and viewing on Susy Thunder and 80s hacking/phreaking: Trashing the Phone Company with Suzy Thunder (her 1982 interview on 20/20), audio of Thunder’s DEF CON 3 speech, Exploding the Phone, The Prototype for Clubhouse Is 40 Years Old, and It Was Built by Phone Hackers, and Katie Hafner and John Markoff’s book Cyberpunk: Outlaws and Hackers on the Computer Frontier, Revised.


A Brief History of the Pixel


Computer graphics legend Alvy Ray Smith (Pixar, Lucasfilm, Microsoft) has written a new book called A Biography of the Pixel (ebook). In this adapted excerpt, Smith traces the origins of the pixel — which he calls “a repackaging of infinity” — from Joseph Fourier to Vladimir Kotelnikov to the first computers to Toy Story.

Taking pictures with a cellphone is perhaps the most pervasive digital light activity in the world today, contributing to the vast space of digital pictures. Picture-taking is a straightforward 2D sampling of the real world. The pixels are stored in picture files, and the pictures represented by them are displayed with various technologies on many different devices.

But displays don’t know where the pixels come from. The sampling theorem doesn’t care whether they actually sample the real world. So making pixels is the other primary source of pictures today, and we use computers for the job. We can make pixels that seem to sample unreal worlds, eg, the imaginary world of a Pixar movie, if they play by the same rules as pixels taken from the real world.

The taking vs making — or shooting vs computing - distinction separates digital light into two realms known generically as image processing and computer graphics. This is the classical distinction between analysis and synthesis. The pixel is key to both, and one theory suffices to unify the entire field.

Computation is another key to both realms. The number of pixels involved in any picture is immense - typically, it takes millions of pixels to make just one picture. An unaided human mind simply couldn’t keep track of even the simplest pixel computations, whether the picture was taken or made. Consider just the easiest part of the sampling theorem’s ‘spread and add’ operation - the addition. Can you add a million numbers? How about ‘instantaneously’? We have to use computers.
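Smith's "spread and add" is reconstruction by summing kernels: spread a small kernel around every sample position, then add the overlapping contributions. A toy 1D version in Python (a sketch only — a triangle kernel stands in for the ideal sinc):

```python
def triangle(x):
    """Triangle (linear-interpolation) kernel: a cheap stand-in for sinc."""
    return max(0.0, 1.0 - abs(x))

def reconstruct(samples, x):
    """Spread a kernel around every sample position and add them up."""
    return sum(s * triangle(x - i) for i, s in enumerate(samples))

# Halfway between samples 1 and 2, the triangle kernel gives the average.
value = reconstruct([0.0, 1.0, 3.0, 2.0], 1.5)  # (1.0 + 3.0) / 2 = 2.0
```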


How Do Algorithms Become Biased?

In the latest episode of the Vox series Glad You Asked, host Joss Fong looks at how racial and other kinds of bias are introduced into massive computer systems and algorithms, particularly those that work through machine learning, that we use every day.

Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?


Lava Lamps Help Keep The Internet Secure??

Web performance and security company Cloudflare uses a wall of lava lamps to generate random numbers to help keep the internet secure. Random numbers generated by computers are often not exactly random, so what Cloudflare does is take photos of the lamps’ activities and uses the uncertainty of the lava blooping up and down to generate truly random numbers. Here’s a look at how the process works:

At Cloudflare, we have thousands of computers in data centers all around the world, and each one of these computers needs cryptographic randomness. Historically, they got that randomness using the default mechanism made available by the operating system that we run on them, Linux.

But being good cryptographers, we’re always trying to hedge our bets. We wanted a system to ensure that even if the default mechanism for acquiring randomness was flawed, we’d still be secure. That’s how we came up with LavaRand.

LavaRand is a system that uses lava lamps as a secondary source of randomness for our production servers. A wall of lava lamps in the lobby of our San Francisco office provides an unpredictable input to a camera aimed at the wall. A video feed from the camera is fed into a CSPRNG [cryptographically-secure pseudorandom number generator], and that CSPRNG provides a stream of random values that can be used as an extra source of randomness by our production servers. Since the flow of the “lava” in a lava lamp is very unpredictable, “measuring” the lamps by taking footage of them is a good way to obtain unpredictable randomness. Computers store images as very large numbers, so we can use them as the input to a CSPRNG just like any other number.
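The pipeline — image bytes in, seeded generator out — can be sketched in a few lines of Python. This is a simplification of Cloudflare's actual system: hashing stands in for the entropy-conditioning step, and `random.Random` stands in for a real CSPRNG:

```python
import hashlib
import random

def entropy_from_image(image_bytes: bytes) -> int:
    """Hash a camera frame down to a 256-bit integer seed."""
    digest = hashlib.sha256(image_bytes).digest()
    return int.from_bytes(digest, "big")

# Pretend this is a frame from the camera pointed at the lamp wall.
frame = b"unpredictable lava blob pixels..."
seed = entropy_from_image(frame)

# Mix the seed into a generator. A production system would feed the
# entropy into a real CSPRNG (e.g. the OS entropy pool), not random.Random.
rng = random.Random(seed)
key_material = [rng.getrandbits(8) for _ in range(16)]
```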

(via open culture)


Google Announces They Have Achieved “Quantum Supremacy”

Today, Google announced the results of their quantum supremacy experiment in a blog post and Nature article. First, a quick note on what quantum supremacy is: the idea that a quantum computer can quickly solve problems that classical computers either cannot solve or would take decades or centuries to solve. Google claims they have achieved this supremacy using a 54-qubit quantum computer:

Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.

You may find it helpful to watch Google’s 5-minute explanation of quantum computing and quantum supremacy (see also Nature’s explainer video):

IBM has pushed back on Google’s claim, arguing that their classical supercomputer can solve the same problem in far less than 10,000 years.

We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.

One of the fears of quantum supremacy being achieved is that quantum computing could be used to easily crack the encryption currently used anywhere you use a password or to keep communications private, although it seems like we still have some time before this happens.

“The problem their machine solves with astounding speed has been very carefully chosen just for the purpose of demonstrating the quantum computer’s superiority,” Preskill says. It’s unclear how long it will take quantum computers to become commercially useful; breaking encryption — a theorized use for the technology — remains a distant hope. “That’s still many years out,” says Jonathan Dowling, a professor at Louisiana State University.


The Most Important Pieces of Code in the History of Computing

Bitcoin Code

Slate recently asked a bunch of developers, journalists, computer scientists, and historians what they thought the most influential and consequential pieces of computer code were. They came up with a list of the 36 world-changing pieces of code, including the code responsible for the 1202 alarm thrown by the Apollo Guidance Computer during the first Moon landing, the HTML hyperlink, PageRank, the guidance system for the Roomba, and Bitcoin (above).

Here’s the entry for the three lines of code that helps cellular networks schedule and route calls efficiently and equitably:

At any given moment in a given area, there are often many more cellphones than there are base station towers. Unmediated, all of these transmissions would interfere with one another and prevent information from being received reliably. So the towers have a prioritization problem to solve: making sure all users can complete their calls, while taking into account the fact that users in noisier places need to be given more resources to receive the same quality of service. The solution? A compromise between the needs of individual users and the overall performance of the entire network. Proportional fair scheduling ensures all users have at least a minimal level of service while maximizing total network throughput. This is done by giving lower priority to users that are anticipated to require more resources. Just three lines of code that make all 3G and 4G cellular networks around the world work.
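The scheduling rule itself is compact: each time slot, serve the user with the highest ratio of instantaneous rate to historical average rate. A hypothetical Python sketch (the real three lines live in base-station firmware, not here):

```python
def pick_user(instant_rates, average_rates):
    """Proportional fair: serve the user with the best ratio of current
    channel quality to historically received throughput."""
    scores = [r / avg for r, avg in zip(instant_rates, average_rates)]
    return scores.index(max(scores))

def update_averages(average_rates, served, instant_rates, alpha=0.1):
    """Exponentially smooth each user's average; only the served user
    actually receives data this slot."""
    return [
        (1 - alpha) * avg + alpha * (instant_rates[i] if i == served else 0.0)
        for i, avg in enumerate(average_rates)
    ]

# User 1 has the worse channel but has been served least, so wins the slot.
served = pick_user([5.0, 2.0], [4.0, 1.0])  # scores: 1.25 vs 2.0 → user 1
```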


The Biggest Nonmilitary Effort in the History of Human Civilization


Charles Fishman has a new book, One Giant Leap, all about NASA’s Apollo program to land an astronaut on the moon. He talks about it on Fresh Air with Dave Davies.

On what computers were like in the early ’60s and how far they had to come to go to space

It’s hard to appreciate now, but in 1961, 1962, 1963, computers had the opposite reputation of the reputation they have now. Most computers couldn’t go more than a few hours without breaking down. Even on John Glenn’s famous orbital flight — the first U.S. orbital flight — the computers in mission control stopped working for three minutes [out] of four hours. Well, that’s only three minutes [out] of four hours, but that was the most important computer in the world during that four hours and they couldn’t keep it going during the entire orbital mission of John Glenn.

So they needed computers that were small, lightweight, fast and absolutely reliable, and the computers that were available then — even the compact computers — were the size of two or three refrigerators next to each other, and so this was a huge technology development undertaking of Apollo.

On the seamstresses who wove the computer memory by hand

There was no computer memory of the sort that we think of now on computer chips. The memory was literally woven … onto modules and the only way to get the wires exactly right was to have people using needles, and instead of thread wire, weave the computer program. …

The Apollo computers had a total of 73 [kilobytes] of memory. If you get an email with the morning headlines from your local newspaper, it takes up more space than 73 [kilobytes]. … They hired seamstresses. … Every wire had to be right. Because if you got [it] wrong, the computer program didn’t work. They hired women, and it took eight weeks to manufacture the memory for a single Apollo flight computer, and that eight weeks of manufacturing was literally sitting at sophisticated looms weaving wires, one wire at a time.

One anecdote that was new to me describes Armstrong and Aldrin test-burning moon dust, to make sure it wouldn’t ignite when repressurized.

Armstrong and Aldrin actually had been instructed to do a little experiment. They had a little bag of lunar dirt and they put it on the engine cover of the ascent engine, which was in the middle of the lunar module cabin. And then they slowly pressurized the cabin to make sure it wouldn’t catch fire and it didn’t. …

The smell turns out to be the smell of fireplace ashes, or as Buzz Aldrin put it, the smell of the air after a fireworks show. This was one of the small but sort of delightful surprises about flying to the moon.


The Women Who Helped Pioneer Chaos Theory

The story goes that modern chaos theory was birthed by Edward Lorenz’s paper about his experiments with weather simulation on a computer. The computing power helped Lorenz nail down hidden patterns that had been hinted at by computer-less researchers for decades. But the early tenets of chaos theory were not the only things that were hidden. The women who wrote the programs that enabled Lorenz’s breakthroughs haven’t received their proper due.

But in fact, Lorenz was not the one running the machine. There’s another story, one that has gone untold for half a century. A year and a half ago, an MIT scientist happened across a name he had never heard before and started to investigate. The trail he ended up following took him into the MIT archives, through the stacks of the Library of Congress, and across three states and five decades to find information about the women who, today, would have been listed as co-authors on that seminal paper. And that material, shared with Quanta, provides a fuller, fairer account of the birth of chaos.

The two women who programmed the computer for Lorenz were Ellen Gille (née Fetter) and Margaret Hamilton. Yes, that Margaret Hamilton, whose already impressive career starts to look downright bonkers when you add in her contributions to chaos theory.
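The behavior those simulations exposed — tiny input differences blowing up into completely different outcomes — is easy to reproduce today. As a rough illustration (this is the three-variable system Lorenz later published with its classic parameters, not the weather model Gille and Hamilton actually programmed):

```python
def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8 / 3):
    # One forward-Euler step of the Lorenz system (classic parameters).
    x, y, z = s
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)  # a microscopically different start
max_sep = 0.0
for _ in range(10_000):     # integrate ~50 time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, max(abs(p - q) for p, q in zip(a, b)))
# The one-in-a-million initial difference grows to macroscopic size:
# sensitive dependence on initial conditions, the signature of chaos.
```

Run it and the two trajectories, indistinguishable at the start, end up nowhere near each other.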


Grace Hopper Explains a Nanosecond

In this short clip from 1983, legendary computer scientist Grace Hopper uses a short length of wire to explain what a nanosecond is.

Now what I wanted when I asked for a nanosecond was: I wanted a piece of wire which would represent the maximum distance that electricity could travel in a billionth of a second. Now of course it wouldn’t really be through wire — it’d be out in space, the velocity of light. So if we start with a velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long, the maximum limiting distance that electricity can travel in a billionth of a second.
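Her number checks out. Using the modern exact value for the speed of light, "your friendly computer" reproduces it in two lines:

```python
c = 299_792_458                    # speed of light in m/s (exact by definition)
inches_per_ns = c * 1e-9 / 0.0254  # distance light covers in one nanosecond
print(round(inches_per_ns, 1))     # → 11.8, the length of Hopper's wire
```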

You can watch the entirety of a similar lecture Hopper gave at MIT in 1985, in which she “practically invents computer science at the chalkboard”. (via tmn)


What Counts As Evidence in Mathematics?


The ultimate form of argument, and for some, the most absolute form of truth, is mathematical proof. But short of a conclusive proof of a theorem, mathematicians also consider evidence that might 1) disprove a thesis or 2) suggest its possible truth or even avenues for proving that it’s true. But in a not-quite-empirical field, what the heck counts as evidence?

The twin primes conjecture is one example where evidence, as much as proof, guides our mathematical thinking. Twin primes are pairs of prime numbers that differ by 2 — for example, 3 and 5, 11 and 13, and 101 and 103 are all twin prime pairs. The twin primes conjecture hypothesizes that there is no largest pair of twin primes, that the pairs keep appearing as we make our way toward infinity on the number line.
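The definition is simple enough to check by machine — a quick sketch (purely illustrative, not from the article) that sieves the small primes and pairs off those that differ by 2:

```python
def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = set(primes_up_to(110))
twins = [(p, p + 2) for p in sorted(ps) if p + 2 in ps]
# (3, 5), (11, 13), and (101, 103) from the examples above all appear.
```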

The twin primes conjecture is not the Twin Primes Theorem, because, despite being one of the most famous problems in number theory, no one has been able to prove it. Yet almost everyone believes it is true, because there is lots of evidence that supports it.

For example, as we search for large primes, we continue to find extremely large twin prime pairs. The largest currently known pair of twin primes have nearly 400,000 digits each. And results similar to the twin primes conjecture have been proved. In 2013, Yitang Zhang shocked the mathematical world by proving that there are infinitely many prime number pairs that differ by 70 million or less. Thanks to a subsequent public “Polymath” project, we now know that there are infinitely many pairs of primes that differ by no more than 246. We still haven’t proved that there are infinitely many pairs of primes that differ by 2 — the twin primes conjecture — but 2 is a lot closer to 246 than it is to infinity.

This starts to get really complicated once you leave the relatively straightforward arithmetical world of prime numbers behind, with its clearly empirical pairs and approximating conjectures, and start working with computer models that generate arbitrarily large numbers of mathematical statements, all of which can be counted as evidence.

Patrick Honner, the author of this article, gives what seems like a simple example: are all lines parallel or intersecting? Then he shows how the models one can use to answer this question vary wildly based on their initial assumptions, in this case, whether one is considering lines in a single geometric plane or lines in an n-dimensional geometric space. As always in mathematics, it comes back to one’s initial set of assumptions; you can “prove” (i.e., provide large quantities of evidence for) a statement with one set of rules, but that set of rules is not the universe.


The Secret History of Women in Coding

In an excerpt of his forthcoming book Coders, Clive Thompson writes about The Secret History of Women in Coding for the NY Times.

A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain. In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

This all changed in the 80s, when computers and programming became, culturally, a mostly male pursuit.

By the ’80s, the early pioneering work done by female programmers had mostly been forgotten. In contrast, Hollywood was putting out precisely the opposite image: Computers were a male domain. In hit movies like “Revenge of the Nerds,” “Weird Science,” “Tron,” “WarGames” and others, the computer nerds were nearly always young white men. Video games, a significant gateway activity that led to an interest in computers, were pitched far more often at boys, as research in 1985 by Sara Kiesler, a professor at Carnegie Mellon, found. “In the culture, it became something that guys do and are good at,” says Kiesler, who is also a program manager at the National Science Foundation. “There were all kinds of things signaling that if you don’t have the right genes, you’re not welcome.”

See also Claire Evans’ excellent Broad Band: The Untold Story of the Women Who Made the Internet.


Buy the Cheap Thing First


Beth Skwarecki has written the perfect Lifehacker post with the perfect headline (so perfect I had to use it for my aggregation headline too, which I try to never do):

When you’re new to a sport, you don’t yet know what specialized features you will really care about. You probably don’t know whether you’ll stick with your new endeavor long enough to make an expensive purchase worth it. And when you’re a beginner, it’s not like beginner level equipment is going to hold you back…

How cheap is too cheap?

Find out what is totally useless, and never worth your time. Garage sale ice skates with ankles that are so soft they flop over? Pass them up.

What do most people do when starting out?

If you’re getting into powerlifting and you don’t have a belt and shoes, you can still lift with no belt and no shoes, or with the old pair of Chucks that you may already have in your closet. Ask people about what they wore when they were starting out, and it’s often one of those options…

What’s your exit plan?

How will you decide when you’re done with your beginner equipment? Some things will wear out: Running shoes will feel flat and deflated. Some things may still be usable, but you’ll discover their limitations. Ask experienced people what the fancier gear can do that yours can’t, and you’ll get a sense of when to upgrade. (You may also be able to sell still-good gear to another beginner to recoup some of your costs.)

Wearing out your beginner gear is like graduating. You know that you’ve stuck with the sport long enough that you aren’t truly a beginner anymore. You may have managed to save up some cash for the next step. And you can buy the nicer gear now, knowing exactly what you want and need.

This is 100 percent the truth, and applies to way more than just sports equipment. Computers, cooking, fashion, cars, furniture, you name it. The key thing is to pick your spots, figure out where you actually know what you want and what you want to do with it, and optimize for those. Everywhere else? Don’t outwit yourself. Play it like the beginner that you are. And save some scratch in the process. Perfect, perfect advice.


The Embroidered Computer

Artists Irene Posch & Ebru Kurbak have built The Embroidered Computer, a programmable 8-bit computer made using traditional embroidery techniques and materials.

Embroidered Computer


Solely built from a variety of metal threads, magnetic, glass and metal beads, and being inspired by traditional crafting routines and patterns, the piece questions the appearance of current digital and electronic technologies surrounding us, as well as our interaction with them.

Technically, the piece consists of (textile) relays, similar to early computers before the invention of semiconductors. Visually, the gold materials, here used for their conductive properties, arranged into specific patterns to fulfill electronic functions, dominate the work. Traditionally purely decorative, their pattern here defines their function. They lay bare core digital routines usually hidden in black boxes. Users are invited to interact with the piece by programming the textile to compute for them.

The piece also slyly references the connection between the early history of computing and the textile industry.

When British mathematician Charles Babbage released his plans for the Analytical Engine, widely considered the first modern computer design, fellow mathematician Ada Lovelace is famously quoted as saying that ‘the Analytical Engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.’

The Jacquard loom is often considered a predecessor to the modern computer because it uses a binary system to store information that can be read by the loom and reproduced many times over.
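A toy sketch of that idea (an illustration, not a historical card format): each card row is a fixed-width bit pattern, a punched hole lifts the corresponding warp thread, and the same cards reproduce the same cloth every time the loom reads them:

```python
# Each string is one card row; "1" = hole = warp thread lifted.
pattern = ["10010", "01101", "10110"]
cards = [int(row, 2) for row in pattern]          # "punch" the cards
woven = [format(card, "05b") for card in cards]   # the loom reads them back
assert woven == pattern  # same cards, same cloth, run after run
```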

See also Posch’s & Kurbak’s The Knitted Radio, a sweater that functions as an FM radio transmitter.


Papercraft Computers

Papercraft Electronics


Rocky Bergen makes paper models of vintage electronics and computing gear. And here’s the cool bit…you can download the plans to print and fold your own: Apple II, Conion C-100F boom box, Nintendo GameCube, and Commodore 64.


Why Doctors Hate Their Computers

Nobody writes about health care practice from the inside out like Atul Gawande, here focusing on an increasingly important part of clinical work: information technology.

A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software. In the examination room, physicians devoted half of their patient time facing the screen to do electronic tasks. And these tasks were spilling over after hours. The University of Wisconsin found that the average workday for its family physicians had grown to eleven and a half hours. The result has been epidemic levels of burnout among clinicians. Forty per cent screen positive for depression, and seven per cent report suicidal thinking—almost double the rate of the general working population.

Something’s gone terribly wrong. Doctors are among the most technology-avid people in society; computerization has simplified tasks in many industries. Yet somehow we’ve reached a point where people in the medical profession actively, viscerally, volubly hate their computers.

It’s not just the workload, but also what Gawande calls “the Revenge of the Ancillaries” — designing software for collaboration between different health care professionals, from surgeons to administrators, all of whom have competing stakes and preferences in how a product is used and designed, what information it offers and what it demands. And most medical software doesn’t handle these competing demands very well.


The Stylish & Colorful Computing Machines of Yesteryear

Holy moly, these photographs of vintage computers & peripherals by “design and tech obsessive” James Ball are fantastic.

Ball Computers


He did a similar series with early personal computers subtitled “Icons of Beige”.

Ball Computers

(via @mwichary)