When the TV watching experience moved from checking “your local listings” or TV Guide and surfing channels with your remote to scrolling through visual onscreen menus on streaming services, key art was born. Key art graphics are the images that identify shows in streaming menus – ok here, it’s just easier to show you:
Like the best movie posters and book covers, these images are bold and simple promotional signifiers of a larger piece of media, but as Rex Sorgatz argues in today’s edition of Why is this interesting?, key art is its own thing with its own set of constraints and challenges.
Good key art is so evocative, so iconic, that it becomes the image that springs to mind whenever you think about a show:
One neglected characteristic ties all these images together: They are all horizontal.
It sounds trivial, but going wide helped differentiate TV key art as its own medium, distinct from book covers and movie posters. And because these images appear on streaming platforms, they are unencumbered by other marketing copy, like taglines, cast and credits, and multifarious blurbs.
From this list of the most popular baby names, we learn that the top boys’ names tend toward the biblical and traditional (Noah, Jacob, William, Elijah) while the top girls’ names are less so (Emma, Olivia, Ava, Mia, Madison).
Serial and Missing Richard Simmons were two of the podcasts that defined the 2010s. Not sure how you can leave Slow Burn off of here though…
The 100 Best Shoes of the Decade. Counterpoint: almost all of these shoes are terrible – gaudy celeb-driven collectibles about as classy as commemorative plates.
In the latest issue of his newsletter, Rex Sorgatz proposes a name for the growing collection of media about the recent past: the Historical Cinematic Universe (HCU for short, named after the MCU, naturally).
By my estimation, the uptick started in movies, with a surge in reality-based Oscar-bait like Spotlight, The Wolf of Wall Street, The Post, The Social Network, and the films of Adam McKay, especially Vice and The Big Short. More recently, and more significantly, the trend has spilled into scripted television, with such ambitious projects as HBO’s Brexit movie, Paramount’s Waco miniseries, Netflix’s Unabomber series, USA’s Tupac / Biggie miniseries, Hulu’s 9/11 series, Buzzfeed’s 1968 series, and, of course, HBO’s Chernobyl miniseries, which is the best show on TV right now. (A+ rec!)
With the influx of scripted historical reinterpretation, traditional documentaries have broadened their scope, expanding into binge experiences. The boomlet is most obvious in those esteemed investigations from HBO, such as Leaving Neverland and The Jinx. But this vim for verisimilitude has spread to unexpected locales, like Lifetime, with Surviving R. Kelly, and A&E, with The Clinton Affair, my personal favorite of this genre. Each of these historical investigations has yielded massive cultural influence.
I am a huge fan of the HCU, particularly of podcasts like Slow Burn and an excellent documentary he doesn’t mention, OJ: Made in America. As Sorgatz notes, Slow Burn creator Leon Neyfakh just launched a new podcast called Fiasco, the first season of which is about the 2000 Bush/Gore election.1 The podcast is only available via subscription, but you can listen to the first episode on the website.
Update: In this video, Patrick Willems talks about accidental cinematic universes, like the one about the US space program, which combines films like The Right Stuff, Apollo 13, Hidden Figures, and First Man into one overarching narrative. Or the British WWII cinematic universe that includes movies like The King’s Speech, Darkest Hour, and Dunkirk.
Which I have been thinking about a lot recently – it is recent history’s biggest counterfactual. Imagine a world where Al Gore became President in 2001. The US taking climate change seriously back then would have made a huge difference. No using 9/11 to sharply escalate our meddling & deadly presence in the Middle East. Perhaps no financial crisis in 2008. Perhaps no Roberts or Alito on the Supreme Court. Sigh.
On Tuesday, my friend Rex Sorgatz came out with the very timely book, The Encyclopedia of Misinformation, the full subtitle of which is “A Compendium of Imitations, Spoofs, Delusions, Simulations, Counterfeits, Impostors, Illusions, Confabulations, Skullduggery, Frauds, Pseudoscience, Propaganda, Hoaxes, Flimflam, Pranks, Hornswoggle, Conspiracies & Miscellaneous Fakery”. Today I’m happy to present an excerpt about the genesis and use of the laugh track on television. [The video insert on how the laff box worked is mine.] -jason
No technique in television production has been more maligned than the laugh track, yet it somehow perseveres through decades of ridicule.
It all started innocently, as a quick hack to solve a technical problem. Charley Douglass, a sound engineer at CBS in the early ’50s, was annoyed at studio audiences who inconveniently laughed at the wrong moments. Sometimes they chuckled too long at unfunny bits; other times, they refused to bellow with sufficient gusto. To evenly redistribute the laughter, Douglass invented a contraption that looked like a steampunk organ collided with a cyberpunk adding machine, connected on the back end to magnetic tapes with recorded laughter. By pressing buttons on the laff box (that’s actually what he called it), an orchestrator could punch up guffaws, chortles, and giggles on demand. The magical machine also acted as a sort of demographic keyboard, with inputs for specific genders, ages, and ethnicities, plus a foot pedal that controlled the duration of each laugh. One keystroke might simulate “frothy housewife giggle”; another, “guy who missed joke but laughs anyway.” Keys could be combined into melodic chords of laughter, bringing down the house in a crescendo of hilarity.
The gizmo was a success, smoothing out the aural wrinkles in programs like The Abbott and Costello Show and I Love Lucy. It was a necessary evil of this nascent era, when television was rapidly changing from live broadcast to taped recordings. Audiences were still growing accustomed to the big square tube in their living rooms, and the laugh track helped ease the transition by simulating an intimate theater experience at home. You knew when to laugh because they told you when to laugh.
Naturally, this quaint bag of laughs was quickly abused. Sitcoms in the ’60s and ’70s took the laff box and cranked it to eleven. Realizing canned chuckles freed them from the burden of a live audience, shows like Gilligan’s Island and The Brady Bunch ratcheted the laugh track to egregious levels. No show could escape the canned laughter craze – beloved programs like The Muppet Show and M*A*S*H used laugh tracks, even during outdoor scenes, when a studio audience was improbable. When animated shows like The Flintstones and The Jetsons added tracks of artificial mirth, the entire illusion of a captive studio audience was finally shattered.
Show creators hated the laugh track, spurring a constant feud with network executives who believed audiences enjoyed the audio cues. To adjudicate the conflict, CBS held a controlled experiment in 1965 with its brand-new show Hogan’s Heroes. The network tested two versions of the World War II comedy โ one with canned laughter, one without. The test audiences overwhelmingly preferred the laugh-tracked show. Since then, nearly all CBS comedies have contained audience laughter.
Fake laughter was far from universal though. Many beloved shows, including The Mary Tyler Moore Show, Friends, Cheers, and Seinfeld, used studio audiences for most of their laughter, only adding dashes of the canned stuff through sweetening (that’s the term of art).1
But laughter of all kinds – live or tracked – was becoming the joke of the sitcom industry, as a morose aura started to envelop the merriment. An oft-told anecdote asserted that, given the age of its tracks, the laff box contained the chortles of dead people. The canard seems to have originated with Jim Carrey as Andy Kaufman in Man on the Moon (1999), who ad-libbed this bit of dialogue about sitcoms like Taxi:
It’s just stupid jokes and canned laughter! And you don’t know why it’s there, but it’s there! And it’s dead people laughing, did you know that? Those people are dead!1
It might have been true in the ’70s, but the claim is likely not accurate today, as audio engineers are known to assiduously update their libraries with new snorts and snickers.
Regardless, the stench of dead laughter was in the air. Starting in the early aughts, shows began to jettison the laugh track, as most celebrated comedies of the era – The Office, Arrested Development, Curb Your Enthusiasm, Orange Is the New Black, 30 Rock, Community, Louie, Modern Family – abandoned the cheesy blandishment. Some programs maintain laugh tracks today (especially those on CBS), and they do tend to get good ratings. In fact, one can almost divide sitcoms into two categories – “critically acclaimed” versus “high ratings” – based on whether they use a laugh track. As a generalization, shows that cozen a laugh from the viewer perform better in the ratings but seldom win Emmys.
Although widely derided, the laugh track served its purpose. Television began as a medium for viewing live events with an audience (essentially theater-at-a-distance), and it took decades for television to evolve into its own medium. The laff box allowed producers to literally play the audience, like an organ. Perhaps it was synthetic, but the technical innovation put the audience into the tube, creating a more communal experience in our homes. Today, that role – incorporating a disembodied audience – is played by social media. LOL.
If you’re interested in reading about more simulations, skullduggery, and flimflam, The Encyclopedia Of Misinformation is now available on Amazon.
Sweetening is demonstrated with dismay in Annie Hall when Woody Allen witnesses laugh tracks being added to a live broadcast in a Los Angeles television studio. The term is also invoked in other commercial arts. When Kiss’s Alive! was released in 1975, it claimed to be a live album, but many tracks were clearly sweetened, as they say, with studio overdubs to sharpen the sound.
Another oft-cited (but inaccurate) source for this old saw is Chuck Palahniuk’s 2002 novel Lullaby: “Most of the laugh tracks on television were recorded in the early 1950s. These days, most of the people you hear laughing are dead.”
Rex Sorgatz grew up in a small and isolated town (physically, culturally) in North Dakota named Napoleon.
Out on the prairie, pop culture existed only in the vaguest sense. Not only did I never hear the Talking Heads or Public Enemy or The Cure, I could never have heard of them. With a radio receiver only able to catch a couple FM stations, cranking out classic rock, AC/DC to Aerosmith, the music counterculture of the ’80s would have been a different universe to me. (The edgiest band I heard in high school was The Cars. “My Best Friend’s Girl” was my avant-garde.)
Is this portrait sufficiently remote? Perhaps one more stat: I didn’t meet a black person until I was 16, at a summer basketball camp. I didn’t meet a Jewish person until I was 18, in college.
This was the Deep Midwest in the 1980s. I was a pretty clueless kid.
He recently returned there and found that the physical isolation hasn’t changed, but thanks to the internet, the kids now have access to the full range of cultural activities and ideas from all over the world.
“Basically, this story is a controlled experiment,” I continue. “Napoleon is a place that has remained static for decades. The economics, demographics, politics, and geography are the same as when I lived here. In the past twenty-five years, only one thing has changed: technology.”
Rex is a friend and nearly every time we get together, we end up talking about our respective small town upbringings and how we both somehow managed to escape. My experience wasn’t quite as isolated as Rex’s – I lived on a farm until I was 9 but then moved to a small town of 2500 people; plus my dad flew all over the place and the Twin Cities were 90 minutes away by car – but was similar in many ways. The photo in his piece of the rusted-out orange car buried in the snow could have been taken in the backyard of the house I grew up in, where my dad still lives. Kids listened to country, top 40, or heavy metal music. I didn’t see Star Wars or Empire in a theater. No cable TV until I was 14 or 15. No AP classes until I was a senior. Aside from a few Hispanics and a family from India, everyone was white and Protestant. The FFA was huge in my school. I had no idea about rap music or modernism or design or philosophy or Andy Warhol or 70s film or atheism. I didn’t know what I didn’t know and had very little way of finding out.
I didn’t even know I should leave. But somehow I got out. I don’t know about Rex, but “escape” is how I think of it. I was lucky enough to excel at high school and got interest from schools from all over the place. My dad urged me to go to college…I was thinking about getting a job (probably farming or factory work) or joining the Navy with a friend. That’s how clueless I was…I knew so little about the world that I didn’t know who I was in relation to it. My adjacent possible just didn’t include college even though it was the best place for a kid like me.
In college in an Iowan city of 110,000, I slowly discovered what I’d been missing. Turns out, I was a city kid who just happened to grow up in a small town. I met other people from all over the country and, in time, from all over the world. My roommate sophomore year was black.1 I learned about techno music and programming and photography and art and classical music and LGBT and then the internet showed up and it was game over. I ate it all up and never got full. And like Rex:
Napoleon had no school newspaper, and minimal access to outside media, so I had no conception of “the publishing process.” Pitching an idea, assigning a story, editing and rewriting โ all of that would have baffled me. I had only ever seen a couple of newspapers and a handful of magazines, and none offered a window into its production. (If asked, I would have been unsure if writers were even paid, which now seems prescient.) Without training or access, but a vague desire to participate, boredom would prove my only edge. While listlessly paging through the same few magazines over and over, I eventually discovered a semi-concealed backdoor for sneaking words onto the hallowed pages of print publications: user-generated content.
That’s the ghastly term we use (or avoid using) today for non-professional writing submitted by readers. What was once a letter to the editor has become a comment; editorials, now posts. The basic unit persists, but the quantity and facility have matured. Unlike that conspicuous “What’s on your mind?” input box atop Facebook, newspapers and magazines concealed interaction with readers, wary of the opinions of randos. But if you were diligent enough to find the mailing address, often sequestered deep in the back pages, you could submit letters of opinion and other ephemera.
I eventually found the desire to express myself. Using a copy of Aldus PhotoStyler I had gotten from who knows where, I designed party flyers for DJ friends’ parties. I published a one-sheet periodical for the residents of my dorm floor, to be read in the bathroom. I made meme-y posters2 which I hung around the physics department. I built a homepage that just lived on my hard drive because our school didn’t offer web hosting space and I couldn’t figure out how to get an account elsewhere.3 Well, you know how that last bit turned out, eventually.4
The fall of my senior year, he returned from a weekend at home in Chicago with a VHS tape in tow. He popped it into a friend’s VCR and said, “You’re about to see a future NBA star.” And we all watched some highlights of an 18-year-old Kevin Garnett he’d taped off the local news station.
One was a Beavis and Butthead sign warning people not to eat in the lab. Another was a “Jurassic Doc” poster featuring my thesis advisor, whom we all called “Doc”.
Robin Sloan is right: it’s tough to end things on the internet. Especially self-indulgent autobiographical rambling. Apologies. We now return to your regularly scheduled interestingness presented with minimal commentary.
Rex Sorgatz wonders what sort of robots we’ll build, R2-D2s or C-3POs.
R2-D2 excels in areas where humans are deficient: deep computation, endurance in extreme conditions, and selfless consciousness. R2-D2 is a computer that compensates for human deficiencies – it shines where humans fail.
C-3PO is the personification of the selfish human – cloying, rules-bound, and despotic. (Don’t forget, C-3PO let Ewoks worship him!) C-3PO is a factotum for human vanity – it engenders the worst human characteristics.
I love the chart he did for the piece, characterizing 3PO’s D&D alignment as lawful evil and his politics as Randian.
Koons’ dog was about 10 feet tall but the seller notes they can make them anywhere from 3 feet tall to almost 100 feet tall. Jiminy. I wonder what these things look like? I bet they aren’t nearly as precise as the originals, but you never know. See also: Rex Sorgatz’s Uber for Art Forgeries. (via prosthetic knowledge)
It’s possible that Vermeer – an artist whom many consider the greatest painter of all time – could paint with no more acuity than you or me. Vermeer may have been a simple technologist – but a technologist who could recreate the world with scintillating photographic intensity, centuries before photography was invented, which might actually be a bigger deal than being a good painter.
I loved these articles. I wish I’d written them…I am fascinated with both Vermeer and art forgeries. Good stuff.
With access to infinite bytes of media, describing a digital object as “rare” sticks out like a lumbering anachronism. YouTube – the official home of lumbering anachronisms – excels at these extraordinarily contradictory moments. Here, for instance, are the Beatles, performing a “VERY RARE” rendition of “Happy Birthday”. That sonic obscurity has been heard 2.3 million times. And here is a “Rare Acoustic” version of Slash performing “Sweet Child O’ Mine”. Over 26 million have devoured this esoteric Axl-less morsel.
Photographer Clayton Cubitt and Rex Sorgatz have both written essays about how photography is becoming something more than just standing in front of something and snapping a photo of it with a camera. Here’s Cubitt’s On the Constant Moment.
So the Decisive Moment itself was merely a form of performance art that the limits of technology forced photographers to engage in. One photographer. One lens. One camera. One angle. One moment. Once you miss it, it is gone forever. Future generations will lament all the decisive moments we lost to these limitations, just as we lament the absence of photographs from pre-photographic eras. But these limitations (the missed moments) were never central to what makes photography an art (the curation of time), and as the evolution of technology created them, so too is it on the verge of liberating us from them.
Photography was once an act of intent, the pushing of a button to record a moment. But photography is becoming an accident, the curatorial attention given to captured images.
Slightly different takes, but both are sniffing around the same issue: photography not as capturing a moment in realtime but sometime later, during the editing process. As I wrote a few years ago riffing on a Megan Fox photo shoot, I side more with Cubitt’s take:
As resolution rises & prices fall on video cameras, and as hard drive space, memory, and video editing capabilities increase on PCs, I suspect that in 5-10 years, photography will largely involve pointing video cameras at things and finding the best images in the editing phase. Professional photographers already take hundreds or thousands of shots during the course of a shoot like this, so it’s not such a huge shift for them. The photographer’s exact set of duties has always been malleable; the recent shift from film processing in the darkroom to the digital darkroom is only the most recent example.
What’s interesting about the hot video/photo mobile apps of the moment, Vine, Instagram, and Snapchat, is that, if you believe what Cubitt and Sorgatz are saying, they follow the more outdated definition of photography. You hold the camera in front of something, take a video or photo of that moment, and post it. If you missed it, it’s gone forever. What if these apps worked the other way around: you “take” the photo or video from footage previously (or even constantly) gathered by your phone?
To post something to Instagram, you have the app take 100 photos in 10-15 seconds and then select your photo by scrubbing through them to find the best moment. Same with Snapchat. Vine would work similarly…your phone takes 20-30 seconds of video and you use Vine’s already simple editing process to select your perfect six seconds. This is similar to one of my favorite technology-driven techniques from the past few years:
In order to get the jaw-dropping slow-motion footage of great white sharks jumping out of the ocean, the filmmakers for Planet Earth used a high-speed camera with continuous buffering…that is, the camera only kept a few seconds of video at a time and dumped the rest. When the shark jumped, the cameraman would push a button to save the buffer.
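To make the buffering idea concrete, here’s a minimal sketch in Python of how an after-the-fact capture loop might work. It assumes OpenCV (cv2) as the frame source; the buffer size, frame rate, and trigger callback are all illustrative, not a description of how the Planet Earth rig or any actual app is implemented.

```python
from collections import deque
import cv2  # assumed frame source; any camera API with a read() loop would do

BUFFER_SECONDS = 3   # how much of the recent past to keep
FPS = 30             # assumed capture rate

def capture_after_the_fact(trigger, source=0):
    """Continuously overwrite a short buffer of frames; when the trigger
    fires, return the buffered frames: the moment, captured after the fact."""
    buffer = deque(maxlen=BUFFER_SECONDS * FPS)  # old frames fall off the back
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break                 # camera disconnected or stream ended
            buffer.append(frame)      # the "continuous buffering" step
            if trigger():             # e.g. the cameraman's button press
                return list(buffer)   # last few seconds, oldest frame first
    finally:
        cap.release()
    return []
```

A Vine or Instagram built this way would run the same loop constantly and treat the shutter button as the trigger, then let you scrub through the returned frames to pick the best moment.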
Only an after-the-fact camera is able to capture moments like great whites jumping out of the water:
And it would make it much easier to capture moments like your kid’s first steps, a friend’s quick smile, or a skateboarder’s ollie. I suspect that once somebody makes an easy-to-use and popular app that works this way, it will be difficult to go back to doing it the old way.
Rex Sorgatz is writing about a piece of video everyday at View Source, which is also an email newsletter. Or is it a newsletter with a website?
If you’re like me, you suspect that YouTube is packed with interesting stuff, but we lack a system for finding it. A few interesting clips might come to you via so-called social media, but that just reinforces the feeling that there’s probably more out there beyond your friends.
My hope is that VIEWSOURCE will help solve this problem. It’s a simple daily email newsletter with just one video clip. It might be a long-forgotten music documentary, a new webshow with a celebrity, some crazy hip-hop video, or a new supercut. There is no “demographic” in mind, but hopefully it eschews the “viral video” genre.
In last Sunday’s episode of Mad Men, Grandpa Gene ate ice cream right out of the container and salted each spoonful before putting it in his mouth.
It was an odd sight…salt isn’t normally the first thing you think of as an ice cream topping. After the episode, Rex Sorgatz tweeted:
WHO THE FUCK SALTS THEIR ICE CREAM?
Salt has its own flavor when it’s concentrated (if you salt foods too much or eat some all by itself) but used judiciously, salt takes the natural flavor of food and enhances the intensity. To use another dairy product as an example, fresh mozzarella tastes pretty good on its own but throw a little salt on top and it’s mozzarella++. Salt makes ok food taste good and good food taste great. Along with butter, salt is the restaurant world’s secret weapon; chefs likely use way more salt than you do when you cook at home. It’s one of the reasons why restaurant food is so good.
But back to the ice cream. As food scientist Harold McGee writes, salt probably won’t make ice cream taste sweeter but will make it taste ice creamier, particularly if the ice cream is of low quality, as the store-bought variety might have been in 1963.
I’m not sure that salt makes sugar taste sweeter, but it fills out the flavor of foods, sweets included. It’s an important component of taste in our foods, so if it’s missing in a given dish, the dish will taste less complete or balanced. Salt also increases the volatility of some aromatic substances in food, and it enhances our perception of some aromas, so it can make the overall flavor of a food seem more intense.
So that’s why the fuck someone might want to salt their ice cream.
This is a little bit genius. One of the new features of FriendFeed (a Twitter-like thingie) is “fake following”. That means you can friend someone but you don’t see their updates. That way, it appears that you’re paying attention to them when you’re really not. Just like everyone does all the time in real life to maintain their sanity. Rex calls it the “most important feature in the history of social networks” and I’m inclined to agree. It’s one of the few new social features I’ve seen that makes being online buddies with someone manageable and doesn’t just make being social a game or competition.
Any application that lets you “friend,” “follow,” or otherwise observe another user should include a prominent (and silent) “PAUSE” button. I think users of apps like Flickr, Twitter, Facebook, LiveJournal, Delicious, and, yes, FriendFeed, would benefit from an easy and undramatic way to take a little break from a “friend” – without inducing the grand mal meltdown that “unfriending” causes the web’s more delicately-composed publishers.
On a completely different note, it’s been a challenge to acquire data from governments. We (namely Dan, our People Person) have been working since July to request formal data feeds from various agencies, and we’ve run into many roadblocks there, from the political to the technical. We expected that, of course, but the expectation doesn’t make it any less of a challenge.
I believe that Everyblock will be most successful not through the utility of its site but through its ability to get more civic and federal agencies to release more structured data about what’s going on in our cities and country. It is *our data* after all.
This beta was a full-on 120-page prototype, with actual stories re-purposed from other places, actual art, actual ads (someone quipped that it was the ultimate editor’s wet dream to be able to pick their own ads), and then all the sections and pacing that were to go into the actual magazine. The cover was lifted from McLuhan’s The Medium is the Massage; it was the startling black and white image of a guy’s head with a big ear where his eyes should have been. The whole thing got printed and laminated in a copy shop in Berkeley that had just gotten a new Kodak color copier and RIP. Jane, Eugene, and I went in when the shop closed on Friday evening and worked round the clock through the weekend. Took 45 minutes to print out one color page! We emerged Monday morning with the prototype, which we had spiral-bound in a shop in South San Francisco, before we boarded a plane for Amsterdam to present it to Origin’s founder and CEO Eckart Wintzen, to see if he would approve the concept, agree to advertise in the magazine, and then give us the advance we crucially needed to keep the project alive.
Rex has released his list of the Best Blogs of 2007 That You’re (Maybe) Not Reading over at Fimoculous. Like last year, he’s focused his best-of-blogs list on lesser-known sites instead of the biggies, a strategy I applaud. In fact, he doesn’t even need to qualify the list as the best unknown blogs; many of the well-known blogs that usually make best-of lists, much of the Technorati Top 100, and most multi-author plastered-with-ads blogs are unremarkable…too much volume, too calculated, too focused on filling post and pageview quotas, too little passion. If you look at the sites on Rex’s list, you’ll see a lot of blogs done by people who are passionate about something, not writing for a paycheck.
Rex’s #1 choice is an inspired one and absolutely right on…Twitter and Tumblr revitalized personal publishing in the eyes of many who had either tired of blogging or had never seen the point in it in the first place. My only complaint about the list is that there are too many one-hit wonders on it, sites that are worth a chuckle or squee! when you first see them but don’t hold up over time unless you really really like, say, snowclones. Oh, and Vulture…I really wanted to like it but really didn’t get it. (Oh oh, and and Jezebel? Being against a thing is not the same as standing for something.)