“You cannot imagine the unimaginable”

I almost, criminally, let the 50th anniversary of 2001: A Space Odyssey pass without comment (and without rewatching the film). But here I am, almost two months late.

The best thing I’ve read for the anniversary is Dan Chiasson’s piece for the New Yorker, rich with details I’d never heard.

In a movie about extraterrestrial life, Kubrick faced a crucial predicament: what would the aliens look like? Cold War-era sci-fi offered a dispiriting menu of extraterrestrial avatars: supersonic birds, scaly monsters, gelatinous blobs. In their earliest meetings in New York, Clarke and Kubrick, along with Christiane, sketched drafts and consulted the Surrealist paintings of Max Ernst. For a time, Christiane was modelling clay aliens in her studio. These gargoyle-like creatures were rejected, and “ended up dotted around the garden,” according to Kubrick’s daughter Katharina. Alberto Giacometti’s sculptures of thinned and elongated humans, resembling shadows at sundown, were briefly an inspiration. In the end, Kubrick decided that “you cannot imagine the unimaginable” and, after trying more ornate designs, settled on the monolith. Its eerily neutral and silent appearance at the crossroads of human evolution evokes the same wonder for members of the audience as it does for characters in the film. Kubrick realized that, if he was going to make a film about human fear and awe, the viewer had to feel those emotions as well.

Of course, it’s also an appropriate time to link back to my piece on the music from the film. (Part two, on A Clockwork Orange, is finally underway.)

Ned Beauman on Mica Levi

Profile by Ned Beauman of one of the best film composers I’ve ever heard. I love this character moment:

“I know it must seem like I live in a cave,” she said to me at one point, and indeed I sometimes felt not so much like a journalist as like a clippings service, updating Levi on the progress of her own career. She was surprised to learn that she was the first female nominee for Best Original Score in sixteen years, and only the fifth in the history of the Academy Awards. She was surprised to learn that Alex Ross, a critic she admires, had praised her work on this Web site. She was surprised to learn that Karl Lagerfeld had used two pieces from her “Jackie” score for his recent Chanel couture show of sixties-inspired twinsets. Watching the video on my phone, she marvelled at the excess of the event, which was held in a mirrored arena in Paris’s Grand Palais. I asked her if she would be getting royalties from Chanel. “I should look into that!” she said.

What Only Humans Can

The YouTube educational video maker CGP Grey has a justly large following. He makes short explainers on, for example, the difference between Britain, the U.K., and the British Isles; coffee; the Lord of the Rings mythology; and the family tree. Generally speaking, they’re very good.

One of his most popular (and best) is Humans Need Not Apply, a chilling look at the future of automation and industry, and the potentially massive spike in unemployment that may come about in the next fifty years as computers become better at human jobs than humans are. It’s about fifteen minutes long, and worth watching.

While I can’t dispute many of the facts, I think the video gets some aspects of creative professionalism very wrong. Grey is a utilitarian. He openly admits on his podcast, Cortex, that, for him, music is a tool that helps him get work done. Elsewhere, he sardonically mocks poets and artists (as he does in the video above). That’s all well and good. Not everyone can be an art lover, and not everyone should be. But a utilitarian view of the arts leads to a fairly simple notion of art as something trying to accomplish a job.

In the last two weeks, music has lost two titans of two genres: the austere, difficult, polarising composer (and undeniably brilliant conductor) Pierre Boulez, and the chameleonic, gloriously weird rock1 musician David Bowie. The two men had little in common, and as far as I know never crossed paths except in passing. But their music mattered to people, and it mattered far less because of its qualities than because those qualities were the result of human work.

David Bowie’s early rise to fame came about because he revelled in his weirdness. He was, as Hilton Als noted in his New Yorker piece, “that outsider who made different kids feel like dancing in that difference”. No computer, no matter how good the music it made, could ever forge that connection with people. A glance at Twitter over the past week shows how keenly his loss is felt as a personal one by people around the world.

By contrast, Hatsune Miku, a fully computer-generated performer from Japan (if you haven’t heard of her, then yes, really), certainly has her core group of fans. And I have no doubt that her fans enjoy her performances, and listening to her singing. But were she to vanish tomorrow, through some freak accident of data loss, all her fans would really lose would be her songs. With Bowie, they lost an icon.

Pierre Boulez’s early music became famous (and notorious) for its high degree of mathematical precision: not only the notes, but also the dynamics, tempi, and articulations were all rigorously worked out beforehand. Regardless of the emotional content of the music (which some listeners passionately defend), that is a type of music surely well suited to a computer’s labour. But again, part of the appeal of Boulez as a musician is the fact that a human could create and hold music of this scale and complexity in his mind. Who would be interested if it were automatic? If his music had been made by a computer—a machine which would necessarily have found it easier—it would have been less interesting.

Recently, the American composer Andrew Norman made exactly this point on performing. From a New York Times interview by Will Robin:

“By thinking of the orchestra as only a sound-making machine, we’ve actually eliminated a huge part of what makes a concert experience amazing,” Mr. Norman said. A laptop, he pointed out, easily supersedes what the symphony can offer in terms of sonic power and flexibility. “What makes an orchestra special, for me, is not actually the sounds that it makes but the fact that there are a hundred human beings doing that, right in front of me,” he added. “In a way, it’s performance art.”

There are already computers that can generate music; some can even do a decent job of imitating Mozart, well enough to fool Mozart experts. But short-term existential crises aside, these works become curios—interesting for how they were composed, but passed over because there’s nothing to get your teeth into. No composer weeping in the streets at the beauty of the sound of a fire sergeant’s funeral. No artist, in a fit of horror at war, dedicating himself to representing that horror in black-and-white. No writer so full of self-loathing that he imagines himself transforming overnight into vermin.

That’s not to say there won’t be room (not to say a market) for computer-generated art, but only in its most functional sense. What if I told you your film could have music by John Williams? Would you say no? What if the other option was Beethoven? Or take night clubs, whose music—beyond the compulsion to follow certain trends of fashion—is essentially background noise for socialising.

But a computer-generated Beethoven symphony? Would we really want that? Beethoven was great, and we have piles of his music to listen to now—but he’s been and gone. Music has changed since his time, and to go back is pointless. And any originality expressed by a computer is uninteresting, not in spite of its lack of imagination, but precisely because its imagination is theoretically limitless.

Maybe I’m wrong. Maybe the next generation’s Hatsune Miku will be one with a personality and life programmed by her team (as authors program characters—no judgement here), able to inhabit a virtual world independently. People can certainly come to love fictional characters, and maybe they’ll love her as they love Katniss Everdeen. Maybe the generation after that will see a wholly computer-generated performer, with appearance, personality, and life all created by computers, the last human interaction having occurred thirty years before.

But my suspicion is that for the people who find human connection in art, art made by humans will always be essential. At least until computers can imitate a full, creative human mind. And then we’ll have plenty of new philosophical issues to deal with.

Last week, on the podcast Exponent (and to a lesser extent, in this blog post), Ben Thompson detailed a political position I hadn’t been familiar with before, but one which I find interesting. In short, as technology has a greater and greater impact on society, and as its presence costs more and more people their jobs, it is in technology companies’ best interests to lobby not only for less regulation (the clarion call of so much business), but also for higher taxes and an assured “ground floor”, economically, so that those people whose jobs are lost through technological disruption are not left with nothing. This way, the people whom regulations are supposed to protect are protected by the safety net, and the corporations have the freedom to grow as they please.

The reasoning is this: a job done by a computer is not of net benefit to society until the person whose job was lost is contributing something new. Otherwise, nothing is gained overall. The technology companies benefit through having fewer restraints on the ways they can develop; those restraints are less necessary if people are assured a stable means of livelihood anyway; and that better quality of life can be achieved through revenue generated from higher taxes. In both the businesses’ and the governments’ cases, something is given and something is gained.

To be sure, it’s not a flawless plan. I’m sure that one of the common criticisms levelled at it will be that if you pay people for doing nothing, then people won’t do anything. I’m more optimistic than that. I think, left without the constant worry about meeting basic needs, most people will try to occupy their time in ways that fulfil them, whether that’s making art, or starting a business, or whatever.

In any case, we may know soon enough whether a basic wage for everyone can have positive effects on a society: the current government of Finland are in the early stages of carrying out the experiment.

Equally, those more skeptical of the corporate world would argue that no corporation would lobby for higher taxes—these are self-interested entities, after all. And they wouldn’t be doing it out of the goodness of their hearts; they’d be doing it as a trade-off for reduced regulation (which, far more than tax, is the bugbear of many large businesses, especially tech companies). If the total cost of a computer doing a job plus a higher general tax rate is still cheaper than an employee (and it could well be), then the company still makes a saving, and the sooner that change is made, the more money is saved.
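The arithmetic of that trade-off can be sketched with made-up numbers. Every figure below is an assumption for illustration, not a real cost or tax rate:

```python
# All figures are assumptions for illustration, not real-world costs or rates.
employee_cost = 50_000    # assumed annual fully-loaded cost of one employee
automation_cost = 20_000  # assumed annual cost of a computer doing the same job
extra_tax = 15_000        # assumed extra annual tax under a higher general rate

# The company comes out ahead only if the computer plus the bigger tax bill
# still costs less than the employee did.
saving = employee_cost - (automation_cost + extra_tax)
print(saving)  # 15000: positive, so automating still pays at the higher rate
```

With these numbers the company saves money every year the change is in place, which is why delaying the switch only forgoes savings; change the assumed figures and the conclusion flips only once the tax premium eats the whole gap.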

More concerning to me is the ability of small and of poor countries to fund a project like this. There’s a compelling argument to be made that the tech companies who gain the most customers now, firstly, will become far bigger than any company that’s existed so far in human history, and secondly, will continue to dominate for the foreseeable future. Services become verbs—Google, Skype, Uber—and these services, as U.S.-based entities, will pay the majority of their taxes to the U.S. government. Good for American citizens.

But these services have a much smaller stake in other markets they choose to enter, and both the incentive and leverage to push local taxes as low as they can. Rich citizens (or visitors) in these countries demand the same services they get elsewhere in the world, but the companies that make those services can choose whether to enter the markets on their terms. They may be amenable to high tax at home, but would they be so willing to pay it everywhere else too?

These smaller countries, and these poorer countries, would thus be less able to compete with the rest of the world, and so would slip further and further behind economically. As a result, their governments would become less and less able to support their citizens under any system. In the Internet’s winner-take-all economics, maybe somebody still has to lose.

As I said, though, I’m an optimist. I think that most national systems will tend towards this sort of arrangement sooner or later, as a consequence of Internet economics and the needs that will arise because of it, in the same way that most western countries now have some form of socialism. (Not enough for some; too much for others, I know, but such is politics.) If companies can be compelled to pay high taxes in all territories where they operate—if that becomes the norm—then the revenue generated can keep people fed when their jobs disappear. The first shoots of this arrangement already exist; the only real question is when the change comes, and I believe the longer it takes, the worse off everyone will be.

As for whether people will still have things to make and sell, no matter how good computers get at making things (art, design, music, yes, but also sofas, clocks, kitchenware), I think people will hold onto a romantic attachment to the human-made. The downside of Internet economics is that it allows companies to become enormous to a degree previously unthinkable, but—assuming the Internet remains open and free as it should be (vote wisely, folks)—it also gives anyone with a computer and an Internet connection access to the largest market in human history.

This is Grey’s biggest error in judging the creative professions: he assumes that it’s a popularity contest. It isn’t. In order to get by, a creative person needs only a few thousand fans. Amongst the billions of people connected to the Internet, that is a fraction of a fraction of a percent. Moreover, there’s a tangible thrill for fans in discovering something that nobody else knows.

For an artist in the twenty-first century, finding fans is hard, no doubt. But it’s far easier than it ever has been before. And Grey ought to know this. He’s done it himself.

  1. I guess? 

Joan Acocella on stagefright

I almost missed this terrific meditation on stage fright by Joan Acocella in the New Yorker from a couple of weeks back. Essential reading (though behind a paywall).

Stagefright has not been heavily studied, which is strange because, as Solovitch tells us, it is common not only among those who make their living on the stage but among the rest of us, too. In 2012, two researchers at the University of Nebraska-Omaha, Karen Dwyer and Marlina Davidson, administered a survey to eight hundred and fifteen college students, asking them to select their three greatest fears from a list that included, among other things, heights, flying, financial problems, deep water, death, and “speaking before a group.” Speaking before a group beat out all the others, even death.


The launch of Apple Music a couple of weeks ago has started another backlash against streaming.

Alex Ross, on the New Yorker website:

[T]he pressure from the margin to the center is strong. Despite “Think Different” maxims redolent of the old Steve Jobs script—“It’s your music. Do what you like with it.”—you’re encouraged to gravitate toward the music that everyone else is listening to. This is what happens all across the corporatized Internet: to quote the old adage of Adorno and Horkheimer, you have the “freedom to choose what is always the same.” The musician, writer, and publisher Damon Krukowski, a longtime critic of the streaming business, calls it the return of the monoculture. “What Apple is doing to music retail,” Krukowski said on Twitter, “is exactly what I saw chains do to books in the nineties: kill indie competition, then eliminate the product.”

Criticism of the “monoculture” has never been less valid. The Internet is an incredibly large place, and within its 3.14 billion users, there’s room for an infinite variety of cultural pockets. While there may be a gravitational pull towards the popular, that pull isn’t strong—certainly not strong enough to change people’s existing tastes. If people can’t find what they want on streaming services, they’ll just go elsewhere. Because the Internet is so huge and so interconnected, it’s never been easier to find people who share your passions, no matter how obscure.

These cultural pockets will continue to exist alongside the titans. While it’s possible for companies like Apple, Amazon, and Google to become almost infinitely large, they grow at the expense of middle-sized businesses, not small ones. Because no company will ever be big enough to cater to everything everybody wants, there’s an infinite number of niches to be filled, and the best way to fill these niches is to be extremely small and focussed. Business analyst Ben Thompson has made the analogy to the rainforest: enormous trees taking most of the resources at the top, but incredibly fertile land at the bottom.

Streaming services are best suited to popular tastes, both from the listeners’ and the artists’ perspective. But it’s true that a lot of smaller artists and labels—the types who fit these cultural pockets—are having a rough time on streaming. Their rate of pay is pitiful, and it’s made worse by the loss of album sales.

To address the problems of streaming, though, we first need to think about who’s encouraging artists to be on the services. Through iTunes, Apple is the largest music seller in the world. If they wanted, they could use their clout to push indie artists into a catch-22: join Apple Music or leave the iTunes Store—but they don’t. Spotify likewise requires no exclusives from artists. Tidal wants exclusives, but that business is a total disaster anyway. Only Google’s terms of service are onerous and repulsive.

The reason Apple don’t force artists onto their streaming service is simple: it’s bad for them too. Think of it this way: if you’re an indie musician, you make a lot more money by selling an album on iTunes than by having a thousand streams of your songs. And so do Apple. Their thirty per cent cut of an album’s sale is worth a lot more than their nearly thirty per cent cut of a couple of thousand streams. So why would they encourage musicians to be in their streaming catalogue? The problem with streaming services is not that they’re a bad model for musicians; it’s that they’re a bad model for some musicians, yet at the moment nearly all musicians are on them.
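A rough back-of-the-envelope version of that comparison, with assumed figures throughout (the album price, per-stream rate, and stream count below are illustrative, not Apple’s actual numbers):

```python
# Illustrative figures only: all assumed, not Apple's actual prices or rates.
album_price = 9.99          # assumed iTunes album price
store_cut = 0.30            # Apple's standard 30% commission
per_stream_revenue = 0.005  # assumed total revenue generated per stream
streams = 1000              # a thousand streams, as in the comparison above

# One album sale: Apple takes 30%, the rest flows to the rights holders.
apple_from_album = album_price * store_cut                 # ~$3.00
artist_side_from_album = album_price - apple_from_album    # ~$6.99

# A thousand streams: same 30% split applied to total stream revenue.
stream_revenue = streams * per_stream_revenue              # $5.00 total
apple_from_streams = stream_revenue * store_cut            # $1.50
artist_side_from_streams = stream_revenue - apple_from_streams  # $3.50
```

Under these assumptions both the musician’s side and Apple’s side come out ahead on the album sale; different assumed rates narrow the gap, but the shape of the argument holds: one committed purchase beats a modest number of streams for both parties.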

Indie musicians’ complaints about streaming revenue are misdirected. It’s not streaming services that are to blame for the poor payouts to musicians. Even if streaming services could triple or quadruple what they charge listeners, the payouts to musicians per stream would still be vanishingly small. If anyone is to blame, it’s record labels—big ones in particular. It’s no good for smaller musicians to have all of their music on streaming services, but it’s of great benefit to those musicians’ labels. By having a large catalogue of music on a streaming service, big labels have a consistent source of income. A record label doesn’t care whether one of their artists gets a thousand plays per month or a million; it’s all revenue to them. So they’ll upload their whole catalogue to Spotify, Apple Music, and all the rest, because they can. It doesn’t matter to the labels if any particular artist is a bad fit for streaming. As long as they have a lot of musicians making them a little money each, they’re sitting happy.

Rather than blame streaming services for not paying indie artists enough, musicians need to take matters into their own hands. They can only do this by knowing their audience. If a musician aspires to be the next Taylor Swift or Adele or Drake or whoever, then the goal is to get everyone listening, and that can only be accomplished by being available everywhere. In that case, being available on streaming services, and being pushed by a big record label, is almost certainly the right call. Those services are, after all, where most people are listening to music these days.

But if a musician wants to be a smaller success, a professional rather than a superstar, then they don’t need to be everywhere. Instead, they need to connect directly with existing and potential fans. That means being online, and it means building a relationship with their listeners. It also means selling, not streaming, their music, and convincing fans that it’s worth buying. I’ve argued before that piracy is a better option than streaming for musicians who want to build a passionate, loyal fanbase, and I stand by the argument I made then. Listeners who pirate music know the artist isn’t getting paid, and those who fall in love with it will often buy it in future.1 People like supporting independent creators, regardless of their field, because they can see that their contribution makes a difference. That’s the stuff on which Kickstarter is made.

Musicians who want to achieve this type of professional success can’t market themselves the same way as pop musicians: that way lies ruin. Instead, they need to develop loyal fans who are willing to pay to support them because they’re unique. The Internet, and social networks in particular, allow that kind of connection. Success as a musician separate from the peloton is still hard, but it’s within reach of more people than it has ever been before. And that’s not monoculture.

  1. It’s a long-established fact that people who pirate more music also spend more on music. It’s also interesting to note that when piracy was more prevalent, it received little of the type of backlash from independent musicians that streaming has. 

Danny Elfman’s strange collection

Good piece by Alec Wilkinson from last week’s New Yorker.

“When I was a child, the story my mother always told to scare me was ‘The Monkey’s Paw,’ ” he said. The story concerns a man and woman who get three wishes that turn out badly. “Around Mali there were women who sold lizard heads and rooster feet and powders,” Elfman said. “They were the ones who sold the materials for casting spells. The hardest thing to find, and the rarest, would be the mummified monkey’s paw. A tiny hand. Each finger would be used for a different spell. A few times, I saw one, and it was withdrawn immediately. ‘It’s not for you,’ she’d say. ‘Too much power.’ One day in the Bamako market, I saw one, and the woman offered it to me and said, ‘For you?’ I put it in a box and wrote, ‘Do not open under any circumstances until I return,’ and sent it to my mother. Of course I knew she wouldn’t be able to not open it. She told me that she waited three months but confirmed later that she only waited about five minutes, and screamed so loud that it was like Krakatoa—the whole neighborhood heard it. Those objects kind of set my path for the next forty-four years.”