How the Flint River got so toxic
By Tim Carmody | The Verge | February 26th, 2016
https://www.theverge.com/2016/2/26/11117022/flint-michigan-water-crisis-lead-pollution-history
Garbage floats in the Flint River | Sarah Rice/Getty Images

The Flint water crisis did not begin on April 25th, 2014, when the city switched its water supply from Detroit’s system, which draws on Lake Huron, to its own intake on the Flint River. That tragic mistake was the culmination of a much longer ongoing disaster, one caused by greed, politics, incompetence, and selective amnesia. The known consequences include lead poisoning, skin rashes, and carcinogens in the water. The total health consequences may not be known for years. Much has been and will be written about that decision and its aftermath. Less has been written about how the Flint River became so polluted in the first place. Flint’s water crisis begins with the pollution of the Flint River, which has been going on for well over a century.


“It would be a mistake to conclude that Flint’s predicament is simply the result of government mismanagement,” says Andrew Highsmith, author of Demolition Means Progress: Flint, Michigan, And The Fate Of The American Metropolis. “It’s also the product of a variety of much larger structural problems that are much more difficult to address.” Besides economic factors, these include a long history of environmental disasters and political dysfunction, much of it centered around the Flint River. None of these factors are unique to Flint; they’re at work in underfunded towns across the United States, the legacy of multiple industries from automobiles and chemicals to coal and agriculture.

This short history of pollution of the Flint River is gathered from multiple interviews and news sources, including over 400 historical documents from The Flint Journal, the City of Flint, the Flint Public Library, Flint’s Sloan Museum, the Environmental Protection Agency, the US Department of the Interior, and various Michigan state agencies.


Flint dam (Kris Runstrom)

The more polluted a water source is, the more processing is required to make the water safe to drink. Most of the contaminants now in Flint’s drinking water were introduced during or after processing. For all drinking water, the first concern is bacteria, which can cause hepatitis, Legionnaires’ disease, and other illnesses. Because Flint’s river water had high levels of bacteria, it was treated with additional chlorine. Chlorine reacts with organic material in the water to produce carcinogenic byproducts such as trihalomethanes; it also makes water more acidic, which corrodes pipes. Federal law mandates adding anti-corrosive agents to drinking water in large cities; in Flint, this standard water treatment practice was not followed. Finally, stagnation anywhere along the line raises the likelihood of bacteria and makes the water less safe to drink.

Immediately after switching to the Flint River, Flint’s drinking water spiked in bacteria and trihalomethane readings. A Flint-area outbreak of Legionnaires’ disease also coincided with the switch, causing at least 10 deaths. The acidic water corroded the old lead pipes; lead, along with other heavy metals, seeped into the drinking supply, where it caused widespread lead poisoning. Lead poisoning is the most widespread and serious health problem associated with Flint’s drinking water; children with prolonged exposure to lead experience a range of developmental problems that are incurable. Once pipes are corroded, even clean, properly treated water continues to carry lead and other contaminants through the tap. This is why aid organizations are still distributing bottled water and water filters, even though the city has stopped using water from the Flint River.

The way Flint mishandled water treatment is the primary cause of the water crisis

The way Flint mishandled water treatment is the primary cause of the water crisis. There is also the political question of officials’ responsibility for the crisis and their failure to respond after problems with the water became clear. But the original problem is that Flint’s river water is much more difficult to treat than water from Lake Huron, the city’s water source from 1974 to 2014. Flint’s water-treatment staff were not able to successfully make Flint River water safe to drink. Whether this is because they were undertrained, understaffed, or simply made a decision not to invest scarce resources into treating a temporary source of water — and who exactly made those decisions — is still unclear.

Why did Flint’s river pose so many problems? Before processing, the water itself is polluted from four sources: natural biological waste; treated industrial and human waste; untreated waste intentionally or accidentally dumped into the river; and contaminants washed into the river by rain or snow. The river is also warmer than Lake Huron, and its flow is less constant, particularly in the summer. All of this raises levels of bacteria and organic matter and introduces a wide range of other potential contaminants, whether natural or human-made.

In fact, while the Flint River had been improving thanks to new regulations, the departure of heavy industry, and local cleanup efforts, it had long been known as an exceptionally polluted river. Until very recently, it had been repeatedly ruled out as a primary source for the city’s drinking water. It is hard to imagine why anyone familiar with the river’s history would ever decide to use it even as a temporary water source. But they did.


Undated photo of the No.11 Factory at the Buick Automobile Plant (Wystan / Flickr)

For most of its history, Flint (like other cities) didn’t worry about the pollution of its river. Contaminants certainly ended up in the city’s drinking supply. Industrial waste was introduced into the Flint River with the first lumber mills in the 1830s. After lumber came the paper mills, and chemical processing. As the forests around Flint were gradually cleared, the city switched from lumber and paper to carriages and finally automobiles. Between 1900 and 1930, Flint had its first boom, reaching a population of 150,000. The city had been drawing its drinking and industrial water from the Flint River since 1893, and discharging its untreated waste downstream.

In the 1930s, the fish began to disappear — first in the Flint River, then in the Shiawassee and Saginaw Bay. In 1934, a Genesee County conservation officer named Ivan Kester sent seven fish on ice to the University of Michigan’s newly formed Institute for Fisheries Research with a remarkable letter:


Enclosed you will find some fish and a sample of water taken from the Flint River at Flushing. There are thousands of fish dying in this river, and I am under the impression that Copper-Cyanide is the cause of these fish dying. Will you kindly check the water and fish I am sending you, and send me a report of your findings. Also mail a carbon copy of your reply to Mr. Adams, Pollution Division, Department of Conservation, Lansing, Mich.

The institute determined that pollution had lowered oxygen levels in the river, suffocating the fish.


“In the 1940s, there’s a lot of talk about pollution in the river,” said Jeremy Dimick, curator of collections at Flint’s Sloan Museum. “There are calls for reform, but nothing behind them.” (Oral accounts later cited in newspaper articles say oil on the Flint River caught fire at least twice in the 1930s, but these stories aren’t otherwise reflected in the written record.) The once-plentiful stocks of walleye in the Flint River collapsed completely by the 1940s, as polluted water poisoned incubating eggs.

After the war, Flint was booming again. Between 1945 and 1956, per capita water use grew from 56 to 81 gallons per day, while the city’s population grew to nearly 200,000 people. A 1955 report concluded that the Flint River would be unable to support the city’s industrial and residential needs: the medium-sized river couldn’t provide enough water for a population expected to keep growing until the end of the century. New regulations forced GM and other businesses to treat and dilute their waste with city water before releasing it into the river.

Cleanliness and health were afterthoughts

In 1960, the Michigan Water Resources Commission gave Flint three years to “abate unlawful pollution of the Flint River.” Primary polluters included the factories, paper and packaging companies, the meatpacking industry, various landfills, and the city’s wastewater treatment plant. When Flint switched to water processed by Detroit in 1967, the main motivation was to properly treat municipal and industrial waste and secure enough water for a growing population; cleanliness and health were afterthoughts. GM executives led the push to switch to Detroit’s water system, hoping to save money and secure enough water for their needs. They and other city leaders “were most worried about the capacity of the river, especially [during] the summer: whether it could provide enough drinking water, enough water for industrial uses, but then enough water to dilute the waste as well,” said Dimick. “It’s really to strive for more capacity and more stability.”

Switching to a new water source didn’t fix the Flint River. Studies through the rest of the decade found “poor water quality caused by numerous sources,” including treated and untreated waste. After the passage of the Clean Water Act in 1972, a 1974 study of the Flint River showed improvement upstream of the city but significant toxins downstream. Raw sewage discharges from Flint’s wastewater plant raised fecal coliform bacteria; phenol from GM plants and ammonia from the wastewater plants contributed toxic materials. These chemicals cause skin rashes, cardiovascular and gastrointestinal diseases, and other health problems when ingested.

Heavy use of fertilizers in rural areas upstream of the city also polluted the river. A 1975 EPA study of the Holloway Reservoir upstream showed that phosphates from fertilizers and detergents had stimulated algae growth. Unchecked, the algae made the water’s oxygen levels collapse, turning it cloudy and brown.

Official landfills (like one in Richfield Township, now bankrupt and a toxic cleanup site) and unofficial ones (like one on Bray Road, cleaned up only in 2014 so the city could dump its own wastewater sludge there) all polluted the ground and groundwater. A 1996 story in the Flint Journal listed 81 closed dumps and landfills in the Flint area at various stages of cleanup, showing different levels of groundwater contamination. Some were monitored; many were not.

Road salt on the city’s bridges may be one reason poorly treated Flint River water was so corrosive to Flint’s pipes

Road salt on the city’s bridges raised the river’s chloride levels, making the water more corrosive. This has continued into the present and may have been one reason poorly treated Flint River water was so damaging to metal pipes.

In 1977, a crack in the main water pipe from Lake Huron caused the city to temporarily switch to Flint River water and local filtration. Residents then reported a poor taste. Later reports indicate that it took “10 times the amount of chemicals to treat Flint River water than Lake Huron.”

Between 1986 and 1988, high levels of coliform bacteria were intermittently found in Flint’s drinking water. Flint and Detroit blamed each other; the source of the infection was never found.


Holloway dam on the Flint River (Tony Faiola).

General Motors did its part to pollute the Flint River watershed, too. In the 1980s, nearly all of GM’s Flint plants were either in noncompliance or flagged for potential noncompliance with EPA regulations. Facing rising labor and cleanup costs — as well as a need for facilities that would meet new technological and environmental standards — GM closed its existing plants in Flint one by one. After multiple environmental assessments and cleanups, the buildings at Chevy In the Hole, GM’s flagship factory on the Flint River, were razed in 2004. In 2009, GM sold the site to the city of Flint for $1. Contaminants left behind include heavy metals like arsenic, mercury, and lead, toxic solvents, volatile organic compounds (VOCs), and polynuclear aromatics (PNAs) including petroleum compounds. The environmental profile at GM’s other former factories in Flint, like Buick City and Flint West, both EPA-funded cleanup sites, is similar.

It is difficult to precisely determine whether chemicals from Flint’s old factory sites made their way into the river. If they did, and from there made their way into Flint’s drinking supply, they could cause a wide range of problems, from lead poisoning to cancer. The terms of GM’s federally assisted bankruptcy and subsequent settlements with the EPA mean that neither GM, nor the city of Flint, nor the federal government is responsible for health problems associated with any of the former factory sites.

A spokesperson for General Motors declined to comment on industrial connections with the Flint River but gave the following statement:

From logging to industry and agriculture to recreation, the Flint River has served an important role in the growth and evolution of the Flint area. General Motors and its employees recognize the importance of the river and have worked actively to restore and preserve this natural resource. Through our involvement with the Flint River Watershed Coalition, GM employees actively participate in school programs, educating children about water quality. Inside our plants, our employees are active participants in our recycling efforts, which have helped several of the Flint-area sites to reach landfill-free status.

While environmental regulations kept companies from dumping waste into the river directly, illegal and accidental dumping were rampant. In 1990, a furniture salesman was convicted of dumping entire drums of methylene chloride, toluene and xylene, lead and other chemicals onto his property on the banks of the river. Sixty-five gallons of toxic sludge were found; 527,000 cubic yards of contaminated soil were removed. Lewis Street and Riverside Drive in east Flint, just south of the water plant, were known for years as prime spots for dumping furniture and garbage along the river, as well as high crime rates. There are dozens of stories like this.

Over two days, 22 million gallons of raw human, industrial, residential and commercial waste poured into the river

One of the most significant pollution events in Flint’s water history occurred in 1999. A subcontractor digging a trench to lay new fiberoptic cable near the Flint River opened a 6-inch-wide, 5-foot-long gash in an unmarked 72-inch pipe running from the city’s sewers to the Flint Waste Water Treatment Plant in Flint Township. Sewage filled a pond in a nearby golf course until the pump to the crushed pipe was shut down, diverting the untreated water directly into the Flint River.

Over two days, 22 million gallons of raw human, industrial, residential, and commercial waste poured into the river. On the second night, downstream in Mt. Morris Township, Karen Winchester saw hundreds of dead fish floating down the river past her property — catfish, carp, and bluegill, 3 to 20 inches long, all belly-up. For 14 months, health officials prohibited swimming, fishing, or direct contact with the river.

All of this was happening in the middle of Flint’s negotiations to extend its then 32-year-old contract for water from Detroit. The city was under pressure from the state to prove that it could produce drinkable water from the Flint River in an emergency.

“As far as we know, no [community] uses the Flint River for a drinking water source.”

Over the next year, bacteria levels continued to rise, fall, and rise again, suggesting ongoing pollution. In June 2000, the Michigan legislature passed a law requiring municipal and county authorities to report any sewage spill to the Department of Environmental Quality. It uncovered dozens more spills: as part of an amnesty program, nine communities in the Flint area reported 90 illegal sewage overflows over the preceding five years. Heavy rains, power outages, and accidents at plants or along sewage lines repeatedly dumped waste into the river. Flint itself declined to disclose any spills it hadn’t already reported. Communities began doing house-to-house checks looking for illegal hookups dumping into the sewer system or the river. Many were never found.

Despite the new law, the city continued to discharge untreated and partially treated sewage into the river during heavy rains, snowmelts, and power outages, including an 8-million-gallon spill in March 2006 and an 18.1-million-gallon spill in September 2008. The city’s takeover by state-appointed emergency managers did nothing to change the basic limitations of the river or of the city’s and region’s ability to treat their own waste. It happened over and over again.

After each spill, many of Flint’s leaders repeated a version of the caveat James Helmstetter, the county’s director of environmental health, tacked onto his warning to residents after the 1999 spill.

“As far as we know, no [community] uses the Flint River for a drinking water source,” he said.


Undated photo of the Blue Ribbon Carriage Factory in Flint (Wystan/Flickr)

In April 2014, when the city returned to local water, many officials and residents were hopeful that the legacy of the river’s pollution was behind them. They looked at what they could see — lower water costs, aesthetic improvement of the riverbank, the possibility of local control — and ignored what they couldn’t. The river, the city, and the people were all mistreated for years. The lead pipes and shoddy water treatment built on a century of pollution and neglect. They compounded and concentrated an ongoing human-made environmental disaster, with few analogies or precedents.

There are even calls to abandon Flint and resettle its citizens, at least temporarily, while its water infrastructure is replaced — or instead of paying to replace it. The entire city, once one of the most prosperous in the country, could become a brownfield.

“Governor Snyder called Flint his Katrina,” a legislative aide to one of Flint’s representatives told me. “But it’s really more like a Chernobyl.”



Synergy killed the Fantastic Four
By Tim Carmody | The Verge | January 19th, 2016
https://www.theverge.com/2016/1/19/10790450/marvel-secret-wars-comics-heroes-goodbye-fantastic-four

Marvel’s Secret Wars, the long-anticipated, continually delayed comics mega-event from creators Jonathan Hickman and Esad Ribic, concluded last week with the release of its ninth and final issue. It destroyed the entire Marvel Universe of the main comic line, along with the 15-year-old branching Ultimate Marvel Universe, the direct inspiration for most of Marvel’s new movies. Both universes were eventually restored and merged (with a few important differences to set up new storylines). So from the start, Secret Wars was overflowing with ideas, scenarios, and moments that there was neither time nor space to properly resolve. But since one of the biggest themes of Secret Wars was the conflict between what we can dream for our world and the practical limits of reality, it feels appropriate that the comic’s embattled production and release mirrored exactly that.

The most significant change for Marvel after Secret Wars is the end of the Fantastic Four. The core group is no longer a team, and the Fantastic Four title, which has published nearly 700 issues since 1961, will not continue. Ben Grimm, aka the Thing, is now featured in the movie-boosted Guardians of the Galaxy; Johnny Storm, the Human Torch, has joined the Inhumans, whose profile in the Marvel Universe is rising (they were introduced on the TV show Agents of S.H.I.E.L.D.’s second season and the Inhumans movie is scheduled for 2019). For the near future, the Richards family proper (Reed and Susan, aka Mister Fantastic and Invisible Woman, and their children Franklin and Valeria) will not appear in a Marvel comic.

Fantastic Four was arguably the perfect comic for a time that’s now passed. Its themes of family, exploration, and a scientific approach to the impossible often gave way to regression, nostalgia, colonialism, and a continual circling around the same conflicts and plot lines. Endings in comics are always provisional, but for now, Secret Wars is the last Fantastic Four story. The whole series is an enormous and bittersweet sendoff to Marvel’s first family, marking the end of an important era in popular entertainment.

Nothing brought all of Marvel’s themes together like the Fantastic Four

“Fantastic Four was the flagship Marvel title,” says Tom Brevoort, Marvel Comics’ executive editor (and editor of Secret Wars). “It was the first [of Marvel’s new superhero titles] and it remained the vanguard all throughout the 1960s.” Fantastic Four served as the primary idea lab for the Marvel Universe, led by Jack Kirby, who co-created the characters with writer / editor Stan Lee, drew the first 102 issues, and is generally recognized as the series’ primary visionary. Key Kirby-created characters like Black Panther, the Inhumans, Galactus and the Silver Surfer, the Molecule Man and Doctor Doom (both of whom are primary movers in Secret Wars), and Ronan the Accuser were all introduced in Fantastic Four.

But “the World’s Greatest Comic Magazine” gradually lost readership, with a brief boost during Hickman’s run on the title between 2009 and 2012 that quickly faded afterward. “Of the people who are complaining the loudest about the series sunsetting, most of them haven’t read the title regularly in years,” Brevoort observes. The Fantastic Four films produced by Fox in 2005 and 2007 were solid but uneven, nowhere close to breakout hits like the Avengers, Spider-Man, or X-Men franchises. The 2015 reboot was a bona fide critical and commercial flop. Instead of pushing out the comic for a sure sales bump from the movie, Marvel canceled it.

Last fall, Marvel was reportedly negotiating with Fox to either buy back the FF movie rights or work out a partnership similar to the Sony-Marvel deal later announced for Spider-Man. When negotiations with Fox turned sour, Marvel CEO Isaac “Ike” Perlmutter is said to have called for the series’ cancellation rather than help promote a non-Marvel movie. Perlmutter is also said to have tried to ban writers and artists from creating new characters in cases where another company might be able to claim the movie rights. Or it may simply have been a decision to stop throwing good ideas after bad, and to put the company’s best resources behind its most successful or newly promising characters. One way or another, synergy killed the Fantastic Four.

This was probably inevitable. But it’s also a shame. To understand why, you don’t need to read every issue of the series (although about half of them are as good as anything in the history of comics); just look at Secret Wars.

Warning: extensive spoilers of Secret Wars follow.

Marvel Secret Wars

Here’s the gist: Reed Richards and a small group of similarly gifted (and morally compromised) genius superheroes are trying to stop the collapse of the many universes of Marvel. They fail, destroying universes and alienating their superhero peers along the way. The only person able to figure out what is happening and how to stop it is the villain, Doctor Doom, who manages to combine the destroyed realities into a new, fragmented world where he sets himself up as the ultimate ruler, hero, and God figure. (For unexplained, probably psychological reasons, the only thing his power cannot repair is his own damaged face, which he continues to hide behind a mask.) Doom also replaces Reed as Susan’s husband and Franklin and Valeria’s father, remaking his arch enemy’s family as his own. Meanwhile, Reed and a small group of other superheroes manage to survive the end of their universe to discover that Susan, the children, and the other superheroes weren’t destroyed, but live on in a patchwork parody of their former lives.

If you think that sounds like a crazy blend of cosmology, metafictional self-commentary, and complex, moving human drama, you’re absolutely right. But it’s not unique to Secret Wars — these are exactly the elements that the Fantastic Four brought to comics from the beginning, completely reshaping the pop culture that followed. DC’s The Flash injected fantastic science and kicked off the Silver Age, Spider-Man went deeper into everyman angst, and S.H.I.E.L.D. and Doctor Strange went into full psychedelia. But nothing pulled all these themes together like Fantastic Four.

As both a conclusion to the story and a commentary on the problems of managing a world of imaginary heroes, Hickman and Ribic’s endgame is remarkably elegant. And it culminates in a fight between Reed and Doom — a physical and psychological battle where every line exchanged has double or triple meanings.

Doom wants Reed to appreciate all that he saved from total destruction. “Your entire life, you have been distracted with the modern concerns so precious to you and your kind… when all that matters is survival.” “Nothing has ever been easy,” Reed responds. “You know what’s not easy? Having your life erased because someone wants to indulge themselves.” It’s an argument about the realities of business weighed against values and loyalty. But Doom wants to be loved, too. He wants Reed to admit that Reed has always thought himself better than Doom. Reed’s response is perfect: “I’ve always believed that you could be better than what you are.”

Doom’s sin, which Reed identifies and Doom eventually confesses, is that he is “so afraid of losing the things you’ve saved that you hold them too tight.” He may as well be talking to Marvel fans. Doom’s imagination is too limited to be a leader: as Black Panther tells him earlier, “raising up a new [Marvel Universe] would require a vision you just don’t possess.” The metafictional message is obvious: a world needs to grow and evolve, not simply cycle through every conceivable permutation of conflict between existing properties, like a child bashing action figures together. Doom finally admits that with the same power, tempered by compassion and guided by imagination, Reed would have done a better job at saving everything. At this, the Molecule Man transfers his power from Doom to Reed, and everything changes.

With Molecule Man’s power — which, literally and metaphorically, is full creative control of the Marvel Universe — Reed, Susan, and their children work to restore reality. It’s not a Crisis on Infinite Earths-style condensation into one timeline and continuity. The Marvel and Ultimate Universes are merged into a new Prime Universe, but the possibility of exploring other dimensions remains. And that’s the cause to which the Richards family chooses to devote themselves.

When Franklin Richards asks his dad, “are we not superheroes anymore?” Reed responds, “it’s doing good that counts, not necessarily how you do it… So, no… No more superheroes for a while, just science. And no more Mister Fantastic, just Dad.” Ben Grimm and Johnny Storm have their own adventures to come, adding legitimacy to newly viable franchises. But Reed’s family remains a family. These explorers have the opportunity, away from the prying eyes of movie studios and entertainment companies looking to exploit them, to continue greater explorations than anything imaginable in a comic book not titled Fantastic Four.

“Great societies are crumbling around us, and the old men who run them are out of ideas.”

The first character to return to the newly restored Prime Universe is T’Challa, the Black Panther. He returns to the moment just before the collapse began, in New Avengers Volume 3 Issue 1, a Hickman-written comic that was released more than three years ago. He briefly seems to remember what’s happened, but resumes where he left off on this timeline, meeting with a group of teenage Wakandans to plan their exploration of a distant solar system and launch a new satellite. It’s a passing of the torch. T’Challa takes Reed Richards’ mantle as the preeminent scientist superhero who solves the problems no one else can. Meanwhile we see Peter Parker and Miles Morales, two generations of Spider-Man, begin to fight crime and meditate on power and responsibility. The fact that T’Challa and Morales are characters of color is not an accident. The old and new are joined and reconciled.

“Great societies are crumbling around us,” says Black Panther, “and the old men who run them are out of ideas. So all eyes turn to you, our children — to build us something better.”

We learn from T’Challa that with the rest of the world abandoning exploration, “Wakanda now possesses the preeminent space program on the planet.” Hickman has played with this theme before: in Fantastic Four 579, Reed Richards shocks attendees of a TED-like event called the Singularity Conference by chastising their narrow vision and warnings of global calamity. “The future of man is not one billion of us fighting over limited resources on a soon-to-be-dead planet, but one trillion human beings spanning an entire galaxy,” he says. “The future of man is not here. It is out there. Because it’s our new horizon. Because it’s what’s next.” Mister Fantastic and the Fantastic Four can’t exist in a world resigned to its own destruction, content with merely holding onto whatever’s left.

Fantastic Four, Jack Kirby

Fantastic Four was first published in 1961. It was a time of both apprehension and optimism, particularly about science, for which space exploration was the ultimate symbol of progress. The four characters get their powers in a space flight accident. They become heroes; they become monsters. But above all, they remain scientists and explorers of the farthest reaches of outer space. The Kree, the Skrulls, Galactus, the Silver Surfer, Thanos, even later the Inhumans, and the entire cosmic backdrop of Guardians of the Galaxy all appear in Marvel Comics because of the space exploration of the Fantastic Four (and sometimes the Avengers). The fundamental assumption of Fantastic Four is that human space exploration would continue, that it would be led by heroes, and that the places, creatures, and objects they encountered would enrich all of our lives.

This assumption is no longer viable, in real life or in fiction. The Marvel Cinematic Universe has yet to depict the human exploration of space that characterized so much of the early franchises. (Guardians of the Galaxy’s Peter Quill is half-human, half-alien, and he was kidnapped by aliens rather than electing to become an intergalactic traveler.) Outer space, and encounters with aliens, are threatening enough to give Tony Stark symptoms of PTSD, and prompt his and Bruce Banner’s creation of Ultron as a first line of defense. [Spoiler: that doesn’t work out.] Outside the Marvel Universe, movies like Interstellar, Gravity, and The Martian all emphasize the dangers and inhumanity of space, almost certainly more accurately than the classic Fantastic Four, but at a corresponding imaginative cost. Even Star Wars, a direct descendant of Jack Kirby’s work on Fantastic Four for Marvel and New Gods for DC, is necessarily set “a long time ago in a galaxy far, far away.”

Our human heroes are now rarely explorers in a conventional sense, and technology and inequality have largely replaced science and the unknown as the fundamental problem explored in superhero stories. Our heroes are soldiers like Captain America, inventors like Iron Man, or gifted-but-ordinary people solving problems at a smaller scale, like Spider-Man, Ant-Man, Daredevil, and Jessica Jones. The Marvel Universe is more earthbound than it’s ever been, and that’s why the Fantastic Four can no longer interact with the universe they founded and shaped — it’s no longer big enough to contain their ambitions.

In Brevoort’s eyes, this is just a part of an ongoing cycle. “At different points in time, different characters and concepts have struck sparks in the public consciousness, when at other times they haven’t been so successful.” But, he points out, the MCU is slowly becoming less bound by realism, with Doctor Strange, Captain Marvel, and the Black Panther and Inhumans movies around the corner. Science, mysticism, alternate dimensions, secret cities, outer space: the Marvel Universe is getting bigger, weirder, and more fantastic in a hurry. “We live in a world in which Ant-Man and Guardians of the Galaxy are blockbuster movies,” says Brevoort. “So I think it’s only a matter of time until we see [the Fantastic Four] again.”

“Nostalgia by itself isn’t enough.”

But if the Fantastic Four are to return, they have to return transformed. As Brevoort says, “nostalgia by itself isn’t enough.” The very real threat of nuclear apocalypse might have pointed an earlier generation’s imagination to the stars, but our apocalypse is here, in the form of climate change. The futurists Reed chastises for focusing on the world’s immediate economic needs are not entirely wrong. Even Doctor Doom is not entirely wrong. Reed and his family had to be erased from existence for him, and for the readers, to recognize that.

“I’ve learned that the difference between living and dying is managing fear,” Reed Richards tells his wife Susan, shortly before they are ushered off stage, possibly forever. “Not being so afraid of losing the things you love that you hold them too tight.” The final act of the Fantastic Four is both an indictment of our failure of imagination, and a testament to its power. “I’m letting go,” Reed says. “Because now I believe in expansion. I believe we endure.”


]]>
Tim Carmody <![CDATA[What’s behind Amazon’s baffling decision to ban Apple TV and Chromecast?]]> https://www.theverge.com/2015/10/2/9439281/amazon-ban-apple-tv-chromecast-why 2015-10-02T12:34:03-04:00 2015-10-02T12:34:03-04:00

Amazon’s decision to ban sales of Apple TV and Google’s Chromecast and Nexus Player from its retail store, including its partner stores, somehow manages to be both a total surprise and not a surprise at all.


It’s not surprising because everyone’s gotten used to a multi-front platform cold war, where the big tech companies try to outmaneuver each other by controlling as much of their own ecosystems as possible, while grudgingly granting and denying access to each other’s products and services depending on the deals and strategic advantage available. I mean, this is basically the reason Amazon’s Prime Video service isn’t on Apple’s or Google’s video devices in the first place.

This is not the official explanation. A spokesperson released this statement to reporters on Thursday:

Over the last three years, Prime Video has become an important part of Prime. It’s important that the streaming media players we sell interact well with Prime Video in order to avoid customer confusion. Roku, XBOX, PlayStation and Fire TV are excellent choices.

(At press time, an Amazon spokesperson declined to answer further questions on this decision.)

Just like Apple, Google, Facebook, or Microsoft, Amazon is a big, diversified tech company now, with a lot of pieces that all need to work together (and lots of internal stakeholders making sure no other part of the business is making their business more difficult). On top of that, Amazon in particular is known to play hardball, especially when its executives think their product has an advantage. Pulling book publishers’ entire lists from its store over price disputes is just one example. (Apple, too, has been known to pull products from its store for similar reasons.) Take those two pieces — the platform strategy tax, and the retail monopolist — and put them together, and it suddenly seems amazing that Amazon ever sold Apple and Google video devices in the first place.

So what’s changed? Given the timing, it’s hard to avoid concluding that Amazon is responding to changes in Apple TV. There’s a new one coming with a huge marketing push to go with it. The new Apple TV hits virtually all of the competitive advantages the Fire TV previously enjoyed: universal voice search, games, and an app store. It’s not crazy to think that Amazon wants to do whatever it can to keep the Apple TV from gaining so much traction that it becomes the default set-top box. It’s using the biggest lever it has: its own retail store. Including Chromecast and Android TV may just be a feint, a fig leaf, a post hoc justification. Amazon may not be able to put a brake on sales if Apple TV becomes a hit, but it doesn’t have to contribute to them.

The solution to Amazon’s posited problem is to just put Prime Video on those platforms. Maybe it can’t — or maybe it won’t.

Chromecast 2

On its face, it’s insane that the biggest e-retailer in the world, looking at the stunningly lucrative consumer electronics market, would stop selling some of its biggest sellers. As of this writing, Google Chromecast was the sixth-best seller in Amazon’s US electronics store, and Apple TV was 14th — two of just a handful of non-Amazon products at the top of the list.

What is the endgame here? In Amazon’s standoffs with publishers, it’s always been clear what Amazon’s side wanted: lower wholesale costs and full discretion over retail prices. Here, it’s not clear at all what the Amazon side wants.

Does it hope to pressure Apple or Google to add Amazon Prime to their devices? This is probable. In 2013, I was told, by someone who was certainly in a position to know, that Amazon expected Prime Video would be on the Apple TV by the end of that year, and certainly it would be on the Apple TV and Chromecast before Amazon released its own video player. (That happened a year and a half ago.) Amazon’s team is fueled by three years of frustration. But both Apple and Google now have open processes for developers to make and submit apps for their TV platforms, just like Amazon has done for the iPad and Android tablets. If Amazon just wanted to get its app on the new Apple TV, it would be hard for Apple to refuse.

Is Amazon looking for concessions on payments made through the third-party devices, or some other detail? Always! But I wouldn’t hold my breath. Is it hoping that the absence of the other popular video players will goose sales of Fire TV? Maybe! It could also simply be all of the above.

Is Amazon standing on principle? It’s not as if Amazon holds or has ever held this policy beyond digital video. Look at ebook readers in Amazon’s electronics store. You’ll find e-readers and tablets by Barnes & Noble and Kobo, Boyue, and Key Ingredient right below the Kindles. Nobody’s confused about whether or not you can read ebooks from Amazon on those devices.

The power of controlling the portal is more important than the specific thing people find at the end of it

But Amazon sells them, just like Google lists Amazon in its search results and Microsoft lets Apple put iTunes on Windows PCs. And for good reason. The power of controlling the portal is more important than the specific thing people find at the end of it. And when Amazon stacks the Kindle against other e-readers and tablets directly, it does a pretty good job of selling Kindles. The company, flatly, is good at retail — not because it relentlessly curates its storefront, but because it lets a million flowers bloom.

The way Amazon works — the way it has always worked — is that Amazon.com is the internet retailer of first resort. Whether you’re a casual or a dedicated customer, the message projected is that if you can’t buy it on Amazon, it doesn’t exist — and you didn’t want it anyway.

Traditionally, the point of Prime Video — the point of Amazon Prime — is for streaming video, discounted shipping, and other services to serve as a loss leader for retail sales. Maybe you get the subscription to watch videos, but you use it to buy things in the store. It’s the same way Wal-Mart gets you into the store with cheap books or DVDs, but gets you to leave the store with a new television set. It’s an old, powerful model that happens to work.

The ban on Apple and Google devices suggests that this balance has shifted. Amazon is not necessarily a retailer first. Now, the point of Prime Video might not just be to juice retail sales. The point of Prime Video is to sell Prime Video — and to support the broader Amazon-branded ecosystem.

Fire TV stick

I’ve been so confused by Amazon’s decision that I asked for help. James McQuivey and Sucharita Mulpuru are analysts at Forrester, with McQuivey focusing on digital disruptions and Mulpuru on retail. It turned out that they were puzzled by many of the same things as I was, but they also offered insight.

“In all [of Amazon’s] actions, what I think is consistent, is the self-serving nature of their positions and dubious sincerity around anything they actually say,” Mulpuru said, adding that Amazon’s customer-first rhetoric has often been a ruse to improve its own position.

“To claim that it’s about ‘confusion’ is ludicrous,” she says. “They could solve that by putting verbiage on their product detail page and in packaging that the items aren’t compatible with Prime Video or whatever they think the cause of confusion is. I think it’s pretty clear that this is about a push for market share in the increasingly competitive media world where the hardware is a powerful hook into the customer’s home.”

“Maybe the conversation got ugly and Amazon got huffy, and it led to this.”

For McQuivey, the ban is likely the consequence of an “escalating conversation behind closed doors” that got out of hand. “I have to assume that there were behind-the-scenes conversations that involved Amazon trying to persuade these other companies to put Amazon Prime — clearly the number two paid streaming service in the US — on their devices. And maybe the conversation got ugly and Amazon got huffy, and it led to this,” he says.

What McQuivey is most surprised by is that Amazon was willing to give up not just the sales revenue from Apple and Google video devices, but the data. Amazon could assemble an entire picture of Apple and Google’s customers — correlating their preferences, their other purchases, et cetera — to create a total map of its own markets. (The other possibility, he says, is that Amazon’s team already had such a picture and didn’t like what the data was telling them.)

However, McQuivey adds that the standoff may also be a function of the relative strength of Prime Video. “Award-winning shows, exclusive access to top content from other channels … Amazon Prime video is meaningfully differentiated in a way that iTunes and Google Play can’t touch.” In other words, Amazon may be more inclined to play poker because the company has a better hand.

This may all turn out to be posturing. Ultimately, it’s in Apple’s and Google’s interests to open up their television platforms, as they’re beginning to do, just as it’s in Amazon’s interests to get its services on as many devices as possible and to sell anything and everything under the sun — the strategy it’s pursued so successfully for two decades.

The worst-case scenario is a continued creep toward the old telephone company model, with services, devices, ads, the tech stack, and the applications all being run by a single company. That’s the sort of customer-hostile practice the technology industry has always claimed to be moving us away from. But with each company devoting more and more resources to combating its rivals, everyone else — the partners, the independents, and especially the customers — can too easily pay the price.


]]>
Tim Carmody <![CDATA[Barry Diller on Aereo’s future and why buying Newsweek was a mistake]]> https://www.theverge.com/2013/4/29/4282976/watch-this-barry-diller-on-aereos-future-and-why-buying-newsweek-was 2013-04-29T15:44:51-04:00 2013-04-29T15:44:51-04:00
Aereo (STOCK)

Barry Diller is a media legend, a former Paramount CEO and USA Network mogul who helped launch the Fox television network. As chairman at IAC, he’s currently heavily involved with two major ventures in digital media: the news site The Daily Beast and the controversial internet television service Aereo. In an interview with Bloomberg TV, Diller discusses both his recent investments and how digital technology is transforming the news and entertainment industries.

“I understand broadcasting,” Diller says. “No incumbent wants anyone in. That is an unbreakable rule.” Aereo, he says, has the opportunity to participate in an emerging “radical revolution” in digital video, enabled by new bandwidth availability and changing customer behaviors. If 10 to 20 million households get tired of increasing cable fees and find Aereo’s proposition of digital access to network broadcasts worthwhile, the company will be “very profitable,” easily outstripping the cost and hassle of continued litigation with those same television networks.

Aereo as it exists is legal, argues Diller; the real danger to its future is that the networks’ complaints will lead to a hostile intervention from Congress. “My attitude has been to jump into something that looks difficult and is against what people think will succeed and plant my little flag,” says Diller, adding that “sometimes it gets kicked.”

“The ability to get the world to utilize the internet for all its information, entertainment, news, video — to me, it is a big shift.”

As for The Daily Beast, Diller acknowledges a big technological misstep. “I wish I hadn’t bought Newsweek. It was a mistake,” he says. “Printing a single magazine is a fool’s errand if that magazine is a newsweekly.” Luxury and niche magazines, Diller says, can continue to survive in print, supported by advertising and a lack of real competition, but not news. Digital media means that news, even a magazine-style blend of news and opinion, can be deployed in real time, making “newsmagazine” in the traditional print-centric sense an “odd phrase.” While Diller says the now all-digital Newsweek/Daily Beast has a great team behind it, “I don’t have high expectations.”

]]>
Tim Carmody <![CDATA[SOPA author Lamar Smith wants congressional guidelines to replace peer review for federal science research]]> https://www.theverge.com/2013/4/29/4282270/house-chair-wants-congress-guidelines-replace-peer-review-science-research 2013-04-29T13:42:02-04:00 2013-04-29T13:42:02-04:00
agency logos

Texas congressman Lamar Smith, SOPA author and new chair of the US House of Representatives’ Committee on Science, Space, and Technology, wants to overhaul how the National Science Foundation funds research projects. According to Science, Smith’s draft bill (called the “High Quality Research Act”) would require that the director of the NSF certify prior to any funding award that the proposed project is “groundbreaking,” of “the finest quality,” does not duplicate any other federally-funded research, and will “advance the national health, prosperity, or welfare, and … secure the national defense.” It also requires that the National Science Board (including the president’s council of science advisors) issue a report making recommendations on how these criteria could be applied to all other federal science funding.

“It’s a dangerous thing for Congress, or anybody else, to be trying to specify in detail what types of fundamental research NSF should be funding.”

This language may seem reasonable enough—it echoes the 1950 law that founded the NSF—but Science’s Jeffrey Mervis writes that the bill “in effect, would replace peer review at the National Science Foundation.” Currently, NSF research grants are approved by peer scientists judging proposals on just two criteria: their intellectual merit and their broader impact on the science community, as well as in the world at large.

Smith’s proposal would alter the mission of the NSF, the principal funding source for pure scientific research in the US, by making each project fulfill the agency’s aggregate goals. It opens up any project (and the director who approved it) to political scrutiny on that basis. Finally, the prohibition against duplicating current research makes the NSF pick winners and losers, rather than allowing multiple researchers to work toward the same goals with different degrees of success.

To see how these changes would affect federal science research, consider what’s already happened during Smith’s tenure as committee chair. Under a rider to a spending bill passed this year, the NSF is already prohibited from funding political science research unless it promotes economic development or national security. At hearings on the NSF budget two weeks ago, Smith questioned the acting NSF director and presidential science advisors about the merit of specific research projects, arguing that the NSF should require that funded research “benefit Americans.” In response to a later question, presidential science advisor John Holdren countered that “it’s a dangerous thing for Congress, or anybody else, to be trying to specify in detail what types of fundamental research NSF should be funding.”

“Representative Smith’s view of the world is flawed. He seems to believe that legislation of this sort is the answer to large, difficult problems.”

Smith followed the hearings by asking acting NSF director Cora Marrett to defend five specific projects in detail. In response, the ranking Democrat on the science committee, Eddie Bernice Johnson, wrote a letter to Smith arguing that “the moment you compromise both the merit review process and the basic research mission of NSF is the moment you undo everything that has enabled NSF to contribute so profoundly to our national health, prosperity, and welfare.”

On his blog, chemist Derek Lowe takes a more cynical view, writing that “people will rewrite their grant applications in order to make them look attractive under whatever rules apply—which, naturally, is how it’s always worked.” Pointing to Smith’s work on SOPA, Lowe argues that “Rep. Smith’s view of the world is flawed. He seems to believe that legislation of this sort is the answer to large, difficult problems.”

Update: On Monday, President Barack Obama appeared to address this controversy directly in his remarks for the 150th anniversary of the National Academy of Sciences:

One of the things that I’ve tried to do over these last four years and will continue to do over the next four years is to make sure that we are promoting the integrity of our scientific process; that not just in the physical and life sciences, but also in fields like psychology and anthropology and economics and political science — all of which are sciences because scholars develop and test hypotheses and subject them to peer review — but in all the sciences, we’ve got to make sure that we are supporting the idea that they’re not subject to politics, that they’re not skewed by an agenda, that, as I said before, we make sure that we go where the evidence leads us. And that’s why we’ve got to keep investing in these sciences.

And what’s true of all sciences is that in order for us to maintain our edge, we’ve got to protect our rigorous peer review system and ensure that we only fund proposals that promise the biggest bang for taxpayer dollars. And I will keep working to make sure that our scientific research does not fall victim to political maneuvers or agendas that in some ways would impact on the integrity of the scientific process. That’s what’s going to maintain our standards of scientific excellence for years to come.

]]>
Tim Carmody <![CDATA[Surprise! Jeff Bezos explains to Amazon investors why no profits are a good thing]]> https://www.theverge.com/2013/4/12/4217794/jeff-bezos-letter-amazon-investors-2012 2013-04-12T16:50:51-04:00 2013-04-12T16:50:51-04:00
Jeff Bezos

Jeff Bezos’ annual letter to Amazon investors is always worth reading, a peek into the mind of arguably the most important CEO in the technology industry. In 2011, Bezos wrote a manifesto, positioning Amazon as a universal self-service platform, obliterating traditional gatekeepers and unlocking new sources of creativity. In 2012, Bezos plays his hand closer to the vest, returning to familiar pro-consumer themes, arguing that Amazon’s continued investment lets the company anticipate customer needs and meet them, sometimes even before customers can articulate them. Amazon, Bezos says, is focused on its customers, not its competition or Wall Street or short-term profitability — a tricky proposition in a letter to investors who are, typically, worried about precisely those things.

As examples, Bezos cites AutoRip, monthly royalties for Kindle Direct Publishing authors, price guarantees, and Amazon Web Services’ track record of adding new features while reducing prices. If the unifying theme of 2011’s letter was self-service, the unifying theme of 2012’s is speed and automation in customer service. Bezos would have you believe that Amazon’s technology is faster, smarter, and more aggressive than anyone else’s, and that this speedy intelligence is focused on surprising, delighting, and earning the trust of Amazon’s customers.

“Long-term thinking squares the circle.”

Much of this is well-trodden ground for Bezos and Amazon: some of the phrases (like going up “blind alleys” that turn into “broad avenues”) are copied verbatim from past keynotes. So what’s the goal here?

Bezos needs this letter to do two things. First, it must address the fact that Amazon lost money in fiscal 2012: a net loss of $39 million, or 9 cents per share. A loss is never good news for investors, even investors as sanguine as Amazon’s. Without naming the source or author, Bezos quotes Matthew Yglesias’ admiring characterization of Amazon as a “charitable organization being run by elements of the investment community for the benefit of consumers.” Bezos doesn’t grant the premise, arguing that “long-term thinking squares the circle. Proactively delighting customers earns trust, which earns more business from those customers, even in new business arenas. Take a long-term view, and the interests of customers and shareholders align.” Bezos needs to assure investors that he’s playing a longer game here, one where money spent on infrastructure and customer loyalty now cements Amazon’s position as the universal retailer of first resort for the foreseeable future, a license to print money next year and beyond.

Not everyone believes that Amazon is so good to its customers

The second goal of Bezos’ letter is even more ambitious. After all, if investors were truly unhappy with little to no profits, they’d vote with their dollars and sell their stock. No. Bezos needs to reassure everyone — investors, the press, the public — that Amazon, the unstoppable monster of the tech industry, can still be nice.

A year ago, Amazon was emerging from a fierce fight for the future of book publishing as the undisputed winner. Now, like many other tech companies, it’s seen as the Evil Empire. When Amazon bought the social portal Goodreads last month, many vocal Goodreads users were disgusted and terrified, promising to delete their accounts to deny Amazon their data, like Russian peasants burning their villages in anticipation of Napoleon’s armies.

Amazon is done with radical disruption and moving on to radical surprises

Amazon, to many people both in and beyond the world of books and publishing, is the enemy, the same way Microsoft, Apple, Google, or Facebook have been the enemy in their respective fields. The only thing that keeps Amazon from being perceived widely as an enemy is the equally widespread belief that nobody offers a better experience than Amazon, whether in e-reading or online retail. It’s essential that Bezos maintain the perception, the trust, that nobody delivers more for their customers than Amazon. If Amazon is a victor, it must be a benevolent victor: if Bezos is a king, greedily hoarding secrets and profits from investors, it must be because he is a river to his people — not because he is looking for more worlds to conquer. That’s why he has no problem taking up the mantle of being “too good” to customers; the alternative is suicide.

So last year’s Amazon, the radical engine of disruption, is gone. This year’s Amazon is the company that loses to win, that seeks to use its unique position not to fleece its customers, but to surprise and delight them — and in delighting them, more fully conquer them forever.

]]>
Tim Carmody <![CDATA[Opportunity, meet problem: Facebook Home’s uneasy relationship with Google]]> https://www.theverge.com/2013/4/4/4184006/opportunity-meet-problem-facebook-home-uneasy-relationship-google 2013-04-04T18:05:04-04:00 2013-04-04T18:05:04-04:00
Facebook Home gallery

Answering questions after today’s Facebook Home event, Mark Zuckerberg was full of praise for Google’s smartphone platform. “We think that Google takes their commitment to openness in the ecosystem really seriously,” he said, regarding the possibility Google might try to lock out Facebook. Google, he said, was aware of Facebook’s work, although wasn’t a partner like a host of other industry players. “I actually think this is really good for Android,” he added, setting up a gentle dig. “Most app developers put most of their energy into iPhone.”

Meanwhile, Zuckerberg confirmed that Facebook Home is essentially an end-run around Google’s services wherever they compete directly with Facebook’s, with the ultimate goal of capturing more dollars. Home puts Facebook’s social updates, Facebook’s contacts, Facebook’s messaging service, and crucially, Facebook’s advertising directly on an Android user’s home and lock screens. That’s a much more direct attack on Google’s business.

“With phones, there’s no room for a right-hand column of ads. That forced us to think about what the business looks like on mobile.”

“With phones, there’s no room for a right-hand column of ads. That forced us to think about what the business looks like on mobile,” Zuckerberg tells Wired’s Steven Levy in an interview released today. At the event, he fielded a question: “We may see ads in cover feed?” Zuckerberg: “Yup!” This is targeted, full-screen, push advertising in your pocket when Google is still selling banner ads inside free apps. Facebook is squatting on prime Android screen estate. How does this not drive Google’s Larry Page completely insane?

Google’s official position on Facebook Home is diplomatic. “This latest collaboration demonstrates the openness and flexibility that has made Android so popular,” a spokesperson told VentureBeat. “And it’s a win for users who want a customized Facebook experience from Google Play — the heart of the Android ecosystem — along with their favorite Google services like Gmail, Search, and Google Maps.”

This is important, because it reveals the territory Google is determined to defend. Facebook Home might seriously skin Android’s user interface, but it doesn’t cut out Google’s core the way Amazon did with the Kindle Fire. Home doesn’t promote Graph Search over web or local search, Facebook Messages over Gmail, or Facebook’s App Center over Google Play. Facebook is pushing the boundaries, but it’s also playing within the lines.

And for now, it doesn’t have to play outside them. Facebook Home already inverts the system-level relationship between the social network and the launch screen. It already strips away nearly all of the chrome and UI features that make Android look like Android. Facebook just put the entirety of the core Android experience inside a blue-tinted, ad-sponsored wrapper, and then hid the wrapper as an app inside Google’s own store.

Facebook just put the core Android experience inside a blue-tinted, ad-sponsored wrapper

Now, to be fair, Facebook Home does extend the functionality of Android in terms of notifications, multitasking, and messaging. It solves real problems, and adds real value to an already open, customizable platform. But we’ve seen this story play out before. Remember Google Toolbar? Before search, autofill, pop-up blockers, and webapp launchers were built into web browsers, there was Google Toolbar, offering all of those things as part of Firefox and Internet Explorer. Eventually, though, Google added more and more layers and extensions into the browser, growing along with its suite of web applications, including a personalized “Home” page incorporating news feeds and email updates. Finally, tired of being bound to another company’s update schedule and codebase, Google built its own web browser and bought its own mobile operating system.

Google could still move fast, break things, and one of them will be Facebook’s skin

Likewise, Facebook can and will continue to iterate to bring more of its features into Home, going deeper and deeper into the OS to do so. “I think this can start to be a change in our relationship that we have with these computing devices,” Zuckerberg said today, touting Android’s built-in capacity for software developers to extend it without limit. “We just think about it as software,” Facebook Home product manager Adam Mosseri told The Verge, when asked about the dividing line between Facebook and Android on a Facebook Home device. And it may all just be software. But now Facebook is stuck with Android’s update schedule and fragmentation. The company can try to control that somewhat by working directly with manufacturers, but so long as Facebook doesn’t fully control the operating system, Google could still move fast, break things, and one of them will be Facebook’s skin. Amazon didn’t fork Android for the Kindle Fire because it’s fun.

Facebook Home hands-on photos

Google is Facebook’s model, the other great web-first software company of this century, run by engineers and supported by advertising. Neither was the first of its kind; they were the companies that won, and soon looked for other worlds to conquer. They’re the two consumer software companies that most quickly learned that data, not discs or downloads or even dollars, is the real currency of the 21st century. But the nature of the data they collected means they approach similar problems in radically different ways.

If Google Now is the street, Facebook Home is the café

Ultimately, it’s a clash of styles. Compare Facebook Home to Google Now. Google Now also tries to pull information out of different apps and web services and foreground it on your phone. But Google Now presumes that what users want most is data scoured from official sources: maps, weather information, train schedules, sports scores. Facebook assumes that users want status updates and photos from people close to them. Which of those is more useful or entertaining? Which of those is more valuable to users, advertisers, and the technology platforms that connect them? If Google Now is the street, Facebook Home is the café. Where do you spend more time? Where do you spend more money?

For Facebook, Google is both a problem and an opportunity. For Google, Facebook is mostly just a problem.

]]>
Tim Carmody <![CDATA[How the Digital Public Library of America hopes to build a real public commons]]> https://www.theverge.com/2013/4/3/4178980/how-the-digital-public-library-of-america-hopes-to-build-a-real 2013-04-03T15:11:03-04:00 2013-04-03T15:11:03-04:00

The Digital Public Library of America is a beautiful idea. Take the physical-to-digital ambition of Google Books and wed it to the civic spirit of the US public library system, providing a centralized portal to a decentralized network of digital media from libraries, museums, universities, archives, and other local, regional, and national collections. Framed in this way, it all seems so logical, so proper, so clear — everything the internet as a public commons promised to be. Surely the messy reality of copyright law, limited local budgets, or the cat-herding that goes into any grand alliance of independent institutions was bound to foul it up somewhere.

The DPLA is in fact real, and will hold a launch event on April 18 at the Boston Public Library. In an essay in The New York Review of Books, Harvard University Librarian Robert Darnton describes how the DPLA’s organizers overcame some of that messy reality to get the new nonprofit off the ground, and some of the obstacles (read: copyright) with which it’s still grappling. (As a historian of the 18th century, Darnton also unsurprisingly places the DPLA within the overlapping traditions of the Enlightenment and the American Revolution.)

“Any reader [can] consult works that used to be stored on inaccessible shelves or locked up in treasure rooms.”

Unlike Google Books, the DPLA doesn’t hoover up institutions’ documents to be stored on its own servers. Its primary goal is to support coordinated scanning efforts by each of its partner institutions, and to act as a central search engine and metadata repository. Most of these libraries and museums have been slowly scanning and cataloguing their collections for years; the DPLA helps make those materials aggregatable and interoperable. At least initially, it’s not nearly as focused on printed books as Google has been, but rather gathers an eclectic mix of texts, photos, data, and art, especially rare documents. It also provides a sophisticated frontend portal for discovery and research.

Darnton describes the DPLA’s goal well:

The user-friendly interface will therefore enable any reader — say, a high school student in the Bronx — to consult works that used to be stored on inaccessible shelves or locked up in treasure rooms — say, pamphlets in the Huntington Library of Los Angeles about nullification and secession in the antebellum South. Readers will simply consult the DPLA through its URL, http://dp.la/. They will then be able to search records by entering a title or the name of an author, and they will be connected through the DPLA’s site to the book or other digital object at its home institution.

The DPLA’s partner institutions include the Smithsonian, the National Archives and Records Administration, Harvard University Library, the New York Public Library, ARTstor, and a number of state and regional digital library initiatives that will act as “service hubs” for local libraries and museums. The one glaring omission that would seem crucial to any “public library of America” is the Library of Congress; Darnton writes that “the sponsors naturally hope that the Library of Congress also will participate.” The organization is funded for its first two years by grants from federal endowments and private foundations (Sloan, Arcadia, Knight, Soros, the NEH, the Institute of Museum and Library Services — or as DPLA Executive Director Dan Cohen affectionately calls them, “the usual suspects”). But continued fundraising and evangelizing remains a big part of Cohen’s mandate going forward.

The DPLA as a repository, the DPLA as a platform

The competitive goal is to catch up with the rest of the world. “Europeana has aggregated 20 million cultural heritage objects from hundreds of sites,” Cohen tells The Verge. But as a historian and digital humanist, Cohen is more excited about what he calls “the DPLA as a platform.”

“The launch will showcase some transformative uses [of the archive] that show what you can do with a massive digital library that’s been operationalized,” says Cohen. The DPLA has been equipped with a rich API for developers, artists, and others to engage, adapt, and revisualize art and text. “The DPLA’s terms, if you look at them, are extremely permissive,” Cohen adds. “We are really fighting for a maximally usable and transferrable knowledge base. Everything, where possible, will be CC-zero licensed. If you’re Google, you can come right in and take everything. It’s just like Wikipedia. You can grab this stuff and use it as you want.” Text mining, mapping, art projects — it’s all open for business.
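That permissiveness shows up at the technical level, too: a search against the DPLA can be built as a plain URL that returns JSON. The sketch below is illustrative only; the endpoint and parameter names are assumptions based on the DPLA’s v2 API as documented around launch, and the key is a placeholder:

```python
# Hedged sketch of querying the DPLA's public search API. The host,
# path, and parameter names here are assumptions (v2 API, "items"
# endpoint, "q"/"page_size"/"api_key" parameters); "YOUR_API_KEY"
# is a placeholder for a key the DPLA issues on request.
from urllib.parse import urlencode

def dpla_search_url(query: str, api_key: str, page_size: int = 10) -> str:
    """Build a DPLA item-search URL. The response is JSON containing
    a list of metadata records, each pointing back to the digital
    object at its home institution."""
    params = {"q": query, "page_size": page_size, "api_key": api_key}
    return "https://api.dp.la/v2/items?" + urlencode(params)

# Darnton's hypothetical high school student, searching for antebellum
# pamphlets on nullification and secession:
print(dpla_search_url("nullification secession", "YOUR_API_KEY", 5))
```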

The DPLA as an advocate for open access and enrichment of the public sphere

Cohen also notes that the DPLA’s “enrichment of the public sphere will be important from an advocacy role.” He wants to create a platform where academic scholarship, whether in journals or monographs, can be disseminated and preserved in open formats for current and future generations. He wants to find ways for public libraries to engage in collective action with book publishers to make e-books as available as possible to US citizens. He wants the DPLA to explore alternative approaches to copyright that preserve authors’ and publishers’ chief profit window while also maximizing a work’s circulation, including the “library license” that would allow public, noncommercial entities (like the DPLA) to have digital access to certain works in copyright after five years, or Knowledge Unlatched, a consortium that purchases in-copyright books for open access. The DPLA also wants to work directly with authors to donate their books to the commons.

So while the DPLA’s holdings aren’t limited to books, its inauguration could pose a real challenge to the current regime of publishers, booksellers, and search engines in the US. It aims to tilt knowledge repositories away from private control and toward a public commons. The only things the brand-new institution will have to navigate are getting money, finding talented developers, fostering public awareness, and balancing the interests of a wide range of stakeholders, both public and private, commercial and noncommercial, with the collected cultural heritage of a nation in the balance. But while those obstacles are formidable, they’re small compared to the inertial forces that could have kept the DPLA from ever getting off the ground. The thing exists. That’s the hardest part, and that’s what matters.

]]>
Tim Carmody <![CDATA[Goodreads is no Instagram: Amazon paid about $150 million]]> https://www.theverge.com/2013/3/29/4161586/goodreads-no-instagram-amazon-paid-150-million 2013-03-29T15:17:13-04:00 2013-03-29T15:17:13-04:00

Terms of Amazon’s acquisition of Goodreads haven’t been disclosed, but that won’t stop people from speculating. Bloomberg Businessweek put forward some very sketchy arithmetic that spun the deal into more than $1 billion. Not only is this figure absurd on its face — remember, this was a pretty shocking figure for Facebook’s purchase of Instagram at the time, and eventually came down — it requires some pretty hefty leaps of logic. For instance, Pinterest and Twitter are valued at about $50 per user, and Facebook at $58, so Goodreads — a slow-growth network that’s hard to monetize in a market with few deep-pocketed buyers — must be worth $55 per user, or $880 million. The author then rounds up, arguing that “any decent I-banker would try to push that valuation.” By the way, this would mean that Instagram was actually severely undervalued when it was sold, since it only netted $29 per user, even though both Twitter and Facebook were bidding up the price.

Thankfully, saner voices bearing better information have prevailed. AllThingsD’s Kara Swisher has sources close to the deal who say Amazon picked up Goodreads for an unspecified mix of cash and stock worth somewhere between $140 million and $150 million. In context, Amazon’s stock has been trading at record highs, with its current market cap a shade over $121 billion. As for Businessweek’s “overly simple, back-of-the-envelope” figure? Swisher’s sources say that number is “simply wrong.” Goodreads’ exit is still remarkable: as much as $10 for each of its users (many of whose accounts are no longer active) and a hefty return on its modest $3 million in funding. No wonder True Ventures’ Jon Callaghan, whose firm invested in Goodreads, called the purchase a “phenomenal outcome for True” and a “wonderful outcome for all parties.” Even if Goodreads had other interested buyers, $150 million is nothing to sneeze at.
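The per-user arithmetic is easy to check. Assuming a user base of roughly 16 million members, an assumption implied by Businessweek’s own $55-per-user and $880 million figures rather than stated anywhere in the reporting:

```python
# Back-of-the-envelope check of the competing valuations above.
# The 16 million user count is an assumption inferred from
# Businessweek's $55/user -> $880M math, not a reported figure.
users = 16_000_000

businessweek_price = 55 * users     # Businessweek's per-user logic
swisher_price = 150_000_000         # the figure Swisher's sources report

print(businessweek_price)                 # 880000000
print(round(swisher_price / users, 2))    # roughly $9.38/user, i.e. "as much as $10"
```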

]]>
Tim Carmody <![CDATA[Password denied: when will Apple get serious about security?]]> https://www.theverge.com/2013/3/29/4158594/password-denied-when-will-apple-get-serious-about-security 2013-03-29T12:45:55-04:00 2013-03-29T12:45:55-04:00

Last Friday, The Verge revealed the existence of a dead-simple URL-based hack that allowed anyone to reset your Apple ID password with just your email address and date of birth. Apple quickly shut down the site and closed the security hole before bringing it back online.

The conventional wisdom is that this was a run-of-the-mill software security issue. “It’s the kind of server misconfiguration you see on the internet ten times a week,” one might say. “And it’s not as if your iTunes password even gets you to real money. This is why Apple added two-step verification.” Or, “Apple saw the hole and shut it down before most users even knew it was there. This is how things are supposed to work.”

No. It isn’t. It’s a troubling symptom that suggests Apple’s self-admittedly bumpy transition from a maker of beautiful devices to a fully-fledged cloud services provider still isn’t going smoothly. Meanwhile, your Apple ID password has come a long way from the short string of characters you tap to update apps on your iPhone. It now offers access to Apple’s entire ecosystem of devices, stores, software, and services.

“You’d think that if you were the security team at Apple… what you’d really be focusing on is that iForgot system.”

“Apple’s iForgot server is essentially the master password reset for its entire cloud service,” says cryptographer Matthew Green, a professor at Johns Hopkins University (and self-described “Apple fanboy”). Apple IDs have become the point of entry “for all of the data that people store on their phones, for all of the email they sort through iCloud. All of that data can be accessed essentially by resetting somebody’s password on iForgot.”

“You’d think that if you were the security team at Apple, and you had limited resources to devote to any part of the system, what you’d really be focusing on is that iForgot system,” adds Green. “You would have it audited both internally and also by at least one outside reviewer. And the fact that this very kind-of-stupid bug made it through whatever process Apple put in place, to me makes it seem very unlikely that Apple did those things.”



Apple ID isn’t just used for iTunes. Here are just some of the services underpinned by Cupertino’s universal login:

  • Apple Online Store
  • Apple TV
  • Bookmarks, Notes, and Reminders
  • Calendar, Contacts, and Mail
  • Documents in the Cloud
  • EasyPay
  • FaceTime
  • Find My iPhone
  • Find My Friends
  • Game Center
  • iBooks and iBookstore
  • iChat
  • iCloud
  • iMessage
  • iTunes Store
  • iPhoto and Aperture Purchases
  • iWork Publishing (publish.iwork.com)
  • Mac App Store
  • My Support Profile
  • register.apple.com

It’s not clear that Apple security immediately understood the nature of its server vulnerability even after it was disclosed. According to iMore’s play-by-play, Apple initially just put a maintenance sign over the iForgot page, preventing ordinary password resets. But even then, a hacker could still force a password reset and skip Apple’s security questions simply by entering a URL crafted as if the page were still accepting resets, fooling the still-online server into thinking those two questions had been successfully answered. When it became aware that user passwords were still vulnerable, Apple then took the iForgot server completely offline, which it could (and arguably should) have done straight away until the security hole had been plugged.
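The class of bug at work here, a server trusting a client-supplied signal that an earlier step in a multi-step flow was completed, is easy to sketch. The following is a hypothetical illustration of the pattern, not Apple’s actual code; every function and parameter name is invented:

```python
# Hypothetical sketch of the vulnerability class described above:
# a multi-step password-reset flow that trusts the client's claim
# that the security questions were answered, versus one that checks
# state the server itself recorded. Names are invented for illustration.

SESSION = {}  # server-side state, keyed by a session token

def broken_reset(user, new_password, request_params):
    # VULNERABLE: a client-supplied flag decides whether the
    # security-question step "happened" -- an attacker just sets it.
    if request_params.get("questionsAnswered") == "true":
        return f"password for {user} reset"
    return "denied"

def safe_reset(user, new_password, session_token):
    # SAFE: only the server marks the questions as answered, at the
    # moment they are actually answered; forged tokens get nothing.
    if SESSION.get(session_token) == ("questions_ok", user):
        return f"password for {user} reset"
    return "denied"

# The attacker crafts the parameter (or URL) themselves:
print(broken_reset("victim@example.com", "pwned", {"questionsAnswered": "true"}))
# The same trick fails against server-side state:
print(safe_reset("victim@example.com", "pwned", "forged-token"))
```

The fix Apple eventually deployed amounts to the second pattern: the server, not the request, has to be the source of truth about which steps of the flow have been completed.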

The most common response to the hacks was for users to enable two-step authentication, a long-awaited, recently-deployed security measure that requires access to a registered device as well as a password to access Apple ID services. Unfortunately, at the time of the hack, an option to enable two-step authentication for iCloud accounts had been introduced to the US, UK, Australia, Ireland, and New Zealand — and nowhere else. Many users who tried to turn on two-step authentication were subject to a mandatory multi-day waiting period before the password hack had even been fixed.

So Apple’s response to this crisis actually wasn’t perfect. It was sloppy, slow, and uneven. Just like the approach to ID security which got Apple and its customers into this mess.

Apple ID: your password is your passport

What harm could someone do with access to a user’s Apple ID? If the motive is simple vandalism, as in the case of Wired writer Mat Honan, Apple’s “Find My…” services make it easy to remotely wipe a user’s phone, tablet, or computer. Email, iMessage, iChat, or FaceTime lets hackers read or send private messages as the user, and iCloud would let them read, create, or deface other files as well.

More startling is the possibility that a password reset allows the hack to spiral out, iterating from one user to the next and from online to offline. Access to a user’s contacts gives a hacker access to a fresh pool of email addresses and dates of birth; access to iMessage gives them the specific email address associated with those contacts’ Apple IDs. “Find My iPhone,” “Find My Friends,” and Calendar can let you know where, when, and with whom a user and his or her contacts can likely be found. This can be particularly devastating if it’s a hack targeted at a particular user, with the specific goal of causing physical or material harm to that person or someone close to them.

But the real play here for both vandals and professional criminals is for personal data and documents. With an unlocked Apple ID, data can be harvested either through services like email or iMessage, or more likely by cracking open cloud backups of users’ devices. These backups contain app data, app and system settings (but not passwords), as well as photos and videos, text messages, voice mails, and other data.

It’s the equivalent of breaking into someone’s home by opening a first-floor window someone forgot to lock

“Apple doesn’t give a lot of detail about what gets backed up, but presumably everything on your phone is now in the cloud, assuming you do the default setup on an iPhone,” says Green. “So that’s a lot of data that’s now protected using essentially the same security system that was just protecting your iTunes account” three or four years ago.

It would be easy to retrieve copies of device backups, documents, contacts, mail, and messages from the cloud but otherwise leave a user’s profile intact; by the time a user knows something is amiss, he or she would only be aware that his or her old password is no longer functioning. Criminals don’t need continued access to users’ digital identities if they can browse full copies of their cloud data at leisure. Even strong encryption can be broken when time is no longer a factor.

All of this underscores the seriousness of Apple’s security lapse with iForgot. This was a high-priority system defeated with an extremely common form submission hack. It’s the equivalent of breaking into someone’s home by opening a first-floor window someone forgot to lock. Then imagine it happening again and again and again.

But Apple’s status as the largest technology company in the world, and the unique level of trust Apple users have in its systems, actually makes it worse than that. “Imagine that the Secret Service left the front door of the White House unlocked, forgot to turn on the security system, and then it was discovered that the entire protection detail had gone out to a bar, leaving the president completely unprotected,” says Green. “That’s the analogy that I would give to this particular bug.”

This was a high-priority system defeated with an extremely common form submission hack

The unsexy bits of data security

These systems may be defended like a castle, but bandits have plenty of places to chip away

As alarming as the consequences of an insecure Apple ID might be, what it says about Apple’s security procedures is even more frightening. “You know the Secret Service does a lot of other things besides protect the president,” Green says. “So if they’re not getting this one, extremely high profile job done right, then what about the other things — the much more complicated and subtle things — that don’t get anywhere near as much attention? How can we trust that they’re doing those things right?”

Apple locked the screen door and left the front door open, without asking anyone else to check that the house was safe

A server-side attack on Apple’s cloud could get customers’ credit card numbers and addresses, device backups with their encryption keys — as well as contacts and Apple IDs — anonymously and in bulk. Those systems may be defended like a castle, but bandits have plenty of places to chip away at private information at the periphery: intercepting wireless location data, cracking the still-private protocols for services like FaceTime or iMessage, or imitating iTunes updates to take over a user’s phone.

There’s nothing sexy about securing these systems. None of them contribute directly to Apple’s bottom line. And when it came to securing a business netting it an estimated $2 billion each year, Apple locked the screen door and left the front door open, without asking anyone else to check that the house was safe.

Becoming a mature cloud company

To be fair, Apple takes many measures to secure data in iCloud. Except for email, notes, and music, data is encrypted (with at least 128-bit AES) both on Apple’s servers and in transmission. New Apple ID passwords are required to be “strong,” i.e., a mix of letters and numbers, upper and lower case, at least 8 characters long and without more than 3 consecutive identical characters. (iTunes passwords used to be embarrassingly weak, and many weak passwords are still grandfathered into the system.) Apple now allows device-based two-factor authentication in some countries. To make big account changes, like changing a password or registering a new device, you need to answer two randomized security questions. (This was the security step bypassed by Friday’s hack.) After Mat Honan’s hack, Apple customer service no longer leans toward skipping any of these steps. Application passwords aren’t stored in the cloud and can only be stored locally when encrypted. The policies guiding Apple employees’ access to personal data and what they can do with it (or allow someone purporting to be you to have them do with it) are regularly audited and reviewed, at least by internal teams at Apple.
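The stated password policy is mechanical enough to express as code. The checker below is purely illustrative, a restatement of the rules as described above, not Apple’s actual validation logic:

```python
# Illustrative checker for the password policy described in the text
# (letters and numbers, upper and lower case, at least 8 characters,
# no more than 3 consecutive identical characters). Not Apple's code.
import re

def is_strong_password(pw: str) -> bool:
    if len(pw) < 8:
        return False
    # Require at least one lowercase letter, one uppercase letter,
    # and one digit.
    if not (re.search(r"[a-z]", pw)
            and re.search(r"[A-Z]", pw)
            and re.search(r"\d", pw)):
        return False
    # Reject any run of 4 or more identical characters.
    if re.search(r"(.)\1\1\1", pw):
        return False
    return True

print(is_strong_password("Passw0rd"))  # meets every rule
print(is_strong_password("password"))  # no uppercase, no digit
print(is_strong_password("Paaaass1"))  # four identical characters in a row
```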

What do other cloud companies do?

All this is good. It compares favorably with other consumer-grade data storage solutions, like Dropbox. But even Dropbox, when it’s had big data breaches, has brought in third parties to review its security. Meanwhile, the secrecy that has served Apple so well in developing new consumer electronics, its unwillingness to open itself up to outsiders or even its own developers and customers, works against it when it comes to securing the cloud.

When Ars Technica investigated security issues in iCloud last year, it found that “your data is at least as safe as it is when stored on any remote server, if not more so,” but that its weaknesses lay in Apple’s lack of disclosure of its security processes (even Ars‘ assessment depends on a fair amount of guesswork), its prioritization of ease-of-use over full security, and its retention of encryption keys to iCloud data on its own servers. Apple’s defense has traditionally been that its security processes are “industry-standard.” But in the still-young consumer cloud, Apple is one of the leading companies helping to define that standard.

“The reality is that the Apple way values usability over all else, including security,” Echoworx’s Robby Gulri told Ars. “If you can see it in a browser, they can see it on the server.” This means customer data can be made accessible to Apple employees or law enforcement. Gulri, who owns an iPhone and iPad and uses iCloud, recommends that Apple users only make data available to iCloud that they would be comfortable with either of those two groups potentially seeing, like music or photos. He also recommends that Apple, like all cloud providers, have its encryption chains and security processes regularly audited and verified by a trusted third party.

“If you look at companies like Amazon, which is recognized as a cloud provider, and Microsoft, you see that they have very big security teams, they have processes in place,” says Green. “Nobody ever talks about what Apple’s security process is, and that’s partly because Apple is a secretive company and they keep to themselves, but seeing things like [Friday’s hack] makes you wonder if it’s because they haven’t fully developed their security strategy.”

“Apple is a secretive company and they keep to themselves, but… this makes you wonder if it’s because they haven’t fully developed their security strategy.”

Both Amazon and Microsoft have detailed, extensive, public privacy and security policies for their cloud services. Both companies have every point in their systems audited by independent third parties. They have multiple certifications, which are used both within industry to establish reliability and verify that the services satisfy laws governing things like private medical information or use by government services. They permit their customers to deploy their own penetration testing. They’re members of the Cloud Security Alliance, a nonprofit that establishes industry best practices for data security. The CSA also includes Google, Box, HP, Rackspace, VMWare, Intel, Adobe, Oracle, and nearly every other company with a significant presence in cloud computing and storage.

Apple’s not part of the CSA. In fact, Apple does none of these things. It doesn’t have or advertise any of the external certifications available for IT security. And Apple won’t disclose how its security audits are conducted, or by whom.

Reached by The Verge, Apple declined to answer whether iCloud security had ever been audited by a third party. Apple won’t disclose whether any part of its cloud security is even audited internally apart from that governing its customer service group. Pressed on these questions, an Apple representative sent links to its public security FAQs, which don’t address them.

“The reality is that the Apple way values usability over all else, including security.”

It’s time for everyone to grow up

Really, consumers should be demanding the same level of security verification and transparency for their data that enterprise customers have come to expect from cloud wholesalers. It’s not just a problem for Apple; Google Drive, Microsoft’s SkyDrive, and Dropbox all face similar issues. But of these, Apple’s cloud storage is the most likely to be switched on by default and remains the least well-understood.

Meanwhile, Apple has also promoted iCloud as a solution for developers to sync data between apps on different devices. Apple doesn’t detail its security processes to developers either. The Verge has reported on how Apple hasn’t devoted the technology and personnel resources to make other parts of iCloud’s service competitive and useful. Apple hasn’t been able to keep Core Data in sync, and the company hasn’t been responsive to third-party inquiries and complaints. In “A tale of two iClouds,” Matthew Panzarino tries to distinguish between the (good) iCloud that Apple uses for its own services and the (bad) iCloud developers use, but when it comes to security issues and third-party auditing, there is no distinction. There are simply two instances of the same pattern.

“They need to recognize that they are the guardians of people’s data.”

In the absence of concrete, concerted demands from customers, developers, security experts, and the wider technology community, change is unlikely. “Microsoft has taken a lot of flak over Windows security vulnerabilities, and it’s become a problem for their brand,” says Green. “In response to that, they developed a security process. Not necessarily because they wanted to, but because they had to: because if they did not do it, then they were at risk in terms of their perception in the marketplace,” he explains.

“I don’t know that Apple has really faced that same kind of pressure,” Green adds. Apple executives “need to recognize that they are the guardians of people’s data, that that data is important, and that obviously, nobody expects them to be perfect, but they should start to at least educate their users on what the risks are and what the limitations of what they’re doing are.”

Apple needs to demonstrate that its cloud can be counted on. All the evidence suggests that much like Apple Maps or MobileMe, iCloud simply isn’t at the level of polish and performance we’ve come to expect from Apple. Security is just a symptom.

There are three components to Apple’s business: hardware, client software, and cloud services. Apple currently does two of these things very well. iCloud acts as if it “just works.” In reality, much of it is very broken.


]]>