The first thing ever searched on Google was the name Gerhard Casper, a former Stanford president. As the story goes, in 1998, Larry Page and Sergey Brin demoed Google for computer scientist John Hennessy. They searched Casper’s name on both AltaVista and Google. The former pulled up results for Casper the Friendly Ghost; the latter pulled up information on Gerhard Casper the person.
What made Google’s results different from AltaVista’s was its algorithm, PageRank, which ranked results based on the number of links between pages. In fact, the site’s original name, BackRub, was a reference to the backlinks it used to rank results. If your site was linked to by other authoritative sites, it would place higher in the list than some random blog that no one was citing.
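The core idea behind PageRank can be sketched in a few lines: a page's score is the sum of score flowing in from every page that links to it, iterated until the numbers settle. The toy web below is entirely hypothetical, and this is an illustration of the principle rather than Google's actual implementation:

```python
# Toy PageRank over a hypothetical four-page web (not Google's real code).
# Each key is a page; its value is the list of pages it links out to.
links = {
    "stanford.edu": ["casper-bio.html", "news.html"],
    "casper-bio.html": ["stanford.edu"],
    "news.html": ["stanford.edu", "casper-bio.html"],
    "random-blog.html": ["casper-bio.html"],
}

damping = 0.85  # standard damping factor from the original PageRank paper
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):  # iterate until the scores converge
    new_rank = {}
    for page in pages:
        # Sum the rank flowing in from every page that links here,
        # split evenly across each linker's outbound links
        inbound = sum(rank[src] / len(outs)
                      for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

# Pages with more (and better-ranked) backlinks end up scoring higher;
# the blog nobody links to sinks to the bottom
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The key property — the one Google bombers exploited — is that rank flows along links, so enough coordinated backlinks can lift an arbitrary page.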
Google officially went online later in 1998. It quickly became so inseparable from both the way we use the internet and, eventually, culture itself, that we almost lack the language to describe what Google’s impact over the last 25 years has actually been. It’s like asking a fish to explain what the ocean is. And yet, all around us are signs that the era of “peak Google” is ending or, possibly, already over.
This year, The Verge is exploring how Google Search has reshaped the web into a place for robots — and how the emergence of AI threatens Google itself.
There is a growing chorus of complaints that Google is not as accurate, as competent, or as dedicated to search as it once was. Massive closed algorithmic social networks like Meta’s Facebook and Instagram began eating the web in the 2010s. More recently, there’s been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users.
For two decades, Google Search was the largely invisible force that determined the ebb and flow of online content. Now, for the first time since Google’s launch, a world without it at the center actually seems possible. We’re clearly at the end of one era and at the threshold of another. But to understand where we’re headed, we have to look back at how it all started.
If you’re looking for the moment Google truly crossed over into the zeitgeist, it was likely around 2001. In February 2000, Jennifer Lopez wore her iconic green Versace dress to the Grammys; former Google CEO Eric Schmidt would later say that the flood of searches for the dress inspired the creation of Google Image Search, which launched in summer 2001. That year was also the moment when users began to realize that Google was important enough to hijack.
The term “Google bombing” was first coined by Adam Mathes, now a product manager at Google, who first described the concept in April 2001 while writing for the site Uber.nu. Mathes successfully used the backlinks that fueled PageRank to make the search term “talentless hack” bring up his friend’s website. Mathes did not respond to a request for comment.
A humor site called Hugedisk.com, however, successfully pulled it off first in January 2001. A writer for the site, interviewed under the pseudonym Michael Hugedisk, told Wired in 2007 that their three-person team linked to a webpage selling pro-George W. Bush merchandise and was able to make it the top result on Google if you searched “dumb motherfucker.”
“One of the other guys who ran the site got a cease and desist letter from the bombed George Bush site’s lawyers. We chickened out and pulled down the link, but we got a lot of press,” Hugedisk recounted.
“It’s difficult to see which factors contribute to this result, though. It has to do with Google’s ranking algorithm,” a Google spokesperson said of the stunt at the time, calling the search results “an anomaly.”
But it wasn’t an anomaly. In fact, there’s a way of viewing the company’s 25-year history as an ongoing battle against users who want to manipulate what PageRank surfaces.
“[Google bombing] was a popular thing — get your political enemy and some curse words and then merge them in the top Google Image result and sometimes it works,” blogger Philipp Lenssen told The Verge. “Mostly for the laughs or giggles.”
There’s a way of viewing the company’s 25-year history as an ongoing battle against users who want to manipulate what PageRank surfaces
Lenssen still remembers the first time he started to get a surge of page views from Google. He had been running a gaming site called Games for the Brain for around three years without much fanfare. “It was just not doing anything,” he told The Verge. “And then, suddenly, it was a super popular website.”
It can be hard to remember how mysterious these early run-ins with Google traffic were. It came as a genuine surprise to Lenssen when he figured out that “brain games” had become a huge search term on Google. (Even now, in 2023, Lenssen’s site is still the first non-sponsored Google result for “brain games.”)
“Google kept sending me people all day long from organic search results,” he said. “It became my main source of income.”
Rather than brain games, however, Lenssen is probably best known for a blog he ran from 2003 to 2011 called Google Blogoscoped. He was, for a long time, one of the main chroniclers of everything Google. And he remembers the switch to Google from other search engines in the late 1990s: Google was passed around by word of mouth as a better alternative to AltaVista, which wasn’t the biggest search engine of the era but was considered the best one yet.
In 2023, search optimization is a matter of sheer self-interest, a necessity of life in a Google-dominated world. The URLs of new articles are loaded with keywords. YouTube video titles, too — not too many, of course, because an overly long title gets cut off. Shop listings by vendors sprawl into wordy repetition, like side sign spinners reimagined as content sludge. And it goes beyond just Google’s domain. Solid blocks of blue hashtags and account tags trail at the end of influencer Instagram posts. Even teenagers tag their TikToks with #fyp — a hashtag thought to make it more likely for videos to be gently bumped into the algorithmic feeds of strangers.
The word SEO “kind of sounds like spam when you say it today,” said Lenssen, in a slightly affected voice. “But that was not how it started.”
To use the language of today, Lenssen and his cohort of bloggers were the earliest content creators. Their tastes and sensibilities would inflect much of digital media today, from Wordle to food Instagram. It might seem unfathomable now, but unlike the creators of 2023, the bloggers of the early 2000s weren’t in a low-grade war with algorithms. By optimizing for PageRank, they were helping Google by making it better. And that was good for everyone because making Google better was good for the internet.
This attitude is easier to comprehend when you look back at Google’s product launches in these early years — Google Groups, Google Calendar, Google News, Google Answers. The company also acquired Blogger in 2003.
“Everything was done really intelligently, very clean, very easy to use, and extremely sophisticated,” said technologist Andy Baio, who still blogs at Waxy.org. “And I think that Google Reader was probably the best, like one of the best, shining examples of that.”
“Everybody I knew was living off Google Reader,” recalled Scott Beale of Laughing Squid.
Google Reader was created by engineer Chris Wetherell in 2005. It allowed users to take the RSS feeds — an open protocol for organizing a website’s content and updates — and add those feeds into a singular reader. If Google Search was the spinal cord of 2000s internet culture, Google Reader was the central nervous system.
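At bottom, RSS is just structured XML: a feed lists a site's recent entries, and a reader like Google Reader polls many feeds and merges them into one stream. A minimal sketch of that parsing step, using a made-up feed (the blog name and URLs are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed — the kind of XML a blog of the era would publish
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post two</title><link>http://example.com/2</link></item>
    <item><title>Post one</title><link>http://example.com/1</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (site_title, [(post_title, url), ...]) from one RSS feed."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

site, posts = parse_feed(FEED)
# A reader fetches many such feeds and interleaves the results
for title, url in posts:
    print(f"{site}: {title} ({url})")
```

Because the format is an open protocol, any reader could consume any site's feed — which is what made a single aggregator like Google Reader possible in the first place.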
“They were encouraging people to write on the web,” said Baio. Bloggers like Lenssen, Baio, and Beale felt like everything Google was doing was in service of making the internet better. The tools it kept launching felt tied to a mission of collecting the world’s information and helping people add more content to the web.
Lenssen said he now sees SEO as more or less part of the same nefarious tradition as Google bombing
Many of these bloggers feel differently now. Lenssen said he now sees SEO as more or less part of the same nefarious tradition as Google bombing. “You want a certain opinion to be in the number one spot, not as a meme but to influence people,” he said. Most of the other bloggers expressed a similar change of heart in interviews for this piece.
“When Google came along, they were ad-free with actually relevant results in a minimalistic kind of design,” Lenssen said. “If we fast-forward to now, it’s kind of inverted now. The results are kind of spammy and keyword-built and SEO stuff. And so it might be hard to understand for people looking at Google now how useful it was back then.”
But there is one notable holdout among these early web pioneers: Danny Sullivan, who, during this period, became the world’s de facto expert on all things search. (Which, after the dawn of the millennium, increasingly just became Google Search.) Sullivan’s expertise gives his opinion some weight, though there is one teeny little wrinkle — since 2017, he’s been an employee of Google, working as the company’s official search liaison. Which means even if he doesn’t think they are, his opinions about search now have to be in line with Google’s opinions about search.
According to Sullivan, the pattern of optimizing for search predates Google — it wasn’t the first search engine, after all. As early as 1997, people were creating “doorway pages” — pages full of keywords meant to trick web crawlers into overindexing a site.
More crucially, Sullivan sees Google Search not as a driver of virality but as a mere echo.
“I just can’t think of something that I did as a Google search that caused everybody else to do the same Google search,” Sullivan said. “I can see that something’s become a meme in some way. And sometimes, it could even be a meme on Google Search, like, you know, the Doodles we do. People will say, ‘Now you got to go search for this; you’ve got to go see it or whatever.’ But search itself doesn’t tend to cause the virality.”
Those hundreds of millions of websites jockeying for placement on the first page of results don’t influence how culture works, as Sullivan sees it. For him, Google Search activity does not create more search activity. Decades may have passed, but people are essentially still searching for “Jennifer Lopez dress.” Culture motivates what goes into the search box, and it’s a one-way street.
But causality is both hard to prove and disprove. The same set of facts that leads Sullivan to discount the effect of Google on culture can just as readily point to the opposite conclusion.
That same month, what is largely considered to be the first real internet meme, “All Your Base Are Belong To Us,” was launched into the mainstream
In February 2001, right after Hugedisk’s Google bomb, Google launched Google Groups, a discussion platform that integrated with the internet’s first real social network, Usenet. And that same month, what is largely considered to be the first real internet meme, “All Your Base Are Belong To Us,” was launched into the mainstream after years of bouncing around as a message board inside joke. It became one of the largest search trends on Google, and an archived Google Zeitgeist report even lists the infamous mistranslated video game cutscene as one of the top searches in February 2001.
Per Sullivan’s logic, Google Groups added better discovery to both Usenet and the myriad other message boards and online communities creating proto-meme culture at the time. And that discoverability created word-of-mouth interest, which led to search interest. The uptick in searches merely reflected what was happening outside of Google.
But you can just as easily conclude that Google — in the form of Search and Groups — drove the virality of “All Your Base Are Belong To Us.”
“All Your Base Are Belong To Us” had been floating around message boards as an animated GIF as early as 1998. But after Google went live, it began mutating the way modern memes do. A fan project launched to redub the game, the meme got a page on Newgrounds, and most importantly, the first Photoshops of the meme showed up in a Something Awful thread. (Consider how much harder it would have been, pre-Google, to find the assets for “All Your Base Are Belong To Us” in order to remix them.)
That back and forth between social and search would create pathways for, and then supercharge, an online network of independent publishers that we now call the blogosphere. Google’s backlink algorithm gave a new level of influence to online curation. The spread of “All Your Base Are Belong To Us” — from message boards, to search, to aggregators and blogs — set the stage for, well, how everything has worked ever since.
SEO experts like Sullivan might bristle at the idea that Google’s PageRank is a social algorithm, but it’s not not a social mechanism.
We tend to think of “search” and “social” as competing ideas. The history of the internet between the 2000s and the 2010s is often painted as a shift from search engines to social networks. But PageRank does measure online discussion, in a sense — and it also influences how discussion flows. And just like the algorithms that would eventually dominate platforms like Facebook years later, PageRank has a profound effect on how people create content.
Alex Turvy, a sociologist specializing in digital culture, said it’s hard to map our current understanding of virality and platform optimization to the earliest days of Google, but there are definitely similarities.
“I think that the celebrity gossip world is a good example,” he said. “Folks that understood backlinks and keywords earlier than others and were able to get low-quality content pretty high on search results pages.”
He cited examples such as Perez Hilton and the blogs Crazy Days and Nights and Oh No They Didn’t! Over the next few years, the web began to fill with aggregators like eBaum’s World, Digg, and CollegeHumor.
But even the creators of original high-quality content were not immune to the pressures of Google Search.
Deb Perelman is considered one of the earliest food bloggers and is certainly one of the few who’s still at it. She started blogging about food in 2003. Her site, Smitten Kitchen, was launched in 2006 and has since spawned three books. In the beginning, she says, she didn’t really think much about search. But eventually, she, like the other eminent bloggers of the period, took notice.
“It was definitely something you were aware of — your page ranking — just because it affected whether people could find your stuff through Google,” she said.
It’s hard to find another sector more thoroughly molded by the pressures of SEO than recipe sites
It’s hard to find another sector more thoroughly molded by the pressures of SEO than recipe sites, which, these days, take a near-uniform shape: an extremely long anecdote (often interspersed with ads), culminating in a recipe card that is remarkably terse in comparison. The formatting and style of food blogs have generated endless discourse for years.
The reason why food blogs look like that, according to Perelman, is pretty straightforward: the bloggers want to be read on Google.
That said, she’s adamant that most of the backlash against food bloggers attaching long personal essays to the top of their posts is obnoxious and sexist. People can simply not read it if they don’t want to. But she also acknowledged writers are caving to formatting pressures. (There are countless guides instructing writers to use a specific number of sentences per paragraph and a specific number of paragraphs per post to rank better on Google.)
“Rather than writing because there was maybe a story to tell, there was this idea that it was good for SEO,” she said. “And I think that that’s a less quality experience. And yeah, you could directly say I guess that Google has sort of created that in a way.”
Sullivan says PageRank’s algorithm is a lot simpler than most people assume it is. At the beginning, most of the tips and tricks people were sharing were largely pointless for SEO. The subject of SEO is still rife with superstition. There are a lot of different ideas that people have about exactly how to get a prominent spot on Google’s results, Sullivan acknowledges. But most of the stuff you’ll find by, well, googling “SEO tricks” isn’t very accurate.
And here is where you get into the circular nature of his argument against Google’s influence. Thousands of food bloggers are searching for advice on how to optimize their blogs for Google. The advice that sits at the top of Google is bad, but they’re using it anyway, and now, their blogs all look the same. Isn’t that, in a sense, Google shaping how content is made?
“All Your Base Are Belong To Us” existed pre-Google but suddenly rose in prominence as the search engine flickered on. Other forms of content began following the same virality curve, rocketing to the top of Google and then into greater pop culture.
Perelman said that one of the first viral recipes she remembers from that era was a 2006 New York Times tutorial on how to make no-knead bread by Sullivan Street Bakery’s Jim Lahey. “That was a really big moment,” she said.
True to form, Sullivan doubts that it was search, itself, that made it go viral. “It almost certainly wasn’t hot because search made it hot. Something else made it hot and then everybody went to search for it,” he said.
(Which may be true. But the video tutorial was also published on YouTube one month after the site was purchased by Google.)
The viral no-knead bread recipe is a perfect example of how hard it can be to separate the discoverability Google brought to the internet from the influence of that discoverability. And it was even harder 20 years ago, long before we had concepts like “viral” or “influencer.”
Alice Marwick, a communications professor and author of The Private Is Political: Networked Privacy and Social Media, told The Verge that it wasn’t until Myspace launched in 2003 that we started to even develop the idea of internet fame.
“There wasn’t like a pipeline for virality in the way that it is,” she said. “Now, there is a template of, like, weird people doing weird stuff on the internet.”
“Google has gotten shittier and shittier.”
Marwick said that within the internet landscape of the 2000s, Google was the thing that sat on top of everything else. There was a sense that as anarchic and chaotic as the early social web was out in the digital wilderness, what Google surfaced denoted a certain level of quality.
But if that last 25 years of Google’s history could be boiled down to a battle against the Google bomb, it is now starting to feel that the search engine is finally losing pace with the hijackers. Or as Marwick put it, “Google has gotten shittier and shittier.”
“To me, it just continues the transformation of the internet into this shitty mall,” Marwick said. “A dead mall that’s just filled with the shady sort of stores you don’t want to go to.”
The question, of course, is when did it all go wrong? How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town?
Well, if you ask Anil Dash, it was all the way back in 2003 — when the company turned on its AdSense program.
“Prior to 2003–2004, you could have an open comment box on the internet. And nobody would pretty much type in it unless they wanted to leave a comment. No authentication. Nothing. And the reason why was because who the fuck cares what you comment on there. And then instantly, overnight, what happened?” Dash said. “Every single comment thread on the internet was instantly spammed. And it happened overnight.”
Dash is one of the web’s earliest bloggers. In 2004, he won a competition Google held to google-bomb itself with the made-up term “nigritude ultramarine.” Since then, Dash has written extensively over the years on the impact platform optimization has had on the way the internet works. As he sees it, Google’s advertising tools gave links a monetary value, killing anything organic on the platform. From that moment forward, Google cared more about the health of its own network than the health of the wider internet.
“At that point it was really clear where the next 20 years were going to go,” he said.
“At that point it was really clear where the next 20 years were going to go.”
Google Answers closed in 2006. Google Reader shut down in 2013, taking with it the last vestiges of the blogosphere. Search inside of Google Groups has repeatedly broken over the years. Blogger still works, but without Google Reader as a hub for aggregating it, most publishers started making native content on platforms like Facebook and Instagram and, more recently, TikTok.
Discoverability of the open web has suffered. Pinterest has been accused of eating Google Image Search results. And the recent protests over third-party API access at Reddit revealed how popular Google has become as a search engine not for Google’s results but for Reddit content. Google’s place in the hierarchy of Big Tech is slipping enough that some are even admitting that Apple Maps is worth giving another chance, something unthinkable even a few years ago.
On top of it all, OpenAI’s massively successful ChatGPT has dragged Google into a race against Microsoft to build a completely different kind of search, one that uses a chatbot interface supported by generative AI.
Twenty-five years ago, at the dawn of a different internet age, another search engine began to struggle with similar issues. It was considered the top of the heap, praised for its sophisticated technology, and then suddenly faced an existential threat. A young company created a new way of finding content.
Instead of trying to make its core product better and fix the issues its users had, the company became more of a portal, weighted down by bloated services that worked less and less well. The company’s CEO admitted in 2002 that it “tried to become a portal too late in the game, and lost focus” and told Wired at the time that it was going to try to double back and focus on search again. But it never regained the lead.
That company was AltaVista.
On Sunday night, former President Luiz Inácio Lula da Silva won the final round of the Brazilian presidential election with just over 50 percent of the vote, unseating the far-right Jair Bolsonaro. “It’s time to recuperate the soul of this country,” Lula, as he’s widely known, told a crowd of supporters. But fully recuperating the country will mean grappling with the strange role of YouTube and other platforms in Bolsonaro’s success — and their continued influence on Brazil’s politics.
As of Monday morning, Bolsonaro has not conceded or even publicly acknowledged that the election happened — and it’s not clear how his supporters will respond to the results. Bolsonaro swept the 2018 Brazilian election amid a global wave of right-wing populism, leading a political movement that gained both supporters and funding through the social quirks of YouTube and Telegram. Now, there’s a serious concern that those networks will activate and begin aggressively denying the results of this week’s election. Blunting those forces will take more than a single election — and as the incoming government looks to consolidate power and legitimacy, YouTube’s pro-Bolsonaro faction will be one of its most visible and vocal spoilers.
Reached for comment, YouTube was quick to point out its efforts during the election cycle. “Throughout the election, we’ve quickly removed videos that violated our policies, and prominently surfaced authoritative sources and limited the spread of borderline content through recommendations,” said YouTube representative Ivy Choi. “Following the TSE certifying the election results, we expanded our election integrity policy to prohibit content claiming the 2022 Brazil presidential election was stolen or rigged, and updated our election results information panel to state the TSE declared Lula as President-elect. We also continue to prohibit and remove ads that promote demonstrably false claims that could undermine trust in elections, including false claims about election results.”
But for many Brazilians, the impact of those efforts can be hard to discern. “YouTube has become one of the most important sources of political information. It’s become more powerful than traditional media, basically,” Brazilian YouTube creator Thiago Guimarães told The Verge. “It’s very obvious [YouTube is] not doing enough. They’re doing less than enough.”
From the beginning, Bolsonaro eagerly tested the limits of what online platforms would allow. He refused to take the covid pandemic seriously, resulting in hundreds of thousands of deaths around the country as he baselessly claimed online and at unmasked rallies that the vaccine would turn you into a crocodile or give you AIDS. The latter proved even too extreme for Facebook, which deleted the claim from his page in 2021. It also earned him an investigation from the country’s federal police. Meanwhile, the president’s rabid followers, derisively referred to as “the Bolsominions,” waged a years-long info war not only against journalists and the country’s left but also Brazil’s center-right, who were labeled traitors for not falling in line.
“I’m not going to expect anything from YouTube at this point.”
As the election intensified, the online bluster spilled into two physical altercations involving Bolsonaro-linked politicians. On October 23rd, Roberto Jefferson, a former politician linked to Bolsonaro who was under house arrest, opened fire and threw a grenade at federal police who attempted to bring him to jail. (He was later indicted on four counts of attempted murder.) Days earlier, on October 20th, a federal deputy in São Paulo pulled a handgun and chased a Black man through the streets of the city’s upscale Jardins neighborhood after she claimed he pushed her to the ground for supporting Bolsonaro. (According to several videos of the incident, it appears the deputy may simply have fallen over on her own while being heckled.) Other deputies are now calling for her removal from office over the incident.
Perhaps sensing the precarious nature of Lula’s victory, US President Joe Biden released a statement congratulating him on the win within minutes of it being declared Sunday night. And Brazil’s largest newspaper, Folha, is reporting that US National Security Advisor Jake Sullivan is being sent to Brazil to ensure a peaceful transfer of power.
Of course, in 2022, a peaceful transfer of power online is necessary for one offline. Which is what worries people like Guimarães. “I’m very pessimistic about it,” he said. “I’m not going to expect anything from YouTube at this point.”
That failure is particularly important because of YouTube’s outsized popularity in the country. Though TikTok has all but eclipsed YouTube’s importance in America, the shortform video app is still thought of as something for young people in Brazil. And YouTube has effectively replaced — or at the very least, become equally as important as — mainstream television in Brazil, according to Guimarães. Brazil is YouTube’s fourth biggest market, with around 130 million active users, while TikTok only has around 74 million.
According to Guimarães, Bolsonaro’s supporters figured out how to use YouTube’s moderation policies to their advantage. The strategy goes like this: post pro-Bolsonaro propaganda on YouTube; instruct Telegram groups to download it or screenshot it; make clips of it for TikTok and Instagram; wait for YouTube to delete it for violating community guidelines; and then claim they were censored and spread the video and related media across Telegram and WhatsApp.
“It doesn’t matter if the video is deleted — like half an hour after they posted it — it doesn’t really matter,” Guimarães told The Verge. “They don’t really care. What’s important is that they spread this. That’s one of the main strategies, one of the main strategies in this ecosystem of fake news and right-wing propaganda.”
“It doesn’t matter if the video is deleted … They don’t really care.”
YouTube has also become host to a bevy of pro-Bolsonaro channels, which call themselves “alternative news outlets,” the most infamous being Jovem Pan, which is connected to a right-wing radio station in São Paulo but has amassed millions of subscribers on YouTube. The channel’s hard-right stance has earned it the nickname “Jovem Klan” on Brazilian Twitter.
The video platform’s rising importance in Brazilian culture, along with its overwhelmingly right-wing bent, has led to an inescapable feeling of radicalization in the country at the moment — and a big payday for pro-Bolsonaro YouTube channels. Last year, Brazilian news outlet UOL reported that a network of 12 YouTube channels made millions monetizing content attacking Bolsonaro’s enemies in the country’s Congress and federal court.
Guimarães also said that YouTube advertisements have been an incredibly effective way for Bolsonaro to spend his “Orçamento Secreto,” or “Secret Budget,” a dark money fund he’s been accused of using to buy real estate around the country and to influence the election. Earlier this month, UOL revealed that since August, Bolsonaro has pumped R$21 billion, or $3.9 billion, into one of the country’s biggest social programs, which UOL found increased his standing in polls by about 7 percent. And last month, The Guardian reported that an NGO called Global Witness was able to submit misleading electoral ads to YouTube and have them easily approved by the platform.
“[Bolsonaro’s campaign ads] are on my videos — people are complaining to me. They click on my videos and they’re like, ‘That’s false.’ I’ve watched it, too. It’s there,” Guimarães said.
Another outlet tracking the absolute tidal wave of Bolsonaro campaign ads on YouTube over the last few months is the English-language independent Brazilian news outlet Brasil Wire. Nathália Urban, a Scotland-based Brazilian reporter for the outlet, told The Verge that American tech companies like Google and Meta have spent four years largely ignoring anything Bolsonaro has said on their platforms.
“He said the most horrible things on social media and no one ever took it down,” Urban said. “Things like forest fires, police killings, racism, misogyny, LGBTQ-plus phobia, everything has been normalized by them. And you never see social media taking an action, not just against him, but also his sons, his supporters.”
“Facebook, in my personal opinion, has lost a hell of a lot of force in Brazil.”
But the internet landscape that Bolsonaro took advantage of in 2018 is different now. Facebook is no longer the center of the Brazilian online experience and, similar to the American social web in the lead-up to the November midterms, both Bolsonaro and Lula had to run their campaigns across a much more fractured internet.
“Facebook, in my personal opinion, has lost a hell of a lot of force in Brazil,” Vitória Brandão, a Rio de Janeiro-based journalist told The Verge. “It’s still widely used, but I don’t think it’s as relevant as it once was, let’s say four years ago.”
Another factor working against Bolsonaro was Brazil’s Supreme Electoral Court, which grew more aggressive about regulating the flow of internet content during the election. Earlier this month, the court ordered YouTube to demonetize four pro-Bolsonaro channels, including one run by the president’s son Carlos Bolsonaro. It also recently voted unanimously to tighten the rules around paid election content online: for 48 hours before the final vote and 24 hours after, ads, monetized political content, and, notably, boosted content were forbidden, carrying a fine of up to R$150,000, or about $30,000, for each hour the content stayed online. Social networks had two hours to comply; if they didn’t, the platform’s services could be suspended entirely.
And the Supreme Electoral Court has been equally firm about the results of Sunday’s election. Alexandre de Moraes, the president of the court, said in a press release, “The result has been proclaimed, accepted and those who were elected will be certified in December and take office on January 1.”
It’s not just YouTube that continues to play a major role in spreading right-wing propaganda in Brazil. Bolsonaro leaned heavily on Facebook Live in 2018, using it to circumvent the traditional debate process in one decisive moment of the campaign. But during this campaign cycle, he focused more specifically on Telegram and YouTube. The president still used Facebook Live during this campaign — broadcasting live at least once a week and using his page to share sometimes dozens of short video clips each day — but those videos found a broader reach on Telegram and YouTube, where they are clipped, remixed, and distributed.
The Telegram groups used by Bolsonaro and his supporters are popular enough that, this month, Brazil’s Supreme Electoral Court ordered that content be taken down from Telegram. In May, The New York Times reported from inside a handful of Bolsominion Telegram groups, calling it a “tide of madness.”
“Unlike in America, I think Telegram, the messaging app — between Bolsominions — has become really popular,” Brandão said. “They’re very organized and they’re very radical.”
“It’s not just politics. It’s our ideology.”
Bruno Natal, a Brazilian journalist and podcaster, told The Verge that messaging apps like Telegram and WhatsApp, with their easy forwarding capabilities, play a huge part in how Bolsominions connect with each other across the country and have had a profound impact on how voters followed this year’s election.
“Since Android is the main operating system — to the tune of 80 percent of the market share — and iPhones are incredibly expensive here, iMessage never picked up and WhatsApp has been the main way to exchange text messages for free,” he said. “Of course, it comes with all the issues related to the dark web, meaning anything goes and it is really hard to find the ones responsible. Bolsonaro’s team has mastered this market opportunity and leveraged it to their benefit, especially since they simply don’t care about spreading straight-out lies.”
In many ways, Bolsonaro’s silent disappearance from public life is the safest way forward. If he does make a public announcement, there’s no telling what it could do for his still very passionate base. For every video of Lula’s supporters celebrating in the streets, there is another of evangelical Bolsonaro supporters praying in the streets or trying to attack political opponents. The division Bolsonaro has fomented isn’t going anywhere and has become a powder keg that could go off at any moment.
“It’s not a wave,” Brandão said. “It’s now a full-fledged movement. And it’s here to stay for at least another four years. It’s not just politics. It’s our ideology.”
But while Bolsonaro’s movement isn’t going away, Lula’s win suggests his own movement isn’t going anywhere either. As tense as the election was, Núcleo founder and editor Alexandre Orrico told The Verge it feels like a turning point for the country’s online left.
“In 2018, the right and the far-right came out ahead in terms of understanding social media engagement,” he said. “But in 2022, we are seeing part of the left entering the game.”
Update 4:15 PM ET: Added comment from YouTube representative.
Last month, all four major online social platforms — Meta, Twitter, YouTube, and TikTok — released their plans for combating misinformation and disinformation in the weeks leading up to the 2022 US midterms.
Meta will have voting alerts and real-time fact-checking in both English and Spanish, and, as it did in 2020, it will be banning “new political, electoral and social issue ads” during the week leading up to the election. Twitter is focusing on “prebunks,” proactively fact-checking content in users’ feeds based on search terms and hashtags, and will have election-themed Explore pages. YouTube is rolling out information widgets on search pages for candidates. And TikTok will have curated election hashtags full of vetted information and will continue to enforce its long-standing ban on political advertising.
You’d be forgiven, though, if you couldn’t keep any of this straight in your head anymore or couldn’t immediately parse what makes these plans any different from any other election’s. In fact, researchers and fact-checkers feel the same way.
“I don’t think any platform is in ‘good shape’”
“Unless Facebook has drastically changed the core functions and design of its platform, then I doubt any meaningful changes have happened and ‘policing misinformation’ is still piecemeal, whack-a-mole, and reactive,” Erin Gallagher, a disinformation researcher on the Technology and Social Change research team at the Shorenstein Center, told The Verge.
The world has changed and the internet’s biggest platforms don’t seem to realize it. If polls are to be believed, 45 percent of Americans — and 70 percent of Republicans — believe some variation of the “Big Lie” that former President Donald Trump won the 2020 election. QAnon-affiliated candidates are running in over 25 states, and their conspiracy theories are even more prevalent. And even with new policies pointed directly at banning allegations of 2020 voter fraud, platforms are still full of “Big Lie” content. In the war between platform moderators and conspiracy theories, conspiracy theories won.
November’s election will be the first time that many Americans will enter the voting booth since last year’s insurrection, which was planned and subsequently livestreamed on many of the same platforms now emphasizing their commitments to democracy. And it’s this new troubling political reality that isn’t reflected in the current policies of the four major platform companies.
“I hate to be extremely pessimistic but I don’t think any platform is in ‘good shape,’” Gallagher said.
So what comes next? How do we moderate online platforms in a post-insurrection world? Is it all just hopeless? Well, the simple answer might be that we have reached the limit of what individual platforms can do.
The “war room” looked like a room full of computers
Katie Harbath, CEO of Anchor Change and former public policy director for Facebook, said very little of what she’s seen from platforms regarding the US midterms this year feels new. Harbath left Facebook in 2021 and said she’s particularly concerned that none of the Big Tech companies have anything in their election policies that mention coordinating across platforms on internet-wide conspiracy theories.
“How does this mis- and disinformation spread amongst all these different apps? How do they interplay with one another?” Harbath told The Verge. “We don’t have enough insight into that because nobody has the ability to really look cross-platform at how actors are going to be exploiting the different loopholes or vulnerabilities that each platform has to make up a sum of a whole.”
The idea of a misinformation war room — a specific place with specific staffers devoted entirely to banning and isolating misinformation — was pioneered by Facebook after the Cambridge Analytica scandal. The twin shocks of Donald Trump’s 2016 victory and the unexpected Brexit vote created a need for a way for internet platforms to show that they were actively safeguarding against those who wanted to manipulate the democratic process around the world.
Ahead of the US midterms and the Brazilian presidential election in 2018, Meta (then Facebook) wanted to change the narrative. The company invited journalists to tour a literal physical war room, which The Associated Press described as “a nerve center the social network has set up to combat fake accounts and bogus news stories ahead of upcoming elections.” From pictures, it looked like a room full of computers with a couple of clocks on the wall showing different time zones. The war room would then be shut down less than a month later. But it proved to be an effective bit of PR for the company, lending a sense of place to the largely very mundane work of moderating a large website.
Harbath said that the election war rooms were meant to centralize the company’s rapid response teams and often focused on fairly mundane issues like fixing bugs or quickly taking down attempts at voter suppression. One example of war room content moderation Harbath gave from the 2018 midterms involved a Trump campaign ad about caravans of undocumented immigrants at the border. There was heavy internal debate about whether the ad would be allowed to run. Facebook ultimately decided to block it.
“No platform has been transparent about how much content even gets labeled”
“Let’s say I got a phone call from some presidential candidate’s team because their page had gone down,” she said. “I could immediately flag that for the people in the War Room to instantly triage it there. And then they had systems in place to make sure that they were routing things in the right perspective, stuff like that.”
A lot of that triaging was also happening very publicly, with analysts and journalists flagging harmful content and moderators acting in response. In the 2020 election, the platform finally cracked down on “stop the steal” content — more than two months after the results of the election were settled.
Corey Chambliss, a spokesperson for Meta, told The Verge that the 2018 policy with regard to working with “government, cybersecurity, and tech industry partners” during elections was still accurate for this year’s midterms. Chambliss would not specify which industry peers Meta communicates with but said that its “Election Operations Center” will be in effect ahead of Election Day this year.
In a report published this month about removing coordinated inauthentic activity in Russia and China, Facebook said, “To support further research into this and similar cross-internet activities, we are including a list of domains, petitions and Telegram channels that we have assessed to be connected to the operation. We look forward to further discoveries from the research community.”
“There are also just more platforms now.”
There are other reasons to be pessimistic. Right now, the bulk of the current election response involves using filters and artificial intelligence to automatically flag false or misleading content in some way and specifically remove more high-level coordinated disinformation. But if you’re someone who spends 10 hours a day consuming QAnon content in a Facebook Group, you’re probably not going to see a fact-checking widget and suddenly deradicalize. And making things even more frustrating, according to Gallagher, is the fact that there aren’t any actual numbers on how many posts are flagged as misleading or false.
“As far as I know, no platform has been transparent about how much content even gets labeled or what the reach of that labeled content was, or how long did it take to put a label on it, or what was the reach before vs. after it was labeled,” she said.
Also, if you’re someone immersed in these digital alternate realities, in all likelihood, you’re not just using one platform to consume content and network with other users. You’re probably using several at once, none of which have a uniform set of standards and policies.
“There are also just more platforms now,” said Gallagher, thinking of alternative social media platforms like Rumble, Gettr, Parler, Truth Social, etc. “And TikTok, which is wildly popular.”
Platforms also function in new ways. Social media is not just a place to add friends, post life updates, and share links with different communities. It has grown into a vast interconnected universe of different platforms with different algorithms and vastly different incentives. And the problems these sites are facing are bigger than any one company can deal with.
“There was a big platform migration that happened both since 2020, and since January 6th.”
Karan Lala, a fellow at the Integrity Institute and a former member of Facebook’s civic integrity team, told The Verge that it’s useful now to focus on how different apps deliver content to users. He divides them into two groups: distribution-based apps versus community-based apps.
“TikTok, apps like Instagram, those are distribution-based apps where the primary mechanism is users consuming content from other users,” Lala said. “Versus Facebook, which has community-based harms. Right?”
That first class of apps, which includes TikTok and Instagram among others, poses a significant challenge during large news events like an election. This year’s midterms won’t be the first “TikTok election” in the US in the literal sense, but it will be the first US election where TikTok, not Facebook, is the dominant cultural force in the country. Meta’s flagship platform reported that it lost users for the first time this year and, per a recent report from TechCrunch, TikTok pushed Facebook out of the Apple App Store’s top 10 this summer.
And, according to Brandi Geurkink, a senior fellow at Mozilla, TikTok is also the least transparent of any major platform. “It’s harder to scrutinize, from the outside, TikTok than it is some other platforms, even like Facebook — they have more in terms of transparency tools than TikTok,” Geurkink told The Verge.
Geurkink was part of the team at Mozilla that recently published “These Are ‘Not’ Political Ads,” a report that found TikTok’s ban on political ads is extremely easy to bypass and that the platform’s new tool that lets creators pay to promote their content has virtually no moderation, allowing users to easily amplify politically sponsored content. TikTok has, however, updated its policy this month, blocking politicians and political parties from using the platform’s monetization tools, such as gifting, tipping, and the platform’s Creator Fund. The Verge has reached out to TikTok for comment.
“I think what we’ve advocated for, for a long time, is there to basically be external scrutiny into the platforms,” Geurkink said. “Which can be done by external researchers, and TikTok hasn’t really enabled that in terms of transparency. They’ve done a lot less than the other platforms.”
It’s not just a lack of transparency with regard to how the platforms moderate themselves that’s a problem, however. We also still have little to no understanding of how these platforms operate as a network. Though, thanks to Meta’s own Widely Viewed Content Reports, we do have some sense of how linked these different platforms are now.
The most viewed domain on Facebook during the second quarter of 2022 was YouTube.com, which accounted for almost 170 million views, followed by TikTok.com, which accounted for 108 million. That sort of throws a wrench into the idea of any one platform moderating its content independently. But it’s not just content coming from other big platforms like YouTube and TikTok that creates weird moderation gray areas for a site like Facebook.
“If people genuinely believe a false claim, all they’re going to think is that the social media company is trying to work against what they perceive to be the truth.”
Sara Aniano, a disinformation analyst at the Anti-Defamation League’s Center on Extremism, told The Verge that fringe right-wing websites like Rumble are increasingly impactful, with their content being shared back on mainstream platforms like Facebook.
“There was a big platform migration that happened both since 2020, and since January 6th,” Aniano said. “People figured out that they were getting censored and flagged with content warnings on mainstream social media platforms. And maybe they went to places like Telegram or Truth Social or Gab, where they could speak more freely, without consequence.”
Bad actors — the users who aren’t just blindly sharing content they think is true or don’t care enough to personally verify — know that larger mainstream platforms will suspend their accounts or put content warnings on their posts, so they’ve gotten better at moving from platform to platform. And when they are banned or have their posts flagged as misleading or false, it can add to a conspiratorial mindset from their followers.
“If people genuinely believe a false claim, all they’re going to think is that the social media company is trying to work against what they perceive to be the truth,” she said. “And that is kind of the tragic reality of conspiracism, not just leading up to the election, but around everything, around medicine, around doctors, around education, and all the other industries that we’ve been seeing attacked over and over again.”
One good example of how this all works together, she said, was the recent Arizona primaries, where a conspiracy theory spread claiming that the pens being used by Maricopa County election officials were rigging the election. It was a repeat of a similar conspiracy theory, called #SharpieGate, that first went viral in 2020.
The hashtag #SharpieGate is currently hidden on Facebook. But that hasn’t stopped right-wing publishers from writing about it and having their articles shared on the platform. YouTube’s search results are completely free of conspiracy theory content; the hashtag isn’t blocked on Twitter but is blocked on TikTok — though users there are still making videos about it.
Ivy Choi, the policy communications manager for YouTube, told The Verge that the platform is not blocking #SharpieGate content but is demoting it in the platform’s search terms. “When you look for ‘#Sharpiegate 2.0,’ YouTube systems are making sure authoritative content is at the top,” she said. “And making sure that borderline content is not recommended.”
“I mean, any attempt at mitigation and more stringent content moderation is a good thing,” Aniano said. “I would never say that it’s futile. But I do think that it needs acknowledging that the problem, and the distrust that has been sowed in the democratic process since 2020 is deeply systemic. It cannot be solved in a week, it can’t be solved in a year, it may take lifetimes to rebuild this trust.”
After more than two weeks of chaotic protest, this week, the Canadian government pushed back. On Tuesday, Canadian Prime Minister Justin Trudeau invoked the country’s Emergencies Act, enabling new financial restrictions on the protests and signaling harsh new penalties against anyone involved.
For many Canadians, it’s an overdue end to a chaotic protest that has stifled trade and brought alarming weaponry into otherwise quiet communities. But right-wing supporters have a wildly different view of events: figures like Tucker Carlson have portrayed the convoy as a working-class rebellion, and Trudeau’s response has been treated as enacting martial law, leading Elon Musk to tweet (and then delete) a meme comparing Trudeau to Adolf Hitler.
It’s a shocking split, arguably the single most important factor in the protests, and much of it originates in the fractured way information travels online. Convoy supporters are getting their news from a tangle of Facebook groups, Telegram channels, and random influencers, which is all then amplified and expanded by right-wing broadcasters like Carlson, The Daily Caller, or Canadian right-wing media network Rebel News. These channels promote a sanitized version of movements like the Freedom Convoy, amplifying its hashtags and turning its obscure extremist leaders into celebrities.
From physical protest to social media to establishment outlets
This pipeline — from physical protest to social media to establishment outlets — is what has helped the convoy evolve from a local standoff into a televised event that can raise millions from supporters thousands of miles away. Almost all of that infrastructure pre-dates the convoy itself, drawing from anti-vaxx groups, QAnon, and other fringe communities. And while the convoy itself may soon be broken up by the Canadian government, those online pathways are much stickier.
To understand how this echo chamber works, we have to start with the Ottawa protest itself. The “Freedom Convoy’’ started as a loosely affiliated group of Canadian truck drivers led by a group called Canada Unity, founded by far-right activist and QAnon conspiracy theorist James Bauder. But over the last 30 days, Bauder has managed to build a coalition of fed-up truck drivers, fringe Canadian political party members, neo-Nazis, anti-vaxxers, and an international coterie of scammers, grifters, and low-level online creators that has been able to generate major headlines around the world.
The convoy’s spread across Facebook didn’t gain any real momentum until a video about the protest was posted on Rumble, a right-wing video platform, on January 18th by a user named Ken Windsor and started to get a few thousand shares. In the caption of the video, Windsor shared links to a page called “Freedom Convoy 2022,” which had been started four days earlier, according to Facebook’s page transparency tools. Windsor had posted several videos on Rumble about truck drivers planning to protest Canadian COVID mandates before this. But the post on January 18th, according to social analytics tool Buzzsumo, was the most-shared piece of convoy content during this first week after it was posted to the Freedom Convoy 2022’s page.
“The Freedom Convoy has had connections to the Canadian far-right from the beginning”
Windsor’s Rumble video also linked out to a right-wing Canadian video creator named Pat King, who was active in Canada’s Yellow Vests protests, and also promoted the movement’s now-defunct GoFundMe page. Following Windsor’s video, between January 14th and January 23rd, several other Facebook pages and groups were created to support the truckers, including the first sizable Facebook group for convoy supporters, which was initially called Freedom Convoy 2022 but has since changed its name to “Convoy For Freedom 2022 🚚🚛🚚🚛🚚🚛🚚🚛”. The group was also a major initial supporter of the GoFundMe page, with users sharing it on the very first day the Facebook page was launched.
According to AntiHate.ca, the GoFundMe was run by Tamara Lich, another former Canadian Yellow Vest who works for a Canadian separatist party, and B.J. Dichter, a right-wing commentator known for Islamophobic rhetoric.
Paris Marx, a PhD candidate based in Canada and host of the podcast Tech Won’t Save Us, told The Verge that the Freedom Convoy’s connections to the country’s far-right significantly outweigh its connections to actual Canadian truckers.
“It’s been pretty well-documented at this point that the Freedom Convoy has had connections to the Canadian far-right from the beginning, including having been behind the initial GoFundMe fundraiser,” he said. “It also never really represented a broad swath of truckers — 90% of whom are vaccinated and whose trade organizations distanced themselves from, if not outright opposed, the convoy.”
On January 25th, the Trucking for Freedom group officially connected with the organizers of the truckers’ protest, Canada Unity, and the group’s founder, James Bauder. Trucking for Freedom would become something like the movement’s official documentarians during these early weeks. King’s livestreams and high-res photography from the road were distributed through the Trucking for Freedom group. From there, they were shared through a series of small Canadian Facebook pages and groups like Freedom Convoy 2022 Manitoba Info and VI Freedom Convoy 2022 Bearhug Ottawa.
Five convoy groups were created by a single hacked account
The reach on this content, though organic, was small. Many of Pat King’s videos, for instance, have been watched less than 100,000 times. In January, posts tagged things like #FreedomConvoy, #freedomconvoy2022, and #TruckersForFreedom were unquestionably local, like an emotional post from the page for the Continental Cattle Carriers, Ltd., in Alberta, Canada, or a viral photo album of children cheering on the truckers.
According to Buzzsumo, the total mentions for “convoy” across the web jumped 195 percent between Tuesday, January 25th, and Saturday, January 29th, peaking on that Saturday with 1,920 total mentions, which coincided with other large groups popping up like “Convoy to Ottawa 2022”. But this was also the last moment the truckers convoy would still be entirely Canadian. From here, it would travel south to the United States but also beyond North America entirely, gaining support from right-wing pages across Europe.
With that international attention, convoy groups also began to attract scammers and counterfeits. An investigation from Grid News found that five convoy groups were created by a single hacked account (formerly belonging to a woman from Missouri), and a Facebook spokesperson told NBC News that content farms as far away as Bangladesh and Vietnam were promoting convoy memes. For instance, on January 28th, a page currently called Freedom People, run out of Bulgaria, created a convoy Facebook group called “Freedom Convoy Worldwide,” which currently has 9,000 members. (The Freedom People page currently uses the “Freedom Convoy Worldwide” group as a way to advertise a PayPal donation page for itself.)
“Users in Telegram chats are being bombarded with conflicting maps, routes, dates, and times that seem to evolve by the hour”
The movement’s international spread also introduced it to Facebook’s larger universe of anti-vaxxers and conspiracy theorists. On January 24th, a Facebook page run by four people based in the United States posted its first video about the convoy. And based on shares, the page has become so viral that it may well be the center of the entire convoy movement online right now. It’s called “2020: What’s the Real Truth,” and, according to Facebook’s page transparency tools, it was originally titled “2012: What’s the Real Truth,” seemingly based on the 2012 doomsday conspiracy. It changed its name in January 2021 and pivoted to hardcore anti-vaccine and anti-mask content, posting multiple times a day and advocating for a global revolution against COVID protocols.
But by the end of January, Canadians were no longer leading the convoy — at least when it comes to the movement’s online presence. It was around this time that what is possibly the largest still-active convoy group was created, and it doesn’t appear to be run by any Canadians at all. It’s called “The People’s Convoy – Official,” and it has 87,000 members. The group has five administrators, all of whom show ties to the US on their Facebook profiles.
This disorganized second phase of the movement has created real confusion for supporters, according to Sara Aniano, a misinformation researcher who has contributed to the Global Network on Extremism & Technology.
“Users in Telegram chats are being bombarded with conflicting maps, routes, dates, and times that seem to evolve by the hour, and many have expressed confusion,” Aniano said. “I’ve read comments about wanting to see liberal enemies hanging from nooses, but I’ve also read comments about having a big party and bringing food trucks. Again, they’re not on the same page.”
“Fox News has an interesting way of filtering very local events through the prism of its own culture wars”
But based on Facebook metrics, the core of the Freedom Convoy was never really anything more than a small collection of local conspiracy theorists who were then suddenly given a megaphone by America’s powerful right-wing disinformation machine. Their campaign was first supercharged by Facebook’s algorithm, which currently favors content shared within local groups, and was then blasted out into every feed and screen possible by ravenous conservative tabloids. American right-wing publisher The Daily Wire, founded by conservative commentator Ben Shapiro, latched on to the story at the end of January and published 66 articles featuring the keyword “convoy” between January 28th and January 31st. And the most popular story of theirs from this time period actually promotes a Facebook group that would eventually get shut down by the platform after barely four days for repeatedly violating Facebook’s policies around QAnon.
Amplifying small right-wing political movements like this has become a powerful piece of the conservative toolkit — particularly in the time of COVID. Case in point: over the last month, Fox News aired over eight hours of programming about the Freedom Convoy, warning that an American version was on its way.
“Fox News has an interesting way of filtering very local events through the prism of its own culture wars, which creates the impression for their followers that they are part of some transnational grassroots uprising,” Amarnath Amarasingam, a professor at Queen’s University and senior fellow at the International Center for the Study of Radicalisation and Political Violence, told The Verge.
But The Daily Wire may have succeeded in putting the convoy on Fox News’ radar. The bulk of the Facebook activity around Daily Wire stories has actually come from a network of larger collaborator pages associated with The Daily Wire itself. Crowdtangle data shows that one January 31st story about the 90,000-member convoy Facebook group was able to get a lot of interactions on the platform because it was shared twice to Shapiro’s personal page, twice to the personal page of Daily Wire commentator Michael Knowles, twice to the personal page of Daily Wire collaborator Matt Walsh, and once to The Daily Wire’s main Facebook page. The American media attention around the movement accelerated its lifecycle and, in the end, just turned it into another meaningless Facebook meme.
The last real moment of cohesion within the movement was when GoFundMe shut down the convoy’s donation page on February 5th, causing a massive spike in activity. And engagement hasn’t really come back. In many ways, the GoFundMe shutdown was the defining event of the entire convoy. But it also exposes an ugliness that’s been part of the movement since the very beginning. According to a statement from GoFundMe, the convoy’s fundraiser was shut down after the company was given evidence from Canadian law enforcement that protesters were planning to occupy Ottawa.
Amarasingam said Fox News’ coverage has given a huge boost to fringe anti-government figures like Bauder and King, who would not have been influential enough to spark global interest on their own. As the convoy tails off, he worries those figures will use their new profile to drag followers even further toward the fringes, drawing on the same network of Facebook groups and willing right-wing amplifiers.
“The activism on the ground has probably peaked, but the movement I think is here to stay,” he said. “The organizers will continue to be active on the fringes of the Canadian right, but they are now basically minor celebrities and influencers.”
Ryan Broderick is a freelance tech writer and publishes a newsletter about web culture called Garbage Day.
In October, a TikTok user named Matthew Heller posted a video recorded moments after a small traffic accident at a Florida intersection. In the video, a woman named Maddy Gilsoul comes up to the window of Heller’s car and starts screaming at him. Heller captioned the video, “I got hit from behind. My car was hit from behind while I was stopped. #lamborghini #aventador #hornblasters.” The original video has since been deleted from TikTok, but Heller left a version of it up on his Instagram.
In a world without TikTok, the incident would have remained a small but contentious traffic accident, the fault of which would have been determined by insurance companies. Instead, thousands of users on the shortform video app argued about who was to blame. Was it Gilsoul, who, according to Heller’s video, seemed to have clearly clipped the back of Heller’s Lamborghini? Or was Heller at fault? At one point in the “news cycle,” TikTok users accused him of “gaslighting” them, after new footage surfaced from a nearby security camera that seemed to show him hitting Gilsoul first. Or was this incident even real? Heller appeared to be the founder of a company called HornBlasters, which sells custom car horns. A car horn salesman suddenly finds himself at the center of a viral traffic accident? It all seemed too perfect.
As the numbers on Heller’s video began to climb into the millions, TikTok’s army of commentators rushed into the trending topic to debate and analyze what was happening. A TikToker named @ugolord, who calls himself “The TikTok Attorney,” asked his followers to watch the footage of the accident and decide who was liable. A user named @pushpeksidhu from Toronto posted an update on the accident that was watched half a million times, sharing footage from Gilsoul’s account that users hadn’t seen yet, seemingly proving Heller was at fault. A user who goes by @doctor.ryan made another video combining Heller’s account with the new security camera footage for a definitive look at the whole scandal.
Drama-reaction accounts like these are riding a huge wave of popularity right now, thanks to an obsession within the TikTok community with investigating, analyzing, and passing judgment on the content going viral on the platform. The app’s young users pore over random trending videos, constructing elaborate conspiracy theories and even doxxing the people featured in them. Most often, these campaigns to unmask other users are driven by some sense of justice.
A teenage version of the OSINT community
In each of these instances, the app’s aggressive recommendation algorithm awakens, pushing the controversies to millions of users, generating hundreds of videos, thousands of comments, and too many views to count. The app has become home to a teenage version of the OSINT (or “open source intelligence”) community, made famous by outlets like the Atlantic Council’s Digital Forensic Research Lab (DFR Lab) or investigative site Bellingcat.
The biggest account weighing in on the “Lambo crash” trending topic was @TizzyEnt, an account with 3.8 million followers run by a film director named Michael Mc. During the course of the Lamborghini content cycle, Mc posted several updates, at one point even connecting with Heller and sharing additional details about what happened.
If you’re on TikTok, you’ve no doubt seen Mc’s videos. On an app defined by permanently teenage Hype House dancers, Mc’s graying beard and low baritone voice definitely stand out, as does his presentation style. He specializes in what right-wingers might call “cancel culture” or what mainstream journalists might call “internet drama.” He’s part of a network of popular TikTok users who have risen to prominence thanks to the TikTok community’s current fixation on crowdsourced investigations into matters both real and conspiratorial or imaginary.
Mc and his collaborators unmask racists, report anti-vax nurses to their respective hospitals, and help “cancel” members of TikTok’s rogues’ gallery of conspiracy theory truthers, fascists, and viral main characters. But with a massive young audience hungry for accountability, or more accurately, viral justice, these TikTokers are finding there is a fine line between citizen journalism and vigilante information warfare.
Mc and Paterno spent days locked in a feud, dueting videos with each other
In the case of the Lamborghini saga, TikTok users drew lines and took the side of either Heller or Gilsoul, flooding their accounts with nasty comments. Heller’s TikTok account is no longer active and Gilsoul hasn’t posted since October. But this kind of work can get much more intense.
Shortly before Mc waded into the Lamborghini drama, he led a campaign against a FedEx delivery driver named Vincent Paterno, who claimed in a TikTok video that he would not deliver packages to any homes without an American flag or with a Biden/Harris sign in the front yard. Mc posted Paterno’s Facebook page on TikTok and said he would be reporting him to FedEx. Then Mc and Paterno spent days locked in a feud, dueting videos with each other.
“You’re such a piece of shit,” Mc says in one video. “Do you know your wife messaged me to tell me how she and her children do not agree with you? How she begged you not to post this? And now you’ve posted a second time — and she’s getting threats, by the way.” Then Mc scolds his own audience for sending threats to Paterno and his family.
The whole episode ended with Paterno reportedly getting fired from FedEx. In a final update, Mc jokingly threatens to brand “@tizzyent” on Paterno’s back. This kind of content is far from anything you’re going to get on a mainstream media outlet — maybe on Substack — but Mc’s audience can’t get enough of it, flooding his comment section with tips for more video investigations.
“I look at the internet like one giant small town”
Mc told The Verge he’s trying to bring some accountability back to how people behave on the internet. He isn’t actively trying to get people fired — though it does happen, and he isn’t quiet about it on his account when it does.
“There’s an old expression that ‘bad gas travels fast in a small town.’ If you live in a small town and you do something terrible, everyone knows about it,” he said. “I look at the internet like one giant small town in the sense of: If I put something out there, my objective is not to get someone fired unless their job is directly harming people. So like a nurse refusing to get vaccinated. She could be infecting people. That’s a thing. Or a police officer harming people. That’s the thing.”
According to Mc, he started out making videos just trying to debunk misinformation and conspiracy theories that came across his For You page, the central feed where TikTok users see content recommended to them. He said earlier this year, as America began its vaccine rollout, the app became inundated with anti-vax content. In August, a user sent him a video of a woman who went by @antivaxmomma, who was bragging about selling fake vaccination cards on Instagram.
“[Users] find someone doing that sort of thing, and they don’t know how to deal with it themselves,” he said. “They reach out to people like me, and go, ‘Hey, can you see what this person is doing? Can you help with this kind of thing?’”
“The algorithm kind of promotes tribalism”
Mc and the network of other TikTokers he works with sprang into action, identifying the woman as Jasmine Clifford, who was then charged by law enforcement with conspiracy, and offering and possession of a forged instrument. According to Mc, his regular collaborators include @ThatDaneshGuy, @auntkaren0, and @rx0rcist, all of whom have made headlines in the last few months for their flashy videos exposing and publicly shaming various villains within the progressive-leaning world of TikTok.
There is a feeling, one Mc shares, among many long-time TikTok users that individual accounts have to step up and personally deal with the app’s rampant misinformation, extremism, and conspiracy theories. As Mc sees it, his account wouldn’t need to exist if TikTok actually moderated its platform properly. “The algorithm kind of promotes tribalism,” he said. “You get inside of the bubble when you start believing that your perspective is the only one that matters.”
“Something fundamentally changed in this work about 10 years ago”
The videos these users post are a tightrope walk of investigative journalism, punditry, and open source intelligence that could easily fall into the same trap as Reddit’s libelous and disastrous r/FindTheBostonBomber experiment. But according to Emerson T. Brooking, a resident senior fellow at the DFR Lab, we are well past the point where journalistic institutions can decide who can and cannot conduct OSINT research.
“I’m not gonna position myself as a legacy gatekeeper; I think something fundamentally changed in this work about 10 years ago,” he told The Verge. “You need to use communities and enthusiasm to try to accomplish some good.”
Brooking clarified, though, that performing this kind of work out in the open — or at least on an app like TikTok, which has a trending topic-focused algorithm that blurs together responsible TikTokers like Mc and more rogue accounts that are doing it for clout — could easily spin out of control.
“If you look into how Bellingcat is structured,” Brooking said, “while their most famous, prominent analysts have this public presence, they’re doing their work on a Slack channel, actually. They’re not doing every bit of it on Twitter.”
Making things even more confusing is when large media organizations begin to weigh in on these fairly low-level TikTok dramas. Being the subject of a TikTok spat is one thing, but it’s another to elevate it into a news story and turn a local incident into international headlines.
“I think I’m gonna have to start saying in every video, ‘Hey, don’t, don’t threaten them’”
Both Mc and Sophia Smith Galer, a London-based senior news reporter for VICE who also runs a personal TikTok account with over 275,000 followers, said traditional media has a habit of turning the volume up on these minor naming-and-shaming campaigns happening on TikTok. Smith Galer told The Verge the media often legitimizes the vigilante justice being carried out by random TikTok users when they turn these viral scandals into news stories.
“We saw this with the rush to cover TikTok users investigating the disappearance of Gabby Petito, or the reporting around TikTok users erroneously claiming that a furniture company was somehow trafficking children during the US presidential election,” she said. “TikTok users overanalyzing videos going viral on the app is nothing new.”
During Mc’s crusade against Paterno, the FedEx driver, the two men traded barbs across their social media platforms, and Mc did share Paterno’s Facebook information and reported Paterno to FedEx. But the ensuing media swarm, kicked off by an article from TooFab, brought the story to a national stage, eventually getting coverage in the New York Post. Paterno is still on TikTok, by the way, where he’s now posting anti-vax content.
But according to Smith Galer, when mainstream media outlets cover these stories, the more responsible users like Mc, the genuinely nefarious bad actors spreading disinformation, and just the random shitposters get swirled together into a trend that typically makes the chaos on TikTok worse.
Like Brooking, Smith Galer sees the current wave of amateur OSINT happening on TikTok as a net positive, but also a recipe for chaos. There are currently around 80 million active users on TikTok in America, but Statista estimates that roughly a quarter of those users are between the ages of 10 and 19. That is a lot of children learning how to doxx each other.
“What’s fun about all of these is how they are universalizing OSINT skills, but what is not fun about all of these is the lack of media literacy and the innocence in which content creators commit contempt of court, libel, or spread harmful misinformation,” Smith Galer said.
And this is the central dynamic Mc said he’s been struggling with recently. He said he was committed to using his TikTok account to expose various bad actors, but he has noticed that, regardless of how even-keeled his presentation style is, his followers — or users who find his videos in their For You pages — aren’t as interested in making sure things are done the right way.
“I think I’m gonna have to start saying in every video, ‘Hey, don’t, don’t threaten them. No death threats, no violence.’ If you want to write to someone’s employer and say you don’t like what that employee did, that’s your prerogative to do that. But no, you calling an oncology department and saying, ‘I’ll kill all of you’ — I don’t believe that my followers did that,” he said. “But I’m still going to put it out there just in case any of my followers get really upset about something or someone who sees one of my videos.”
On May 22nd, a crypto finance project called DeFi100 posted a message to its website: “We scammed you guys and you can’t do shit about it. HA HA. All you moon bois have been scammed and you can’t do shit about it.”
Screenshots of the message immediately went viral on crypto Twitter (always anarchic, easily risible). A popular anonymous crypto-tracking Twitter account called Mr. Whale estimated that DeFi100 had run off with $32 million. Cryptocurrency news outlets, as well as Yahoo Finance, ran with the number. The project owners denied any foul play, and it soon became clear the message was a website hack rather than a serious warning — but by then, it was too late. Panic had set in, and the price of the underlying coin was in free fall.
“We never stole any funds,” a representative for the project told The Verge. “DeFi100 was a very small project, and we were not holding any investors’ funds, so there are no questions of scamming people or running away with their funds.”
There’s little recourse when crypto investments turn out to be scams
DeFi100’s problems are a small part of the picture, but they’re a reminder of the dangers of the ongoing crypto boom. Despite billions of dollars pouring into the space in recent months, there’s still little recourse when investments turn out to be scams. Most importantly, the radical decentralization of the blockchain means there is simply no way to get your money back — and few assurances that an unproven vendor will keep their promises once the transaction goes through. The result is a new gold rush in crypto scams, as speculators seek ever more obscure opportunities and riskier bets.
The DeFi100 project’s website is now back online, but rumors persist about what actually happened. Certik, a popular blockchain security leaderboard, does currently list DeFi100 as a “rug pull,” which is a term for a scam where the founders of a project raise investment money and run. (The project owners say a rug pull would be impossible since they never held investor funds.) It’s just one of a string of scams that today’s crypto holders need to watch out for, along with sketchy altcoins, Discord pump-and-dumps, Elon Musk impersonators, and more malicious forms of cybercrime.
According to Maren Altman, a TikTok influencer with over a million followers who creates videos about cryptocurrency and astrology, there are three kinds of risk that crypto holders should be wary of: bad investments, collapsing projects, and outright scams.
Subreddits like r/cryptocurrency are awash with accusations of “scam coins”
The first and most common kind of risk is the simple bad investment in an obscure coin. Outside of major players like Bitcoin and Ethereum, there are thousands of smaller coins built on blockchain technology, promising huge rewards if the coin ever comes to prominence. Subreddits like r/cryptocurrency are awash with accusations of “scam coins.”
“I mean, I’m in a handful of those myself, where it’s just the investment, it was a promise, the development didn’t go through, and I’m still waiting,” Altman said.
Trying to research obscure altcoins can be confusing for inexperienced traders. Links to cryptocurrency Discord servers often pop up on Twitter, promising an easy pump-and-dump of a smaller crypto coin. Or more confusingly, Twitter bots will accuse Discord servers that don’t exist of pump-and-dumps, hoping to drive up value for a separate coin. But while they promise easy money, the reality is less enticing.
Another risk is the oftentimes innocent but unfortunate mismanagement of funds. In a bullish crypto market, everyone thinks they have a revolutionary idea involving cryptocurrency. And, obviously, a lot of them don’t pan out.
“Things not being clarified, errors in contract, or just a weak link in the development circle,” Altman explained, “leading to mismanagement of money and people not having their investment turn out as expected.”
One extremely well-known example of this was the DAO project. It launched in the spring of 2016 to huge fanfare, only to be completely defunct by the fall of the same year. The project, whose name was short for “decentralized autonomous organization,” was an attempt to build a venture capital fund on the Ethereum blockchain. Only a month or two in, a hacker found a vulnerability in the token’s code and made off with $50 million. Traders started selling off DAO tokens en masse and the price never recovered.
Sometimes this chaos can end in outright fraud. According to the Federal Trade Commission, crypto-based financial scams are at an all-time high thanks to the surging interest in cryptocurrency. And the line between well-meaning blunder and crypto Ponzi scheme is blurry. Just ask investors of OneCoin or PayCoin.
In 2019, PayCoin founder Homero Joshua Garza was sentenced to 21 months in prison
OneCoin launched in the mid-2010s and was billed as an educational crypto trading service. It turns out the OneCoin tokens being purchased by investors weren’t actually on a blockchain at all. It was accused of being a Ponzi scheme, and its founders ran off with close to $4 billion. It has been called one of the biggest financial scams in history. One of its founders, Ruja Ignatova, is still missing.
In 2019, PayCoin founder Homero Joshua Garza was sentenced to 21 months in prison and ordered to pay restitution after he created his own cryptocurrency and offered it to investors with the assurance that he had secured a $100 million reserve of capital. There was no reserve, and the whole project ended up losing $9 million.
But even with May 2021’s sizable dip in value for big coins like Bitcoin and Ethereum, cryptocurrency is more popular than ever, and legions of inexperienced traders are learning the hard way what a peer-to-peer financial service actually means.
Neeraj Agrawal, the director of communications for Coin Center, one of the US’s biggest cryptocurrency advocacy groups, told The Verge that wildly speculative coins (known colloquially as “shitcoins”) are now a permanent part of the cryptocurrency space.
“The insane speculative garbage coins are not going to go away,” Agrawal says. “That’s just part of the world now. And it sort of remains to us to show that the really good projects are worth their existence, that there is actual value here.”
That’s particularly hard when crypto celebrities like Elon Musk are driving interest toward the wackier end of the crypto space. Musk recently fueled the massive spike in interest around Dogecoin, a failed crypto coin invented as a joke that’s named after the famous Shiba Inu meme. Musk’s tweets have also been blamed for this month’s massive market downturn. It’s still unclear what effect Musk has on the market, but his recent branding as the main character of crypto has led to a litany of Musk-themed scams. According to the FTC, people impersonating Musk have managed to scam at least $2 million from traders this year.
“A market really only requires two things … a seller and a buyer.”
“Maybe that’s the biggest risk to crypto users — your own stupidity,” joked Meltem Demirors, the chief strategy officer of digital-asset investment firm CoinShares. “I think people just aren’t accustomed to taking responsibility for their financial lives.”
In fact, I was asked by both a family member and a close friend this month about an obscure cryptocurrency called Dogelon Mars. It’s currently worth $0.00000016 USD, but the two people close to me were considering buying a bunch of it because they mistakenly believed that, due to the name and its frankly confusing description, it was a coin launched by Musk himself.
Demirors told The Verge that Dogelon Mars was actually one of her favorite meme coins. “We have to remember, right, the whole point of a lot of this is permission-less financial innovation,” she says. “And a market really only requires two things. It requires a seller and a buyer.”
She said this was the main explanation behind the recent NFT explosion. People had crypto coins on hand and wanted to see what they could spend them on. Turns out what they wanted to buy was surreal internet art for millions of dollars.
“I always think it’s really funny when people are all about crypto and permission-less financial innovation, but then the minute they lose money, they become like the most statist people imaginable,” Demirors said. “You really can’t have it both ways. Like you bought this shitcoin. You now need to make your bed and lie in it.”