Mark Zuckerberg Can’t Stop You From Reading This Because The Algorithms Have Already Won

And the machines are running the asylum.

There’s a decent chance that Facebook CEO Mark Zuckerberg will see this story. It's relevant to his interests and nominally about him and the media and advertising industries his company has managed to upend and dominate. So the odds that it will appear in his Facebook News Feed are reasonably good. And should that happen, Zuckerberg might wince at this story’s headline or roll his eyes in frustration at its thesis. He might even cringe at the idea that others might see it on Facebook as well. And some almost certainly will. Because if Facebook works as designed, there's a chance this article will also be routed or shared to their News Feeds. And there's little the Facebook CEO can do to stop it, because he's not really in charge of his platform — the algorithms are.

This has been true for some time now, but it's been spotlit in recent months following a steady drumbeat of reports about Facebook as a channel for fake news and propaganda and, more recently, the company's admission that it sold roughly $100,000 worth of ads to a Russian troll farm in 2016. The gist of the coverage follows a familiar narrative for Facebook since Trump’s surprise presidential win: that social networks as vast and pervasive as Facebook are among the most important engines of social power, with unprecedented and unchecked influence. It’s part of a Big Tech political backlash that’s gained considerable currency in recent months — enough that the big platforms like Facebook are scrambling to avoid regulation and bracing themselves for congressional testimony.

Should Zuckerberg or Twitter CEO Jack Dorsey be summoned to Congress and peppered with questions about the inner workings of their companies, they may well be ill-equipped to answer them. Because while they might be in control of the broader operations of their respective companies, they do not appear to be fully in control of the automated algorithmic systems calibrated to drive engagement on Facebook and Twitter. And they have demonstrated that they lacked the foresight to anticipate the now-clear real-world repercussions of those systems: fake news, propaganda, and dark targeted advertising linked to foreign interference in a US presidential election.

Among tech industry critics, every advancement from Alexa to AlphaGo to autonomous vehicles is winkingly dubbed a harbinger of a dystopian future powered by artificial intelligence. Tech moguls like Tesla and SpaceX founder Elon Musk and physicists like Stephen Hawking warn against nightmarish scenarios that range from the destruction of the human race to the more likely threat that our lives will be subject to the whims of advanced algorithms we’ve been happily feeding with our increasingly personal data. In 2014, Musk remarked that artificial intelligence is “potentially more dangerous than nukes” and warned that humanity might someday become a “biological boot loader for digital superintelligence.”

Musk doubled down earlier this month, tweeting: “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.”

But if you look around, some of that dystopian algorithmic future has already arrived. Complex technological systems orchestrate many — if not most — of the consequential decisions in your life. We entrust our romantic lives to apps and algorithms — chances are you know somebody who’s swiped right or matched with a stranger and then slept with, dated, or married them. A portion of our daily contact with our friends and families is moderated via automated feeds painstakingly tailored to our interests. To navigate our cities, we’re jumping into cars with strangers assigned to us via robot dispatchers and sent down the quickest route to our destination based on algorithmic analysis of traffic patterns. Our fortunes are won and lost as the result of financial markets largely dictated by networks of high-frequency trading algorithms. Meanwhile, the always-learning AI-powered technology behind our search engines and our newsfeeds quietly shapes and reshapes the information we discover and even how we perceive it. And there’s mounting evidence that suggests it might even be capable of influencing the outcome of our elections.

Put another way, the algorithms increasingly appear to have more power to shape lives than the people who designed and maintain them. This shouldn’t come as a surprise, if only because Big Tech’s founders have been saying it for years now — in fact, it’s their favorite excuse — “we’re just a technology company” or “we’re only the platform.” And though it’s a convenient cop-out for the unintended consequences of their own creations, it’s also — from the perspectives of technological complexity and scale — kind of true. Facebook and Google and Twitter designed their systems, and they tweak them rigorously. But because the platforms themselves — the technological processes that inform decisions for billions of people every second of the day — are largely automated, they’re enormously difficult to monitor.

Facebook acknowledged this in its response to a ProPublica report this month that showed the company allowed advertisers to target users with anti-Semitic keywords. According to the report, Facebook’s anti-Semitic categories “were created by an algorithm rather than by people.”

And Zuckerberg pointed to similar monitoring difficulties just this week while addressing Facebook’s role in protecting elections. “Now, I'm not going to sit here and tell you we're going to catch all bad content in our system,” he explained during a Facebook Live session last Thursday. “I wish I could tell you we're going to be able to stop all interference, but that wouldn't be realistic.” Beneath Zuckerberg’s video, a steady stream of commenters remarked on his speech. Some offered heart emojis of support. Others mocked his demeanor and delivery. Some accused him of treason. He was powerless to stop it.

Facebook’s responses since Nov. 9 to accusations about its role in the 2016 election bear this out, most notably Zuckerberg’s public comments immediately after the vote dismissing the claim that fake news had influenced the outcome as “a pretty crazy idea.” In April, when Facebook released a white paper detailing the results of its investigation into fake news on its platform during the election, the company insisted it did not know the identity of the malicious actors using its network. And after recent revelations that Facebook had discovered Russian ads on its platform, the company maintained that as of April 2017, it was unaware of any Russian involvement. “When asked we said there was no evidence of Russian ads. That was true at the time,” Facebook told Mashable earlier this month.

Some critics of Facebook speak about the company’s leadership almost like an authoritarian government — a sovereign entity with virtually unchecked power and domineering ambition. So much so, in fact, that Zuckerberg is now frequently mentioned as a possible presidential candidate despite his public denials. But perhaps a better comparison might be the United Nations — a group of individuals endowed with the almost impossible responsibility of policing a network of interconnected autonomous powers. Just take Zuckerberg’s statement this week, in which he sounded strikingly like an embattled secretary-general: “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion,” he said.

“I wish I could tell you we're going to be able to stop all interference, but that wouldn't be realistic” isn’t just a carefully hedged pledge to do better, it's a tacit admission that the effort to do better may well be undermined by a system of algorithms and processes that the company doesn't fully understand or control at scale. Add to this Facebook's mission as a business — drive user growth; drive user engagement; monetize that growth and engagement; innovate in a ferociously competitive industry; oh, and uphold ideals of community and free speech — and you have a balance that’s seemingly impossible to maintain.

Facebook’s power and influence are vast, and the past year has shown that true understanding of the company’s reach and application is difficult; as CJR’s Pete Vernon wrote this week, “What other CEO can claim, with a straight face, the power to ‘proactively…strengthen the democratic process?’” But perhaps “power” is the wrong word to describe Zuckerberg's — and other tech moguls’ — position. In reality, it feels more like a responsibility. At the New York Times, Kevin Roose described it as Facebook’s Frankenstein problem — the company created a monster it can’t control. And in terms of responsibility, the metaphor is almost too perfect. After all, people always forget that Dr. Frankenstein was the creator, not the monster.
