After unveiling two new digital regulations to much fanfare last year, the European Commission already needs to go back to the drawing board. Big Tech's latest scandals have made clear that the only workable governance model for the digital economy is one that treats the leading platforms as utilities.
WASHINGTON, DC – Since the start of this year, the European Union’s cautious approach to digital-platform reform has been overtaken by tech-industry scandals. Between temporarily banning all news from appearing on its platform in Australia and suspending the president of the United States with the flick of a switch, Facebook has offered a chilling display of its power. Moreover, along with Twitter and Google/YouTube, it has proven to be a dangerous fire hose of disinformation, playing no small part in the events leading up to the January 6 storming of the US Capitol.
Since the birth of the digital-media platforms 15 years ago, the world’s democracies have been subjected to a grand experiment. What happens to news and information infrastructure when it is increasingly dependent on Silicon Valley companies that offer massive global audiences, algorithmic (non-human) curation of information (or disinformation), and the ability to spread such information with unprecedented ease? The answer has become increasingly clear.
Facebook, Google, and Twitter bill themselves as tech companies, but they are in fact the world’s largest-ever media giants. In that capacity, they have enabled disinformation campaigns aimed at undermining elections in more than 70 countries, even helping to elect a quasi-dictator in the Philippines. They have been used to livestream child abuse, pornography, and mass murder, including the massacre of Muslims in New Zealand. And their recommendation algorithms reliably steer billions of users to fake news and propaganda. How can we ever come together to tackle climate change when a majority of YouTube videos on the topic deny climate science?
Europe’s recent digital reforms have made headlines of their own, but they barely scratch the surface of the problem. Touting the virtues of the recently proposed EU Digital Services Act (DSA) and Digital Markets Act (DMA), European Commission President Ursula von der Leyen has hailed the arrival of a “new framework for the digital market and for our society.” Yet neither regulatory package is properly designed to address the digital media’s problems or police abuses by the major platforms.
For example, the fines that the platforms would face for certain anti-competitive practices would be too small to act as a meaningful deterrent. Like the US Federal Trade Commission’s $5 billion fine against Facebook for privacy violations, EU penalties, capped at 10% of a firm’s global revenue, would become just another cost of doing business.
Similarly, although platforms operating in the EU would bear greater responsibility for removing vaguely defined “illegal content,” such tightened moderation would hardly stop the avalanche of disinformation that they help to disseminate, most of which is not illegal. More than 100 billion pieces of content are posted on Facebook each day, making it unrealistic to expect that either its algorithms or its small army of human moderators could ever curb all of the problematic material.
The DSA and DMA also rely heavily on the idea of algorithmic transparency to ensure oversight and protect consumers. But such transparency will not act as a deterrent. Understanding how platforms’ recommendation engines work will not stop them from amplifying lies and sensational content. Though users would have a right to “opt out” of content recommendations, this gets things exactly backwards: the default should be that no private data collection is permitted, unless and until a user opts in.
The platforms will argue that they provide their services for free in exchange for our private data. But as the EU’s competition-policy chief, Margrethe Vestager, has pointed out, this is a fool’s bargain. “I would like to have a Facebook in which I pay a fee each month,” she says, with “no tracking and advertising and the full benefits of privacy.” But if that is the position of the EU’s leading competition official, why is the bloc continuing to permit the platforms’ noxious business model of surveillance capitalism?
It is time for a reset. As the creators of the new infrastructure of the digital age, the Silicon Valley giants should be treated like investor-owned utilities in the same vein as the telephone, railroad, and power industries. (In fact, Facebook CEO Mark Zuckerberg himself has suggested such an approach.) As utilities, they would be subject to a digital operating license that defines the rules of the market.
Now that we know the platforms suck up our private data, track our physical locations, and collate every “like,” “share,” and “follow” so that we can be targeted by advertisers and political operatives, regulators have a duty to intervene. As Mathias Döpfner, the pioneering CEO of the digital powerhouse Axel Springer, put it in his own criticism of the European Commission’s approach, “I appeal to you … prevent the surveillance of our citizens by making it illegal to store all personal, private, and sensitive data.”
Although the EU’s landmark General Data Protection Regulation (GDPR) was supposed to address this privacy issue, its user-consent requirement has been found to be riddled with loopholes. Now that Apple is moving to turn off data-tracking by companies that have not obtained explicit iPhone-user consent, one wonders why the EU hasn’t adopted the same regulatory standard.
Another strength of the utility model is that it could encourage more competition by limiting the digital-media monopolies’ mega-scale audiences. This could be done in several ways, including through an antitrust breakup or, as Vestager suggested, by requiring a subscription model (like Netflix or the BBC), with users paying a monthly fee. As utilities, the platforms also should be restrained from using certain “engagement” techniques, such as hyper-targeted content and ads, and manipulative behavioral nudges (like pop-up screens and autoplay).
The platforms’ frequent scandals are supposedly the price we must pay for search engines, photo sharing with “friends,” and channels through which political dissidents and whistle-blowers can alert the world to their just causes. Those are all valuable uses, but we can do better. As one of the world’s economic giants, the EU should use its market muscle to improve the bargain, by providing sensible guardrails for the twenty-first century’s digital infrastructure.