Many companies are angling to shape how virtual reality and digital identities will be used to organize more of our daily lives – from work and health care to shopping, gaming, and other forms of entertainment. The opportunities of the metaverse seem limitless, but in the absence of independent oversight, so do the risks.
LONDON – The “metaverse” isn’t here yet, and when it arrives it will not be a single domain controlled by any one company. Facebook wanted to create that impression when it changed its name to Meta, but its rebranding coincided with major investments by Microsoft and Roblox. All are angling to shape how virtual reality and digital identities will be used to organize more of our daily lives – from work and health care to shopping, gaming, and other forms of entertainment.
The metaverse is not a new concept. The term was coined by sci-fi novelist Neal Stephenson in his 1992 book Snow Crash, which depicts a hyper-capitalist dystopia in which humanity has collectively opted into life in virtual environments. So far, the experience has been no less dystopian here in the real world. Most experiments with immersive digital environments have been marred immediately by bullying, harassment, digital sexual assault, and all the other abuses that we have come to associate with platforms that “move fast and break things.”
None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovations themselves. That is why independent parties should provide governance models sooner rather than later – before self-interested corporations do it with their own profit margins in mind.
The evolution of ethics in artificial intelligence is instructive here. Following a major breakthrough in AI image-recognition in 2012, corporate and government interest in the field exploded, attracting important contributions from ethicists and activists who published (and republished) research into the dangers of training AIs on biased data sets. A new language was developed to incorporate into the design of new AI applications the values that we want to uphold.
Owing to this work, we now know that AI is effectively “automating inequality,” as Virginia Eubanks of the University at Albany, SUNY, puts it, as well as perpetuating racial biases in law enforcement. To call attention to this problem, computer scientist Joy Buolamwini of the MIT Media Lab launched the Algorithmic Justice League in 2016.
This first-wave response aimed a public spotlight at the ethical issues associated with AI. But it was soon eclipsed by a renewed push within the industry for self-regulation. AI developers introduced technical toolkits for conducting internal and third-party evaluations, hoping that this would alleviate public fears. It didn’t, because most firms pursuing AI development have business models that are in open conflict with the ethical standards that the public wants them to uphold.
To take the most common example, Twitter and Facebook will not deploy AI effectively against the full range of abuses on their platforms because doing so would undermine “engagement” (outrage) and thus profits. Similarly, these and other tech firms have leveraged value extraction and economies of scale to achieve near-monopolies in their respective markets. They will not now willingly give up the power they have gained.
More recently, corporate consultants and various programs have professionalized AI ethics to address the reputational and practical risks of ethical failures. Those working on AI within Big Tech companies would be pressed to consider questions such as whether a function should default to opt-in or opt-out; whether it is appropriate to delegate a task to AI or not; and whether the data being used to train AI applications can be trusted. To that end, many tech corporations established supposedly independent ethics boards. However, the reliability of this form of governance has since been called into question following high-profile ousters of internal researchers who raised concerns about the ethical and social implications of certain AI models.
Establishing a sound ethical foundation for the metaverse requires that we get ahead of industry self-regulation before it becomes the norm. We also must be mindful of how the metaverse is already diverging from AI. While AI has been largely centered around internal corporate operations, the metaverse is decidedly consumer-centric, which means that it will come with all kinds of behavioral risks that most people will not have considered.
Just as telecom regulation (specifically Section 230 of the US Communications Decency Act of 1996) provided the governance model for social media, regulation of social media will become the default governance model for the metaverse. That should worry us all. Though we can easily foresee many of the abuses that will occur in immersive digital environments, our experience with social media suggests that we might underestimate the sheer scale that they will reach and the knock-on effects they will have.
It would be better to overestimate the risks than to repeat the mistakes of the past 15 years. A wholly digital environment creates the potential for even more exhaustive data collection, including of personal biometric data. And since no one really knows exactly how people will respond to these environments, there is a strong case for using regulatory sandboxes before allowing a wider rollout.
Anticipating the metaverse’s ethical challenges is still possible; but the clock is ticking. Without effective independent oversight, this new digital domain will almost certainly go rogue, recreating all the abuses and injustices of both AI and social media – and adding more that we have not even foreseen. A Metaverse Justice League may be our best hope.