Today marks the one-year anniversary of the 2021 insurrection, when thousands of protesters stormed the U.S. Capitol to dispute the election of President Joe Biden. They injured at least 140 officers, planted pipe bombs and vandalized lawmakers’ offices. Their actions ended in five deaths and tested the mettle of American democracy. Crucially, they organized on social media. An internal Facebook report even acknowledged that the company “helped incite the Capitol Insurrection” by failing to stop the spread of “Stop the Steal” groups and rhetoric. On Jan. 6, users were submitting reports of “false news” at a rate of nearly 40,000 per hour.
In October, Facebook announced it was changing its name to Meta, signaling its full embrace of a metaverse future. Many critics—including the whistleblower Frances Haugen—feared the move was little more than a tactical distraction from the many harms that have come from the company’s profit-driven decision-making. And Haugen, speaking with my colleague Billy Perrigo, worried that Facebook’s new immersive platform, if left unregulated, would only exacerbate its existing safety flaws.
Tiffany Xingyu Wang says she shares that concern. Wang is the chief strategy and marketing officer at the AI company Spectrum Labs and the founder of the think tank OASIS Consortium. Founded last year, the consortium brings together leaders deeply invested in the metaverse, from gaming and dating apps to immersive tech platforms like Roblox, Riot Games and Wildlife Studios, to address safety and privacy in Web 3.
Wang believes in the power of the metaverse and the benefits of virtual worlds, but also fully understands the damage they could wreak if left to grow unchecked. “You can think of the Jan. 6 insurrection as a result of not having safety guardrails 15 years ago,” she tells me. “This time in the metaverse, either the impact will be much bigger, or the time to get to that catastrophic moment will be much shorter.”
But Wang’s solution is not to seek government intervention; instead, she wants to work with metaverse builders to self-regulate and to put safety first in a way that most social media platforms did not. Today, the consortium published its first-ever Safety Standards, which it hopes will be a blueprint for how metaverse companies approach rules around safety going forward. “There’s no consensus or definition of good: Most platforms I talked with do not have a playbook as to how to do this,” Wang says. “And then that’s not even mentioning the emerging platforms. There’s a huge gap in terms of fundamental governance issues, which is not a tech problem.”
You can find the full standards here. They cover how emerging tech companies should handle privacy, inclusion, interactions with governments and law enforcement; they recommend companies appoint an executive-level officer of trust and safety, partner with hate speech nonprofits and invest in moderation tools. OASIS’s ambition is that “hundreds or thousands” of companies will pledge to adopt the standards going forward.
The standards also open the door for OASIS to preside over a grading system for platforms, similar to how buildings are graded on energy efficiency or how companies can be certified as B Corporations—signaling a commitment to social responsibility.
Here are some of Wang’s biggest concerns—and potential solutions—that informed OASIS’ metaverse safety standards.
Current online safety problems could be exponentially worse in the metaverse
Some of the leading thinkers about the metaverse, including Matthew Ball, have listed a few of its key traits: it will be immersive (you go into a 3D internet instead of looking at it through a screen), persistent (platforms never pause or reset, and you interact with them and their inhabitants in real time) and interoperable (you will be able to transfer your digital identity and goods across distinct platforms).
While metaverse builders believe each of these traits will benefit users, Wang argues that each also poses significant risks. “Immersiveness increases the impact of any toxicity. Persistence increases the velocity of toxicity. And the interoperability part makes content moderation very hard, because toxicity is very industry-specific. Dating, gaming and social platforms, for example, can have different types of behaviors,” she says.
Current social media platforms already struggle to tamp down on hate speech, and Facebook video moderators have spoken out about suffering trauma and burnout from having to watch hours of harrowing content daily. The OASIS Safety Standards stipulate that platforms should spend ample resources from the jump to define, and then prevent, hate speech, abuse and other forms of toxicity from entering immersive digital spaces. The use of AI to rapidly and accurately track misbehavior will be crucial, but it must be supported by an actual team of people who grapple with false positives, grey areas and user appeals, Wang says.
The adoption of rigorous safety rules will be an uphill battle
In the tech world, safety and privacy have long been afterthoughts in favor of revenue, growth and innovation. For many years, one of Mark Zuckerberg’s favorite mottos, for instance, was “move fast and break things.” The grave flaws in this approach were revealed in the Facebook Papers—leaked internal reports—that showed Facebook deprioritized the fight against misinformation, allowing propaganda and misinformation to spread.
Wang predicts this profit strategy for metaverse platforms will be far less successful, because of the uphill battle they face to gain new adopters and existing suspicions surrounding the space. If platforms are plagued by safety and privacy concerns from the jump, then “users will not come because they hear it’s toxic: Imagine 4chan and 8chan on the metaverse,” she says. “When it becomes so physically impactful, you will have more reasons for regulators to step in. The government will just shut it down. So safety is key to the survival of the metaverse.”
But despite the publication of the Facebook Papers and the waves of bad press around the company, Meta’s VR app Oculus was the most-downloaded app in the U.S. on Christmas Day. And many of the top metaverse and gaming platforms, including Decentraland, Fortnite and Twitch, have yet to pledge to adopt the standards.
The metaverse will have even more of your personal data
Digital companies already track vast amounts of data about us for their own gain. This dynamic, as the journalist Franklin Foer writes in World Without Mind, “provides the basis for invisible discrimination; it is used to influence our choices, both our habits of consumption and our intellectual habits.”
Wang says that data collection in a 3D world could be even more dangerous. Virtual platforms might rely on users having high-quality cameras and microphones in their rooms, and could theoretically track all of their movements and purchases across virtual worlds. “The volume of PII, or personally identifiable information, a platform can collect is staggering,” she says. “It’s an issue that keeps me awake.” So later this year, Wang says, OASIS will launch a separate privacy board to deal specifically with this issue and devise guidelines for metaverse platforms.
Representation is a key aspect of safety
Some metaverse optimists argue Web 3 will help usher in a new utopian, discrimination-free, post-race world. Wang, though, points to an MIT and Stanford study showing that AI facial recognition worked significantly better for light-skinned men than for dark-skinned women. “The machines discriminate,” she says. “If the code of conduct for a platform is written by a very specific privileged group of the society, then it’s impossible for you to be inclusive and cautious about what potential racism and hate speech could happen against underprivileged groups.”
The OASIS standards stipulate, then, that companies need diverse hiring practices, especially when it comes to staffers who label and categorize data and moderate content.
Pledges to do good aren’t enough
Several companies have already pledged to adopt the OASIS standards at their launch, including the gaming platform Roblox, the music streaming company Pandora/Sirius XM, the livestreaming and social networking conglomerate The Meet Group, and the mobile gaming company Wildlife Studios. But Wang is well aware that promises alone are far from adequate. The next step will be to hold platforms accountable when they make mistakes or fail to live up to their promised standards.
That begins with a grade assessment system, which OASIS hopes to roll out in the second quarter of 2023 in conjunction with audit firms. “A company can request grades to very specifically know where they are, so they can actually improve their practices internally,” Wang says.
Geoff Cook, the CEO of the Meet Group and a member of OASIS’s safety advisory board, says he looks forward to the formal process of certification and implementing any suggested policy changes that might arise. “The work of keeping our communities safe is never over,” he said in an email.
OASIS also plans to work with international governments and agencies to distribute the standards. The think tank already has opened up a dialogue with the Australian government, for example. In a statement, Julie Inman Grant, Australia’s eSafety Commissioner, wrote that “pairing our interactive self-assessment with the Oasis User Safety Standards has so much promise in helping to build a digitally sustainable future.”
But Wang hopes that the companies of Web 3 will first start with intensive self-regulation. “People are reaching this point of collective consciousness that the current web is not sustainable,” she says. “The role of OASIS is to foster a healthy conversation with governments and private sectors who want to self-regulate.”
The standards will be ever-evolving
Given the speed at which technology surrounding the metaverse is developing, Wang says it is crucial for the OASIS safety standards to be reviewed twice a year. The think tank will take a “multi-stakeholder approach” to continually tweak its rules; she mentioned deepfakes, in which video or audio files are falsified or manipulated, as a particular area that needs addressing. “We started to talk with nonprofits who give us very specific advice in certain areas. We haven’t really fully looked into deepfakes because the applications and tech are evolving very fast,” she says.
Green energy standards are a blueprint for tech’s self-regulation