We’d all like to be safe in social VR experiences. Barring the human race one day waking up and unanimously deciding to be decent to one another, how might this future come about? One potential solution is robust, clear, accessible community guidelines from platforms. But, what might those look like?
VR researcher Rafi Lazerson recently published a paper with the University of California Berkeley’s Center for Long-Term Cyber Security, titled “A Secure and Equitable Metaverse: Designing Effective Community Guidelines for Social VR.” The paper breaks down what harms can look like in social VR environments, as well as what shape community guidelines for those environments should take to prevent and address those harms.
Learning From the Past?
The paper’s introduction presents a provocative question:
“Will social VR platforms proactively develop clear community guidelines at this early stage of user adoption, or will their process follow the slow, opaque, and reactive trajectories that were typical of 2D social media platforms?”
The paper draws on industry and academic research, media reports, and the existing community guidelines of both 2D and VR social platforms. It takes a particularly close look at Meta’s guidelines for both its 2D and its social VR experiences. This is a handy example, but it also comes with a message to Meta:
“Well-funded corporations have a disproportionate impact on the formation of the metaverse and on norms within social VR, and therefore have a responsibility to lead the industry in developing responsible policies and practices.”
Harms in Social VR
“Embodiment removes the sense of separation and distinction between the user and the avatar, contributing to interactions between the users that feel real and present,” wrote Lazerson. “To the user, any VR world, even the fantastical, can feel real and present due to avatar embodiment, world-immersion, and synchronous conduct-based interactions.”
This won’t present an entirely new idea to most readers, but it is central to this work in particular and to this whole body of work. It means that misconduct can be more difficult to identify because it might not be recorded in the way that most social media interactions are. It also means that the interactions are worth taking seriously even though they happen in a “game.”
“Experiences of harassment in VR have been described as comparable to in-person harassment,” wrote Lazerson. “As haptic gloves, suits, and other VR immersion hardware become a common part of VR use, experiences of harassment may feel increasingly indistinguishable from in-person harassment.”
What’s more, many of the forms of harassment that we know and hate from traditional social media – based on race, religion, gender, and other factors – are already being reported in VR, to such a degree that some users report hiding aspects of their identities in order to avoid it. The problem compounds as immersive tech is increasingly used for work and wellness.
“The inability of some users to present as themselves in or even enter into social VR without fear could have severe health and economic ramifications,” wrote Lazerson.
So, how do we preserve these environments as safe spaces for everyone?
Effective Community Guidelines
According to Lazerson, effective community safety practices consist of three main components:
- External communication of expectations to users;
- Internal communication of policies to moderators;
- The means by which policy is enforced through product, including:
  - User tools;
  - Moderator tools;
  - Educational tools;
  - Invisible safety tools (age-gating of select experiences, etc.).
It can be difficult for anyone other than a platform maintainer or moderator to see all of these pieces working together or to gauge how effective they are. However, one item on that list, outward-facing policy, is easy to see. So, how do Meta’s community guidelines work as a model for social VR experiences generally?
Are Meta’s Social VR Guidelines Sufficient?
A theme throughout this paper – and the realm of metaverse safety generally – is that immersive platforms can learn from conventional social media while recognizing that immersive content is different and accommodating those differences. According to Lazerson, one of Meta’s biggest problems may be that its immersive policies don’t come with Facebook’s existing safeguards.
“There is no single list of public-facing community guidelines for users to follow in Meta’s social VR. There are at least two, perhaps three: the Horizon Policy, the Conduct in VR Policy, and possibly the Facebook Community Standards,” Lazerson wrote. “There is a significant amount of ambiguity regarding where each of the aforementioned Meta community guidelines applies in VR.”
Lazerson isn’t only here to criticize. He closes the paper with recommendations to all immersive platforms. These include accessible, transparent, specific, and comprehensive guidelines using existing social media guidelines as a “baseline.” He also recommends platforms work with each other on policy “to ensure that no forms of harm are overlooked.”
For the full recommendations, the complete report is available for free.
It’s unfortunate that we need things like community guidelines – long documents detailing ways that people aren’t allowed to be mean to each other. Ideally, immersive worlds would play out like the world that we live in – in which most people are just impulsively decent to one another.
But, for whatever reason, social VR can be an uncomfortable space. It also has great beauty and can be a place where we share knowledge and experiences, and just have fun. It’s up to each of us to help bring out the best in this medium. So read the guidelines, follow them, encourage others to follow them, and encourage your favorite platform to use them well.