There are thousands of distortion filters available on major social platforms, with names like La Belle, Natural Beauty, and Boss Babe. Even the goofy Big Mouth lens on Snapchat, one of the most popular social media filters, is made with distortion effects.
In October 2019, Facebook banned distortion effects, citing “public debate on the potential negative impact.” Awareness of body dysmorphia was growing, and a filter called FixMe, which let users mark up their faces the way a cosmetic surgeon might, had sparked a wave of criticism for encouraging plastic surgery. But in August 2020 the effects were reinstated, with a new policy banning filters that explicitly promoted surgery. Effects that resize facial features, however, are still allowed. (When asked about the decision, a spokesperson referred me to a Facebook press release from the time.)
When the effects were reinstated, Rocha decided to take a stand and began posting condemnations of body shaming online. She pledged to stop using warping effects herself unless they were clearly humorous or dramatic rather than beautifying, and said she didn’t want to “be responsible” for the harmful effects that some filters have on women: some, she said, have sought plastic surgery to look like their filtered selves.
“I wish I were wearing a filter right now”
Krista Crotty is a clinical education specialist at the Emily Program, a leading center for eating disorders and mental health based in St. Paul, Minnesota. Much of her work over the past five years has focused on teaching patients how to consume media in a healthier way. She says that when patients present themselves differently online and in person, she sees an increase in anxiety. “People post information about themselves, whether it’s size, shape, weight, whatever, that doesn’t look anything like what they actually look like,” she says. “Between this authentic self and this digital self there is a lot of anxiety, because that is not who you really are. You don’t look like the photos that have been filtered.”
For young people, who are still figuring out who they are, navigating between a digital and authentic self can be particularly complicated, and it’s unclear what the long-term consequences will be.
“Online identity is a bit like an artifact, almost,” says Claire Pescott, a researcher at the University of South Wales. “It’s kind of a projected image of yourself.”
Pescott’s observations of children led her to conclude that filters can have a positive impact on them. “They can kind of try out different characters,” she explains. “They have these provisional identities that they can change, and they can evolve with different groups.”
But she doubts that all young people will be able to understand how filters affect their sense of self. And she’s worried about how social media platforms grant immediate validation and feedback in the form of likes and comments. Young girls, she says, find it especially difficult to tell the difference between filtered photos and ordinary photos.
Pescott’s research also revealed that while children are now often educated about online behavior, they receive “very little education” about filters. Their safety training “was related to the overt physical dangers of social media, not the emotional and more nuanced side of social media,” she says, “which in my opinion is more dangerous.”
Bailenson expects that we can learn more about some of these emotional unknowns from established research in virtual reality. In virtual environments, people’s behavior changes with the physical characteristics of their avatar, a phenomenon called the Proteus effect. Bailenson found, for example, that people with taller avatars were more likely to behave confidently than those with shorter avatars. “We know that visual representations of the self, when used meaningfully in social interactions, change our attitudes and behaviors,” he says.
But sometimes these behaviors can play on stereotypes. A well-known 1988 study found that athletes who wore black uniforms were more aggressive and violent when playing sports than those who wore white uniforms. And this carries over into the digital world: a recent study showed that video game players who used avatars of the opposite sex behaved in gender-stereotypical ways.
Bailenson says we should expect to see similar behavior on social media as people adopt masks based on filtered versions of their own faces, rather than entirely different characters. “The world of filtered video, in my opinion – and we haven’t tested that yet – is going to behave very similarly to the world of filtered avatars,” he says.
Considering the power and ubiquity of filters, there is very little in-depth research into their impact, and even fewer safeguards around their use.
I asked Bailenson, who is a father of two young daughters, what he thinks about his daughters’ use of AR filters. “It’s really hard,” he says, “because it goes against everything we learn in all of our basic children’s stories, which is ‘Be yourself.’”
Bailenson also says that playful use is different from constantly augmenting ourselves in real time, and that it’s important to understand what these different contexts mean for kids.
The few regulations and restrictions that exist on the use of filters are left to the companies to enforce themselves. Facebook’s filters, for example, must go through an approval process that the spokesperson said uses “a combination of human and automated systems to review effects as they are submitted for publication.” They are reviewed for certain issues, such as hate speech or nudity, and users can also report filters, which are then reviewed manually.
The company says it regularly consults with expert groups, such as the National Eating Disorders Association and the JED Foundation, a mental health nonprofit.
“We know people may feel pressured to look a certain way on social media, and we are taking action to address this issue on Instagram and Facebook,” a statement from Instagram said. “We know effects can play a role, so we ban those that clearly promote eating disorders or encourage potentially dangerous cosmetic surgery procedures… And we’re working on more products to help reduce the pressure people can feel on our platforms, like the option to hide like counts.”
Facebook and Snapchat also tag filtered photos to show they’ve been transformed, but it’s easy to bypass the tags by simply applying changes outside of apps, or uploading and re-uploading a filtered photo.
Labeling can be important, but Pescott says she doesn’t think it will significantly improve an unhealthy beauty culture online.
“I don’t know if that would make a huge difference, because I think it’s the fact that we see it, even though we know it’s not real. We still have that aspiration to look that way,” she says. Instead, she believes the images kids are exposed to should be more diverse, more authentic, and less filtered.
There is also another concern, especially since the majority of users are very young: the amount of biometric data that TikTok, Snapchat, and Facebook have collected through these filters. Although Facebook and Snapchat say they don’t use filtering technology to collect personally identifiable data, a review of their privacy policies shows that they do have the right to store photo and video data on their platforms. Snapchat’s policy says snaps and chats are deleted from its servers once a message is opened or expires, but stories are stored longer. Instagram stores photo and video data for as long as it wants or until the account is deleted; Instagram also collects data on what users see through its camera.
Meanwhile, these companies continue to focus on AR. In remarks to investors in February 2021, Snapchat co-founder Evan Spiegel said, “Our camera is already capable of extraordinary things. But it’s augmented reality that is driving our future,” adding that the company is “doubling down” on augmented reality in 2021 and calling the technology a “utility.”
And while Facebook and Snapchat say the facial detection systems behind the filters aren’t tied to user identities, it’s worth remembering that Facebook’s smart photo tagging feature, which scans your photos and tries to identify people who might be in them, was one of the first large-scale commercial uses of facial recognition. And TikTok recently settled a lawsuit for $92 million that alleged the company misused facial recognition for ad targeting. A Snapchat spokesperson said, “Snap’s Lens product does not collect any identifiable information about a user, and we cannot use it to link or identify individuals.”
And Facebook in particular sees facial recognition as part of its AR strategy. In a January 2021 blog post titled “No Looking Back,” Andrew Bosworth, director of Facebook Reality Labs, wrote: “This is just the start, but we intend to give creators more to do in AR and with greater capabilities.” The company’s planned release of AR glasses is highly anticipated, and it has already teased the possible use of facial recognition as part of the product.
Given all the effort it takes to navigate this complex world, Sophia and Veronica say they just wish they were better informed about beauty filters. No one apart from their parents has ever helped them make sense of it. “You shouldn’t have to get a specific college degree to figure out that something might be unhealthy for you,” says Veronica.