‘I’m literally just chasing a ghost’: Tech companies struggle to address cyber harassment
Cybersecurity experts say Meta does not prioritize user safety
CHARLOTTE, N.C. (WBTV) - Presley Rhodes first learned she was being impersonated online five years ago.
In an Instagram message, a man told her he had connected with a woman on Tinder who called herself Emily Thawe, and they soon began chatting through what the woman said was her Instagram account.
The conversation turned so vulgar that he eventually became suspicious. A reverse Google image search of the woman’s face led him to Rhodes’ Instagram account.
“At first, I was thinking what this predator was doing was highly illegal, and there is no way Instagram won’t get involved,” Rhodes said.
But Rhodes quickly found herself in a never-ending game of Whack-a-Mole - reporting new accounts when she discovered them, only for another to appear.
“I’m literally just chasing a ghost, just waiting for the next person to reach out to me and tell me about a new page,” Rhodes said.
Despite employing teams of world-class engineers, tech giants like Meta, which owns Instagram, have been slow to take meaningful action against such cyber harassment. The industry appears to lack real-time responsiveness and tailored support for victims.
According to a report released by the firm HypeAuditor, of the 1.3 billion Instagram accounts worldwide, only 55% belong to real people. No one is sure how many of the hundreds of millions of fake accounts impersonate real people.

Legal experts argue that social media platforms are hiding behind an outdated law to avoid improving user safety. They are urging Congress and state legislatures to craft clearer legislation that would hold the platforms more accountable for online harassment.
“There is a systematic problem within the social media networks that have been on the market for almost 20 years to actually work with victims of impersonation,” said Christina Gagnier, a Los Angeles attorney who focuses on privacy and cybersecurity. “We do not have laws in place and systems in place to support victims.”
Social media impersonation
Rhodes is a model, artist and influencer with almost 60,000 followers on Instagram. Although she never made money from the account directly, she used the platform to market her art and modeling portfolio.
“I felt extremely carefree. I loved being spontaneous and sharing with followers what I was doing,” Rhodes said.
It took Instagram more than seven months to remove the first fake account Rhodes uncovered—and she says the platform gave her no information before the account vanished.
But the impersonations continued there and on other social media and dating platforms, including Snapchat, TikTok, Tinder, and LinkedIn. Rhodes says more than 20 men have reached out to her, mostly on Instagram, to report that she had been impersonated online.

“All of these men believe that they have developed a serious relationship with me,” Rhodes said. “So even though they find me, they let me know the page is fake, I let them know that it’s not me - they still feel like they’ve got a really deep connection with me, and I have become incredibly paranoid. I actually developed a panic disorder.”
The impersonator(s) used hundreds of images of Rhodes that she had shared on Instagram stories: photos of her walking her dog, making coffee, working out, or at a photoshoot. Some fake accounts amassed nearly 6,000 followers; others used the photos to catfish men they met online and solicit money from them.
She also fell victim to pornographic impersonation on Instagram.

In most cases, she would report the impersonator’s account herself numerous times and ask her followers to report it as well. Rhodes says it sometimes took Meta years to take fake accounts down. Often, Meta said the accounts didn’t violate Instagram’s community guidelines.
On its community guidelines webpage, Instagram asks users to not “impersonate others” or “create accounts for the purpose of violating our guidelines or misleading others.”
Meta recently introduced an option allowing users to pay a monthly fee to verify their accounts.
“This new feature is about increasing authenticity and security across our services,” Mark Zuckerberg wrote in a post on Instagram and Facebook.
Rhodes is now a verified user. She says it does not expedite the removal of fraudulent accounts, but it does enable her to contact a live representative at Meta.
After reporting another account, she contacted Meta several times and received an email in August from its support team that stated, “We received a word from our dedicated team, and I would like to inform you that there will be no actions taken at this time. However, you can still fill out the form to report the impersonator’s account.”

She reported it again and got a similar response from another representative. Only after she provided screenshots of the vulgar, explicit conversations between the impersonator and a man being catfished did Instagram finally take the account down.
To Rhodes, it is clear that Meta’s enforcement of its policies is inconsistent and that the company has no legal incentive to tackle the escalating problem.
WBTV contacted Meta several times to ask how many of its users have reported being impersonated and what plans the company has to improve its tools for combating this type of online harassment. We have yet to receive a response.
Finding little or no help from social media platforms, Rhodes went to the police but was told there was nothing they could do.
Over the last few years, she has become her own investigator, scouring the internet for new accounts and tracking who might be impersonating her online. She believes it to be one person rather than many and served the alleged offender with a cease-and-desist letter. The person she accused denied the allegations.

Rhodes eventually came across a book that referenced an impersonation case that sounded frighteningly familiar - one that ultimately connected her with a cybersecurity expert based here in Charlotte.
Theresa Payton, the CEO of the cybersecurity firm Fortalice Solutions and a former chief information officer in President George W. Bush’s White House, wrote in her 2020 book, “Manipulated,” about dealing with a LinkedIn account that claimed to belong to a brand-new employee of her firm. The account also used the name “Emily Thawe.”
The account claimed to belong to a young woman making her way in the field and had a pleasant and professional-looking photo.
After reading LinkedIn’s policies on fraudulent accounts, Payton didn’t trust that the company would handle the impersonation swiftly, so she took matters into her own hands. With an alternate account she used for investigations, she reached out to “Emily Thawe.”
“I told the account owner that they were a fraud,” Payton said.
When Rhodes came across Payton’s book, she reached out for help, telling Payton she was a victim of impersonation and that someone using the same “Emily Thawe” name had also used her likeness.
Payton says Rhodes was desperate for answers.
“She’s being cyberstalked,” Payton said. Impersonators “are very quick to show up and pull her image — her likeness, pictures of her dog — and present them in their own fantasy fairy tale of their own life.”
Payton doesn’t think that sort of impersonation should be protected by the First Amendment.
“That is not free speech,” she said. “That’s impersonation, it’s cyberstalking, it’s cyber harassment.”
Payton says women are the most frequent targets of online impersonation, particularly “young females that are single, attractive, smart and have big dreams.”
The Wild, Wild West
Gagnier, who has spent more than 13 years representing victims of online harassment, argues that social media platforms have the means to investigate and verify accounts.
“They have the back-end data to figure out how the account was created, when it was created and what email was used,” Gagnier said. “Why don’t these apps invest in trust and safety?”
The reluctance of tech giants to take action stems from a law passed 27 years ago. Under Section 230 of the Communications Decency Act, social media companies aren’t liable for content that appears on their platforms. Even if it is defamatory or dangerous, they are not considered the publisher or speaker of posts.
Section 230 was enacted in response to a 1995 New York court ruling, Stratton Oakmont v. Prodigy, which held that an online service that moderated user posts could be treated as a publisher and held legally liable for them. In 1996, Congress passed the provision with bipartisan support to shield platforms from liability for content posted by third parties.
Gagnier notes that victims often encounter obstacles due to Section 230 when seeking recourse.
Meanwhile, social media platforms argue they cannot act as content police because their businesses depend on user-generated content. If they start taking down posts and closing accounts, they fear, they could lose the immunity that allows them to operate.
According to Gagnier, legislators may have significantly underestimated the extent and harm of online harassment on social media.
A 2021 Pew Research Center survey found that 41% of Americans said they had experienced some form of online harassment. Nearly 80% said social media companies are doing only a fair or poor job of addressing it.

But the question arises: should the government regulate what a social media platform can and cannot publish?
In two 2023 rulings, Gonzalez v. Google and Twitter v. Taamneh, the Supreme Court sided with the platforms, declining to hold them accountable for user-generated content.
However, California recently passed a law that will hold social media companies liable for facilitating child sex trafficking. To avoid lawsuits, platforms must take down harmful material within 36 hours and demonstrate that they are actively fighting abuse. California is the first state to enact such legislation.
Victims of online impersonation fear that harassment will only get worse with the advent of AI-generated deepfake video and voice cloning - and that today’s challenges will seem like child’s play compared with what victims will face as artificial intelligence spreads.
“I believe that we’re moving into an era where AI is becoming more and more prominent, and crimes are just going to be getting worse. So, if we release these tools and these technologies before we have proper systems around it to protect its creators, protect its users, things are going to get really messy really fast,” said Rhodes.
Rhodes says that this past year has forced her to put her life on hold and focus on holding the impersonator accountable. Nevertheless, she remains hopeful that her advocacy will contribute to the creation of new systems prioritizing content creators’ safety.
Copyright 2023 WBTV. All rights reserved.