

Twitter Verification Changes Unleash Deluge of Impersonations, Misinformation, and Reputation Risks

“We are excited to announce insulin is free now.”

The Twitter post immediately attracted a large response because it came from an account using the name and logo of pharmaceutical company Eli Lilly and Co., and the account carried a blue “verified” checkmark. But the @EliLillyandCo account was fake, and in the six hours it took Twitter to remove the tweet, it spurred a wave of other fake Eli Lilly accounts and misinformation about drug costs and healthcare.

At the heart of this issue is that checkmark—the small graphic has long signaled that an account is verified as authentic. But under new Twitter owner Elon Musk, the checkmark was suddenly available to anyone through the new Twitter Blue subscription service. Identity verification was skipped; the account owner just had to pay $8.00.

“The definition of verification and the accompanying blue checkmark is changing,” according to Twitter. “Until now, Twitter used the blue checkmark to indicate active, notable, and authentic accounts of public interest that Twitter had independently verified based on certain requirements.

“Now the blue checkmark may mean two different things: either that an account was verified under the previous verification criteria (active, notable, and authentic), or that the account has an active subscription to Twitter’s new Twitter Blue subscription service, which was made available on iOS in the US, Canada, Australia, New Zealand, and the UK on November 9, 2022. Accounts that receive the blue checkmark as part of a Twitter Blue subscription will not undergo review to confirm that they meet the active, notable, and authentic criteria that was used in the previous process.

“Please note, to minimize impersonation risks, display name changes will be temporarily restricted on Verified accounts. This will impact accounts Verified under the legacy program and Twitter’s new Twitter Blue subscription product.”

A day after the changeover in early November, however, the Twitter Blue program was paused after users rushed to create accounts that spoofed U.S. President Joe Biden, celebrities, and brands, The Washington Post reported. A fake account claiming to be basketball player LeBron James falsely tweeted that the athlete was requesting a trade. A “verified” account purportedly belonging to Chiquita Brands claimed the company had “overthrown the government of Brazil,” the New York Post noted. A fake account pretending to be former U.S. president George W. Bush—complete with blue checkmark—tweeted “I miss killing Iraqis.”

The new product launch had skipped much of the company’s internal risk evaluation process due to Musk’s expedited timeline, and it failed to make key upgrades—including equipping content moderators to help solve problems and establishing a way for internal Twitter staff to quickly distinguish between newly awarded check marks and legacy verified accounts, The Washington Post reported. In addition, the risk evaluation team had been laid off.

In a letter to Musk, U.S. Senator Edward J. Markey wrote: “Apparently, due to Twitter’s lax verification practices and apparent need for cash, anyone could pay $8.00 and impersonate someone on your platform. Selling the truth is dangerous and unacceptable.”

Trust has been at a premium for years, according to the Edelman 2022 Trust Barometer, and distrust is now the default. False accounts and tweets play on users’ emotions to trick them into believing disinformation. Undercutting a signal that users have associated with trusted accounts for years makes the information environment on Twitter even more complicated, especially for brand and reputation security, says Jeremy Plotnick, an assistant professor at George Mason University’s School of Business.

“Twitter’s move to effectively eliminate its attempts at validating account ownership has significantly increased the risks for both corporate users and high-profile individuals,” Plotnick tells Security Management. “That said, there have always been risks associated with Twitter (and other social media platforms) in the form of fake accounts and/or other types of information operations, but Twitter’s decision to sell its verification blue checks for $8.00 made it exceedingly easy to execute a range of attacks.”

Those attacks include brand reputation risks, fraud, false verifications, and account takeovers, says Pete Barker, director of fraud and identity for cybercrime analytics company SpyCloud. An account bearing a blue checkmark was spotted running a cryptocurrency scam. Another copied the profile picture, banner, and bio straight from Musk’s own account and still managed to receive the blue checkmark, tech reporting site Bleeping Computer found.

“When you’re looking at a verification process, I think people have a tendency to say—when they see this verification, whether it’s Twitter or anybody else—you’re supposed to feel good about that: ‘Hey, I’m authenticated, and this is who I am,’” Barker says. And in the case of Twitter Blue, “the verification method or lack thereof has unfortunately and unintentionally created a doorway for cybercriminals to commit identity fraud.”

“People want to be authenticated,” he continues. “People want to feel secure about something. And now that this has happened, there’s a brand reputation piece that’s going to tarnish [Twitter]. It’s going to sting for a while. And because of all that, when you get back to the fraud aspect, we have this sort of zero trust mentality right now—we don’t trust anything, we don’t trust the news, we don’t trust this person—and when something like this comes along, there’s even less trust.”

And the minimal cost of just $8.00 makes it easier, and more tempting, to commit fraud or spread misinformation online, Barker notes.

While many fake accounts will use a brand’s name or image, some are more serious than others about fooling their audience, Plotnick says. While a Twitter user is less likely to believe a post from an obviously fake name like “@LockheedMartini” or “@NestleDeathCult,” “@EliLillyandCo” feels quite plausible, especially when accompanied by a blue checkmark. The company’s real Twitter account is @LillyPad.
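The lookalike problem lends itself to simple automated screening. The sketch below is a hedged illustration rather than any vendor’s actual tooling: it flags handles that closely resemble a brand’s official account using Python’s standard-library string matcher. The handle list, keywords, and threshold are assumptions built around the Eli Lilly example.

```python
# Minimal sketch: flag Twitter handles that resemble a brand's official
# accounts. Handles, keywords, and the threshold are illustrative
# assumptions, not production tooling.
from difflib import SequenceMatcher

OFFICIAL_HANDLES = ["LillyPad"]          # Eli Lilly's real account
BRAND_KEYWORDS = ["elililly", "lilly"]   # name fragments worth watching

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how alike two handles are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_lookalike(candidate: str, threshold: float = 0.6) -> bool:
    """Flag a handle that nearly matches an official handle or embeds
    a brand keyword without being the official account itself."""
    lowered = candidate.lower()
    if lowered in (h.lower() for h in OFFICIAL_HANDLES):
        return False  # it's the real account
    if any(keyword in lowered for keyword in BRAND_KEYWORDS):
        return True   # contains the brand name outright
    return any(similarity(candidate, h) >= threshold for h in OFFICIAL_HANDLES)

for handle in ["EliLillyandCo", "LillyPad", "LockheedMartini"]:
    print(handle, "->", "flag for review" if is_lookalike(handle) else "ok")
```

A real monitoring pipeline would extend a check like this with homoglyph detection (for example, a capital “I” standing in for a lowercase “l”) and comparison of profile images against the brand’s official assets.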

Inside Eli Lilly, the fake tweet sparked a panic. According to the Post, “Company officials scrambled to contact Twitter representatives and demanded they kill the viral spoof, worried it could undermine their brand’s reputation or push false claims about people’s medicine.” Within a day of the debacle, Eli Lilly executives ordered a halt to all Twitter ad campaigns—a potential loss of millions of dollars in ad revenue for the social media platform.

While mis- and disinformation are nothing new on Twitter and other social media platforms, Musk has slashed staffing in departments that are usually in charge of content moderation.

By his third week in charge of the company, Musk had ordered the firing of nearly two dozen Twitter employees who had publicly and privately pushed back against him and his initiatives, and had cut the contractor workforce tasked with content moderation and data science, The New York Times reported.

Overall, half of the company’s 7,500 employees have been laid off in the past month. And today, Musk issued an ultimatum to Twitter employees who remain: commit to a new “hardcore” Twitter by signing a pledge to stay on or leave the company with three months’ severance pay, according to the Post.

For other organizations and individuals, this reduction in force means that they will need to be extra vigilant about potential brand reputation risks.

“I do not think the core advice for brand protection on Twitter has changed, but it needs to be addressed with greater urgency,” Plotnick says. “Companies need to closely monitor the platform for references to their brand(s) and be prepared to react immediately when they identify a fake account or the transmission of demonstrably false information. The companies need to be sure to maintain clear and consistent communications across multiple channels, including the traditional media, and cultivate relationships with key journalists and media influencers. Doing this builds credibility and may help the company with its post-attack communications.”

In addition, he says, “Companies have greater resources for their social media operations than most individuals and can thus engage in more comprehensive 24/7 surveillance and response. On the other hand, individuals who are seen as victimized by bad actors on social media may elicit more sympathetic media coverage. Ultimately, high profile individuals will need to make a determination if the risk of being the victim of a look-alike account is worth the audience available on Twitter.”

So, how can organizations or individuals make that decision? Plotnick notes that the decision will need to be addressed on a case-by-case basis, taking into account several key factors:

  • The size of their current audience and its rate of growth/decline.

  • The level of engagement it has with its stakeholders on Twitter.

  • The primary reasons for its existing Twitter presence, such as proactive communications, customer relationships, taking the pulse of the market, etc.

  • The viability of other social media channels to meet the organization’s needs.

  • The risk of surrendering their Twitter presence to competitors.

  • The risk of looking like the organization is making a statement about Twitter’s ownership rather than mitigating its reputational risk.

“It should be noted that while an organization can deactivate its official account(s), bad actors can continue to create and use fake accounts,” Plotnick cautions. “The difference is that the organization will no longer be there to defend itself.”

Additionally, organizations can beef up their reputational threat assessment processes—while it is difficult for organizations or individuals to plan for random malicious posts, using monitoring tools, establishing immediate response plans and procedures, and practicing them can be essential, Plotnick adds.
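To make “monitoring tools” concrete, here is a minimal sketch of a brand-mention watch built on Twitter’s v2 recent-search API. The bearer token, query string, and print-based alerting are placeholder assumptions; production tooling would add paging, rate-limit handling, and deduplication.

```python
# Rough sketch of brand-mention monitoring against Twitter's v2
# recent-search endpoint. The token, query, and alerting below are
# placeholders, not a finished product.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # assumed to be set

def fetch_brand_mentions(query: str, max_results: int = 50) -> list[dict]:
    """Return recent tweets matching the query, with author and timestamp."""
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "max_results": max_results,
            "tweet.fields": "author_id,created_at",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    # Watch for posts that mention the brand but do not come from it.
    mentions = fetch_brand_mentions('"Eli Lilly" -from:LillyPad')
    for tweet in mentions:
        # In a real workflow this would feed a triage queue or pager.
        print(tweet["created_at"], tweet["author_id"], tweet["text"][:80])
```

Feeding results like these into an escalation playbook, rather than a console, is what turns passive monitoring into the immediate response Plotnick describes.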

“Ideally, when an organization creates crisis scenarios the team will incorporate information operations into the mix,” he notes. “For scenario planning, the organization can identify issues that may arise from its operations or products. The nature of these issues really depends on the nature of the organization.

“Let’s take a hypothetical case of a restaurant chain,” he continues. “A possible crisis scenario could be an E. coli outbreak. This scenario (and related training) would normally involve how the company responds to the issue from an operational, regulatory, legal, shareholder, and media perspective. Today, the scenario should also include the risks of bad actors exploiting the real-world crisis using false social media accounts.”

Plotnick also recommends considering the various motivations that could be behind fake accounts and disinformation online when planning out potential responses.

“Looking at some of the recent incidents on Twitter it seems that there are a range of motivations driving those who have created the accounts and posted the false information,” he says. “Some of the recent posts would indicate the motivations are environmental activism, social activism, anti-corporate sentiment, and/or political ideology. Knowing the motivation of the attacker can be helpful in understanding the nature and extent of the risk.”

On the financial risk side, one potential motivation is market manipulation.

“Fake posts related to both Eli Lilly and Lockheed Martin led to short but steep declines in their stock prices,” Plotnick says. “A bad actor could theoretically deploy a fake account to release market moving information in order to buy, or sell, stock in the company or its competitors. This has been done in the past using fake press releases.”

Meanwhile, at Twitter, Musk announced that the blue-check subscription service will relaunch on 29 November to make sure the function is “rock solid.” With the new release, a user cannot change a verified account’s name without losing the blue checkmark “until name is confirmed by Twitter to meet terms of service,” Musk said.

This saga serves as a lesson to developers everywhere, Barker says.

“Regardless of if it’s Twitter or anybody else, the bigger picture here is making sure that if you roll out something, you have it buttoned up,” he adds. “If you’re going to the public—whoever you are—and you say, ‘We have this new process in place and as a result of that, we’re going to create this persona of validation and verification,’ then you need to make sure that you actually do have that and it’s buttoned up, tested, and it’s ready to go.”
