Meta is taking new steps to protect its users from scams, particularly those involving fake ads and celebrity impersonation, by testing facial recognition technology on Facebook and Instagram. This new technology aims to tackle “celeb-bait” scams, which use images of famous people to trick users into engaging with ads that lead to scam websites. These ads often encourage people to share personal information or send money, thinking they’re interacting with something legitimate, like a celebrity-endorsed product or investment. But these scam ads are becoming increasingly difficult to spot, especially as they’re designed to look like real promotions.
How Meta Is Using Facial Recognition
To counter these scams, Meta is testing facial recognition in its ad review system. When an ad is flagged as suspicious and contains the image of a public figure, Meta compares the face in the ad with that person’s official Facebook or Instagram profile picture. If the faces match and the ad is confirmed to be a scam, Meta blocks it immediately. Importantly, any facial data generated for the comparison is deleted right afterward, whether or not a match is found.
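At its core, the check described above is a one-to-one face comparison: the face in a flagged ad is matched against the public figure’s official profile photo, and the derived facial data is discarded as soon as the comparison is done. The sketch below illustrates that pattern with the open-source face_recognition library and assumed file names; it is an illustrative approximation, not Meta’s actual system.

```python
import face_recognition

MATCH_TOLERANCE = 0.6  # library default; lower values are stricter


def ad_face_matches_profile(ad_image_path: str, profile_photo_path: str) -> bool:
    """Return True if the face in a flagged ad appears to match the official profile photo.

    Face encodings exist only as local variables, so they are discarded when the
    function returns -- loosely mirroring the "delete facial data right after the
    comparison" behaviour described above.
    """
    ad_image = face_recognition.load_image_file(ad_image_path)
    profile_image = face_recognition.load_image_file(profile_photo_path)

    ad_encodings = face_recognition.face_encodings(ad_image)
    profile_encodings = face_recognition.face_encodings(profile_image)
    if not ad_encodings or not profile_encodings:
        return False  # no detectable face in one of the images

    # One-to-one comparison: how far apart are the two face encodings?
    distance = face_recognition.face_distance([profile_encodings[0]], ad_encodings[0])[0]
    return distance <= MATCH_TOLERANCE


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    if ad_face_matches_profile("flagged_ad.jpg", "official_profile.jpg"):
        print("Possible celeb-bait: route the ad for blocking and review.")
    else:
        print("No match: continue normal ad review.")
```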
Initial tests of this system have shown promising results. Meta has been working with a small group of celebrities and public figures to see how well this method works. In the coming weeks, Meta plans to expand this system and will begin notifying more public figures that they’re being enrolled in this protection. However, celebrities can opt out if they choose not to be part of this facial recognition system.
The Celebrity Impersonation Problem
Meta is particularly worried about scammers using celebrities’ images for their own gain. In some cases, scammers create fake accounts pretending to be a public figure. They then trick people into engaging with scam content, such as fake endorsements for investment schemes or giveaways, asking for personal details or money in return.
Currently, Meta uses a mix of automated systems and user reports to spot and shut down these fake accounts. However, it is exploring facial recognition as an additional layer of security, comparing the profile photos on suspicious accounts with the official profile pictures of the public figures they claim to be. This would allow Meta to catch impostors more quickly and shut down the scams before they reach users.
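Where the ad check is a one-to-one comparison, impersonator detection is closer to one-to-many: a suspicious account’s profile photo would be checked against the official photos of the public figures it might be imitating. A minimal sketch of that pattern follows, again using the face_recognition library with hypothetical inputs rather than anything Meta has published.

```python
import face_recognition

IMPERSONATION_TOLERANCE = 0.6  # same library default as before


def find_impersonated_figure(suspect_photo_path, official_photos):
    """Return the name of the public figure the suspect photo matches, or None.

    official_photos maps a public figure's name to the path of their official
    profile photo (hypothetical inputs, for illustration only).
    """
    suspect_image = face_recognition.load_image_file(suspect_photo_path)
    suspect_encodings = face_recognition.face_encodings(suspect_image)
    if not suspect_encodings:
        return None

    # One-to-many comparison: keep the closest match across the protected set.
    best_name, best_distance = None, float("inf")
    for name, photo_path in official_photos.items():
        official_image = face_recognition.load_image_file(photo_path)
        official_encodings = face_recognition.face_encodings(official_image)
        if not official_encodings:
            continue
        distance = face_recognition.face_distance(
            [official_encodings[0]], suspect_encodings[0])[0]
        if distance < best_distance:
            best_name, best_distance = name, distance

    return best_name if best_distance <= IMPERSONATION_TOLERANCE else None


if __name__ == "__main__":
    matched = find_impersonated_figure(
        "suspicious_account.jpg",
        {"Example Celebrity": "example_celebrity_official.jpg"},
    )
    print(f"Flag for review as impersonation of: {matched}" if matched else "No match.")
```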
Meta is also testing facial recognition to help users regain access to their Facebook or Instagram accounts if they’ve been hacked or locked out. Right now, people are asked to upload official documents like an ID card for verification. But Meta is testing a faster and simpler way – video selfies. In this process, a user uploads a video of themselves, and Meta compares it to their profile picture. The video is encrypted and securely stored, and any facial data generated is deleted immediately after the comparison is made.
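The video-selfie flow can be thought of as the same comparison applied to frames sampled from a short clip: pull out a handful of frames, match each against the account’s profile photo, and discard the derived facial data afterward. Below is a rough sketch using OpenCV for frame extraction and the same face_recognition library for matching; it is illustrative only, uses assumed file paths, and omits the encryption and secure storage Meta describes.

```python
import cv2  # OpenCV, used here only to read frames from the selfie video
import face_recognition

TOLERANCE = 0.6
FRAME_STEP = 15  # sample roughly every half second of a 30 fps video


def selfie_video_matches_profile(video_path: str, profile_photo_path: str) -> bool:
    """Check whether a selfie video appears to show the same person as the profile photo."""
    profile_image = face_recognition.load_image_file(profile_photo_path)
    profile_encodings = face_recognition.face_encodings(profile_image)
    if not profile_encodings:
        return False

    capture = cv2.VideoCapture(video_path)
    frame_index, matched = 0, False
    try:
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            if frame_index % FRAME_STEP == 0:
                # OpenCV returns BGR frames; face_recognition expects RGB.
                frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
                encodings = face_recognition.face_encodings(frame_rgb)
                if encodings:
                    distance = face_recognition.face_distance(
                        [profile_encodings[0]], encodings[0])[0]
                    if distance <= TOLERANCE:
                        matched = True
                        break
            frame_index += 1
    finally:
        # Facial data lives only in local variables, which are discarded here.
        capture.release()

    return matched
```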
According to Meta, this method is not only faster but also more secure than traditional document-based verification, and a small step toward making it harder for hackers to exploit account recovery tools.
Meta knows scammers will continue to evolve their tactics, but it is equally committed to improving its security measures. The company is constantly testing and building new defenses, including facial recognition technology, to make its platforms safer. With privacy safeguards in place, including user notifications and automatic deletion of facial data, Meta aims to stay ahead of scammers and keep its users safe online.
By taking a proactive approach, Meta hopes this technology will make it easier to spot scams and prevent users from falling victim to celebrity impersonation or losing access to their accounts.