Deepfake tech strikes again
Deepfake technology attracts a great deal of backlash, and given the steady stream of bad press, it's not hard to see why.
While the technology could be advantageous under the right circumstances, it also has the power to cause tremendous damage, not least to a person's reputation. Its darker side soon became apparent and has raised serious concerns.
In an effort to combat these issues, Meta has introduced facial recognition technology to its platform.
Meta’s facial recognition crackdown
Meta, the parent company of Facebook and Instagram, has announced the integration of facial recognition technology to combat celebrity impersonation in fraudulent advertisements.
This move responds to a growing problem: scammers using the likenesses of high-profile figures such as Elon Musk and Martin Lewis to promote fake investment schemes and cryptocurrencies.
Martin Lewis is a prominent financial expert who has repeatedly expressed his frustration with these scams, stating on BBC Radio 4's Today programme that “countless” reports of his name and face being used in scams come in daily, leaving him feeling “sick.”
To address these ongoing concerns, Meta’s new technology will compare flagged ad images with celebrities' profile photos from its platforms. If a match is detected and the ad is confirmed as fraudulent, it will be automatically removed. Early trials of this system have shown promise, leading Meta to expand its scope by notifying public figures impacted by these scams.
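Meta has not published implementation details, but systems of this kind are commonly built on face embeddings: each image is mapped to a numeric vector, and two faces are treated as a match when their vectors are sufficiently similar. The Python sketch below is a minimal illustration of that general idea only; the `matches_known_celebrity` function, the similarity threshold, and the moderation flow are illustrative assumptions, not Meta's actual code.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_known_celebrity(
    ad_embedding: np.ndarray,
    profile_embeddings: list[np.ndarray],
    threshold: float = 0.85,  # illustrative; real systems tune this empirically
) -> bool:
    """Return True if the face in a flagged ad is close to any reference photo."""
    return any(
        cosine_similarity(ad_embedding, ref) >= threshold
        for ref in profile_embeddings
    )

# Toy usage: random vectors stand in for embeddings produced by a
# face-recognition model (the embedding step itself is out of scope here).
rng = np.random.default_rng(0)
celebrity_refs = [rng.normal(size=128) for _ in range(3)]
flagged_ad = celebrity_refs[0] + rng.normal(scale=0.05, size=128)  # near-duplicate face

if matches_known_celebrity(flagged_ad, celebrity_refs):
    print("Likely celebrity match: route ad for fraud review and removal")
```

In a real deployment the match alone would not trigger removal; as described above, the ad must also be confirmed as fraudulent before it is taken down.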
Deepfake scams
Deepfakes use artificial intelligence to create realistic but fabricated images, audio, and video. The results are sometimes convincing enough to make scams far more believable, allowing malicious actors to promote fake services using computer-generated likenesses of celebrities.
In 2018, Martin Lewis sued Facebook after his name and likeness were used in scam ads. He dropped the lawsuit the following year when Facebook introduced a reporting tool for scam ads and donated £3 million to Citizens Advice, but the issue has only intensified, and today's deepfakes make these fraudulent schemes far harder to detect and combat.
Deeper concerns
Facial recognition was once a standard feature of Facebook, but in 2021 it was shelved amid concerns over privacy, accuracy, and bias. Now it is being reintroduced in a more limited form, both to detect celebrity scam ads and to help users regain access to compromised accounts. Meta states that any facial data collected during the process will be encrypted and securely stored. However, the trials are not currently running in the UK or EU, as regulatory approval has not yet been granted.
Outside of scam ads, deepfakes are increasingly being used for more sinister purposes, including political manipulation and personal attacks. Beyond impersonating celebrities, deepfakes have been used to create misleading content such as fake interviews and doctored speeches.
Regulating deepfakes
The UK's communications regulator, Ofcom, is one example of an authority facing growing pressure to act. Martin Lewis recently called for Ofcom to be granted more power to tackle scam ads and other malicious uses of AI after a fake interview was used to trick people into giving away their bank details.
Current laws around intellectual property and the right of publicity, which is “the right of all individuals to control commercial use of their names, images, likenesses, or other identifying aspects of identity”, were not designed for the challenges created by AI-generated content. This leaves platforms like Meta in a difficult position: they must balance protecting users with remaining compliant with privacy laws.
As governments and regulators worldwide begin to grapple with these issues, it is likely that new legal frameworks will emerge – they have to – to address the ethical and legal questions surrounding deepfake technology. Until then, platforms like Meta will continue to rely on a combination of AI, facial recognition, and user reporting to mitigate the risks posed by deepfakes.