DEEPFAKES KEEP GETTING DEEPER

Deepfakes are permeating society faster than photoshopped celebrity Instagram images these days. Often, though, the intention behind generating and posting a deepfake goes much deeper than some face-tuned Kardashian insecurities. Unlike a photoshopped image, a deepfake is synthetic media, typically video, created using a form of artificial intelligence known as deep learning to mimic a chosen person almost imperceptibly, frequently for nefarious or malicious reasons. Usually a video of a person's face is altered so that they come across as someone else. Because deepfakes are so convincing and realistic, they can be, and have been, used to spread misinformation; this is especially true when it comes to politics and porn. When deepfakes first started appearing on mainstream social platforms and across the internet, most of them were pornographic.

In 2019, a disturbing 96% of the deepfake videos found online by AI firm Deeptrace were pornographic, often featuring the faces and bodies of well-known female celebrities. Many professionals in the space believe the technology can be “weaponized”, especially against women. Deepfakes aren’t reserved for video content either; they can be images and audio too. Audio deepfakes are also known as ‘audio skins’ or ‘audio clones’. At this stage, at least in the United States, creating deepfakes isn’t illegal: there is “no federal law that criminalizes the creation or sharing of non-consensual deepfake porn”, and the cases that do make it to court often don’t go very far and have proven hard to prosecute.

Making a good deepfake requires skill and processing power. It’s not something that can be easily done on a stock-standard PC; it usually takes something higher-end and more powerful to cut the processing time from days to hours. For those who aren’t naturally inclined or specifically skilled, though, there are now many programs that can assist. Spotting a deepfake can be as hard as creating one. Initially there were a few tell-tale signs, such as the fact that most early deepfakes never blinked, understandably, since they were built from still images in which the subject’s eyes were almost always open. As the technology has advanced, though, deepfakes have become less and less obvious.
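To make the blink tell-tale concrete, here is a minimal sketch of the eye-aspect-ratio (EAR) blink heuristic that early detection efforts leaned on. This is an illustration of the general idea, not any particular detector mentioned above; it assumes OpenCV, dlib, and SciPy are installed, along with dlib’s standard 68-point landmark model file (shape_predictor_68_face_landmarks.dat), and the threshold value is an illustrative assumption rather than a calibrated one.

```python
# Sketch of the eye-aspect-ratio (EAR) blink heuristic used to flag
# early deepfakes, which rarely blinked. Assumes OpenCV, dlib, SciPy,
# and dlib's 68-point landmark model are available.
import cv2
import dlib
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # Ratio of the two vertical eye-landmark distances to the horizontal
    # one; it drops sharply when the eye closes, so a blink shows as a dip.
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.21  # illustrative "eye closed" cutoff, not calibrated

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def count_blinks(video_path):
    """Count closed-to-open eye transitions across a clip."""
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 outline the left and right eyes.
            ear = (eye_aspect_ratio(pts[36:42]) +
                   eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_THRESHOLD:
                eye_closed = True
            elif eye_closed:
                blinks += 1
                eye_closed = False
    cap.release()
    return blinks
```

A long clip in which this counter never increments was, for a while, a reasonable red flag. Modern deepfakes blink convincingly, which is exactly why simple cues like this no longer suffice on their own.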

So, how can deepfakes be regulated? It’s a tough ask, but Meta believes it is up to the task, especially where politics is concerned. Meta already has some deepfake policies in place, and it’s looking to institute more, including requiring political advertisers to disclose when AI or digital manipulation has been used in Instagram or Facebook adverts. The policy, which will be implemented from January, will be enforced by a combination of AI and human ‘fact checkers.’ Google has similar policies in place to limit the spread of misinformation, especially political misinformation. Social media sites such as TikTok don’t allow political advertising at all. With major global elections looming in 2024, such measures are becoming increasingly important.