Inno Yolo
"I write what I like" - Steve Bantu Biko

19/6/2024

6 Tips for Detecting Deepfakes and Mitigating against AI-Powered Disinformation

In a year in which 2 billion people will have voted, it has never been more important to guard against being influenced by disinformation campaigns.
Over the past few years we’ve heard a lot about alternative facts. It’s been a while since we lived in a world where we drew our news from the same 5 newspapers and 4 TV channels (2 of which were only available for 12 hours a day and one was a premium subscriber-only channel). We now live in a world where entertainment channels can call themselves news channels and get away with it. News channels are plentiful, run 24/7 and some have an openly declared agenda. Some "news sources" don't consider themselves accountable to either industry bodies or the public.

Our world is divided. We can’t agree on what counts as a reliable source of information to support our views and standpoints, so we regularly end up in a stand-off, distrusting each other's sources. Ours is a world where we sometimes mock what some would reasonably refer to as credible scientific sources and fact checkers. Academics and scientists are in some quarters considered part of a great number of large conspiracies relating to some of the most important topics of our time, and every erroneous study or influenced scientist is waved around as evidence that the "whole lot can't be trusted". Our current context is one where harmful deepfakes can thrive.
What’s a deepfake?

Deepfakes are artificially generated or amended media that, although they are not recordings of true events as is the case with filmed video or live recordings, are highly realistic and convincing to the consumer.
When social media launched, we celebrated the democratization of information, especially of news media. Anyone could now be a journalist and report news live. A decade to a decade and a half later, we’re getting our news from a myriad of sources and channels (including individuals in social media groups), we're being fed news in line with our previous clicks, and we are solidly anchored in our own bubbles. The more we show interest in certain topics or discussions, the deeper we get pulled into the bubble because, "This might interest you". That would probably be bad enough, but we’re also being bombarded with disinformation and, dare I say it, the real "fake news".

The above information bubble trend has only been made more challenging by massive improvements in the technologies that allow the creation of deepfake videos, images and voice recordings. We already know that every technology is a double-edged sword, wielded for good or bad depending on whose hands it lands in.
Why are deepfakes so effective?

Deepfakes often leverage artificial intelligence or sophisticated algorithms to effectively manipulate content in a manner that is deceptively realistic. The greater the advances in computing and artificial intelligence, the better the quality of deepfakes. 
This means that deepfakes are here to stay. So it’s imperative that we’re equipped with tactics for identifying deepfakes and protecting ourselves against disinformation. 

The question may arise: why is anybody working on technology that can be used to manipulate and deceive people? How did we get here and why don't we just legislate all deepfakes away?

Deepfake video generation, or AI-generated content more broadly, allows media content to be produced at a monetary cost that is marginal compared to what it would cost to produce the same media using traditional means. Additionally, AI-generated content can be created at speeds and at a scale that simply cannot be achieved using traditional media creation methods. There are also examples of deepfakes or AI-generated content being used for good. It's important to make clear that there are critical ethical discussions we need to have around the implications of these efficiencies for humanity, and around how commercial interest should be treated in cases where there is plenty of room for abuse, harm or the massive displacement of human labour.
Why deepfakes?

As with all technologies and tools there is a business case for deepfakes, or AI generated content, if created and used responsibly.
What we do know, however, is that bad actors are leveraging this technology to deceive and manipulate, with the most common examples relating to politics, social engineering schemes and financial fraud. These actors do not concern themselves with responsibility and accountability in their use of technology. They also do not prioritize humanity or sustainability. Irrespective of the good work and good intent behind initiatives such as the Content Authenticity Initiative, bad actors don't feel bound by the same rules that you and I might feel bound by, and they don't care to build a healthy and well-functioning society and planet in the way that you and I might. So it is imperative that we arm ourselves against deepfakes and other forms of disinformation, because AI-generated content may still have room for improvement, but it's effective and here to stay.

Detecting Deepfakes: 

Fake videos and images have been an issue for some time, with applications like Photoshop making it possible to create images showing people in locations they have never been to, doing things they’ve never done, with people they’ve never met. You might recall the scandal earlier this year relating to the member of a famous family who had released a photo that never was to the media.


Much more difficult to fake without detection in the past have been videos and voice recordings. However, the quality of both deepfake videos and fake voice recordings has improved dramatically. We’ve seen the impact of deepfakes in the political arena. Late last year a deepfake video of Olaf Scholz, the current German Chancellor (German head of government), caused waves. While the fake Olaf Scholz took a stance against the striking rise of right-wing extremism in Germany, the video wasn’t real, and some of the statements made by the fake Olaf Scholz may be constitutionally questionable. Another deepfake caused waves around the United States primary elections last year. In that case, a voice eerily mirroring that of Joe Biden, the current president of the United States, was used to robocall thousands of voters who were more likely to vote for his party and discourage them from doing so.


So deepfakes are not only coming at us through our social media channels, where the companies behind the most successful platforms seem to avoid the responsible path; they will come at us via every single channel we use to consume media and information. So when politicians promise to provide us with evidence of vote rigging in the form of videos and recordings - as is currently the case following the South African elections - we need to learn to take a beat (pause), take a step back and do what we can to verify the veracity of the materials presented to us.


Detecting deepfakes generated with the help of AI is hard. However, the reality of lifelong learning is that it is lifelong, so we need to keep on learning. That said, there are steps you can take to minimize the risk of being taken in:

  1. Video Quality: Ensure that the video you’re looking at is good quality, with a high resolution. This way, you’ll place yourself in a better position to pick up misalignments or discrepancies, e.g. unnatural differences in lighting, contours that break where they shouldn’t, or even unnaturally perfect levels of alignment (e.g. perfectly aligned ears, eyes and brows). If the quality of the video is not good, find a different source; if there is no better-quality source, that in itself may be a sign that you’re dealing with a deepfake;
  2. Unnatural Blurring: AI-generated deepfakes still struggle with fine image resolution, especially where there is movement, so you’re more likely to find unexplained blurring in a deepfake video. Generating a high-quality deepfake still requires the investment of extensive computing and financial resources as well as expert time. While bad actors are probably willing to invest those resources, blurry video could make for a great pre-filtering step;
  3. Distorted Emotional Expression: Another tell is facial expressions. Humans have facial expressions that align with their emotional states and tend to be difficult to control (that's why we're not all poker champions). Aligning facial expressions with the content of the video still presents a challenge for AI-generated deepfakes - does the anger vein you see, or don’t see, align with what the speaker is saying in the video? Is the blinking what you would expect from a fellow human? How about other forms of emotional expression like happiness, sadness or excitement - do the facial expressions and emotions match? This is important because a lot of deepfake video generation software leverages existing video material as input, and that input material may not align with the messaging in the fake video;
  4. Slowing it Down: Slowing the video down improves your ability to detect discrepancies and distortions. We’re fortunate enough to have video players these days that allow us to slow down the speed of replay. Examples of things you might pick up in a slowed-down video include shadows, or the absence thereof, that don’t make sense considering the content of the video, or a misalignment between lip movements and the words being spoken.
  5. Triangulation: This tip is old-school journalism and is applicable irrespective of the type of media you’re looking at or the type of information you’re looking to verify. The first time I heard about triangulation was from one of my English teachers in high school, Mrs Lang. She didn’t use the term, but she lovingly told us about her Sunday mornings with her husband and the Sunday papers. What seemed to me like a waste of money - buying many Sunday papers instead of one - would soon make sense. She told us that she and her husband went through all the Sunday papers, often reading the same story from different perspectives, to form a more enlightened opinion. Triangulation in this context means that you seek out different sources to see if they have also reported on the content of the video or voice recording. This is especially important if you’re consuming media that causes you to experience strong emotions, e.g. anger, resentment or extreme excitement. It’s also critical to do this if you’re consuming media that presents a famous figure, e.g. a politician, taking a position that is contrary to what they typically stand for or that contradicts their interests. This was, for example, the case when Joe Biden supposedly called people asking them not to waste their time voting for his party during one of the primary elections in the United States last year. One of the key successes of disinformation campaigns lies in how those sources present themselves as the only reliable and trustworthy source of information. Triangulation helps us break out of those chains.
  6. The Source: Now, as wonderful as the people in the church (replace with any religious institution) WhatsApp group, or those in your friendship circle, political circle and/or family are, they do not represent credible sources. I’ve personally objected to a number of messages and posts shared in family groups making false claims about health, politics and other topics. These messages and posts are amazingly never sourced and tend to have been forwarded many times. Some of them even enjoy multiple reincarnations in the same group (didn't we debunk that last year?). We all have work to do when it comes to helping ourselves and/or people we know improve their digital media literacy. The source is important because in most countries journalists, news media houses and academics are required to follow stringent processes for validating and verifying the information they present to us. Journalists who work for reputable news media houses are expected to triangulate the information they present to the public, to mitigate against running fake news, ruining their reputation or being taken to task by the public. One has to be careful here, because the rise of clickbait journalism has diluted the veracity of some news sources. However, combining multiple sources can help you land somewhere more reasonable than bad actors may intend.
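To make the blurring tell in tip 2 concrete, here is a minimal, illustrative sketch (pure Python, my own toy construction - not an established detection tool) of the classic variance-of-Laplacian sharpness score: sharp frames produce a high-variance edge response, while blurry or flat frames score low. The threshold is an assumption that would need tuning, and a real pipeline would run something like OpenCV over actual decoded video frames.

```python
def laplacian_variance(gray):
    """Sharpness score for a frame given as a 2-D list of pixel
    intensities: the variance of a 4-neighbour Laplacian response.
    Low values suggest the frame is blurry or lacks detail."""
    vals = []
    for r in range(1, len(gray) - 1):
        for c in range(1, len(gray[0]) - 1):
            vals.append(
                -4 * gray[r][c]
                + gray[r - 1][c] + gray[r + 1][c]
                + gray[r][c - 1] + gray[r][c + 1]
            )
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def flag_blurry_frames(frames, threshold=50.0):
    """Indices of frames scoring below the (assumed) threshold."""
    return [i for i, f in enumerate(frames) if laplacian_variance(f) < threshold]

# Toy demo: a high-contrast checkerboard (sharp) vs. a flat grey frame.
sharp = [[(r + c) % 2 * 255 for c in range(16)] for r in range(16)]
flat = [[128] * 16 for _ in range(16)]
print(flag_blurry_frames([sharp, flat]))  # → [1]
```

Scores like this are only a pre-filter, as the text says: a blurry frame is a reason to look closer and find a better source, not proof of fakery.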

There’s a lot of technology work focused on detecting deepfakes. While we’re not where we want to be yet, tools such as those described in the Techopedia article referenced below can be very useful. We can also be hopeful when we look at the amount of collaboration seen from the many different companies and organisations that have joined initiatives focused on addressing the issues around deepfakes, e.g. the Content Authenticity Initiative as well as the Coalition for Content Provenance and Authenticity.

However, we need to train ourselves - using the above tips - to more frequently stop and verify the content and media presented to us. It is harder than consuming media passively, but we need to increase our current dosage of healthy skepticism because the sustainability of our world literally depends on it. 
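The triangulation habit in tip 5 can even be roughed out in code. The sketch below is purely illustrative - the function names and the three-source threshold are my assumptions, not an established method. It collapses article URLs to their host so that mirrors or duplicates of one outlet count only once, mimicking Mrs Lang's many Sunday papers.

```python
from urllib.parse import urlparse

def independent_sources(urls):
    """Map URLs to their host (minus a leading 'www.') so that
    several links to the same outlet count as one source."""
    return {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}

def is_triangulated(urls, minimum=3):
    """Treat a claim as corroborated only if at least `minimum`
    distinct outlets report it (the threshold is an assumption)."""
    return len(independent_sources(urls)) >= minimum

# Two of these three links resolve to the same (hypothetical) outlet,
# so the claim is not yet corroborated by three independent sources.
print(is_triangulated([
    "https://www.example-news.com/story",
    "https://example-news.com/story-amp",
    "https://other-outlet.org/report",
]))  # → False
```

Of course, counting hosts says nothing about the quality or independence of ownership of those outlets; that judgment remains a human one.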


Every news media house or organisation that deals with information, especially those trusted by the public as a source of credible and verified information, should have a team that combines deepfake expertise with the available technology and keeps a razor-sharp focus on mitigating the dark side of deepfakes and AI-generated content.

Resources: 

2024 is a record year for elections. Here’s what you need to know: https://www.weforum.org/agenda/2023/12/2024-elections-around-world/

The Content Authenticity Initiative: https://contentauthenticity.org/

The Coalition for Content Provenance and Authenticity: https://c2pa.org

Deepfakes: Ist das echt? https://www.bundesregierung.de/breg-de/schwerpunkte/umgang-mit-desinformation/deep-fakes-1876736

Deepfake-Scholz verkündet AfD-Verbot: https://www.zdf.de/nachrichten/politik/aktion-gefaengnis-afd-verbot-100.html

Scholz-Deepfake: Sind KI-Fälschungen verboten? https://www.br.de/nachrichten/netzwelt/scholz-deepfake-sind-ki-faelschungen-verboten,TwzZ6nE

Meta Oversight Board Warns of ‘Incoherent’ Rules After Fake Biden Video: https://time.com/6686574/meta-oversight-board-biden-video/

The Biden Deepfake Robocall Is Only the Beginning: https://www.wired.com/story/biden-robocall-deepfake-danger/

Deepfakes are being used for good – here’s how: https://theconversation.com/deepfakes-are-being-used-for-good-heres-how-193170

'Deepfake is the future of content creation‘: https://www.bbc.com/news/business-56278411

Synthesia AI Video Generator: https://www.synthesia.io

7 Best AI Deepfake Detector Tools For 2024: https://www.techopedia.com/best-ai-deepfake-detectors

© COPYRIGHT 2025. ALL RIGHTS RESERVED.