
Delusion or Reality? How Artificial Intelligences Abuse Our Trust


Do you still remember Barack Obama’s words: “President Trump is a total and complete dipshit!”? Quite provocative; such a statement is hardly what one expects from a former US president. But did he really say it? Of course not. The video is a so-called deepfake, created by Jordan Peele to show how dangerous such a fake can be.1 But let’s delve a little deeper into the matter.

Deepfake is a neologism made up of “deep learning” and “fake”. It describes a method of manipulating images, videos or audio (with the help of artificial intelligence) in such a way that the human eye or ear can barely detect the manipulation. But what exactly is the purpose of a deepfake, and how is it generated in the first place?

To create a deepfake, so-called neural networks are used. These networks act similarly to the human brain and, given a large enough data set, can predict what other data of the same type might look like. Therefore, if you feed these networks with enough images, videos and audio content, they get better and better and create higher quality manipulations.

One highly effective architecture is the GAN, first described in a scientific paper by Ian Goodfellow in 2014. Over the years, researchers have continued to expand these networks and combine them with one another, and as a result the forgeries have become more convincing and harder to detect. But first, let’s define what a GAN is.

A GAN (short for Generative Adversarial Network) consists of two algorithms. One algorithm forges an image (the forger), while the other tries to detect the forgery (the investigator). Whenever the investigator succeeds in identifying a forgery, the forger learns from it and steadily improves. This adversarial training is an application of deep learning.
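The forger/investigator loop can be sketched in miniature. The following Python toy (an illustration only, not how real deepfake tools are built) pits a one-parameter “forger” against a logistic-regression “investigator” on one-dimensional data; the data distribution, model sizes and learning rate are all invented for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples centred around 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

theta = 0.0        # forger (generator): learns an offset added to noise
a, b = 0.1, 0.0    # investigator (discriminator): D(x) = sigmoid(a*x + b)

lr = 0.05
for step in range(2000):
    real = real_batch(32)
    fake = theta + rng.normal(0.0, 0.5, 32)

    # Investigator step: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    grad_a = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    b -= lr * grad_b

    # Forger step: adjust theta so the investigator mistakes fakes for real.
    d_fake = sigmoid(a * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * a)
    theta -= lr * grad_theta

print(f"forger's offset after training: {theta:.2f} (real data centred at 4.0)")
```

After training, the forger’s output distribution has drifted towards the real data: each time the investigator catches a fake, the gradient step moves the forger closer to what the investigator accepts as genuine, which is the same feedback loop that drives deepfake quality upwards.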

What types of deepfakes are there?

The first and probably most widespread type is the exchange of faces in pictures or videos, so-called face-swapping. Here, the heads of famous people are usually taken and placed in a different context.

A similar method is voice swapping. As the name suggests, voices or other audio content are manipulated to sound like a specific person. This can be combined with manipulation of facial expressions so that the spoken words match the lip and facial movements.

Finally, there is body puppetry. Here, body movements are analysed and can even be imitated in real-time.

Why are deepfakes so dangerous?

Since its emergence in 2014, the technology has been steadily expanded and improved. By 2017, it had reached the point where the first convincing videos could be produced. Internet users soon exploited deepfakes to create manipulated pornographic content, first shared on the platform Reddit. These videos depicted celebrities in compromising poses. According to a study by Sensity (then known as Deeptrace), 96% of all deepfake videos in 2019 were pornographic and exclusively targeted women.2

“The development of full artificial intelligence could spell the end of the human race.”
– Stephen Hawking

With time and the continuous refinement of the deep learning process, more and more YouTube channels dedicated to such fakes appeared. Fakes of politicians, actors and other public figures began to see the light of day. From 2018 to 2020, the number of fake videos doubled roughly every six months, reaching more than 85,000 in December 2020.3

Hao Li, a deepfake expert, has warned that we will soon no longer be able to identify deepfakes as fakes. The problem, however, is not the technology itself but the lack of reliable means of recognising these fakes. “Deepfakes will be perfect in two to three years,” Li said.4

This warning is borne out by a detection competition initiated by Facebook AI in 2019. The group developed a dataset of 124,000 videos, generated with 8 face-modification algorithms, along with associated research papers. Yet even the best competitors achieved a detection rate of only just over 65%.

“This outcome reinforces the importance of learning to generalize to unforeseen examples when addressing the challenges of deepfake detection,” a Facebook AI spokesperson explained.5

Examples of the misuse of deepfakes

The extent of the damage that deepfakes can cause is illustrated, among other examples, by a case in Gabon in 2018. President Ali Bongo, who had not been seen in public for a long time and was thought by some to be dead, published a video of a speech. Political opponents dubbed the video a deepfake, triggering an attempted military coup.

TW: Violence against children/youth
Another frightening case concerns X González, a prominent advocate for tougher gun laws in the US. González is a survivor of the Parkland school massacre and gained international recognition for an emotional speech at a memorial service following the event. Opponents of further gun legislation defamed González with a video that depicted her tearing up the American Constitution. In the original video, she tears up a target.

A video produced about the US Democrat Nancy Pelosi demonstrates the potential of voice swapping. Supporters of Trump, and thus opponents of the Speaker of the House of Representatives, edited a video to make her appear drunk and confused. The fake was viewed millions of times, even though Nancy Pelosi does not drink alcohol.

TW: Sexualised violence
The next scandal concerned Rana Ayyub. The Indian journalist had accused the nationalist BJP party of defending child abusers. In response, and in an attempt to undermine her credibility, her critics produced a fake pornographic film of her.

What apps are available to create deepfakes?

DeepFaceLab: Probably the best-known open-source application is DeepFaceLab. According to the app’s developers, 95% of all deepfake videos are generated with DeepFaceLab. The app makes it possible to swap faces or entire heads, modify a person’s age or adjust the lip movements of strangers. DeepFaceLab is available for Windows and Linux.

Zao: Unlike DeepFaceLab, Zao is an app for smartphones. Originating from China, the extremely popular application creates deepfake videos in seconds and is geared towards entertainment purposes. So far, however, the app is only available in China (or for those with a Chinese phone number) on Android and iOS. Recently, the app has been criticised for its questionable privacy policy. Users relinquish all rights to their own images and videos when they use the app.

FaceApp: The application gained increasing popularity in 2019. It offers numerous functions such as rejuvenation or ageing, adding beards, make-up, tattoos, hairstyles or even the ability to change one’s gender. However, just like Zao, FaceApp has also been the subject of much criticism for its privacy policies. Here also, the rights to one’s own image and video are ceded. The app is available for Android and iOS.

Avatarify: Finally, there is Avatarify. With this application, users can create live deepfakes in video chats. The technology recreates facial movements such as eye blinks and mouth movements in real time and thus achieves extremely realistic imitations. The setup, however, is not trivial: you need a powerful graphics card and additional tools to complete the installation. It is available for Windows, Mac and Linux, or in a slimmed-down form for iOS.

How do I recognise a deepfake?

Unmasking a deepfake is not always an easy task. First, always check the context of the video or image and consider whether the context makes sense. The FBI has also published a list6 in which it highlights characteristics of deepfakes. This list includes, but is not limited to:

  • Visual indicators such as distortions, deformations or inconsistencies
  • Unusual spacing or placement of the eyes
  • Unnatural head and body movements
  • Synchronisation problems between facial and lip movements and the associated audio
  • Visual distortions, usually around the pupils and earlobes
  • Indistinct or blurred backgrounds
  • Visual artifacts in the image or video
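As a rough illustration, indicators like these could be collected into a simple triage checklist that flags suspicious material for closer inspection. The indicator names, weights and threshold below are invented for demonstration; real deepfake detection requires specialised tooling, not a checklist score:

```python
# Illustrative triage checklist based on deepfake indicators such as those
# the FBI lists. Names, weights and the threshold are invented examples.
INDICATORS = {
    "visual_distortions": 2,
    "unusual_eye_spacing": 2,
    "unnatural_movements": 1,
    "lip_sync_problems": 3,
    "pupil_or_earlobe_artifacts": 2,
    "blurred_background": 1,
}

def triage(observed):
    """Sum the weights of the observed indicators and flag the clip for
    manual review if the total crosses an (arbitrary) threshold."""
    score = sum(INDICATORS[name] for name in observed)
    return score, score >= 4

score, needs_review = triage(["lip_sync_problems", "blurred_background"])
print(score, needs_review)  # prints "4 True"
```

A weighted sum is used rather than a simple count because some indicators (such as lip-sync problems) are stronger signals than others; no single indicator should be treated as proof either way.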

The DeepFake-o-meter can also be used to analyse and debunk video files.

But are deepfakes only negative?

Deepfakes are not exclusively negative. One positive use can be seen in the film world. For example, Luke Skywalker was artificially recreated with deepfake technology in the series The Mandalorian. Disney is also planning further deepfake productions using its megapixel deepfake technology, which in the future could even make it possible to feature actors who have already died.

Progress is also being made in the area of e-training. The software company Synthesia has developed an AI that generates videos from text. The videos contain artificially created people who can reproduce the desired content. In Synthesia’s case, this technology is used to create e-learning courses, presentations, personalised videos or chatbots.

Another example of the innovative use of deepfake technology is demonstrated by a research team from Moscow. They have managed to breathe life into the Mona Lisa. You can marvel at the moving oil painting on YouTube.

1 https://www.youtube.com/watch?v=cQ54GDm1eL0
2 https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
3 https://sensity.ai/how-to-detect-a-deepfake/
4 https://www.cnbc.com/2019/09/20/hao-li-perfectly-real-deepfakes-will-arrive-in-6-months-to-a-year.html
5 https://ai.facebook.com/datasets/dfdc
6 https://www.ic3.gov/Media/News/2021/210310-2.pdf
