A deep fake is a sophisticated digital forgery of an image, sound or video, enabled by artificial intelligence (AI).
Such forgeries are so convincing that the human eye is unlikely to detect that the content has been manipulated. The goal of a deep fake, generally, is to mislead and deceive, making it appear as though a person has said or done something when, in fact, that is not the case.
Altering videos and creating fake content is nothing new. Many people and entities, from governments to common criminals, have used misinformation campaigns for political, social or personal gain for quite some time.
However, what once took skilled engineers significant time to create can now be done rather cheaply, quickly and much more convincingly.
Supported by advances in artificial intelligence, deep fakes have proliferated across the internet, as the technology has become less expensive and more accessible. There are now apps and websites dedicated to creating fake material, bringing layers of artificial neural networks or “deep learning” to amateurs.
Why Does It Matter?
With the barrier to entry for deep fake technology now so low, there exists a real possibility that information can be manipulated for nefarious purposes and truths obscured. A ubiquity of false information has the potential to profoundly affect democratic institutions by eroding public trust, destabilising free markets and compromising national security.
Businesses have always strived to protect the “CIA” triad of information security – confidentiality, integrity and availability. But while corporate cyber defenders are battle-tested against data confidentiality and availability threats, they are only now seeing the true potential of data integrity risks. As a result, businesses may not be fully prepared to respond to this sleeping giant, which can have a lasting impact on organisations and executives.
Spurred by financial gain, geopolitical influence, or social causes, online actors have already targeted businesses with false information campaigns. The worst, however, may be yet to come. Instead of using social media to make false claims about a company, an already common tactic, what if a malicious actor uses deep fake technology to secretly change content on a company website or release a manipulated public report just before a filing deadline? The mere threat of such an action could throw a company into chaos and send stock prices into a free fall.
Would your company be ready to respond to a fake video of an executive committing an illegal act or altered audio of someone saying something offensive or inaccurate? The possibilities are endless.
As the onslaught of AI-enabled forgeries becomes a reality, casting a shadow over the old adage “seeing is believing,” businesses must continue to build resilience and take a comprehensive approach to addressing new data integrity threats.