Protecting Against Deepfakes Without Killing Innovation

Introduction

This is the first in a two-part series on deepfakes.  To begin, we will analyze the technological developments behind deepfakes and discuss strategies for handling deepfakes (and cheapfakes) at an organizational level.

By now, most people have had some exposure to deepfakes – that is, hyper-realistic audio, video, or images that have been altered using deep-learning algorithms to make a subject appear to do or say something that they did not.  Deepfakes first received widespread attention in July 2017, when researchers at the University of Washington used artificial intelligence (AI) to synthesize a realistic video of President Obama, convincingly lip-syncing footage of him to words from an audio track.  Since then, there has been a deluge of deepfakes and “cheapfakes,” which rely upon manipulation through more conventional editing techniques.

Deepfakes and cheapfakes are widely considered a special threat to the political landscape, because hostile actors could sway a close election by flooding social media platforms with fabricated content.  Recall, for instance, the slowed video of House Speaker Nancy Pelosi appearing to slur her speech.  (Notably, the video of Speaker Pelosi was a cheapfake, not a deepfake.  However, the risk is similar.)  Fake, scandalous content besmirching powerful politicians is, in fact, just the tip of the iceberg.  Deepfake technology has expanded beyond politics and into the realm of fraud – last year, criminals used AI voice impersonation to mimic the CEO of a parent company on the telephone, instructing the subsidiary’s CEO to wire $243,000 to the criminals’ bank account.

As such, this is a watershed moment in the timeline of AI development, and it has not gone unrecognized.  The European Union is expected to pass regulation related to AI development by year’s end, and California and Texas have already passed laws targeting deepfakes created for political purposes in advance of the 2020 election.  In parallel, technological developments continue to accelerate toward a future in which deepfakes will be even easier to produce, harder to detect, and even more prevalent on the internet.  Despite the moves already being made to thwart the influence of this fabricated media, we must keep tabs on deepfake-enabling technology as it continues to develop.  All organizations should track advances in detection and consider other defensive measures as deepfakes become an increasing threat to security and privacy.

Benefits of Deepfakes

The dangers of deepfakes are already hauntingly apparent.  Given just how much is available on the internet, one can easily imagine a world in which AI-generated political propaganda pervades the web and even fools major news outlets, ruining the reputations of prominent figures and ordinary citizens alike.  Much of this fake content targets specific individuals: one study analyzing over 14,000 deepfake videos online found that 96% were pornographic, overwhelmingly targeting and harming women.  As alarming as those numbers are, deepfakes do still have legitimate social and commercial applications.

Deepfake technology can be used to create a voice font, allowing those diagnosed with ALS or other disabilities affecting speech to continue speaking with their own voices even after they can no longer speak normally (or at all).  Deepfake technology can also be used to alter actors’ mouth movements when a movie is dubbed into a foreign language, making for a better viewing experience.  What’s more, deepfakes aren’t just capable of changing already-established media – imagine that an author dies midway through writing the final book in a series.  Deepfake technology may one day be used to complete the written work in a near-perfect facsimile of his or her authorial voice.

So yes, while the hottest topics surrounding deepfakes may be the more nefarious ones, the potential benefits of the tech are manifold.  All this is simply to illustrate that, as a society, we don’t necessarily want to preclude companies or others with legitimate reasons from producing deepfake-enabling technologies.  The challenge, of course, remains in striking an effective balance to foster legitimate developments while safeguarding against the many harmful applications.

A Technological Arms Race

Because the developments keep coming, it is worth taking a brief, high-level look at how deepfake technologies have evolved over the past few years, along with the forensic technologies that have evolved in response.  It’s an arms race, of sorts, in which the detectives struggle to keep any advantage over malicious actors.

Let’s begin with generative adversarial networks (GANs), which have become a popular technology for producing convincing deepfakes involving images and video, especially as the tech itself has evolved.  GANs pit at least two neural networks against each other – a generator that produces content and a discriminator that classifies examples as real or fake.  The generator produces altered content, and the discriminator analyzes both altered and unaltered content, marking each as fake or not.  The generator is thereby trained until it fools the discriminator as often as practically possible – once the discriminator is right only about 50% of the time, it is doing no better than a coin flip at spotting fakes.  GANs make the production of deepfake content both easier and more realistic.  (A simplified sketch of this training dynamic follows below.)

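To make the generator/discriminator dynamic concrete, below is a minimal, illustrative sketch of adversarial training in PyTorch.  The toy vector “data,” tiny network sizes, and hyperparameters are all assumptions for demonstration purposes – production deepfake models use far larger convolutional architectures trained on real images.

```python
# A minimal sketch of the generator/discriminator training loop, assuming
# toy vector data in place of real images. Sizes and hyperparameters are
# illustrative only.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32

# Generator: maps random noise to fabricated samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring each sample as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)  # stand-in for a batch of real data
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Train the discriminator to separate real from fake content.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator (labels flipped to "real").
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```
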
But as the production of deepfakes has risen, so too have techniques to spot altered content.  One such technique detects deepfake videos by analyzing blinking patterns.  However, as the fabricated media has become increasingly sophisticated, this technique has grown less and less reliable.  For a more bespoke approach, researchers at UC Berkeley devised a new detection method that creates a “fingerprint” of sorts.  An AI system is fed hours of video of the potential deepfake target – e.g., a high-level politician – and analyzes that person’s facial movements, expressions, etc.  The trained model can then spot incongruities with the fingerprint in any further videos released, identifying potential deepfakes.  Unlike the increasingly obsolete blink analysis, this technique is highly effective at detecting fabricated media, but it is also time-consuming to create – a model must be built for each potential target, making the AI fingerprint inapplicable for broader, everyday use.  (A rough sketch of the fingerprint idea follows below.)

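As a rough illustration of the fingerprint approach, the sketch below fits a one-class model to features extracted from verified footage of a subject and flags outliers in new videos.  The extract_movement_features() function is a hypothetical stand-in for a real facial-landmark and expression pipeline, and the one-class SVM is one plausible model choice, not necessarily the researchers’ exact method.

```python
# A rough sketch of the "fingerprint" idea: learn what a subject's genuine
# facial-movement features look like, then flag new videos that fall outside
# that profile. extract_movement_features() is a hypothetical stand-in for a
# real landmark/expression pipeline; the one-class SVM is an assumed model.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_movement_features(clip_path: str) -> np.ndarray:
    """Hypothetical: return a fixed-length vector summarizing head pose,
    expressions, and mannerisms for one clip (random placeholder here)."""
    rng = np.random.default_rng(abs(hash(clip_path)) % (2**32))
    return rng.normal(size=20)

# Fit the model on hours of verified footage of the real subject.
genuine_clips = [f"verified_clip_{i}.mp4" for i in range(200)]
X_train = np.stack([extract_movement_features(c) for c in genuine_clips])
fingerprint = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)

# Score a newly released video: -1 means it is inconsistent with the
# learned fingerprint and warrants review as a possible deepfake.
suspect = extract_movement_features("newly_released_video.mp4").reshape(1, -1)
if fingerprint.predict(suspect)[0] == -1:
    print("Inconsistent with the subject's fingerprint -- flag for review")
```
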
As such, many companies have begun to support development of other forensic techniques, including Facebook, which is notably spearheading the Deepfake Detection Challenge – a program that will hopefully produce technology everyone can use to better detect AI-altered video content.  To foster improvement of such technologies, participants in the challenge are given access to a training dataset of both real and tampered videos that have been labeled.  (A sketch of how such labeled data could train a detector follows below.)

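To illustrate how such a labeled dataset might be used, here is a simplified sketch of training a binary real-vs-tampered classifier.  The file names, the clip_features() stub, and the small network are illustrative assumptions only – actual challenge entries operate on raw video with much deeper models.

```python
# A simplified sketch of supervised detection using labeled challenge-style
# data: clips labeled real (0) or tampered (1) train a binary classifier.
# File names, clip_features(), and the tiny network are assumptions.
import torch
import torch.nn as nn

def clip_features(path: str) -> torch.Tensor:
    """Hypothetical per-clip feature extractor (e.g., pooled frame
    embeddings from a face-crop CNN); random placeholder here."""
    g = torch.Generator().manual_seed(abs(hash(path)) % (2**31))
    return torch.randn(128, generator=g)

# Labeled training data, as the challenge dataset provides.
labeled = ([(f"real_{i}.mp4", 0.0) for i in range(100)]
           + [(f"fake_{i}.mp4", 1.0) for i in range(100)])
X = torch.stack([clip_features(path) for path, _ in labeled])
y = torch.tensor([label for _, label in labeled])

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

# At inference time, a high score flags a clip as likely tampered.
score = torch.sigmoid(model(clip_features("suspect.mp4").unsqueeze(0)))
print(f"Probability tampered: {score.item():.2f}")
```
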
Given all this, it’s clear that the arms race between deepfake content creators and the detectors will not end anytime soon.  It’s likely that we will see social media platforms and other content hosting environments soon implement detection technologies and teams of analysts to identify, flag, and/or otherwise remediate deepfakes.  Nevertheless, these measures may not be sufficient or applicable across the board.  Remember, in at least one instance, a deepfake voice was used to commit fraud over the phone.

Measures Organizations Should Consider

Even though the technical tools are still being created and refined, other precautions can still be taken against deepfakes today.  In fact, because politicians are favorite targets of deepfake distributors, scholar Danielle Citron has coauthored specific guidance on how political campaigns may deal with deepfakes.

However, with some adaptations, her instructions can also apply more broadly to other, non-political organizations that may be affected by deepfakes and cheapfakes.  The main point is that, just as companies usually prepare clear protocols and remediation steps to deal with breaches or natural disasters, they should do the same for deepfakes.  Some specific recommendations include:

  • Assess How Deepfakes Could Pose a Threat to Your Organization.
    Many companies already conduct risk assessments in other areas, and deepfake analysis can easily be rolled in as well.  For instance, does your organization conduct a risk assessment for all new processes before funding is issued?  Consider updating that existing evaluation to account for the potential impact of deepfakes.  There might also be reason to conduct periodic assessments at the enterprise level of the harm deepfakes might pose – e.g., through the proliferation of false content on social media.  This broader assessment could similarly become part of existing assessments (such as those already analyzing the risks posed by a potential data breach).  Evaluating the nature of the risk should be an organization’s first step, because that will inform how further protocols are devised to deal with such potential threats.

  • Include Awareness of Deepfake-Related Threats in Security and Privacy Training.
    Technological advances can help filter out or flag deepfake content, but the real harm of deepfakes lies in the belief and/or spread of misinformation.  Prepared personnel can be just as effective as any technological solution, if not more so.  Training for employees should have two focal points.  First, employees should be given tools and tips to spot deepfakes and cheapfakes.  Things to look for include anomalies in the appearance of the content, such as unnatural head movements by a person in a video.  Second, employees should be aware that deepfake content typically tries to steer the recipient to do something outside the normal course of business, and they should be instructed to question such requests – e.g., is someone asking via phone or video call to move funds in a non-approved manner, or to issue a press release without going through proper approvals?

  • Consider Technical and Administrative Measures to Detect Deepfake Content.
    These will, of course, vary based on the nature of the threat to the organization and whether the biggest deepfake-related threats would arise within or outside the organization’s ecosystem.  For instance, if the most likely threat is the spread of reputation-ruining images or video on various content-hosting platforms, the organization may be reliant, in part, on measures implemented by those outside hosting platforms, and it should be familiar with the steps those platforms have taken against deepfakes.  Moreover, organizations should consider reliable procedures to prevent fraud, particularly where funds are concerned.  For instance, it should be a matter of policy that money transfers are never authorized via phone alone – instead, there should be a required document trail verifying the identity and authority of the requestor and approver.

  • Treat Deepfake Remediation Like a Breach or Disaster.
    Of course, the specific remediation measures will differ from those for actual disasters or breaches.  Nevertheless, organizations should designate a team to manage the incident and keep this team briefed on key trends and threats.  It’s also recommended to establish relationships with officials who can be helpful during an incident, and to test incident response procedures via tabletop exercises.  If the posting of altered content on an outside platform poses a threat, organizations should assign responsibility for knowing the platforms’ policies and procedures with respect to such content, including points of contact to coordinate remediation.

More so than ever before, deepfakes are an emerging threat area for all organizations.  Although they’ve primarily cropped up in the field of politics to this point, technological developments – including text and voice fakes – suggest that their reach will expand, with the technology becoming a tool for fraudsters and other malicious actors to steal from or damage organizations in other industries.  The above considerations are by no means a comprehensive list of preparatory and remedial measures, but merely a starting point as this emerging threat comes increasingly to the forefront.

** Stay tuned for the next post, which will use example deepfakes and technologies as a case study under existing laws.  Specifically, the post will examine how the GDPR and CCPA treat the design of deepfake technology.  It will also examine deepfake-specific laws, such as those recently passed in California and Texas, and close by discussing emerging trends among governments seeking to regulate the development of AI technologies.

About Adam Adler

Adam Adler is a Senior Manager of Privacy & Security Compliance at Schellman. Prior to joining Schellman, Adam was a data privacy consultant focusing on privacy program development for existing and emerging privacy laws, including the GDPR and CCPA. An attorney by training, Adam leverages his legal background to educate clients about the practical implications of emerging laws.