
The Imperative of Collective Action Against the Threat of Deepfakes


Amy S. Mitchell / Mar 28, 2024

In 2023, the number of deepfakes increased tenfold over the prior year, flooding the internet with synthetic media content often designed to mislead, if not deceive, people. Research indicates that these AI-generated depictions of people and events are increasingly difficult to detect and counter.

Something must be done to safeguard a well-informed public. But technology moves much faster than legislation, and freedom of speech must be protected - making defense against deepfakes a complex challenge that requires a collaborative response from governments, researchers, the tech industry, and journalists.

First, how do we even define the term "deepfake"? Not all AI-generated depictions of real people are malicious, so intent and deceptiveness must factor into any formal definition. A manufactured video of Ukrainian President Volodymyr Zelenskyy ordering his country to surrender to Russia is clearly a malicious deepfake. But an AI-generated video of soccer star David Beckham speaking nine languages in an effort to combat malaria is far from malicious.

While that may be a manageable problem, the issue of deepfake distribution is far more complicated. The general public is exposed to a barrage of synthetic content on social media, but journalists - who are expected to provide clarity and weed out falsehoods - also lack the tools necessary to keep up with the people creating deepfakes. Without technical support, which is expensive in the United States and other Western countries but unaffordable in other parts of the world, the public will always be at a disadvantage.

What can be done?

Some recent efforts have focused on training the public. Research suggests people can be trained to better detect deepfakes, to base sharing decisions on the accuracy of information, and to differentiate between low-cost and high-cost deepfakes. For instance, research shows that when consumers realize how inexpensive deepfakes can be to produce, they are less likely to believe and share fake news. Various media literacy efforts also show promise, and there is some evidence that fact-checking can help dispel false information.

But those approaches are far from foolproof, particularly if they are not comprehensive. Findings suggest, for example, that tagging information as false, while beneficial, has side effects for authentic information: broad, general disclosures about false information can cause viewers to discount true, accurate news, and false information that is not tagged is interpreted as more accurate than false information that has been tagged - an "implied truth effect." Labels declaring the origin of content or the use of AI to alter it also seem like a positive step forward, providing transparency and helping dispel disinformation, but they too have drawbacks. In one recent study, for example, consumers considered labeled AI-generated content less trustworthy, even when the content itself was not shown to be less accurate.

While the media and the technology industry can - and should - continue to work on those fronts, there are steps for policymakers to take as well. Any legislative or regulatory process must begin with a determination of what exactly governments are trying to ban or restrict. Do they want to eliminate all AI-generated deepfake content, limit what can be used in deepfakes, and/or set labeling requirements? A handful of countries have implemented - or begun discussing - deepfake regulations. For example, Australia is working to prevent the production of deepfakes that depict child sexual abuse, and China requires that deepfake creators obtain consent from the people depicted and that the work contain watermarks. Labeling is also being pursued in the European Union and in a number of US states. A new bill put forward last week in the US Congress, the Protecting Consumers From Deceptive AI Act, would require the development of standards and oblige AI developers and online content platforms to disclose AI-generated content.
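To make the idea of machine-readable labeling concrete, here is a minimal sketch in Python of attaching an AI-generation disclosure to an image. It assumes the Pillow imaging library and uses PNG text chunks as the carrier; the key names ("ai-generated", "ai-generator") are hypothetical, not drawn from any statute or standard. Provenance schemes such as C2PA's Content Credentials instead use cryptographically signed manifests, which this sketch does not attempt.

```python
# A minimal sketch of machine-readable AI-content labeling.
# Assumes Pillow is installed; the disclosure key names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, embedding a plain-text AI-generation disclosure."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")    # hypothetical disclosure key
    metadata.add_text("ai-generator", generator)
    image.save(dst_path, pnginfo=metadata)       # dst_path must end in .png


def read_disclosure(path: str) -> dict:
    """Return any disclosure keys found; absence proves nothing."""
    image = Image.open(path)  # PNG only: .text holds the file's text chunks
    return {k: v for k, v in image.text.items() if k.startswith("ai-")}
```

The obvious weakness is that such metadata is trivially stripped by re-encoding or screenshotting, which is one reason the research above stresses comprehensive labeling, and why proposals like the bill described here place disclosure obligations on both AI developers and the platforms that host the content, rather than relying on creator-side marks alone.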

But given the speed at which technology advances, some experts recommend that policymakers focus on protecting rights rather than on the technologies themselves. For example, the Center on Technology Policy at the University of North Carolina recommends that Congress and the states ensure voting and civil rights are protected and that media literacy and factual information be promoted - instead of trying to keep AI out of political advertising.

These are really tough issues with no simple solution. This makes it all the more important that journalists, developers, policymakers, and researchers work together, using research and evidence to inform their thinking. The Center for News, Technology and Innovation has tried to bring some of these resources together in our Synthetic Media & Deepfakes primer, which gathers global research on deepfakes, notes gaps in our collective knowledge, analyzes the state of policymaking around the world, and identifies organizations and experts who are actively working on these issues. Finding tenable paths forward that protect an independent news media and the public's open access to news takes all of us.
