< Back to 68k.news IN front page

Preventing AI-assisted deepfake fraud is becoming a major commercial industry | Biometric Update

Original source (on modern site) | Article images: [1]

Deepfakes have become the latest technically challenging threat for the biometric security sector, as researchers and developers push fraud-protection tools and capabilities into new territory. But a mature threat also means the need for a mature market response, and as generative AI and deepfakes continue to evolve into ever-more sophisticated forms, the proposition for firms working in biometrics, digital identity verification and liveness detection is starting to mature into something lucrative.

The National Institute of Standards and Technology (NIST) often sets the benchmark for best practices, and with the launch of its new U.S. AI Safety Institute, it is aiming to lead the way on global discussions about standardization for AI tools and applications, applied to both the private and public sector. Synthetic identities and other synthetically generated content will be a particular focus, says an article on Nextgov/FCW.

The institute was founded last month, in the wake of President Joe Biden's Executive Order on AI. Director Elizabeth Kelly says NIST is "very excited about positioning it as the leading safety institute in the world." Its pillars balance problems and solutions, and include testing and evaluation, safety and security protocols, and guidance on AI-generated content.

Among the institute's larger goals is to develop international engagement strategies for AI with other nations.

"We at the Safety Institute are working very closely with all of our allies and partners - both those countries that have already set up Safety Institutes like Japan and the UK, as well as those that are thinking about it are in earlier stages," says Kelly. Two areas of focus on this file will be creating "interoperable guidance" to level the playing field for private sector companies, and collaboration on advancing the science behind AI.

"A lot of the questions that we are tackling — for example, 'What should watermarking look like?' 'What risk mitigations actually work? Which tests work best?' — are all areas where there are open questions," says Kelly. "And by pooling our resources and working together closely, I think we can make a lot more progress."

Mitre opens facility for government AI testing and experimentation

Kelly spoke at the opening of Mitre's new AI Assurance and Discovery Lab. Mitre, a government-affiliated non-profit consultancy for the national security, aviation, health and cybersecurity sectors, will test for security vulnerabilities and other risks in AI systems used by federal government agencies. According to Bloomberg BNN, robotics engineer Miles Thompson will lead the lab, which is based at Mitre's headquarters in McLean, Virginia. A press release from Mitre says the facility offers "configurable space for risk discovery in simulated environments, AI red teaming, large language model evaluation, human-in-the-loop experimentation, and assurance plan development."

Mitre Senior Vice President Cedric Sims says that, "for federal agencies and private companies, this new lab delivers an objective, independent analysis to complement and validate their own testing and evaluation of AI-enabled systems."

Deepfakes push UK fraud fighting industry to £1 billion valuation

New numbers from UK digital identity security firm ID Crypt Global show that the rapid rise in  deepfake technology has powered growth in the fraud detection software industry by 200 percent, turning it into a billion-pound industry.

According to a press release from the company, data shows that the number of online deepfakes is doubling every six months, fueling waves of misinformation, disinformation and online fraud in a fraught, election-heavy year around the world. The £1.1 billion (roughly US$1.4 billion) now spent on tools to detect and safeguard against deepfake videos, audio, or images created with AI and deep learning algorithms represents a trend of upward growth in response, which has continued for several years and shows no signs of abating.

ID Crypt Global's own solution, the Authentic Media Protection program (AMP), focuses on authenticating content for journalists and media outlets, in the interest of flagging images that have been manipulated by bad actors for fraudulent purposes or to foment unrest.

"There is something truly sinister and unsettling about living in a world where we can no longer trust anything we see," says the company's CEO, Lauren Wilson-Smith. "This kind of uncertainty breeds fear and fear breeds conspiracy and unrest. Thankfully, there are some brilliant people and organizations working day and night to bring tools to the people that enable them to spot a fake quickly and minimize the spread of disinformation, and we are incredibly proud to be one of those companies."

Article Topics

AI Safety Institute  |  biometrics  |  deepfake detection  |  deepfakes  |  fraud prevention  |  ID Crypt Global  |  Mitre Corp.  |  NIST  |  synthetic data

< Back to 68k.news IN front page