ETSI publishes ‘deepfake’ study
ETSI has published a new report on the use of AI to create ‘deepfakes’. The report, known as ETSI GR SAI 011, has been released by the organisation’s Securing AI group.
According to ETSI, the report “focuses on the use of AI for manipulating multimedia identity representations and illustrates the consequential risks, as well as the measures that can be taken to mitigate them.”
A statement released by the organisation continues: “ETSI GR SAI 011 outlines many of the more immediate concerns raised by the rise of AI. [This includes] the use of AI-based techniques for automatically manipulating identity data represented in various media formats, such as audio, video, and text.
“The report describes the different technical approaches, and also analyses the threats posed by deepfakes in various attack scenarios. The ISG SAI group works to rationalize the role of AI within the threat landscape.”
Chair of the Securing AI group, Scott Cadzow, said: “AI techniques allow for automated manipulations which previously required a substantial amount of manual work and, in extreme cases, can even create fake multimedia data from scratch.
“Deepfake techniques can also manipulate audio and video files in a targeted manner, while preserving high acoustic and visual quality in the results. Our ETSI report proposes measures to mitigate these threats.”