AI security group publishes first findings

The ETSI Securing Artificial Intelligence (SAI) industry specification group has released its first report, giving an overview of the ‘problem statement’ in relation to AI security.

According to the organisation, the report focuses in particular on machine learning, as well as on “confidentiality, integrity and availability”. It also points to what it calls the “broader challenges” of AI, such as bias and ethical issues.

Discussing the report, as well as the group’s methodology, a spokesperson said: “To identify the issues involved in securing AI, the first step was to define AI itself. For the group, artificial intelligence is the ability of a system to handle representations – both explicit and implicit – and procedures in order to perform tasks that would be considered ‘intelligent’ if performed by a human.

“This definition represents a broad spectrum of possibilities. However, a limited set of technologies are now becoming feasible, largely driven by the evolution of machine learning and deep-learning techniques. [They are also being driven by] the wide availability of the data and processing power required to train and implement such technologies.”
