European Commission’s technical report hints at the future regulatory framework for AI
In January 2020, the European Commission’s science and research centre, the Joint Research Centre (JRC), published a study ‘Robustness and Explainability of Artificial Intelligence (AI)’.
The technical report’s main goal is to contribute to the discussions on the establishment of a future regulatory framework for AI. It identifies the robustness and explainability of AI systems as key elements for any future regulation of this technology.
The document links the principles of current regulations on the cybersecurity of digital systems and the protection of data with the policy activities concerning AI and the technical discussions within the AI scientific community, in particular in the field of machine learning.
According to the report, three topics are deemed essential for the proper deployment of AI in society:
- Transparency of models: the documentation of the AI processing chain, including the technical principles of the model and a description of the data used to build it;
- Reliability of models: the capacity of models to avoid failures or malfunctions, whether caused by edge cases or by malicious intent;
- Protection of data in models: the security of the data used in AI models must be preserved; in the case of sensitive data, such as personal data, the risks should be managed through appropriate organisational and technical controls.
Moreover, the report puts forward policy considerations for policy makers to establish a set of standardisation and certification tools for AI. These include:
- Developing a methodology to evaluate the impact of AI systems on society, providing users and organisations with an assessment of the risks involved in using AI techniques;
- Introducing standardised methodologies to assess the robustness of AI models;
- Raising awareness among AI practitioners by publishing good practices on known vulnerabilities of AI models and the technical solutions to address them;
- Promoting transparency in the design of machine learning models, emphasising the need for an explainability-by-design approach to AI systems.
CECE has been closely monitoring regulatory developments in Artificial Intelligence and new technologies. Our task forces are focusing on the data aspects and on the suitability of safety-related technical legislation, such as the Machinery Directive.
The full report is available on the JRC website.