
Standardization to foster the adoption of Artificial Intelligence

Artificial Intelligence (AI) technologies keep evolving and are progressively entering every area of our lives. AI applications are now being deployed in almost every industry and sector. The benefits of AI are numerous, for example personalized healthcare, better working conditions in factories, improved traffic, and more. However, like any new technology, AI comes with new risks that have to be managed in order to maximize the benefits of its use.

December 19, 2020

In this context, multiple countries and organizations have put in motion mechanisms to ensure that AI applications are safe and trustworthy, and respect ethical values. For example, the European Commission has adopted a comprehensive AI approach, including ethical guidelines, research and investment recommendations, support for education and digital skills development, and the exploration of a legal framework. Here, standards have an important role to play. They can be used to educate people, giving them solid and objective grounds to understand the technologies. They can also be used to foster research and innovation, by providing a common language for describing the technologies and criteria for evaluating their outputs. They can support the implementation of ethical principles by providing clear technical guidance. Finally, they can help achieve regulatory compliance by translating legal criteria into technical ones and describing controls.

In the international standardization arena, much work has been done by the technical sub-committee on Artificial Intelligence, ISO/IEC JTC 1/SC 42. Mrs. Anna Curridori, who works in the IT supervision department of the Commission de Surveillance du Secteur Financier (CSSF), is registered as a national delegate in this sub-committee. We asked her to share her views on the role of standards in addressing AI-related challenges, and on the benefits of being part of the international standardization community.

Ethical and trustworthy AI are at the center of discussions in Europe and worldwide. Do you think standards will help to address these topics? Which of the underlying challenges (explainability, bias, risk management, etc.) do you think could benefit most from standardization?

Standards are built by having several experts from around the world – from different organizations and backgrounds – exchange views on the same topic. This process is very beneficial since the resulting document represents good practices that other companies can then use for their projects. Artificial Intelligence, and in particular its ethical and trustworthiness aspects (including explainability and bias, for example), presents recent challenges for which there is currently a lively debate on how they should be addressed, with no single “one size fits all” solution.

Standardization in the area of AI could therefore be very helpful, since it can define common terminology and describe good practices that help address those challenges more efficiently. Nevertheless, given that the underlying technologies and techniques are still evolving significantly, the resulting standards should not be too restrictive, so that they can accommodate future evolutions.

Risk management processes also need to be adapted to carefully consider the specific risks related to AI implementations. Standardization in this area can therefore help companies implementing AI projects to analyze AI-related risks according to a standardized methodology.

Regulation and standardization can be seen as complementary tools to provide guidance on how to guarantee product safety, protect users’ privacy, etc. Can you see this convergence in the field of AI? Can current AI standardization projects support AI regulation (at least for the financial sector)?

In the end, the goal is the same: we all want consumers to be protected and financial institutions to keep their risks under control, although the ways of reaching those objectives are quite different. Standards represent good practices that institutions can decide whether or not to follow, given that there is no obligation to use them.

Regulation, on the other hand, can require institutions within its scope to comply with certain requirements, generally without specifying the means of doing so. An institution may therefore decide to adopt a particular standard when implementing the requirements, but the choice remains with the institution.

Nevertheless, having internationally agreed standards in a complex area such as AI, and in particular regarding ethics and trustworthiness, may also help the regulator to formulate better and clearer requirements, for example by using standardized terminology or by defining requirements that are in line with existing good practices. This can help to address more effectively the risks that the regulator aims to keep under control.

What do you see as the main benefits of being involved in standardization?

As explained earlier, standards are built by a group of international experts with different backgrounds and from different organizations. Being part of such a group therefore allows me to interact and exchange views with those experts, which is very enriching. Furthermore, participating in a working group gives access to documents prepared by other working groups on related topics, enabling a 360° view of the subject. Personally, this allows me to gain a better understanding of AI challenges and of the evolution of the corresponding standardization activities, which also helps me when I participate in other international working groups on the supervisory side.

Interview by Mrs. Natalia Cassagnes, Project Officer in the Standardization Department of ANEC GIE and President of the National Mirror Committee ISO/IEC JTC 1/SC 42 in Luxembourg.

With 6 published standards and 21 standards under development, ISO/IEC JTC 1/SC 42 takes a holistic view of AI technologies and applications. Its working groups focus on describing AI concepts in a clear and comprehensive manner, collecting and analyzing representative use cases, addressing the data-related, trustworthiness and governance challenges of AI, and providing an overview of AI computational methods and approaches. More than 20 national delegates already contribute to the development of international standards and defend the interests of Luxembourg. Indeed, while many standards are still under development, now is the best time to influence their content. And as the motto says: “Setting the standards means setting the market”.

In Luxembourg, ILNAS (Institut luxembourgeois de la normalisation, de l’accréditation, de la sécurité et qualité des produits et services), the national standards body, has defined the ICT sector – including Artificial Intelligence and Big Data, Blockchain, Cloud Computing and the Internet of Things – as one of the priorities of the Luxembourg Standardization Strategy 2020-2030. ILNAS benefits from the support of the EIG ANEC (Agence pour la Normalisation et l’Economie de la Connaissance) to strengthen this economic sector’s participation in technical standardization. Furthermore, to support the involvement of national market players in this sector, ILNAS offers free-of-charge registration with the international and European standards developing organizations (ISO, IEC, CEN, and CENELEC).

For more information, please visit the Portail-qualité, or feel free to contact us directly at normalisation@ilnas.etat.lu or anec@ilnas.etat.lu.
