Risk management framework for AI: new Standards from IEC/ISO

19 Mar

As the use and application of artificial intelligence (AI) systems increases, addressing the trustworthiness of these systems is key to widespread adoption. Examples of trust concerns include data and algorithmic bias, data privacy, and a lack of transparency and accountability.

A new international standard is being developed by IEC and ISO, which will provide guidelines on managing the risks organizations face when developing and applying AI techniques and systems. It will assist organizations in integrating risk management for AI into significant activities and functions, and will describe processes for the effective implementation and integration of AI risk management. The guidelines can be tailored to any organization and its context.

Risk management in the AI context

New technologies bring new challenges, where the unknown is greater than the known. Risk management can help deal with uncertainty in areas where no recognized measures of quality have been established. “For a specific AI product or AI service, a risk management process ensures that ‘by design’ throughout the product or service lifecycle, stakeholders with their vulnerable assets and values are identified, potential threats and pitfalls are understood, associated risks with their consequences (or impact) are assessed, and conscious risk treatment decisions based on the organization’s objectives and its risk tolerance are made”, said Wael William Diab, Chair of ISO/IEC JTC 1/SC 42, the IEC and ISO joint technical committee for artificial intelligence.
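The process the quote describes can be sketched in code: identify assets and threats, assess likelihood and impact, and make a treatment decision against the organization's risk tolerance. The following is a minimal illustrative sketch, not anything prescribed by the standard; the class names, scoring scale and treatment rules are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str        # vulnerable asset or value at stake
    threat: str       # potential threat or pitfall
    likelihood: int   # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int       # 1 (negligible) .. 5 (severe), assumed scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention
        return self.likelihood * self.impact

def treatment(risk: Risk, tolerance: int) -> str:
    """Choose a treatment based on the organization's risk tolerance."""
    if risk.score <= tolerance:
        return "accept"
    if risk.impact >= 4:
        return "avoid or transfer"
    return "mitigate"

# A toy risk register for an AI system
register = [
    Risk("training data", "algorithmic bias", likelihood=4, impact=4),
    Risk("personal data", "privacy breach", likelihood=2, impact=5),
    Risk("model decisions", "lack of explainability", likelihood=3, impact=2),
]

for r in register:
    print(f"{r.threat}: score={r.score}, decision={treatment(r, tolerance=6)}")
```

The point of such a register is that treatment decisions become explicit and auditable "by design" across the lifecycle, rather than being made ad hoc.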

In the case of AI systems, risk management would address:

  • Engineering pitfalls and the typical threats and risks to AI systems, together with mitigation techniques and methods, by allowing risks to (and from the use of) AI systems to be identified, classified and treated.
  • Establishment of trust in AI systems through transparency, verifiability, explainability and controllability, by using a well-understood and documented risk management process.
  • An AI system's robustness, resiliency, reliability, accuracy, safety, security and privacy, by providing transparency in how risks to the identified stakeholders are treated.

The new standard builds on the principles and guidelines described in ISO 31000 (Risk management – Guidelines), which help organizations with their risk analysis and assessments. It also provides guidance on the risks that arise when AI is applied to an organization's existing processes, or when an organization provides an AI system for use by others.