ISO 14971 is widely recognized as the gold standard for risk management of medical devices. As the regulatory landscape evolves to forge a path for innovative technology like AI/ML devices, the question becomes how to apply our existing systems and methodologies to novel technology that is inevitably the future of healthcare.

The unique risks posed by AI/ML are unlike any others that currently exist in MedTech. These devices are the first of their kind, and the technology that makes them so powerful and transformative to our healthcare industry can also make them extremely risky to public health. It has taken regulators and industry alike several years to even scratch the surface of how to balance that double-edged sword, and we are starting to see the fruits of that labor emerge by way of various standards, guidance documents, and other regulatory activities such as FDA's Digital Health Center of Excellence.

While the FDA races to create robust and structured regulatory guidelines around AI/ML, so too have various international standards and accreditation bodies. In May 2023, the Association for the Advancement of Medical Instrumentation (AAMI) and the British Standards Institution (BSI) partnered to publish guidance on applying ISO 14971 to AI/ML. BS/AAMI 34971:2023 is the official standard on the application of ISO 14971 to machine learning in artificial intelligence. This new document complements ISO 14971 by providing a structure for assessing and managing risk associated with AI/ML.

Specifically, this new standard covers how the hazards already identified in ISO 14971 can be applied to AI/ML devices. This document also provides examples and suggests strategies for exploring additional hazards and eliminating/mitigating the associated risks. In essence, this new document is the first step in bridging the gap between “traditional” risk management and innovative new devices with AI/ML.

It's important to note that this document does not modify existing risk management methodology; rather, it demonstrates how to apply it to AI/ML. This is critical because it allows regulators and medical device manufacturers to maintain the same risk management practices and expectations that have been in place for decades. Learning how to apply existing science-based practices for regulating medical devices, specifically AI/ML devices, instead of introducing completely new methodologies will be the key to continued regulatory stability and protection of public health.

Do you need support with your software as a medical device? Are you developing a device with AI/ML? EMMA International's team of digital health experts can help! Contact us today at 248-987-4497 or email info@emmainternational.com to get in touch.
