Transparency and Provision of Information to Users (TPI)

This compliance category contains the requirements for the transparency of AI/ML-based SaMD, as well as the information that must be provided to users. It also addresses IMDRF SaMD N23 sections 6.1 (Leadership and Accountability in the Organisation), 7.5 (Measurement, Analysis and Improvement of Processes, Activities and Product), 8.5 (SaMD Realization and Use Processes - Deployment) and 8.6 (SaMD Realization and Use Processes - Maintenance).

In its published whitepapers on AI/ML-based SaMD, the FDA expressed an expectation of transparency and real-world performance monitoring by manufacturers, which would enable both the FDA and manufacturers to evaluate and monitor a software product from its premarket development through its postmarket performance.
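To make the real-world performance monitoring expectation concrete, below is a minimal sketch of a post-market monitor that tracks rolling accuracy over the most recent confirmed outcomes and flags the model for review when performance drops. The class name, window size, and threshold are illustrative assumptions, not values prescribed by the FDA.

```python
from collections import deque
from typing import Optional

# Minimal sketch of post-market, real-world performance monitoring.
# The window size and threshold are illustrative assumptions, not
# values taken from FDA guidance.

class RollingPerformanceMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.90):
        self._outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self._threshold = threshold

    def add_outcome(self, prediction, ground_truth) -> None:
        """Record whether a deployed-model prediction matched the confirmed outcome."""
        self._outcomes.append(1 if prediction == ground_truth else 0)

    def rolling_accuracy(self) -> Optional[float]:
        if not self._outcomes:
            return None
        return sum(self._outcomes) / len(self._outcomes)

    def needs_review(self) -> bool:
        """Flag the model for review when recent real-world accuracy falls below threshold."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self._threshold
```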

According to the FDA:

The Agency acknowledges that AI/ML-based devices have unique considerations that necessitate a proactive patient-centered approach to their development and utilization that takes into account issues including usability, equity, trust, and accountability. One way that FDA is addressing these issues is through the promotion of the transparency of these devices to users, and to patients more broadly, about the devices’ functioning. Promoting transparency is a key aspect of a patient-centered approach, and we believe this is especially important for AI/ML-based medical devices, which may learn and change over time, and which may incorporate algorithms exhibiting a degree of opacity.

Numerous stakeholders have expressed the unique challenges of labeling for AI/ML-based devices and the need for manufacturers to clearly describe the data that were used to train the algorithm, the relevance of its inputs, the logic it employs (when possible), the role intended to be served by its output, and the evidence of the device’s performance. Stakeholders expressed interest in FDA proactively clarifying its position on transparency of AI/ML technology in medical device software.

The Agency is committed to supporting a patient-centered approach including the need for a manufacturer’s transparency to users about the functioning of AI/ML-based devices to ensure that users understand the benefits, risks, and limitations of these devices. To this end, in October 2020, the Agency held a Patient Engagement Advisory Committee (PEAC) meeting devoted to AI/ML-based devices in order to gain insight from patients into what factors impact their trust in these technologies. The Agency is currently compiling input gathered during this PEAC meeting; our proposed next step is to hold a public workshop to share learnings and to elicit input from the broader community on how device labeling supports transparency to users. We intend to consider this input for identifying types of information that FDA would recommend a manufacturer include in the labeling of AI/ML-based medical devices to support transparency to users. These activities to support the transparency of and trust in AI/ML-based technologies will be informed by FDA’s participation in community efforts, referenced above, such as standards development and patient-focused programs. They will be part of a broader effort to promote a patient-centered approach to AI/ML-based technologies based on transparency to users.

IMDRF SaMD N23 states that:

Patient Safety and Clinical Environment Considerations

  • The implementation of clinical algorithms adopted should be transparent to the user in order to avoid misuse or unintended use.

  • The implementation of proper access controls and audit trail mechanisms should be balanced with the usability of SaMD as intended.
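The audit-trail point above can be illustrated with a minimal sketch of an append-only log that records who accessed which SaMD output and when, while keeping the logging call lightweight so it does not interfere with the intended clinical use. All names in the sketch (AuditEvent, AuditTrail, and the field names) are hypothetical and not drawn from N23.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical, minimal append-only audit trail for a SaMD deployment.
# Each entry records who did what, to which record, and when, so access
# can be reconstructed later without adding friction for the user.

@dataclass
class AuditEvent:
    event_id: str
    timestamp: float        # Unix epoch, UTC
    user_id: str            # authenticated clinical user
    action: str             # e.g. "view_inference", "export_report"
    resource: str           # e.g. study or patient-record identifier
    model_version: str      # version of the deployed model involved

class AuditTrail:
    def __init__(self, path: str):
        self._path = path

    def record(self, user_id: str, action: str, resource: str, model_version: str) -> None:
        event = AuditEvent(
            event_id=str(uuid.uuid4()),
            timestamp=time.time(),
            user_id=user_id,
            action=action,
            resource=resource,
            model_version=model_version,
        )
        # Append-only, one JSON object per line; a real system would also
        # protect the log against tampering (e.g. checksums, WORM storage).
        with open(self._path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Example usage:
# trail = AuditTrail("audit.log")
# trail.record("dr_smith", "view_inference", "study-001", "model-1.2.0")
```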

As per the FDA's Good Machine Learning Practice (GMLP) guidance, this compliance category supports the following principles:

Principle 6. Model Design Is Tailored to the Available Data and Reflects the Intended Use of the Device: Model design is suited to the available data and supports the active mitigation of known risks, like overfitting, performance degradation, and security risks. The clinical benefits and risks related to the product are well understood, used to derive clinically meaningful performance goals for testing, and support that the product can safely and effectively achieve its intended use. Considerations include the impact of both global and local performance and uncertainty/variability in the device inputs, outputs, intended patient populations, and clinical use conditions.
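One way to read the global and local performance consideration in Principle 6 is to evaluate the same metric in aggregate and per clinically relevant subgroup, so that acceptable overall performance cannot mask a poorly performing subpopulation. The sketch below assumes hypothetical record fields (label, prediction, subgroup) and is not tied to any particular device.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# Sketch: compare global accuracy with per-subgroup ("local") accuracy
# so that acceptable aggregate performance cannot hide a weak subgroup.

def accuracy(pairs: Iterable[Tuple[int, int]]) -> float:
    pairs = list(pairs)
    return sum(1 for label, prediction in pairs if label == prediction) / len(pairs)

def subgroup_report(records: List[dict]) -> Dict[str, float]:
    """records: [{'label': 0/1, 'prediction': 0/1, 'subgroup': 'age_65_plus'}, ...]"""
    by_group = defaultdict(list)
    for r in records:
        by_group[r["subgroup"]].append((r["label"], r["prediction"]))
    report = {"global": accuracy((r["label"], r["prediction"]) for r in records)}
    for group, pairs in by_group.items():
        report[group] = accuracy(pairs)
    return report

# Example:
# subgroup_report([
#     {"label": 1, "prediction": 1, "subgroup": "age_under_65"},
#     {"label": 1, "prediction": 0, "subgroup": "age_65_plus"},
# ])
```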

Principle 9. Users Are Provided Clear, Essential Information: Users are provided ready access to clear, contextually relevant information that is appropriate for the intended audience (such as health care providers or patients) including: the product’s intended use and indications for use, performance of the model for appropriate subgroups, characteristics of the data used to train and test the model, acceptable inputs, known limitations, user interface interpretation, and clinical workflow integration of the model. Users are also made aware of device modifications and updates from real-world performance monitoring, the basis for decision-making when available, and a means to communicate product concerns to the developer.
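Principle 9 effectively calls for a structured, user-facing summary of the model. Below is a hedged sketch of such a summary in the spirit of a "model card"; the field names are illustrative assumptions, not an FDA-prescribed labeling schema.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative "model card"-style structure collecting the user-facing
# information Principle 9 calls for; field names are assumptions.

@dataclass
class DeviceInformationSummary:
    intended_use: str
    indications_for_use: str
    training_data_characteristics: str                  # e.g. sources, size, demographics
    subgroup_performance: Dict[str, Dict[str, float]]   # e.g. {"age_65_plus": {"sensitivity": 0.91}}
    acceptable_inputs: List[str]
    known_limitations: List[str]
    clinical_workflow_integration: str
    model_version: str
    contact_for_concerns: str

    def to_labeling_text(self) -> str:
        """Render a plain-text block that could accompany the device labeling."""
        return "\n".join([
            f"Intended use: {self.intended_use}",
            f"Indications for use: {self.indications_for_use}",
            f"Training data: {self.training_data_characteristics}",
            f"Acceptable inputs: {', '.join(self.acceptable_inputs)}",
            f"Known limitations: {'; '.join(self.known_limitations)}",
            f"Model version: {self.model_version}",
            f"Report concerns to: {self.contact_for_concerns}",
        ])
```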

Below is the list of the controls that are part of this compliance category:
