AI Safety (AIS)
Deploying AI systems can give rise to previously unanticipated hazards. Safety is the expectation that a system, under specified conditions, will not lead to a state in which human life, human health, property, or the environment is endangered. The risk is heightened when AI systems are deployed in automated settings such as manufacturing equipment, robots, and vehicles. When developing AI systems for a particular application domain, the standards specific to that domain should be taken into account, for example those governing the design of machinery, transportation, and medical devices.
Controls related to this risk category are listed below:
AIS 01 - AI Usage by Humans
AIS 02 - Deployment Valid and Reliable
AIS 03 - Logging - Situations that may cause AI Risk (see the sketch after this list)
AIS 04 - Regular Evaluation of AI Systems
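
To make AIS 03 concrete, below is a minimal sketch of how situations that may cause AI risk could be captured as structured log entries. The event types, field names, and model identifier used here are illustrative assumptions, not part of the control text.

```python
# A minimal sketch of structured logging for AI-risk-relevant situations (AIS 03).
# Event names and fields are illustrative assumptions, not prescribed by the control.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_risk")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_risk_event(event_type: str, model_id: str, details: dict) -> None:
    """Record a situation that may cause AI risk as a structured log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "out_of_distribution_input"
        "model_id": model_id,
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: a safety-relevant anomaly observed during inference.
log_risk_event(
    event_type="low_confidence_prediction",  # hypothetical event type
    model_id="vision-model-v2",              # hypothetical identifier
    details={"confidence": 0.41, "threshold": 0.80, "action": "human_review"},
)
```

Emitting each event as a single JSON line keeps the log machine-readable, which supports the regular evaluation called for in AIS 04.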