AI Privacy (AIP)

Privacy relates to the ability of individuals to control or influence what information about them can be collected, stored, and processed, and by whom that information can be disclosed. As explained in ISO/IEC Technical Report 24028:2020, "many AI techniques (such as deep learning) highly depend on big data because the accuracy of these techniques is dependent on the amount of data they use."

Data subjects can suffer negative consequences as a result of the improper use or disclosure of certain data, particularly personal and sensitive data such as health records. Protecting individuals' privacy has therefore become a primary concern in artificial intelligence and big data, and careful consideration should be given to whether an AI system can infer sensitive personal information. Protecting an individual's privacy in relation to AI systems entails safeguarding the data used to develop and operate the AI system, ensuring that the system cannot be exploited to provide unwarranted access to that data, and safeguarding access to models that have been customised for an individual or that can infer information about, or characteristics shared by, individuals similar to the target individual.
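One widely used privacy-preservation technique in this context is differential privacy, which adds calibrated noise to query results or training updates so that the contribution of any single individual cannot be reliably inferred. The sketch below is a minimal illustration of the Laplace mechanism in Python; the function name, the health-record counting query, and the parameter values are hypothetical and chosen only for illustration, not taken from any specific control implementation.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    `sensitivity` is the maximum change a single individual's record can
    cause in the query result; `epsilon` is the privacy budget
    (smaller epsilon means stronger privacy but noisier answers).
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: releasing the count of patients with a given diagnosis.
# A counting query has sensitivity 1, because adding or removing one person
# changes the count by at most 1.
true_count = 128
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"True count: {true_count}, privately released count: {private_count:.1f}")
```

In this sketch, the released value still conveys the approximate count while bounding how much any one data subject's presence or absence can influence the output, which limits what an observer can infer about that individual.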

The inappropriate collection, use, and disclosure of personal information can also have direct repercussions for fundamental human rights, including freedom of expression and information and protection against discrimination. The impact on the ethical principles of respect for human values and human dignity should also be considered.

The controls related to this risk category are listed below:

  • AIP 01 - Evaluation of AI Privacy Risks

  • AIP 02 - Data Misuse Prevention

  • AIP 03 - AI Privacy Preservation

  • AIP 04 - AI Profiling Individuals
