Seclea User Documentation
OECD AI Principles


The OECD AI Principles promote innovative and trustworthy use of AI that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time. The Seclea template captures the baseline an AI project should meet to conform to these principles.

According to the OECD website, AI is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance and transport to healthcare and security.

Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labour market, and implications for democracy and human rights.

The OECD has undertaken empirical and policy activities on AI in support of the policy debate, starting with a Technology Foresight Forum on AI in 2016 and an international conference, "AI: Intelligent Machines, Smart Policies", in 2017. The Organisation has also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps the economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.

This work has demonstrated the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. Against this background, the OECD Committee on Digital Economy Policy (CDEP) agreed to develop a draft Council Recommendation to promote a human-centric approach to trustworthy AI that fosters research, preserves economic incentives to innovate, and applies to all stakeholders.

The OECD AI Principles include:

  • Inclusive growth, sustainable development and well-being
  • Human-centred values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability

You can access the full documentation on the Seclea Platform's OECD AI Principles template here.