
AI Governance Glossary

Clear definitions for every term you need to know. From shadow AI to EU AI Act compliance, understand the language of AI governance.

A

Acceptable Use Policy (AUP)

A document that outlines the rules and guidelines for how employees can use AI tools within an organization. An effective AUP specifies approved tools, prohibited uses, data handling requirements, and consequences for violations.

Accountability

The principle that organizations and individuals must be answerable for the outcomes of AI systems they develop, deploy, or use. Accountability requires clear roles, documentation, and mechanisms for redress when AI causes harm.

Adversarial Attack

A deliberate attempt to manipulate an AI system by providing inputs designed to cause incorrect outputs or bypass safety controls. Adversarial attacks can undermine AI reliability and are a key concern in AI security and robustness testing.

AI Audit

A systematic examination of an organization's AI systems and practices to assess compliance with policies, regulations, and ethical standards. AI audits evaluate governance frameworks, data handling, model performance, and documentation.

AI Ethics

The branch of applied ethics that examines the moral implications of developing and using AI systems. AI ethics encompasses fairness, transparency, privacy, accountability, and the societal impact of artificial intelligence.

AI Governance

The framework of policies, procedures, and controls that guide how an organization develops, deploys, and uses artificial intelligence. Effective AI governance ensures AI is used responsibly, ethically, and in compliance with regulations.

AI Lifecycle

The complete sequence of stages an AI system goes through, from initial design and data collection through development, testing, deployment, monitoring, and eventual retirement. Governance should address each stage of the AI lifecycle.

AI Literacy

The knowledge and skills required to understand, use, and evaluate AI systems appropriately. Under the EU AI Act, organizations must ensure employees have sufficient AI literacy before working with AI systems. This requirement took effect in February 2025.

AI Policy

A formal document that establishes an organization's rules, guidelines, and expectations for AI use. AI policies typically cover acceptable use, data protection, prohibited activities, and compliance requirements.

AI Risk Management

The process of identifying, assessing, and mitigating risks associated with AI systems. This includes risks related to data privacy, bias, security, compliance, and operational failures.

Algorithmic Discrimination

When an AI system produces outcomes that unfairly disadvantage individuals based on protected characteristics such as race, gender, age, or disability. Under laws like the Colorado AI Act, organizations must take reasonable care to prevent algorithmic discrimination.

Audit Trail

A chronological record of activities, events, and changes within a system that provides evidence of compliance and enables investigation of incidents. In AI governance, audit trails track policy acknowledgments, training completion, and AI tool usage.
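One common way to make an audit trail tamper-evident is to hash-chain its entries, so that altering any earlier record invalidates every later one. The sketch below is illustrative only (field names like `actor` and `action` are hypothetical, not a prescribed schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(log, actor, action):
    """Append a tamper-evident entry: each entry embeds the hash of the one before it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

trail = []
append_event(trail, "alice@example.com", "acknowledged AI policy v2")
append_event(trail, "alice@example.com", "completed AI literacy training")
# Editing an earlier entry breaks the hash chain of every entry after it.
```

Because each record carries its predecessor's hash, a verifier can recompute the chain end to end and detect any retroactive modification.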

Automated Decision-Making (ADM)

The process of making decisions using AI or algorithmic systems with limited or no human intervention. GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects on them.

B

Bias (in AI)

Systematic errors in AI outputs that result from flawed assumptions, unrepresentative training data, or prejudiced design decisions. AI bias can lead to unfair outcomes and regulatory violations, particularly in high-risk applications.

Black Box

An AI system whose internal decision-making process is opaque or unexplainable. Black box AI creates governance challenges because organizations cannot fully explain how decisions are made, which is problematic for compliance and accountability.

C

CCPA (California Consumer Privacy Act)

A California state law that gives consumers rights over their personal information and imposes obligations on businesses that collect or sell that data. CCPA includes provisions relevant to automated decision-making and AI systems processing personal data.

Chatbot

An AI-powered software application designed to simulate conversation with human users. Chatbots range from simple rule-based systems to advanced generative AI assistants. Under the EU AI Act, transparency obligations require disclosure when users interact with chatbots.

Colorado AI Act

A US state law requiring developers and deployers of high-risk AI systems to take reasonable care to prevent algorithmic discrimination. The Colorado AI Act includes safe harbor provisions for organizations that follow recognized frameworks like NIST AI RMF.

Compliance

The state of adhering to laws, regulations, policies, and standards. AI compliance requires organizations to meet requirements from regulations like GDPR, EU AI Act, and industry-specific rules while following internal policies.

Compliance Officer

A designated individual responsible for ensuring an organization meets its regulatory and policy obligations. In the context of AI governance, compliance officers oversee AI policy enforcement, training programs, and audit readiness.

Conformity Assessment

The process of evaluating whether a product, service, or system meets specified requirements. Under the EU AI Act, high-risk AI systems must undergo conformity assessments before being placed on the market.

Consequential Decision

A decision that has a material legal or significant effect on an individual's life, such as decisions about employment, credit, housing, education, or healthcare. Many AI regulations focus specifically on AI systems that make or support consequential decisions.

D

Data Breach

An incident where sensitive, protected, or confidential information is accessed, disclosed, or stolen by unauthorized parties. AI tools can contribute to data breaches when employees share sensitive information in prompts.

Data Controller

Under GDPR, the entity that determines the purposes and means of processing personal data. When an organization uses AI tools that process personal data, it typically acts as the data controller and bears responsibility for compliance.

Data Minimization

The principle that organizations should collect and process only the minimum amount of personal data necessary for a specific purpose. This GDPR principle applies to AI systems that process personal data.

Data Processor

Under GDPR, an entity that processes personal data on behalf of a data controller. AI tool providers often act as data processors, and organizations must ensure appropriate data processing agreements are in place.

Data Protection Impact Assessment (DPIA)

A process to identify and minimize data protection risks of a project or system. Under GDPR, DPIAs are required for processing likely to result in high risk to individuals, including many AI applications.

Deepfake

Synthetic media created using AI that convincingly depicts people saying or doing things they never did. Several regulations address deepfakes, including disclosure requirements and prohibitions on malicious use.

Deployer

Under the EU AI Act, any entity that uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity. Deployers have specific obligations including monitoring, record-keeping, and transparency.

Drift (Model Drift)

The degradation of an AI model's performance over time as the real-world data it encounters diverges from its training data. Model drift requires ongoing monitoring and periodic retraining to maintain accuracy and compliance.
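A minimal way to watch for drift is to compare a feature's live distribution against its training-time baseline. The sketch below uses a simple standardized mean shift; the values and the alert threshold of 2.0 are illustrative assumptions, and production monitoring typically uses richer statistics:

```python
import statistics

def drift_score(baseline, live):
    """Standardized shift in a feature's mean between training data and live traffic."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

baseline = [42, 38, 45, 40, 39, 44, 41, 43]  # feature values seen at training time
live     = [55, 58, 52, 60, 54, 57, 56, 59]  # the same feature observed in production

if drift_score(baseline, live) > 2.0:  # the alert threshold is a policy choice
    print("drift alert: retraining review recommended")
```

When the score crosses the agreed threshold, governance processes (not the monitoring code itself) decide whether to retrain, recalibrate, or retire the model.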

E

Embedding

A numerical representation of data (text, images, or other content) in a format that AI models can process. Embeddings capture semantic meaning and are used in search, recommendation systems, and retrieval-augmented generation.
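Because embeddings encode meaning as vectors, "semantic similarity" reduces to vector geometry, most commonly cosine similarity. The toy 4-dimensional vectors below are made up for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1 means similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three documents.
invoice  = [0.9, 0.1, 0.0, 0.2]
receipt  = [0.8, 0.2, 0.1, 0.3]
vacation = [0.0, 0.9, 0.8, 0.1]

print(cosine_similarity(invoice, receipt))   # high: related meanings
print(cosine_similarity(invoice, vacation))  # low: unrelated meanings
```

Search and retrieval-augmented generation work by computing exactly this kind of score between a query's embedding and the embeddings of stored documents.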

EU AI Act

The European Union's comprehensive regulation on artificial intelligence, establishing a risk-based framework for AI governance. It categorizes AI systems by risk level (unacceptable, high, limited, minimal) and imposes corresponding requirements. High-risk requirements take effect in August 2026.

Explainability

The ability to describe how an AI system reaches its outputs in terms that humans can understand. Explainability is increasingly required by regulations and is essential for accountability and trust in AI systems.

F

Fairness

The principle that AI systems should treat individuals and groups equitably, without unjust bias or discrimination. Achieving fairness in AI requires careful attention to data, model design, evaluation metrics, and ongoing monitoring.

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Examples include GPT-4, Claude, and Gemini. The EU AI Act includes specific provisions for foundation models and general-purpose AI.

Fundamental Rights Impact Assessment

An evaluation required under the EU AI Act for deployers of high-risk AI systems to assess the impact on fundamental rights of affected individuals before putting the system into use.

G

GDPR (General Data Protection Regulation)

The European Union's comprehensive data protection law that governs how organizations collect, process, and store personal data. GDPR includes Article 22 on automated decision-making and applies to AI systems processing personal data of EU residents.

General Purpose AI (GPAI)

AI models capable of performing a wide range of tasks, as defined by the EU AI Act. GPAI providers have obligations including technical documentation, transparency about training data, and compliance with copyright law. Models with systemic risk face additional requirements.

Generative AI

AI systems that can create new content including text, images, audio, video, and code based on training data and user prompts. Examples include ChatGPT, Claude, Midjourney, and GitHub Copilot. Generative AI creates unique governance challenges around data leakage and output accuracy.

Governance Framework

A structured approach to managing and overseeing AI within an organization, including policies, procedures, roles, responsibilities, and controls. Effective governance frameworks address the full AI lifecycle from procurement to retirement.

Guardrails

Technical or procedural controls designed to prevent AI systems from producing harmful, inaccurate, or non-compliant outputs. Guardrails can include content filters, usage policies, data redaction, and human oversight mechanisms.
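A simple technical guardrail is prompt redaction: scan outbound text for sensitive patterns before it reaches an external AI service. The two regexes below are deliberately crude placeholders; real deployments use far more robust detectors:

```python
import re

# Hypothetical detection patterns for a prompt-redaction guardrail.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt):
    """Replace sensitive substrings before the prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."))
```

Redaction of this kind is usually layered with other guardrails, such as tool allowlists and human review, rather than relied on alone.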

H

Hallucination

When an AI model generates content that appears plausible but is factually incorrect, fabricated, or not grounded in its training data. Hallucinations pose significant risks in professional contexts where accuracy is critical, such as legal, medical, or financial applications.

High-Risk AI System

Under the EU AI Act, AI systems that pose significant risks to health, safety, or fundamental rights. High-risk systems are subject to strict requirements including risk management, data governance, documentation, human oversight, and accuracy standards.

HIPAA (Health Insurance Portability and Accountability Act)

US federal law that protects sensitive patient health information. HIPAA applies to AI systems that process protected health information (PHI) and imposes strict requirements on healthcare organizations using AI.

Human-in-the-Loop

A design approach where human judgment is integrated into the AI decision-making process, allowing people to review, approve, or override AI outputs before they take effect. Many regulations require human-in-the-loop for high-risk or consequential decisions.

Human Oversight

The requirement that AI systems, particularly high-risk ones, be designed to allow effective human supervision and intervention. The EU AI Act mandates human oversight measures for high-risk AI systems.

I

Impact Assessment

A systematic evaluation of the potential effects of an AI system on individuals, groups, or society. Various regulations require different types of impact assessments before deploying AI systems.

Incident Reporting

The process of documenting and communicating AI-related incidents, malfunctions, or violations. Under laws like the Colorado AI Act, organizations must disclose discovered algorithmic discrimination to the state attorney general within 90 days.

Inference

The process by which a trained AI model generates predictions or outputs based on new input data. Inference is the operational phase where an AI system is actively being used, as opposed to the training phase where it learns from data.

ISO 42001

The international standard for AI management systems, providing a framework for establishing, implementing, maintaining, and improving AI governance. ISO 42001 certification demonstrates commitment to responsible AI practices.

L

Large Language Model (LLM)

A type of AI model trained on vast amounts of text data that can understand and generate human-like text. LLMs power tools like ChatGPT, Claude, and Gemini. Governing LLM use is a primary focus of AI policy.

M

Machine Learning

A subset of artificial intelligence where systems learn patterns from data and improve their performance without being explicitly programmed for each task. Machine learning underpins most modern AI applications and is the technical foundation for models that AI governance seeks to regulate.

Model Card

A document that provides information about an AI model including its intended uses, limitations, performance metrics, and potential risks. Model cards promote transparency and help users understand appropriate applications.

Model Governance

The policies and processes for managing AI models throughout their lifecycle, including development, testing, deployment, monitoring, and retirement. Model governance is distinct from usage governance, which focuses on how employees use AI tools.

N

Natural Language Processing (NLP)

A field of AI focused on enabling computers to understand, interpret, and generate human language. NLP powers chatbots, translation tools, sentiment analysis, and large language models. It is central to most generative AI applications employees use.

NIST AI Risk Management Framework (AI RMF)

A voluntary framework developed by the US National Institute of Standards and Technology to help organizations manage AI risks. The AI RMF provides guidance on governance, mapping, measuring, and managing AI risks. Alignment with NIST AI RMF provides safe harbor under the Colorado AI Act.

O

Opt-Out Rights

The right of individuals to refuse automated decision-making or AI processing of their data. Various regulations including GDPR Article 22 provide opt-out rights for certain automated decisions.

P

PII (Personally Identifiable Information)

Any information that can be used to identify an individual, such as names, addresses, social security numbers, or biometric data. Protecting PII is a primary concern when employees use AI tools.

Policy Acknowledgment

A recorded confirmation that an individual has read and understood a policy. Policy acknowledgments with timestamps provide audit evidence that employees were informed of AI usage requirements.

Post-Market Monitoring

The ongoing surveillance and evaluation of an AI system after it has been deployed to market. Under the EU AI Act, providers of high-risk AI must establish post-market monitoring systems to identify issues and ensure continued compliance.

Pre-Market Conformity

The set of requirements and assessments an AI system must satisfy before it can be placed on the market. Under the EU AI Act, high-risk AI systems must demonstrate pre-market conformity through testing, documentation, and certification.

Privacy by Design

An approach that embeds privacy protections into the design and architecture of systems from the outset, rather than adding them later. Privacy by design is a GDPR principle applicable to AI system development.

Prohibited AI Practices

AI applications banned outright under regulations like the EU AI Act. Examples include social scoring by governments, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that manipulates behavior to cause harm.

Prompt

The input text or instructions given to a generative AI system to generate a response. Prompts can inadvertently contain sensitive information, creating data leakage risks that AI policies must address.

Provider

Under the EU AI Act, any entity that develops or has an AI system developed and places it on the market or puts it into service. Providers have extensive obligations including conformity assessment and post-market monitoring.

R

Red Teaming

A structured approach to testing AI systems by simulating adversarial attacks, misuse scenarios, or edge cases to identify vulnerabilities. Red teaming helps organizations discover risks before deployment and is increasingly considered a best practice in AI governance.

Regulatory Compliance

The process of ensuring that an organization's AI practices meet the requirements set by applicable laws and regulations. Regulatory compliance involves understanding obligations, implementing controls, maintaining documentation, and demonstrating adherence through audits.

Regulatory Sandbox

A controlled environment where organizations can test innovative AI applications under regulatory supervision with relaxed requirements. The EU AI Act establishes regulatory sandboxes to support AI innovation while managing risks.

Responsible AI

The practice of designing, developing, and deploying AI systems in ways that are ethical, transparent, fair, and accountable. Responsible AI encompasses technical practices, governance frameworks, and organizational culture.

Right to Explanation

The principle that individuals affected by automated decisions should receive meaningful information about the logic involved. GDPR provides a form of this right in the context of automated decision-making, obliging organizations to provide meaningful information about how such decisions are reached.

Risk-Based Approach

A regulatory strategy that imposes requirements proportional to the level of risk an AI system poses. The EU AI Act uses a risk-based approach, with stricter rules for higher-risk applications.

Robustness

The ability of an AI system to maintain accurate and reliable performance under varying conditions, including unexpected inputs, adversarial attacks, or environmental changes. Robustness is a key requirement for high-risk AI systems under the EU AI Act.

S

Safe Harbor

A legal provision that protects organizations from liability if they follow specified practices. Under the Colorado AI Act, alignment with NIST AI RMF or ISO 42001 creates a rebuttable presumption of reasonable care, providing safe harbor protection.

Sensitive Data

Information that requires heightened protection due to its nature, including health records, financial data, biometric identifiers, and data revealing racial or ethnic origin. AI governance policies must address how sensitive data is handled when used with AI tools.

Shadow AI

The use of AI tools by employees without organizational knowledge, approval, or oversight. Shadow AI creates significant compliance and security risks because organizations cannot govern what they cannot see. Some studies suggest as many as 80% of employees use unapproved AI tools.

SOC 2

A compliance framework developed by the AICPA that evaluates an organization's controls related to security, availability, processing integrity, confidentiality, and privacy. SOC 2 audits increasingly examine AI governance practices.

Supervisory Authority

A government body responsible for enforcing AI or data protection regulations within its jurisdiction. Under the EU AI Act, each member state designates national competent authorities to oversee compliance and handle enforcement.

Synthetic Data

Artificially generated data that mimics real data without containing actual personal information. Synthetic data can help organizations train AI models while reducing privacy risks.

T

Third-Party AI

AI tools and systems provided by external vendors rather than built in-house. Organizations using third-party AI remain responsible for governance and compliance, and must assess vendor practices through due diligence and contractual safeguards.

Training (AI Governance)

Education provided to employees about AI policies, acceptable use, risks, and best practices. Effective AI training includes assessments to verify comprehension and should be refreshed as policies evolve.

Transparency

The principle that AI systems and their use should be open and understandable to affected individuals and stakeholders. Transparency requirements appear in regulations including GDPR, EU AI Act, and various US state laws.

Transparency Obligation

Requirements to disclose when AI is being used, how it makes decisions, or what data it processes. The EU AI Act imposes transparency obligations for AI systems that interact with humans, generate content, or process biometric data.

U

Unapproved AI Tools

AI applications that employees use without organizational authorization. Unapproved tools may lack security controls, violate data handling policies, or create compliance risks.

Use Case

A specific application or scenario in which an AI system is deployed. Identifying and documenting use cases is essential for AI governance, as the risk profile and regulatory requirements vary significantly depending on how AI is being used.

V

Vendor Risk Management

The process of assessing and managing risks associated with third-party AI tools and services. Organizations are responsible for AI governance even when using vendor-provided AI systems.

W

Whistleblower Protection

Legal safeguards for individuals who report AI-related violations or concerns. The EU AI Act includes provisions protecting those who report non-compliance with AI regulations.

86 terms defined

Ready to Implement AI Governance?

PolicyGuard helps you move from understanding AI governance to proving it.
