Establishing AI Governance and Compliance Strategy

What are the risks of inadequate AI governance and compliance?

Poorly governed AI programs can undermine strategic goals – poor governance increases the risk of investing in AI development that isn’t aligned with the company’s long-term strategy. This makes it harder to drive technological innovation that unlocks meaningful growth and product objectives. 

Taking a narrow view of AI strategy increases operational risk – if you invest in AI without understanding how it can (and should) drive value for your organization, you’re more likely to encounter regulatory issues, develop suboptimal products, negatively impact the customer experience and consumer trust, and struggle with efficiency issues in the future. 

AI programs will become increasingly regulated over the next 5 years – AI regulation is nascent and diverging in substantive ways, but geographic regions, countries, and even individual states are developing increasingly robust compliance frameworks. Companies that anticipate utilizing AI in the future should already be building a foundation of AI governance and transparency.

Who should own your AI compliance/ governance program?

CEOs are ultimately responsible for the ramifications of governance programs – due to the high-level legal, data security, and strategic risks posed by poorly managed AI initiatives, CEOs should sign off on any compliance, governance, and ethical data programs.

Chief Privacy Officers (CPOs) or Chief Information Security Officers (CISOs) should lead program development – over 40% of privacy leaders have received AI governance responsibilities in the past year. While CEOs help set the strategic direction of governance programs, CPOs/CISOs are often best qualified to lead their tactical direction, development, and deployment across the entire organization.

Committees provide oversight and improve accountability – AI has controversial and meaningful implications for all departments. Cross-functional Committees include representatives from HR, Legal, Compliance, Finance, Marketing, IT, and Product who vet proposed governance strategies, policy updates, and key decisions where AI, strategy, technology, and operations intersect. 

Don’t let team size delay governance – at small organizations, technology leaders can spearhead governance programs and outsource cataloguing support. 

What types of companies need to build out formal AI governance and compliance programs?

Any organization using AI needs governance – while legal and regulatory requirements are just emerging, governance should be considered a non-negotiable common-sense business practice for companies that use–or work with vendors and partners who use–their data for AI development and deployment. 

Certain industries need more robust programs – AI governance incorporates data security and data privacy compliance alongside sound data governance. Requirements are more rigorous in sectors such as:

  • Critical infrastructure – technology infrastructure that supports energy, water, transportation, communications or utility management
  • Finance – financial services and financial infrastructure that supports regional, national, and international markets or governments
  • Healthcare – healthcare and medical services that support better detection, diagnosis, and treatments; better health outcomes; that improve the patient experience; and that support the cost containment of treatments, devices, or processes
  • Government – municipality, state/province, or national government infrastructure and companies that do business with governments

Sensitive and personal data necessitates strong governance – data that relates to individual personhood and human rights, and to confidential business practices/IP needs additional protection and oversight. This includes issues of bias, fairness and access regarding:

  • Social benefits (e.g., food assistance)
  • Education 
  • Healthcare decisionmaking
  • Employment (e.g., screening, recruiting)
  • Biometrics (e.g., facial recognition)
  • Housing
  • Insurance and lending (e.g., credit screening)
  • Children
  • Criminal justice risk assessments

Legal and Regulatory Frameworks 

What are the primary AI legal/compliance frameworks that companies should be aware of?

EU AI Act
StatusCodified in August 2024; will take effect in phases through the end of 2026
Who it applies toAnyone doing business in the EU or EEA
Key regulationsProduct safety regulations cover:
• Prerequisites
• Mandatory GDPR compliance as the foundation 
• Defined reason, aka “Legal basis to process personal
data”
• Developer-specific standards – requirements for
cataloging
data sources, data accuracy, and the integrity of those
data sources used to train a model, as well as ongoing
model testing 
• Deployer-specific standards – requirements for: 
Validating data provenance (including data permissions
and how well the data is tested with supporting
documentation)
Model behaves as designed/expected
Human review of model outputs at key milestones
Ongoing process improvement of model uses
Models are not used for prohibited purposes
NIST (National Institute of Safety and Technology) AI Risk Management Framework
StatusOriginally released in 2023, with Generative AI addition released in 2024 and future iterations to follow
Who it applies toNIST is a voluntary framework of AI risk management best practices developed by a subsection of the US Department of Commerce as a practical guide for private and public sector organizations. Jurisdictions such as Canada, the UK, and APAC (Japan, Korea, and Singapore) are incorporating its principles into future regulatory frameworks. 

Large organizations (and smaller organizations that work with larger organizations) can benefit from adopting the NIST AI RMF principles, which might inspire regulations in the future.   though all organizations will likely benefit from it in the future as its voluntary standards become regulations.
Key regulationsBest practices for a more integrated data foundation – the framework helps companies harmonize governance and risk management across security, privacy, and AI systems and jurisdictions – with the goal of achieving the trustworthiness of AI systems. 

Adaptable governance framework – NIST provides a principle- and operations-based set of best practices that explains how to establish AI governance and risk management. Small organizations can cherry pick principles to apply in the short-term, while larger organizations can adopt a more rigorous approach.

Many additional compliance requirements will be released over the next few years – new recommendations, formal joint multi-lateral statements, and legal standards are emerging at the state, national, and regional level:

  • Colorado – the first state to pass AI requirements. Their requirements mirror the EU AI Act and follow the development vs deployment structure.
  • California – is releasing its Automated Decisionmaking Technology Regulations in late 2025/ early 2026. These AI-equivalent regulations and expectations for governance are based on the California Privacy Rights Act.
  • New York City – implemented the first municipal AI law requiring employers to audit automated hiring tools for bias and notify candidates when AI is used in employment decisions, effective July 2023.
  • United Kingdom – introduced a principles-based approach to AI regulation in March 2024 that focuses on safety, transparency, and fairness across different AI risk levels, opting for existing regulators to oversee AI within their sectors rather than creating a central authority.
  • Utah, Brazil, China, India and Australia are additional jurisdictions to track.

Developing a Governance Program

What are baseline components of an AI compliance and governance strategy that all companies should have in place? 

Establish governance leadership – define a clear accountability structure to ensure that AI governance receives the necessary investments and resources, and doesn’t get deprioritized.

Align on program priorities – clarify what the organization does and does not want to achieve with AI. This will allow you to assess the success of your program in the future.

Set AI governance principles and policies – guidelines and rules should be designed to promote the secure, ethical and responsible development and use of AI that reflects the organization’s values and legal obligations. 

Map AI data sources – map and catalogue data sources and assess the quality and relevance of the information you’re feeding into your models. If you don’t know where the data is coming from, you have no way to mitigate bias or assure the quality of your outputs. 

Plan to deprecate as needed – have a process to identify whether the quality of a data source or piece of technology should be removed. 

Build a communication strategy that focuses on transparency – Marketing should help articulate what AI is used for, what controls are in place, and what your AI use means for different audiences (e.g., customers and investors). Avoid oversharing or using exaggerated language to describe your AI initiatives. 

For examples of how market leaders articulate their strategic frameworks for AI governance, see: Microsoft, IBM, Pfizer, Merck, Apple, HP, Cisco.

What documentation should companies prepare to comply with US state-based or international laws?

Development documentation:

Articulate the goals of AI product development – clearly state the purpose of the product, the objectives of the outputs and impacts of the models you are developing, and how well each product is able to achieve its goals. Adopt an AI assessment framework for this purpose.

Define principles, policies and standards for AI development and deployment – roles and purpose of a governance council, people and functions that can and can’t access and revise model training data and the models, model development life cycle, technical standards, vendor criteria, functional and ethical review and escalation paths for issues and incidents, and decision criteria for escalation paths and response. 

Catalogue models and their overseers – maintain records of each data source and model you’re developing, including who is responsible for it (e.g., a CPO, CTO or council), who has access to it, who is actively using it, and how you plan to utilize it in the future.

Output documentation: 

Assess your models from an effectiveness and ethical perspective – responsible AI developers/ deployers know if their models are achieving their goals and can explain how they evaluate AI performance. Assessments also allow organizations to determine whether model outputs address potential issues around bias and discrimination. You should be able to demonstrate that you have documented and considered your AI program’s impact on people, your company, and society as a whole. 

Note: in the future, in certain jurisdictions, you should expect to file AI governance-related submissions annually to a government agency.

What guidelines should you set on employee-initiated use of AI tools?

Provide basic education – all employees should have a high-level understanding of how AI technology works and what the main risks are (e.g., bias, confirmation bias, discrimination, legal issues).

Set boundaries around AI usage – specify the types of tasks for which it is acceptable (and not acceptable) to use AI. List the AI products (both internal and external) that employees should use for different types of projects. 

Foster a culture of skepticism – people are often tempted to believe the outputs of AI tools without questioning them. However, AI outputs aren’t always accurate, appropriate, or well-phrased. Encourage employees to think critically about the work they create with AI and remain vigilant about detecting and fact-checking AI hallucinations. 

Establish best practices around data – when employees use AI, they should do so in a responsible and useful way. Have policies and standards in place to maintain the integrity and accuracy of training data. Discourage employees from making up data and clarify which types of data inputs are acceptable. Supply employees with optimized prompts that will help the tools perform better. 

Employees shouldn’t represent AI outputs as their own – indicate whether work has been done entirely or partly by AI. Give credit where it’s due.

What additional training should HR receive on AI use?

HR professionals should be especially cautious about how they use AI – there is high regulatory scrutiny of AI applications that screen applicants, choose candidates for interviews, prioritize interview schedules, and evaluate candidates due to issues with bias and discrimination. 

Be able to prove humans are involved in any HR decisions – NYC already requires organizations to demonstrate that they’ve assessed the impact of using AI in employment decisions and can prove active human participation in the recruiting/ employment process. 

How should you evaluate potential AI vendors or vendor relationships from a governance perspective? 

Approach vendor governance the same way you approach vendor procurement risk management – many small vendors with exciting technology don’t yet have a governance strategy. Similar to traditional vendor risk management, you should engage legal counsel along with your security and privacy experts to vet potential partners who could compromise your ability to maintain or reach the level of AI governance you’re striving for. 

Prioritize vendors who can articulate their governance strategy – while no organizations may be fully compliant with standards such as NIST or the EU AI act, strong governance partners should have foundational compliance principles and a plan for how they approach relevant standards/ regulations. 

Data Security

How can companies institute data security practices to prevent leaks from AI-related use cases? 

Control access to data and LLMs – only specific roles should be able to access certain data sources, develop AI models, and test and iterate those models.

Leverage AI to support good security practices – incorporating “security by design” practices into AI model development, deployment, and use can help protect data hygiene and model resilience, and improve how data is stored, accessed, and transmitted within the organization. It can also identify unauthorized access, adversarial attacks, or ransomware attacks faster and flag potential issues when transmitting information between third parties. 

Use fail safes to secure unpatched vulnerabilities – it’s common for engineers to accidentally leave vulnerabilities exposed when optimizing parts of the model. Although these openings are small, they can provide access to large amounts of data and the model itself. Invest in layers of security to mitigate the damage that occurs when this happens.  

Have a strong incident management and backup plan – ransomware attacks are becoming increasingly common, especially for companies that are known to possess valuable or large amounts of data. Have well-documented incident management plans in place to protect against attacks–and to protect the data your model relies on, including the models themselves.

Are there any “safe” ways to use sensitive data with public AI tools? 

All publicly available LLMs represent safety risks – sensitive data usage can be better controlled with internal AI tools. Companies must weigh the risk of providing their data to licensed, publicly available tools against the effort, risk, and development time of building their own solutions. 

NIST is in the process of developing a security testing protocol for public models – standards for assessing the safety of AI tools will improve in the near future, but there is not yet a gold standard. NIST and large companies currently use red teaming strategies to pressure test the security of their models. 

Use Cases 

What makes an AI use case high risk vs. low risk?

Every use case has risk – while certain use cases and AI products are less risky than others, there is never zero risk when a model has the ability to create its own outputs. You can limit risk by:

  • Setting and enforcing company policies
  • Receiving legal counsel
  • Updating security and privacy protections
  • Continuously refining your data
  • Applying technology monitoring tools
  • Putting checkpoints in place to mitigate the impact of potential issues

External-facing use cases are generally riskier than internal ones – risks vary depending on the type of model: foundational, generative, LLM or agentic. For example, some regulators classify chatbots as low risk while others consider them high risk. Chatbots that have unsupervised interactions with customers can have serious consequences. In one case, an Air Canada chatbot gave an inaccurate quote for a bereavement fare. The customer complained, and Canadian Federal privacy regulators investigated, resulting in financial penalties, mandated changes in business practices, and public reputational damage to Air Canada. 

How should organizations address common use cases in their AI governance strategy? 

Seek legal counsel before testing out new use cases – risk levels are highly case-dependent. Due to contractual obligations to B2B clients, industry-specific data privacy standards, and the potential complications of using non-proprietary tools for company projects, it can be difficult to fully gauge risk without obtaining legal advice. 

Use CaseRisk LevelImportant Governance Measures To Institute
Ad-hoc employee queries of a public modelMedium
(however, it depends on the context)
Use boundaries – specify what the model can and cannot be used for, who can and cannot use it, and how extensively they can use it.

Be cautious with models trained on unknown data – you may not fully know what data a public model is trained on. Help employees understand how skeptical they should be when relying on different types of tools for answers.

Avoid using private data – feeding publicly and vetted available data into a public model is less risky than querying with personal data or proprietary business data, which may violate customer contracts or privacy policy commitments.
Using AI tools for product developmentDepends on the use caseConsider the purpose of the tool – AI tools designed to help you create your own IP are less likely to cause IP-related legal problems down the road. 

Note: conduct specific use case reviews with an IP lawyer before using a non-proprietary AI tool for product development.
Building a public model into your product for customer useMediumUnderstand possible licensing issues – involve Legal before rushing into unvetted solutions that could cause functionality, financial, or legal complications in the future. 

Be prepared to assume responsibility – public models are like shadow employees; although you don’t build them, you are responsible for anything they tell your customers because you give them access to your customers on behalf of the company.

Note: the same concerns around exercising caution with models trained on unknown data apply here.
Internal use of an AI tool with customer dataDepends on the nature of the dataTailor boundaries based on customer type – B2C customers (consumers) have different data protections than B2B customers. There are protections for consumers in laws and regulations and published privacy policy commitments. B2B businesses often have contract language that specifies what their data can and can’t be used for.

Overall

What are the most important things to get right?

AI governance is a joint effort, not a competition – every company is currently trying to decide what good AI governance looks like for them. Share best practices with peers who share your background and colleagues who have different perspectives. Everyone benefits by learning about important problems from new points of view. AI governance should be part of a larger enterprise risk management and governance strategy and framework.

Use ethics as a foundation – use NIST or your own principles as a starting point for how your company will think about ethics, responsibility, accountability, transparency, and integrity going forward. 

Know your data – you have the greatest control over the data that feeds and trains your model. Data is the lifeblood of AI models. If you doubt the accuracy and integrity of your data, address those concerns immediately. 

What are common pitfalls? 

Don’t let uncertainty drive inaction – governance policies change as companies and technologies evolve, and AI regulation is currently nascent and may evolve erratically. However, this isn’t an excuse to procrastinate developing a governance program; take this opportunity to be proactive and build a foundation for a more robust program in the future. 

Resist the temptation to brag about compliance – it is trendy for consumers to view AI negatively or with great skepticism. However, making indefensible claims or absolute statements (e.g., “the best AI solution”, “100% compliant”, etc.) is an easy way to attract regulator attention. Avoid using hyperbole or oversharing about your AI program, and beware of potential partners who do so.

Responses