How AI Transparency Impacts Board Responsibilities

San Francisco’s Private Directors Association Annual Summit, held in November 2025, focused on “Board Governance: What You Need to Know” and delved deeply into board governance of AI. We both participated as panelists and identified a recurring theme around AI transparency and the exercise of board responsibilities.
As we all know, AI can create tremendous value for a company. The key is to harness that power while ensuring prudent oversight as board members fulfilling our fiduciary duties, including the duty of care. The risks of not doing so could be far-reaching, putting the entire company at risk and, potentially, board members too.
According to The Autonomy Institute’s 2025 report, “The Rise of AI as a Threat to the S&P 500,” “76% of S&P 500 companies added or expanded upon AI-related risks within their most recent 10-K filings.”
AI transparency and AI board oversight apply to every company that has an employee who uses an LLM, such as ChatGPT, to do something as common as embellishing an email or writing or updating a brochure. Even those simple acts could have far-reaching consequences. We believe it’s fair to say this impacts almost every company today.
On a positive note, doing your due diligence and providing effective board oversight helps the company you serve mitigate risks and avoid lawsuits, not only for the enterprise but also for yourselves as board directors. Below you will find several risks to avoid and several tactics to mitigate those risks. Ensuring AI transparency will help to ensure effective board oversight.
Why is AI Transparency Important?
AI transparency encompasses disclosure of the use and performance of a company’s AI capabilities, along with the systems that document and analyze how the company’s AI tools operate. Transparency enables directors to assess whether the company’s AI systems comply with the law, align with corporate values, or create unacceptable risks to stakeholders. Three key dimensions of AI transparency enable this oversight:
Traceability—the capacity to track AI outputs back to their data sources, training processes, and decision logic. This extends beyond simple audit logs to encompass comprehensive documentation of data provenance, model architecture, and the assumptions embedded in algorithm design. This enables reconstruction of decisions and auditing of compliance with transparency rules.
Explainability—the ability to articulate why an AI system reached a specific decision or recommendation. This often involves natural language summaries or visual explanations that bridge the gap between complex algorithmic processes and human comprehension.
Interpretability—ensuring that teams within the organization can challenge the logic underpinning AI decisions before they produce harm. This may require specialized tools for model inspection or visualization that enable technical teams to audit system behavior.
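For readers who want to see what traceability can look like in practice, below is a minimal illustrative sketch: a structured audit record that ties each AI output back to its model version, data sources, and any human reviewer. The field names and the log_decision helper are hypothetical choices of ours, not a prescribed standard.

```python
# Illustrative sketch only: a minimal traceability record for an AI decision.
# Field names and structure are hypothetical, not a prescribed standard.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry tying an AI output back to its inputs and model."""
    model_name: str            # which system produced the output
    model_version: str         # exact version, so the decision can be reproduced
    data_sources: list[str]    # provenance of the data behind the decision
    input_summary: str         # what was asked (redact confidential details)
    output_summary: str        # what the system decided or recommended
    human_reviewer: str | None = None   # human-in-the-loop sign-off, if any
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record to an append-only audit log (one JSON object per line)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a pricing recommendation so it can be traced and audited later.
log_decision(AIDecisionRecord(
    model_name="pricing-assistant",
    model_version="2.3.1",
    data_sources=["internal sales warehouse", "licensed market data feed"],
    input_summary="Q3 price recommendation for product line A",
    output_summary="Recommended 4% increase",
    human_reviewer="j.doe@example.com",
))
```

The point of such a record is simple: if a director, auditor, or regulator later asks why the system decided what it did, the company can reconstruct the decision rather than guess.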
How does AI Transparency impact board responsibilities?
As stated above, AI transparency helps companies and boards ensure their AI systems comply with the law and align with corporate values. It also enables boards and companies to identify and manage risks, including reputational risk, and to build trust with their stakeholders, both internally and externally.
Furthermore, AI regulations are increasing, often requiring greater AI transparency in terms of traceability, explainability, and interpretability. For example, the proposed federal Generative AI Copyright Disclosure Act would require developers to disclose the copyrighted works used in their AI training datasets.
Below are some of the potential risks/liabilities to boards and companies as they relate to AI Transparency:
- Governance
- AI/Data Regulations
- Other Legal Risks
- Reputational
We’ll dig deeper into each of these.
Governance
Boards have a duty of care, i.e., a duty to exercise diligence in business decisions: to be well informed, well prepared, and to take the amount of care a reasonable person would take in a similar circumstance.
This duty is especially important in companies where AI drives customer-facing decisions, hiring recommendations, pricing strategies, or product functionality. Here, AI transparency can become mission critical. Directors who fail to implement board-level AI monitoring systems—or who receive reports of algorithmic bias, accuracy problems, or regulatory concerns and yet take no meaningful action—may breach their duty of care.
A recent analogous example is the Meta case, where it was alleged that Meta’s board breached its duty of care by “ignoring warning signs with Cambridge Analytica” and failing to respond to red flags about data privacy violations before they escalated into a major scandal. Mark Zuckerberg and other directors agreed to pay $190 million to resolve the shareholder claims.
AI/Data Regulations
Data and Privacy
Regulation of individual data privacy is well known, especially in the EU, principally through the General Data Protection Regulation (GDPR). Although there is not yet a comparable comprehensive law in the US, data and privacy rights are well embedded in areas such as employment law, and companies must comply with strict regulations in certain industries, such as healthcare, finance, and e-commerce. The result is that data traceability is a must in many areas.
AI Transparency Disclosure Requirements
Current and pending regulations require deployers to disclose different aspects of their AI. Companies with European operations are governed by the EU AI Act, which mandates detailed transparency obligations for high-risk AI systems; these can include traceability, explainability, interpretability, and appropriate human-in-the-loop (HITL) oversight procedures.
Although the US doesn’t yet have federal AI regulations, legislative efforts are in progress, such as the proposed Artificial Intelligence Accountability and Personal Data Protection Act.
AI Anti-Discrimination
Explicit legislation on AI discrimination is starting to be implemented in the US. For example, Colorado’s AI Act protects consumers by requiring both developers and deployers of certain AI systems to use reasonable care to avoid discrimination and to provide clear disclosures to consumers; in other words, companies need to be able to explain how their AI systems reach decisions.
Other Legal Risks
Copyright infringement
This is a growing risk category and was considered in an earlier PDA blog article by one of the authors here. General-purpose LLMs such as ChatGPT, Claude, Gemini, Perplexity, and others get their data by scraping the Internet and other sources. There are recent lawsuits regarding copyright and database rights from large companies such as Thomson Reuters. Are the data sources in the LLMs referenced and sourced? Is the data the LLMs scraped copyrighted, or fair use?
This can happen easily and inadvertently. Think about an employee in a marketing department who asks a generally released, available-for-free LLM, for example ChatGPT, for a new slogan. Where did ChatGPT get its information? From scraping the Internet and other sources. You might end up with a slogan that is already copyrighted, and you would be infringing that copyright because you did not know the source of the slogan.
AI transparency needs to cover not only the data built into AI work products, but also the day-to-day activities where AI is used for internal job responsibilities and deliverables. In other words, every employee should understand the sources of all information: What is the source? Is it copyrighted data or fair-use data? The key is to cite all sources in the output and review the need for data licensing. This is also known as data provenance.
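To make data provenance concrete, here is a minimal illustrative sketch of how a team might track the sources behind an AI-assisted deliverable and flag anything whose rights status is unverified. The categories, example data, and the unverified_sources helper are hypothetical choices of ours, not an established tool.

```python
# Illustrative sketch: tracking the provenance of sources used in an
# AI-assisted deliverable. Categories and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Source:
    description: str     # e.g., "product photos"
    origin: str          # where the material came from
    license_status: str  # "licensed", "public_domain", "fair_use", or "unknown"

def unverified_sources(sources: list[Source]) -> list[Source]:
    """Return sources whose rights status has not been established."""
    allowed = {"licensed", "public_domain", "fair_use"}
    return [s for s in sources if s.license_status not in allowed]

# Example: a marketing brochure drafted with LLM assistance.
brochure_sources = [
    Source("product photos", "company media library", "licensed"),
    Source("slogan suggested by LLM", "general-purpose LLM output", "unknown"),
]

for s in unverified_sources(brochure_sources):
    print(f"Review before publishing: {s.description} ({s.origin})")
```

Even a lightweight register like this makes the marketing-slogan scenario above auditable: the LLM-suggested slogan would be flagged for legal review before it ever reaches a brochure.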
Private companies are also being sued by rights holders such as Getty, Disney, and Warner Brothers for copyright infringement, alleging that images the defendants had no license or right to use were used to train and power their products.
Data and Privacy
Think of this example: a new employee on the team decides to use their personal version of Claude, uploads the company’s strategic long-range plan, all of the customer data, and all of the sales data (yes, all confidential), and asks the LLM to come up with the most compelling, most exciting marketing messaging that differentiates the company above and beyond any of its competitors.
Think about that. They’ve just loaded all of your confidential and proprietary customer information into a generally available LLM. This information could show up later in someone else’s LLM request. Not only may a data breach have occurred in the example above, a privacy breach may have occurred as well.
Two things we’d like to highlight regarding data breaches: it’s been said that 95% of data breaches are caused by human error, and lawsuits related to data breaches have grown dramatically since 2020.
Bias and anti-discrimination
Outside of specific AI regulations, directors who approve AI deployments in certain high-risk areas - hiring, promotion, customer service, or credit decisions - may also face significant liability for bias and discrimination. Here again, AI transparency through traceability and explainability is critical to validating decisions. A recent lawsuit against Workday alleged bias in applicant screening and hiring. We all know that employment law is critically important for our companies, both private and public. Additional steps to mitigate this risk include fairness testing and ensuring that there is a human in the loop (HITL).
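To make fairness testing concrete, here is one minimal illustrative sketch of a common screen: comparing selection rates across groups against the four-fifths rule used in US employment-discrimination analysis. The data is invented, and a real audit would involve many more metrics, legal review, and a human in the loop.

```python
# Illustrative sketch of one basic fairness test: a "four-fifths rule"
# comparison of selection rates across groups. Data is hypothetical; a
# real audit would use many more metrics plus legal review.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates in a group selected (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(rates: dict[str, float]) -> list[str]:
    """Flag groups whose selection rate is below 80% of the highest rate."""
    highest = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * highest]

# Hypothetical outcomes from an AI hiring tool, by group.
outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # rate 0.25
}
rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
print("Selection rates:", rates)
print("Groups needing review:", four_fifths_check(rates))  # ['group_b']
```

A check like this does not prove or disprove discrimination, but running it routinely, and documenting the results for the board, is exactly the kind of proactive, traceable oversight this article describes.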
Misleading marketing claims
Claiming the application of AI in a company’s products and services can be an extremely attractive and differentiated value proposition. But it may be far from truthful. Does your product do what you claim? If there is a marketing statement that can’t be substantiated, there is a risk of being sued by customers for misleading statements. Consider an even worse scenario: you could be facing a class action suit.
And it’s not only customers who may sue. Federal agencies are starting to crack down on ‘AI washing.’ For example, the FTC recently fined a company $193,000 for claiming to “create a robot lawyer.”
Reputational
Reputational damage and loss of trust with stakeholders can sometimes be even more damaging to a company than its legal liabilities, especially if there is widespread media coverage. Publicly stating AI transparency and fairness principles from the beginning can help mitigate the fallout.
Steps Boards and Companies Can Take
The list below, while by no means exhaustive, gives you several potential actions to provide oversight and help mitigate the AI risks your company might be facing on its AI maturity journey.
1. Board Level AI-Governance Oversight. Designate which committee(s) holds primary responsibility for AI governance - which may be the audit committee for financial AI applications, the risk committee for enterprise-wide AI risks, or even a dedicated technology committee. Committee charters should explicitly incorporate AI oversight responsibilities, including reviewing AI strategy, assessing AI-related risks, and monitoring AI transparency frameworks.
2. AI Reporting. AI as a material risk has increased significantly in the past few years. Boards need to require management to report on AI deployments, transparency measures, AI audits, AI compliance, AI KPIs, and incidents requiring board attention. You want to know that the company is managing AI results, auditing the lineage and sources of its data, and abiding by its AI policy.
3. AI Risk Oversight. Boards should also require management to conduct comprehensive AI risk assessments at least annually. Boards need to discuss these risks and risk mitigation steps on a regular basis and document them in board minutes to show proactive oversight. The case In re McDonald’s Corp. Stockholder Derivative Litigation demonstrates that boards that receive red flags and fail to effectively monitor the company’s officers, for example through required reporting, may face liability.
As a side note, board directors have asked us, “Can I use an AI assistant to do board minute notetaking?” Based on many webinars we have attended from the Silicon Valley Directors’ Exchange (SVDX), an AI symposium at Stanford University, and other sources, the simple answer is no: output from AI is likely to be discoverable and may be erroneous due to hallucination risks.
4. AI Policy. Ensure the company has an AI policy that defines guardrails across the company, including acceptable AI use, transparency requirements, and related governance processes. These policies will govern, among other things, how employees are allowed to use LLMs and data sets. And they should require that, before any AI system is deployed in high-risk applications, the organization documents system purpose, data sources, decision logic, accuracy metrics, bias testing results, and human oversight mechanisms. As importantly, the board needs to ensure there are appropriate controls supporting effective implementation of these policies.
5. Establish a Corporate AI Governance Committee. This is critical to effective oversight and needs to be cross-functional covering all corporate functions: IT, HR, Finance, Engineering, Marketing, etc. This committee reviews AI use cases, assesses risks, approves high-risk deployments, and monitors ongoing performance.
6. Employ Scenario-based Governance. Realistic testing, along with adversarial testing (“red teaming”), is an additional way to battle-test and attack your AI solutions, demonstrating not just oversight but proactive oversight.
7. Formalized AI Education. Companies should establish an AI education program, not only for leadership and functional areas but for all employees, so that everyone understands and embraces the company’s guardrails and AI policies. Board members need AI education as well.
8. AI Checklists and Frameworks. Several frameworks are available, such as those within the EU AI Act and from the National Institute of Standards and Technology (NIST), that can help build AI governance processes and prepare companies for the future. Boards are advised to benchmark against these frameworks.
9. Use of Third-Party Vendors. Many companies incorporate third parties into their products and solutions, but those providers may in fact have issues with their own AI transparency. Be sure to work with your company’s lawyers to develop robust vendor agreements that, for example, include IP indemnification, providing some protection if you are sued.
Independent Assurance on Oversight
You may consider engaging an independent AI assurance provider to review the AI policies and controls the company has put in place. This will help ensure you and the company are diligent about complying with laws and regulations.
In addition, numerous governance and compliance software solutions are available.
Directors and Officers (D&O) Insurance
We’ve talked to several board members who believe D&O covers board members if they get sued. Yes and no.
Hopefully you never get sued, but if you do, it’s important to know the limits of D&O insurance, including the total coverage. If the board is sued, for example in a class action suit, and the insurance runs out, the directors and officers must retain their own lawyers to defend themselves.
The good news is that if you have provided proper oversight that is well documented and demonstrates good business judgment, you are in a strong position to defend yourself.
Summary
Serving as a board director is becoming more complex, with increasing responsibilities. Because of AI, greater board oversight is needed today than ever before.
It’s up to all board members to: 1) stay educated on AI and upcoming rules and regulations; 2) implement effective AI governance oversight, which includes ensuring AI transparency; and 3) be proactive in encouraging companies to develop AI strategies while also implementing risk management strategies.
ABOUT THE AUTHORS

Patricia Watkins is an experienced board member, Go-To-Market (GTM) Strategist, and Sales Growth Expert. She has held senior leadership roles in Sales, Marketing, Alliances, and Channels at Fortune companies including HP, Teradata, AT&T, and NCR, and at a number of start-ups in Silicon Valley. Patricia has led teams ranging from new teams starting at $0 to existing teams delivering in excess of $800 million in annual sales.
Patricia graduated with a BBA from The University of Texas and an MBA from Santa Clara University, both with honors. She has served on public, private, advisory, and non-profit boards. She is currently on six boards.

Jennifer McFarlane is an experienced board director, having served on seven boards of public companies, ESOPs, and VC-backed organizations, frequently as Chair of Audit and Compensation Committees. She brings over two decades of expertise as CFO in high-growth firms spanning the energy, manufacturing, and health technology industries, preceded by ten years as an investment banker. Jennifer is recognized as a 2025 NACD Directorship 100 Honoree and holds the NACD.DC designation.
