Building Trust in AI: Why Assurance is Essential
In recent years, the impact of artificial intelligence (AI), from streamlining corporate operations to delivering new products to the public, has dominated business headlines. Leading technology companies have invested billions of dollars in AI data centers to support the increasing demand for AI – and that demand shows no signs of slowing down.
Public companies across industry sectors are also investing heavily in this technology, embedding AI into day-to-day operations, business processes, and customer experiences. But for these investments to succeed, one element is essential: trust.
From investors and regulators to employees and everyday consumers, stakeholders want to know that AI-enabled systems and outputs are reliable, transparent, secure, and used responsibly. This is where auditors come in.
In my role at the Center for Audit Quality (CAQ), I work closely with senior audit leaders to understand the impact of AI and other advanced technologies on public companies, our profession, and the capital markets.
Our new publication, The Role of the Auditor in AI: Present and Future, explores how companies are using AI, the challenges giving rise to a lack of trust in the technology, and why auditors are uniquely positioned to enhance trust and confidence in AI. Read on for my key takeaways on how auditors’ tried-and-true approach and evolving skillset support this ever-changing landscape.
How Public Companies Use AI Today
Companies across industries are embedding AI in various ways, with the goal of enhancing the efficiency and effectiveness of operations and improving employee and customer experiences. In the CAQ’s Spring 2025 Audit Partner Pulse Survey, audit partners pointed to the top five areas of AI use:
- Process automation (59%)
- Customer experience, service, and support (48%)
- Predictive analysis (28%)
- Targeted marketing (26%)
- Cybersecurity (22%)
The level of reliance on AI in these areas can vary. In critical processes, such as financial reporting, human oversight remains essential to ensure accuracy and reliability. In other processes, AI may be able to operate with limited human involvement, automating certain activities and freeing employees for higher-value work. In every case, trust is a critical factor in supporting reliance on AI.
Transparency and Disclosures Around AI
As AI adoption expands, stakeholders are asking for greater transparency. Companies are beginning to disclose information about their AI strategies, risks, and governance, both in regulatory filings and in voluntary reports.
Many companies have begun including AI-related information in their Form 10-K filings. Our analysis of Form 10-Ks filed with the SEC in 2024 found that 72% of S&P 500 companies discussed AI, often highlighting risks in Item 1A (Risk Factors) or detailing investments and strategy in Item 1 (Business). These disclosures show that AI is shaping both risks and opportunities.
Beyond regulatory filings, many companies have published their AI principles and governance frameworks on their websites. These often emphasize values like accountability, transparency, reliability, and privacy. Some companies at the leading edge are also releasing standalone reports on their approach to responsible AI. Voluntary adoption of frameworks, such as the NIST AI Risk Management Framework, is also common. These steps signal that companies are approaching AI with rigor and responsibility.
As the frameworks and regulations around AI continue to evolve, the CAQ will continue to monitor and share resources as they become available.
Challenges in Building Trust in AI
While AI has the potential to transform businesses, it presents new challenges that can undermine stakeholder trust if not managed effectively:
- Explainability and Interpretability: AI can be a “black box,” making it difficult for users to understand how or why a system produced a given output. Stakeholders may be concerned with how companies evaluate the appropriateness of outputs from AI tools.
- Reliability and Accuracy: AI systems, particularly generative AI, can hallucinate, producing false but convincing outputs. Errors can shake confidence when stakeholders expect consistency.
- Data Privacy and Cybersecurity: AI tools may inadvertently expose sensitive data or create new vulnerabilities to cyberattacks. Surveys show that investors are highly concerned about privacy and security risks associated with AI use.
- Responsible and Ethical Use: Stakeholders expect companies to use AI fairly, ethically, and in compliance with regulations. Issues like bias, discrimination, and lack of transparency can create reputational and regulatory risks.
Companies are at different stages in addressing these issues, but one theme is clear: stakeholders are seeking greater confidence that companies are using AI responsibly and effectively managing the related risks.
The Auditor’s Expanding Role in AI
A report from EY found that respondents across the globe have low levels of trust that companies will manage AI with their best interests in mind. Independent assurance provided by trusted gatekeepers – public company auditors – can help bridge trust gaps and provide confidence that companies are managing AI responsibly. Public company auditors bring independence, rigorous professional standards, and deep expertise in evaluating systems and controls, setting them apart from other assurance providers.
AI assurance services directly address a company’s use of AI, such as its design and implementation of AI governance policies and procedures, the controls it has put in place to address relevant risks in critical business processes and supporting technologies, or its compliance with AI-related regulations. These services can enhance stakeholder trust and confidence in AI by providing insight into how the company manages its use of AI.
Firms like KPMG and PwC have recently announced new AI assurance services to help companies meet the need for greater trust in their use of AI. This is the Audit Effect in action: auditors applying their unique skillset to enhance confidence in new forms of company-reported information. And while AI assurance is in its early stages and continuing to evolve, stakeholders can look to public company auditors to bring increased trust and transparency to the use of AI.
The CAQ will continue to monitor AI’s impact on the public company auditing profession and U.S. capital markets. For the latest insights, explore more resources and follow the CAQ on LinkedIn.