A Board’s Expectations on AI Strategy and Governance
By: Dr. Erika Fille T. Legara, FICD
Fellow
Institute of Corporate Directors
As artificial intelligence (AI) becomes increasingly embedded in everyday business tools, boards of directors face a subtle but critical governance challenge. AI is no longer confined to bespoke systems or advanced analytics platforms. Today, even commonly used applications, such as productivity software, enterprise systems, and customer platforms, come with AI-enabled capabilities by default.
This reality requires boards to recalibrate their perspective. AI should no longer be viewed solely as a standalone technology initiative, but as a capability layer increasingly woven into core business processes and decision-making. Consequently, AI governance is no longer about overseeing a few “AI projects,” but about ensuring that AI-enabled decisions across the organization remain aligned with strategy, risk appetite, and ethical standards.
Much like financial stewardship, effective AI oversight demands clarity, accountability, and proportionality. Boards must therefore shape their expectations of management accordingly.
A well-governed organization treats AI as both a strategic enabler and a governance concern. Management's role is therefore twofold: to leverage AI in pursuit of enterprise objectives while acting as stewards of the risks that accompany automation, data-driven decisions, and algorithmic scale. Regardless of formal structure or title, those accountable for AI must be both strategist and steward, translating AI capabilities into business value and governance principles into operational discipline.
Outlined below are key areas where boards should focus their oversight.
1. A Clear Definition of What Constitutes an AI System
Boards should expect management to articulate what the organization considers an “AI system” for governance purposes. As AI capabilities are increasingly embedded in standard software, not all AI warrants the same level of oversight.
A practical approach distinguishes between embedded or low-risk AI features and material AI systems.
Embedded or low-risk AI features are typically bundled into commonly used tools and support routine tasks. Examples include AI-assisted spelling or document summarization in productivity software, automated meeting transcription, email prioritization, or basic chatbots that route customer inquiries without making binding decisions. These features generally enhance efficiency and user experience and can often be governed through existing IT, procurement, and data policies.
By contrast, material AI systems influence consequential decisions or outcomes. Examples include AI used in credit approval, pricing, fraud detection, hiring or performance evaluation, customer eligibility, underwriting, claims assessment, or predictive models that materially affect financial forecasts or risk exposure. These systems introduce financial, regulatory, ethical, or reputational risk and therefore require stronger controls, clearer accountability, and board-level visibility.
This distinction enables boards to focus governance efforts where they matter most, without impeding operational efficiency or innovation.
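To illustrate how such a distinction might be operationalized, the sketch below (in Python, purely for illustration) shows one way an organization could record AI systems in an inventory and assign each a governance tier. The field names, tiers, and classification rule are assumptions made for this example, not a prescribed framework.

```python
from dataclasses import dataclass
from enum import Enum

class AITier(Enum):
    EMBEDDED = "embedded_or_low_risk"   # governed through existing IT, procurement, and data policies
    MATERIAL = "material"               # requires stronger controls and board-level visibility

@dataclass
class AISystemRecord:
    name: str
    business_process: str
    influences_consequential_decisions: bool  # e.g., credit, pricing, hiring, claims
    regulatory_exposure: bool
    vendor_embedded: bool

def classify(record: AISystemRecord) -> AITier:
    """Assign a governance tier based on whether the system can affect
    consequential outcomes or carries regulatory exposure."""
    if record.influences_consequential_decisions or record.regulatory_exposure:
        return AITier.MATERIAL
    return AITier.EMBEDDED

# Example entries in a hypothetical AI inventory
inventory = [
    AISystemRecord("Meeting transcription", "Internal productivity", False, False, True),
    AISystemRecord("Credit scoring model", "Loan approval", True, True, False),
]

for rec in inventory:
    print(f"{rec.name}: {classify(rec).value}")
```

In practice the criteria would be richer than two boolean flags, but even a simple inventory of this kind makes the tiering decision explicit and auditable rather than implicit in procurement choices.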
2. AI Strategy as a Guide for Technology, Data, and Capability Investments
Boards should expect AI initiatives, whether embedded or stand-alone, to be explicitly linked to enterprise strategy, not only in intent but also in investment implications. A clear AI strategy serves as a critical guide for technology and capital allocation decisions, ensuring that investments are coherent rather than opportunistic.
Management should be able to explain how AI ambitions translate into concrete requirements for enterprise data platforms, system architecture, and enabling infrastructure. This includes clarity on data availability and quality, integration across core systems, architectural choices that allow AI systems to be scaled and governed, and decisions around cloud, on-premise, or hybrid environments.
Equally important, boards should look for alignment between AI strategy and capability development. This includes investments in talent, operating models, governance processes, and decision workflows necessary to use AI responsibly and effectively. AI value is realized not through tools alone, but through organizations prepared to absorb, trust, and act on AI-enabled insights.
Without a clear AI strategy, technology investments risk becoming fragmented, duplicative, or misaligned with business priorities. With it, boards gain confidence that data, platforms, and capabilities are being built deliberately to support both innovation and control.
3. Governance Structures, Accountability, and Decision Rights
Boards should expect clarity on how AI-related decisions are governed, escalated, and owned. This includes clear accountability for AI strategy, deployment, and ongoing oversight, regardless of formal titles or organizational design.
In practice, governance gaps often surface through unintended consequences. For example, an AI-enabled pricing or eligibility system may unintentionally disadvantage certain customer groups, triggering reputational or regulatory concerns without any explicit change in board-approved policy. Similarly, AI-supported hiring or performance evaluation tools may introduce bias into people decisions, despite being perceived as objective or neutral.
These outcomes rarely stem from malicious intent. More often, they reflect unclear decision rights, insufficient oversight, or assumptions that embedded AI tools fall outside formal governance.
Not all organizations require a dedicated AI governance council. For larger or more complex enterprises, a cross-functional council may help coordinate strategy, risk, ethics, and compliance. For smaller organizations, existing structures, such as audit, risk, or technology committees, may be sufficient.
What matters is not the structure itself, but the clarity of decision rights, escalation paths, and accountability. Boards should be able to answer a simple question: when an AI-enabled decision produces unintended consequences, who is accountable, and how does the issue reach the board?
4. Risk Management, Assurance, and Board Oversight
Boards should expect AI risks to be identified, assessed, and managed with the same rigor applied to other model-driven and judgment-intensive systems. This includes governance frameworks covering data quality, assumptions, human oversight, third-party dependencies, bias and fairness, cybersecurity, and regulatory compliance.
A useful reference point for many boards is how Expected Credit Loss (ECL) models are governed. While ECL models are not AI systems, boards are familiar with the discipline applied to their oversight: clear ownership, documented assumptions, model validation, stress testing, management overlays, and regular challenge by audit and risk committees.
Similarly, material AI systems should be subject to defined approval processes, ongoing monitoring, and independent review. Boards should expect visibility into how AI models are performing, what assumptions they rely on, how exceptions are handled, and how outcomes are tested against real-world results.
Internal audit, risk management, and compliance functions should be equipped to review AI controls and escalate concerns appropriately. As with ECL, complexity should not reduce scrutiny; rather, it should increase the level of governance and assurance applied.
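As a simple illustration of the ongoing monitoring described above, the sketch below (hypothetical, in Python) compares a model's predictions against realized outcomes and flags the model for escalation when its error exceeds a board-approved tolerance. The function names, figures, and threshold are assumptions for this example only.

```python
from __future__ import annotations
from statistics import mean

def backtest_error(predictions: list[float], actuals: list[float]) -> float:
    """Mean absolute error between model predictions and realized outcomes."""
    return mean(abs(p - a) for p, a in zip(predictions, actuals))

def review_model(name: str, predictions: list[float], actuals: list[float],
                 tolerance: float) -> str:
    """Compare observed error against an approved tolerance and flag the
    model for escalation when it drifts outside that bound."""
    error = backtest_error(predictions, actuals)
    if error > tolerance:
        return f"{name}: error {error:.3f} exceeds tolerance {tolerance} -> escalate to risk committee"
    return f"{name}: error {error:.3f} within tolerance -> routine monitoring"

# Example: quarterly review of a material AI system (figures are illustrative)
print(review_model("Credit default model",
                   predictions=[0.12, 0.30, 0.08, 0.55],
                   actuals=[0.10, 0.42, 0.05, 0.70],
                   tolerance=0.05))
```

The point is not the specific metric but the discipline: performance is tested against real-world results on a defined cadence, against thresholds someone owns, with a clear path to escalation when those thresholds are breached.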
As AI becomes a pervasive feature of modern organizations, governance maturity, not technological sophistication, will distinguish responsible adopters from risky ones. Management may deploy AI capabilities and committees may oversee specific risks, but ultimate accountability remains with the board of directors. In an environment where AI is increasingly everywhere, informed and proportionate oversight is no longer optional. It is a core duty of good governance.
