Tommy Reddicks

What Boards Get Wrong About AI Risk

Boards are designed to think about the future of an organization. That orientation makes artificial intelligence both unavoidable and unsettling.

On one hand, AI presents clear ethical and operational hazards — from misuse and inaccuracies to data exposure and reputational risk. On the other hand, it represents a fundamental shift in how organizations think, decide, and structure themselves. For many boards, that combination creates discomfort. The future is uncertain, the pace is unfamiliar, and the implications are difficult to fully model.

This is why policy and guardrails matter early. But it is also why boards cannot afford to lead with fear alone. Alongside risk management, boards must retain the ability to imagine, to explore, and to ask how exponential AI growth could enhance — not just threaten — the organization they oversee.

The Risk Boards Focus on Most — and the Risk They Miss

Most boards over-focus on current risk — the exposures that exist right now.

These risks are real: data privacy, compliance, accuracy, misuse, and reputational exposure. They deserve attention. But in most healthy organizations, “now risk” should be handled primarily by the CEO and C-suite, with board oversight rather than board micromanagement.

What is often left largely unchecked is future risk.

Preparing the organization for what AI will mean three, five, or ten years from now is far more difficult — and far less tangible. Yet this is where boards add the most value. In many sectors, particularly education, boards spend disproportionate time managing today’s exposure while underinvesting in tomorrow’s preparedness.

The greatest AI risk for many organizations is not what AI might break today — it is what the organization will fail to become if leadership waits too long to adapt.

How AI Complicates Traditional Board Oversight

AI adds a new dimension to governance, and boards feel it almost immediately.

The first instinct is often to create “yet another committee.” In some cases, additional structure is warranted, especially given the strategic and operational reach of AI. But governance is not just about adding layers — it is about clarity.

AI will also simplify aspects of board work. Reporting, dashboards, and data review can become more dynamic and accessible. The challenge is not information availability; it is interpretation, accountability, and decision ownership.

Boards must decide where AI oversight lives, how it connects to strategy, and how it integrates with existing governance models — rather than treating it as an isolated technical domain.

Where Accountability Breaks Down

Without a clearly defined AI policy, accountability erodes quickly.

When roles are not explicit, organizations can drift into large-scale AI-driven decisions that materially affect structure, workforce, or mission — without appropriate board visibility or approval. This creates tension between executive authority and board oversight, often after the fact.

Effective governance requires clarity about:
  • Which AI decisions belong to management
  • Which require board involvement
  • How the board stays informed without slowing execution

Absent that clarity, AI initiatives can quietly reshape organizations in ways neither leadership nor boards fully intended.

The Questions Boards Should Be Asking — and Often Aren’t

Many boards ask whether AI is a risk. Fewer ask how it will reshape the organization over time.

Boards should be asking:
  • How will AI impact this organization in one year? Three years? Five years? A decade?
  • Who is leading and guiding our AI initiatives — and who is accountable?
  • How are we protecting our intellectual property in AI-enabled workflows?
  • What training does the board itself need to understand organizational AI use?
  • What policies guide AI adoption and use?
  • How is the board kept informed about AI practices across the organization — not just at the top?

These are governance questions, not technical ones.

Workforce Impact: The Risk Boards Avoid

AI will impact the workforce. The challenge is that no one can yet define the full scope of that impact.

Early indicators suggest that process-driven and data-oriented roles with limited nuance are most vulnerable. AI efficiencies will allow fewer people to produce more output. As AI systems evolve, additional roles will undoubtedly be affected.

Because the future is unclear, many organizations avoid confronting workforce implications directly. This avoidance is understandable — but dangerous.

Boards and leadership teams should be engaging in intentional, even uncomfortable, scenario planning. Not as a prediction exercise, but as preparation. Assuming that all roles will simply “evolve upward” risks unintentional blindness if reality unfolds differently.

It is possible to stay ahead of the curve. It requires honesty, foresight, and leadership.

What Good Governance Looks Like in an AI-Enabled Organization

Good AI governance is not reactive.

It is forward-thinking, policy-driven, risk-aware, and collaborative. It requires boards to remain engaged without becoming operational. It demands close partnership with executive leadership, not distance.

Most importantly, it treats AI not as a side issue, but as a strategic force shaping the organization’s future.

Boards that get this right will not eliminate risk — but they will ensure that AI becomes a tool for resilience rather than a source of surprise.


Tommy Reddicks is an executive coach and AI strategy advisor focused on leadership, workforce transformation, and education systems.

Tommy Reddicks
CEO, Paramount Schools of Excellence
Executive Coach & AI Strategy Advisor
Indianapolis, IN
