Gartner predicted that by 2026 over 80% of enterprises would use generative AI, that 50% of knowledge workers would use AI daily, and that, by 2027, 60% of AI adoption efforts would shift from prototyping to operationalisation.
The UK’s corporate governance framework, built on the Companies Act 2006, was designed in an era where assets were tangible and risks linear.
In 2025, directors face a new reality: AI-driven decisions, algorithmic liabilities, and cyber and data breaches make technology governance as critical as financial oversight.
In this article, we analyse real-world cases of directors' liability for tech failures, map AI and data risks to legal duties under UK law, and set out a five-step action plan to future-proof your governance.
1. Directors’ legal duties in the digital age
Under Section 172 of the Companies Act 2006, directors must act to promote company success while considering:
- long-term consequences (e.g. AI bias lawsuits)
- stakeholder trust (e.g. data privacy breaches)
- high standards of conduct (e.g. transparency in algorithmic decisions).
Yet, a 2023 Financial Reporting Council (FRC) review found that the vast majority of FTSE 350 companies had no formal process to oversee AI risks.
Regulators such as the ICO now also treat data protection as a board-level duty, and the ICO has specifically warned directors that failing to implement basic cybersecurity measures after repeated warnings could amount to a breach of their duty of care.
In the digital age, directors’ duties demand proactive oversight of AI, data and cyber risks to safeguard the company’s technological integrity. Companies that embrace these shifts will not only mitigate liability but also promote sustainable growth and long-term resilience.
2. Three AI and data risks that all companies should be aware of
A. AI Bias and discrimination
AI tools used in hiring, lending or policing can embed racial, gender or disability biases, potentially putting companies in breach of the UK Equality Act 2010.
By 2023, class action lawsuits were already being filed in the US against companies whose AI recruitment tools allegedly engaged in discriminatory practices.
In the UK, compensation for breaches of the Equality Act is unlimited, and a board's failure to audit an algorithm's training data could expose the company to claims of indirect discrimination.
Therefore, before deploying AI, a board should question what data the model was trained on and how it is being tested for bias.
B. Data Privacy and GDPR fines for supervisory liability
In WM Morrison Supermarkets plc v Various Claimants (2020), the Supreme Court ultimately held that Morrisons was not vicariously liable for a rogue employee leaking payroll data, reversing the lower courts, which had found the company liable. Even so, years of group litigation over an insider leak show how costly weak oversight of internal data access controls can be.
Therefore, remember to appoint a Data Protection Officer (where required) and mandate quarterly, or at least annual, GDPR audits that cover internal data access controls.
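To make this concrete, here is a minimal Python sketch of the kind of internal data access control a GDPR audit might verify: flagging bulk or out-of-hours access to sensitive records such as payroll data. The log format, field names and thresholds are illustrative assumptions, not a real system's schema.

```python
# Minimal sketch of an internal data-access review, the kind of control
# a GDPR audit might check for. The log format and thresholds here are
# hypothetical assumptions, not a real system's schema.
from datetime import datetime

# Hypothetical access-log entries: (user, timestamp, records_accessed)
ACCESS_LOG = [
    ("alice", "2025-03-03T10:15:00", 12),
    ("bob",   "2025-03-03T23:40:00", 4500),   # out of hours, bulk export
    ("carol", "2025-03-04T09:05:00", 30),
]

BULK_THRESHOLD = 1000          # assumed policy limit per session
BUSINESS_HOURS = range(8, 19)  # assumed 08:00-18:59 working window

def flag_suspicious(log):
    """Return log entries that breach volume or out-of-hours policy."""
    flagged = []
    for user, ts, count in log:
        hour = datetime.fromisoformat(ts).hour
        if count > BULK_THRESHOLD or hour not in BUSINESS_HOURS:
            flagged.append((user, ts, count))
    return flagged

if __name__ == "__main__":
    for entry in flag_suspicious(ACCESS_LOG):
        print("Review required:", entry)
```

The point for the board is not the code itself but the question it answers: could a single employee quietly export an entire payroll database, and would anyone notice?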
C. Cybersecurity liabilities
In 2020, the ICO fined British Airways £20 million over a 2018 breach linked to inadequate security measures. The penalty underlined that failing to maintain adequate security and vulnerability management processes falls short of the duty of reasonable care.
The UK Corporate Governance Code 2024 and its accompanying guidance explicitly link cyber-risk to board accountability. Yet dedicated cyber committees are still not the norm on the boards of UK listed companies (NCSC, 2023), and one conclusion of the FRC's 2023 Review of Corporate Governance Reporting was that most companies needed to do more to demonstrate robust systems, governance and oversight.
As a first step in the right direction, adopt the NCSC's Cyber Governance Principles.
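For illustration, the sketch below shows the policy logic behind one basic vulnerability management check: scanning a software inventory for systems whose last patch date exceeds a policy window. Real programmes rely on dedicated scanners and CVE feeds; the inventory format and the 30-day window here are hypothetical assumptions.

```python
# Illustrative patch-age check over a hypothetical software inventory.
# Real vulnerability management uses dedicated scanners and CVE feeds;
# this sketch only shows the policy logic a board report might summarise.
from datetime import date

PATCH_WINDOW_DAYS = 30  # assumed internal policy, not a regulatory figure

# Hypothetical inventory: (system, last_patched)
INVENTORY = [
    ("payments-gateway", date(2025, 2, 20)),
    ("crm-portal",       date(2024, 11, 2)),   # stale: breaches the window
    ("hr-database",      date(2025, 3, 1)),
]

def overdue(inventory, today):
    """Return systems not patched within the policy window."""
    return [
        (name, (today - patched).days)
        for name, patched in inventory
        if (today - patched).days > PATCH_WINDOW_DAYS
    ]

if __name__ == "__main__":
    for name, age in overdue(INVENTORY, date(2025, 3, 10)):
        print(f"{name}: last patched {age} days ago, exceeds policy")
```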
3. Five Steps for listed companies to start future-proofing their governance
Step 1: Create a Technology & AI Sub-committee
- Composition: include at least one tech-literate NED or external AI ethicist.
- Mandate: review all high-risk AI deployments quarterly.
Step 2: Conduct an AI Governance Audit
- Map AI use cases (e.g., HR, credit scoring).
- Test for bias using the ICO's AI Auditing Framework; a minimal sketch of one such test follows this list.
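As one concrete example of what "testing for bias" can mean, the sketch below computes selection rates by applicant group and applies the widely used "four-fifths" rule of thumb from employment testing. The hiring data and the 0.8 threshold are illustrative assumptions, and the ICO's framework covers far more than this single metric.

```python
# Minimal bias test: selection-rate comparison with the four-fifths rule.
# The hiring outcomes below are made-up illustrative data; a real audit
# would also examine training data, features and error rates by group.
from collections import defaultdict

# Hypothetical outcomes: (applicant_group, was_shortlisted)
OUTCOMES = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Compute the shortlisting rate for each applicant group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        selected[group] += shortlisted
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

if __name__ == "__main__":
    rates = selection_rates(OUTCOMES)
    print("Selection rates:", rates)
    print("Adverse-impact flags:", four_fifths_check(rates))
```

A ratio below 0.8 does not prove discrimination, but it is exactly the kind of red flag a board should expect management to investigate and document before relying on the tool.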
Step 3: Embed Cyber-Risk in Board Reports
- Align with FCA guidelines on cyber-risk disclosures.
- Require the CISO to report directly to the board at least twice a year.
Step 4: Update D&O Insurance for Tech Risks
- Ensure coverage includes AI bias claims, GDPR-related claims and ransomware response costs (note that regulatory fines themselves may not be insurable under UK law).
- Renegotiate "wilful neglect" clauses that insurers could otherwise rely on to deny claims.
Step 5: Publish an AI & Data Governance Charter
- This will signal commitment to stakeholders: regulators, customers and investors. Remember that shareholders have in the past brought derivative actions against directors for breaches of duty arising from inadequate oversight of data privacy, as in the SolarWinds Corp litigation in the US.
- Some tech companies have also linked a percentage of executive bonuses to ethical AI metrics.
Conclusion
The UK's corporate governance framework is evolving rapidly, reshaped by AI, data-driven risks and expectations of transparent processes and decisions. Forward-thinking companies are already implementing simple steps: elevating privacy and cybersecurity to board-level discussions, embedding ethical AI frameworks where relevant and incorporating tech governance into their annual reports.
By acting now, companies can future-proof their operations and turn governance into a competitive advantage.