The ESG Elephants in the Boardroom

Sue Milton of CRUF



Artificial intelligence (AI) tools and environmental, social and governance (ESG) issues are two titans that companies must deal with. AI is not only a technological disruptor but also an ESG disruptor, and the complications it causes can become the elephants in the boardroom.

AI’s analytical ability can drive operational improvements, including faster and more extensive analysis of ESG data for sustainability reporting. However, it can also blind us to AI’s own appetite for the very resources that ESG requirements ask us to conserve.

Companies need to report on their Scope 1 (direct), Scope 2 (indirect) and Scope 3 (life cycle and supply chain) emissions. 

Research finds that training a single large AI model can emit around 284 tonnes of CO2, nearly five times the lifetime emissions of the average American car. Cooling AI’s data centres also requires large volumes of water, creating regional water stress.

The correlation is therefore clear: as a company’s use of AI grows, so does AI’s demand for energy and water.

AI models have been trained predominantly on male, US, Anglo-Saxon historical data and language, so by default they will contain certain biases. Combined with our tendency to believe the highly realistic content AI creates, this potentially leaves a vast pool of people vulnerable to exploitation. There is also the very real risk of AI socially engineering us into ever more exploitative or fraudulent actions via deepfakes that look and sound like real people and organisations.

We urgently need AI systems that can support multiple languages beyond English, in different cultural and geographical contexts. Companies must invest in people as much as they do in technology. Then staff, from the chairman to the doorman, can avoid exploitation by understanding how to get the best from AI without harming themselves or others. 

The indiscriminate use of AI tools, without due thought and consideration, has the potential to disrupt corporate governance: many tools produce false yet plausible output, blurring the boundary between truth and fiction, and many people seem unaware of this fundamental flaw.

  • AI results are imprecise “and, if not wholly wrong, always just a little bit wrong”1, leading to a new form of computing’s Garbage In Garbage Out (GIGO). 
  • Boards are not monitoring companies’ AI usage. A 2022 survey of Institute of Directors members found that around 80% of boards were unable to assess their company’s use of AI.
  • Boardrooms are at risk of contributing to AI misuse as much as bad actors. 

Boards will be tempted to use AI for creating the annual report and financials, including writing the narrative.2 Auditors and Audit Committees will be tempted to apply AI during assurance. 

AI tools may well be able to produce reports quickly and easily, but at what cost to accuracy? The sum of many things being ever so slightly wrong is a report that is, overall, quite a lot wrong – a real danger, given that businesses and investors make decisions based on the information companies report.

Shareholders, companies and regulators are not yet concerned enough about this. In the short term we seem to have enough expertise to check AI’s output, but as we rely more and more on AI, what happens when we no longer have the expertise to check it?

All four of these disruptions will influence every company, and every director needs to be aware that leaving an elephant unchecked in the boardroom may well lead to economic disruption.

1 I first heard Claire Bodanis of Falcon Windsor use the phrase “and, if not wholly wrong, always just a little bit wrong” when she presented at CRUF’s 6th March 2024 meeting. I plagiarise unashamedly as it is so true. More about what Claire is doing on the responsible use of AI in reporting can be found here.

2 Based on Claire Bodanis at the 6th March 2024 CRUF meeting.

We have a collective obligation to ensure companies and governments work together to encourage usable, safe, secure, people-friendly AI to obtain the greatest benefit with the least harm.

Sue Milton is a governance specialist, covering corporate and IT governance. She advises governments and organisations on how to increase corporate effectiveness and is currently involved in the UK Government’s governance, audit and digital reforms, focusing on our reliance on information technology, on company directors taking a more proactive and granular approach to risk and control management, and on the need to integrate and demonstrate ESG (environmental, social and governance) considerations within the strategy and culture of the organisation.

Disclaimer: The views expressed in the blog are those of the author and do not necessarily represent the views of all CRUF participants. To read more about the CRUF’s views on this and other topics, please visit the ‘Our Views’ section of this website.

