While the sudden boom in Artificial Intelligence (AI) promises to transform entire industries, it also presents new complexities for financial decision-makers involved in environmental, social and governance (ESG). What are the risks and opportunities presented by AI, and what can those working in ESG do to safeguard the future?
There’s no denying it: AI technology is booming and primed to transform not only the way we work but the very work that humans are needed to perform. And this change is happening now, with the market value of generative AI predicted to skyrocket from USD40 billion in 2020 to USD1.3 trillion by 2032, according to Bloomberg Intelligence (BI).
The impact of AI is so far-reaching it has been likened to the discovery of electricity: practically every industry will be affected. This has significant implications for AI’s impact on society, the ethical use of AI, and ESG.
ESG refers to nonfinancial data relating to environmental impact, social impact, and corporate governance. These factors exist to focus business attention on human well-being and how businesses impact the world, beyond their bottom line.
In general, investors seek to minimize ESG risks, or influence companies to lower their ESG risk, to reduce potential reputational damage and align with stakeholders’ priorities. Recently, the biggest ESG focus has been climate change, but given AI’s rapid growth and impact, it is essential that AI also becomes a core ESG consideration.
Balancing AI risks and benefits
No conversation about AI would be complete without some appreciation of the enormous benefits it presents. Previously time-consuming tasks can be automated or augmented, offering enormous value and increased competitiveness to companies that harness AI’s potential.
AI has already been proven to improve cancer diagnoses, assist scientific research, detect extreme weather events, tackle hazardous tasks, and has the potential to address complex societal challenges.
But these benefits come at a cost. For one, AI threatens to disrupt entire industries, leading to large-scale job losses. According to the OECD, 27% of jobs are in occupations at high risk of automation, including non-routine high-skilled roles in finance, medicine and law, among others. There is ongoing discussion about whether this will lead to large-scale unemployment and how quickly this job displacement may occur, but it’s a risk that companies will need to take into account.
While there is no shortage of predictions about the productivity benefits of AI, concerns about its environmental and societal impacts are widespread. For example, the data center industry, which powers AI applications, is estimated to contribute 2%–3% of global greenhouse gas emissions.
Further complicating the matter is that different AI applications have different carbon footprints depending on the computing power they require to run. And with the volume of stored data growing exponentially, leading to greater energy consumption and e-waste, the environmental impact of AI is predicted to grow.
Other concerns relate to individuals’ rights to non-discrimination, personal data protection and privacy. As AI requires access to large datasets to “learn”, and this data may include personal information, there are concerns about who has access to this data, how it is stored, and how it will be used.
Another issue is that AI systems can reflect human biases in their outputs. There have already been several high-profile instances of AI discriminating based on ethnicity and gender, such as in calculating insurance premiums, screening job applicants, and assessing the likelihood of criminals reoffending.
One reason for this is that, due to the complexity of AI, it can be difficult to understand how an AI system arrived at a result. This is often described as AI being a “black box”, in which the inner workings of the system are hidden from its users. Data and prompts are fed in and results come out, without certainty about how those results are determined. While researchers are working to make AI more explainable, this uncertainty presents significant risks.
As a result, companies leveraging AI, or investing in ones that do, need to adapt their approach to ESG, enabling them to take advantage of AI’s benefits while mitigating ESG risk.
ESG framework for AI
Currently, there are many different ESG approaches, and more than one can be used for a single investment product, leading to confusion, uncertainty, and the rise of “greenwashing”, in which disclosures intentionally or unintentionally mislead investors. These issues are likely to be made even more challenging by the unique characteristics of AI.
In response, institutions such as the Sustainability Accounting Standards Board (SASB), the Global Reporting Initiative (GRI), and the CFA Institute are working to form ethical standards that fairly represent and fully disclose an investment product’s ESG issues.
Safeguarding the future
ESG stakeholders have the potential to safeguard the future by advocating for responsible AI development. To achieve this, ESG frameworks need to be adapted to account for the unique risks and benefits presented by AI. This will require collaboration between AI developers, investors and ESG professionals, along with ongoing research and dialogue about AI’s ESG implications.
The power of AI holds incredible potential to create value and contribute towards human good when aligned with ESG principles.