
Why ethical decision frameworks are critical for AI in investment management

Published 22 Mar 2024

Recent advances in artificial intelligence will have wide-ranging implications for the investment industry. Individuals and institutions have a part to play in ensuring AI is developed and deployed responsibly.
 

Since the launch of ChatGPT in November 2022, the surge in enthusiasm for AI has been accompanied by rising concerns over the risks it poses. Even OpenAI, the developer of the wildly popular generative AI chatbot, has struggled to reconcile the ethical debate over the safety of AI with the commercialization of the technology.

Policymakers have responded with measures to mitigate the potential technical, social, ethical and security risks, including the convening of an AI Safety Summit in the UK in November 2023 and an executive order on AI from US President Joe Biden. Yet comprehensive regulation remains a fair way off. The European Union is among the farthest along: it is close to finalizing an AI Act that will come into effect in stages from 2026 and encompasses wide-ranging measures to protect citizens.

A CFA Institute study in 2022 highlighted similar concerns in the investment industry. While the potential of advanced machine learning is exciting for professional investors, there are ethical considerations around how new tools will source, analyze and act on data – and what their use could mean for the financial markets. (See Figure 1.)

Figure 1: Major Risks and Governance Issues Faced by Investment Firms Adopting AI and/or Big Data

"What risks or governance issues, if any, does your organization face in adopting AI and/or big data?"
- Transparency of algorithms: 64%
- Protection of intellectual property rights: 63%
- Operational risks: 49%
- Compliance risks: 38%
- Protection of data/data privacy: 36%
- Reliance on third-party vendors: 11%
- We do not face any risks in AI and big data adoption: 1%

Source: Survey of 976 institutional investors conducted by Coalition Greenwich for the 2022 CFA Institute Investor Trust Study.

Personal responsibility

Ahead of the enactment of binding AI rules for the industry, data scientists and investment professionals can take it upon themselves to act ethically.

“People shouldn't wait to be told what to do. They should use a common-sense approach and try and make sure the things they do are for good,” said Sam Livingstone, Head of Data Science, Jupiter Asset Management.

But rather than leaving employees to make these decisions on an ad-hoc basis, it is incumbent on firms to provide clear guidance in comprehensive and relevant AI governance frameworks.

Appropriate rules will vary from industry to industry. “If you’re trying to predict myeloma in children, people are probably going to allow you more leeway in your approach because what you are trying to solve for is so important,” said Livingstone. There could, for example, be a case for taking a more relaxed stance on data privacy.

The CFA Institute last year laid out a decision framework tailored to the development of responsible AI applications in investment management.

The framework covers three distinct steps: obtaining input data; building, training and evaluating the model; and deploying the model and monitoring it. (See Figure 3.)
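To make these steps concrete, here is a minimal, hypothetical sketch of how a firm might wire governance checkpoints into such a workflow. The class, the hook names, and the placeholder training step are illustrative assumptions, not part of the CFA Institute framework itself.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# A governance check inspects a workflow artifact and raises if it fails.
Check = Callable[[Any], None]

@dataclass
class ModelWorkflow:
    """Hypothetical three-step AI workflow with ethics checkpoints."""
    data_checks: list[Check] = field(default_factory=list)
    model_checks: list[Check] = field(default_factory=list)
    monitoring_checks: list[Check] = field(default_factory=list)

    def obtain_input_data(self, raw: Any) -> Any:
        for check in self.data_checks:        # e.g., privacy, bias, MNPI screens
            check(raw)
        return raw

    def build_and_evaluate(self, data: Any) -> Any:
        model = {"trained_on": data}          # stand-in for real training code
        for check in self.model_checks:       # e.g., accuracy, interpretability
            check(model)
        return model

    def deploy_and_monitor(self, model: Any) -> None:
        for check in self.monitoring_checks:  # e.g., drift, mandate adherence
            check(model)
```

The point of a structure like this is that every model reaching production has passed the same documented checkpoints, which is what the accountability questions in Figure 3 call for.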

Firms also need to formulate and adopt a broader framework to manage the risks and opportunities brought about by AI, encompassing organizational culture, risk management, skills, and competency. This begins with leadership establishing a clear vision for the development and use of AI in the firm’s business model, and entails establishing accountability within a risk management framework and providing relevant business units with sufficient knowledge, skills and capabilities in AI and data science.

CFA Institute’s research also identified organization-level requisites for AI to be successfully used in a variety of applications, including investment analysis, portfolio management, risk management, trading, automated advice, and client onboarding.

“Instilling an ethical decision framework in AI-driven investment processes is critical to ensure applications serve the best interests of clients. Given the complexity of AI projects, senior leadership must establish a strategic vision and ethical culture for AI development within the organization,” said Rhodri Preece, Senior Head of Research at CFA Institute, at the time of the framework’s launch.

“While the use of AI in investment management is still relatively formative, it is appropriate that we examine the ethical aspects of AI implementation to guide future developments responsibly,” Preece added.  

Fiduciary safeguards

Individual CFA charterholders are also obliged to ensure their use of AI in the investment process complies with the CFA Institute Code of Ethics and Standards of Professional Conduct. “You're constantly studying and reaffirming those,” said Julia Bonafede, Co-Founder of Rosetta Analytics.

The relevant principles and provisions include individual professionalism; integrity of the capital markets; duties to clients and to employers; investment analysis, recommendations, and actions; and conflicts of interest.

For instance, under Standard II: Integrity of Capital Markets, investment professionals must ensure that the data sourced for and processed by AI tools does not come from material non-public information. Standard II also calls for the periodic testing of AI models to ensure that trading decisions do not lead to market distortions or other outcomes that could be construed as manipulative. And Standard III: Duties to Clients requires respecting the confidentiality of clients’ data as well as disclosing how AI is incorporated into the investment process.

In addition to fulfilling professional responsibility, adhering to these standards can help establish trust among clients in both the technology and the overall efficacy of the investment approach. The alternative is “not making it clear what you stand for,” said Livingstone.

A hard path to tread

Of course, adhering to the principles of data integrity, accuracy, transparency, interpretability, and accountability will be challenging in this fast-evolving area. Any number of subtle missteps could compromise data integrity and accuracy, which could lead to financial losses and magnified risks.

Interpretability is an especially hot topic. “There are many clients who can’t invest without a high level of explainability,” said Richard Fernand, CFA, Head, Certificate Management at CFA Institute. 
The trouble is, “it’s not really possible to pinpoint the decision process for a neural network,” said Bonafede.

Although Shapley values and other linear attribution methods can be used to gauge the contributions of various factors to a predicted outcome, “they’re not perfect frameworks,” added Bonafede. (See Figure 2.) In attempting to force an AI model into a more linear and explainable framework, “you potentially water down its success.”

Figure 2: Hypothetical Shapley Values for a Given Stock

[Bar chart of feature value contributions (roughly -0.010 to 0.010) for fourteen features: sentiment, momentum, volatility, ESG score, asset turnover, change in accruals, EPS growth rate, dividend growth rate, debt/equity ratio, free cash flow, interest rate, book/price ratio, market capitalization, and market beta. Average predicted excess return (international equities) = 0.05; actual predicted excess return (stock XYZ) = 0.065; difference = 0.015.]

Notes: The Shapley value is the average marginal contribution of the given feature to the prediction (averaged across various combinations of features). In this simplified illustration, the feature with the largest Shapley value is the book-to-price ratio (0.004), and the largest negative contributor on average is asset turnover (Shapley value of -0.0015). The sum of the Shapley values equates to 0.015, the difference between the actual prediction (6.5%) and the average prediction (5%).

Source: CFA Institute, Ethics and Artificial Intelligence in Investment Management: A Framework for Professionals.
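To show the mechanics behind Figure 2, below is a minimal Python sketch that computes exact Shapley values for a toy linear return model by enumerating feature subsets. The three-factor model, its weights, and the stock and baseline feature values are hypothetical illustrations, not data from the CFA Institute report.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for a single prediction.

    predict  -- scoring function taking a {feature: value} dict
    x        -- feature values for the stock being explained
    baseline -- average feature values (the background sample)
    """
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        value = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Features in the subset keep the stock's values;
                # everything else falls back to the baseline.
                with_f = {g: x[g] if g in subset or g == f else baseline[g]
                          for g in features}
                without_f = {g: x[g] if g in subset else baseline[g]
                             for g in features}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                value += weight * (predict(with_f) - predict(without_f))
        phi[f] = value
    return phi

# Hypothetical three-factor model of excess return (illustration only).
weights = {"momentum": 0.02, "book_to_price": 0.03, "asset_turnover": -0.01}
predict = lambda feats: sum(w * feats[f] for f, w in weights.items())

stock = {"momentum": 0.6, "book_to_price": 0.5, "asset_turnover": 0.4}
average = {"momentum": 0.5, "book_to_price": 0.4, "asset_turnover": 0.3}

phi = shapley_values(predict, stock, average)
print(phi)
# The attributions sum exactly to the prediction gap, mirroring the
# 0.015 difference between stock XYZ and the average in Figure 2.
assert abs(sum(phi.values()) - (predict(stock) - predict(average))) < 1e-12
```

Exact enumeration grows exponentially with the number of features (the fourteen features in Figure 2 would require thousands of subset evaluations per feature), which is why production systems typically rely on sampling-based approximations such as those in the open-source shap library.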

Keeping finance safe

The dangers of failing to responsibly develop and deploy AI extend well beyond constituting a liability for individual firms. Irresponsible use could even magnify market volatility and instability, warned Gary Gensler, chair of the US Securities and Exchange Commission.

Gensler has called on regulators to prevent the accelerating adoption of AI from jeopardizing financial stability, and has proposed rules to govern its use. Much of it comes down to protecting consumers, which, in turn, depends largely on the objectives assigned to a given AI system, according to Gensler.

For instance, “When AI plays you in chess, do you not think they have an intention to beat you?” he asked. That intention, rather, was assigned by a human. Cognizant of this, Gensler’s proposed rules include stipulating that firms’ use of AI to boost their profit should not come at the expense of their clients.

Of course, acting against a client’s interests runs counter to the fiduciary duty of a money manager – as enshrined in the CFA Institute Code of Ethics and Standards of Professional Conduct. Given the growing trend for regulators to adopt a “same activity, same risk” approach to technologies disrupting the financial sector, a similar stipulation is likely to be incorporated in future laws governing the development and use of AI within the investment industry.

By proactively embracing credible and effective AI governance, firms can not only ensure that they are ready for future regulation but, by building trust and curtailing risk, also achieve a considerable competitive advantage.

But it will not be enough to rely on frameworks alone. Because the technology is evolving so quickly, there will inevitably be gaps between new developments and rules to account for them. It is therefore imperative that “every individual should be responsible for their own behavior,” said Livingstone. “And not say ‘I can do what I like until someone tells me I can’t.’ That’s not the sort of world anyone wants to be in.”

Figure 3: CFA Institute Ethical Decision Framework for AI

The framework poses questions across four ethical considerations – data integrity, accuracy, transparency and interpretability, and accountability – at each step of the workflow.

Step 1: Obtain input data
- Data integrity: What is the source of the data? What sampling methods are used, and how are data cleansed? Are data labels fair and accurate (if using supervised ML)? Is the dataset representative? How are potential biases accounted for or corrected? Do data sourcing initiatives respect data privacy laws? Is the confidentiality of client data protected? Does the input data contain any potentially material, non-public information?
- Accuracy: Check the validity and veracity of the data. Are the data relevant to the problem specified? Do the data permit fair and accurate inferences?
- Transparency and interpretability: Are descriptions of the input data retained? How are data described and referenced in the investment process or in reporting to clients and to supervisors?
- Accountability: How are data sourcing initiatives governed? How are input data stored, and are they securely maintained? Are roles and responsibilities clear?

Step 2: Build, train, and evaluate model
- Data integrity: Is there sufficient sampling history to effectively train the model? Could the model inappropriately weight certain features or groups?
- Accuracy: Does the model perform as intended? Will the model deliver accurate and suitable outcomes for the client? Does the desired level of accuracy come at the cost of excessive model complexity? Refine and iterate model parameters as appropriate.
- Transparency and interpretability: Are the model features and their contribution to the outcome interpretable? Can the model features be adequately communicated to clients and supervisors?
- Accountability: Is there a robust evaluation and approval process (such as via a committee) before models enter a live environment? How are potential conflicts of interest evaluated? How are potential adverse client outcomes or potential market distortions addressed?

Step 3: Deploy model and monitor
- Data integrity: Conduct periodic reviews of the input data to monitor for the emergence of biases. Does the dataset remain sufficiently representative?
- Accuracy: Does the model deliver good out-of-sample performance, with results that are accurate, robust and generalizable? Conduct regular testing and reviews to understand if there are any changes to model performance over time.
- Transparency and interpretability: Does the process by which the AI tool learns from the data evolve over time? Does the contribution of features to the outcome change over time? If so, how are such issues explained and communicated to clients?
- Accountability: Conduct periodic testing to ensure the model stays true to the client mandate, and check for style drift where appropriate. Where models deviate from their original parameters, what controls are in place to negate adverse client outcomes? Is model performance disclosed appropriately in client reporting?

Source: CFA Institute, Ethics and Artificial Intelligence in Investment Management: A Framework for Professionals.
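As one concrete way to act on the “deploy model and monitor” step, a firm might track whether live input data still resembles the training data using a population stability index (PSI). The sketch below, including the simulated data and the rule-of-thumb thresholds, is a hypothetical illustration rather than part of the framework.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a feature's training (expected) and live (actual) values.

    Rule of thumb: < 0.1 stable, 0.1-0.25 worth monitoring, > 0.25 investigate.
    """
    # Bin edges come from the training distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # capture out-of-range live values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # feature values the model was trained on
live = rng.normal(0.3, 1.2, 2_000)       # hypothetical shifted live data
print(f"PSI = {population_stability_index(training, live):.3f}")
```

A PSI drifting above roughly 0.25 would be a natural trigger for the periodic review the framework calls for, asking whether the dataset remains sufficiently representative.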
