
AI and the potential implications for ESG - AlphaCert Event

Written by AlphaCert | Jul 30, 2023 10:41:00 PM

AlphaCert recently held the Ahead of the Curve Luncheon Event on 2 August 2023, with over 30 people in attendance. Stephen Huppert, an Independent AlphaCert Advisor with over 35 years’ experience in the investment fund industry, wraps up the key learnings from the event.

In opening the recent ‘AlphaCert: Ahead of the Curve Luncheon’ event, Phil Pietersen, CEO of AlphaCert, reminded us how quickly technology can change the world by noting that the iPhone is not even 20 years old, and the first version was both hyped and derided. In 2023, the hottest topic in the world of technology is AI, which is evolving at an incredible pace. This has very real implications for investment managers needing to consider ESG factors. 


Leveraging Large Language Models for Business Innovation

The first speaker was Malen Hurbuns, GM – Microsoft Engineering at ClearPoint. Two of the biggest talking points in technology at the moment are AI, especially ChatGPT, and ESG.

Malen delved into Large Language Models as part of the AI landscape and explained how this is the technology behind the best-known model, ChatGPT. Large Language Models such as ChatGPT are examples of a class of artificial intelligence techniques that involve creating models capable of generating new and original text-based data. That is, they are good at predicting what comes next.
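
To make “predicting what comes next” concrete, here is a minimal sketch of next-token generation using the Hugging Face transformers library and the small, openly available GPT-2 model; the tooling and prompt are my own illustration, not something shown at the event.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# "transformers" library (and a backend such as PyTorch) is installed.
# GPT-2 is a small, older model, but the principle is the same as ChatGPT:
# repeatedly extend the input with the tokens the model judges most likely.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("ESG reporting matters to investors because", max_new_tokens=25)
print(result[0]["generated_text"])
```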

He introduced a term that was new to me, “prompt engineering”, which ChatGPT tells me is “the process of carefully crafting or designing prompts for language models like GPT-3.5 to achieve specific desired outcomes”. There is as much skill in how you ask the question and ‘prompt’ the AI model as there is in the model itself.
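
As an illustration, here is a small sketch of prompt engineering using the OpenAI Python client; the client, model name and prompts are my assumptions rather than anything demonstrated at the event. The same underlying question, asked carelessly and then carefully, can produce very different answers.

```python
# A sketch of prompt engineering with the OpenAI Python client (openai >= 1.0);
# assumes OPENAI_API_KEY is set in the environment. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()

vague = "Tell me about BP and ESG."
engineered = (
    "You are an ESG analyst. In three short bullet points, list the ESG "
    "issues most material to BP's business, and flag any point you are "
    "not confident about."
)

for prompt in (vague, engineered):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```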

Before launching his demo, Malen talked about some of the risks associated with ChatGPT. Firstly, there are security concerns: the public version is a shared service, so anything you enter into it should be assumed to leave your control. Malen demonstrated how Microsoft has created an environment in its Azure cloud platform that allows you to deploy an instance of ChatGPT in a secure enterprise environment that is unique to your organisation. This approach allows businesses to control what information ChatGPT can access based on the security profile of the person asking the questions. Malen also showed how you can instruct ChatGPT to answer only if it has the information to do so, that is, not to hallucinate. Most of us will be familiar with the concept of hallucinations in ChatGPT, with some quite humorous but not useful examples in the media recently, like the invention of ‘court cases’ in the USA that never happened!
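
The “only answer if it has the information” behaviour can be approximated with a system prompt. Here is a minimal sketch against an Azure OpenAI deployment, with the endpoint, key, deployment name and retrieved context all placeholders of my own; the real demo may have been wired differently.

```python
# A sketch of a grounding instruction on Azure OpenAI, using the "openai"
# Python package. Endpoint, key and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

context = "..."  # text retrieved from your organisation's own documents

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical Azure deployment name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the context provided. If the context "
                "does not contain the answer, reply exactly: I don't know."
            ),
        },
        {
            "role": "user",
            "content": f"Context:\n{context}\n\n"
                       "Question: Which ESG issues are most material to the company?",
        },
    ],
)
print(response.choices[0].message.content)
```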

For his example, Malen used a collection of publicly available PDF documents related to BP (shareholder reports, third-party sustainability reports, etc.) and then asked ChatGPT questions such as:

  • “Which ESG issues are most material to the company’s business?”
  • “How have the company’s ESG scores changed over time?”
  • “How is the company addressing these issues?”

A nice feature of the demonstration was that ChatGPT provided citations to references with its answers.
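
The write-up does not describe the plumbing behind the demo, but the standard pattern for this kind of document Q&A with citations is retrieve-then-answer: index the PDFs, fetch the passages most relevant to a question, and ask the model to answer only from those passages and cite them. A rough sketch, with the folder name, chunk size and model choices all assumed:

```python
# A rough retrieve-then-answer sketch over a folder of PDFs, assuming the
# "pypdf", "numpy" and "openai" packages and an OPENAI_API_KEY in the
# environment. Folder name, chunk size and models are illustrative.
from pathlib import Path

import numpy as np
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

# 1. Extract and chunk the PDF text, remembering which file each chunk came from.
chunks, sources = [], []
for pdf in Path("bp_documents").glob("*.pdf"):  # hypothetical folder
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf).pages)
    for i in range(0, len(text), 1500):
        chunks.append(text[i : i + 1500])
        sources.append(pdf.name)

# 2. Embed every chunk once and keep the vectors in a matrix.
embedded = client.embeddings.create(model="text-embedding-3-small", input=chunks)
matrix = np.array([item.embedding for item in embedded.data])

# 3. Embed the question and pick the closest chunks (the vectors are unit
#    length, so a dot product is cosine similarity).
question = "Which ESG issues are most material to the company's business?"
q_vec = np.array(
    client.embeddings.create(model="text-embedding-3-small", input=[question])
    .data[0]
    .embedding
)
top = np.argsort(matrix @ q_vec)[-3:]

context = "\n\n".join(f"[{sources[i]}] {chunks[i]}" for i in top)

# 4. Ask the model to answer from the context and cite the bracketed sources.
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "Answer only from the context and cite the [source] "
                       "after each claim.",
        },
        {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

The retrieval step is also what makes the citations possible: the model is shown labelled passages and asked to echo the labels, rather than being trusted to remember sources on its own.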

There were plenty of questions from the floor, showing that many could see the potential of this approach, which helps mitigate some of the risks and concerns associated with the general open version of ChatGPT.


Generative Artificial Intelligence: Challenges, Limitations, and Opportunities

Following Malen’s presentation and demonstration, we heard from Tom Barraclough, one of the founders of The Brainbox Institute, a think tank and consultancy that works with governments and businesses on questions arising at the intersection of law, policy and technology.

Tom explained how Generative AI is changing our assumptions about how media is created. You no longer need a camera to create images, a microphone to create audio or a keyboard to create text. This also means that we cannot assume that images, audio or text are authentic or generated by humans.

He stressed the importance of AI Literacy, which, according to the EU proposal for an AI Act, refers to “skills, knowledge and understanding that allows providers, users and affected persons … to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”

Tom highlighted three key issues with AI that we are all grappling with, and gave some insightful examples of how these play out in real life: bias in the training data, accuracy and reliability of the output, and assurance and oversight of the systems.

Tom’s key message was that while we need to be aware of the risks and limitations of AI, this should not stop us from looking for ways to take advantage of these emerging technologies. He talked about issues with training data, including copyright and bias. He reminded us that Generative AI doesn’t think, in the same way that a submarine doesn’t swim, even though it demonstrates many of the characteristics of swimming.

Several times during his presentation, Tom explained that AI considerations are socio-technical. That is, they relate to the interaction between humans and technology. We can automate tasks through AI but we cannot delegate authority. He urged us to take a human rights or human-centric approach to Responsible AI.

Many doomsayers are concerned that AI means the end of civilisation as we know it. Tom is a technology optimist who focuses on making wise choices and understanding limitations. “Be distrustful of both hype and doom”, he concluded.

Managing investment data, with ESG data as a subset of it, is complex, and there is no standard operating model for that data today. With the emergence of AI, there is an opportunity to increase efficiency, accuracy and productivity in managing investment data. We will continue to create opportunities to learn more about how AI can impact investment fund operations and compliance.