AI in Asset Management: Ride it or Resist it

Revant Nayar, CTO of FMI Tech, comments on the current industry discussions around the uptake of artificial intelligence in the asset management sector.

Over the last couple of weeks, I have heard the word AI so often that it rings in my ears in moments of drowsiness. Among the slew of conferences and events in Miami from early January to mid-February (Equities Leaders Summit, IConnections, Trade Tech FX and AYU events), AI was among the most frequently thrown-around buzzwords. After leading several round-table discussions, sitting on panels and taking part in informal conversations about AI, I can claim I have never seen this level of interest in artificial intelligence in the asset management industry. Amidst the excitement, there is also a fair degree of trepidation and skepticism about AI adoption across funds, asset managers, service providers and allocators. AI is not new. However, while AI was traditionally the domain of quantitative hedge funds, the difference this year is that a lot of discretionary funds and portfolio managers have gained familiarity with it through the advent of widely available large language models like ChatGPT.

Among the first questions I was posed in every panel discussion was the difference between AI, LLMs and machine learning. I explained that LLMs are a special case of ‘AI’ models (essentially, neural networks), which are in turn a special case of ‘machine learning’ models (practically anything that can capture nonlinearity in a dataset). Another common initial question was when we as asset managers should use AI in the first place. I always drew a distinction between using off-the-shelf LLMs like ChatGPT and building AI models in house. The former should be used by all asset managers: they can automate processes, write code, summarize datasets, and even help one stay on top of the existing literature in finance, mathematics or data science.

Building custom AI or neural-network models in house, by contrast, must be approached carefully: they should be used only when linear tools are demonstrably insufficient. I then discussed how, for high signal-to-noise problems in risk management (such as hedging and volatility surface calibration), AI may well have important use cases. Predicting volatility or calibrating a local volatility model to an options surface are problems ripe for AI, since we know they involve nonlinearity and have high signal-to-noise ratios. Similarly, the data suggests that the relationship between stocks and their hedging instruments is often nonlinear with a well-defined functional form, which makes a neural network suitable for learning its dynamics. Other use cases participants brought up include order matching and execution for highly liquid securities (where there is a large amount of data available). Another firm uses AI routinely for predicting macroeconomic indicators such as inflation and unemployment. For alpha generation, however, AI suffers from the pitfalls of overfitting, regime dependence and correlation to linear strategies. As per a Bloomberg report, over 98 percent of funds running AI-driven alpha strategies have failed. Allocators have routinely complained that AI-driven funds simply end up doing trend following or mean reversion, and that the lack of transparency makes it difficult to analyze the cause of drawdowns.
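To make that "linear tools first" test concrete, the following is a minimal Python sketch (using scikit-learn on synthetic data) of how one might check whether a neural network is even warranted for a hedging relationship: fit a linear baseline and a small neural network to the same data and compare them out of sample. The data-generating process, parameters and model sizes here are illustrative assumptions, not a description of any firm's actual models.

    # Illustrative only: synthetic data and hypothetical parameters.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical hedging relationship: the hedge instrument's return is a
    # nonlinear (but high signal-to-noise) function of the underlying's return.
    x = rng.normal(0.0, 0.02, size=(5000, 1))          # underlying returns
    y = 0.8 * x[:, 0] + 15.0 * x[:, 0] ** 2 + rng.normal(0.0, 0.001, size=5000)

    x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)

    linear = LinearRegression().fit(x_tr, y_tr)
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(x_tr, y_tr)

    def oos_rmse(model):
        # Out-of-sample root-mean-squared error on the held-out 30 percent.
        return mean_squared_error(y_te, model.predict(x_te)) ** 0.5

    print(f"linear baseline RMSE:  {oos_rmse(linear):.5f}")
    print(f"small neural net RMSE: {oos_rmse(mlp):.5f}")
    # Only if the nonlinear model beats the linear baseline out of sample by a
    # clear margin is the added complexity (and overfitting risk) justified.

If the linear baseline holds its own out of sample, the simpler model wins by default; for non-stationary financial data the same comparison would normally be run on walk-forward splits rather than a single random split.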

I was asked whether portfolio managers are doomed by the advent of AI. It was an interesting question. While it is true that entry-level quant and engineering jobs at quantitative hedge funds have plummeted over the last year (I can say from experience that GPT can do what they do, faster and better), I expressed the opinion that portfolio managers are not in danger. That is because, to beat the market, a manager must by definition hold a view that is not the consensus view in order to have any chance of generating alpha, and GPT essentially represents the consensus view. No matter how capable GPT becomes, it will still represent the consensus, and the best human managers will have to stay ahead of it. Some participants expressed concerns about the crowding of strategies, which was on the rise even before the advent of GPT; apparently, GPT is making it worse. On the discretionary side, PMs and allocators are seeing very similar language across market analyses and reports, a tell-tale sign that more people are using LLMs trained on the same datasets to write similar reports. One panelist foresaw that access to proprietary datasets would therefore become a greater distinguishing feature between managers, since they would otherwise be drawing on the same LLMs anyway.

I was asked how discretionary fund managers can learn more about AI. I mentioned that the best way would be to use AI for that very purpose, since GPT can explain concepts at one's own level and lets people ask follow-up questions. At the end of one of the panels, I was asked what my biggest fears about AI are. While someone mentioned hallucinations, my biggest concern is the potential for increasing homogeneity and conformity in idea generation in quant finance and beyond, as GPT creates a unified source of truth. Some panelists also raised the propensity of AI models to hallucinate, but I believe that can be countered by overlay instructions within the model itself, as some of the latest variants have. A bigger concern that was raised is data privacy, as people worry about their inputs becoming training data for LLMs.

Interestingly, at the second AI panel I did, I was asked the opposite question: what am I excited about regarding the future of AI? I had to think far less to answer this one, as I am an AI optimist at heart. I mentioned that I am excited by physics-enhanced and transparent AI, since these approaches reduce overfitting and account for non-stationarity. I also mentioned the burgeoning field of quantum AI and quantum-inspired AI, which we have worked on at FMI as well, and which enables much more robust decision making. That is particularly important for market making in illiquid securities (a point raised by our MD Jack Sarkissian at one of the panels), and for making forecasts in asset classes and securities with limited data history. In physics we use three guiding principles for creating models: parsimony, falsifiability and the ability to predict more than we put in. AI typically falters at alpha generation because by default it does not respect these principles; physics-enhanced AI is a step toward remedying that.

One discussion point that was notable by its absence was the use of LLMs to provide sentiment scores for use within discretionary and quantitative research processes. This was a huge industry around 2015-2018, and the hype seems to have died down. I suspect this is because the alpha in these scores decayed very rapidly: for several years, a lot of top funds have had access to the same LLMs, and these trades have become overcrowded. Also, as became obvious in the roundtable discussion led by FMI CRO Mihail Amarie, many people fear the mass adoption of generative AI. Many organizations are taking proactive steps to limit the scope of AI use, and the devices on which it is used, in order to better protect intellectual property. Another concern is the accuracy of the information conveyed by the available large language models and the compatibility of such sources with strict compliance requirements. Nevertheless, everyone seemed to accept the adoption of AI as inevitable, but we have to take good care to really understand what it is, what it is not, and what its limitations are. I think such panel discussions are a great means of promoting healthy awareness among funds of AI, its uses and its pitfalls, particularly for discretionary portfolio managers who may not have been exposed to these tools before. They are also useful for brainstorming novel ways to adopt AI-based technologies for asset managers and to stay ahead of the competition.


Bio: Revant Nayar, CTO, FMI Technologies LLC

Revant Nayar is currently Principal and CIO at Princeton AI and FMI Tech, among the first quantum-AI-based quantitative hedge funds, and a Research Affiliate at Stony Brook University. He started his career in theoretical physics research at IAS and Princeton University, before serving as a Research Fellow at NYU in 2020-21, where he pioneered physics-inspired approaches to options pricing and hedging with Peter Carr. In 2021, he launched Princeton AI, bringing quantum-AI-driven approaches to asset management and achieving a Sharpe ratio above 3 in the Indian market, before expanding to the US market in 2023.

Over the years he has been featured as a speaker and panelist by several leading finance and AI publications and conferences, including W&F Magazine, CIO Outlook, Bloomberg, Strata, the Global AI Conference, YPO, EMEX and QuantMinds. He also runs one of the largest econophysics research collaborations in the world (FTERC), bringing together students, researchers and professors from top universities around the world to generate research at the intersection of physics, mathematics and quantitative finance. Today, Revant is acknowledged in the industry as a pioneer in quantum- and econophysics-based asset and risk management, earning recognition as a promising figure among the new wave of quant fund managers.

Read more about Revant in his AYU Member Spotlight
