AI in Financial Services: A Transformative Force, Not Just Hype
Interest in AI (artificial intelligence) has exploded over the past six months, driving up the stock prices of companies linked to AI both directly (such as Microsoft) and indirectly (such as Nvidia). The AI mania has produced the biggest rise in tech stock prices since 2002. From January 1 to the end of May, the S&P 500 Index (SPX) gained 13 per cent; AI-related stocks contributed 9.1 percentage points of that rise, or roughly 70 per cent of the index's gain.
Fast-moving markets may be bidding up AI today, and the hype may well fade within months. For the financial sector, however, the technology opens genuinely new territory.
The back office
Natural language processing (NLP), combined with ever-growing pools of public and private data, opens new ways to cut back-office costs substantially. Simply put, AI can make automated assessments of the risks attached to a particular customer and transaction chain far more accurate, changing how KYC (know your customer) and counterparty-risk assessments are done. That promise rests on the ability to learn independently, a capability AI already has and continues to refine. The next breakthrough step will be AI technology that can replicate itself and operate autonomously.
Studies across healthcare, finance, academic research, and education have shown that even widely used AI programmes such as ChatGPT can improve their accuracy quickly. (ChatGPT is a large language model (LLM) that uses machine learning (ML) to turn data and textual evidence into self-generated text. It is only one of many AI systems already available, but it is the one that has attracted the most attention in recent months.)
AI software can also prioritise its own learning goals to extract the most from the data it processes, which makes it easier to weigh the veracity of different strands of an investigation. Unlike human risk assessors or compliance officers, in other words, AI does not need to be told repeatedly how to refine its ratings in specific ways.
ChatGPT can review documents, track them across compliance channels, and complete data and legal forms with near-human accuracy. AI that can evaluate its own data inputs before learning from them will put software ahead of people within months, not years.
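To make the back-office claim concrete, here is a minimal sketch of how an LLM might be asked to pre-screen a KYC file and flag exceptions for human review. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, prompt wording, and risk categories are illustrative, not a production compliance workflow.

```python
# Hedged sketch: asking an LLM to pre-screen a KYC summary and return a
# structured flag for human review. Assumes the OpenAI Python SDK (v1+) and an
# OPENAI_API_KEY environment variable; the model and categories are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def prescreen_kyc(customer_summary: str) -> dict:
    """Return a JSON verdict of the form {"risk": "low|medium|high", "reasons": [...]}."""
    prompt = (
        "You are assisting a compliance officer. Classify the KYC risk of the "
        "customer below as low, medium, or high and list the reasons. "
        'Respond only with JSON of the form {"risk": "...", "reasons": ["..."]}.\n\n'
        f"Customer summary:\n{customer_summary}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",                       # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                             # deterministic screening
    )
    return json.loads(reply.choices[0].message.content)

# Hypothetical input; in practice every flag would still go to a human reviewer.
verdict = prescreen_kyc(
    "Private client resident in two jurisdictions, with frequent large cash "
    "transfers to a newly incorporated counterparty."
)
print(verdict["risk"], "-", "; ".join(verdict["reasons"]))
```

The output would feed an exception queue rather than replace the compliance officer's judgment, consistent with the human-oversight caveats discussed later in the piece.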
The same logic applies to coding as to drafting documents. GitHub Copilot can be configured to draw only on open-source code sources known to be safe when generating its output. A ChatGPT extension from security-software company Torq can simplify identity management.
Language-based AI is likely to benefit almost any application that blends quantitative and qualitative evaluation. Recently, Alejandro Lopez-Lira and Yuehua Tang, both finance professors at the University of Florida (UF), trained early AI programmes to use language-based inputs to determine whether news about particular stocks was positive or negative. The results were interesting in two respects. First, the AI developed "emergent [predictive] abilities": each version went beyond the capabilities planned when it was originally coded. Second, based on its analysis of news stories, ChatGPT predicted stock movements better than a random walk. The results, the authors concluded, "suggest that including advanced LLMs in the process of making investment decisions can lead to more accurate predictions and improve the performance of quantitative trading strategies."
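As an illustration of the kind of pipeline the UF study describes, the sketch below asks a model whether each headline is good or bad news for a stock and aggregates the answers into a crude daily signal. It relies on the same OpenAI SDK assumptions as the earlier example; the prompt wording, model, tickers, and headlines are illustrative and not the authors' exact method.

```python
# Hedged sketch: scoring news headlines as good/bad/unknown for a stock, in the
# spirit of (but not identical to) the Lopez-Lira and Tang setup. Assumes the
# OpenAI Python SDK (v1+); tickers, headlines, and the model name are invented.
from openai import OpenAI

client = OpenAI()

def score_headline(headline: str, ticker: str) -> int:
    """Map the model's verdict to a simple signal: +1 good, -1 bad, 0 unknown."""
    prompt = (
        f"Is this headline good, bad, or unknown news for the stock price of {ticker}? "
        f"Answer with exactly one word: GOOD, BAD, or UNKNOWN.\n\nHeadline: {headline}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",                        # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    verdict = reply.choices[0].message.content.strip().upper()
    return {"GOOD": 1, "BAD": -1}.get(verdict, 0)

# Example: aggregate the day's headline scores into a crude long/short signal.
headlines = [
    ("Acme Corp beats quarterly earnings expectations", "ACME"),
    ("Acme Corp faces regulatory probe over accounting", "ACME"),
]
daily_signal = sum(score_headline(h, t) for h, t in headlines)
print("Net sentiment signal:", daily_signal)
```

The study's contribution was testing such scores against subsequent stock returns; the sketch shows only the shape of the pipeline, not a validated strategy.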
Moe Manshad, an assistant professor of computer information systems at the University of Northern Colorado (UNC) who studies how AI is used in business and education, said that using AI in financial trading could "make the market more stable and less prone to speculative bubbles by making investors better educated." Jeremy May, CEO of AI-focused start-up Parallel, which builds back-office solutions for asset management, said, "Using AI early in the data flow that feeds the daily NAV process lets us find and fix exceptions before they even get to the operational teams. This not only makes it easier to keep track of things, but it also speeds up the process."
AI-based systems also promise to change how data is protected and managed. Microsoft's Security Copilot lets clients use natural-language ChatGPT prompts to query and investigate security threats. The service not only generates client-specific intelligence but also supports predictive research into potential vulnerabilities. ChatGPT can also simulate past security breaches and adapt attack scenarios to each client's systems and needs.
The front office
Even before AI, traditional financial services faced competition from fintech (financial technology). Fintech companies struggled to win market share because their services were overly complicated, marketing and sales support was bottlenecked, and their customer relationship management (CRM) fell short. Together, these problems made it hard to deploy fintech across financial services' different client bases. AI will change that.
Faced with complex problems, traditional fintech tries to ease back-office bottlenecks by lowering clients' expectations of service quality. Fintech relies heavily on chatbots for customer service, and current versions burden the customer experience with long, generic lead questions, which makes them more harmful than helpful for firms serving high-net-worth (HNW) or otherwise sensitive clients.
AI can cut the cost of customer service by helping to query and process data, draft and review engagements, and analyse complex client scenarios both qualitatively and quantitatively. That can significantly change how analysts, fund managers, and their clients communicate. Not surprisingly, the FP&A (financial planning and analysis) segment of the market has adopted ChatGPT heavily in recent months.
Ian Rosen, executive vice president and chief sales officer of Magnifi, part of the AI-focused fintech company TIFIN, says AI will be used more and more in portfolio structuring and communication. "With the introduction of ChatGPT in particular, PMs are trying to use AI tools to pull data from a wider range of sources to use in their current investment processes," Rosen said. "But there is a big worry that because these big open AI-driven models aren't trained and tested with the same rules as bespoke systems, open AI tools could mess up the PMs' goals or break assumptions or intentional constraints in ways that human analysts would find hard to spot." Notably, TIFIN has used custom AI across its trading-market services since 2020.
AI also gives fintech companies a chance to improve the speed and quality of their communications with specific market segments and prospective clients, lifting sales while cutting the cost of finding leads and converting them. In the hands of a retail banker, AI can place cheaper, better-targeted advertising tailored to clients' preferences and past decisions, unlike current models built on simple averages, past search signals, and group aggregates.
Based on these and other recent findings about AI, Goldman Sachs analysts have concluded that about 35 per cent of all jobs in finance are now at risk of being taken over by AI.
A reality check
Impressive as these capabilities are, today's AI is still far from highly accurate on information-dense tests. A study published in JMIR Medical Education (JME) found that ChatGPT-3 answered between 42 per cent and 64.4 per cent of questions from medical test banks correctly; that was a big step up from earlier versions of the software, and it suggests that successive generations of AI models will keep becoming more accurate. We are not there yet, though.
AI still faces hurdles in embedding the software into core services, gaining access to high-quality data pools for training, improving the personalisation of service offerings and support, and delivering the consistency and reliability needed to meet the sector's regulatory standards.
Two further aspects of AI adoption, trust and legacy systems, will complicate its use as a disruptive tool in banking and finance.
On trust, a number of studies in the United States and Europe show that the core demographics for financial services, the higher-net-worth generational cohorts, will need to see AI close the trust gap between technology-based and personal services. Where the ease of getting answers is concerned, customers still trust in-person services more than tech-based ones. Bias replication and the local binding of AI offers to clients' data are further concerns. These problems will be resolved over time, and AI will initially be deployed in banks' front offices mainly to support standard delivery models.
Research from Amsterdam University Medical Centres (Amsterdam UMC) suggests that AI will indeed be deployed in this way, beginning with a long period of close human oversight. "Researchers who use ChatGPT run the risk of being misled by false or biased information," the authors wrote; where researchers use LLMs in their work, they must remain vigilant, and fact-checking and verifying authenticity will still have to be done by experts. Other studies of AI use in settings where identity and provenance are essential reach the same conclusion.
Many service providers in the banking and asset-management subsectors sit on large, complicated, disconnected, or siloed datasets, and they often lack the tools to extract client insights from these data pools. Proprietary AI systems based on GPT (Generative Pre-trained Transformer) models can and should be trained on these datasets, yielding better insight into, and management of, assessed risks, structured-product portfolios, and service quotes. Zurich Insurance Group and Paladin Group, two of the biggest insurers, use AI platforms to improve their underwriting systems and optimise their business products. Deploying this kind of technology in banks is a reasonable next step. TIFIN, as noted above, addresses this aspect of AI training by feeding its own and third-party datasets into its AI solutions.
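One common pattern for putting such proprietary, siloed data behind a GPT-style system is to retrieve the most relevant internal records first and hand them to the model as context (retrieval-augmented generation). The sketch below uses a simple TF-IDF retriever as a stand-in for a production vector store; the records, client identifiers, and query are invented for illustration and do not describe any named firm's pipeline.

```python
# Hedged sketch of retrieval-augmented generation over a siloed internal dataset:
# find the records most relevant to a question, then pass them to a GPT-style
# model as grounding context. Uses scikit-learn's TF-IDF as a toy retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal records pulled from otherwise siloed systems.
records = [
    "Client 1042: structured-product portfolio, EUR 2.1m notional, barrier reverse convertibles.",
    "Client 1042: counterparty-risk review 2022 flagged concentration in a single issuer.",
    "Client 8871: retail mortgage book, no structured products, low assessed risk.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k internal records most similar to the query."""
    vec = TfidfVectorizer().fit(records + [query])
    doc_matrix = vec.transform(records)
    query_matrix = vec.transform([query])
    scores = cosine_similarity(query_matrix, doc_matrix)[0]
    ranked = sorted(zip(scores, records), reverse=True)
    return [text for _, text in ranked[:top_k]]

context = "\n".join(retrieve("What risks are open on client 1042's structured products?"))
# The retrieved context would then be sent to a GPT-style model together with the
# question, so answers are grounded in the firm's own data rather than model memory.
print(context)
```

Keeping the retrieval step inside the firm's own infrastructure is also one way to address the privacy and data-protection worries about open AI platforms noted below.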
According to data collected by FactSet, mentions of AI in S&P 500 companies' first-quarter 2023 earnings calls rose by 80 per cent from a year earlier. As a sector, Financials trailed Communications Services (75 per cent), Information Technology (66 per cent), Industrials (23 per cent), and Consumer Discretionary (22 per cent): only 19 per cent of financial companies discussed AI in their investor communications. Some US banks have gone so far as to ban the use of the new technology, a precaution reflecting the privacy and data-protection questions that open AI platforms raise. But that should not stop the sector from engaging with it.
AI is here to change the way banking and finance work, and that change will happen whether or not the current incumbents want it to.