Finding value in generative AI for financial services – MIT Technology Review


Financial services firms have started to adopt generative AI, but hurdles lie in their path toward generating income from the new technology.

With tools such as ChatGPT, DALL-E 2, and CodeStarter, generative AI has captured the public imagination in 2023. Unlike past technologies that have come and gone (think metaverse), this latest one looks set to stay. OpenAI's chatbot, ChatGPT, is perhaps the best-known generative AI tool. It reached 100 million monthly active users just two months after launch, outpacing even TikTok and Instagram to become the fastest-growing consumer application in history.

According to a McKinsey report, generative AI could add $2.6 trillion to $4.4 trillion annually in value to the global economy. The banking industry was highlighted as among sectors that could see the biggest impact (as a percentage of their revenues) from generative AI. The technology “could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented,” says the report. 


For businesses from every sector, the current challenge is to separate the hype that accompanies any new technology from the real and lasting value it may bring. This is a pressing issue for firms in financial services. The industry’s already extensive—and growing—use of digital tools makes it particularly likely to be affected by technology advances. This MIT Technology Review Insights report examines the early impact of generative AI within the financial sector, where it is starting to be applied, and the barriers that need to be overcome in the long run for its successful deployment. 

The main findings of this report are as follows:

  • Corporate deployment of generative AI in financial services is still largely nascent. The most active use cases revolve around cutting costs by freeing employees from low-value, repetitive work. Companies have begun deploying generative AI tools to automate time-consuming, tedious jobs that previously required humans to assess unstructured information.
  • There is extensive experimentation on potentially more disruptive tools, but signs of commercial deployment remain rare. Academics and banks are examining how generative AI could help in impactful areas including asset selection, improved simulations, and better understanding of asset correlation and tail risk—the probability that the asset performs far below or far above its average past performance. So far, however, a range of practical and regulatory challenges are impeding their commercial use.
  • Legacy technology and talent shortages may slow adoption of generative AI tools, but only temporarily. Many financial services companies, especially large banks and insurers, still run on substantial, aging information technology and data structures that are potentially unfit for modern applications. In recent years, however, widespread digitalization has eased this problem, and it may continue to do so. As with any new technology, talent with expertise specifically in generative AI is in short supply across the economy. For now, financial services companies appear to be training existing staff rather than bidding to recruit from a sparse specialist pool. That said, the difficulty in finding AI talent is already starting to ebb, a pattern that mirrors those seen with the rise of cloud and other new technologies.
  • More difficult to overcome may be weaknesses in the technology itself and regulatory hurdles to its rollout for certain tasks. General, off-the-shelf tools are unlikely to perform complex, specialized tasks adequately, such as portfolio analysis and selection. Companies will need to train their own models, a process that will require substantial time and investment. Even once such software is complete, its output may be problematic. The risks of bias and lack of accountability in AI are well known, and efforts to validate the complex output of generative AI models have yet to succeed. Regulators acknowledge that they need to study the implications of generative AI further, and they have historically been reluctant to approve tools before rollout.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
