Artificial intelligence (AI) is rapidly advancing in today’s business world, finding its way into tools and software that companies use daily. In early 2023, Salesforce announced its native generative AI tool, Einstein GPT, which will deliver AI-created content capabilities throughout the platform. IBM’s human resources platform, watsonx Orchestrate, lets HR professionals automate day-to-day tasks and even train the platform to perform new skills.
With so many software and tech companies touting AI capabilities, it was only a matter of time before AI entered the field of financial statement spreading.
In this post, we’ll look at what an AI lending platform is and the critical ethical and security considerations community banks and credit unions must make before implementing AI into their loan analysis process.
So, is AI already part of lending platforms? The short answer is yes. If you're using a broad definition of AI, lending platforms today already include automation capabilities and machine learning algorithms that have streamlined rote tasks, saving loan analysts countless hours on spreading financial statements. AI has the potential to further optimize spreading by analyzing financial statements and making risk assessments.
Theoretically, a loan analyst could input financial data and a well-trained AI model would determine whether the bank should issue the loan and on what terms. It could take automation to a whole new level, dramatically cutting the time needed to process loans. However, the federal government has stated there needs to be more evaluation of the effectiveness of these capabilities.
The obvious benefit of this kind of capability is saving time. By making faster, data-driven decisions, small business lenders can spend more time building their portfolios and less time manually checking financial spreads, troubleshooting errors, and fixing broken formulas.
As noted, many tools today claim AI capabilities for automation and streamlining processes, and some AI lending platforms tout greater potential: for example, tools that detect anomalies in financial statements that could pose risk and lead to rejecting the loan application. Similar tools could be used to identify fraud and unethical business practices. AI is also a powerful tool for forecasting trends, which could help small banks and credit unions better identify market risk in certain segments of the economy.
There are limitations to using AI in lending. The Consumer Financial Protection Bureau (CFPB) is looking into whether new regulations on AI in lending platforms are necessary and has issued guidance on using these tools for loan analysis. Right now, it’s up to banks and credit unions to ensure their tools are effective, accurate, and ethical. Here are some of the considerations:
While the time-saving and efficiency benefits of using AI in lending may seem appealing, the increased burden on vendor management to validate the model (an ongoing requirement if the program regularly incorporates new data and updates its assumptions) could offset the potential gains. Coupled with the risk of spending days or weeks during an audit or exam reviewing and defending the decisions made by the model, the institution could easily end up spending more time by employing systems that use AI in the decision-making process.
The Biden Administration released an Executive Order in late October 2023 that urges the government agencies overseeing banks to keep a close eye on how these institutions use AI to make decisions and interact with customers. The order instructs federal agencies to “consider using their full range of authorities to protect American consumers from fraud, discrimination, and privacy threats and to address other risks that may arise from the use of AI, including risks to financial stability.”
Banking regulators and the Federal Trade Commission have said they will use existing fair lending laws to ensure decisions made by computer models are free from discrimination. For example, the CFPB wants to prohibit data brokers and other companies from selling consumer data without consumers' permission, to prevent AI developers from using unauthorized customer data to build computer models and algorithms. The oversight may even extend beyond financial data and loan analysis.
The CFPB is also looking at banks’ use of chatbots and other machine-learning tools for customer service tasks.
For 30 years, FISCAL has worked alongside community banks and credit unions to help them make their spreading and tracking tasks more efficient. In this work, security and adherence to banking regulations have been paramount. Our automation tools use only the data within a bank’s existing system and do not store sensitive data outside of the institution’s databases.
Currently, it seems that AI may be best used to analyze large datasets and provide recommendations for human review and further analysis. Examples could include recommending customers who may be interested in a specific loan or deposit product, analyzing large volumes of transactions for potential fraud, or reviewing existing loans in a portfolio for developing weaknesses.
FISCAL FORWARD also offers the flexibility lending institutions need to serve their small business customers. We believe that credit decisions should be made carefully by analysts, officers, and loan committees, and we are closely following news of AI lending platforms that claim to automate these decisions. Above all, we remain dedicated to ensuring FISCAL provides efficiency without taking control away from end users and local institutions.
For more information about the automation capabilities of FISCAL FORWARD, our latest version of the FISCAL spreading software, please reach out and schedule a demo today.