Artificial intelligence (AI) is rapidly advancing in today’s business world, finding its way into tools and software that companies use daily. In early 2023, Salesforce announced its native generative AI tool, Einstein GPT, which will deliver AI-created content capabilities throughout the platform. IBM’s human resources platform, watsonx Orchestrate, lets HR professionals automate day-to-day tasks and even train the platform to perform new skills.
With so many software and tech companies touting AI capabilities, it was only a matter of time before AI entered the field of financial statement spreading.
In this post, we’ll look at what an AI lending platform is and the critical ethical and security considerations community banks and credit unions must make before implementing AI into their loan analysis process.
Can Artificial Intelligence Help Spread Financial Statements?
The short answer is yes. If you’re using a broad definition of AI, there are already automation capabilities and machine learning algorithms within lending platforms today that have streamlined rote tasks, saving loan analysts countless hours on spreading financial statements. AI has the potential to further optimize spreading by analyzing financial statements and making risk assessments.
Theoretically, a loan analyst could input financial data and a well-trained AI model would determine whether the bank should issue the loan and on what terms. It could take automation to a whole new level, dramatically cutting the time needed to process loans. However, federal regulators have said these capabilities need further evaluation before their effectiveness can be relied upon.
Benefits to Small Business Lending
The obvious benefit of this kind of capability is saving time. By making faster, data-driven decisions, small business lenders can spend more time building their portfolios and less time manually checking financial spreads, troubleshooting errors, and fixing broken formulas.
As noted, many tools today market AI capabilities for automating and streamlining processes, and some AI lending platforms tout greater potential: for example, tools that detect anomalies in financial statements that could pose risk and lead to rejecting a loan application. Similar tools could be used to identify fraud and unethical business practices. AI is also a powerful tool for forecasting trends, which could help small banks and credit unions better identify market risk in certain segments of the economy.
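To make the anomaly-detection idea concrete, here is a minimal sketch of one common approach: flagging periods where a financial ratio deviates sharply from its historical norm using a z-score test. The function name, the threshold, and the gross-margin figures are all hypothetical illustrations, not any vendor's actual method.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return the indexes of values more than `threshold` standard
    deviations from the mean (a simple z-score outlier test)."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical gross-margin percentages from five annual statements;
# the final year's sharp drop is the kind of outlier an analyst would review.
gross_margin = [0.41, 0.43, 0.40, 0.42, 0.15]
print(flag_anomalies(gross_margin, threshold=1.5))  # -> [4]
```

Real platforms use far more sophisticated models, but the principle is the same: the tool surfaces unusual line items for a human analyst to investigate rather than silently deciding the application.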
AI Challenges for Small Institution Lending
There are limitations to using AI in lending. The Consumer Financial Protection Bureau (CFPB) is looking into whether new regulations on AI in lending platforms are necessary and has issued guidance on using these tools for loan analysis. Right now, it’s up to banks and credit unions to ensure their tools are effective, accurate, and ethical. Here are some of the considerations:
- Poor data quality: This refers to how the AI lending platform got the data it uses for machine learning and making decisions. AI models like ChatGPT require large amounts of high-quality data for training. Inaccurate data or not enough data can lead to poor or biased results.
- AI bias: AI bias is a universal challenge and has already been identified as a problem in lending analysis tools. For example, a March 2022 class action lawsuit alleged that Wells Fargo’s online refinancing calculator discriminates against minority and female mortgage applicants because the algorithmic tool requires ZIP code, education, and area code, attributes the CFPB has identified as proxies for race. Using AI tools that have not been adequately tested for bias could leave your institution at risk of lawsuits and violations of fair lending laws.
- Explainability: Explainability, or the requirement to tell consumers why they were denied, helps borrowers understand how to improve their chances of obtaining credit, holds lenders accountable for their decisions, and provides evidence in the event of discrimination. Depending on the AI lending platform, explainability may be more complex, and banks may need help understanding why the algorithm produced the results it did.
- Privacy: Financial data is often sensitive, so using it to train an AI model could raise privacy concerns. Banks are required to ensure the security of their data and prevent breaches, not to mention maintaining privacy to ensure trust with their clients.
- Lack of flexibility: One of the major challenges of small business lending, whether using AI or not, is that small business owners often do not use standard accounting practices. Financial statements may include tax returns, personal financial statements, and business financials connected to other entities. Having a fully automated system that does the decision-making for the lender often isn’t a great fit for community banks and credit unions for these and other reasons.
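The explainability point above is easiest to see against a traditional scorecard, where every factor's contribution to the decision is explicit. The sketch below is a purely hypothetical toy scorecard (the weights, feature names, and cutoff are invented for illustration): because each feature's effect on the score is visible, a denial can be traced to specific adverse-action reasons, something a black-box model cannot offer without extra tooling.

```python
# Hypothetical scorecard: each feature's weight is explicit, so the
# score decomposes into per-feature contributions.
WEIGHTS = {"debt_to_income": -300, "years_in_business": 15, "current_ratio": 80}
BASELINE = 600      # starting score before feature contributions
APPROVE_AT = 640    # minimum score to approve

def score_with_reasons(applicant):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    # Adverse-action reasons: the factors that pulled the score down most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return score, score >= APPROVE_AT, reasons

applicant = {"debt_to_income": 0.55, "years_in_business": 3, "current_ratio": 1.1}
score, approved, reasons = score_with_reasons(applicant)
print(score, approved, reasons)  # 568.0 False ['debt_to_income', 'years_in_business']
```

With a deep-learning model, producing an equivalent "why" for each denial is an open engineering problem, which is exactly the burden the bullet above describes.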
While the time-saving and efficiency benefits of using AI in lending may seem appealing, the increased burden on vendor management to validate the model (an ongoing requirement if the program regularly incorporates new data and updates its assumptions) could offset the potential gains. Coupled with the risk of spending days or weeks during an audit or exam reviewing and defending the model's decisions, the institution could easily end up spending more time by employing systems that use AI in the decision-making process.
Regulation of AI in Lending and Financial Services
The Biden Administration released an Executive Order in late October 2023 that urges the government agencies overseeing banks to keep a close eye on how these institutions use AI to make decisions and interact with customers. The order instructs federal agencies to “consider using their full range of authorities to protect American consumers from fraud, discrimination, and privacy threats and to address other risks that may arise from the use of AI, including risks to financial stability.”
Banking regulators and the Federal Trade Commission have said they will use existing fair lending laws to ensure decisions made by computer models are free from discrimination. For example, the CFPB wants to prohibit data brokers and other companies from selling consumer data without consumers' permission, to prevent AI developers from using unauthorized customer data to build computer models and algorithms. The oversight may even extend beyond financial data and loan analysis.
The CFPB is also looking at banks’ use of chatbots and other machine-learning tools for customer service tasks.
Where FISCAL Stands on AI
For 30 years, FISCAL has worked alongside community banks and credit unions to help them make their spreading and tracking tasks more efficient. In this work, security and adherence to banking regulations have been paramount. Our automation tools use only the data within a bank’s existing system and do not store sensitive data outside of the institution’s databases.
Currently, it seems that AI may be best used to analyze large datasets and provide recommendations for human review and further analysis. Examples could include recommending customers who may be interested in a specific loan or deposit product, analyzing large volumes of transactions for potential fraud, or reviewing existing loans in a portfolio for developing weaknesses.
FISCAL FORWARD also offers the flexibility lending institutions need to serve their small business customers. We believe that credit decisions should be carefully made by analysts, officers, and loan committees, and we are closely following news of AI lending platforms that claim to automate these decisions. Above all, we remain dedicated to ensuring FISCAL provides efficiency without taking control away from end users and local institutions.
For more information about the automation capabilities of FISCAL FORWARD, our latest version of the FISCAL spreading software, please reach out and schedule a demo today.