
AI in credit: Policy considerations

Artificial intelligence (AI), a machine’s ability to perform the cognitive functions we associate with the human mind, is being used to facilitate processes and improve efficiency in business. As different forms of AI gain popularity in the finance sector, credit professionals must become experts on how AI can support their role and simplify workflow.

Why it matters: Even if your company isn’t utilizing AI, it’s critical to establish credit policies to safeguard against risk.

ChatGPT is AI … but not all AI is ChatGPT. AI enables enhanced creativity and efficiency across a variety of areas, including customer support, predictive analytics and risk management. It is important to differentiate between ChatGPT and other forms of AI when building a policy.

“To simplify the relationship, consider generative AI as a broad field of study and innovation, with ChatGPT being a standout product within this domain,” reads an Upwork article. “Generative AI encompasses foundational techniques and concepts, and ChatGPT puts these principles into practice.”

By the numbers: According to a recent eNews poll, most credit professionals (95%) surveyed do not have a written credit policy in place for the use of AI or ChatGPT.

Steve Frederiksen, credit manager of global business at Underwriters Laboratories LLC (Northbrook, IL), said his company banned ChatGPT use because its novelty and complexity make it difficult to understand the specific applications of different AI tools. “My company is observing the pros and cons of ChatGPT, as well as the policies of other institutions, before creating its own policy,” he said. “Personally, I’d consider using AI when performing financial analysis for nonpublic companies whose information is not readily available.”

The benefit of using AI is that it is adaptable and can learn over time. “The bad thing is that we don’t have the transparency and some of the control that we have when credit professionals are doing the research and looking through the customer’s files themselves, making the decision based on the information in front of them,” said Kathleen McGee, partner in the Emerging Companies and Venture Capital and White Collar Criminal Defense practices of Lowenstein Sandler LLP (New York, NY).

McGee suggests that every company establish and adopt policies and procedures for staff to follow. An effective policy will cover topics such as training and an awareness of how AI impacts a particular industry or company. “It should also include external-facing policies, whether that be terms of use, privacy policies or contractual language,” she said. “Other external policies include your website disclaimers with respect to AI usage and data collection, as well as indemnifications, representations and warranties.”

Here are some areas to consider when creating an AI policy, according to the Corporate Governance Institute:

  • Data privacy and security: The policy should outline how the company will collect, store and protect the data used by AI systems. This includes ensuring that only authorized personnel access the data and that it is stored securely.
  • Bias and discrimination: AI systems can reflect and amplify human biases and prejudices. The policy should address how the company will ensure that AI systems do not discriminate against individuals or groups based on protected characteristics such as race, gender or age.
  • Transparency: The policy should require that AI systems used in the workplace are transparent and explainable. This means employees should understand how AI decisions are made and why specific outcomes are generated.
  • Employee training: The policy should mandate training for all employees working with AI systems, covering effective and ethical use, understanding of its limitations, and potential work impact.
  • Accountability and responsibility: The policy should clearly define who is responsible for AI systems’ decisions in the workplace. This includes holding individuals and departments accountable for the outcomes generated by AI systems.
  • Ethical considerations: The policy should address ethical concerns surrounding the use of AI in the workplace, such as the potential impact on employment and the ethical use of AI in decision-making.
  • Continuous monitoring and improvement: The policy should require ongoing monitoring and modification of AI systems used in the workplace to ensure that they are functioning as intended and are not causing unintended consequences.

Credit professionals should talk with AI software vendors about a program’s functionality, asking whether it is regularly tested for consistent results and whether it can be audited. “Ask if there’s a way to describe with a sufficient degree of transparency what the outcome was based on so that we can as a company relay that along with an adverse credit determination,” McGee said. “I strongly advise against using ChatGPT or any open-source AI software for creditworthiness decisions due to the uncertainty of the data’s accuracy and source.”

Jamilex Gotay, senior editorial associate

Jamilex Gotay, a Towson University alum, holds a B.S. in English. Her creative writing background fuels her success as a writer, journalist and award-winning poet. Fluent in English and Spanish, with intermediate French skills, she’s passionate about travel and forging connections. When not crafting her latest B2B credit story, she enjoys quality time with loved ones, outdoor pursuits and creative activities.