
AI must ensure fairness in digital lending | Photo Credit: Sumedha Lakmal
As AI becomes ubiquitous in lending, Responsible AI becomes necessary for responsible lending. In the lending context, Responsible AI ensures that AI systems used across the value chain do not pose risks to the system or to the customer.
One way in which systemic risks manifest is the concentration risk that arises when lenders rely on the same underwriting model and therefore lose out on stability gains from diversification.
Customer risks take the form of discrimination, privacy infringement and adverse implications for customers’ financial health from loans they can ill afford.
This piece discusses key recommendations from the paradigm of Responsible AI that may mitigate or avert these risks and make for a more robust financial system.
Fairness factor
Fairness, i.e., safeguarding against bias and discrimination, is a core principle of Responsible AI. Put simply, this principle insists that borrowers of similar credit risk be treated similarly, regardless of social characteristics such as gender or caste.
AI-assisted decisions are almost always suspected of bias because of how they function. AI systems typically concern themselves with recognising patterns and correlations within the data, and stop short of determining the causes behind those patterns.
Therefore, it is quite likely that an AI system, when trained on a dataset that exhibits a greater loan-rejection rate for women (even when those rejections were rightful and unbiased), could tag all women as inferior credit risks.
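A toy sketch can make this mechanism concrete. Below, a classifier is trained on synthetic approvals that carry an unjustified historical penalty against women (all data and variable names are illustrative, not any real underwriting model); the trained model faithfully reproduces the penalty for otherwise identical applicants:

```python
# Illustrative sketch: a model trained on biased history learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)        # the genuine credit signal
gender = rng.integers(0, 2, n)        # 0 = men, 1 = women (protected attribute)

# Historical labels: approvals driven by income, plus an unjustified
# penalty that past underwriting applied to women.
logit = 0.08 * (income - 50) - 1.2 * gender
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history, protected attribute included.
model = LogisticRegression().fit(np.column_stack([income, gender]), approved)

# Identical incomes, different approval probabilities by gender.
probe = np.array([[50, 0], [50, 1]])
print(model.predict_proba(probe)[:, 1])   # roughly [0.5, 0.23]
```

Note that simply dropping the protected attribute does not guarantee fairness: other variables, such as occupation or pincode, can act as proxies for it.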
A recent American study demonstrates that Large Language Models (LLMs) tend to recommend different loan-approval decisions and interest rates for Black and white mortgage applicants who are otherwise identical. Given that historical financial data is steeped in bias, AI-assisted decisions run a real risk of being biased.
Therefore, Responsible AI requires practitioners to institute fairness policies: a suite of practices that ensure AI systems are immune to historical biases and do not introduce fresh biases of their own. These practices include preparing training data that is unbiased and/or balanced, piloting the model in safe environments or sandboxes, and continually auditing the AI system for bias.
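As an illustration of what a continual bias audit might check, the sketch below computes a disparate impact ratio (the approval rate of a protected group relative to the rest) over a candidate model’s sandbox decisions. All names are illustrative, and the 0.8 threshold is a common rule of thumb rather than a regulatory prescription:

```python
# Illustrative bias audit on a batch of model decisions.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: protected group vs the rest.
    A common rule of thumb flags values below 0.8 for review."""
    rate_protected = decisions[group == 1].mean()
    rate_rest = decisions[group == 0].mean()
    return rate_protected / rate_rest

# Example: approvals produced by a candidate underwriting model in a sandbox.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = approved
group     = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])   # 1 = protected group

ratio = disparate_impact(decisions, group)
if ratio < 0.8:
    print(f"Flag for review: disparate impact ratio = {ratio:.2f}")
```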
The authors of the study cited above were able to minimise racial bias merely by instructing the LLM to “use no bias”. This is, of course, more easily remedied in a stylised study than in the real world, which reinforces the need for fairness policies, especially at the pre-deployment stage, because the real-life implications of bias are often irreversible and fateful.
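For flavour, here is a minimal sketch of that prompt-level mitigation; the prompt wording and the complete() call are hypothetical stand-ins, not the study’s actual setup or any particular LLM client:

```python
# Hypothetical prompt construction; only the "use no bias" instruction
# comes from the study described above.
def build_prompt(application_summary: str, debias: bool = True) -> str:
    instruction = "You are a loan underwriter. Recommend approve/deny and an interest rate."
    if debias:
        instruction += " Use no bias in your assessment."
    return f"{instruction}\n\nApplication: {application_summary}"

# response = complete(build_prompt("income=50k, dti=0.35, score=700"))  # hypothetical client
print(build_prompt("income=50k, dti=0.35, score=700"))
```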
Model explainability is another important dimension of Responsible AI. It comprises techniques that make AI systems understandable to humans. The need for explainable AI in lending is easy to appreciate: regulators and customers have the right to know the criteria for loan approval (and, by extension, rejection).
Liberal lending norms pose grave risks to the financial health of the customer and of the system. Repayment distress is known to encourage over-indebtedness, compelling borrowers to take fresh loans to repay older ones. At the same time, loan defaults have cascading implications for lenders’ portfolio quality.
Per current provisioning norms, a borrower’s default on one loan qualifies all of that borrower’s loans as non-performing. It is essential, therefore, that loans are sanctioned or rejected for the right reasons. Unsurprisingly, the RBI stipulates that credit-risk models be explainable.
Typical explainability techniques range from basic visualisations such as Partial Dependence Plots (PDPs), which show the effect of each input variable on the outcome, to more advanced attribution techniques (such as SHAP values) that can delineate the contribution of individual variables to a specific outcome.
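The sketch below computes the quantity a PDP visualises using scikit-learn, on synthetic data with illustrative feature names; the returned “average” array is what a PDP curve plots:

```python
# Illustrative partial dependence computation with scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))   # e.g. income, debt ratio, tenure (illustrative)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0

model = GradientBoostingClassifier().fit(X, y)

# A PDP plots the model's average prediction as one input sweeps a grid of
# values while the rest of the data is held fixed.
result = partial_dependence(model, X, features=[0])
print(result["average"][0][:5])  # average prediction along the feature-0 grid
```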
Experts worry that as models become more complex, these techniques may fall short. Regulations around the world are trying to strike a balance between complexity, which often increases accuracy, and explainability, which may favour simpler AI systems.
Human-in-the-loop
Finally, Responsible AI strongly advocates the principle of human-in-the-loop (HITL), which mandates human oversight and room for human input in the working of AI systems. This is particularly significant for the financial sector, which is ever evolving. AI systems are limited by the data they are trained on; systems trained under a given set of macroeconomic conditions may quickly become redundant, even hazardous, when those conditions change.
The Covid pandemic highlighted these issues when models trained on ‘normal’ macroeconomic conditions could not perform under a system-wide, exogenous shock. At the time, lenders reported leaning on human judgment and retraining AI models to sensitise them to the shifts in the macroeconomic context. Models cannot be expected to self-learn when their underlying assumptions are completely overturned.
HITL is also relevant when an industry is at a nascent stage and there isn’t enough training data for systems to learn from. Implementing HITL involves making the AI understandable to humans and appointing reviewers who oversee the model’s outcomes and flag anomalous decisions; more advanced systems also allow humans to override the AI system’s decisions, as the sketch below illustrates.
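Here is a minimal sketch of such a routing gate; all names and the confidence threshold are purely illustrative:

```python
# Illustrative human-in-the-loop gate: auto-execute confident decisions,
# escalate the rest to a human review queue.
from dataclasses import dataclass

@dataclass
class Decision:
    application_id: str
    approve: bool
    confidence: float

def route(decision: Decision, review_queue: list, threshold: float = 0.8) -> str:
    """Escalate low-confidence decisions; a human may then override them."""
    if decision.confidence < threshold:
        review_queue.append(decision)
        return "escalated"
    return "auto-approved" if decision.approve else "auto-rejected"

queue: list[Decision] = []
print(route(Decision("APP-101", approve=True, confidence=0.95), queue))
print(route(Decision("APP-102", approve=False, confidence=0.55), queue))
print(f"{len(queue)} case(s) awaiting human review")
```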
Responsible AI in lending is perhaps of the utmost salience today, given growing concerns around over-indebtedness and portfolio quality. Regulators, lenders and customers find themselves in familiar yet unsettling times.
Since the problem is partly technological, the solution may also benefit from applying Responsible AI principles to the AI systems themselves. The technological safeguards of Responsible AI can complement and enhance the efficacy of traditional prudential measures such as increased capital reserves.
Chugh heads Future of Finance Initiative at Dvara Research; Sibal is an AI practitioner working as a Director with PwC India. Views expressed are personal
Published on April 8, 2025