Anyone with even a basic understanding of finances knows that the traditional credit score usually depends on five foundational factors: your payment history, your current and previous debt levels, your credit age (how long your accounts have been open, not how old you are), new accounts, and your credit mix.
However, it is 2025, and artificial intelligence algorithms now influence credit decisions with a never-before-seen level of accuracy and data depth. They look at a far wider array of behaviors: the shops you frequent (are you going to discount stores, or buying beyond your means?), how and when you pay your utility bills (are you waiting until the last second?), even the way you write text messages, the language you use, and something as seemingly harmless as the battery percentage at which you decide to charge your phone. All of it paints a picture of you as a person, and that picture is being used to determine your credit risk. These systems are scary good at it, and can generally predict creditworthiness better than we can ourselves.
The Data Revolution in Lending
This feels like a sudden shift, but the technology has been progressing for years, and traditional FICO scores now look outdated by comparison: they use only around 20 variables, while machine learning models take on thousands. One Chinese lender, for example, analyzes over five thousand variables in mere seconds (these are the people checking when you charge your phone). The simple logic is that people who plan ahead, such as topping up their phone before the battery runs low, make more reliable borrowers because they take responsibility.
Other data points that can be included in the analysis range from your Netflix subscription payments to even your LinkedIn connections.
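To make this concrete, here is a toy sketch of how a lender might fold alternative signals into a score. Every feature name, weight, and threshold below is hypothetical and purely illustrative; real models learn thousands of weights from data rather than having a handful written by hand.

```python
import math

def credit_score(features: dict) -> float:
    """Return a toy probability-of-repayment estimate in [0, 1]."""
    # Hypothetical weights mixing traditional and alternative signals.
    weights = {
        "on_time_payment_rate": 3.0,   # traditional: payment history
        "utilization": -2.0,           # traditional: debt levels
        "utility_paid_early": 1.0,     # alternative: bill-paying habits
        "avg_phone_battery": 0.5,      # alternative: "plans ahead" proxy
        "discount_store_ratio": 0.8,   # alternative: spending habits
    }
    bias = -1.5
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1 / (1 + math.exp(-z))  # logistic squash into a probability

applicant = {
    "on_time_payment_rate": 0.95,
    "utilization": 0.30,
    "utility_paid_early": 1.0,
    "avg_phone_battery": 0.8,
    "discount_store_ratio": 0.6,
}
print(round(credit_score(applicant), 3))
```

The unsettling part is not the math, which is ordinary logistic scoring, but which behaviors end up on the feature list.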
The Promise of AI Credit Models
The big promise that comes with these AI credit models is that they will make lending both more inclusive and safer. ZestFinance, for example, reported approving 20 percent more applicants using AI while at the same time cutting the rate of loan defaults by almost 30 percent. Pretty impressive.
These alternative data sources will help tens of millions of credit invisibles in the US alone, and countless millions more across the globe who are struggling with the traditional barriers to credit access.
Multiple payday loan providers are also looking at this technology to refine how they lend, and to whom. It is easy to think of payday loan providers as the enemy, but the reality is they provide a service that many people need, and any steps that allow these lenders to issue credit more safely are okay with me.
The Black Box Problem
One of the problems with this technology is that the AI's decisions cannot always be explained, even by the people who helped create them. A major example came with the Apple Card: an investigation found that women were receiving far lower credit limits than men, in some cases despite better scores on multiple metrics. You are left with a black box, where complex neural networks make decisions without any transparent reasoning, which makes them difficult for humans to understand and leaves borrowers scratching their heads as to why they have been rejected.
Behavioural Biometrics and Surveillance
This technology goes hand in hand with behavioral biometrics and a new era of surveillance, all in the name of your security and safety, of course. AI now tracks your physical and digital behavior, down to subtle details such as the pressure and speed of your typing, the angle at which you hold your phone, or how you move your mouse on a desktop PC. These micro details are designed to build a hack-proof impression of you as a customer, but some are also being fed into your risk analysis, as previously mentioned.
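One of the simplest behavioral biometric signals is typing rhythm. The sketch below shows the idea under strong assumptions: the timestamps are made up, and real systems capture far richer data (pressure, device angle, swipe curvature) than two summary numbers.

```python
from statistics import mean, stdev

def typing_profile(key_times_ms):
    """Summarize inter-key intervals from a list of keypress timestamps (ms)."""
    gaps = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return {
        "mean_gap_ms": mean(gaps),     # overall typing speed
        "gap_jitter_ms": stdev(gaps),  # rhythm consistency
    }

# A user typing a password. A fraudster typing the same characters
# usually produces a noticeably different rhythm, which is the signal.
session = [0, 120, 260, 370, 520, 640]
print(typing_profile(session))
```

Matching a fresh session's profile against a stored baseline is what lets a system suspect that the right password is being typed by the wrong hands.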
AI in Fraud Detection
Fraud detection, at least, is clearly working. JP Morgan’s COiN system can review loan contracts in seconds, saving hundreds of thousands of professional lawyer hours every year. PayPal reportedly cut its rate of false fraud alerts by 50 percent using deep learning algorithms. Some of this is quite simple, such as AI spotting impossible spending patterns, like purchases in two distant locations in quick succession.
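That last check, often called "impossible travel", can be sketched in a few lines: compute the distance between two transactions and flag the pair if the implied speed exceeds anything a flight allows. The coordinates and the 900 km/h threshold here are illustrative, not any provider's actual rule.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(tx1, tx2, max_speed_kmh=900):
    """tx = (lat, lon, unix_seconds). True if the implied speed is impossible."""
    dist = haversine_km(tx1[0], tx1[1], tx2[0], tx2[1])
    hours = abs(tx2[2] - tx1[2]) / 3600
    return hours == 0 or dist / hours > max_speed_kmh

london = (51.5, -0.13, 0)
new_york = (40.7, -74.0, 3600)  # one hour after the London purchase
print(impossible_travel(london, new_york))  # roughly 5,500 km in one hour
```

Production systems layer far more context on top (known travel, card-present vs. online), but the core geometry really is this straightforward.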
Bias and Fairness Concerns
Unfortunately, because AI is trained on data created by humans, it will inevitably inherit our biases despite our best efforts to keep them out. AI-based fraud detection, for example, risks unfairly treating specific groups such as sex workers, legal marijuana businesses, and anyone whose transactions fall outside the usual scope of what is considered a normal pattern. Because these algorithms learn from existing data, they absorb its biases as foundational knowledge without ever questioning why.
Feedback Loops and Unintended Consequences
We also risk feedback loops when relying on artificial intelligence. You can be denied credit because of a score that fell after you used predatory lenders in the past; the high interest rates that follow raise your default risk, which then reinforces the very assumption the AI made. On the sillier side of the same coin, one AI system linked increased financial responsibility to geographic proximity to fancy country clubs.
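The loop is easy to see in a toy model. Here the assumption (made up for illustration) is simply that default risk rises with the interest burden, so the same applicant looks riskier purely because they were denied mainstream credit.

```python
def default_prob(base_risk, annual_rate):
    # Illustrative assumption: default risk grows with the interest burden.
    return min(1.0, base_risk + 0.5 * annual_rate)

base_risk = 0.10                               # applicant's true underlying risk
mainstream = default_prob(base_risk, 0.08)     # approved at 8% APR
predatory = default_prob(base_risk, 0.40)      # denied, pushed to 40% APR

print(round(mainstream, 2), round(predatory, 2))
```

The applicant did not change; the system's own decision roughly doubled their observed default rate, and that observation becomes tomorrow's training data.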
The Social Credit Parallel
This all echoes China’s social credit system rather closely, and we are right to be a little alarmed at surrendering total control to the algorithm. Where does it end? If you buy alcohol, are you a higher insurance risk? If you sleep in late in the mornings, are you higher risk?
Western Lenders and Data Privacy
Of course, Western lenders claim to be different, insisting their data usage will be much more ethical. But words are cheap, and once this financial behavioral DNA becomes part of the AI system, it will be very hard to remove.
And then there is the whole concern over how all of this data is protected, and how long it will be before every little detail about you is sold to the highest bidder. What will they do with your footprint?

Alex Rivers is a cybersecurity analyst and founder of The Hack Today. With over a decade of experience in ethical hacking and digital threat analysis, Alex writes to make breaking security news accessible and actionable to everyone. He has worked with fintech startups, government bodies, and security firms to expose critical vulnerabilities before they could be exploited. When he’s not dissecting zero-day exploits, he’s deep-diving into bug bounty reports or walking his dog.

