Your credit score used to depend on five simple factors:

  • Payment history
  • Debt levels
  • Credit age
  • New accounts
  • Credit mix

Now an algorithm might deny your payday loan because you shop at discount stores, use all caps in text messages, or charge your phone at unusual hours. Welcome to the brave new world of AI-powered credit decisions, where machines know more about your financial future than you do.

The shift happened quietly. Traditional FICO scores evaluate around 20 variables. Modern machine learning models, by contrast, digest thousands of data points, everything from your social media activity to how quickly you scroll through terms and conditions. One Chinese online lender analyses 5,000 variables in seconds, including whether applicants let their phone battery drop below 20%. The reasoning? People who plan ahead keep their phones charged. Crazy stuff.

Banks claim these systems democratize credit access. Zest Finance reports their AI models approve 20% more applicants while reducing defaults by 30%. For “credit invisibles”—the 45 million people (in the US alone) with insufficient credit history—alternative data offers a path to payday loans and other ‘higher risk’ credit previously impossible to obtain. Your Netflix subscription payments, utility bill history, even your LinkedIn connections become tangible proof of creditworthiness.

But here’s where things get a wee bit uncomfortable. Apple Card faced investigation after multiple women received credit limits 20 times lower than their husbands’ despite higher credit scores. The algorithm couldn’t explain why. That’s the black box problem: when machines make decisions through complex neural networks, even their creators can’t always trace the logic. Your loan rejection becomes a convoluted data nightmare where nobody can tell you exactly what went wrong, because there’s simply too much information passing through too many layers of computation.

Behavioural biometrics add another layer of sci-fi surveillance to lending decisions. The pressure of your keystrokes, the angle you hold your phone, how you move your mouse—all create unique patterns that AI systems memorize. Banks tout this as the ultimate security, but it’s equally about assessment. Type too fast? Maybe you’re impulsive. Too slow? Perhaps you’re uncertain about finances. One payday lender purportedly discovered that people who typed in lowercase were better credit risks than those using proper capitalization.

The fraud detection side shows AI’s genuine value. JPMorgan’s COiN system reviews commercial loan agreements in seconds, work that consumed 360,000 hours of lawyer time annually. PayPal reduced false positives by 50% using deep learning, saving legitimate customers from wrongly frozen accounts. When someone buys gas in Cape Town then designer handbags in Moscow ten minutes later, AI doesn’t need a coffee break to spot the problem.
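The Cape Town-to-Moscow example boils down to a simple “impossible travel” check: if the distance between two consecutive card transactions implies a travel speed no human could manage, flag the pair. Here’s a minimal sketch of the idea; the transaction format, field names, and 900 km/h threshold are my own assumptions for illustration, not any bank’s actual rule:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(tx1, tx2, max_speed_kmh=900):
    """Flag two transactions if getting between them would require
    travelling faster than a commercial jet (~900 km/h)."""
    dist = haversine_km(tx1["lat"], tx1["lon"], tx2["lat"], tx2["lon"])
    hours = (tx2["time"] - tx1["time"]) / 3600  # timestamps in seconds
    if hours <= 0:
        return dist > 0  # simultaneous transactions in two places
    return dist / hours > max_speed_kmh

# Gas in Cape Town, then handbags in Moscow ten minutes later
cape_town = {"lat": -33.92, "lon": 18.42, "time": 0}
moscow = {"lat": 55.76, "lon": 37.62, "time": 600}
print(impossible_travel(cape_town, moscow))  # roughly 10,000 km in 10 minutes: flagged
```

Real systems layer hundreds of such signals and learned models on top, but the core intuition really is this blunt.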

Yet algorithmic fraud detection creates its own casualties. Sex workers, legal marijuana businesses, and anyone with “unusual” transaction patterns get flagged and frozen out of financial systems. The algorithms learn from historical data poisoned by human prejudice. If past lending discriminated against certain zip codes, then AI perpetuates that discrimination with mathematical precision.

Credit scoring through machine learning promises to eliminate human bias, but often it just launders discrimination through data. Variables that seem neutral (shopping locations, friend networks, device types) all correlate strongly with race and class. Ban those factors, and the algorithm finds proxies. It discovers that people who buy furniture on Tuesday afternoons default less, never asking why only certain demographics can shop during work hours.
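The proxy problem is easy to demonstrate with synthetic data: remove the protected attribute, and any “neutral” variable correlated with it carries the same signal straight into the model. A toy sketch, where the group sizes, probabilities, and the shopping variable are all invented for illustration:

```python
import random

random.seed(0)

# Synthetic applicants: 'group' is a protected attribute the model is
# forbidden to use; 'discount_shopper' is a seemingly neutral variable.
applicants = []
for _ in range(10_000):
    group = random.random() < 0.5
    # Invented assumption: group membership shifts shopping patterns
    discount_shopper = random.random() < (0.8 if group else 0.2)
    applicants.append((group, discount_shopper))

def correlation(pairs):
    """Pearson correlation between two paired variables."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

pairs = [(int(g), int(d)) for g, d in applicants]
print(f"proxy correlation: {correlation(pairs):.2f}")  # roughly 0.6
```

A model trained on `discount_shopper` alone still “knows” the banned attribute with that much fidelity; that’s all laundering means here.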

The feedback loops can get vicious. Denied credit because of your data profile, you turn to predatory lenders. Their high interest rates increase your default risk, “proving” the algorithm right. Meanwhile, errors in training data become enshrined as truth. One system learned to associate financial responsibility with living near country clubs, effectively redlining by GPS coordinates.

China’s social credit system previews one possible future: your financial options determined by an omniscient algorithm tracking every purchase, interaction, and digital footprint. Buy too much alcohol? Higher insurance premiums. Play video games past midnight? Lower credit limit. The system doesn’t judge as such; it just calculates correlations humans would never spot.

American lenders insist they’re different, but the trajectory points toward similar ends through market forces rather than government mandate. Every app on your phone, every website you visit, every product you buy teaches algorithms about your financial DNA. The promise of “financial inclusion” through AI masks a reality where your digital shadow determines your economic opportunities.

The solution isn’t abandoning AI but demanding transparency. The European Union’s “right to explanation” for algorithmic decisions offers one potential model. Explainable AI techniques can illuminate these black box decisions. Regular audits could catch discriminatory patterns before they become systemic. Until then, I’m afraid, we’re the guinea pigs in a vast experiment where machines learn to judge human creditworthiness through correlations we don’t understand, can’t challenge, and never consented to share in the bloody first place. The algorithm knows if you’ll repay that loan. It just can’t tell you why it knows, or whether what it “knows” is actually true.