You probably already know that most banks use artificial intelligence and behavioural analytics to detect fraud far earlier than they could before. Most of the time this happens before any money has actually changed hands.
The identity theft market is booming: global losses are estimated to exceed 10 billion US dollars, and in 2024 alone nearly 1.5 million Americans were directly affected by fraud. Safe to say that something needs to be done.
Building Your Digital Financial Fingerprint
These fraud prevention systems work by developing a type of fingerprint for everybody who uses them. It becomes your digital financial fingerprint, built from a plethora of different metrics. Much of it is focused on your buying habits, starting with when you buy, right down to what time of day you usually make your purchases. That could be impulse purchases as you lie in bed at night, or it could be as simple as the time of month you decide you need to buy dog food again because you’re running low. All of this builds a unique fingerprint.
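To make the idea concrete, here is a toy sketch of the "time of day" part of such a fingerprint. The purchase history, function names, and threshold are all invented for illustration; real systems model far more than a single feature.

```python
from collections import Counter

# Hours (0-23) of this customer's past purchases -- a habitual late-evening shopper.
purchase_hours = [21, 22, 23, 22, 21, 8, 22, 23, 21, 22]

def hour_frequency(history: list) -> Counter:
    """Count how often this customer buys at each hour of the day."""
    return Counter(history)

def is_unusual_hour(hour: int, history: list, min_seen: int = 2) -> bool:
    """Flag an hour the customer has rarely or never bought at before."""
    return hour_frequency(history)[hour] < min_seen

# A 3 a.m. purchase is out of character for this shopper and gets flagged:
flagged = is_unusual_hour(3, purchase_hours)
```

A real system would combine many such features (merchant type, amount, location) rather than flagging on one alone, precisely so that an occasional odd purchase doesn't lock you out.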
Alongside this, these systems use biometric factors as well. We are no strangers to this; almost every smartphone nowadays offers facial recognition, fingerprint unlock, or both. But this actually goes a step further. Again, we’re getting into granular details: how you swipe across your phone, what direction you swipe from, the pressure you apply with your thumb, the speed and rhythm of your typing, the angle at which you hold your phone. All of these are totally subconscious factors that inform a unique fingerprint and are very hard to fake.
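One simple way to picture behavioural biometrics is as a feature vector compared against an enrolled profile. The features, values, and threshold below are hypothetical; this is a minimal sketch of the distance-from-profile idea, not any bank's actual model.

```python
import math

# Hypothetical behavioural features captured during a session: average typing
# interval (ms), swipe speed (px/s), touch pressure (0-1), phone tilt (degrees).
ENROLLED_PROFILE = {"typing_ms": 180.0, "swipe_px_s": 950.0,
                    "pressure": 0.42, "tilt_deg": 35.0}

def anomaly_score(session: dict, profile: dict) -> float:
    """Root-mean-square relative deviation between a session and the profile."""
    total = 0.0
    for key, baseline in profile.items():
        deviation = (session[key] - baseline) / baseline
        total += deviation ** 2
    return math.sqrt(total / len(profile))

# A session that types slowly, swipes sluggishly, and presses hard:
session = {"typing_ms": 410.0, "swipe_px_s": 300.0, "pressure": 0.9, "tilt_deg": 10.0}
score = anomaly_score(session, ENROLLED_PROFILE)
flagged = score > 0.5  # illustrative threshold
```

This is why these signals are hard to fake: a fraudster would need to match your subconscious rhythms across every dimension at once, not just steal a password.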
Multi-Factor Authentication Evolution
Multi-factor authentication is also evolving, which is ultimately a good thing. We’re not just talking about text messages being sent to your phone anymore. Financial institutions are using more sophisticated, invisible, layered scoring that assigns risk based on factors such as the device you use, where you’re based, the network you’re connecting from, and many others.
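The layered scoring described above can be sketched as a weighted combination of risk signals that decides how much authentication to demand. Signal names, weights, and thresholds here are purely illustrative assumptions.

```python
# Hypothetical risk signals and weights -- not any real institution's values.
WEIGHTS = {"new_device": 0.4, "unusual_location": 0.3,
           "untrusted_network": 0.2, "odd_hour": 0.1}

def risk_score(signals: dict) -> float:
    """Weighted sum of boolean risk signals, between 0.0 and 1.0."""
    return sum(WEIGHTS[name] for name, present in signals.items() if present)

def required_auth(score: float) -> str:
    """Step-up authentication: the riskier the login looks, the stronger the check."""
    if score < 0.3:
        return "password only"
    if score < 0.6:
        return "one-time code"
    return "biometric re-verification"

# A login from a new device in an unfamiliar city:
login = {"new_device": True, "unusual_location": True,
         "untrusted_network": False, "odd_hour": False}
challenge = required_auth(risk_score(login))
```

The appeal of this design is that low-risk logins stay frictionless while suspicious ones trigger stronger checks, which is exactly what makes it "invisible" most of the time.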
This gets a little ‘1984-ish’ for my taste, because these analytics go as far as assigning risk based on how you behave. They’re making best guesses about your risk from the behaviour of the herd. For example, if you use capital letters or exclamation points too much, there are supposedly algorithms that flag you as higher risk because you seem more impulsive. If you let your battery drop below 10% and trigger the low-battery warning, you’re higher risk because you’re not planning ahead.
This all reminds me of the Chinese social credit system. How far are we really from your dad or your brother or a close friend being irresponsible with their money, and, because they’re part of your social circle, you start to incur premiums on your insurance, higher interest rates, or eventually find yourself disqualified from getting credit altogether? I’m not saying that’s likely, but I think we need to shine a torch on both the pros and the cons. Nobody’s denying that this will keep people safer, but there is also a dangerous undercurrent flowing here that gives financial institutions much greater control.
Facial Recognition: Powerful but Flawed
Another controversial subject to discuss here is facial recognition technology. Again, we’re all familiar with this; it is central to, and often the first line of defence against, fraud. Systems using this technology are becoming increasingly good at detecting a real face, but there are still plenty of challenges, and multiple weaknesses are being exposed. We’ve all heard of deepfake technology and the obvious risks it poses when facial recognition is the primary method for detecting unauthorised use.
To get around this, banks are using a series of different photos rather than a single still image. They are implementing ‘liveness detection’ techniques: things like asking you to blink, turn your head, or make different faces, plus lots of subtle micro-movements that AI struggles to keep up with, for now at least…
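The key trick in liveness detection is randomness: if the server picks the challenges fresh each time, a pre-recorded video or static deepfake can't anticipate them. Here is a toy sketch of that flow; the challenge names and verification step are purely illustrative, and a real system would use a vision model to extract what the camera actually observed.

```python
import secrets

# Illustrative prompt pool -- real systems use their own challenge sets.
CHALLENGES = ["blink twice", "turn head left", "turn head right", "smile"]

def issue_challenges(n: int = 2) -> list:
    """Randomly pick n distinct prompts for this verification session."""
    pool = list(CHALLENGES)
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n)]

def verify_session(issued: list, observed: list) -> bool:
    """Pass only if every issued prompt was performed, in order."""
    return observed == issued

prompts = issue_challenges()
# In production, `observed` would come from analysing the live video frames.
```

Using `secrets` rather than `random` matters here: the whole defence rests on the attacker being unable to predict the prompts.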
Privacy Trade-Offs and Future Risks
There are, of course, some serious privacy trade-offs here. This vault of biometric data points—your behaviour, your location, how you live your life—makes an insanely detailed profile of you as a person and everybody you know. Whilst security is improving, it’s inevitable that there will be future data breaches. I believe there always will be, and the value of the data that gets leaked will be greater than any of us can fully comprehend yet. Just imagine someone gaining access to everything that makes you ‘you’ online. What could they do?
I don’t want to fearmonger unnecessarily, but we need to be thinking about this and looking at what the future could hold. Security development will continue to roll out. There will be quantum-resistant encryption designed to withstand the hacking efforts of the supercomputer era. There will be decentralised identity verification and self-sovereign identity, letting users control their own biometric data, along with further advancements in AI that can predict fraud long before it happens.
I believe there is a middle ground here where we can have effective digital footprints that help protect us without falling off the cliff of data control.

Alex Rivers is a cybersecurity analyst and founder of The Hack Today. With over a decade of experience in ethical hacking and digital threat analysis, Alex writes to make breaking security news accessible and actionable to everyone. He has worked with fintech startups, government bodies, and security firms to expose critical vulnerabilities before they could be exploited. When he’s not dissecting zero-day exploits, he’s deep-diving into bug bounty reports or walking his dog.