Last week the commissioner of the Metropolitan Police made a statement suggesting that he believes one of the causes of bank fraud is the lax security practices of bank customers, namely not using anti-virus software and not being careful with their passwords.

To be clear, the real cause of bank fraud is the group of people who have decided to make their living by stealing other people's money. There doesn't seem to be a way to convince them to stop, and the current international legal climate makes it difficult to prosecute the people behind these crimes when they operate across international boundaries, so instead we have to focus on how to make systems more resistant to their attacks.

Sir Bernard Hogan-Howe does have a point in that consumers do often exercise lax security practices, but blaming those practices for the ease with which accounts can be compromised is disingenuous. We can't expect everyone to be a computer security expert; it seems a bit harsh to hold an 80-year-old grandmother liable for the loss of her savings because she didn't recognize a phishing email.

Even in a hypothetical ideal world where all of our users are infallible security experts, we still can't trust the endpoint. The perfect user may have hardened the security configuration of their computer, kept their patches and anti-virus completely up to date, chosen a strong and unique password, and never fallen for a phishing email, but there will always be a zero-day vulnerability that can't be patched and a new piece of malware that anti-virus can't detect. After all, if multi-billion dollar corporations with dedicated network security teams can't protect themselves, there's no way we can expect any one user to do so.

When designing a consumer-facing service (or any system, for that matter) we have to accept that the users aren't going to know how to protect themselves and will likely do some very silly things that might endanger their own security and the security of the system. Even the Met Police itself is not immune to users behaving badly. We should design systems in such a way that we make no assumptions about the security of the endpoint, so we must verify every action users take. The more significant or unexpected the action, the more verification should be involved. Paying your monthly electric bill? No problem. Wiring a large amount of money to a new account in Russia? We might want to double-check on that.
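The idea of matching verification to risk can be sketched in a few lines. This is a hypothetical illustration, not any bank's actual logic: the threshold amount, the country list, and the three verification tiers are all invented for the example.

```python
# Hypothetical step-up verification: the more unusual the transfer,
# the stronger the check. All thresholds and lists are illustrative.

ROUTINE_LIMIT = 500                  # below this, a known payee needs no extra check
HIGH_RISK_COUNTRIES = {"RU", "NG"}   # purely illustrative

def required_verification(amount, payee_is_new, payee_country, home_country="GB"):
    """Return the verification tier a transfer should trigger."""
    score = 0
    if amount > ROUTINE_LIMIT:
        score += 1
    if payee_is_new:
        score += 1
    if payee_country != home_country or payee_country in HIGH_RISK_COUNTRIES:
        score += 1

    if score == 0:
        return "none"           # e.g. paying the monthly electric bill
    if score == 1:
        return "second_factor"  # e.g. a one-time code sent to the user's phone
    return "manual_review"      # e.g. a phone call or human sanity check
```

The monthly electric bill (small amount, known payee, same country) sails through with no friction, while a large wire to a new foreign account accumulates enough risk to warrant a human review.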

We already have the tools to make this type of fraud more difficult. Two-factor authentication has been a go-to solution for quite a while, utilizing either a physical token or a number sent to a mobile phone for secondary authentication. Banks are already intimately familiar with two-factor authentication; they've been using it for years in the form of ATM cards: the card itself is the physical token ("something you have"), while the PIN is "something you know". They seem to have abandoned what they already knew as they moved online, in favor of simple passwords which are much easier to defeat.
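The "number sent to a mobile phone" style of second factor is commonly generated with the TOTP algorithm standardized in RFC 6238: an HMAC over the current 30-second time step, truncated to a few digits. A minimal sketch using only the Python standard library:

```python
# Minimal RFC 6238 TOTP sketch: HMAC-SHA1 over the time-step counter,
# then dynamic truncation to a short numeric code.
import hmac
import hashlib
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", at // step)           # big-endian 64-bit time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the user's device derive the code from a shared secret and the current time, a stolen password alone is no longer enough to log in; the attacker would also need the device.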

There are other solutions that can be employed as well: checking the geographic location of an IP address, requiring time delays or secondary verification on certain types of transactions, utilizing certificate files to control which computers the account can be accessed from, comparing against historical data, using human "sanity check" reviews, etc. None of these is likely to eliminate fraud entirely, but each should reduce it.
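As one concrete example of "comparing against historical data", a transaction can be flagged when it is far outside a user's normal spending pattern. The sketch below is an illustrative heuristic, not a production fraud model; the three-sigma threshold is an assumption.

```python
# Hypothetical anomaly check: flag an amount well above the user's history.
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations above the mean."""
    if len(history) < 2:
        return True  # not enough history to judge; verify by default
    mu = mean(history)
    sigma = max(stdev(history), 1.0)  # floor to avoid flagging tiny variations
    return amount > mu + threshold * sigma
```

A real system would combine several such signals (geolocation, payee history, transaction velocity) rather than rely on any single one.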

Of course there is a reason why banks and other online services that are the subjects of fraud do not always employ these types of security measures: they add cost and complexity. Banks are constantly weighing the cost of fraud against the cost of adding additional security measures. These costs are one of the reasons why the US has taken so long to switch to EMV (chip-based payment cards) despite its track record (pdf) for card-present fraud reduction in Europe.

Those costs don't just include the expenditure to implement and maintain additional security measures, they also must account for the number of consumers that simply decide that the additional security features are too much of a headache and switch to a competitor. We see this here in the US with the move to EMV: we have been given Chip and Sign cards instead of Chip and PIN, and even then the chips are only being used rarely. One of the reasons behind this decision is that banks and processors are worried that consumers will forget or forgo their PIN and use cash instead, depriving them of lucrative transaction fees.

To bring this full circle, perhaps the liability for fraud losses is already where it should be: the banks have the ability to make their applications more resistant to fraud, regardless of whether the event is the result of grandma falling for a phishing email or a security expert getting infected by the latest stealth malware, but they have made a decision to err on the side of convenience rather than security. The banks' liability for fraud losses is then the just "punishment" for the decision they have made, an "acceptable risk" on their balance sheets, and a risk that Grandma and the rest of us really have no control over.

Far from only being applicable to banks, this should be a lesson to any organization that runs a consumer-facing service. Many Internet applications contain data that is of some value to somebody, and often the only party that can effectively secure that data is the organization that created the system in the first place. This isn't to say everything should have tight controls all the time, but they could at least be made an option. Or better yet, make security the default and only allow users to disable these sorts of measures after adequate warnings.