Bill Nagel

Yesterday’s announcement that the Clear service could soon be baaaaack, along with a spate of recent client questions on electronic credentials and biometrics, has triggered this post.

My colleague Andrew Jaquith’s analysis of the myriad problems with the way that Verified Identity Pass and the TSA handled the Clear shutdown in June (including the potential for customers’ PII to be sold off) was spot on.

So with the prospect of the return of a registered traveler program in the US, the ongoing debate over REAL ID, and the slow but steady advances in the use of various biometric technologies for identity authentication, it’s worth taking a step back to separate the wise from the foolish – and, hopefully, to knock down a few pervasive myths while we’re at it.

What separates a “good” implementation that protects privacy and civil liberties from a “bad” one? The upshot: The devil’s in the details – and, quite often, in the database.


Worry #1: What if my biometric's stolen? I can't replace my fingers, after all.

This is indeed a valid concern if: (1) the full fingerprint (face, iris, etc.) scan is stored; (2) it’s stored in the clear; and (3) subsequent scans to authenticate a person compare the full fresh scan with the stored full scan. In the case of the FBI and other law enforcement agencies, at least (1) and (3) are likely to be true. And if these databases proliferate and are used by other agencies, then maybe the anti-gummint folks are on to something.

But this is simply not the case for the vast majority of biometric implementations. For example, when you enroll a fingerprint, the software chooses a few dozen discrete points (called “minutiae”) from that scan and applies a hash function to them, essentially boiling down the fingerprint data to a single number. Authenticating involves taking a fresh fingerprint scan, extracting the same minutiae from it, and hashing them. If the two hash values match, the user is authenticated. There’s no way to reverse-engineer the full fingerprint from either the raw or hashed minutiae.
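To make that concrete, here’s a minimal sketch of the enroll-and-verify flow just described. Every name in it (extract_minutiae, template_hash, and so on) is invented for illustration – this is no vendor’s actual API – and it glosses over the fact that two scans of the same finger never come out byte-identical, which is why production systems use error-tolerant template schemes rather than a naive exact-match hash:

```python
import hashlib

def extract_minutiae(scan: bytes) -> list[tuple[int, int, int]]:
    """Pick out discrete feature points (ridge endings, bifurcations)
    from a raw scan. Real matchers do heavy image processing here; this
    stand-in just derives a few (x, y, angle) triples from the raw
    bytes so the sketch runs end to end."""
    return [(scan[i], scan[i + 1], scan[i + 2])
            for i in range(0, min(len(scan) - 2, 36), 3)]

def template_hash(minutiae: list[tuple[int, int, int]]) -> str:
    # Canonicalize the points (sort them) so the same finger always
    # yields the same byte string, then boil it down to a single number.
    canonical = repr(sorted(minutiae)).encode()
    return hashlib.sha256(canonical).hexdigest()

def enroll(scan: bytes) -> str:
    # Only this derived number is stored -- never the fingerprint image.
    return template_hash(extract_minutiae(scan))

def verify(fresh_scan: bytes, stored_hash: str) -> bool:
    # Hash the fresh scan the same way and compare the two numbers.
    return template_hash(extract_minutiae(fresh_scan)) == stored_hash
```

The privacy point survives the simplification: what sits in the database is a one-way derived value, not the fingerprint itself.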


Worry #2: There's no way they can properly secure all of that network traffic phoning home to check my (biometric) ID credentials or sync the local and central databases.

This is likely to be true in some cases – and that’s why it’s important to understand the distinction between one-to-one (1:1) and one-to-many (1:N) matching.

1:1 matching asks the question, “Is this person in front of me the same person as on this credential?”, whereas 1:N matching asks, “Who is this person? Search everyone’s records for a match.” Ideally, 1:1 matching would be used for the everyday task of authenticating people, and 1:N would be reserved for instances where there is a pressing need to identify someone.
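The contrast is easy to see in code. A rough sketch – the function names and the idea of storing hashed templates keyed by person are mine, purely for illustration:

```python
# 1:1 -- "Is this person the same person as on this credential?"
# Needs exactly one stored template: the one tied to the credential.
def verify_1_to_1(live_template: str, credential_template: str) -> bool:
    return live_template == credential_template

# 1:N -- "Who is this person?"
# Needs read access to EVERYONE's templates to answer at all.
def identify_1_to_n(live_template: str,
                    everyone: dict[str, str]) -> str | None:
    for person_id, stored_template in everyone.items():
        if stored_template == live_template:
            return person_id
    return None
```

Note what 1:N demands that 1:1 doesn’t: a big, reachable database of everyone’s templates – which is exactly the asset an attacker wants.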

1:1 matching is far more secure and does a better job of protecting privacy (and it’s faster and more efficient, too!). At the point of authentication there’s no need to retrieve your data from a remote central database (risking network-based attacks) or to check it against a local copy of such a database (risking the kind of data leak that scattered copies invite).

Amsterdam’s Schiphol Airport has a registered traveler program called Privium that uses 1:1 matching. Travelers’ iris scan data is stored on a secure smart card chip. When travelers need to pass through security or passport control, they stick the card in the reader, scan their iris, and if the live scan matches the one stored on the card, they’re good to go. No database to compromise at the point of authentication.
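In code, that match-on-card flow looks roughly like the sketch below. The Privium card’s actual interface isn’t public, so treat this as an illustrative stand-in, not a description of their implementation:

```python
class SmartCard:
    """Stand-in for the secure chip. The enrolled template is written
    once and never leaves the card; the comparison itself runs on the
    chip."""

    def __init__(self, enrolled_template: bytes):
        self._template = enrolled_template  # written at enrollment

    def match_on_card(self, live_template: bytes) -> bool:
        # Real chips run a fuzzy matcher here; strict equality stands
        # in for it in this sketch.
        return self._template == live_template

def gate_check(card: SmartCard, live_iris_template: bytes) -> bool:
    # Note what's absent: no network call, no central database lookup.
    # The only data involved is on the card and in the live scan.
    return card.match_on_card(live_iris_template)
```

The design choice is the whole point: with the template on the card and the match on the chip, there’s simply no central store of iris data to breach.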


Worry #3: The electronic ID card I would have to carry is the biggest threat to my liberty and privacy.

This scenario has been playing out in the UK for several years now. Unlike citizens of other European countries with electronic IDs, the British are not happy with the “liberty-killing” prospect of having to carry and show identification. The Government has obliged by keeping the terms of the debate focused on the cards themselves, while in the background quietly assembling and cross-linking extensive databases of intrusive and critical personal information, such as DNA records.

The proliferation of poorly secured databases containing large volumes of critical personal information is a far bigger threat to privacy and civil liberties (leaving aside the question of ubiquitous CCTV cameras for another day) than the requirement to produce ID on demand – particularly when the database holder has a rather checkered history of data security.

That’s why I say, “It’s the database, stupid.” Right, that’s enough for now – tune in next time to find out why airport security really works.