Saturday, 13 December 2003

Abusable Technologies

Ed Felten (of Freedom to Tinker) wrote yesterday that he is involved in a new venture called the Abusable Technologies Awareness Center. This looks like a great project.

I would like to comment briefly on one post in ATAC's weblog, "Face Recognition and False Positives." That post points out "a classic security mistake: ignoring the false positive problem." I addressed this issue in "Static Measurements & Moving Targets," my law-school thesis paper on biometrics and privacy in the context of consumer banking. In that paper, I approached the problem from the opposite perspective: Ed describes facial recognition in an identification application, whose goals differ substantially from those of an authentication application.
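
To see why ignoring false positives is such a costly mistake, here is a back-of-the-envelope calculation. All of the numbers are invented for illustration; they are not drawn from Ed's post or my paper, only from the standard base-rate argument:

```python
# Hypothetical numbers to illustrate the false-positive problem.
# Assume a face-recognition system with a 99% true-positive rate and a
# 1% false-positive rate, scanning a crowd in which 1 person in 10,000
# is actually on the watch list.

true_positive_rate = 0.99     # P(flagged | on the list)
false_positive_rate = 0.01    # P(flagged | not on the list)
prevalence = 1 / 10_000       # P(on the list)

# Bayes' theorem: P(on the list | flagged)
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
posterior = true_positive_rate * prevalence / p_flagged

print(f"P(actually on the list | flagged) = {posterior:.1%}")
# Roughly 1% -- with these assumptions, the overwhelming majority of
# people the system flags are false positives.
```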

The designer of an application that flags passers-by as registered sex offenders has an incentive to overinclude suspects for security reasons — that is, to err on the side of false positives. The designer of an ATM authentication application, on the other hand, has the opposite incentive — to err on the side of false negatives, to prevent fraud. The point is that false positives are not solely a privacy issue: they also represent a security risk, depending on the context.
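
The opposing incentives come down to where each designer sets the match-score threshold. The following is a minimal sketch, with made-up scores and thresholds, of how raising the threshold trades false accepts for false rejects:

```python
# Invented match scores, purely to illustrate the threshold trade-off.

def accepted(score, threshold):
    """Accept a biometric sample if its match score clears the threshold."""
    return score >= threshold

genuine_scores = [0.92, 0.88, 0.75, 0.95, 0.81]   # legitimate users
impostor_scores = [0.40, 0.65, 0.72, 0.30, 0.55]  # impostors

for threshold in (0.5, 0.7, 0.9):
    false_rejects = sum(not accepted(s, threshold) for s in genuine_scores)
    false_accepts = sum(accepted(s, threshold) for s in impostor_scores)
    print(f"threshold={threshold}: "
          f"false rejects={false_rejects}, false accepts={false_accepts}")

# The ATM designer leans toward a high threshold (fewer false accepts,
# less fraud); the watch-list designer leans toward a low threshold
# (fewer missed suspects, more false positives).
```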

That said, I do agree with Ed's basic point, as I wrote back in October ("Terrified of Terror Profiling?"). I supported the point there with links to articles by computer security expert Bruce Schneier and mathematician John Allen Paulos.

Posted at 5:07:54 PM | Permalink

Trackback URL: http://www.danfingerman.com/cgi-bin/mt-tb.cgi/116
Topics: Civil Liberties, Cyberlaw, Politics, Privacy, Skeptical Inquiry, Technology