Wednesday, May 10, 2006

Security Research and Computer Crime - Where do we Draw the Line?

This is interesting - the case of Eric McCarty, a security researcher and sysadmin charged by Federal prosecutors last month with "knowingly having transmitted a code or command to intentionally cause damage" to the University of Southern California's applicant website (I noticed the FBI press release uses the word "sequel" instead of SQL. I hope that wording didn't come from the complaint itself...).

Apparently, McCarty exploited a SQL injection flaw to access student data (including social security numbers and dates of birth) in the database backing USC's website. He then notified SecurityFocus via email, and they in turn notified USC of the vulnerability. USC shut the site down for two weeks while the flaw was fixed (my guess is the "damage" comes from the fact that USC had to take its applicant website offline, since McCarty didn't do anything malicious with the information). Here is the text of the statute he is alleged to have violated (see section (5)(A)(i)).
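
The complaint doesn't spell out the exact request McCarty sent, but the general shape of a SQL injection flaw is easy to sketch. The snippet below is a hypothetical illustration in Python against a throwaway SQLite database (the table and column names are made up, not USC's actual schema): a query built by pasting user input into the SQL string can be subverted, while a parameterized query treats the same input as plain data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE applicants (username TEXT, password TEXT, ssn TEXT)")
    conn.execute("INSERT INTO applicants VALUES ('jdoe', 'hunter2', '123-45-6789')")

    def lookup_vulnerable(username, password):
        # DANGEROUS: user input is spliced directly into the SQL string
        query = ("SELECT ssn FROM applicants WHERE username = '%s' "
                 "AND password = '%s'" % (username, password))
        return conn.execute(query).fetchall()

    def lookup_safe(username, password):
        # Parameterized query: the driver treats the input strictly as data
        query = "SELECT ssn FROM applicants WHERE username = ? AND password = ?"
        return conn.execute(query, (username, password)).fetchall()

    # A classic injection payload defeats the string-built query...
    print(lookup_vulnerable("jdoe", "' OR '1'='1"))  # returns the SSN row
    # ...but gets nowhere against the parameterized one.
    print(lookup_safe("jdoe", "' OR '1'='1"))        # returns []

Fixing a single query like this is trivial; auditing a whole legacy application for the same pattern is presumably what takes the time.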

This case, and others like it, shows the ethical conflict involved in some computer crime prosecutions. In particular, it reminds me of Randal Schwartz's case from way back in 1993. The same issues were raised back then - Schwartz was running the Crack program to disclose weak passwords, but without authorization. In the end, he was convicted of three state felony charges.
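
For readers who weren't around in 1993: Crack (and tools like it) works by hashing candidate passwords from a wordlist and comparing the results against the stored hashes, flagging accounts whose passwords turn up in a dictionary. Here's a rough, hypothetical sketch of the idea in Python, using SHA-256 via hashlib as a stand-in for the crypt(3) scheme the real tool targets:

    import hashlib

    def pwhash(pw):
        # Stand-in for the crypt(3)/DES hashing the real Crack tool attacks
        return hashlib.sha256(pw.encode()).hexdigest()

    # Hypothetical stored credentials: username -> password hash
    stored = {
        "alice": pwhash("letmein"),     # weak, appears in any wordlist
        "bob":   pwhash("x9!Tq$77kL"),  # not in the short wordlist below
    }

    wordlist = ["password", "letmein", "qwerty", "123456"]

    for user, stored_hash in stored.items():
        for guess in wordlist:
            if pwhash(guess) == stored_hash:
                print("%s: weak password '%s'" % (user, guess))
                break

The technique is the same whether or not you have authorization - which is exactly why permission and intent, not the code itself, are what these prosecutions turn on.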

Unfortunately, the law is pretty clear in these cases. It appears McCarty violated Federal law, a felony that could land him in jail for up to ten years. That seems quite excessive, however, given McCarty's intent. Perhaps some exceptions are needed in various Federal and state computer crime statutes to allow for legitimate security research, although the question of what is legitimate and what is not could get murky. Sure, USC had to take down their site for two weeks (does anyone else but me think that's a long time to fix a SQL injection bug?), but just think of how long it would have been down after a real compromise. While such an exception might encourage security research, you could also see it being used as a loophole by crackers with malicious intent to escape prosecution. "Really, detective, I was just testing their security." Web application security testing can also be dangerous, since a probe can unintentionally cause database or web server outages. For now, the risks of doing "stealth" research just are not worth it, whatever the intent.


3 comments:

m said...

He embarrassed USC, committed an act which most people can't understand, and this is therefore a "crime".

I cracked a couple of systems with the sysadmin's or some other responsible person's permission. Usually the impetus was provided by some boasting. I admit it was fun to hand a list of 80% of a system's user IDs and passwords to management, and watch the smirks fall off their faces.

Even though I always did it with permission, nobody was ever grateful. I decided to stop before I got too many people ticked off at me.

Randal L. Schwartz said...

And just so that comment isn't confused with my case, I'll add that my prosecution was (wild guess) pushed by the embarrassment I caused as well, and not related at all to any "damage" except from a misunderstanding.

Doug said...

Thanks for the comments. I know from the legitimate security assessment work I've done over the years that there is frequently an adversarial relationship between you, the consultant, and the IT staff where you are working. They view you as an outsider with the power to embarrass them or even cost them their jobs. I think this stems from the fact that at many businesses, the IT staff know that security is a farce, hidden behind partially working or poorly maintained infrastructure. Sometimes they just have no idea what security is about, and don't want their superiors to know.