It was pretty fascinating to see that John Cardillo of Sentry had gone to TechCrunch to break the Facebook sex offenders story. I spent a year on and off talking with Cardillo so I could pitch his technology to the big portal company I was employed by at the time, and this story brought back all kinds of memories of my big company’s senior staff reactions when we tried to get them to let Cardillo present his technology to them.
My interest was in protecting people, particularly women, who were meeting others online. The whole idea that we could screen out someone during the sign-up process, rather than bounce them later when someone complained, was pretty compelling to my team. However, when we pitched the service to upper management, we got shot down. Our executives were extremely concerned about the appearance of accepting liability and what having to screen for offenders might mean, especially if the big company didn’t deploy the technology across all of its properties.
The questions from management when we showed them how the screening worked went as follows:
- If we proactively remove sex offenders from our service, are we guaranteeing or implying we can guarantee a standard of safety we’re actually not prepared to enforce?
- What is our liability if we miss a few?
- What if we say someone is an offender who isn’t? Can that person sue our ass?
- What if we say some little college boy who had sex with his high school girlfriend is an offender and we bump him? We’re men, and the fact that guys like that can get singled out makes us very uncomfortable. (Really, an SVP type said that.)
However, I had to confess to mixed feelings on all this.
You know, when you think about it, the odds that there are sex offenders in any social network are fairly high. But the odds that these sex offenders are felony offenders, with solid IDs and solid convictions, identifiable for sure, are far less certain. Even in a database as savvy as Cardillo’s, the possibility of a false match to a user profile has not been quantified.
And even if it were, is being a sex offender the only thing worth guarding against?
How about the people who were convicted not of a sexual offense, but of assault and battery against a partner? And the ones with communicable STDs?
Does the service find a way to get data on all these issues and police the community? Or is being convicted as a sex offender a worse crime than, say, beating your partner? If you have the technical means to track crime, should you?
For MySpace, with its high numbers of teens, the deep partnership with Cardillo’s company has been key in deflecting criticism; on the more staid Facebook, screening out offenders with a third party seems a lot more complicated (as in, what if you are wrong?).
My point here is that while saying your service has been wiped clean of convicted sex offenders is definitely both good marketing and a reflection of what women and parents want, the whole concept of policing a service is a slippery slope.
Is this a big problem, or are we just watching Cardillo use TechCrunch to rail on Facebook, which TC, wanting page views, is all too happy to do?