BeehiveID: Using Super Powers for Good


With 290,000 cases of internet fraud reported to the FBI, it’s easy to see why BeehiveID co-founders Mary Haskett and Alex Kilpatrick believe online identity security is broken.

The two have been friends since their teenage years, when they met at skydiving school.  However, they didn’t set out to create a system that would identify online con artists until much later.

The idea started while Haskett was running large-scale biometric systems on a defense contract with the U.S. government in the Middle East. She said it became too difficult to work overseas, and she wanted to use the technology closer to home.

“[We said] let’s try and find a way to use our super powers for good.”

“We both felt like the way we do identity online is fundamentally broken,” she said. “[And we asked ourselves] can we find a way to use this amazing technology that’s incredibly cool that doesn’t require us to fly to Kabul?”

They developed a system that, much like a credit score, measures how confident it is that a profile belongs to a real person rather than a bot. A low score – 400 or below – usually indicates a bot or fake account, while a high score – up to the maximum of 850 – means the account likely reflects a valid user.
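BeehiveID’s actual scoring model is proprietary; the sketch below only illustrates the credit-score-style banding described above. The 400 cutoff and 850 ceiling come from the article, but the band labels and the lower bound of 0 are assumptions for illustration.

```python
def interpret_score(score: int) -> str:
    """Interpret a BeehiveID-style confidence score.

    The article gives two anchor points: 400 or lower usually means a bot
    or fake account, and the scale tops out at 850. The labels returned
    here are illustrative, not BeehiveID's actual categories.
    """
    if not 0 <= score <= 850:
        raise ValueError("score must be between 0 and 850")
    if score <= 400:
        return "likely bot or fake account"
    return "likely valid user"

print(interpret_score(802))  # the author's score -> "likely valid user"
```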

A portion of the author’s BeehiveID score page.

After getting a grant through TechStars and a successful demo day, the Austin-based start-up launched in March 2013. So far, Haskett and Kilpatrick said they’ve analyzed over 10,000 profiles for dating and social media sites, including mine.

After about 10 minutes of processing, BeehiveID reported that my personal Facebook profile got a score of 802 – which Haskett said is a very high score.

To get my score, BeehiveID looked at most of my Facebook data, from my very first post – in 2006, when I was just beginning my sophomore year of undergrad  — to the post I made a few minutes before I clicked on the BeehiveID site. They looked at some of my friends — the ones I have the most interaction with on Facebook over time — the number of posts and comments I’ve left, and some of the ancillary online friendships I’ve made over the years.

All this data – about 100 megabytes, roughly the size of a couple of encyclopedia volumes – was used to assess the strength of my personhood.

“Now that people have been using Facebook for several years, they’ve created this pattern of interaction,” Kilpatrick said. “They’re posting things people are commenting on. They’re commenting on other people’s posts… All these things connect together and build a network. And we’re looking at the density of that network as well as the age of that network itself.”

BeehiveID’s complex dig through megabytes of online data to prove a person’s validity isn’t much different from how we decide who is trustworthy and who is not offline, Haskett said.

“You use Facebook over time, you put in a little bit of effort every day. It’s not something that you really think about, but it’s actually representing the thousands of hours, hundreds of thousands of hours [you’ve spent] to create these connections,” Kilpatrick said. “We basically work on the premise that people who want to scam the system want to hide their tracks – they don’t want to be identifiable, but as you use Facebook you actually are identifying yourself.” A bot is more likely to have a shallow network, while a real person will have a complex, “deep” network.

Photo Courtesy of BeehiveID
Left: A visualization of a complex network built by a verified profile. Right: A visualization of a shallow network built by a bot or fake profile. Each dot represents a different connection to the owner of the profile.
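The deep-versus-shallow distinction Kilpatrick describes can be sketched with a standard graph measure: density, the fraction of possible friend-to-friend links that actually exist. BeehiveID’s real analysis is far more involved (it also weighs the network’s age, posts, and comments), so this is only a minimal illustration on two hypothetical toy networks.

```python
from itertools import combinations

def network_density(edges, nodes):
    """Fraction of possible connections among `nodes` that exist in `edges`."""
    possible = len(nodes) * (len(nodes) - 1) / 2
    return len(edges) / possible if possible else 0.0

# Hypothetical "real user": friends also interact with each other (dense mesh).
real_nodes = {"A", "B", "C", "D", "E"}
real_edges = set(combinations(sorted(real_nodes), 2))  # fully connected

# Hypothetical "bot": a star graph -- contacts never interact with one another.
bot_nodes = {"hub", "x1", "x2", "x3", "x4"}
bot_edges = {("hub", n) for n in bot_nodes if n != "hub"}

print(network_density(real_edges, real_nodes))  # 1.0 (deep, dense network)
print(network_density(bot_edges, bot_nodes))    # 0.4 (shallow star network)
```

A real user’s friends tend to know each other, pushing density up; a bot’s contacts rarely interact with one another, leaving a sparse, star-shaped network like the one in the right-hand visualization above.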

Haskett said after BeehiveID is done scoring a profile, all data is erased from the system. For privacy’s sake – something both Haskett and Kilpatrick have a history of championing – BeehiveID doesn’t keep any personal information.

When they were in Afghanistan, they noticed that how a person there defines their identity was very different from how we define it in the U.S. today.

“They had an interesting idea on identity that in some ways was weaker and in some ways was stronger. In the U.S. your identity is tied… to [government] documents, which are effectively as strong as the weakest link,” Kilpatrick said.  “[In the Middle East] their notions of identity were tied to tribe and family. You could fake this government document [over there], but if you really wanted to prove to someone your identity, you had to know somebody in common. You had to have this network effect. They didn’t care about the documents.”

Haskett said that’s a level of safety that we don’t have in the U.S. anymore. “The internet breaks that entirely. We’re now doing business with strangers across the world and there’s a lot of good from that but there’s also just this enormous opportunity for fraud and deception,” she said.

Mary Haskett, CEO, president and co-founder.

A February 13, 2013 report by George Mason University researchers studied whether deception and trustworthiness can be detected through the language used in written messages. According to the report, humans correctly identify a fraudulent written message only about 56 percent of the time.

Scammers now have intricate systems for swindling money from unsuspecting internet users in ways that wouldn’t work in a face-to-face interaction, where body language and eye contact offer clues.

“A lot of the scammers come from foreign countries and they just paste in some message that they know works,” Haskett said. “If somebody online talks about how honest they are and how trustworthy they are: run away.”

One of the next steps for BeehiveID is a dive into photo-matching for dating sites, although Haskett said they’re always coming across new data and new patterns with which to identify scammers.

“I want to be the default way we do identity online,” Haskett said.