If a video designed to recruit people into extremist groups pops up online, it stands to reason that you could simply flag it, have it removed, and the problem would be solved. But it’s not that easy. These videos are easily copied, so a single video can quickly appear on a variety of websites, and it’s time-consuming to track down and try to remove each one. One professor at Dartmouth College has developed software that would help find all those copies. David Brooks is a reporter for The Concord Monitor and a writer at granitegeek.org, and he joined NHPR’s Peter Biello to talk about this new technology.
So who is this professor and what did he create?
His name is Hany Farid and he’s a computer science professor at Dartmouth. He’s been there a couple of decades, and he’s quite well known in what you would call “digital forensics.” He’s developed a lot of software that analyzes images.
His latest project analyzes video images. Specifically, it creates a “robust hash” of a video, “hash” being a computer term for a digital signature of an image. What happens is that a person in some capacity, such as somebody who works for Facebook or for a nonprofit that keeps an eye on extremist groups, identifies a video as a recruitment video for an extremist group. It could be something as bad as a beheading video, or just somebody ranting and saying, “go blow up so and so.”
Once it’s identified by a person, the software creates a robust hash of that video, which can then automatically identify the video wherever else it’s posted. The hash goes into a database. Any website or social media service that subscribes to the database will have the video automatically identified and blocked, or taken down instantly, when somebody tries to repost it.
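To make that workflow concrete, here is a minimal sketch in Python. Farid has deliberately kept his actual algorithm unpublished, so this uses a generic per-frame “average hash,” a standard perceptual-hashing technique, purely to illustrate the flag-then-fingerprint idea; the function names and the in-memory database are hypothetical stand-ins for the real system.

```python
# A minimal sketch, NOT Farid's unpublished algorithm: a simple
# per-frame "average hash" standing in for his robust hash.

def average_hash(frame):
    """Hash one frame: one bit per pixel, set if the pixel is
    brighter than the frame's mean. `frame` is a flat list of
    grayscale values (0-255) from a tiny downscaled frame, e.g. 8x8."""
    mean = sum(frame) / len(frame)
    bits = 0
    for value in frame:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def video_signature(frames):
    """A video's signature is the sequence of its per-frame hashes."""
    return [average_hash(f) for f in frames]

# The shared database that subscribing sites would query.
flagged_signatures = {}

def flag_video(video_id, frames):
    """Called once a human reviewer identifies a recruitment video."""
    flagged_signatures[video_id] = video_signature(frames)
```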
And this has the potential to save a lot of time for the people who manually hunt these things down. And it’s based on, or at least does something similar to, existing technology that tracks down child pornography.
That’s right. But Farid told me that the two technologies are actually quite different. He hasn’t published any peer-reviewed research on this new video technology; he says that’s deliberate, to make it harder to circumvent. It’s the same idea, though.
It’s a way for software to identify the same image without a person having to take the time. It’s much more complicated with video, obviously, not only because there’s vastly more information in a video than in a still image, but because videos can be edited. They can have stuff added on to them, or words put over them, and it still appears to be the same video to humans. The idea of robust hashing is that the software will still recognize it as the same video even if it’s a different length.
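That tolerance for trimming and editing is the hard part. As a rough illustration, and again assuming the simple per-frame hashes sketched above rather than Farid’s method, a match can be declared when a long enough stretch of one video’s hash sequence lines up with the other’s, allowing a few flipped bits per frame for overlays and re-encoding:

```python
# Illustrative matching over per-frame hashes: a trimmed or lightly
# edited clip can still line up with part of the flagged original.

def hamming(a, b):
    """Count the bits that differ between two frame hashes."""
    return bin(a ^ b).count("1")

def matches(flagged, upload, max_bit_diff=10, min_overlap=0.8):
    """True if enough of the shorter video's frame hashes line up
    with some stretch of the longer one, tolerating a few flipped
    bits per frame (captions, logos, compression artifacts)."""
    if not flagged or not upload:
        return False
    shorter, longer = sorted([flagged, upload], key=len)
    needed = int(len(shorter) * min_overlap)
    for start in range(len(longer) - len(shorter) + 1):
        close = sum(
            1 for f, u in zip(longer[start:start + len(shorter)], shorter)
            if hamming(f, u) <= max_bit_diff
        )
        if close >= needed:
            return True
    return False
```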
Most of the time, when a video is removed from a website, you have to do it manually: you contact a moderator, or maybe the owner of the website. How does this software get around that, or does it?
It’s implemented by the hosting organization, say Facebook. Their IT people would set it up so that any uploaded video matching a robust hash in the database would not be posted, or would be immediately taken down.
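What that might look like in an upload pipeline, reusing the hypothetical helpers from the sketches above (a real deployment would index the database rather than scan it linearly):

```python
def on_upload(frames):
    """Reject an upload whose signature matches a flagged video."""
    sig = video_signature(frames)
    for flagged_sig in flagged_signatures.values():
        if matches(flagged_sig, sig):
            return "blocked"      # never goes live
    return "published"
```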
This raises concerns, because if the robust hash is not accurate, innocuous videos could be taken down for reasons that aren’t obvious to the people who posted them. Farid recognized that. One of the things that’s complicated about this technology is that it has to work at internet scale. It can’t just be right 99% of the time, because millions of videos are uploaded and checked; even a tiny error rate adds up to a huge number of mistakes. So that was part of the complication.
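A back-of-the-envelope calculation shows why; the daily upload figure here is an assumption, chosen only for illustration:

```python
uploads_per_day = 1_000_000   # assumed volume, for illustration only
false_positive_rate = 0.01    # "right 99% of the time"

wrongly_flagged = uploads_per_day * false_positive_rate
print(f"{wrongly_flagged:,.0f} innocuous videos wrongly flagged per day")
# -> 10,000 innocuous videos wrongly flagged per day
```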
So this will be on sites like Facebook, and possibly other social media sites. But it wouldn’t have any bearing on a site owned and operated by ISIS?
That’s correct. But if it’s an ISIS site, it’s going to be blocked anyway, so that’s probably not entirely relevant. It’s more about the secondary actors, and that, as we have unfortunately seen in the last week or two, is where ISIS is really strong. It inspires people who aren’t directly connected to it to do horrible things. And one of the horrible things they do is share these kinds of videos through their own personal Facebook pages. They get somebody excited about it in some obscure town. And that is where this technology would be most effective, because it would block that from happening.
The overall goal being to slow the recruitment of people who will find these videos if they go looking for them, but wouldn’t happen to stumble upon them on a site that runs one of these systems.
Yes, that’s a fair description.