Digital Behavior: Exploring The Ethics Of Our Cyber Lives

Jun 9, 2017

We can now live-stream events through programs like Facebook Live and YouTube, turning us all into potential quasi-celebrities. But what are the ethical implications of sharing our personal lives or even criminal acts online? How has the role of bystander changed in the digital era, and how should social media companies deal with objectionable material? 


GUESTS:

  • Nora Draper - Communications professor at UNH and a member of the university's Prevention Innovations Research Center, where she works on the role of bystanders online and in social media.
  • Hany Farid - Professor of computer science and digital forensics at Dartmouth College. He has worked on various technologies that identify and remove offensive images, video, and audio from the Internet and social media platforms.
  • Leah Plunkett - Associate professor of legal skills at the UNH School of Law and fellow at Harvard's Berkman Klein Center for Internet and Society, where she studies the digital lives of young people. 

How are tech companies preventing or correcting sensitive or criminal content on their platforms?

Hany Farid: 

The reality is that nothing makes these companies move to protect the online communities, except legislation, advertising dollars, and bad press. … Left to their own devices, these companies are not interested in moderating and removing content—it’s not in their self-interest. What I’ve seen over the years, from the child pornography issue, to the extremism issue, to the criminal issue, is that the companies’ press releases always tell us how seriously they take this, they always tell us how important it is. But when you look at the action on the ground, what they’re actually doing, how much effort they’re putting into this issue—it is fairly pathetic. It’s not until we start to threaten legislation, it’s not until advertisers flee en masse from these platforms, that these companies finally turn around and say, okay, we have a problem, we have to do something. … Extremist groups, both domestic and foreign, have weaponized the internet. We know this. … Just last month the EU released a report that said when tech companies are notified of extremist content, the takedown [rate] is less than 50 percent. So this is not even about being proactive.

Why wouldn't these businesses always remove potentially troublesome content when they receive a request?

Hany Farid:

I don’t have a good answer for that. One of the reasons they give is, ‘We get a lot of reports, and we’re overwhelmed with that.’ Well, my answer for that is, you don’t seem to have trouble hiring engineers and advertisers to make a lot of money, but sometimes you have trouble hiring people to deal with the negative consequences.

How do other governments handle take-down notices?

Hany Farid:

In Germany, for example, they are considering fines on the order of $25 million for each time the company does not respond to a takedown notice. Civil liberties groups are concerned that this will have a chilling effect on speech issues, because will the companies then just start taking things down overly aggressively to avoid these $25 million fines? So the legislation is not a cure-all, because now what’s happened is we’ve waited a long time to respond, and now we are overreacting in some ways. So there’s a middle ground there we have to find to have a free and open internet.

How can bystanders respond when they witness something problematic or illegal?

Nora Draper:

I think a lot of us are probably familiar with the idea of active bystander intervention in a kind of offline-heroic capacity, where somebody physically puts themselves in the middle of an altercation to stop a fight or to stop an assault. One of the things that the program Bringing in the Bystander, which is out of Prevention Innovations Research Center, tries to get people to think about, is the much more mundane or everyday acts of intervention that we can engage in to stop a situation from escalating [in real life] … So, for example, maybe intervening when you hear harassing or discriminatory language being used, to prevent a situation from escalating. Turning on the lights at a party, or offering someone a glass of water if you feel like something is about to happen or could happen. … One of the things that’s complicated to think about is what those intervention efforts might look like online. It’s not always clear what the digital equivalent of turning on the lights at a party might be.

What is the online equivalent of "turning on the lights at a party?"

Leah Plunkett:

There’s no ‘one size fits all’ online version of that, and I think one of the key things to keep in mind, because of how seamlessly connected our lives are with technology at this point, is that the online version of that might be an offline behavior. So, to give a concrete example, I heard from a teacher recently about being contacted by students saying that someone else in their class was talking about self-harm and actually posting some pictures on a social media site. And the students, rather than trying to digitally engage, took an active bystander role of contacting the teacher directly and saying, ‘we’re seeing this and we’re scared.’ And the teacher did the common sense thing of picking up the phone and calling the parent. … Responding to digital content might require some quote-unquote ‘brick and mortar’ behaviors as well.

What should people do if they see something criminal on Facebook Live?

Leah Plunkett:

I would say don’t watch it. If there’s any doubt in your mind about whether law enforcement knows, notify law enforcement. But I would say don’t put your eyeballs on it, don’t like it, don’t share it, don’t screenshot it for future use unless you’re a researcher or law enforcement in this space. Essentially, don’t give it the digital currency of eyeballs on the screen.