Nieman Foundation at Harvard
May 21, 2014, 9:49 a.m.
Audience & Social
LINK: www.npr.org   |   Posted by: Justin Ellis   |   May 21, 2014

Reader comments can be a source of agita for news sites. Popular Science decided to shut its comments off for good last September, and last month the Chicago Sun-Times decided to put theirs on temporary hold in hopes of finding a better system.

At NPR’s Code Switch, Matt Thompson writes about how the site that covers race and culture has tried to handle discussion during its first year. Topics like race and ethnicity can bring out the worst in commenters, which is why Code Switch threw down the gauntlet early, telling readers what types of comments would get yanked from the site. The watchwords are moderation and transparency:

For everyone upset about a deleted comment or revoked posting privileges, there’s likely another person who values a more tightly managed discussion. And while some would prefer for us to err on the side of allowing more comments rather than fewer, a loosely managed discussion inhibits some from participating. We value all of these users, but there’s a genuine tension between their needs.

Implicit in the mission of covering race, ethnicity and culture is a mandate to unearth less-often-heard voices and perspectives. We keep that in mind when trying to balance the needs of users we hear from a lot alongside users more prone to lurking. If a discussion about the cultural dynamics of hip-hop starts off with a comment dismissing hip-hop as not being real music, then hip-hop aficionados — those likeliest to have insights to share about the topic — are less likely to participate. Part of what we’re trying to do is create a community where that doesn’t happen. Similarly, for some users, the most pressing issue in race and ethnicity is higher rates of criminality among some demographics, and every thread about race becomes an opportunity to discuss that. Our aims here are somewhat different.

So if we delete your comment, it’s not necessarily because we think the comment is “bad” or “wrong,” or because we want to suppress your point of view. Most often, it’s because the comment doesn’t get at the topic we’re aiming to discuss at that moment, in this space. We are trying to curate a discussion that is intelligent, unique, and novel — a discussion that moves us — and that may require removing comments we think are not directly contributing to the focus of the conversation at that time.

Thompson also writes about the importance of having the right tools for enabling conversation, including better filters for comments.
