Social Media · Politics

How to incent good behavior on social media (avoid spam/trolls)? Human moderation vs. algorithms?

Brian Bensch Founder & CEO at Snow Schoolers

March 18th, 2016

I am working on a social community forum/platform in the politics space. Clearly politics is controversial. Would appreciate people's thoughts on ways to approach the objective of content moderation. The ideal solution is one that fully incentivizes civil discourse, while avoiding inflammatory, spammy, or otherwise unproductive user-generated content. 

I'd be especially interested in hearing from anyone on the FounderDating staff. It doesn't appear like there's much/any moderation in the Discuss forums here, and for the most part I think the threads here are high quality. Perhaps that's simply a testament to the application/voucher/paid membership process that sufficiently filters the types of users on the site.

The way I see it there are three approaches to moderation:
- human moderators (e.g., Reddit, with mods posting guidelines/expectations and the power to remove people)
- algorithmic moderation (textual parsing of comments/language & preventing bad posts)
- user selection filtering (a rigorous signup/application process to weed out the bad eggs)

Thoughts on what is most effective?

David Fridley Founder at Synaccord

March 18th, 2016

Brian, I am working on a similar project: how do we get around the political polarization and gridlock to find the solutions that unite us?

I think there is no one solution; you have to do many things. Here is what we've figured out about the methods you mentioned:
- human moderators - even with good intentions they are corruptible, especially over time when large sums of money are involved (e.g., Congress). It's also very hard to be unbiased and remain so over time. And what if you have 160M participants? It doesn't scale with consistency.

- algorithmic moderation - while simple, straightforward checks are appropriate, the algorithms used need to be transparent and well understood by the participants. If people don't understand why their posts get rejected, the system will lose legitimacy. (School teachers in some places are going crazy because their evaluations depend on algorithmic calculations based on student test scores that they have no visibility into, and the algorithms are proprietary.)

- user selection filtering - important, but it only goes so far. You want people to be confident that each person gets only one account, and that appropriate people are participating (e.g., not ones hired on Fiverr to vote up a point). But it's a democracy and everyone should get to participate.

What we're working on is more like Reddit's voting-up process. Each post is shown to a small random group of people, who can vote it up. If no one votes it up, it reaches only that small group, not the entire community. Also, people are encouraged to give feedback so the author can learn how what they wrote is perceived by others.
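The sampled-audience idea above can be sketched in a few lines. This is a hypothetical illustration, not Synaccord's actual implementation; the sample size and promotion threshold are made-up parameters.

```python
import random

def sample_audience(users, post_author, k=20):
    """Pick a small random audience for a new post, excluding its author.
    (k=20 is an illustrative sample size, not a known product value.)"""
    pool = [u for u in users if u != post_author]
    return random.sample(pool, min(k, len(pool)))

def should_promote(upvotes, audience_size, threshold=0.3):
    """Show the post to the whole community only if enough of the
    random sample voted it up; otherwise it stays with the sample."""
    return audience_size > 0 and upvotes / audience_size >= threshold
```

The point of the random sample is damage containment: a bad post that no one endorses is seen by twenty people instead of the whole community.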

Brad Harkavy General Manager at LiveData, Inc.

March 20th, 2016

One of the reasons FounderDating works is that it is a community in which admission is restricted and members are generally like-minded. Getting a two-sided political discussion going among folks from different political viewpoints will be much more difficult unless, of course, you aggressively enforce a policy of no "inflammatory, spammy or otherwise unproductive" content. Perhaps you can grant group members the right to flag policy violators to a human curator. The curator has the right to bar specific posts or people if they violate the policy.

Derick Smith Distributed Systems Entrepreneur

March 18th, 2016


Firstly, it may be beneficial to think about your own beliefs and the principles of free speech. Does any discussion, especially a political one, benefit from free speech, or is free speech less than desirable?

If you decide free speech is a good thing, then perhaps the perceived problems of "inflammatory, spammy or otherwise unproductive" contributions are best addressed by making people responsible for what they say.

An approach that has been proposed in the cryptocurrency community is to add a cost component: any commentator needs to put down a bond to have the right to comment. If the community finds their contribution useful (liked), they get a reward, adding to their bond value. If their comment is deemed undesirable (disliked), they are penalized, reducing their bond value.
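The bond mechanic can be sketched as a simple ledger. This is an illustrative model only, assuming made-up deposit, reward, and penalty amounts; it does not describe any specific cryptocurrency protocol.

```python
class CommentBond:
    """Hypothetical sketch of the bond idea: a commenter stakes a
    deposit, and community votes adjust it up or down."""

    def __init__(self, deposit, reward=1.0, penalty=2.0):
        self.balance = deposit    # value staked to earn posting rights
        self.reward = reward      # added per "liked" vote
        self.penalty = penalty    # removed per "disliked" vote

    def can_comment(self):
        # Posting rights last only while some bond value remains.
        return self.balance > 0

    def record_vote(self, liked):
        if liked:
            self.balance += self.reward
        else:
            self.balance = max(0.0, self.balance - self.penalty)
```

An asymmetric penalty (losing more per dislike than you gain per like) is one way to make trolling costly while still rewarding useful contributions.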

Pierre-R. Wolff Business and Corporate Development Executive / Professional Connector / Kitesurfer

March 24th, 2016

From my experiences at Tribe and later with Livefyre, politics is perhaps one of the toughest categories in which to hope for civil discourse. Foreign Policy is perhaps one of the better sites at managing this, and they use a combination of human and algorithmic moderation. The way it tends to work is that the algorithmic phase knocks off a good bit of the spam, and it also does a pretty good job of classifying offensive content (i.e., bullying, racism, profanity, etc.). Comments classified with lower confidence are highlighted for review by the human moderators, who can also review all comments if they choose to. Fox News, for example, has a tougher time just because of the sheer volume of inflammatory comments. They had taken an approach of allowing all comments onto the site, but could quickly remove anything flagged by users or by their moderators. Both Livefyre and Disqus have spam detection and moderation tools, so you might want to check whether their services make sense for your site.
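The two-tier flow described above (algorithm handles the confident cases, humans handle the uncertain middle) can be sketched as a routing function. The thresholds and score names here are assumptions for illustration, not Foreign Policy's, Livefyre's, or Disqus's actual configuration.

```python
def route_comment(spam_score, offense_confidence,
                  spam_cutoff=0.9, review_band=(0.4, 0.8)):
    """Route a comment given two model scores in [0, 1]:
    clear spam and confidently offensive content are rejected
    automatically; mid-confidence cases go to human moderators."""
    if spam_score >= spam_cutoff:
        return "reject"          # algorithm knocks off clear spam
    lo, hi = review_band
    if offense_confidence >= hi:
        return "reject"          # confidently classified as offensive
    if offense_confidence >= lo:
        return "human_review"    # uncertain: queue for a moderator
    return "publish"
```

The key design choice is the width of the review band: widening it sends more borderline comments to humans, which improves accuracy but raises moderation cost as volume grows.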

Roger Hector CEO @ TopTrack LLC

March 18th, 2016

Brian, you may want to consider a prominent reminder of this key principle at the very top of the page. If you name it "Civil Discourse," it will remind everyone that this is required to participate in the discussion. This can also be reinforced with a subtitle such as: "Only thoughtful reasoning allowed" (or something like that). The effectiveness of other approaches to moderation will depend a bit on the scale you operate at, and some value can be found in each direction. But clearly you should expect bad posts.