Online Incivility is More Than Swearing On Social Media

There are no quick fixes for online incivility. (Photo: AFP)

American politics are extremely contentious right now, and uncivil online political discussions receive a lot of attention. When online incivility is discussed, however, the behavior is often not clearly defined, and people are tempted by seemingly easy quick fixes that do not address the root of the problem. Those in technology are particularly prone to trying to solve the problem with machine learning algorithms, but this approach has a major flaw.

Civility is not just politeness, and incivility is not just rudeness or profanity. Ideally, civility allows all members of a society to discuss, on equal footing, how they wish to govern themselves as a group. Defining incivility as profanity makes it seem objective and easy to detect, but this definition does not capture the full nature and outcomes of incivility.

While some believe that the public should discuss political issues only rationally, others argue that expecting people to engage in perfectly logical and restrained public discussion is unrealistic and fails to capture all aspects of political communication (many of which are inherently emotionally charged).

Furthermore, politeness standards arbitrarily applied to public discussions favor certain groups (namely, people already in powerful positions) while restricting the speech of others. Standards of politeness can be used to police how ideas that make the dominant group uncomfortable are communicated, driving those ideas out of public discourse by labeling them emotional, illogical, or impolite. For those directly affected by an issue, however, it is often hard to articulate a position on it in a way that seems detached and impartial.

If civility is valued because it lets all people in a given society come together to discuss how best to govern themselves, then such a narrow, politeness-focused definition of civility falls short.

Definitions of incivility should focus on how it affects the common good, not just individuals, and should distinguish between heated discussions and uncivil ones. According to communication scholar Zizi Papacharissi, “incivility can then be operationalized as the set of behaviors that threaten democracy, deny people their personal freedoms, and stereotype social groups.”

Civility is cooperation on a mass scale, one that still allows emotions and human variety to play a role. This means that profanity is not a great marker of incivility, but behaviors that discourage people from participating are. Building an algorithm to detect those behaviors, however, is much harder.

How do we fix this?

Detecting and removing online incivility can be very difficult. Technology can make it easier for organizations to monitor what people are posting, but as discussed above, incivility is not simply profanity. Building a system to flag posts with profanity, then, is not sufficient when it comes to curbing incivility.
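To make that gap concrete, here is a minimal, hypothetical sketch of the kind of keyword filter described above; the block list and example posts are invented for illustration. It flags a harmless post that happens to contain a swear word while letting a civil-sounding but exclusionary post through.

```python
# A minimal, hypothetical profanity flagger. The block list and the
# example posts below are invented for illustration only.
PROFANITY = {"damn", "hell"}  # placeholder block list

def flags_post(text: str) -> bool:
    """Return True if the post contains a word on the block list."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return bool(words & PROFANITY)

posts = [
    "That's a damn good point, thanks for sharing!",      # profane, but civil
    "People like you have no business voting on this.",   # polite-sounding, but exclusionary
]

for post in posts:
    print(flags_post(post), "->", post)
# The friendly post gets flagged and the exclusionary one does not --
# exactly the gap a profanity-only filter leaves open.
```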

Incivility with lasting effects is often more subtle than a blunt “F@#& you” – it can be any behavior that systematically excludes groups of people from the conversation. Because it is difficult to build an algorithm that detects norms or patterns of behavior, many platforms stop at detecting profanity. But detection is not the only way of dealing with incivility: the platform itself, and its policies, can be leveraged to help enforce the desired behavior on the site.

Begin by talking to the people you want to serve. Find out which behaviors they feel disrupt their interactions on the site, and keep power dynamics in mind as you evaluate those behaviors. Use them as the basis of any machine learning system you build to manage incivility, if you choose to use one at all.
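If you do build a model, the important part is where the labels come from. Below is a minimal sketch, assuming scikit-learn is available, of training a classifier on community-labeled examples rather than a profanity list; every post and label in it is hypothetical.

```python
# A sketch of one way to act on what users tell you: train a small text
# classifier on posts that your own community has labeled as uncivil or
# acceptable, rather than on a generic profanity list. The tiny dataset
# here is invented; real labels would come from the conversations with
# users described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical community-labeled examples: 1 = uncivil, 0 = acceptable.
posts = [
    "People like you shouldn't be allowed to comment here.",
    "Nobody from that neighborhood deserves a say in this.",
    "I strongly disagree with this policy, and here's why.",
    "This decision would hurt my family, and I'm angry about it.",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a deliberately simple baseline
# whose point is the source of the labels, not the choice of model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["You people don't belong in this discussion."]))
```

Whatever model you choose, the design choice that matters is that the training labels reflect the behaviors your users actually experience as exclusionary, not a fixed list of forbidden words.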

Communicate the behaviors your users identify as uncivil to your moderators, who often hesitate to remove offensive content that does not contain profanity. Explicitly highlighting undesirable behaviors for them can help overcome this tendency.

In addition to using algorithms and moderators to manage incivility, you may find it helpful to use the design of your platform itself to communicate desired behaviors to your users. You can do this by clearly posting your policies where users will see them frequently, ideally right before posting, to remind them which behaviors are encouraged and which are discouraged.

Other features, such as upvotes and downvotes or reaction icons, are powerful communication tools for your users, and you should think through how they can be used both positively and negatively.

Think about how users are identified on your site: are accounts anonymous, pseudonymous, or tied to real names? Anonymity has benefits and drawbacks. It allows people to speak freely when they might otherwise feel constrained, but it also makes it harder to find and punish perpetrators and easier for people to forget the humanity of their victims.

Think about how financial incentives on your end shape your users’ experiences. Platforms may have an incentive to tolerate incivility, since its presence can drive higher engagement (likes or clicks), which translates into financial rewards for the platform.

Think about how to structure your platform to encourage positive interactions among users, interactions that generate the engagement you need without promoting negative behaviors.

There are no quick fixes for online incivility, as tempting as some may seem. However, it is still possible to deal with incivility, and I would encourage those of you managing it to do so thoughtfully, in a way that puts the experiences of your users first.
