Shame On Who? Experimentally Reducing Shame During Political Arguments on Twitter

Summarizing results and sharing reflections

Amanda Baughan
5 min read · Nov 9, 2022
Photo by Alex Ware on Unsplash

The year was 2020. There was so much sociopolitical turmoil, and it felt all-encompassing. I was isolating from most friends and family, and trying as hard as I could to focus on completing a study to see if people could argue more respectfully online. I especially thought about my friends and family back home in Florida, and how different their daily reality was from my life in Seattle. It was a stark contrast that felt difficult to bridge at times.

Twitter had become a daily source of news, much of it horribly distressing. As I saw hot take after hot take on Twitter about 2020's events, I grew interested in how people talk to each other on social media: specifically, how we make assumptions about other people's intentions and subsequently respond to what they've said. I wondered what could be done to change the tenor of online discourse for the better. I had been reading about ingroup favoritism, and I was curious about how to help conversations across the political aisle. The more I read, the more I realized that relatively small interventions had the potential to change how people viewed an outgroup. I was curious to see how this might apply somewhere as casual, and frequently vitriolic, as Twitter.

So, I designed a study where I would pose as two personas, one mainstream U.S. liberal and one mainstream U.S. conservative, and disagree with others who appeared to share or not share that identity. The two personas sent over 500 tweets over the course of two weeks, disagreeing on a set of pre-defined topics: ethics in government, unemployment, the national budget, the response to COVID-19, healthcare, and mail-in voting. The personas made the same arguments over and over again, varying in only one key way: half the time the disagreement opened with a respectful preamble ("I respect your views and…"), and half the time it launched right into the disagreement. I, along with a team of collaborators, then evaluated whether the responses we received were respectful, shaming, or neither. After the study was completed, I debriefed the people I had tweeted at to let them know I was a researcher and to offer them compensation and a chance to remove their data. Not all of these users accepted direct messages on Twitter, so we reached out to as many as possible.
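To make the design concrete, here is a minimal sketch in Python of the 2×2 structure described above. This is illustrative only, not the study's actual tooling; every name in it (build_disagreement, assign_condition, and so on) is hypothetical.

```python
# Illustrative sketch of the study's 2x2 design (not the actual code):
# partisan identity match (ingroup vs. outgroup) crossed with tone
# (respectful preamble vs. neutral), with replies later hand-coded.
import itertools
import random

TOPICS = ["ethics in government", "unemployment", "national budget",
          "COVID-19 response", "healthcare", "mail-in voting"]
IDENTITY = ["ingroup", "outgroup"]   # does the persona match the target's politics?
TONE = ["respectful", "neutral"]     # preamble vs. launching right in

RESPECT_PREAMBLE = "I respect your views and"

def build_disagreement(argument: str, tone: str) -> str:
    """Prepend the respectful preamble in half of the conditions."""
    if tone == "respectful":
        return f"{RESPECT_PREAMBLE} {argument}"
    return argument

# Each target conversation falls into one of four cells.
CONDITIONS = list(itertools.product(IDENTITY, TONE))

def assign_condition() -> tuple[str, str]:
    """Randomly assign a conversation to one of the 2x2 cells."""
    return random.choice(CONDITIONS)

# Replies were then labeled by the research team as one of:
REPLY_LABELS = {"respectful", "shaming", "neither"}

if __name__ == "__main__":
    identity, tone = assign_condition()
    print(identity, tone, build_disagreement("mail-in voting is secure.", tone))
```

The four cells this produces (Ingroup/Outgroup × Respectful/Neutral) correspond to the rows of the results table summarized below.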

I found that respect does improve the tone of online arguments, but ingroup favoritism still appears to be more influential. One encouraging insight: respectful disagreement brought outgroup disagreements up to the same level of respectful and shaming replies as neutral ingroup disagreements. However, across all disagreements, using respect did not outright reduce the number of shaming responses received. I am optimistic that even a short respectful preamble to a disagreement could change the tone of discourse online, at least toward more respect received in return.

Table (summarized): shared identity boosted respectful responses and reduced shaming responses, while being respectful only increased respectful responses without affecting shaming responses. The fourth row shows that using respect in outgroup conversations (Outgroup & Respectful) yielded conversation quality similar to neutral disagreements with an ingroup (Ingroup & Neutral).

However, it's important to talk about the methods we used. In this study, we used deception, posing as personas rather than as researchers, and there was no informed consent. I chose this approach because I wanted to avoid the social desirability bias that a lab setting would introduce. I was curious to see whether a respectful disagreement from an unknown user would be enough to lessen the shaming that often accompanies online conflict.

Deception in a research study is usually considered acceptable only if:

1. No other non-deceptive method exists to study the phenomenon of interest;
2. The study makes significant contributions to scientific knowledge;
3. The deception is not expected to cause significant harm or severe emotional distress to research participants; and
4. The deception is explained to participants as soon as the study protocol permits (source).

Waiving participants' informed consent requires that:

1. The research involves no more than minimal risk to the subjects;
2. The waiver or alteration will not adversely affect the rights and welfare of the subjects;
3. The research could not practicably be carried out without the waiver or alteration; and
4. Whenever appropriate, the subjects will be provided with additional pertinent information after participation (45 CFR 46.116(d), source).

While I think receiving a disagreement on Twitter poses minimal risk to people's safety, I would very carefully consider the need to use this research method again, and I encourage others to use similar caution. If you're feeling inspired to use similar methods for a paper, I highly recommend reaching out to the SIGCHI ethics committee and collaborating with someone who works in digital ethics. When I was unsure of what to do next, I asked the SIGCHI ethics committee for their input, and they helped me consider new points. For example, in this study it was highly important to paraphrase participants' quotes so they could not be later identified. Working on this paper helped me grow as a researcher and sharpened my understanding of what I consider ethical online research.

All this is to say, the findings in "Shame on Who?" are worthwhile, and I think my methods fall just on the right side of an ethical line for me. I'm grateful to have had the opportunity to grow by working on this, and I commend all researchers working on the difficult problem of bridging political gaps and maintaining communication across contentious issues.
