A study about online identity (using HuffPo comments) found that people using “‘stable pseudonyms’ created a more civil environment than real user names”. This jibes with my personal experience and is in line with the comment guidelines for the site.

Discussion  5 comments

navarro

the wording in your guidelines is reasonable. you make your first preference clear without closing off other approaches-- "When choosing a display name, I encourage you to use your real name (or at least your first name and last initial) but you can also pick something that you go by when you participate in communities online. Choose something durable and reasonably unique (not "Me" or "anon"). Please don't change this often. No impersonation."

my online handle is one that i have been using for 30 years across multiple sites. unlike some of my friends of color and trans friends i do not use it out of fear of being identified but i do like to keep control of my privacy. another site which allows for stable pseudonyms is boingboing. they seem to have had some real success with the model. they also use the discourse features to help provide social control. i think what you have here is a really good thing.

i hope our community continues to be a good thing.

Jason Kottke (MOD)

That's great to hear. MetaFilter and Flickr were my guides...stable pseudonyms were kind of the norm in those spaces and they seemed to work well. No one really knew who "dry_blanket_22" was IRL but that person had a well-established persona online that everyone accepted as what they needed to know to communicate with them. (Note: I totally made up that username just now for rhetorical purposes...)


tlyczko

This article is *three* years old!!
Please, when quoting such things, make a note of the date(s) involved!!
*Much* has changed since 2021 when this article was published.
Incivility and violence have only increased.
People have to think about the risks of being doxxed, threatened, swatted, etc. for expressing opinions and thoughts. Saying something that some people don't like wrt Israel or Palestine, for example. Or talking Arabish with your friends gets you shot at.
Transgender people constantly deal with numerous issues and prejudice wrt their present and past names.
Companies routinely scour the Internet to find people's socials or other online presences to screen them out.
A 'true names' policy can sound good at first glance but it can have serious repercussions for people who don't fit the norms of white or cis or not-rich, etc.

Trent Seigfried

I wonder if this is more of a measure of the decrease in the quality of online discourse over time. If the period of stable usernames preceded the era of real Facebook names, I would not be shocked that the earlier period was more civil just because it was earlier.

Robb Monn

I worked at Huffpost through all three of these phases in a technical capacity adjacent to comments and moderation, as director of technical operations and eventually as head of engineering. This study has significant questions to answer about its methods and assumptions, which are summed up here:

"Second, we know that HuffPo used both manual and algorithmic moderation in all three phases, but we do not know how the policies changed under the different identificatory regimes."

Given what I know about Huffpost's moderation systems, and the authors' own admission that they don't have insight into them, I'd say that nothing reported in this study should be considered valid, for a few reasons.

One is that Huffpost used many different moderation systems, following many changing standards, throughout its days as a big news site (#3 in the US at one point) and the biggest news-based community, which it was for several years.

They started with no moderation, then human moderation with evolving standards and practices overseen by a brilliant community team. Then they bought Julia in 2010(? ish), a very early machine-learning moderation system that was trained on millions of human-made moderation decisions before launch and whose training and internals were constantly updated and improved for years. Julia was dropped for Facebook comments later on, at which point Facebook did most of the basic moderation but was still assisted by human moderators.

My first critique of this analysis is that the authors have no data on, or understanding of, the moderation actions that resulted in suppression of comments or users. How can an analysis make any claims without this data? For all they know, the comment flow was actually more hostile during the periods they describe as more civil, and there were simply far more comments being suppressed. Just one of dozens of internal details of the operation that would invalidate their conclusions: Huffpost quiet-deleted comments for a significant period of time, meaning that you could post, and you would see your posts in context when you were logged in, but no one else would see them. They also silent-banned users. These and other details of implementation create a great deal of complexity and secondary effects.

I can attest that moderation was very, very active and that lots and lots of comments were moderated down and out of the comment threads... again indicating significantly less civility than any retrospective analysis would be able to discern without all the data.

I also find it interesting that this study chose Huffpost for the analysis. At the site's heights of success and profit, the comment threads were the reason for its SEO dominance and were considered the most important secret sauce. Huffpost moderation was the best in the business by a long measure. With the methodology presented, it would make sense to me that Huffpost would appear to be the most civil of the big sites of the time. So it is interesting that this study focuses solely on Huffpost *and* reports that its theories explain this differential.

While the authors do cover some of this in their section on limitations, they don't cover nearly enough to justify their results. Instead this reads as another cherry-picking study, where the authors had a theory and found a dataset that confirmed it while being unaware of fundamental reasons why that dataset was an outlier, making it impossible for them to build the needed controls into their methods.

Honestly Jason, I think you should take this link down or write a retraction.
