Concerns about the new NSFW system

I noticed the new NSFW system on the nightly server and wanted to share my appreciation for many of the new features. I think it is excellent that titles are no longer blurred along with thumbnails for sensitive videos, the “Warn” display level is a strong middle ground between “Display” and “Blur”, and the “NSFW Summary” content warning field was something this platform sorely needed.

However, I also wanted to express my concern about the new NSFW flags.

Under the new system, creators will be able to additionally flag their sensitive videos as containing “violent content”, “shocking or disturbing content”, and “sexually explicit material”. Further, viewers will be able to set their viewing preferences for each flagged category granularly.

The Fediverse is a shared, global platform, but there is no shared, global definition for what these terms mean.

I live in Los Angeles, where the weather regularly exceeds 35 degrees Celsius. Because of this, I often wear sheer, loose clothing to deal with the heat. I do not consider what I wear to be sexually explicit. Still, I understand that others around the world might not share my definition, so I mark such videos on PeerTube as containing sensitive content. The term “sensitive” is ambiguous enough that doing so creates very little personal conflict for me.

Should I now additionally flag such videos as sexually explicit? I disagree with that notion vehemently, but I also understand that many living in religiously conservative regions consider how I dress to be very much that.

Setting NSFW flags is optional for creators, but it will not feel optional to viewers.

If we allow viewers to tailor their viewing preferences to “Hide” sexually explicit content, and they then see a video that they deem sexually explicit but that its creator did not flag, they will grow upset and report the video. This creates more work for two teams of moderators, those of the receiving and originating PeerTube servers, and requires them also to make a judgment call on terms that have no consensus.

The same applies to the other NSFW flags. If I upload a Let’s Play video of a fighting video game containing guns, is that “violent content” even if it’s a game children play, such as Fortnite?

And what exactly is “shocking or disturbing content”? Some might say that public nudity is shocking or disturbing content, even though it is legal in many areas across the global West and normalized in many other areas around the world.

NSFW flags create more work for creators and moderators to fairly define what the world has already proven cannot be fairly defined. There is no precedent for any similar flagging system on comparable ActivityPub platforms such as Mastodon, Pixelfed, Pleroma, or Misskey, all of which have concluded that editable content warning fields alone create the least amount of work while being the most democratic flagging system.

I strongly feel that the new NSFW flags, as implemented in this commit, should be removed entirely.


I think that defining with more granularity what is sensitive is an improvement.

That said, no single catch-all definition exists for each category, and some people are offended even by a picture of your cat, so it is in the hands of each instance’s moderators to decide what must be flagged, how, and so on.

Hi @letydoesstuff and thanks a lot for your early feedback.

To add a little bit of context, this system was created with the help of our UX/UI designers, who worked on and benchmarked multiple sensitive-content systems (TV rating systems, big platforms like Twitch/Instagram, federated software like Pixelfed/Mastodon, etc.).

The biases we chose were:

  • stick to a classic Western European classification in the first implementation
  • consider the main PeerTube use cases (federated, CDN-oriented, controlled platforms, media, institutions)

We’ll publish their report in the following weeks/months.

We have changed the UI to make it clear to viewers that the NSFW flags are subjective information added by video authors, so some videos may not be well categorized from their point of view.

I agree it’s too vague. We decided to remove this flag.

I agree, but we can’t use this field to filter content. That is why we also added the NSFW flags system, which is optional.

We have changed the UI a little so the content warning input appears before the NSFW flags on the video management page, to increase its impact. We have also reworded the NSFW flag labels to explain that they are a help/indication creators can give to viewers.

We still think this system can be useful for users to relax the NSFW policy on some specific content, so we want to give it a try :slight_smile: But if it isn’t, we’ll remove the NSFW override system in future versions.

In a second phase, we also plan to give moderators the ability to override NSFW flags themselves. This would help them to identify NSFW content on their instance using filters.

But since NSFW flag user settings may still not be suitable for some admins/moderators, we also introduced an instance config to disable it.

I hope these changes address your concerns.


Thank you very much for your thoughtful and detailed response. The recent changes address many of my concerns, and I appreciate having a way for instances to opt out of the flag system entirely.

I especially appreciate the clarifying information about the flags added to viewer settings, and I have some thoughts on how to improve that further.

I feel “Redefine NSFW policy for specific content” would be better rewritten as “Override sensitive content policy for certain videos”.

“Override” is much more easily communicated, swapping “NSFW” for “sensitive” would help increase consistency across PeerTube, and using “certain videos” instead of “specific content” cements the fact that not all videos will have such flags.

I also feel…

“:warning: These criteria are chosen by the video’s author and are subjective, so some videos may not be correctly categorized from your point of view.”

…would be better rewritten as…

“These subjective criteria may be optionally applied to certain videos. The classification of some videos may differ from your personal judgment.”

The lone emoji in viewer settings is unnecessary clutter, “certain videos” aligns with the earlier suggested change, and the rest leaves unspecified who adds such flags, as it may be either the video creator or a local moderator.

I’ve also considered how this wording might translate, and “這些主觀分類標準可能會被選擇性地加到部分影片。部分影片的分類可能與您的個人判斷有所出入。” and “Estos criterios subjetivos pueden aplicarse opcionalmente a ciertos videos. La clasificación de algunos videos puede diferir de tu juicio personal.” respectively come across fine in (Taiwanese) Mandarin and (Mexican) Spanish.

I hope my suggestions here are helpful, and thank you again for considering my earlier concerns.

The fact that there are plans to give local moderators the ability to apply sensitive content flags to videos themselves is the most important information I have learned today and brings me great relief.

My concern over the flags can be summarized as “I worry viewers will report what they feel are incorrectly flagged videos, risking moderators blocking or deleting videos someone spent a lot of time working on over an entirely optional feature.” If moderators can just add a flag or even mark a video as sensitive themselves, that risk decreases significantly.
