0001 The underlying goals and general concerns of public lists #25

Open · pfrazee opened this issue Jun 25, 2023 · 22 comments

pfrazee (Contributor) commented Jun 25, 2023

Open replies get spam, harassment, and arguments. We need a mechanism for counteracting that. 0001 proposes that we invest in user lists for this. Let's talk about the logic, what's working, and what has me and others concerned.

Friends of friends (FoaFs)

We want to create a pathway of trust between the OP and the replier, and then only allow replies if that pathway exists. The simplest example is

Does the OP follow the replier?

This kind of pair-wise relationship doesn't scale well. We want the circle of trust to enable introductions between relative strangers.

One way to expand it is to "expand" the relationship graph, by asking

Does the OP follow the replier, or does the OP follow someone who follows the replier?

This is known as "Friend of a Friend", or FoaF. There's an entire field of study around this kind of approach, and it can get quite sophisticated. [1] I'll give a "simpler" example of a FoaF solution:

  • Does the OP follow the replier? or
  • Are there two FoaF connections? or
  • Are there four FoaFoaF connections? or
  • Are there eight FoaFoaFoaF connections? etc.

You're giving a weight to each social connection you have. If you directly follow somebody, that's a 1. If they're a FoaF, that's a 0.5. If they're a FoaFoaF, that's a 0.25. You find all these connections and add them up, and if they add up to at least 1 then they're good to go.

follow_score = followed + (foafs / 2) + (foafoafs / 4) + (foafoafoafs / 8)
can_reply = follow_score >= 1

Since blocks are public, you can also factor them in as a negative weight. This could be the direct opposite of before:

  • Does the OP block the replier? or
  • Do two people the OP follows block the replier? or
  • ...etc

We'll call that "Block of a Friend", or BoaF.

So then the math kind of works out like this:

can_reply = (follow_score - block_score) >= 1

where

follow_score = followed + (foafs / 2) + (foafoafs / 4) + (foafoafoafs / 8)
block_score  = blocked  + (boafs / 2) + (boafoafs / 4) + (boafoafoafs / 8)

The math on this can easily vary. You might consider a single FoaF connection suitable for replying, for instance.
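To make the arithmetic concrete, here's a minimal sketch in Python, assuming an in-memory dict-of-sets representation of the follow and block graphs; the function names, data shapes, and depth cutoff are all illustrative, not a proposed API:

def count_paths(step_graph, final_graph, origin, target, depth):
    # count distinct paths made of (depth - 1) hops through step_graph
    # followed by one final hop through final_graph
    if depth == 1:
        return 1 if target in final_graph.get(origin, set()) else 0
    return sum(count_paths(step_graph, final_graph, mid, target, depth - 1)
               for mid in step_graph.get(origin, set()))

def weighted_score(step_graph, final_graph, op, replier):
    # weight 1 per direct edge, 1/2 per two-hop path, 1/4 per three-hop path...
    return sum(count_paths(step_graph, final_graph, op, replier, d) / 2 ** (d - 1)
               for d in (1, 2, 3, 4))

def can_reply(follows, blocks, op, replier):
    follow_score = weighted_score(follows, follows, op, replier)
    # BoaF walks the follow graph but takes its final hop through the block graph
    block_score = weighted_score(follows, blocks, op, replier)
    return follow_score - block_score >= 1

follows = {"op": {"alice", "bob"}, "alice": {"newcomer"}, "bob": {"newcomer"}}
can_reply(follows, {}, "op", "newcomer")  # two FoaF paths: 0.5 + 0.5 >= 1 -> True

A real implementation would memoize or BFS rather than re-walk the graph on every reply, but the shape of the scoring is the same.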

Solving the limits of FoaFs

The friends-of-friends idea is definitely good but it still suffers from an introduction problem. A new user, for instance, won't be followed by anybody at first. How would they ever reply to anybody?

There are a couple ways we could address it.

Follow requests

We could certainly create a way to request follows. This isn't out of character for social networks; Facebook and LinkedIn work this way.

The downside is that there's going to be a lot of request spam. This also makes it hard to curate your follows on the basis of liking a person's posts; we'd need good ways to tune your following feed so that there's minimal "follow regret." (We need to do this anyway.)

"Community follow bots"

Suppose somebody creates a "Bluesky Users" bot account. The sole purpose of this account is to follow people that join the network, and unfollow them or block them if they violate a specific norm. Other people would then follow the "Bluesky Users" account to create a FoaF connection between a wide community.

It's nice how simple this approach is. [2] Once you understand the logic, you know that you just have two things to pay attention to: follows and blocks.

The main concern I have is that it might not be obvious to people at first. "Follow X community account so people can reply to you" is not the most intuitive model for social networking.

Other answers?

Happy to hear other ideas for solving the introduction problem on top of a FoaF framework. The solution I landed on with 0001 was lists, so let's dig into that.

Lists as an answer to the introduction problem

Let's look at the original logic for who can reply:

Does the OP follow the replier?

0001 proposes we use lists to make it more sophisticated:

  • Does the OP follow the replier? or
  • Is the replier on a list that OP trusts?

In a way, the lists are acting like a bulk follow that creates a trust connection. They're logically very similar to the Community follow bots idea, but they're a little more obvious about how they work.
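In code terms, the list check is just one more clause in the reply gate. A minimal sketch, where op_follows and trusted_lists are illustrative stand-ins for however a client would actually resolve follows and list memberships:

def can_reply(op_follows, trusted_lists, replier):
    # direct follow, or membership in any list the OP trusts
    return (replier in op_follows or
            any(replier in members for members in trusted_lists))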

The subtle but key concept in 0001: lists that scale up

One of the proposals in 0001 is "Membership APIs" which is a set of tools to make lists joinable & leavable, and to give multiple people admin control over a list. When you use those tools, the list takes on a totally different flavor. It starts to feel more like a group or a subreddit.

The way I'm imagining/hoping this would be used is, there would be lists that are relatively easy to join but which then enforce norms of membership. When somebody acts up, they get removed from the list and thus lose access to the replies of that community.

In a way, the community lists are a form of reputation management. The admins of the large lists are essentially upholding a standard of behavior, and membership means you've upheld that standard.

Problems with 0001's proposals

There are a number of concerns that I and others hold for the proposals in 0001, and I'm writing this issue mainly to highlight them and start a conversation about whether they can be mitigated or if a different approach is needed.

In short: public lists have a lot of failure modes. I'm concerned that

  • Non-consensual inclusion in public lists will lead to harassment
  • List owners will make decisions based on personal grudges, run protection rackets, or generally act without accountability
  • Conversely, list owners will be subjected to constant harassment over their decisions

Let's dig into each of these.

How public lists lead to harassment

It's pretty trivial to think about how a list can be used to create targets of harassment. All someone has to do is create a list of "Bad People" and then encourage their followers to dogpile anybody included on it. This isn't a problem unique to public lists, but it's a real problem to be sure.

Inclusion on the lists can easily work like a scarlet letter, as mentioned in 0001. I'm very concerned about incentivizing people to label one another as good or bad in a public fashion.

List owners abusing authority

Block lists have a complicated history. They carry a lot of weight, and being included on a block list with no context leaves readers assuming the worst. There have been accounts of people being included by accident and having no recourse.

There is a huge temptation for list owners to make decisions based on personal motives. I think we can all identify with the urge to yeet somebody into the shadow realm for a personal slight. List admins are human and are going to act like humans.

Even more pernicious is the "protection racket" abuse of power. It looks something like this: "Hey, you've been added to a popular blocklist my company runs due to the following report. For the low price of $59.99 we can run an appeals process to potentially remove you." I'm told by some graybeards that anti-spam providers have hit this territory, and I think it's no stretch to imagine it happening here.

List owners being abused

Given how serious the ramifications of a list can be, list admins are going to be under some pretty intense pressure from users. Somebody who sets out to help protect their community may soon find themselves on the receiving end of a lot of anger.

It doesn't even have to be anger that makes a list-admin miserable. The minute you become a mod, you get the "mods mods help help!" messages. Suddenly you've been signed up for a bunch of emotional labor in your social, non-work space.

How do we solve these problems?

What we need to establish is whether these issues can be mitigated, and how much of the premise needs to be reconsidered.

There's a balance to strike between keeping things simple and structuring them properly. We don't want to overcomplicate the network. Much of this is wrapping our heads around what properties are essential for things to work. Too simple is just as bad as too complex.

Disassociating lists from users

We want lists to exist separately from servers so that they have independence. We don't want a situation where the company can seize control over them. This means the lists need to live on accounts, since accounts are portable (they can move between servers).

However we also need to consider that, if lists are meant to act like community institutions, then they need more support than an individual can provide. Attaching lists to individuals places an enormous burden upon them. If we expect lists to handle these kinds of things, we may need a simple way to create accounts with the express purpose of being managed by multiple users.

Reconsider non-mutual public lists

Public lists that can add anybody are just spicy. For every positive use case there seems to be an equally negative one. We may need to seriously consider whether we're comfortable with that outcome.

Requiring consent for inclusion in a public list would mean that mutelists and blocklists are toothless. We'd have to orient the network toward positive inclusion instead; people essentially start out untrusted and then join lists to gain reputation.

Invest in ways to structure these things

Sometimes I think we need to be viewing moderation through the lens of a judicial system, where there are processes for establishing rules, adjudicating decisions, and running appeals. Where a lot of the current ideas run aground is with trust and accountability; how do we trust people to run these kinds of moderation systems, and how do we counteract the ways abuse can occur?

This makes me think that if lists are the solution, then the membership APIs may need to get pretty detailed with their admin/moderation features.

Summary

The basic question is really simple: How do we decide who can interact with us? Where things go nuts is when we answer that question. The unintended consequences of the solutions are hard to grapple with.

This blogpost inside a GH issue is meant to give some more context to the challenge and help facilitate some follow-on discussions with folks. Which of these concerns resonate? Which solutions sound useful? Where should we steer our thinking from 0001?

As always, thanks to everyone for your thoughts.


[1] Anybody who has studied FoaF graph analysis will probably be pretty frustrated by how basic I'm being, but I'm trying to give a very accessible introduction to the ideas to make sure we don't exclude anybody.

[2] Secure Scuttlebutt, a project I worked on in the past, uses this exact technique.


@agentjabsco

@pfrazee I have a lot of different directions I want to go in with my replies to this, if it's okay with you I'm gonna do each one as a separate message (the Twitter posting style is fundamentally in my blood)

@QuantumFractal commented Jun 25, 2023

Feed ranking is super hard to do in a straightforward analytical manner. IIRC most sites (LinkedIn included) use an ML-based feed mixer that uses thousands of features. Directly allowing users to curate lists short-circuits this, causing engagement (positive or negative) to be laser-focused onto a group of individuals if the account promoting the list is bigger.

Parking my comment here, I'll go dig up some papers about this ⌚

@pfrazee (Contributor, Author) commented Jun 25, 2023

@agentjabsco that's fine!

@QuantumFractal this isn't really about feed ranking if by that you mean prioritizing posts to show up in a feed

@agentjabsco

"Community follow bots"

Suppose somebody creates a "Bluesky Users" bot account. The sole purpose of this account is to follow people that join the network, and unfollow them or block them if they violate a specific norm. Other people would then follow the "Bluesky Users" account to create a FoaF connection between a wide community.

It's nice how simple this approach is.2 Once you understand the logic, you know that you just have two things to pay attention to: follows and blocks.

The main concern I have is that it might not be obvious to people at first. "Follow X community account so people can reply to you" is not the most intuitive model for social networking.

it's worth noting that the What's Mid bot on Bsky already works more or less like this; I've also proposed treating accounts more or less like a subreddit

I like the way Bsky currently ties stuff like Lists and Feeds to individual account presences. I'm not really sure how else to put this, but the closest I can come to describing the semi-convergent-yet-incompatible evolution of Bsky's current approach to bots vs. lists vs. feeds and algorithms at the "app" level is that it feels kind of like C++, where you have all these different language features that can describe more or less the same types of abstraction, but each of them kind of works at cross purposes, and that ends up being the language's biggest problem?? (not sure how to put it in non-nerd terms)

I guess what I'm gesturing at here is that all these different approaches work for different users right now... I think it might be less important to sweat the particular presentational UX/engineering, and more to keep an eye on how to make it so that each of these approaches being taken by different communities / dimensionalities / whatever don't end up being any more of a headache than, say, the fifteen different types of "community" Facebook has had over its lifetime

@QuantumFractal

@pfrazee It's just feed ranking applied to a single post. Replies to a post get scored, and replies that don't meet the threshold don't get shown. A list approach, either an "approve-list" or a "deny-list", would work similarly to Google+'s Circles (and Twitter's new Circle feature); however, that doesn't seem like it would work when the list needs to be federated, which can open up abuse. I wonder if there's any way to have a list that's hashed so it couldn't be easily enumerated, but could easily be checked for verification?
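One way to sketch that idea in Python (illustrative only, and with a real caveat: account identifiers are publicly enumerable, so anyone willing to hash every known DID against the published salt can still reconstruct the list; this raises the cost of enumeration rather than preventing it):

import hashlib, hmac, os

salt = os.urandom(16)  # published alongside the hashed list

def digest(did: str) -> bytes:
    return hmac.new(salt, did.encode(), hashlib.sha256).digest()

# the published artifact: digests, not identities
hashed_list = {digest(did) for did in ("did:example:alice", "did:example:bob")}

def is_member(did: str) -> bool:
    # checkable for a specific candidate, but not readable as a roster
    return digest(did) in hashed_list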

@agentjabsco

following straight from that last comment: I'm actually thinking something that might make the most sense on this front would be to have something like "streams" that's distinct from "feeds" (I know this terminology is super confusing but I'm not sure how to make it better)

(actually, the right way to do it would be to shift up the naming here so the thing I'm calling "streams" here would actually become the new "feeds", and feeds would be renamed "algorithms" - this sounds impenetrably nerdy, but I think the reality of the situation is that virtually every person online is already comfortable calling anything that Feeds You Content an "algorithm")

but so the idea here is that an account's "Streams" would be pretty much like G+ "circles" - they could choose to multiplex their posts across a set of various discrete lists, and those could be anything from "different interests I have" (the way Google+ used Circles) to "one feed for my Good Posts and one feed for my Bad Posts" (where 10% of my posts would get listed on both)

this probably doesn't make much sense the way I'm pitching it, sorry: it might make more sense if I drew it as a mockup (and there'd still be a bunch of discoverability/surfacing issues, not to mention the C++ three-stooges-syndrome problem I described earlier around maintaining a federated system while trying to support six different mechanics for the same social dynamics)

@agentjabsco

> but so the idea here is that an account's "Streams" would be pretty much like G+ "circles" - they could choose to multiplex their posts across a set of various discrete lists, and those could be anything from "different interests I have" (the way Google+ used Circles) to "one feed for my Good Posts and one feed for my Bad Posts" (where 10% of my posts would get listed on both)

to be clear here, the difference between this and the Algorithms that Bsky currently calls "feeds" is that these Circle-esque "streams" would just be a multiplexing of the kind of "linear series of posts" that are currently modeled as being one-per-account: under the hood, they could be modeled as either an extension of the current post schema or a separate Lexicon(?) (still haven't really dived into the AT protocol docs, sorry)

@enn-nafnlaus

I 100% share your concerns, Paul!

@agentjabsco commented Jun 25, 2023

> Sometimes I think we need to be viewing moderation through the lens of a judicial system, where there are processes for establishing rules, adjudicating decisions, and running appeals. Where a lot of the current ideas run aground is with trust and accountability; how do we trust people to run these kinds of moderation systems, and how do we counteract the ways abuse can occur?

I have genuinely considered proposing the idea of running moderation for high-subjectivity cases through an in-platform Court System, complete with a jury of anonymous peers pulled at random from the active userbase as a mandatory Duty

@thecubic commented Jun 25, 2023

I think the desire is that lists can be repudiated by the author (ain't my list, prove it lmao), which makes them fit very poorly into an authenticated system. I'd personally keep it away from any social graph spaghetti.

Tradeoffs being tradeoffs, I would throw list portability right in the trash, or at least make it so the object must be re-generated on data movement. That relaxation is an opportunity to add repudiation decoupling. For example:

bob makes a list. this is private - because it is encrypted by a DEK and bob's key is (one of the) KEKs. I cannot prove that bob has created a "no jerks" list that includes eve (bob can demonstrate DEK-unlock, or announce that he will add and then remove Weird Al and do that for proof, if he wants that life), just that he has a list

... but the PDS can do this proof because it possesses the KEK. that's where "trust your PDS" does the magic - on any mutation, the PDS publishes a user-decoupled plaintext copy of the list (all of the plaintexts are downstream from encrypted, user-owned lists), and that is the socially-perceived list. Some mechanism needs to be present so that hooligans in the federation can't bash the public copy, of course

federation only need share the plaintext copies to make a universe-coherent list, and home server and hosted user have a portable copy that is the secret basis of the public list
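A rough sketch of that envelope-encryption shape in Python, using Fernet from the cryptography package purely as a stand-in for whatever primitives atproto would actually pick (every name here is illustrative):

from cryptography.fernet import Fernet

# bob's list is encrypted under a random DEK; the DEK itself is wrapped
# under each KEK (bob's key and the PDS's key)
dek = Fernet.generate_key()
list_ciphertext = Fernet(dek).encrypt(b'["did:example:eve"]')

bob_kek, pds_kek = Fernet.generate_key(), Fernet.generate_key()
wrapped_for_bob = Fernet(bob_kek).encrypt(dek)
wrapped_for_pds = Fernet(pds_kek).encrypt(dek)

# the PDS unwraps the DEK and publishes a user-decoupled plaintext copy;
# bob retains repudiation, since the ciphertext alone proves nothing about
# the list's contents to anyone without a KEK
plaintext = Fernet(Fernet(pds_kek).decrypt(wrapped_for_pds)).decrypt(list_ciphertext)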

I almost made a list recently but I knew the second I did, the Eye of Sauron would stare right at me and I didn't want my life to be filled with ringwraiths, so no list

@Bossett commented Jun 25, 2023

One of the things that occurs to me is that lists are a strong-ties situation: someone has actively chosen to place someone on a list, or to follow a list. Someone manages the list. Lists can be used in a bunch of different ways, and moderation is just one of them. Follow relationships, by contrast, are weak and transient; while a community may rally around lists, an individual may use many lists and touch people from many different areas for all sorts of reasons.

I'd propose two features:

FoaF as UI

I'd suggest just bringing that concept to a post, giving each post a 'distance' or 'weight', and using that primarily for UI. That gives you a nod toward 'is this person cool?', maybe via a little colour flag. Given a 'neutral' / 'good' (lots of friends in common) / 'bad' (lots of blocks from friends) scale, this can also help with discovery once you've followed a couple of people you like. The key here is that I like it as a UI concern, not as a strong moderation presence.

At some point of maturity, just let people auto-hide anyone below a certain negative score via a content control.

How to manage lists

On lists, then, you have a much more active set of controls. Essentially they work as implemented now, with some added features for management. Re-use existing metaphors: I create a list, and it auto-follows me. Lists are then managed in two modes, AND and OR (possibly with thresholds); see the sketch below. People followed by the list can add to it. In an OR list, anyone added by any adder is on the list. In an AND list, all adders (or a threshold number of them) need to add the same person for it to take effect. Seniority based on time added worked for subreddit moderation, and would probably work here (with some tools for people to call for a mod's removal).

User follows list, list follows trusted users.
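A minimal sketch of those two modes (the data shapes are illustrative, assuming each trusted adder's additions are tracked separately):

def list_members(additions, mode="OR", threshold=None):
    # additions: dict mapping each trusted adder -> set of people they added
    everyone = set().union(*additions.values())
    if mode == "OR":
        return everyone  # anyone added by any trusted user is on the list
    needed = threshold or len(additions)  # AND: every adder, or a threshold count
    return {p for p in everyone
            if sum(p in added for added in additions.values()) >= needed}

list_members({"mod1": {"x", "y"}, "mod2": {"y"}}, mode="AND")  # -> {"y"}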

@agentjabsco

regarding friend-of-a-friend algorithms for determining who is allowed to reply to a post: no. no. absolutely not. Twitter already got this one perfect years ago:

[image: Twitter's reply-permission setting and its three options]

to this point from the blog post announcing moderation proposals:

> A great experience should be simple to use. It shouldn’t be overly complex, and there should be sensible defaults and well-run entry points. If things are going well, the average user shouldn’t have to notice what parts are decentralized, or how many layers have come together to determine what they see. However, if conflict arises, there should be easy levers for individuals and communities to pull so that they can reconfigure their experience.

for every single post I have ever seen a person make in the real world, there is absolutely zero benefit to telling the user to game out a fuzzy math problem, or to making this any more complicated than those three options.

FoaF may have a place for, like, spam heuristics, but it's not something that should ever, ever get surfaced in anything less dorky than nostr

@Bossett commented Jun 25, 2023

I think 'who can reply?' is a good example of managing some of the problem. I'd consider adding lists to those options as well - so adding 'List X' there. It would be really good if you could get one degree away - 'People you follow and their followers' - because I think it would be a very bad outcome if people arrive, and then can't interact because everything is list-gated.

If I bring a friend onto the site, I want to be able to vouch for them immediately - so follows & follow-of-follows kinda solves that.

A big problem I see on Twitter is that big accounts only interact with big accounts, and they do it by just kinda lobbing things out there without really having back-and-forths in the replies. I think 'People you follow' encourages this as there's a lot of politics around who you follow. That's the issue I see with the one-step 'people you follow' option.

@agentjabsco commented Jun 25, 2023

There's a broader existential point to be made about the necessity of pushing back against algorithmic solutions to social problems, a problem that has pervaded pretty much every communication the Bluesky team has put out about moderation (even when it's not the focus, it always finds a way of sticking its awful head in), and that, as far as I'm concerned, can only really be adequately covered on bsky itself

xkcd 2610 but it's social media

@agentjabsco commented Jun 25, 2023

One easy solution that strikes me for the question of "how do we stop users from abusing lists" would be to make it possible for users to report lists (you know, the way they can already report posts and/or entire accounts right now)

EDIT: oh wait that was literally the last line of the first post in issue #1 lolol

@Bossett commented Jun 25, 2023

I think all these systems will be inevitably abused - so something reportable (like a list) has a big advantage over something algorithmic. People will abuse or game either, but a list you can ban.

@agentjabsco commented Jun 25, 2023

> I think all these systems will be inevitably abused - so something reportable (like a list) has a big advantage over something algorithmic. People will abuse or game either, but a list you can ban.

The Tyranny of Structurelessness goes hand-in-hand with this

if I might try to sum up my take here: when considering your moderation strategies, there are gonna be places where you're gonna feel an impulse to use something more precise than a blunt instrument, and that is the wrong call - there are some things that are, in fact, best solved with blunt instruments, because anything more leads to the kind of quagmire the CIA has used to sabotage entire nations since World War II.

EDIT: link fixed because the content on the CIA's own page for this has been lost somewhere in a series of blog migrations

@anarisis

To inject some I Don't Know What I'm Talking About From a Technical Perspective But I Sure Am A User of Social Medias perspective:

It sounds like lists as proposed are operating as two essentially separate tools that are analogous to both Facebook's lists (a curated selection, defined by a poster, of users who can or cannot interact with a given post) and Twitter's lists (a curated selection of users that an audience can choose to view as a feed). Both of these functions are useful, but pooling them into a single feature seems to be causing some problems? The first functions well as private curation (not unlike Twitter's Circle feature) and doesn't necessitate consent from listed users because its purpose is to delineate the list owner's boundaries around their own space. Public lists that can be shared are useful for feed curation and describing community membership, but as pointed out they open up a lot of possible avenues for abuse, which only increase in number as various forms of posting functionality are coupled with them.

I think it might be valuable to define which individual functions are useful on a user level (I only want replies from XYZ group, or I want to exclude interaction from ABC group, or both) and which functions are useful on a network level (here is a curated public list relevant to a topic or community) and break them apart if not in posting UI (which could add bloat/confusion for users) then in the underlying structure to make sure the usefulness of one doesn't enable abuse with the other. While both are — in essence — Lists™, the problems they solve and the issues they raise are very different so viewing them in aggregate may create an unsolvable conflict.

@sneakers-the-rat

Networks of trust are a good idea for resisting abuse, but all of these ideas have to start from a place of "how will this work with federation" or else they become meaningless. It's possible to dream up all the cool features you want in a context where you can assume all the actors are well behaved and obey the spec, but that's simply not true with federation, so the adversarial case is the base case.

basically questions like these need to get answered before considering features built on top of lists:
#18

FOAF computations, particularly if individual accounts can define their own, quickly become extremely costly graph computations. This is tractable in p2p systems or centralized systems, but it has the effect of limiting federation to only the most well-resourced hosts: will all big graph services, app views, and feed generators need to compute a FOAF graph for every action? Echoing the above comments as well: having multiple overlapping permissions systems would be a mess, so permissions need to be architected cleanly as a basic part of the protocol rather than bootstrapped off lists (though hey, that could be the implementation; proto development is a closed process as far as I can tell)

@Bossett commented Jun 26, 2023

You only need to calculate when a new follow happens - orders of magnitude less often than every action - and the results are very cacheable (i.e. just copy my followed list into your FoaF list).
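For instance, a minimal incremental scheme (illustrative only) might keep a two-hop cache that is touched only by follow events, never by replies:

follows = {}      # user -> set of users they follow
foaf_cache = {}   # user -> set of users reachable in exactly two hops

def on_follow(a, b):
    follows.setdefault(a, set()).add(b)
    # a inherits b's follow list as FoaFs...
    foaf_cache.setdefault(a, set()).update(follows.get(b, set()))
    # ...and everyone who follows a gains b as a FoaF
    for user, followed in follows.items():
        if a in followed:
            foaf_cache.setdefault(user, set()).add(b)
    # (unfollow is the hard case: removal needs a recount, since another
    # path to the same account may still exist)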

And for a lot of these, the cosmetic side is going to be all that really matters. So long as you use a well-behaved client/server, it should (either through protocol-level validation, or by manipulating the graph as 'invalid' data arrives) sort things out on the way to the user. So who cares that someone on some other server comments on your post when you have them blocked? Your service will drop it, and they're just screaming into the void.

That is - the adversarial case may be the base case for server-to-server comms, but so long as you make bad behaviour loud enough at a line level (oh, replied to a blocked thread, will increment their 'bad' counter til we hit a limit and defederate), and invisible to the end user, I don't think that forces you to accept premature trade-offs.

@sneakers-the-rat

> You only need to calculate when a new follow happens - orders of magnitude less often than every action - and the results are very cacheable (i.e. just copy my followed list into your FoaF list).

This is an implementation question - who is doing the calculation? So if it's at the client/PDS level, then sure, a client would be able to listen for all changes in the n-deep layers according to the different lists and rules that I have configured, cache different computations, great. It's sort of costly but tractable for a client, and the user would probably be able to see "ok that set of filtering lists is too complex and the computation never completes so maybe I need to simplify."

But it seems like this would also have to be computed at the level of big graph services, feed generators, and app views, where you would need to compute graph intersections for the "show this to {x}" from the poster and "show this from {y}" for each of the recipients. Even in the best case where all the clients are cached and up to date, you would need to compute a bunch of pairwise graph intersections. That might be simplified somewhat by sharing lists so each PDS doesn't have its own completely idiosyncratic permissions structure, but it's an empirical question to what degree that happens.

My point isn't so much "this is too expensive to be viable," but this might be too expensive for a given class of users for a given type of node in the network - would this make it too expensive to run a big graph service or app view from the random VPS I rented for $10? These are choices that can strongly affect the health of the network, and whether or not some of the loftier ambitions of a decentralized social media system can be realized or not.

> (oh, replied to a blocked thread, will increment their 'bad' counter til we hit a limit and defederate)

Right, who can do the defederation, and how is that implemented? Sure, my client can block things, but is that the level of safety that is being spec'd for? My individual PDS might be able to block direct communication with another PDS, but that's not how federation in atproto is designed so far. If it's all within the client, and the graph services and feed generators can just willfully ignore it, then my rules are strictly cosmetic and only apply to me and my friends who are also running well-behaved clients and sharing my blocklists. If it's within the indexing systems, then how much does that weaken federation, since now more of the system logic takes place within the network, and again you have to consider whether the other network players are well behaved?

> And for a lot of these, the cosmetic side is going to be all that really matters. [...] So who cares that someone on some other server comments on your post when you have them blocked? Your service will drop it, and they're just screaming into the void.

Part of my intention here is asking whether the developers share this view that only functioning at a cosmetic level is OK.

My argument is that these are only premature tradeoffs for a given set of priorities. If you have user safety as a baseline priority, these are not premature tradeoffs, they are fundamental design decisions. If instead you have platform convenience as a baseline priority, then sure this might be a premature tradeoff since you can just hide it at the client level. Design priorities determine implementation - so if you need this to actually work as it appears, that might include some implementation at the indexing and feed generation layer vs. just at the client level.

Anyone who has been a victim of harassment online would likely tell you that a strictly cosmetic blocking feature is not acceptable, and that not being able to see it while everyone else can may be even more dangerous than being able to see it and adapt behaviors/network usage to compensate. If a decent chunk of the network is not running well-behaved clients, it is decidedly not screaming into the void, but potentially calling targets for an organized hate network.

These aren't hypotheticals, these are lessons learned from existing social networks, decentralized and not. For example, on the fediverse, mastodon servers can choose to make the list of servers they have defederated from public on their API and /about page. This seems initially like a relatively uncontroversial thing to do: I want potential users to know they will be safe here because we have blocked the hate instances, and I want to make my blocklist available to be reused by other instances - sort of the same kind of logic as in this proposal with shared blocklists.

The problem is that hate groups will scrape these lists and use them to target abuse: if you have blocked {set of hate instances} then you are a target, and that can come in the form of DDoSing the instance, hunting down individual accounts, and more. If you have a public account on one of these instances with a public blocklist, then you won't see all the replies and quote posts (they do exist on the fediverse too in a number of clients, just not base masto) of people doxing you and pointing you out to the mob from those instances - the cosmetic case described here. The problem is that those posts might be visible to anyone who doesn't defederate from them, the group might spin up some temporary instances to come harass you in a way you can see, or they might use them as a coordinating point for IRL violence (it has happened many times, and not just on fedi).

This is part of the reason why certain safety mechanisms exist, like AUTHORIZED_FETCH, where rather than posts being publicly available, you need authorized credentials to see the post in the first place - opt-in rather than opt-out federation.

So again, yes it is possible to set the standard that "cosmetic-only functionality is OK," but that is also a statement that a certain amount of abuse is ok, and I want to hear the atproto/bsky devs thoughts on where they draw that line. If that's ok with them, then it might be worth letting all the queer people currently on the platform know. If it's not ok with them, then my questions remain - how they intend to make these features actually work in a federated context.

@navix commented Jul 2, 2023

It would be great to have some invite system for lists: users already on the list share their credibility with invitees, and we gain the ability to cascade deletion of bad actors (API first; tools can be delivered by third-party devs).
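A sketch of the cascading half of that idea (the data shapes are illustrative): if each member records who invited them, removing a bad actor can sweep out the whole subtree of accounts whose credibility chained through them.

members = {"list_owner"}   # the root of the invite tree
invited_by = {}            # member -> the member who invited them

def join(invitee, inviter):
    if inviter in members:
        members.add(invitee)
        invited_by[invitee] = inviter

def remove_cascade(bad_actor):
    members.discard(bad_actor)
    # recursively remove everyone this account vouched for
    for invitee in [m for m, inv in invited_by.items() if inv == bad_actor]:
        remove_cascade(invitee)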
