• 3 Posts
  • 40 Comments
Joined 1 month ago
Cake day: November 10th, 2025


  • Yes, this is a great discussion. Generally speaking, I can see how local-first posting by users, mirrored by relays under E2EE, can help solve some of the downsides of instance-based federation; however, it seems like the actual implementation makes or breaks the utility.

    A concept came up for me while reading the comments on Lobsters: asynchronous/intermittent pushes and pulls by users to/from the relays, plus relay downtime and otherwise inconsistent relay behavior, inevitably lead to incomplete, non-consensus, or out-of-date data access (something federation also suffers from).

    My idea is that relays, both standard and specialized, could host a dedicated encrypted ledger for each user/key that has posted to them (potentially within a time limit, or with approval). The ledger would hold only a sequential identifier of the user’s most recent activity (counted since the first event by that key) and a unique identifier for the event that activity was associated with (so an edit would reference the unique ID of the post being edited, for example, and a new message in an ongoing thread would reference the unique IDs of the thread and the message). Limit this log to very few entries, say between 1 and 10, and replace it every time it is updated; the file stays very small, and the pushed update from the user/key is also very small.

    This way a user could push activity-log updates to a broader set of hosts/relays than the actual content/event was sent to, while keeping the cache/data burden on the broader network down. Ideally this would mean that not only the relays but also users following the user/key could hold the log (enabling gossip without a large cache burden). Unlike a blockchain, where the ledgers would need to cross-sync with each other and seek consensus on larger data chunks, here the reader of the ledger can always default to the most recent sequential identifier, and that identifier is generated by the publishing key/user.

    This way timestamp variance isn’t an issue, and at login a user can pull the logs for all users/keys they follow from relays OR peers they follow and determine the number of events posted by each user/key since they last pulled updates. The client can then crawl the relays for the actual events with sequential identifiers in that gap, and stop crawling once they are all found (a rough sketch of this is at the end of this comment).

    One issue I see with this sort of system is deleted events, so the log might also need to include a list of the sequential identifiers of events deleted within a given time period.
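
    A minimal sketch of what such a per-key activity log and the catch-up pass might look like, in Python. Everything here (ActivityLog, fetch_event, the field layout, the 10-entry cap) is an assumption made up for illustration, not part of any existing protocol, and real logs would of course be encrypted and signed by the publishing key.

    ```python
    from dataclasses import dataclass, field


    @dataclass
    class ActivityLogEntry:
        seq: int                        # sequential counter since the key's first event
        event_id: str                   # unique ID of the new/edited event
        related_id: str | None = None   # e.g. the thread or post this activity refers to


    @dataclass
    class ActivityLog:
        pubkey: str
        entries: list[ActivityLogEntry] = field(default_factory=list)
        deleted_seqs: list[int] = field(default_factory=list)  # recently deleted events
        max_entries: int = 10           # keep the log tiny; replace it on every update

        def record(self, entry: ActivityLogEntry) -> None:
            """Append the newest activity and drop everything but the last few entries."""
            self.entries.append(entry)
            self.entries = self.entries[-self.max_entries:]

        @property
        def latest_seq(self) -> int:
            return self.entries[-1].seq if self.entries else 0


    def catch_up(logs, last_seen, fetch_event):
        """Work out which events we missed for each followed key, then crawl for them.

        `logs` are activity logs pulled from whichever relays or peers we could reach;
        `last_seen` maps pubkey -> highest sequence number we already hold;
        `fetch_event(pubkey, seq)` stands in for crawling relays for one event.
        """
        fetched = {}
        for log in logs:
            start = last_seen.get(log.pubkey, 0) + 1
            events = []
            for seq in range(start, log.latest_seq + 1):
                if seq in log.deleted_seqs:
                    continue  # the author deleted it; no point crawling for it
                ev = fetch_event(log.pubkey, seq)
                if ev is not None:
                    events.append(ev)
            fetched[log.pubkey] = events
            last_seen[log.pubkey] = max(last_seen.get(log.pubkey, 0), log.latest_seq)
        return fetched
    ```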


  • My understanding is that the content is essentially self-hosted, so content removed from relays still exists on the posting user’s client and can be accessed directly, just like a website sending out RSS. So saying it “still exists on the network” is technically true, but only in the same way you would say that about, say, BitTorrent or the open web. What people host/post is present raw; what is amplified/“curated”/relayed is filtered. Client settings/config set the default and custom ways a user interacts with content, like a browser that may or may not have an ad blocker.

    In principle this seems like a decent solution, but I can see why different users prefer different protocols: deferring to moderation takes a burden off the user to vet inbound content. The same can be achieved via relays, but the culture of “curation” seems weaker because there is less pressure from the userbase to optimize it, since users are not solely reliant on any one relay. An odd network effect, but a truly invested curation/admin team could just as easily build a well-“curated” relay as a well-moderated instance.





  • Client-side curation effectively sounds like whitelisting: if you follow only the curated feed and the curated feed re-signs all events posted by selected keys, that’s a whitelist. It seems like a decent solution for casual users, so long as they can find trusted curators and clients that make those curators easy to discover and subscribe to. What client is best for this currently?

    On the flip side, if those “curators” were able to export and import lists of keys to automatically exclude from feeds, that would be very useful for the curators who have to manually or automatically sort events and new users to build their feeds. Is that feature currently available? Excluding known bot accounts from feeds seems like the minimum viable feature set for new curators in the current state of play (see the sketch below).
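
    A minimal sketch of the kind of key-list filtering I have in mind, in Python. The file format (a plain JSON array of pubkeys), the function names, and the example keys are all hypothetical; I don’t know whether any current client or curation tool exposes exactly this.

    ```python
    import json


    def load_key_list(path: str) -> set[str]:
        """Import a key list (e.g. known bots) shared by another curator."""
        with open(path) as f:
            return set(json.load(f))


    def save_key_list(path: str, keys: set[str]) -> None:
        """Export a key list so other curators can import it."""
        with open(path, "w") as f:
            json.dump(sorted(keys), f, indent=2)


    def filter_feed(events, allowed=None, blocked=frozenset()):
        """Whitelist mode when `allowed` is given, otherwise blocklist-only.

        Each event is assumed to carry a `pubkey` field identifying its author.
        """
        kept = []
        for ev in events:
            author = ev["pubkey"]
            if author in blocked:
                continue                 # drop known bots / excluded keys
            if allowed is not None and author not in allowed:
                continue                 # whitelist: keep only selected keys
            kept.append(ev)
        return kept


    # Hypothetical usage: a curator drops known bots, then keeps only trusted keys.
    bots = {"pubkey-of-known-bot"}       # in practice: load_key_list("known_bots.json")
    trusted = {"pubkey-of-trusted-user"}
    feed = filter_feed(
        [{"pubkey": "pubkey-of-trusted-user", "content": "hello"}],
        allowed=trusted,
        blocked=bots,
    )
    ```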










  • DeX is actually pretty good when used with a keyboard and an external monitor. I also don’t love the Samsung walled garden, but I end up buying their products because I use my phones for several years at a time before replacing them, so top-end hardware specs, especially cameras, are a priority.

    I would go Sony, but their cellular band support in the US is incomplete, and I can’t get caught out by poor cell service while traveling.

    I am considering going Pixel next, but GrapheneOS hasn’t been announced for the Pixel 10 yet, so I’m a bit on the fence. I guess I could buy an older model and try it Wi-Fi-only for a bit to see how I like it.



  • A lot of the issue with foldables is the non-standard aspect ratio. This one gets to a standard tablet aspect ratio, so it should run most apps out of the box without additional modification.

    Also, on-device DeX support means it can run fully windowed applications and use a mouse and keyboard natively, which is a big boost in functionality for productivity applications.



  • If I’m reading (skimming) the documentation right, it seems like anyone who can pass the challenge can download the full node and see the full record of interactions. IPFS is not a perfect privacy network, so user accounts can in theory be traced back.

    So basically, as with Fedi instances, it is fully on the node host to decide who can get in based on the challenge, and what is hosted there is their liability. The only difference is that Plebbit allows any user to spin up a new instance/community node ad hoc, and they aren’t responsible for maintaining infrastructure beyond what is required to seed the nodes they host.

    Is that right? I’m not sure, but hopefully someone better in the know will correct me if not.