• 0 Posts
  • 63 Comments
Joined 2 months ago
Cake day: June 7th, 2025



  • Forums have mods and admins. As long as they don’t habitually allow a topic then, per my understanding of the UK law right now, that would make them exempt.

    Compliance with the ID law is actually quite expensive if you contract Persona as the ID checker. If 1 user of a site that isn’t based in the UK or about UK things posting 1 news article a mod deletes in 10 minutes is enough to trigger a $50,000 compliance contract, then that’s excellent standing for an easily won lawsuit about burdens on small businesses.


  • I’ve read what seems like 30+ articles and explainers about the UK law over the last few days - it has some lousy (official) definitions. I think the most recent episode of Power User with Taylor Lorenz might cover some of this well enough to get the overall sense.

    The topics under scrutiny for a “user-to-user” site are extremely vague beyond obvious porn, but it amounts to this: if the site allows sharing links to basic news on any topic, it counts. Because in terms of categorizing “harmful content” for minors, seeing fucking protests happening anywhere, at all, is “controversial adult content.” But if the links are limited to a very specific topic - say Honda Ridgeline owners, privacy and cyber nerd shit no one cares about, cooking, and other innocuous things - it’s a grey zone that doesn’t demand compliance. YMMV, but even a fascist-wannabe set of policies can’t justify calling a Linux forum or a forum for owners of the Honda Ridgeline (WTF?) “harmful” material for kids.





  • Great question!

    First, the definition of content considered “adult” doesn’t necessarily mean every forum qualifies. Privacyguides.org likely would not. A car forum likely would not. Facebook must comply because links shared anywhere on the platform can be “harmful.” The fractured nature of Web 1.0 is a feature now, not a bug.

    Second, proxy measures can reasonably work for forums with smart admins. If I register with an email address I can show has been in use since 2007, some forums are willing to accept that as enough evidence. I saw an article somewhere (I can’t find it right now) about a site accepting 5-year-old tickets to a concert or some other 18+ event as proof. Typically, age verification laws are focused on large Web 2.0 platforms and can include lower-cost, lower-threshold options for sites with a very small number of users.

    Finally, it might simply take a long time for anyone to care about, or even notice, some smaller sites. By the time someone comes calling, policies might have already changed several times, and reasonable exemptions might mean no work is needed.








  • So you object to the premise that any LLM setup that isn’t local can ever be “secure,” and you can’t seem to articulate that.

    What exactly is dishonest here? The language on their site is factually accurate; I’ve had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a “brand issue” because…why? It sounds like a very emotional argument, as it’s not backed by any technical discussion beyond “only local is secure, nothing else.”

    Beyond the fact that:

    “They are not supposed to be able to and well designed e2ee services can’t be.”

    So then you already trust that their system is well-designed? What is this cognitive dissonance where they can secure the relatively insecure format of email but can’t figure out TLS and log flushing for an LLM on their own servers? It isn’t even a complicated setup: TLS to the context window, keep no logs, flush the data. How do you think no-log VPNs work? This isn’t far off from that.
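
    To make it concrete, here’s a minimal sketch of that kind of no-log relay in Python. The URL and response shape are placeholders I made up - Proton hasn’t published their internals - but the point is that “no logs” is just the absence of a write, not exotic technology:

        import requests  # assumes the requests library is installed

        # Hypothetical internal endpoint; not Proton's real architecture.
        INFERENCE_URL = "https://inference.internal.example/v1/generate"

        def relay(prompt: str) -> str:
            """Forward a prompt over TLS and return the reply, persisting neither."""
            resp = requests.post(INFERENCE_URL, json={"prompt": prompt}, timeout=60)
            resp.raise_for_status()
            # No logger call, no database write, no cache. The prompt and the
            # response only ever exist in memory and go out of scope on return.
            return resp.json()["text"]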


  • My friend, I think the confusion stems from you thinking you have a deep technical understanding of this, when everything you say demonstrates that you don’t.

    First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.

    Lumo apparently runs on Proton servers - where their email and docs all live as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy…just…no.
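
    That “32B model, 32GB card” figure is easy to sanity-check. A rough back-of-envelope sketch in Python (weights only, ignoring the KV cache and runtime overhead):

        # Approximate weight memory for a 32B-parameter model at common precisions.
        params = 32e9
        for precision, bytes_per_param in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
            gib = params * bytes_per_param / 2**30
            print(f"{precision}: ~{gib:.0f} GiB of weights")
        # fp16: ~60 GiB, 8-bit: ~30 GiB, 4-bit: ~15 GiB - so "about a 32GB
        # card" holds at 8-bit quantization, before any KV cache or overhead.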

    Look, Proton can MITM attack your email at any time, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM attack your life, so we already have the counterfactual. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.

    There is no such thing as an e2ee LLM. That’s not how any of this works. Encrypting the chat in transit to get what you type into the LLM context window, letting the LLM process tokens the only way it can, getting your response back, and keeping no logs or data is about as good as it gets short of a local LLM - which, remember, means on YOUR machine. If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert whose ill-informed, made-up “standards” everyone on earth needs to adhere to.
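
    If it isn’t obvious why the model itself has to see plaintext, here’s a toy illustration (nobody’s real code): token IDs index an embedding table and flow through matrix multiplications, and - fully homomorphic encryption research aside - that math runs on cleartext numbers:

        import numpy as np

        # Toy "model": an embedding lookup plus one matrix multiply.
        vocab = {"flush": 0, "the": 1, "logs": 2}
        rng = np.random.default_rng(0)
        embed = rng.standard_normal((len(vocab), 4))   # embedding table
        w_out = rng.standard_normal((4, len(vocab)))   # output projection

        ids = [vocab[w] for w in "flush the logs".split()]  # plaintext tokenization
        hidden = embed[ids]      # the lookup needs cleartext token IDs
        logits = hidden @ w_out  # the matmul needs cleartext numbers
        print(logits.shape)      # (3, 3): one score per token per vocab entry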

    Also, clearly you aren’t using Proton anyway, because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text, “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag saying you don’t actually know and are just guessing.



  • Both you and the author seem to not understand how LLMs work. At all.

    At some point, yes, an LLM has to process cleartext tokens. There’s no getting around that. Anyone who creates an LLM that can run 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but it goes further than duck.ai at respecting privacy. Your threat model and your equipment mean YOU make a decision for YOUR needs. This is an option. It is not trying to be one-size-fits-all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or Copilot.
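
    And “process locally” is genuinely within reach for anyone with the hardware. A minimal sketch using llama-cpp-python - one option among many (ollama and transformers work too), and the model file name here is just a placeholder:

        from llama_cpp import Llama  # pip install llama-cpp-python

        # Any GGUF model file you have downloaded; this name is a placeholder.
        llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

        out = llm("Why does local inference maximize privacy?", max_tokens=128)
        print(out["choices"][0]["text"])
        # The prompt, the weights, and the output never leave this machine.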

    And their LLMs - Mistral, OpenHands, and OLMo, all open source. It’s in their documentation. So this article straight up lies about that. Like… did Google write this article? It’s simply propaganda.

    Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it would be basically impossible to search your email for text in the message body. They already had that as an option, and if users want AI assistants, that’s obviously the bridge. But it’s not the default setup. It’s an option you have to turn on. It’s not for everyone. Some users want it. It’s not forced on anyone. Chill TF out.



  • OK… So, the initial question was “how could anyone support this?” right?

    I’m simply explaining how some people see the argument. I never said I see it like this.

    So I’m by no means defending any of this beyond noting that it’s technically possible, and even at that, it falls far short of anything resembling acceptable in my book.

    Parents who vote and would support this would do so based on limited technical knowledge and a total ideological investment in “preventing” any exposure. Which, we agree, is idiotic.

    Y’all really need to chill out with your pitchforks.