

Hey guys, I found the Clanker!
Forums have mods and admins. As long as they don't habitually allow a topic, then, per my understanding of the UK law right now, that would make it exempt.
Compliance with the ID law is actually quite expensive if you contract Persona as the ID checker. If 1 user of a site not based in the UK or about UK things posting 1 news article that a mod deletes in 10 minutes is enough to trigger a $50,000 compliance contract, then it's more than enough standing for an easily won lawsuit about burdens on small business.
I’ve read what seems like 30+ articles and explainers about the UK law over the last few days - this thing has some lousy (official) definitions. I think the most recent episode of Power User with Taylor Lorenz might cover some of this enough to get the overall sense.
What puts a “user-to-user” site under scrutiny is extremely vague beyond obvious porn, but it amounts to this: if the site allows sharing links to basic news on any topic, it counts. Because in terms of categorizing “harmful content” for minors, seeing fucking protests happening anywhere, at all, is “controversial adult content.” But if the links are limited to a very specific topic (say Honda Ridgeline owners, privacy and cyber nerd shit no one cares about, cooking, and other innocuous things), it’s a grey zone that doesn’t demand compliance. YMMV, but even a fascist-wannabe set of policies can’t justify calling a Linux forum or a forum for owners of the Honda Ridgeline (WTF?) “harmful” material for kids.
At this point even music discovery is being enshittified, with AI bands taking up more and more space as low-cost filler.
The last 10 years of the internet are committing seppuku in front of us. Gird your musical loins, friend. We’ll need jams in MP3 format in the dystopian hellscape that’s rushing to meet us.
Sure you can post links, but that’s not the topic of the forum, and it’s not specific to a country or market, which is also a factor right now with the UK law, so it doesn’t ping as a problem worth dealing with.
This 1 million%.
The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.
Summarizing and aggregating data alone isn’t a substitute for the smoke and mirrors of confidence that is a consulting firm. It just makes the ones that can lean on branding able to charge more hours for the same output, and add “integrating AI” as another bucket of vomit to fling.
Great question!
First, that the definition of content that is considered “adult” doesn’t necessarily mean every forum qualifies. Privacyguides.org likely would not. A car forum likely would not. Facebook must comply because links shared can be “harmful” anywhere on the platform. The fractured nature of Web 1.0 is a feature now, not a bug.
Second, that proxy measures can reasonably work for forums with smart admins. If I register with an email I can show has been in use since 2007, some forums are willing to accept that as enough evidence. I saw an article somewhere (which I can’t find right now) about a forum accepting 5-year-old tickets to a concert or some other 18+ event as proof. Typically age verification laws are focused on large Web 2.0 platforms and can include lower cost, lower threshold options for sites with a very small number of users.
Finally, that it might simply take a longer time for anyone to care or even notice some smaller sites. By the time someone comes calling, policies might have already changed several times and reasonable exemptions now mean no work is needed.
I’m literally going around trying out old school forum sites this week.
The documentation says it’s TLS encrypted to the LLM context window. The LLM processes it, and the context window output goes back via TLS to you.
As long as the context window is only connected to Proton servers decrypting the TLS tunnel, and the LLM runs on their servers, and much like a VPN, they don’t keep logs, then I don’t see what the problem actually is here.
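Here’s roughly what that flow looks like from the client side, as a minimal sketch. The URL and JSON field names are made up for illustration (this is not Proton’s actual API); the point is just that HTTPS is TLS in both directions, and “no logs” is purely a server-side policy.

```python
# Minimal sketch of a "TLS in, TLS out" LLM chat from the client's perspective.
# NOTE: the endpoint and JSON fields are hypothetical stand-ins, not Proton's real API.
import requests

PROMPT = "Summarize this email thread for me."

# The request body is encrypted in transit by TLS (the https:// scheme)...
resp = requests.post(
    "https://llm.example.com/v1/chat",   # hypothetical inference endpoint
    json={"prompt": PROMPT},             # plaintext exists only inside the tunnel
    timeout=60,
)
resp.raise_for_status()

# ...the model necessarily sees plaintext tokens in its context window while it
# generates, and the reply comes back through the same TLS connection.
print(resp.json()["reply"])              # hypothetical response field

# Whether the server keeps logs or flushes the conversation afterwards is a
# server-side decision the client can't verify from here. That's the trust
# part of the threat model.
```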
You said yourself that it wasn’t actually wrong or deceptive or inaccurate, but rather “confusing.”
read your own words.
SMH
No one is saying it’s encrypted when processed, because that’s not a thing that exists.
Because this is highly nuanced technical hair splitting, which is not typically a good way to sell things.
Look, we need to agree to disagree here, because you are not changing your mind, but I don’t see anything compelling here that’s introduced a sliver of doubt for me. If anything, forcing me to look into it in detail makes me feel more OK with using it.
Whatever. Have a nice day.
It is e2ee – with the LLM context window!
When you email someone outside Proton servers, doesn’t the same thing happen anyway? But the LLM is on Proton servers, so what’s the actual vulnerability?
So then you object to the premise that any LLM setup that isn’t local can ever be “secure,” and you can’t seem to articulate that.
What exactly is dishonest here? The language on their site is factually accurate, I’ve had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a “brand issue” because…why? It sounds like a very emotional argument as it’s not backed by any technical discussion beyond “local only secure, nothing else.”
Beyond the fact that
They are not supposed to be able to, and well-designed e2ee services can’t be.
So then you trust that their system is well-designed already? What is this cognitive dissonance that they can secure the relatively insecure format of email, but can’t figure out TLS and flushing logs for an LLM on their own servers? If anything, it’s not even a complicated setup. TLS to the context window, don’t keep logs, flush the data. How do you think no-log VPNs work? This isn’t exactly all that far off from that.
My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don’t.
First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.
Lumo apparently runs on Proton servers - where their email and docs all are as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy…just…no.
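The 32GB figure isn’t hand-waving, it’s just arithmetic on the weights. Rough back-of-the-envelope, ignoring KV cache and runtime overhead:

```python
# Back-of-the-envelope memory for just the weights of a 32B-parameter model.
# Ignores KV cache, activations, and runtime overhead, so real usage is higher.
params = 32e9

for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{label}: ~{gib:.0f} GiB")

# fp16: ~60 GiB, int8: ~30 GiB, int4: ~15 GiB
# So "about a 32GB card" is roughly the 8-bit case. Not something a browser tab runs.
```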
Look, Proton can at any time MITM attack your email, or if you use them as a VPN, MITM your VPN traffic if they feel like it. Any VPN or secure email provider can actually do that. Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM attack your life, so we have the counterfactual already. So your threat model needs to include how much you trust the entity handling your data not to do that, intentionally or by letting others in through negligence.
There is no such thing as e2ee LLMs. That’s not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way they can, getting your response back, and getting it to not keep logs or data is about as good as it gets without a local LLM - which, remember, means on YOUR machine. If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert and insist that everyone on earth needs to adhere to whatever ill-informed “standards” you think up.
Also, clearly you aren’t using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag saying you don’t actually know and are just guessing.
Really? This article reads like it’s AI slop reproducing Proton copy then pivoting to undermine them with straight up incorrect info.
You know how Microsoft manages to make LibreOffice throw errors on Windows 11? You really didn’t stop to think that Google might contract out some slop farms to shit on Proton?
Both your take, and the author, seem to not understand how LLMs work. At all.
At some point, yes, an LLM model has to process clear text tokens. There’s no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or CoPilot.
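If “process locally” is your bar, that’s genuinely easy these days. A minimal sketch, assuming you already have something like Ollama running on your own machine with a model pulled (the model name here is just an example); nothing leaves localhost:

```python
# Local-only inference sketch, assuming Ollama is running on this machine
# (ollama serve) and a model has been pulled, e.g. `ollama pull mistral`.
# The prompt and the response never leave localhost - that is what "local LLM" means.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={
        "model": "mistral",                   # example model name
        "prompt": "Explain TLS in one paragraph.",
        "stream": False,                      # return one JSON blob instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```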
And their LLM - it’s Mistral, OpenHands and OLMO, all open source. It’s in their documentation. So this article straight up lies about that. Like… did Google write this article? It’s simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it’s basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not a default setup. It’s an option you have to set up. It’s not for everyone. Some users want that. It’s not forced on everyone. Chill TF out.
95% of senior government officials across Africa will give you a fancy-ass, taxpayer funded, full color laminated plastic business card that lists their email address for official business as Gmail or Hotmail.
Like… Today. Right now. I used to have a collection. Once got one with a 3D hologram thing. Fucking hotmail and gmail address.
OK… So, the initial question was “how could anyone support this?” right?
I’m simply explaining how some people see the argument. I never said I see it like this.
So I’m by no means defending any of this other than it being technically possible, and at that, this falls far short of anything resembling acceptable in my book.
Parents who vote and would support this would do so based on limited technical knowledge and a total ideological investment in “preventing” any exposure. Which, we agree, is idiotic.
Y’all really need to chill out with your pitchforks.
Yeah.
It fucken worked, didn’t it?