

If it gets you talking about it, even in the context of telling them to shut the fuck up, it’s working :)
Because TurboTax lobbied to change the narrative to “we already have private market solutions for tax, therefore the government hosting a no-cost option is actually wasteful and bad for the budget”
Oh my bad. According to another commenter it is sandboxed though
~~They have Google services but through a third party wrapper called MicroG, which keeps it sandboxed to a degree that you can keep it from doing system-level actions like this~~
edit: not microG, as evidenced by the strikethrough I put in very soon after receiving the first of several replies clarifying the situation. I would encourage you to read one of them before adding your own. <3
Just ask them to answer your question in the style of a know-it-all Redditor because you need the dialog for a compelling narrative or something
I spent like 40 hours on XC2 and, uh, idk. I really liked the world design, but I wasn’t a fan of the effectively-gacha mechanics for unlocking new fighters. The story seemed to have a really slow start (which I’m not necessarily against), but the combat wasn’t my thing unfortunately. The Japanese voice acting is definitely a lot better than the English, and was worth waiting on the download for, even though I didn’t end up playing that far in.
People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.
Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it’s possible to shoehorn through.
Addendum: local models can help with this issue, as they’re on one’s own hardware, but they still need to be deployed and used with reasonable expectations: they’re fallible aggregation tools, not to be taken as an authority in any way, shape, or form.
On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.
The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately sway others.
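To make “complex statistical prediction algorithm” concrete, here’s a toy sketch: a tiny bigram model, nothing like a production LLM in scale or architecture, but the same basic idea of sampling statistically plausible continuations with no concept of truth.

```python
import random
from collections import defaultdict

# Toy bigram "model": count which word follows which, then sample
# continuations in proportion to those counts. A real LLM is a neural
# network trained on vastly more data, but the core loop is the same:
# predict a plausible next token from statistics, with no notion of
# whether the result is true.
corpus = "the study found the effect was significant the study found no effect".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    followers = counts[prev]
    return random.choices(list(followers), weights=list(followers.values()))[0]

word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # plausible-sounding, truth-agnostic text
```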
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
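One cheap sanity check for a suspect citation is seeing whether the cited DOI actually resolves. A minimal sketch against the public Crossref API (the DOI below is a placeholder; note a 200 only proves the DOI exists, so you’d still want to compare the returned title/authors to the claim):

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI. A 404 strongly suggests
    the citation was fabricated; not all real papers have DOIs or are
    indexed by Crossref, so treat a miss as a flag, not proof."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Placeholder DOI, for illustration only.
print(doi_exists("10.1000/placeholder"))
```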
90 days to cycle private tokens/keys?
That’s really good to know, and not how I thought the system worked previously. I thought instances were responsible for all vote aggregation and simply reported totals to each other at regular intervals, plus submitting users’ comments/edits, which are more obviously public.
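For anyone else surprised by this: my understanding is that a vote federates as an individual ActivityPub Like/Dislike activity attributed to a specific account, not as an anonymous running total. A simplified sketch of roughly what that payload looks like (the URLs are made up, and real payloads carry more fields):

```python
# Simplified sketch of the kind of ActivityPub "Like" activity an instance
# sends when a user upvotes a remote post. The "actor" field identifies the
# voter, which is why admins of the receiving instance can see who voted.
# (URLs are hypothetical; real Lemmy payloads include additional fields.)
upvote_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://example-instance.social/u/some_user",  # the voter
    "object": "https://other-instance.social/post/12345",    # the post
}
```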
Y’know, that’s fair. I think I misspoke, and meant to say that the admins of your instance can see your IP but not the admins of another (assuming you’re not self hosting on your home PC without a VPN), but I’m not 100% sure that’s true because I’ve never looked at the protocol.
If every interaction is already public on the backend/API level, then simply not showing the info to users is just a transparency issue.
The more I think about this, the more I believe it’s a cultural/expectations thing. On websites like Tumblr, all of your reblogs and likes are public info, but it’s very up front about that. It’s the same on social media like Facebook and IG, and on sites like Discord: you can look through the list of everyone who reacted.
Data is not suddenly public just because some people have access to it. Data is public when it’s available for anyone to look at. Privacy is almost always going to be a trust issue on some level, and very few things are possible to do truly anonymously. Some data will always be available to someone in a position to abuse it. Instance admins can see your IP address. Should that be available for everyone to see?
I mean, that’s already true, and it’s why the federation model is used in the first place. If another instance can’t be trusted, you can disconnect your own from it (extremely easy if you self-host; if you’re a standard member of a larger instance, it might require some convincing).
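Mechanically, defederation can be as simple as refusing inbound activities from a blocked domain before they touch local state. A hypothetical sketch (the names here are mine, not Lemmy’s actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical instance-level block check; real implementations differ,
# but the principle is the same: activities originating from a
# defederated domain get dropped on arrival.
BLOCKED_INSTANCES = {"untrusted-instance.example"}

def accept_activity(activity: dict) -> bool:
    actor_domain = urlparse(activity["actor"]).hostname
    return actor_domain not in BLOCKED_INSTANCES

incoming = {"type": "Like", "actor": "https://untrusted-instance.example/u/troll"}
print(accept_activity(incoming))  # False: the sending domain is blocked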
sigh
This is a No Safety Smoking First, not a Don’t Dead Open Inside
I think it depends on the reason you do not use it. The Luddites were primarily frustrated over automation displacing their high-skill jobs with low-skill ones that produced worse-quality goods. It’s a two-for-one: we lose the jobs we need to survive, but we also lose the personal touch from the work of artisans + the appreciation for their talent.
I am not against AI as a concept carte blanche, but it really does seem like a technology that makes interactions lower quality and more depersonalized, and on top of that it has a horrible externalized environmental cost that benefits nobody in the long run.
Addendum: I believe technology has the power to be liberating when it provides for all of us, and oppressive when it concentrates wealth+power into the hands of moguls and tyrants.