

Audiobookshelf. Not exactly a “hidden” gem at this point, but I’m putting it here for today’s lucky 10,000. Simply the best way to store and stream audiobooks. Does podcasts too, and ebooks, although there are better tools for those.
Because of course they’re fucking TERFs.
The problem, as always, is that parents don’t want to put the work into educating their children; they want the government to wave a magic wand and make the problem go away. And that’s what gets you half-assed solutions like this.
The point is that clouds aren’t inherently bad, and actually come with a lot of important upsides; they’ve become bad because capital owns and exploits everything in our society, poisoning what should be a good idea. The author is arguing that while there’s nothing fundamentally wrong with self-hosting, it’s not really a solution, just a patch around the problem. Rather than seeking a kind of digital homesteading where our lives are reduced to isolated islands of whatever we personally can scratch from the land, we should be seeking a digital collectivism where communities, not exploitative corporations, own the digital landscape. Seize the means of file-sharing, in effect.
I have to support a remote client that uses Starlink. It’s a nightmare. We can deal with slow connections, we can deal with bad ping, but with Starlink what we get is the entire connection dropping every minute or so, and coming back up a short while later. It’s unbelievably bad.
The Nvidia Shield is still the best option for this. I’ve tried all kinds of homebrew solutions and always had headaches. In the two years I’ve had my Shield, I’ve never had a problem. Smart Tube Next lets me cast YouTube without ads, Kodi/Jellyfin gives me all my media library, plus I’ve got official apps for Nebula, Dropout and Spotify. A custom launcher removes what few ads there were (and that was unobtrusive background banner stuff even at its worst). Plus the pro version can handle some pretty powerful emulators.
There’s no way they actually retrained it for this; that would be much too expensive. They’re just editing the initial prompt to convince it to act more “right wing” and it’s performing the assignment to the best of its ability. The problem is that a chatbot doesn’t understand context, so it just plays the character it’s been given, full mask off, all the time, and as a result you get this.
This is really cool. I maintain a lot of systems that have to be worked on from time to time by far less experienced techs than myself (due to our relationship with the business partners that use the systems) and this sort of thing could be amazing for providing a kind of inline user manual.
My son has doubled in size every month for the last few months. At this rate he’ll be fifty foot tall by the time he’s seven years old.
Yeah, it’s a stupid claim to make on the face of it. It also ignores practical realities. The first of those is training data, and the second is context windows. For an AI to successfully write a novel or code a large-scale piece of software like a video game, it would have to hold that entire thing in its context window at once. Context windows are strongly tied to hardware usage, so scaling them to the point where they’re big enough for an entire novel may not ever be feasible (at least from a cost/benefit perspective).
I think there’s also the issue of how you define “success” for the purpose of a study like this. The article claims that AI may one day write a novel, but how do you define “successfully” writing a novel? Is the goal here that one day we’ll have a machine that can produce algorithmically mediocre works of art? What’s the value in that?
I guess the value is that at some point you’ll probably hear the core claim - “AI is improving exponentially” - regurgitated by someone making a bad argument, and knowing the original source and context can be very helpful to countering that disinformation.
The key difference being that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video content, AI is orders of magnitude higher in terms of its cost to deliver.
What this means is that providing AI on the model you’re describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can’t make enough from user data to be worth the operating costs.
AI fundamentally does not work as a “free” product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.
Maybe that would work, if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Only that model is also an immediate dead end, because the training costs to build a model are the same whether you make that model for 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment these companies need to sell a very, very expensive product to a market that is far too narrow to support it.
There’s no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here; a $40/month subscription plan from almost every adult in the developed world. What they have instead is a product with zero path to profitability.
Seconding this. Itzg’s server is so easy, I taught my 15 year old niece to run one.
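For anyone who hasn’t tried it: the whole setup really is about this much config. This is just a rough sketch using the itzg/minecraft-server Docker image’s documented basics (EULA acceptance, port, and data volume); the service name, port mapping, and data path here are my own choices, so adjust to taste:

```yaml
# Minimal Docker Compose sketch for itzg/minecraft-server.
# EULA must be set to TRUE to accept Mojang's EULA before the server will start.
services:
  minecraft:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"
    ports:
      - "25565:25565"   # default Minecraft port
    volumes:
      - ./data:/data    # world and config persist here
    restart: unless-stopped
```

Then it’s just `docker compose up -d` and you’re done, which is why it’s so teachable.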
What we’ve seen very clearly with fair use is that you end up being forced to defend it, as opposed to it being presumed. That means it’s very easy for a rightsholder with money to go after every use, fair or not, and force the user to spend time and money defending themselves (and also probably face a preliminary injunction that takes the image down until the case is over, which will often be after it’s newsworthy).
It’s not the standard because it will likely have a LOT of unintended consequences.
How do you share evidence of police brutality if they can use copyright to take down the video? How do newspapers print pictures of people if they have to get the rightsholder’s permission first? How do we share photos of Elon Musk doing a Nazi salute if he can just sue every site that posts it for unauthorized use of his likeness?
Unless this has some extremely stringent and well written limitations, it has the potential to be a very bad idea.
Thanks, I didn’t know about that.
I love Seafile, but I’m not sure it really meets OP’s requirements. For example, I’m not aware of any way to upload without a login in Seafile.
Actually one of the characters in 1984 works in the department that produces computer generated romance novels. Orwell pretty accurately predicted the idea of AI slop as a propaganda tool.
There are, as I understand it, ways that you can train on AI generated material without inviting model collapse, but that’s more to do with distilling the output of a model. What Musk is describing is absolutely wholesale confabulation being fed back into the next generation of their model, which would be very bad. It’s also a total pipe dream. Getting an AI to rewrite something like the total training data set to your exact requirements, and verifying that it had done so satisfactorily would be an absolutely monumental undertaking. The compute time alone would be staggering and the human labour (to check the output) many times higher than that.
But the whiny little piss baby is mad that his own AI keeps fact checking him, and his engineers have already explained that coding it to lie doesn’t really work because the training data tends to outweigh the initial prompt, so this is the best theory he can come up with for how he can “fix” his AI expressing reality’s well known liberal bias.
Honestly, none that are all that great. I tried Kodi in various forms, LibreElec, OSMC, MythTV, Steam Big Picture, and KDE TV (or whatever it’s called), but you’re just never going to get a great experience with stuff like Netflix and YouTube on Linux.
In the end, I bought myself an Nvidia Shield, switched out the launcher for one without ads, installed Smart Tube Next for ad-free YouTube, and I couldn’t be happier with the results. I’ve got my apps for Nebula and Dropout. I’ve got Kodi and Jellyfin for my home library. It has barely any power consumption, it boots fast, it runs a huge variety of emulators, the included remote works great (plus there’s a remote app for your phone that controls the entire system), and the wife acceptance factor is exceptional.
I’m really big on self-hosting and building all my own stuff; I use lots of repurposed hardware salvaged from companies my friends and I work at, and I try to avoid off-the-shelf products. But I’m genuinely kicking myself for not buying a Shield sooner. It really is the best TV solution for a self-hoster.
LocalSend should be called God Send because it’ll save your life. It’s AirDrop, but for everything, and it’s open source. Works really well: no setup, no server.