• 1 Post
  • 14 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we’re developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs, and we don’t know whether we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well be broken down into the simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before, claiming they don’t have souls or aren’t conscious beings, that somehow, because they don’t clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is an inferior or less valid existence.

    You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change, it’s already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but only in comparison to, you know, creating an LLM in the first place, which is already a gargantuan task in itself. The fact that we can even compare a delusional model to a person with severe mental illness is already a big win for the technology, even though it’s meant as an insult.

    I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations… is kind of stupid. My point is, intelligence might be hard to define, but it might not be so hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to recognize whether we’re on the right path to a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.


  • What I never understood about this argument is: why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets things wrong like us, loses its mind like us, and sometimes seemingly seeks self-preservation like us… why all of this isn’t enough to fit the very self-explanatory term “artificial… intelligence”. That name does not claim the entity has as valid an experience of the world as other living beings, it does not proclaim absolute excellence in everything the entity does, and it doesn’t even say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and that it’s artificial. We’ve had AI in games for decades; it’s not sci-fi AI, but it’s still code taking in multiple inputs and producing behavior as an outcome of those inputs, alongside whatever historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm our own brains use to interpret written or spoken inputs and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re repeating came from, or that it’s even a factoid, so why is it so crazy when the machine does it?

    I keep hearing the word “anthropomorphize” thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don’t even know whether we actually have free will or live in a superdeterministic world, where every single particle has moved along a predetermined path since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?


  • I host a Plex server for close to 70 friends and family members across multiple parts of the world. I have over 60 TB of movies, TV shows, anime, anime movies, and FLAC music, and everyone can connect directly to my server via my reverse proxy and my public IPs. This works on their phones, their TVs, their tablets, and their PCs. I have people of all ages using my server, from very young kids to friends’ very old grandparents. Some friends share their accounts with their families, so I’ve probably already passed 100+ people using the server. Everyone can request whatever they want through Overseerr with their Plex account, and everything shows up almost instantly once it’s found and downloaded. It works almost flawlessly, locally or remotely, from anywhere in the world. I don’t even live in the same home my Plex server resides in. I paid for my lifetime pass over 10 years ago.
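    For anyone curious about the reverse-proxy part, the general shape is a plain HTTPS virtual host passing traffic to Plex’s default port, 32400. This is only an illustrative nginx sketch with a made-up domain and placeholder cert paths, not my actual config:

```nginx
# Illustrative nginx reverse-proxy sketch for Plex.
# plex.example.com and the cert paths are placeholders; Plex's
# default local port is 32400.
server {
    listen 443 ssl;
    server_name plex.example.com;

    ssl_certificate     /etc/ssl/plex.crt;
    ssl_certificate_key /etc/ssl/plex.key;

    location / {
        proxy_pass http://127.0.0.1:32400;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # WebSocket upgrade headers, which some Plex clients rely on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

    Point your DNS at your public IP, forward 443 to the proxy box, and clients anywhere can reach the server over one clean hostname.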

    Can you guarantee that I can move over to Jellyfin and that every single person currently using my Plex server will keep the same level of experience and quality of life they have now? Because if you can’t, you just answered your own question. Sometimes we self-host things for ourselves and can live with some pain, but sometimes we need something that works for more people than just us, and that’s when we make compromises. Plex is not perfect, and it is actively being enshittified, but I can’t simply dump it and replace it with something very much meant for local or single-person use rather than for actively serving tens to hundreds of people off a server built with off-the-shelf components.


  • My dude, I understand your unwillingness, but docker is just a fancy new way of saying “install apps without it being a major PITA”. You find the app you want on Docker Hub or some other registry, you pull the image, you run it, et voilà, you have a container. No worrying about Python suddenly breaking, or about running 5 commands in a row just to spin up an app (I used to do this, including the whole Python rain dance, to run Home Assistant. I feel stupid now).

    Decluttarr actually has a section on setting up its container:

    https://github.com/ManiMatter/decluttarr#method-1-docker

    It’s step by step: all you have to do is get docker installed on your machine, copy-paste that text into a file, and run the docker command mentioned there in the same directory as the file.
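    To give a feel for the shape of it, a compose file for a service like this generally looks something like the sketch below. The image name and environment variables here are placeholders, the real ones are in the README linked above:

```yaml
# docker-compose.yml — illustrative shape only; copy the actual
# snippet (image tag and environment variables) from the decluttarr README.
services:
  decluttarr:
    image: ghcr.io/manimatter/decluttarr:latest   # verify against the README
    restart: unless-stopped
    environment:
      - RADARR_URL=http://radarr:7878             # placeholder example values
      - RADARR_KEY=your-api-key-here
```

    Save it as docker-compose.yml, run the compose up command from that directory, and the container comes up in the background.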

    Trust me, you want to learn this, because after the first 15 minutes of confusion you suddenly have the holy grail of self-hosting right at your fingertips. It takes me all of 5 minutes to add a new service to my homelab because it’s so easy with docker. And it’s so ubiquitous and popular! TrueNAS SCALE uses docker for all its apps, and the idea of containers essentially reshaped the Linux desktop into what it is today, flatpaks and all.






  • I was also on the fence. I ended up jumping into it all a few months ago, and my Plex server went from a very small and informal media repository (one a few friends kept nagging me about, because I always procrastinated on downloading, categorizing, and adding media) to a vast collection of thousands of movies and hundreds of shows, spanning about 50 users and 40 TB+ of content (which reminds me, I need more drives soon…), with everyone requesting whatever they want. There’s still work to be done, there always is, especially as your server grows and your peers start using it (wait until that one person starts requesting Korean stuff that never gets found automatically), but it’s a night-and-day difference for me, and the organization of it all helps me concentrate and tackle things quicker.

    So the stack usually goes like this:

    -sonarr, radarr, readarr, lidarr, etc.: each specializes in a media format (series, movies, books, and music, respectively). They fetch metadata from known metadata sources and perform searches on whichever indexers you like (think The Pirate Bay for torrents, or NZBGeek for NZBs from Usenet). They connect to your download client to send it torrents and NZBs, know when a download fails so they can search again, and import completed items automatically. They organize everything, rename everything, and keep track of quality, constantly upgrading your media by parsing RSS feeds from those indexers. They won’t go out of their way to download things you didn’t ask for; you have to ask for everything. You can monitor collections for movies on radarr if you want future movies, but that’s about it as far as waiting for content not explicitly requested.

    -overseerr, requestrr, etc.: these are front ends you can share with your friends and family. You only need one. Users can search for content, browse trending or new content, see if it’s in your library, request content, and follow the progress of their requests. No need to tell anyone “this isn’t done yet”; they can just check what’s available, and you can set request quotas per user and decline requests.

    -jackett, prowlarr, etc.: these helper services make it easier to keep track of your indexers. You only need one. They communicate with the content-handling arr services to provide them all the indexers they need, so you set indexers up once here rather than once for each arr service. They can also perform better manual indexer searches than the main arr stack services.

    -honorable mention, bazarr: this little fella integrates with your arr services to monitor all your media and download subtitles for it all, set to your standards. It can even use a Whisper server (OpenAI’s open speech-to-text model) as a subtitle source, so you can generate your own subtitles when none are found. Of all of them, I find this one the jankiest, but it does a decent enough job, even if far from perfect.
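    To make the moving parts concrete, here’s a rough sketch of how a couple of these services might sit together in one compose file. I’m assuming the commonly used linuxserver.io images and each service’s default web UI port (sonarr 8989, prowlarr 9696, qBittorrent 8080); the volume paths are placeholders for your own layout:

```yaml
# Minimal arr-stack sketch — linuxserver.io images and default ports
# assumed; volumes are placeholder paths, adjust for your own setup.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports: ["8989:8989"]            # sonarr's default web UI port
    volumes:
      - ./sonarr-config:/config
      - ./media/tv:/tv
      - ./downloads:/downloads      # shared with the download client
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    ports: ["9696:9696"]            # prowlarr's default port
    volumes:
      - ./prowlarr-config:/config
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    ports: ["8080:8080"]            # default web UI port
    volumes:
      - ./qbit-config:/config
      - ./downloads:/downloads
```

    Once they’re up, you point prowlarr at sonarr (and any other arr services) with their API keys, and point sonarr at the download client; sharing the downloads volume between sonarr and qBittorrent is what lets imports happen automatically.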

    There are other services I haven’t messed with. For instance, Tdarr is used for automatic remuxing and conversion of media files into whichever format you prefer, to standardize your entire library. It feels like a destructive service that could easily backfire if I’m not careful (say, converting HDR H265 to H264: bye-bye dynamic range and color accuracy on that file forever, unless you provide accurate tone mapping, which is usually not a one-size-fits-all thing, so a lot of intervention anyway), so I’d rather not risk it.

    Almost everything can be thrown into docker containers, and you can find some pretty decent guides on YouTube by searching for these services one by one. After the first one, you’ll get the gist of it all, I think. Bazarr runs as a service (at least on Windows) and sometimes has a bug with its front end that requires restarting the service to get into the page at all; apparently setting the service to delayed start fixes the issue, which I did, and I haven’t run into the bug since. Something to keep in mind.

    As others mentioned, there are guides to setting up qualities, filters, exclusions, and priorities for your content, and the TRaSH guides are usually where you go for that. I find that TRaSH has a high standard for quality, which will eat through your storage like a bodybuilder eating 20 eggs for breakfast in a single sitting, so you’ll always have to play around with your preferences, and it will take some time to get things just right (some edge cases are hard to spot at first, but you’ll get that one download of a very questionable release that makes you tear your hair out for a bit). It gets better as you tinker.

    So to summarize: if you have even a little bit of trouble maintaining your media repository, these are a must. Even if you don’t, the process of searching for stuff, downloading it, renaming and categorizing it, and then checking that everything looks right on Plex by comparing against TheTVDB and whatnot is a lot of time-consuming work, even if you don’t notice it, and all of it can be easily automated by the arr stack. I have a couple of friends helping as admins, and they’re just as obsessive about management as I am, so we work together to get everything right. It’s really helpful and easy to go down this route. Good luck and have fun!

    Ah, final tidbit: if you don’t use Usenet yet, this is the moment you’ll realize you have to spend money on it, because it helps that much more than torrents once your arr stack gets going. I’m at two Usenet indexers and I think two Usenet providers. I want more. Help.