

Here I thought it was going to be about telnet or something
?
Reproducibility of what we call LLMs, as opposed to what we call other forms of machine learning?
Or are you responding to my assertion that these are different enough to warrant different language with a counterexample of one way in which they are similar?
I skimmed the paper, and it seems pretty cool. I’m not sure I quite follow the “diffusion model-based architecture” it mentioned, but it sounds interesting
I’m not talking about the specifics of the architecture.
To the layman, AI refers to a range of general purpose language models that are trained on “public” data and possibly enriched with domain-specific datasets.
There’s a significant material difference between using that kind of probabilistic language completion and a model that directly predicts the results of complex processes (like what’s likely being discussed in the article).
It’s not specific to the article in question, but it is really important for people to not conflate these approaches.
There really needs to be a rhetorical distinction between regular machine learning and something like an LLM.
I think people read this (or just the headline) and assume it’s just asking Grok “what interactions will my new drug flavocane have?”, whereas these are likely large models built on the mountains of data we have from existing drug trials.
That’s true, but all of their problems with Docker come down to the fact that it’s Linux-based.
Docker is a layer that runs on top of Linux’s KVM.
My understanding is that this is only true for Docker Desktop, which there’s not really any reason to use on a server.
Sure, since containers use the host’s kernel, any Linux containers do need either a Linux host or a VM (which Docker Desktop uses by default), but that’s not particularly unique to Docker.
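If you want to see the kernel sharing for yourself, here’s a rough sketch (it assumes the Docker CLI is installed and can pull the public alpine image; on a native Linux host the two values match, under Docker Desktop they won’t):

```python
import platform
import subprocess

# Kernel release of the host, e.g. "6.8.0-39-generic" on a typical Linux box.
host_kernel = platform.release()

# Kernel release as reported from inside a container. Assumes the Docker
# CLI is available and can run the public alpine image.
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Containers have no kernel of their own, so on a native Linux host these
# match. Under Docker Desktop they differ, because the containers actually
# run inside a hidden Linux VM.
print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")
print("same kernel" if host_kernel == container_kernel else "different kernel (there's a VM in the middle)")
```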
Is it really vendor lock-in if you can fork it at your whim?
Try rereading the whole tweet; it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain on that dataset.
It would be way too expensive to go through it by hand
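The described pipeline is roughly this shape (a sketch only; `rewrite_with_model` is a hypothetical stand-in for whatever Grok endpoint they’d actually call, not a real xAI API):

```python
# Hypothetical sketch of the "correct the dataset with the model, then
# retrain on it" plan from the tweet. rewrite_with_model() is a stand-in
# for a real model call; nothing here reflects an actual xAI interface.

def rewrite_with_model(text: str) -> str:
    # Placeholder: in the described plan, the model itself decides what
    # counts as an "error" and emits a corrected version. Identity keeps
    # this sketch runnable.
    return text

def correct_corpus(corpus: list[str]) -> list[str]:
    # Doing this by hand over billions of documents is the "way too
    # expensive" part; the whole point is that the model does the editing.
    return [rewrite_with_model(doc) for doc in corpus]

# corrected = correct_corpus(raw_corpus)
# new_model = train(corrected)  # then retrain on the model-edited data
```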
The middleware that Steam uses to run Windows games doesn’t yet (fully) support HDR out of the box.
Valve has a bodge that works on the Steam Deck using gamescope, and while (IIRC) it’s possible to get that working on a regular KDE desktop, it’s a bit of a pain.
Broader support is coming though
You could use an alternate repo, like the Fedora one, which is curated to only include open source software.
In the end, clearing my shader cache seemed to fix it
In the event someone else runs into this: go to Steam > Settings > Downloads, uncheck “Enable Shader Pre-Caching”, then check it again.
Nevermind, still poor performance
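If the settings toggle doesn’t help, you can also delete the cache on disk. A rough sketch; the path is the usual Linux default but that’s an assumption, and Steam should be closed first:

```python
from pathlib import Path
import shutil

# Usual default location of Steam's shader cache on Linux; adjust if your
# library lives elsewhere (this path is an assumption, not a guarantee).
shader_cache = Path.home() / ".local/share/Steam/steamapps/shadercache"

if shader_cache.is_dir():
    # Close Steam before running this; it rebuilds the cache on demand.
    shutil.rmtree(shader_cache)
    print(f"Removed {shader_cache}")
else:
    print(f"No shader cache found at {shader_cache}")
```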
The interface “running” is one thing, but does it know to run games through Wine/Proton? Does it know to grab the Linux versions of games when available? Mono doesn’t make that automatic.
Does this work well on Linux? It looks like it’s .NET-based.
Also, the README says it requires Windows.
Don’t use jellyfin.server.local
.local is reserved for mDNS, which doesn’t support more than one dot (though it may still sometimes work).
In any case, to make that work you need either a DNS server on your network or something like DuckDNS (which supports wildcard entries).
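A quick way to see the difference (this assumes an Avahi/nss-mdns style resolver, and `jellyfin` is a hypothetical hostname for a machine actually announcing itself on your LAN):

```python
import socket

# Both names are examples; "jellyfin" assumes a machine on the LAN
# announcing that name over mDNS (Avahi on Linux, Bonjour on macOS).
for name in ("jellyfin.local", "jellyfin.server.local"):
    try:
        addr = socket.gethostbyname(name)
        print(f"{name:25} -> {addr}")
    except socket.gaierror as err:
        # Multi-label .local names usually fail here unless a real DNS
        # server on the network answers for them.
        print(f"{name:25} -> failed to resolve ({err})")
```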
You might be able to get the same hash if you did a backup of the disc in ISO format. It doesn’t matter though, since you wouldn’t be able to use that format to play anything.
All that to say that these seem to be the wrong tools for what you’re actually trying to do.
If you’re keeping the files as MKV, you’re re-encoding them.
Also, if you’re re-encoding the files, it’s extremely unlikely that your hash will match someone else’s.
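To make the hash point concrete: file hashes are computed over exact bytes, so a re-encode changes essentially every byte and the digest with it; only a byte-identical copy can ever match. A minimal sketch, with hypothetical filenames:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's exact bytes, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical filenames: a byte-exact disc backup vs a re-encoded MKV.
# The two digests will differ, and so will anyone else's re-encode.
print(sha256_of(Path("backup.iso")))
print(sha256_of(Path("reencode.mkv")))
```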
makes software for pirates
please avoid pirated versions
Good luck with that.
There is no ethical consumption under capitalism