

I use it to role-play historical counterfactuals, like how I could win the Battle of Cannae through tactics, or how I could invent the telegraph in 13th-century France. It’s worth every watt <3
Don’t have to imagine it when you can just remember it. Getting online in the late 90s was a horror show; seriously, dial-up was super unreliable. And that was 20 years after its inception. It was shit, but also extremely popular.
As long as you don’t share anything you should be okay! It’s not very ethical though… But you can share your file list with users you trust (for example, people you have already downloaded from).
At any rate, I never heard of anyone getting in trouble for using Soulseek.
I’m more of a slskd kind of guy, even if the webapp is pretty terrible
A hundred times Soulseek; all the music nerds are there!
Yeah, it’s pretty shit. My guess is torrent sites lost a ton of traffic from API users (Stremio, Kodi, etc.), so they have to squeeze harder on the remaining page views.
There is a trick! First left-click the magnet link; this will bring up a modal VPN ad with (IIRC) a “continue without VPN” button. That button is the actual magnet link.
As far as I can tell they’re stock from Android
Same, I’ve been looking for something like that for quite some time
I am looking for a solution for a ~1 TB collection, and the Glacier Deep Archive storage tier is barely above $1/month for the lot. You may want to look into it! If I remember correctly, retrieval (if you one day need to get your data back) was around $20 to get the data within a few hours, or $2 to get it in a couple of days.
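For anyone curious, the math roughly checks out. Here’s a back-of-the-envelope sketch in Python; the per-GB prices are approximate AWS list prices from memory and may have changed, so treat them as assumptions and check the current pricing page:

```python
# Rough cost sketch for ~1 TB in S3 Glacier Deep Archive.
# All per-GB prices below are approximate and may be out of date.
SIZE_GB = 1024                    # ~1 TB collection

STORAGE_PER_GB_MONTH = 0.00099    # Deep Archive storage
BULK_RETRIEVAL_PER_GB = 0.0025    # restore ready in ~48 hours
STD_RETRIEVAL_PER_GB = 0.02       # restore ready in ~12 hours
EGRESS_PER_GB = 0.09              # transfer out to the internet

print(f"storage:          ${SIZE_GB * STORAGE_PER_GB_MONTH:.2f}/month")
print(f"bulk restore:     ${SIZE_GB * BULK_RETRIEVAL_PER_GB:.2f}")
print(f"standard restore: ${SIZE_GB * STD_RETRIEVAL_PER_GB:.2f}")
print(f"egress if you download it all: ${SIZE_GB * EGRESS_PER_GB:.2f}")
```

That lands right around $1/month for storage, ~$2 for a slow (bulk) restore and ~$20 for a faster (standard) one. Note that actually downloading the restored data out of AWS costs extra on top.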
Turns out she does know how babby is formed
When you read that stuff on Reddit, there’s a parameter you need to keep in mind: these people are not really discussing Lemmy. They’re rationalizing and justifying why they are not on Lemmy. Totally different conversation.
Nobody wants to come out and say “I know mainstream platforms are shit and destroying the fabric of reality, but I can’t bring myself to be on a platform unless it is the Hip Place to Be”. So they’ll invent stuff that paints them in a good light.
You’ll still see people claiming that Mastodon is unusable because you have to select an instance, even though you don’t have to: you can just type Mastodon into Google, click the first link, and create an account in two clicks. It’s been that way for ages. But the people still using Twitter need the excuse, because otherwise what does that make them?
I do it because it’s easy and it’s free, but if it were difficult I’d probably still do it.
I’ve only had issues with FitGirl repacks. I think there’s an optimisation they use for low-RAM machines that doesn’t play well with Proton.
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It shouldn’t affect task performance beyond that.
Yeah, I did some looking up in the meantime, and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
There are some models fine-tuned for an 8K-token context window, some even for 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization should still be reasonably good.
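If you want a quick way to tell whether a given text will even fit, the usual rule of thumb is ~4 characters per token for English. Here’s a tiny sketch using that heuristic (the real count depends on the model’s tokenizer, so leave yourself some margin):

```python
def fits_in_context(text: str, context_tokens: int,
                    chars_per_token: float = 4.0,
                    reserved_for_output: int = 512) -> bool:
    """Rough check: does `text` fit in a model's context window?

    Uses the ~4 chars/token rule of thumb for English. Real counts
    depend on the tokenizer, so this is only an estimate.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_tokens - reserved_for_output

article = open("article.txt").read()  # the text you want summarized
for window in (4096, 8192, 16384):
    print(window, "tokens:", "fits" if fits_in_context(article, window) else "too long")
```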
If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach is still compatible with Open WebUI.
There are not that many use cases where fine-tuning a local model will yield significantly better task performance.
My advice would be to choose a model with a large context window and just put the whole text you want summarized in the prompt (which is basically what a RAG pipeline would do anyway).
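As a concrete example, here’s a minimal sketch of that approach against an Ollama server (the backend Open WebUI typically sits on top of). The model name and the num_ctx value are just examples; use whatever you actually have pulled:

```python
import requests

text = open("article.txt").read()  # the document to summarize

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",             # example model; use one you have pulled
        "prompt": f"Summarize the following text:\n\n{text}",
        "stream": False,
        "options": {"num_ctx": 16384},  # raise the context window to 16K tokens
    },
    timeout=600,
)
print(resp.json()["response"])
```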
If you like to write, I find that storyboarding with Stable Diffusion is definitely an improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
Holy shit, you’re right. I don’t know where I got the idea that it was the same format.
There’s absolutely a push for specialized hardware; look up the company called Groq!