This is 100% right: you don't need an AI to describe something you're already looking at. It's an absurd feature (again, aside from the accessibility portion, but that's not what this is).
Yes, 100%: do not use an LLM for anything you're not prepared to vet and verify in full. The longer an LLM's response, the higher the odds it loses context and starts repeating itself, spouting total gibberish, or making up data to keep going. If that's what you want (like a list of fake addresses and phone numbers to prototype an app), great, but that's about all it's really going to do.
I think they should stop.
Something to talk about when you're making the 900th unnecessary video about the Switch 2.
Please keep in mind Jack Dorsey is just some guy who’s had the same shit idea twice.
My 9th-gen Intel is still not the bottleneck of my 120Hz 4K/AI rig, not by a long shot.
Yeah, I got mine refurbished too, so someone else took the first hit on driving it off the lot (and waiting for it to be built). I guess they didn't use it to its full extent. That didn't make it "cheap," though.
It’s sort of a niche within a niche and I appreciate your sharing some knowledge with me, thanks!
Hmm, maybe in the new year I'll try to update my process. I'm in the middle of a project, though, so right now it's more about reliability than optimization. Thanks for the info, though.
I usually run batches of 16 at 512x768 at most; pushing past that causes bottlenecks, but I feel like I was able to do that on the 3070 Ti too. I'll look into those other tools when I'm home, though. Thanks for the resources. (HF diffusers? I'm still using A1111.)
(ETA: I've written a bunch of unreleased plugins to make A1111 work better for me, like VSCode-style editing for special symbols like (/[, plus a bunch of other optimizations. I haven't released them because they're not "perfect" yet and I have other projects to work on, but there are reasons I haven't left A1111.)
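(For what it's worth, a minimal sketch of what that same batch-of-16 run might look like in HF diffusers; the model ID and prompt here are just placeholders, not my actual setup:)

```python
# Minimal sketch of a batch-of-16 512x768 run with HF diffusers (SD1.5).
# The checkpoint and prompt are placeholders, not a real workflow.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD1.5 checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

images = pipe(
    "a placeholder prompt",
    width=512,
    height=768,
    num_images_per_prompt=16,  # the batch of 16 mentioned above
).images

for i, img in enumerate(images):
    img.save(f"out_{i:02d}.png")
```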
I just run SD1.5 models; my process involves a lot of upscaling, since things come out around a 512 base size. I don't really fuck with SDXL because generating at 1024 means four times the pixels of 512, which quarters the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL's general capabilities, but I really rarely dip into that world. (I feel like I locked in my process like 1.5 years ago and it works for me; don't know what you kids are doing with your fancy pony diffusions 😃)
Oh, I meant for image generation on the 4080; for LLM work I have the 64GB of the Mac available.
It fails whenever it exceeds the VRAM capacity; I've not been able to get it to spill over to system RAM.
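(Concretely, it just raises a CUDA OOM instead of spilling over. A minimal sketch of catching that and backing off the batch size; `generate_batch` here is a hypothetical stand-in for whatever pipeline call you're making:)

```python
# Minimal sketch: PyTorch raises torch.cuda.OutOfMemoryError rather than
# spilling to system RAM, so the practical fix is to retry smaller.
# generate_batch() is a hypothetical stand-in for your pipeline call.
import torch

def generate_with_backoff(generate_batch, batch_size):
    while batch_size >= 1:
        try:
            return generate_batch(batch_size)
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # release the failed allocation
            batch_size //= 2          # halve the batch and try again
    raise RuntimeError("even a batch of 1 exceeds VRAM")
```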
Oh, I didn't mean "should cost $4000," just "would cost $4000." I wish the VRAM on video cards were modular; there's so much e-waste generated by these bottlenecks.
Apple price-gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you're using it to its full capabilities, I think it's one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), since I use the Mac for personal use too.
I'll look into the AMD Strix, though.
I know it's a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is that you can think of the VRAM like an aperture: less VRAM closes off how much you can do in a single pass.
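(To make the aperture idea concrete, a rough sketch of sizing a batch from free VRAM; the per-image figure is a made-up placeholder you'd measure for your own model and resolution:)

```python
# Rough sketch of the "VRAM as aperture" idea: free memory caps how many
# images fit in one pass. per_image_gib is a made-up placeholder; measure
# it for your own model and resolution.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()
per_image_gib = 0.75  # hypothetical cost of one 512x768 image in the batch
batch = max(1, int(free_bytes / (per_image_gib * 1024**3)))
print(f"{free_bytes / 1024**3:.1f} GiB free -> batch of about {batch}")
```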
Oh I may have been interested in helping them before, but since they’re sooo edgy I’m sure they can fuck themselves. I mean, fuck themselves. Help themselves. There it is. Also fuck them. :)
There are now 14 competing universes
Yeah, I did this almost 30 years ago and could recite it from scratch; haven't made a cable since high school.
and retire gracefully, where the device becomes open source and available to the community of owners who have invested in it.