

Homie you just described sales taxes which are already a thing
I always keep a USB drive around with FreeDOS for that reason. Works great, and you don't have to deal with Windows
They're already tagged as explicit/clean in the metadata, as well as separated by folder with an [E] tag if explicit. I could manually rematch them, but my library is large, so I'd really rather not
I like Plexamp, but there are a couple of things to be aware of, depending on your music library, that took me a while to figure out:
These probably aren't issues for the majority of users with just their favorite songs in MP3 or 16/44 FLAC, but it's something I recently learned that people with larger hi-res/multichannel libraries should be aware of.
I just bought a few refurbished 12TB WD Ultrastars off Amazon, and the listing actually says it's sold by ServerPartDeals. Not sure if it's the same people, but interesting if they are
Maybe they're thinking of iVentoy, which is not open source but is by the same dev
Only one monitor? Trash
Yes, it looks like it's included in the official Docker image
You can watch RSS feeds to follow all of a vendor's CVEs, like Microsoft's: https://api.msrc.microsoft.com/update-guide/rss
NIST used to have an RSS feed for CVEs but recently deprecated it. They still have other ways you can follow them, though: https://nvd.nist.gov/vuln/data-feeds
Or, if you just want to follow CVEs for certain applications, you can host or subscribe to something like https://www.opencve.io/welcome, which lets you filter CVEs from NIST's National Vulnerability Database (NVD)
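If you'd rather script it yourself, polling a feed like Microsoft's is pretty simple. Here's a minimal sketch assuming the `feedparser` package; the keyword filter is just an illustration, so swap in whatever products you actually care about:

```python
import feedparser  # pip install feedparser

# Microsoft's MSRC update-guide feed from the link above
FEED_URL = "https://api.msrc.microsoft.com/update-guide/rss"

# Hypothetical keywords to match -- replace with the products you track
KEYWORDS = ("exchange", "windows server")

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Match against the title and summary, case-insensitively
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(kw in text for kw in KEYWORDS):
        print(f"{entry.title}\n  {entry.link}")
```

From there you could run it on a cron job and pipe matches into whatever notification setup you already have.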
It looks like it should be possible, as both your CPU and motherboard support Intel VT-d
https://download.asrock.com/Manual/Z690 Extreme.pdf
PCIe passthrough isn't enabled by default in Proxmox and requires some manual changes to the bootloader (GRUB or systemd-boot) as well as loading some kernel modules. You may also need to enable VT-d in your BIOS. You can read Proxmox's guide for enabling PCIe passthrough here:
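For reference, on a GRUB-based install the changes that guide walks you through look roughly like this (a sketch based on the Proxmox wiki; exact module names can vary by kernel version, so double-check against the docs):

```
# /etc/default/grub -- add intel_iommu=on to the kernel command line,
# then run `update-grub` and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- VFIO modules to load at boot
# (vfio_virqfd is only needed on older kernels);
# afterwards run `update-initramfs -u -k all`
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```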
You can always do both: expose some services outside your network and keep the others local-only, while still being able to access everything yourself over a VPN.
From what I've read, disk wear-out on consumer drives is a concern when using ZFS for boot drives with Proxmox. I don't know if the issue is exaggerated, but to be safe I ended up picking up some used enterprise SSDs off eBay for that reason.
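If you want to keep an eye on wear yourself, smartmontools can report it. A rough sketch below; the exact attribute name varies by drive model, e.g. "Percentage Used" on NVMe or "Wear_Leveling_Count" on many SATA SSDs:

```
# NVMe boot drive: check "Percentage Used" in the health section
smartctl -a /dev/nvme0

# SATA SSD: grep the SMART attribute table for a wear indicator
smartctl -a /dev/sda | grep -iE "wear|percent"
```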
I haven't tried any of them, but I did just listen to a podcast the other week where they talk about LlamaGPT vs. Ollama and other related tools. If you're interested, it's episode 540, "Uncensored AI on Linux," from Linux Unplugged
7th gen (Kaby Lake) is when Quick Sync added full HEVC support, including 10-bit, I'm pretty sure
Looks like the Docker images built by Mattermost are only for the amd64 architecture. You could try an image built by someone else, such as this one, which seems to be regularly updated. I haven't used any of them, though, so I'd look through the repo/Dockerfiles before deploying any unofficial images.
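If you want to verify which architectures a tag actually publishes before pulling it, you can inspect its manifest; the image name here is just an example:

```
# For a multi-arch tag this lists each platform; a single-arch image
# may not include these fields at the top level
docker manifest inspect mattermost/mattermost-team-edition:latest | grep -i architecture
```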
Thanks, I may hold off on Ceph for now in that case
Unfortunately, the drives in the enclosure are 3.5". I do have a spare SATA spot in each of the 7040s, but you can only fit one SATA drive in the 3040s, and no M.2 drives. That's why I'm trying to decide whether it would be better to sacrifice a SATA port on one of the 7040s for (hopefully) better speed and stability, or to use USB and put an extra drive in each of the 7040s
OK, thanks. And yeah, I plan to upgrade to something better suited for the job at some point; I just want to get started and use what I have as efficiently as I can.
Yeah, I realize this isn't a great way to go about storage, but I already have the enclosure, so I might as well use it for now. At some point down the line I'll build something that works better.
If I connect it over USB, I'm able to see each drive individually in Proxmox. I'm unsure if it will be the same over eSATA. The manual says the eSATA interface card needs to support port multipliers, which I fear means the eSATA-to-SATA option may not work, but I was hoping someone here might know more about that.
If I have to go the USB route and I'm able to use each drive individually, would you recommend going with a ZFS pool or Ceph?
bro wut