• 0 Posts
  • 267 Comments
Joined 8 months ago
Cake day: January 2nd, 2025


  • 7z is an archive format - creating an ISO requires the raw files (unless you have 7-Zip installed inside the VM to extract the archive).

    All that is unnecessary, though: just enable a shared folder via the VM software (I assume they all do this now; VMware has had the feature forever). It isn’t a network share in the usual sense - it’s a virtual share that only exists within VMware for that specific VM, and by default it’s read-only.

    Or put the files on a thumb drive, and connect that thumb drive to the VM.

    Or enable networking on the VM, copy the files in, then disable the network card in the VM.

    Getting the files in doesn’t require any special security - it’s when you’re executing them that the VM needs to be isolated.
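
    For example, with VirtualBox the read-only share can be attached from the host command line; the VM name, share name, and host path below are made-up placeholders:

```shell
# Attach a host folder to the VM as a read-only shared folder (VirtualBox).
# "SandboxVM", "drop", and the host path are hypothetical - substitute your own.
VBoxManage sharedfolder add "SandboxVM" --name drop \
  --hostpath /home/user/vm-drop --readonly
```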



  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · DNS server (edited, 3 days ago)

    Ah, unbound has the root DNS servers hard-coded. That’s a significant point.

    Any reason you couldn’t do the same with any other DNS server such as PiHole?

    I’m really trying to understand why I’d run two DNS servers in serial, instead of one. All this sounds like it’s just a different config that (in the case of unbound) has been built in - is there something else I’m missing that unbound does differently?

    Why couldn’t you just configure the root or TLD servers as the upstream DNS in whatever local DNS server you run? Isn’t that what enterprises do?
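
    For what it’s worth, unbound’s built-in root server list can also be pinned or overridden explicitly; a minimal unbound.conf sketch (the hints file path is an assumption, it varies by distro):

```
server:
    # Resolve iteratively from the root servers instead of forwarding
    # to an upstream resolver; root.hints lists the root server addresses.
    root-hints: "/usr/share/dns/root.hints"
```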








  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · DNS server (edited, 5 days ago)

    Cool, thanks for the clarification. This is good info to have in here in general.

    So unbound discovers other DNS servers by default, if I’m understanding that correctly. I’ve never used it; does it skip your ISP’s DNS by default, or does that depend on user config?

    What if your PiHole is configured to use something other than your ISP’s DNS?




  • What backup on Google cloud?

    > Graphene add nothing of substance to the privacy or security landscape

    Hahahahaha, your whole comment is laughable; it’s the very definition of hubris (the combination of being arrogant and incorrect).

    I, for one, haven’t used Google as a backup since 2010. Anyone with any awareness of privacy doesn’t use it.

    I run Lineage, with no Google services, no Google connectivity. I actively block connections to known-untrustworthy domains and IP addresses.

    Graphene is the high-water mark in privacy and security on Android.

    You should probably actually understand what’s going on before prognosticating in ignorance.



  • External drive failure rates are hit and miss because, frankly, they often get abused.

    I’ve had some last for 10 years, others for 2, and I’m not kind to mine.

    I’d say they have 2 issues to deal with: temperature and being dropped. The cases have no cooling, and larger drives need more cooling than smaller ones.

    I currently have 2 external drives (4 TB each) I use for local redundancy. When sync happens, they get quite warm, so I keep an old 80mm case fan on them. These are 5 year old drives that have been running this way for 3 years. SMART doesn’t report any errors or high temps, but without the fan I’m sure it would.
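
    A quick way to keep an eye on this is smartctl from smartmontools; the device path below is a placeholder for whatever node your external drive shows up as:

```shell
# Print SMART attributes and filter for temperature and error counters.
# /dev/sda is a hypothetical device path - check yours with lsblk first.
sudo smartctl -A /dev/sda | grep -Ei 'temp|error'
```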


  • It won’t make streaming slower unless you have several clients streaming at once.

    I find the network to be the bottleneck - my gigabit connection saturates with 3 streams, and I’m using a conventional hard drive for my media (the OS is on an M.2 drive). This doesn’t seem to affect video quality, though.

    Frankly, SSDs are overrated for common workloads; other bottlenecks usually hit first, such as network or processing.

    As you build out, make sure you consider backup in your costs, don’t spend your money just on storage.

    Also, since you have a mini PC, there may only be room for one drive internally, with almost no cooling. Larger drives will have heat issues.
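
    As a sanity check on where the network ceiling sits for your own library, a rough per-stream bandwidth budget is easy to compute; the bitrates below are illustrative assumptions, not numbers from this thread:

```python
# Back-of-envelope estimate of how many streams a gigabit link can carry.
GIGABIT_MBPS = 1000  # nominal line rate, ignoring protocol overhead

# Assumed per-stream bitrates (Mbps) - typical ballpark figures, not measured.
stream_bitrates_mbps = {
    "1080p Blu-ray remux": 35,
    "4K HDR remux": 80,
}

for name, mbps in stream_bitrates_mbps.items():
    max_streams = GIGABIT_MBPS // mbps
    print(f"{name}: ~{max_streams} streams before the link saturates")
```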



  • My server currently doesn’t have a video card (just the crappy on-board graphics), and it transcodes fine for one user (which is all I ever have). I don’t even notice an uptick in CPU when it’s transcoding (I’m sure there is one; it just doesn’t seem to impact performance).

    This is a 5-year-old Dell SFF running Windows, with 3 Windows VMs in VMware. It has no trouble transcoding while converting videos with Handbrake. It’s maxed out, but it does it.

    I do plan to get a video card, it’s just not urgent.

    Edit: Just did a test, and with 2 simultaneous transcodes Jellyfin will jump the CPU about 5% at video start, but it settles back down to less than 1%.

    Disk usage skyrockets with the second transcode, bouncing between 50% and 90%, and the network connection shows hard spikes at video start (on a gigabit connection).
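
    For reference, a CPU-only transcode like the one Jellyfin falls back to without a GPU can be reproduced by hand with ffmpeg; the filenames and bitrate here are placeholders:

```shell
# Software (CPU) transcode: H.264 video at ~4 Mbps, AAC audio.
# input.mkv and output.mkv are hypothetical filenames.
ffmpeg -i input.mkv -c:v libx264 -preset veryfast -b:v 4M -c:a aac output.mkv
```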


  • Hell, I’ve found ripping a DVD to MKV results in a 3-5 GB file.

    Then, converting that MKV with Handbrake, I can bring the size down by as much as 75%. When you’re talking about a thousand videos, that adds up.

    I can consistently reduce TV series (especially older stuff) by 80%+. That makes a real difference for shows that ran for 10 years.

    And these all look fine on a 65" TV from 6’ away. Why store more if I don’t have to?
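
    The same shrink can be done from the command line with HandBrakeCLI; the filenames are placeholders, and -q 22 is one common constant-quality setting for DVD sources, not a value taken from this thread:

```shell
# Re-encode a DVD rip to H.265 at constant quality 22 with AAC audio.
# rip.mkv and rip-x265.mkv are hypothetical filenames.
HandBrakeCLI -i rip.mkv -o rip-x265.mkv -e x265 -q 22 -E av_aac
```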