• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 23rd, 2023

  • It's not for everyone, but I use Cisco Aironet APs with a virtual wireless LAN controller. Ubiquiti is popular among the community; they're cost-effective and work well in a home/small business environment. Aruba Instant On APs are decent as well from my experience, but they're cloud-managed and this is self-hosted after all :)

    I've used Cisco, Meraki, Fortinet, Cambium, Aruba, Ubiquiti and Juniper extensively in a professional setting. Avoid Fortinet and Cambium APs if you can; in my experience they can be pretty unstable.

    Generally speaking, if you're going to have multiple APs you'll want something that's centrally managed, so the APs are aware of each other and can manage clients effectively.


  • This is the method I use in your scenario, OP. You can use Folder2iso to get the files you need into the VM. If the OS has official VMware Tools, you can also mount the VMware Tools ISO straight from Workstation into the VM; this gives you the clipboard service so you can copy and paste files between the host and the VM, if that's permitted within your isolation needs.

    Otherwise, go the ISO route. You just can’t bring stuff out of the VM back to the host is all.
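
    For what it's worth, on a Linux host you can script the ISO route instead of using a GUI tool. A minimal sketch with genisoimage - the paths and volume label are just placeholders:

    ```python
    #!/usr/bin/env python3
    # Build an ISO from a folder so it can be mounted read-only inside an isolated VM.
    # Sketch only: genisoimage must be installed, and the paths/label are placeholders.
    import subprocess
    from pathlib import Path

    SOURCE_DIR = Path.home() / "vm-dropbox"      # files destined for the VM
    OUTPUT_ISO = Path.home() / "vm-dropbox.iso"

    subprocess.run(
        [
            "genisoimage",
            "-o", str(OUTPUT_ISO),   # output image
            "-r", "-J",              # Rock Ridge + Joliet so filenames survive in Linux and Windows guests
            "-V", "VM_DROP",         # volume label shown inside the guest
            str(SOURCE_DIR),
        ],
        check=True,
    )
    print(f"Attach {OUTPUT_ISO} to the VM's virtual CD drive in Workstation.")
    ```

    Since the guest only ever sees a read-only CD image, nothing can flow back to the host, which fits the one-way isolation above.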




  • TIL about Fedora; last I knew it was a rolling, bleeding-edge OS. Clearly lots of movement in the Red Hat camp.

    As for gaming, drivers were not the problem for me; getting games to run with ease was. On OpenSUSE I just install Steam, enable Proton and basically go from there. Doing the same on Red Hat was non-trivial. Could be a skill issue, but I had a better time getting going with OpenSUSE TW.


  • Sort of, OpenSUSE Tumbleweed. I started on OpenSUSE Leap but had issues getting things like GPU and Steam working. Red Hat was also a non-starter because of the lack of gaming functionality.

    TW works great for gaming, and the enterprise features I care about (like domain joining) work out of the box. It's certainly harder to set up than something more geared towards home use (typically one of the various downstreams of Debian or Arch), but that doesn't bother me.


  • Second this - for what it's worth (and I may be tarred and feathered for saying this here), I prefer commercial software for my backups.

    I’ve used many, including:

    • Acronis
    • Arcserve UDP
    • Datto
    • Storagecraft ShadowProtect
    • Unitrends Enterprise Backup (pre-Kaseya, RIP)
    • Veeam B&R
    • Veritas Backup Exec

    What was important to me was:

    • Global (not inline) deduplication to disk storage
    • Agent-less backup for VMware/Hyper-V
    • Tape support with direct granular restore
    • Ability to have multiple destinations on a backup job (e.g. disk to disk to tape)
    • Encryption
    • Easy to set up
    • Easy to make changes (GUI)
    • Easy to diagnose
    • Not having to faff about with it - having it be the one thing in my lab that just works

    Believe it or not, I landed on Backup Exec. Veeam was the only other one to even get close. I’ve been using BE for years now and it has never skipped a beat.

    This most likely isn’t the solution for you, but I’m mentioning it just so you can get a feel for the sort of considerations I made when deciding how my setup would work.


  • As others have mentioned, it's important to highlight the difference between a sync (basically a replica of the source) and a true backup, which keeps historical versions of the data.

    As far as tools go, if the device is running OMV you might want to start by looking at the options within OMV itself to achieve this. A quick Google search hinted at a backup plugin that some people seem to be using.

    If you're going to be replicating to a remote NAS over the Internet, use a site-to-site VPN for this and do not expose file sharing services to the internet (for example by port forwarding). It's not safe to do so these days.

    The questions you need to ask first are:

    1. What exactly needs to be backed up? Some of it? All of it?
    2. How much space does the data I need backed up consume? Do I have enough to fit this plus some headroom for retention?
    3. How many backups do I want to retain, and for how long? (For example, you might keep 2 weeks of daily backups, 3 months of weekly backups and 1 year of monthly backups - there's a rough sizing sketch after this list.)
    4. How feasible is it to run a test restore? How often am I going to do so? (I can’t emphasise test restores enough - your backups are useless if they aren’t restorable)
    5. Do you need/want to encrypt the data at rest?
    6. Does the internet bandwidth between the two locations allow you to send all the data for a full backup in a reasonable amount of time, or are you better off manually seeding the data across somehow?
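
    To put rough numbers against questions 2, 3 and 6 - every figure below is made up, so plug in your own - a back-of-the-envelope sizing sketch might look like this (it ignores deduplication and compression, which the tool you pick will change a lot):

    ```python
    # Back-of-the-envelope backup sizing - every number here is a placeholder.
    full_backup_gb  = 500   # size of one full backup of the data chosen in Q1
    daily_change_gb = 10    # rough daily churn captured by incrementals
    daily_kept      = 14    # e.g. 2 weeks of dailies
    weekly_kept     = 12    # e.g. 3 months of weeklies
    monthly_kept    = 12    # e.g. 1 year of monthlies
    headroom        = 1.3   # ~30% spare so the destination never runs completely full

    # Crude model: dailies are incrementals, weeklies and monthlies are full copies.
    needed_gb = (daily_kept * daily_change_gb
                 + (weekly_kept + monthly_kept) * full_backup_gb) * headroom
    print(f"Roughly {needed_gb:,.0f} GB of destination storage needed")

    # Q6: how long one full backup takes to push over the link between sites.
    link_mbps = 50          # usable upload bandwidth
    hours = (full_backup_gb * 8 * 1000) / (link_mbps * 3600)
    print(f"A {full_backup_gb} GB full would take about {hours:.1f} hours at {link_mbps} Mbps")
    ```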

    Once you know that, you will be able to determine:

    1. What tool suits your needs
    2. How you will configure the tool
    3. How to set up the interconnects between sites
    4. How to set up the destination NAS

    I hope I haven't overwhelmed, discouraged or confused you more - feel free to ask as many questions as you need. Protecting your data isn't fun, but it is important, and it's a good choice you're making to look into it.


  • Back in the day, when the self-hosted $10 license existed, I was using JIRA Service Desk to do this. As far as ticketing systems go it was very easy to work with and didn't slow me down too much.

    I know you don’t want a ticket system but I’m just curious what other people will suggest because I’m in the same boat as you.

    Currently I haphazardly use Joplin to take very loose notes and sync them to Nextcloud.

    If you want a very simple option with minimal setup and overhead you could use Joplin to create separate notes for each “part” of your lab and just add a new line with a date, time and summary of the change.

    I do also use SnipeIT to track all my hardware and parts, which allows you to add notes and service history against the hardware asset.

    Other than that, I’m keen to see what everyone else says




  • Authelia is popular, as is Keycloak. I believe Red Hat develops Keycloak or at least has a hand in it.

    I'm on this journey as well, figuring out what I'm going to use. Currently most of my services just use LDAP back to AD, but I'm looking to do something more modern like SAML, OAuth or OpenID Connect so that I can reduce the number of MFA tokens I have.

    Just as an anecdote you may find useful: I used to run Active Directory for my Windows machines and FreeIPA for my Linux machines, and I've managed to simplify this down to just AD. Linux machines can be joined, you can still use sudo and all the other good stuff, and you only have one source of truth for identity.
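
    For what it's worth, once a box is joined with something like realmd/SSSD, AD users and groups show up through NSS like any local account. A quick sanity check might look like the sketch below - the user, group and domain names are made up, and the exact name format depends on your SSSD settings:

    ```python
    # Check that AD identities resolve on a domain-joined Linux box.
    # Assumes the machine was joined with realmd/SSSD; all names below are hypothetical.
    import grp
    import pwd

    AD_USER  = "jdoe@ad.example.com"              # hypothetical AD account
    AD_GROUP = "linux-admins@ad.example.com"      # hypothetical AD group referenced in sudoers

    user = pwd.getpwnam(AD_USER)                  # raises KeyError if SSSD can't resolve it
    print(f"{AD_USER} -> uid={user.pw_uid}, home={user.pw_dir}, shell={user.pw_shell}")

    group = grp.getgrnam(AD_GROUP)
    print(f"{AD_GROUP} -> gid={group.gr_gid}, members={group.gr_mem}")
    ```

    sudo then just keys off the AD group in a sudoers rule the same way it would for a local group.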




  • Zabbix can do everything you're asking and can be connected to Grafana if you want custom visualisations. Most importantly, it contextualises what you need to know on the dashboard - that is, it only tells you about things that require your attention.

    You’re of course able to dive into the data and look at raw values or graphs if you wish, and can build custom dashboards too.

    I've used it in both home lab and production scenarios, monitoring small to mid-size private clouds: Windows and Linux hosts, Docker, backups, SAN arrays, switches, VMware vSphere, firewalls, the lot. It's extremely powerful and not terribly manual to set up.

    If metrics are all you want and you aren't too fussed about the proactive monitoring focus, Netdata is a great option for getting up and running quickly.
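
    If it helps, pulling current problems out of Zabbix for your own dashboards or scripts is just a JSON-RPC call. A rough sketch - the URL and API token are placeholders, and the exact auth handling varies a little between Zabbix versions:

    ```python
    # Fetch active problems from the Zabbix API (JSON-RPC over HTTP).
    # The URL and API token are placeholders.
    import requests

    ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"
    API_TOKEN  = "replace-with-a-real-api-token"

    payload = {
        "jsonrpc": "2.0",
        "method": "problem.get",
        "params": {"recent": True, "sortfield": ["eventid"], "sortorder": "DESC", "limit": 20},
        "auth": API_TOKEN,
        "id": 1,
    }
    resp = requests.post(ZABBIX_URL, json=payload, timeout=10)
    resp.raise_for_status()

    for problem in resp.json().get("result", []):
        print(problem["clock"], problem["severity"], problem["name"])
    ```

    As far as I know, the Grafana data source talks to this same API, just with a nicer front end.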





  • Sure, I've used it in both server and NAS scenarios. The NAS was where we had the most issues. If the BTRFS maintenance tasks weren't scheduled to run (balance, defrag, scrub and another one I can't recall), the disk could become "full" without actually being full. If I recall correctly it's to do with how it handles metadata: there's space, but you can't save, delete or modify anything.

    On a VM, it's easy enough to buy time by growing the disk and running the maintenance. On a NAS or physical machine, however, you're royally screwed without adding more disks (if that's even an option). This "need to have space to make space" situation was pretty suboptimal.

    Granted, now that I know better and am aware of the maintenance tasks, I simply schedule them (with cron or similar). But I still have a bit of a sour taste from it, lol. Overall I don't think it's a bad FS as long as you look after it.
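
    To give an idea of what "schedule them" looks like, something along these lines run from a weekly cron job (or a systemd timer) covers the two big ones - the mount point and the balance threshold are just examples:

    ```python
    #!/usr/bin/env python3
    # Periodic BTRFS maintenance, intended to be run from cron as root.
    # The mount point and usage threshold are examples; tune them for your own array.
    import subprocess

    MOUNTPOINT = "/mnt/data"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Rewrite data chunks that are under 75% used, reclaiming allocated-but-empty space
    # (this is what avoids the "full disk that isn't actually full" situation).
    run(["btrfs", "balance", "start", "-dusage=75", MOUNTPOINT])

    # Verify checksums and repair from redundancy where possible; -B waits for completion.
    run(["btrfs", "scrub", "start", "-B", MOUNTPOINT])
    ```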


  • This for sure. As a general rule of thumb, I use XFS for RPM-based distros like Red Hat and SuSE, and EXT4 for Debian-based ones.

    I use ZFS if I need to do software RAID, and I avoid BTRFS like the plague. BTRFS requires a lot of hand-holding in the form of maintenance that is far from intuitive, and I expect better from a modern filesystem (especially when there are others that do the same job hassle-free). I have had more FS-related issues on BTRFS systems than on any other, purely because of how it handles data and metadata.

    In saying all that, if your data is valuable then make sure you back it up, and you won't need to worry about failures so much.