  • Add to that: for an existing installation I’d recommend Incus for the VM work, with its web UI. You get to keep your kernel, so you’re less joined at the hip to it.

    A 2-port Intel NIC + some switch and your server is a router too. OPNsense’s web UI is great; it can be hard to find things, but searching gets you there, most of it is easy enough, and it’s the best web UI + automatic updates combo for routers out there.
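
    For the Incus route, a rough sketch of what that can look like (the VM name, ISO path, and NIC interface below are placeholders I made up, not a tested recipe):

    ```bash
    # Hedged sketch: create an empty VM, attach the OPNsense installer ISO,
    # and pass one port of the Intel NIC straight through to it.
    incus init opnsense-vm --empty --vm -c limits.cpu=2 -c limits.memory=4GiB -d root,size=20GiB
    incus config device add opnsense-vm installer disk source=/path/to/OPNsense.iso boot.priority=10
    incus config device add opnsense-vm wan nic nictype=physical parent=enp2s0f1
    incus start opnsense-vm --console=vga
    ```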


  • I’d recommend either the UniFi Dream Router (WiFi 6 version for $200; just put a watch on the UniFi page to get it, they come back in stock regularly) for a one-stop shop

    or

    A Dell Wyse 5070 Extended (make sure you get the power brick; it needs to be the 130W one to drive a PCIe device) + some 2-port 1Gbps Intel NIC. You install OPNsense on it directly, or in a VM on top of Incus/Proxmox, plus external WiFi like a UniFi AP and an external switch with PoE++ or whatever the AP needs.

    Both options are gonna exceed $150 (unless you get fire deals on a super small PC that can hold a PCIe card), but they’ll be great and they’ll last. The OPNsense box will be here forever; I’ve had mine for a long time now and it’s never given me trouble. The external UniFi AP is solid as well, and you can easily mesh out if needed. A friend has the Dream Router and it’s also giving no trouble - at some point UniFi will drop the security updates and such, but that’s a long way off for such a solid all-in-one (plus future mesh as needed).

    I never could get OpenWrt devices to update the way I’d like (automatically, without clearing all my settings), but OPNsense does that no problem. I’ve also heard from many people that you need proprietary WiFi - OpenWrt’s WiFi is meh at best and F-tier at worst. So you bring your super-router with an external WiFi AP (or just the UniFi all-in-one).

    You can swap the UniFi AP out for a TP-Link Omada AP, but I liked the look of the UniFi controller software better and trust them a smidge more than TP-Link for local controller software that’s always on.

    You can easily get away with an older, cheaper WiFi 5 UniFi AP too. (The cheapest WiFi 7 AP has a fan in it - avoid it; it’s also like $200+.)



  • I do not know of Internet Comment Etiquette, sorry to disappoint! It’s a username that’s humorous to me and fits a core tenet of mine.

    Do remember (or put in the .env) the user/pass for your DBs, but they don’t matter much as long as you know them.

    I’m talking about the process user, the ‘user: 6969:6969’ in the docker-compose file. If it’s not there, the container runs as whatever user the image specifies, which is usually root unless you’ve got rootless Docker going. That could be bad, so head it off if you can! Overall I’d say it’s a low priority, but a real one. Some naughty container could do bad things with root privilege and the right Docker vulnerability. I’ve never heard of that kind of attack in the self-hosted community, but as self-hosting gains traction I worry a legit container will get an attack slipped in somehow and wreck (probably ransomware) root Docker installations.
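
    For reference, a minimal sketch of what I mean (the 6969 UID/GID, user name, image, and paths here are placeholders, not anything canonical):

    ```bash
    # Create a dedicated unprivileged user/group on the host...
    sudo groupadd --gid 6969 appsvc
    sudo useradd --uid 6969 --gid 6969 --system --no-create-home appsvc

    # ...then point the compose service at it so the process isn't root.
    cat <<'EOF' > docker-compose.yml
    services:
      app:
        image: example/app:latest   # placeholder image
        user: "6969:6969"           # run as appsvc, not root
        volumes:
          - /srv/app/data:/data
    EOF
    ```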

    First priority is backup - then you can worry about removing root containers (if you haven’t already done so!).


  • You want your Docker containers’ persistent data bind-mounted to real locations. I use named volumes for non-persistent stuff.

    You want your real locations to have a file system that can snapshot (ZFS, BTRFS).

    Then you can dump the superior Postgres databases, and for all other databases and data you stop the containers, snapshot, start the containers (limits downtime!), and then back up that snapshot. Thanks to the snapshot, you don’t need to wait for the backup to finish before bringing the containers back up to keep the data consistent. For backup I use restic; it seems to work well and has self-check functions, so that’s nice. I chose restic over just sending snapshots because of its built-in encryption and checks, which give you reliable data integrity on unreliable mediums (anyone, even giant providers, could blackhole bits of your backup!). I also copy over the restic binary that made the backup using encrypted rclone; the encryption there keeps anyone (the baddies? Idk who’d target me, but it doesn’t matter now!) from mucking with the binary in case you ever need that exact version to restore.
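
    In shell terms the cycle is roughly this (dataset names, paths, and the rclone remote are placeholders; assumes ZFS and an existing restic repo):

    ```bash
    # Hedged sketch of stop -> snapshot -> start -> back up the snapshot.
    cd /srv/stacks/app && docker compose stop
    zfs snapshot tank/appdata@backup-$(date +%F)
    docker compose start                            # containers are back up already
    # back up from the read-only snapshot instead of the live data
    restic -r rclone:remote:backups backup "/tank/appdata/.zfs/snapshot/backup-$(date +%F)"
    # keep a copy of the exact restic binary next to the backups (encrypted rclone remote)
    rclone copy /usr/local/bin/restic remote:backups/bin/
    ```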

    Note I do not dump the other SQL databases; they’re offline and get snapshotted in a stable state. Their dump scene was nasty, esp compared to Postgres’ amazingly straightforward way (while running!). I didn’t bother figuring out dump or restore for them.
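
    For the Postgres side, the while-running dump is basically a one-liner (container name and dump path are placeholders):

    ```bash
    # Dump every database from a running Postgres container, compressed.
    docker exec -t postgres-app pg_dumpall -U postgres | gzip > /tank/appdata/dumps/pg-$(date +%F).sql.gz
    ```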

    All of your containers should have their own specific users; pin the UID/GID so they’re easily recreatable in a restore scenario. (The database containers get their own users too.)

    Addendum for the specific users: if a Docker container is coded by F-tier security peeps and hard-requires root, make an LXC container run by a specific user and put the Docker container inside it. Or use Podman; it’s competent and can successfully lie to those containers (root inside maps to an unprivileged user outside).
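
    Rootless Podman under a normal account looks something like this (image and paths are placeholders); the container believes it’s root, but UID 0 inside maps to that user’s subuid range on the host:

    ```bash
    # Run as an unprivileged user; no root daemon involved.
    podman run -d --name app -v /srv/app:/data example/app:latest
    podman top app user huser    # compare the in-container UID to the real host UID
    ```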

    I don’t do full test restores of my backups because the time to do so is stupid high thanks to my super low internet speeds. I tested restoring specific files with restic when setting it up, and now I rely on the integrity checks (a 2GB check a day) to spot-check that everything is reliable. I have a local backup as well as a remote one; the local is the snapshot used to make the remote restic backup. The snapshot is directly traversable, so I don’t need to scrutinize it hard. If I had faster internet I’d probably test a full remote restic restore once a year. For now I try to restore a random file or small directory once a year.
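
    The spot checks and the yearly file restore are just restic one-liners (repo and paths are placeholders):

    ```bash
    # Read back a 2GB sample of pack data to verify integrity.
    restic -r rclone:remote:backups check --read-data-subset=2G
    # Pull a single file/directory out of the latest snapshot as a restore test.
    restic -r rclone:remote:backups restore latest --target /tmp/restore-test --include /tank/appdata/some/dir
    ```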

    Hope the rant helps