  • Ahh ok, that makes sense. Hah magical algorithm.

    Yeah, it’s about 30 TB of photos/videos. I only recently got into videography, which takes up a ton of space. About 25% of that is videos converted into an editing codec, but I don’t have those backed up to external drives. I also have some folders excluded that I know have duplicates. A winter project of mine will be to clear out some of the duplicates and then cull the photos/videos I definitely don’t need. I got into a bad data-hoarding habit and kept everything even after selecting the keepers.
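
    For the duplicate cleanup, I’ll probably start with something like this hash-grouping sketch (untested, and the share path is a placeholder):

    ```python
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def file_hash(path, chunk=1 << 20):
        """SHA-256 of a file, read in 1 MiB chunks so large videos don't eat RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def find_duplicates(root):
        """Group files under root by content hash; any group with >1 entry is duplicates."""
        groups = defaultdict(list)
        for p in Path(root).rglob("*"):
            if p.is_file():
                groups[file_hash(p)].append(p)
        return {h: ps for h, ps in groups.items() if len(ps) > 1}

    # "/mnt/user/photos" is a placeholder path, not my actual share
    for digest, paths in find_duplicates("/mnt/user/photos").items():
        print(digest[:12], *paths, sep="\n  ")
    ```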

    I have an in-progress folder where I dump everything, then folders by year/month for projects and keepers. I need to get better at culling as I go.

    I like that idea, I will incorporate it into my strategy.

    Thank you for taking the time to help me out with this, much appreciated!


  • I didn’t consider that, excellent point. Forgive my ignorance because I’m not certain how backup systems work, and feel free to ignore this if you don’t know. I presume they compare some metadata or a hash of a file against the backed-up copy and then decide whether it needs backing up again? Let’s say I have a file that I’ve already backed up, and then some ransomware encrypts my files. Would the backup software make a second copy of the file?
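
    To make my presumption concrete, here’s roughly the logic I’m imagining, as a sketch rather than how any particular tool actually works (the manifest file name is made up):

    ```python
    import hashlib
    import json
    from pathlib import Path

    MANIFEST = Path("manifest.json")  # made-up state file: path -> last-seen hash

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def changed_files(root):
        """Yield files whose content differs from the previous run.
        A ransomware-encrypted file hashes differently, so it would be
        picked up again as a "changed" file -- replacing the good copy
        unless the backup tool keeps old versions around."""
        old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
        new = {}
        for p in Path(root).rglob("*"):
            if p.is_file():
                digest = sha256(p)
                new[str(p)] = digest
                if old.get(str(p)) != digest:
                    yield p
        MANIFEST.write_text(json.dumps(new))
    ```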

    So for most of the important files, I just do a sync to an external drive periodically, basically when I know there have been a lot of changes. For example, I went on a trip last year and came back with nearly 2 TB of photos/videos. After ingesting the files to Unraid, I synced my external drive. Since I haven’t done much with those files since that first sync, I haven’t done the periodic sync since then. But now you’ve opened my eyes that even this could be a problem. How would the G-F-S strategy work in this case?
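
    From what I’ve read so far, G-F-S seems to mean tiered retention rather than one rolling copy. My rough reading of it, sketched out (the counts are just the classic daily/weekly/monthly defaults, not a recommendation):

    ```python
    from datetime import date, timedelta

    def keep(snapshot_date, today, sons=7, fathers=4, grandfathers=12):
        """Rough grandfather-father-son rule: keep daily snapshots for a week,
        weekly (Sunday) snapshots for a month, and monthly (1st-of-month)
        snapshots for roughly a year."""
        age = (today - snapshot_date).days
        if age < sons:                                             # son: last 7 dailies
            return True
        if snapshot_date.weekday() == 6 and age < fathers * 7:     # father: weeklies
            return True
        if snapshot_date.day == 1 and age < grandfathers * 30:     # grandfather: monthlies
            return True
        return False

    today = date(2024, 6, 1)  # placeholder date
    for d in (today - timedelta(days=n) for n in range(400)):
        if keep(d, today):
            print(d)  # the snapshots a G-F-S scheme would retain
    ```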

    I thought about ZFS or Btrfs, but my Unraid array is unfortunately XFS and it’s too large at this point to rebuild from scratch.

    Haha, that would be a lot of Blu-rays.




  • Ahh gotcha, I misunderstood that then. I could probably set up a VPN there but don’t want to overcomplicate it. An always-on Pi will be fine, I think; they’re low power. I could also add a smart switch and set up a schedule or something, but I don’t think that’s worth the hassle considering how little power a Pi draws.

    Hmmm, that’s a good point about Syncthing backing up corrupt files. I was thinking of using it because I already use it extensively, and I wouldn’t need to mess with port forwarding or anything of the sort.

    I previously kept multiple copies of files as a backup “strategy” and it got way out of hand; now I have like 1.5 million photos lol. What do you recommend as an alternative to Syncthing?



  • I think this is the play. I’ll likely just get an enclosure for the two 4TB drives I have, and I can always buy an external drive in the future and have them plug it in.

    I don’t have any experience setting up WireGuard, so I’ll have to look into that. I was thinking of using Syncthing since that skips the need for it, but I think someone in the thread mentioned that may not be ideal in case of file corruption.
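
    For my own reference while I read up on it, a minimal WireGuard config on the remote Pi apparently looks something like this (every value below is a placeholder):

    ```
    # /etc/wireguard/wg0.conf on the remote Pi -- placeholder values throughout
    [Interface]
    PrivateKey = <pi-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <home-server-public-key>
    Endpoint = home.example.com:51820   # one forwarded UDP port at home
    AllowedIPs = 10.0.0.0/24
    PersistentKeepalive = 25            # keeps the NAT mapping alive
    ```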

    Do you just have Raspbian on the Pi?



  • Thanks for the detailed reply.

    So my main NAS is Unraid, and I also have a couple of Proxmox boxes, though I’m less concerned about those since the main files are on the NAS, and I have a Proxmox Backup Server VM set up on Unraid with regular backups there.

    For most of my important files on Unraid, I have an external drive that I periodically sync and store in a safe.
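
    The sync itself is nothing fancy; roughly this, wrapped in a script (the paths are placeholders, and I’d keep --dry-run until the preview looks right):

    ```python
    import subprocess

    SRC = "/mnt/user/important/"   # placeholder share; trailing slash copies contents
    DST = "/mnt/disks/external/"   # placeholder external drive mount

    # -a preserves metadata, --delete mirrors removals on the destination;
    # drop --dry-run once the preview output looks right
    subprocess.run(["rsync", "-a", "--delete", "--dry-run", SRC, DST], check=True)
    ```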

    I also have access to a VPS with over 1 TB of space, which I’m still figuring out how best to integrate into my backup strategy.

    For what I’m asking here, I just want a simple solution that I can tuck away, access remotely, and keep updated with Syncthing or something.




  • Have you tried non-official Jellyfin clients? I had issues with the official app on Apple TV as well, so I switched to Infuse. The free app supports most things, and I think the paid version gives access to additional codecs, if I recall correctly. The Pro upgrade isn’t too expensive, and the app is sleek across Apple devices. The only issue I’ve really had is that sometimes, if a video isn’t played to the end (as in, stopped during the credits), it still shows up in Continue Watching, but that might be a problem on my end.

    I think MX Player is similar on Android, which might be worth checking out.


  • I actually tried this as my second step in troubleshooting, the first being using different ports.

    In the non-Omada management software, the port defaults to 10G, and if the devices are powered on before the switch, they negotiate 10G correctly and work at full speed (tested with iperf3). As soon as any of the 10G-connected devices is rebooted, I’m back to 1G. To fix it, I have to set the port to 1G with flow control on, apply the changes, save the config, refresh the page, change it back to 10G with flow control off, apply, and save the config again, and then it goes back to 10G. Alternatively, I can reboot the switch and it’s fine again.

    In Omada it’s the same; fewer steps to get there, but I sometimes have to do it 2-3 times before it works.

    Same issue with both 10G TP-Link switches, so I’m thinking it might be the SFPs. I’m using Intel SFP+ modules with FS optical cables. I’m using a DAC for the uplink from the 10G switch to my unmanaged 2.5G switch, and that one never drops; it always runs at max speed.