

I’d much prefer the devs to spend time adding more linux drivers for the hardware and then we can just install linux without android
I don’t think there’s a solution for RAM dedupe, so your only option for runtime efficiency is (RAM) compression
That has a performance hit for every read, write and paging operation, so, lower performance than you’d expect…
But, I guess you don’t run all 91 apps at the same time, so you’re probably into diminishing returns for the few apps you do run in parallel…?
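If you did want to try compression, zram is probably the easiest route. A rough sketch - this assumes systemd’s zram-generator package is installed (naming may differ by distro), and the values are just examples:

```
# Create a compressed swap device in RAM via zram-generator
sudo tee /etc/systemd/zram-generator.conf <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service
zramctl    # check the device, algorithm and compression ratio
```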
If your files are FLAC and you just want to copy some files, you could try mp3fs
That’ll make your files appear to be MP3 when you access them
You could then use a file transfer mechanism to read them from the mp3fs location onto your phone with - kinda - one step.
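Something like this, assuming mp3fs is installed and your FLACs live under ~/Music (paths and bitrate here are just examples):

```
mkdir -p ~/mp3view
mp3fs -b 256 ~/Music ~/mp3view -o ro          # the FLACs now appear as 256 kbps MP3s
rsync -av ~/mp3view/ /path/to/phone/Music/    # copy the transcoded view across
fusermount -u ~/mp3view                       # unmount when you're done
```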
So… if I ditch my MP3s (@ 320k) and use Ogg + Opus (@ …? bps?), then I’d have the same / “better” music in less storage space?
Does that work OK with Picard etc.? I’m a bit OCD about metadata
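For what it’s worth, Opus at around 128 kbps is generally considered transparent, and opusenc (from opus-tools) should carry the Vorbis-style tags across from the FLAC - which is the same tag format Picard writes. A quick sketch, with the bitrate as an example:

```
opusenc --bitrate 128 input.flac output.opus   # tags from the FLAC are carried over
```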
noatime would help here
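e.g. as a mount option in /etc/fstab (the device UUID and mount point here are placeholders):

```
UUID=xxxx-xxxx  /home  ext4  defaults,noatime  0  2
```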
Anyone new to a subject gains their confidence (or not) if you’re confident (or not)
So, I’d suggest picking one distro to install, and make sure you’re familiar with it.
Have multiple copies of the installer ready so you’re able to get things running in parallel and then you’re 75% ready.
Also, be prepared for people turning up with all their cherished photos on their laptop, not understanding what you’re about to do: they’ll say they’re happy for you to install a new OS, then be upset that pictures of Fluffy aren’t there any more…
No Pairdrop?
This works for me for any ad hoc files I want to get to any device, any time, any place…
For any regular file transfers, syncthing.
The in-memory copy of the tab template means you don’t have to restart FF right now to load the new one from disk, but at some point you’re still going to have to restart…
I tend to hibernate my laptop, so it doesn’t reboot often… so surely I’m going to get to the point where FF needs a restart…
Or, complain to the website not to use browser-specific features (just like the old Internet Explorer days)
I’ve not looked into it properly yet, but - considering this is still free software - I don’t believe that level of granularity exists.
So, if I wanted to share my holiday photos from last week with one friend, and the photos from someone’s party with different friends… nope.
That’s an interesting point…
I’d like to share some (holiday) photos with my friends & family, so I can put those onto Pixelfed / Friendica / etc… I don’t necessarily want to share all the photos…
And that’s using the cloud.
Job Done. The self-hosting + federated cloud future is here!
Rejoice.
With respect, you wouldn’t install these by just doing an update, so pacman -Syu is fine.
You would have needed to install these manually, or via a package that depended on them - both from the AUR - so you’d also need to use yay (etc.) to install them.
But - I totally agree with your point that the names look innocent enough for someone to install those over other packages.
Always look at the AUR (website) at the package details - if it’s new(ish) and has 0 or 1 votes, then be suspicious.
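With yay you can do a quick sanity check from the terminal too (package name below is a placeholder):

```
yay -Si some-aur-package          # shows Votes, Popularity, First Submitted, Maintainer
yay -Gp some-aur-package | less   # print the PKGBUILD to eyeball it first (recent yay versions)
```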
Have a look at Patrick Kennedy’s reviews on yoochoob under ServeTheHome - there’s some fantastic hardware available now
I ended up buying something from AliExpress, which I was initially reluctant to do - but Patrick’s reviews convinced me
For detailed reviews his site’s got the details from the videos: https://www.servethehome.com/
It depends on the sync / backup software
Syncthing uses a stored list of hashes (which is why it takes a long time for the initial scan), then it can monitor filesystem activity for changes to know what to sync.
Rsync compares all source and destination files with a quick check on size and modification time by default (or full checksums with -c), then uses its rolling-checksum delta algorithm to transfer only the parts that changed
Then, backup software does… whatever.
Back in the day on FAT filesystems they used the archive bit in each file’s metadata, which was (IIRC) set by any write to the file and cleared during a backup. The next backup could then just back up the files with the bit set.
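To see rsync’s two comparison modes in action (dry runs, so nothing is transferred; paths are placeholders):

```
rsync -avn  /src/ /dest/   # quick check: size + mtime only
rsync -avnc /src/ /dest/   # -c: full checksum comparison - thorough but slow
```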
Your current strategy is OK - just doing an offline backup after a bulk update; maybe it’s just a case of making that more robust by automating it…?
I suspect you have quite a large archive as photos don’t compress well, and +2TBs won’t disappear with dedupe… so, it’s mostly about long term archival rather than highly dynamic data changes.
So that +2TB… do you drop those files in amongst everything else, or do you have 2 separate locations ie, “My Photos” + “To Be Organised”?
Maybe only backup “MyPhotos” once a year / quarter (for example), but fully sync “To Be Organised”… then you’ve reduced risk, and volume of backup data…?
The main point is that sync (like RAID) isn’t a backup. If ransomware got in and started encrypting all your files, how would you know / protect yourself…
There’s a lot of focus on 3-2-1 backups, so offsite is good, but consider your G-F-S (Grandfather-Father-Son) rotation too - as long as this remote copy isn’t your only long-term backup option, then sync might be OK for you
So, syncthing / rsync / etc is fine… but maybe just point it to your monthly / weekly / daily backup folder(s) rather than the main files?
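As a sketch, rsync can give you cheap point-in-time folders via hard links, so unchanged files cost almost nothing per snapshot (paths here are examples):

```
today=$(date +%F)
rsync -a --delete --link-dest=/backup/latest ~/Photos/ "/backup/$today/"
ln -sfn "/backup/$today" /backup/latest   # unchanged files are hard links into the previous snapshot
```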
You also had some other suggestions I think, like zfs / btrfs snapshots… which would be a point in time copy of your files.
Or burn the photos to DVD / Bluray and store them at the other location? No power requirements there…
I think most options have been covered here, but I’d like to consider some other bits…
User accounts & file permissions:- if you have >1 account, note that the UserID is internally numbered (starting from 1000, so Bob=1000, Sue=1001) and your file system is probably setup using the numerical UserID… so re-creating the users in a different order would give Sue access to Bob’s files and vice versa.
Similarly, when backing up /etc, /var, etc… you should check whether any applications (e.g. databases) need specific chmod and chown settings restored.
Rsync, tar, etc can cover some of this, you just need to check you rebuild users in the correct order.
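For example, both tar and rsync can preserve numeric ownership, which sidesteps the name-to-UID remapping problem entirely (paths are examples):

```
sudo tar --numeric-owner -cpzf /mnt/backup/home.tar.gz /home   # store raw UIDs/GIDs, not names
sudo rsync -aH --numeric-ids /home/ /mnt/backup/home/          # same idea for rsync
```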
Maybe Ansible is another approach? So your disaster recovery would be: install a minimal OS, run your playbook(s) to recreate users, packages and config, then restore your data from backup.
When you get this working, it’s amazing to watch an entire system being rebuilt
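A minimal sketch of the user-recreation step - the host, names and UIDs here are made up:

```
# recreate-users.yml - pin UIDs so restored files map back to the right owners
- hosts: rebuilt_machine
  become: true
  tasks:
    - name: Recreate users with their original UIDs
      ansible.builtin.user:
        name: "{{ item.name }}"
        uid: "{{ item.uid }}"
      loop:
        - { name: bob, uid: 1000 }
        - { name: sue, uid: 1001 }
```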
Wake-on-LAN won’t work remotely, so you’d either need access to a VPN at their location, or a 2nd always-on device that you can connect to which could then WoL your device… or get a device with IPMI that you remote into. (All non-VPN forms of remote connection are open to abuse.)
I suspect (guess) you’re not going to be able to set up a VPN, so perhaps an always-on Pi is going to be necessary - so maybe it’ll be that, with drives set to spin down when idle?
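e.g. from home you’d hop through the Pi and wake the bigger box on demand - host names and the MAC below are placeholders, and wakeonlan / hdparm would need installing on the Pi:

```
ssh pi@their-house "wakeonlan AA:BB:CC:DD:EE:FF"   # wake the NAS when you need it
sudo hdparm -S 120 /dev/sda                        # spin the drive down after 10 min idle (120 x 5s)
```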
OpenMediaVault was my preferred choice until everything on it went Docker, which was getting too complex for a NAS… so I just created my own, which powers on at certain times of the day and off again when CPU / network IO is low enough.
Data transfer with syncthing is great, but I don’t really recommend sync for snapshot backups (consider: if your files are all corrupted, it’ll happily sync those corruptions). I have enough space for a few versions of my files, so in theory I can roll back, but it’s certainly not a Grandfather-Father-Son strategy.
Not sure why you’ve been downvoted - I think the Fossify apps are really good.
I even contribute towards their app development
Vivaldi has a CalDav Calendar built in.
If you’re open to that possibility: I’ve been using it on both Windows and Linux laptops, and it works well with my Radicale server.
You could, but if I’m away from home, I’ll take the movies / music / books with me so I can watch / listen / read without buffering, breaks, etc.