

It would need to be open source; distributing proprietary kernel modules is a nightmare that can cause the OS to fail to boot after every kernel update. And an open source anticheat kernel module would probably be useless and easy to bypass.
I just tested it on my PC with Proton 10 and it works (at least on my AMD GPU). My guess is the NVIDIA driver is broken.
It actually happened to me today on Arch.
I updated the system, including the kernel, and everything went smoothly with no errors or warnings. When I rebooted, it said the ZSTD image created by mkinitcpio was corrupt, and the system failed to boot.
I booted the Arch install ISO, chrooted into my installation, reinstalled the linux package, rebooted, and it worked again.
I have no explanation. This is a perfectly working laptop with a high-end SSD, no errors in memtest, no overclocking, and I've been using this Arch install for over a year.
The chances of the package being corrupted on download while still passing the hash check are astronomically low, and the chances of a cosmic ray hitting the RAM at just the right time are probably just as low. The fact that mkinitcpio doesn't verify the images it creates is shocking; the whole thing would have been avoided on an immutable distro with A/B partitions.
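To be clear about what I mean by "verify": even a dumb post-build check would have saved me here. A minimal sketch in Python, run right after mkinitcpio finishes (the image path and the single-zstd-stream assumption are mine, not mkinitcpio's; some setups prepend an uncompressed cpio, in which case the test below would complain even on a good image):

```python
#!/usr/bin/env python3
# Sketch of a post-build initramfs sanity check. Assumptions: the
# default mkinitcpio output path, zstd in PATH, and the image being
# a single zstd stream.
import hashlib
import subprocess
import sys

IMAGE = "/boot/initramfs-linux.img"   # assumed output path
HASH_FILE = IMAGE + ".sha256"         # where we record the digest

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Make sure the compressed stream actually decompresses.
if subprocess.run(["zstd", "-t", IMAGE]).returncode != 0:
    sys.exit(f"{IMAGE} failed the decompression test, do NOT reboot")

# 2. Record a checksum so later corruption can be detected too.
digest = sha256(IMAGE)
with open(HASH_FILE, "w") as f:
    f.write(f"{digest}  {IMAGE}\n")
print(f"initramfs OK, sha256 = {digest}")
```

Wired into a pacman hook, something like this fails loudly at update time instead of at the next boot.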
Generally speaking, Linux needs better binary compatibility.
Currently, if you compile something, it's usually dynamically linked against dozens of libraries that are present on your system; but if you give the executable to someone else on a different distro, they may not have those libraries, or their versions may be too old or incompatible.
Statically linking programs is often impossible and generally discouraged, which makes software distribution a nightmare. Flatpak and similar systems have made things easier, but it's such a crap solution: it basically involves having an entire separate OS installed in parallel, with its own problems, like shipping a version of Mesa that's too old for a new GPU. Applications must be able to be packaged with everything they need; there's no reason for dynamic linking to be this important on Linux these days.
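To make the failure mode concrete, here's a toy Python check of what resolves on the local machine (the library names are invented for the example; a real tool would read the ELF's DT_NEEDED entries, which is what ldd shows you):

```python
#!/usr/bin/env python3
# Toy illustration of the dynamic-linking lottery: given the libraries
# a binary was linked against at build time, see which ones actually
# resolve on this machine. Library names below are made up.
from ctypes.util import find_library

# What a hypothetical game binary might declare in its DT_NEEDED entries
NEEDED = ["SDL2", "vulkan", "openal", "obsoletecrap"]

for name in NEEDED:
    resolved = find_library(name)  # searches roughly the way the loader does
    print(f"lib{name}.so -> {resolved or 'NOT FOUND (and here the user gives up)'}")
```

On the build machine every line resolves; on the user's distro, any one of them can come back NOT FOUND, and the binary simply won't start.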
I'm not in favor of proprietary software, but better binary compatibility is a necessity for Linux to succeed, and I'm saying this as someone who's been using Linux for over a decade and refuses to install any proprietary software. Sometimes I find myself running apps and games in Wine even when a native version is available, just to avoid the hassle of finding and probably compiling libobsoletecrap-5.so
I'd say I'm a "time-strapped" user, since I have a full-time job and I'd rather spend my free time gaming than fixing a broken OS. Nevertheless, I have two PCs with Arch Linux (one for personal stuff and one for work) and a server with NixOS.
When things break on Arch (which is rare these days, but it can happen, especially if you play around with things from the AUR), I just roll back with Timeshift (it takes just a few seconds with btrfs) and try the update again a few days later. Minor issues I can ignore or work around until I feel like dealing with them, and they usually get fixed by updates within a few days. The only time I felt it was actively wasting my time was when Plasma 6 came out a few months ago and a lot of little things broke, especially on Wayland, but they were fixed rather quickly with 6.1, so I can't complain too much.
NixOS, on the other hand, has been nothing but trouble and a waste of time ever since I installed it. It took me a week to configure, some packages are kind of old, most have incomplete declarative config, and I had to write some units myself. When things break, it drives me crazy, because even basic troubleshooting of services can be a pain in the ass: I have to find out where stuff is, know which config files are going to be overwritten, launch the correct nix-shell, … it's all so tiresome, so I just revert to an older config and hope for the best. To make things worse, major updates often require manual changes to the config or even to the application files themselves (looking at you, Nextcloud), and you'll excuse me if I can't be bothered to do that on a DECLARATIVE DISTRO. Even Debian doesn't need that, come on! I don't care what people say about NixOS: this OS is not ready yet, I don't have time for this shit when I'm working, and that server is going back to Debian next summer.
Arch Linux. Everyone said it was hard to use, unstable, etc., but my experience with it has been the exact opposite.
Yes, the install process is needlessly complicated (although it got a lot simpler now that we have archinstall), but the OS itself is rock solid and rarely has any issues that require more than a reboot or a package reinstall to fix. The AUR is a godsend too if you don't want to, or don't know how to, compile stuff from source.
When they were installing the alarm at my house, I noticed that the main guy had Nextcloud on his phone, and it sparked a nice conversation about privacy. He has no technical background, but he managed to self-host it on his old laptop with one of those distros that have an easy UI for self-hosting (I don't remember which one exactly). He's a pretty cool guy.
Imagine using pirated software and allowing it to go online. Loco 🤯
I already had a script on the router that notified me of network outages and IP changes, kept the DDNS record updated, etc., so I figured it was easier to just add a couple of lines to that.
The jitsi user is a system user, so it can't log in even if you set a key for it. Besides, I wouldn't risk it anyway: that user is in the docker group, so if it were somehow compromised, the attacker would have very high privileges.
I think this one beats them all.
My home server keeps a few services up, including an instance of Jitsi Meet. The server runs NixOS, and the NixOS package for Jitsi is incomplete to say the least (it doesn't even support authentication), so I use the docker-compose version, with a script that runs periodically to keep it updated. So far so good, right? Well, no.
Because the server is at home, I have a dynamic external IP address, so I have to use a DDNS provider. But Jitsi doesn't expect this: at startup it queries a STUN server once to determine the public IP of the server, so if my connection goes down or restarts and the IP changes, Jitsi needs to be restarted or it won't work anymore.
The solution? A couple of extra lines in the script on my router: when the public IP changes, restart the Jitsi containers.
I've been running this setup since mid-2020, and I expect it to continue until IPv6 becomes the norm.
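For the curious, those couple of lines boil down to this kind of logic (a rough Python rewrite, not my actual router script; the IP-echo endpoint, state file, and compose path are placeholders):

```python
#!/usr/bin/env python3
# IP-change watchdog, meant to run from cron every few minutes.
# Placeholders: the IP-echo service, the state file, the compose dir.
import subprocess
import urllib.request

IP_ECHO = "https://icanhazip.com"        # any plain-text "what is my IP" service
STATE_FILE = "/var/tmp/last_public_ip"   # last IP we acted on
COMPOSE_DIR = "/opt/jitsi-docker"        # where the docker-compose setup lives

def current_ip() -> str:
    with urllib.request.urlopen(IP_ECHO, timeout=10) as r:
        return r.read().decode().strip()

def last_ip() -> str:
    try:
        with open(STATE_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""

ip = current_ip()
if ip and ip != last_ip():
    # The IP changed: restart the stack so Jitsi redoes its one-time STUN lookup
    subprocess.run(["docker", "compose", "restart"], cwd=COMPOSE_DIR, check=True)
    with open(STATE_FILE, "w") as f:
        f.write(ip + "\n")
```

The restart is the blunt-instrument part, but since Jitsi only does that STUN lookup once at startup, it's the only way to make it pick up the new address.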
I switched for good in 2019, when I realized that I was wasting more time getting Windows into a usable state than the average Arch user.
Privacy and usability were the biggest reasons for me.
Good idea, I’ll add it to the to-do list for the next major release.
Occasionally some cloud providers or ISPs chime in and offer their servers to the public. If you have a LibreSpeed server, you can submit it here: https://librespeed.org/submit
I'm the author of the project. The servers are simply overloaded af, unfortunately. It's a fairly popular project, and we don't have enough servers to support this many concurrent users.
It doesn't need JavaScript from "20 different domains"; only a file called empty.php is fetched from those servers, to measure the ping. The JavaScript is hosted on librespeed.org, which is under my control.
Damn, never thought I’d live to see the enshittification of F-Droid. I definitely won’t be using it anymore if this happens.
Hi, I'm the original author of LibreSpeed. When you load the website, it downloads a list of servers and tries all of them to see which one has the lowest ping; that's what you're seeing.
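In Python terms, the selection works roughly like this (the real code is JavaScript running in your browser; the server-list URL and JSON field names here are placeholders, but empty.php is the real ping endpoint):

```python
#!/usr/bin/env python3
# Rough sketch of the server selection: fetch the list, time one tiny
# request (empty.php) to each server, keep the fastest. The list URL
# and JSON field names are placeholders.
import json
import time
import urllib.request

SERVER_LIST = "https://librespeed.org/servers.json"  # placeholder URL

def ping_ms(base_url: str) -> float:
    """Time a single GET of the server's empty.php endpoint."""
    start = time.monotonic()
    with urllib.request.urlopen(base_url.rstrip("/") + "/empty.php", timeout=5):
        pass
    return (time.monotonic() - start) * 1000

with urllib.request.urlopen(SERVER_LIST, timeout=10) as r:
    servers = json.load(r)

results = []
for s in servers:
    try:
        results.append((ping_ms(s["server"]), s["name"]))
    except OSError:
        pass  # unreachable server: skip it (this is what you see in the network tab)

best_ms, best = min(results)
print(f"selected {best} at {best_ms:.0f} ms")
```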
I know, I’ve been using it since 2010, when it was still called CyanogenMod :)
I'd love to see more studios make puzzle-adventure games like the ones by Cyan.