

https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1339
Everything regarding enforcement is in its early stages, but what they’re aiming for is much more specific than chat control and is based on existing wording in the Digital Services Act.
There’s a disclaimer in the readme: https://github.com/juanfont/headscale/?tab=readme-ov-file#disclaimer
The maintainer whose time Tailscale contributes happens to be the lead developer by commit count at the moment.
They also had a major ass security issue the other day that a security company should not be able to get away with: assuming everyone with access to an email domain trusts each other, unless it’s a freemail domain they already know about. And it was by design, “to reduce friction”.
I don’t think a security company where an intentional decision like that can pass through design, development and review can make security products that are fit for purpose. This extends to their published client tooling as used by Headscale, and to some extent to the Headscale maintainer hours contributed by Tailscale (which are significant, and probably also the first thing to go if the company goes down the usual IPO enshittification path).
Not them, but between those two I’d recommend Kanboard if you’re going to be the only user. It’s a far lighter and easier-to-administer piece of kit that has everything you’d want from a fancy task list, but not much more. WeKan is rather heavy software, but it does have a few features that are probably quite important for large team use.
PUID is indeed handled inside the container itself: it’ll first run a container-provided script as whatever the container’s UID 0 happens to be, which then drops to whatever $PUID happens to be inside the container. user= is enforced by Podman itself before the container starts, but Podman will still run as root in that setup. That means Podman is running “rootful”, while if you started the container manually as $uid using the regular Podman CLI, it would be “rootless”. That is a major difference in a lot of respects, including security, and you can find quite a bit of documentation on the differences between those operating modes online; it wouldn’t fit in a comment. Rootless is generally considered the better mode, though there are some things that still require a rootful container.
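As a rough illustration of the difference (image name and UID are just examples):

```
# Rootful: Podman itself runs as root; --user only changes the UID inside the container
sudo podman run --rm --user 1000:1000 docker.io/library/alpine id

# Rootless: Podman runs entirely as your own unprivileged user
podman run --rm docker.io/library/alpine id
```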
In the upcoming NixOS 25.05 or current unstable, there are some tools you can use to run containers rootless as another user more easily, using a new $name.podman.user = ""; setting. From what I understand, they’ll still be root-managed systemd system services that require sudo to operate, but that means privileges get dropped by systemd before running Podman, instead of by Podman before running the container. This stuff is recent and I haven’t used it, I just happen to know it exists. Relevant nixpkgs commit if you wanna dig into it yourself: https://github.com/NixOS/nixpkgs/commit/7d443d378b07ad55686e9ba68faf16802c030025
FWIW, your domain will most likely eventually get used by spammers, and then it’ll be an endless string of somewhat expected but unpredictable failures from there onwards, with no actions you can take to reduce it. It’s good to keep an eye on what comes in, but I wouldn’t invest too much effort into failure alerting.
Borg or the like with ‘hardcoded’ plaintext/regularly full-disk-encrypted key is acceptable. Someone that has your unencrypted private key sitting on your server has almost certainly already obtained access to the entire set of data you’re backing up, with the backup key itself only meaningfully guarding access to older backups.
The more important thing is to securely keep extra copies in case the server fails. I keep mine in a group in my password manager, one per repo.
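If you’re using Borg, one way to get that extra copy is to export the repo key so it can live in the password manager too (repo path is just an example):

```
# Export the repository key to a file you can stash somewhere safe
borg key export /path/to/repo /tmp/repo-key-backup

# Or a printable, paper-friendly version
borg key export --paper /path/to/repo
```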
(It’s a joke/reference, I guess it’s not 100% known though. My bad.)
I really do hate “I know what I have so you are going to pay whatever number I set” capitalism though, which is what they do here. These registrars figured out a loophole around the redemption grace period and are, from the start, set up to make you lose the domain and then spend significant money on a completely unfair auction where they have the power to plant fake bids, rather than paying the usual static redemption fees that aren’t that excessive.
Heartbreaking: The Worst Capitalist Practice You Know Just Accidentally Picked A Funny Target
You go to the settings and verify it. You don’t have to host anything, just verify that you own the domain via text file or DNS record and choose to set it as your handle. Bluesky’s ATProto has a couple extra layers of indirection and it’s very easy to get a custom handle as a result.
The downside of this setup is that running your own complete network is completely impossible. If you want to follow theonion.com, anyone can find did:plc:a4pqq234yw7fqbddawjo7y35 in the DNS without too much work. That’s the identifier for The Onion’s Bluesky account, and even if they swapped back to .bsky.social, that ID number would stay. But that DID tells you absolutely nothing about where the data is currently hosted.
So how do you figure that out? Well, you register it with https://plc.directory/ which is run by Bluesky and cannot currently be replaced. There’s fancy cryptography involved that makes it hard for them to spoof data, but they are perfectly capable of simply not giving out any data for any given DID.
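You can watch both steps happen yourself, roughly like this (the DNS TXT record is only one of the two handle-verification options, the other being a .well-known file on the domain):

```
# Handle -> DID, via the _atproto TXT record
dig +short TXT _atproto.theonion.com
# "did=did:plc:a4pqq234yw7fqbddawjo7y35"

# DID -> DID document (including where the data is currently hosted), via Bluesky's directory
curl https://plc.directory/did:plc:a4pqq234yw7fqbddawjo7y35
```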
I don’t have Obsidian around, but this has been happening elsewhere lately too, almost certainly because of this underlying Electron issue: https://github.com/electron/electron/issues/43819
Unfortunately there’s not much you can do about it. Electron decided to depend on functionality not yet in a released version, and that very interesting choice flows down to everything that updates their Electron on the regular.
Sorry, I’ve had a (self-imposed) busy week, but I have to admit, that also has me rather stumped. As far as I can tell, your second entry should work. If the device is visible in /dev/mapper under a name, it should be able to mount under that name.
The only thing I can think of is that some important module like the ext4 module might be missing somehow? You can get pretty confusing errors when that happens. Dracut is supposed to parse /etc/fstab for everything needed to boot, and maybe that’s not recognizing your root for some reason. dmesg might have some useful info at the end after you try to mount it. If that’s what’s happening, you could try adding add_drivers+=" ext4 " to your dracut.conf and regenerating the initramfs (the spaces are important!). But if that’s not it, then I’m probably out of ideas now.
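For reference, that would look roughly like this (the drop-in file name is just an example):

```
# /etc/dracut.conf.d/ext4.conf -- the spaces inside the quotes matter
add_drivers+=" ext4 "
```

followed by something like dracut --force to actually rebuild the image.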
I think you should check your root= line and add a rd.luks.uuid= to make it open it. Dracut will by default open the root FS as /dev/mapper/luks-abcdef... based on the LUKS container UUID. You can get that with cryptsetup luksUUID. /dev/mapper/root is just never going to show up unless you’ve assigned a custom name to it with the barely documented rd.luks.name, and I don’t see that in your setup. The cryptroot and cryptdm parameters aren’t used by Dracut either.

With all of that missing, it’s just gonna wait for that /dev/mapper/root to magically show up out of nowhere, without ever trying to open it.
A correct cmdline will probably look something along the lines of root=/dev/mapper/luks-<uuid> modules=sd-mod,usb-storage,ext4 rootfstype=ext4 rootflags=rw,relatime rd.luks.uuid=<uuid>, and once opening with a passphrase works, you can start to mess with rd.luks.key=/awesome.key (and re-add quiet when done debugging, if you want it that way).
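To fill in those <uuid> placeholders, and for the keyfile step later (the device path and the 1234-ABCD USB filesystem UUID below are just examples):

```
# Print the LUKS container UUID of the encrypted partition
cryptsetup luksUUID /dev/sda2
# Use that value for both rd.luks.uuid=<uuid> and root=/dev/mapper/luks-<uuid>

# Later, for the keyfile: dracut documents the form rd.luks.key=<keypath>[:<keydev>[:<luksdev>]],
# so a key at /awesome.key on the USB stick whose filesystem UUID is 1234-ABCD would be roughly:
#   rd.luks.key=/awesome.key:UUID=1234-ABCD
```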
ldconfig errors and the missing modules should be fine. musl’s ldconfig is just a bit different but also isn’t required in quite the same way. I don’t think you should need to mess with modules manually. I don’t think you’re using LVM’s userland for your setup, just all the device-mapper kernel modules. Dracut will pull all the necessary bits in for you if you’re setting it up for LUKS.
Dracut may have this functionality already built in via rd.luks.key, so a custom module would really only make sense if you’re trying to do more than that. You can probably get away with just using that if you just want it to work, but if you want to customize stuff:
I suspect your module is running well after the device is already supposed to be cryptsetup opened. The way the default crypt module handles it is by setting up udev configuration in a very early phase, and then having udev request the password a little bit later when it finds the device it’s trying to open, until all devices are ready. It’s a complex mechanism compared to Alpine’s straightforward script, but it’s much more flexible when it comes to ordering of things like RAID/network devices/LUKS/etc.
The result of that is that your code would have to run much earlier. There’s some documentation on how hooks work, and the builtin rd.luks.key / keydev handler runs at cmdline 10. That’s well before your pre-mount, and probably where you’d want to run your code. Based on a cursory inspection of the other code, you could either cryptsetup open it yourself if you use the name it expects (the rd.luks.name= cmdline parameter or luks-$luks_container_uuid), or you could use that /tmp/luks.keys mechanism (it’s a dracut-internal thing so you won’t find much documentation, but it lives in crypt-lib.sh, cryptroot-ask.sh and probe-keydev.sh).
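A minimal sketch of what the module’s build-time side could look like, assuming a module directory like /usr/lib/dracut/modules.d/91mykey/ and a hook script name of your own choosing (both are made-up examples):

```
# module-setup.sh (sourced by dracut when building the initramfs)
check() { return 0; }            # always include this module
depends() { echo crypt; }        # we rely on the builtin crypt module
install() {
    # run our script in the cmdline phase, same slot as the builtin keydev handling
    inst_hook cmdline 10 "$moddir/fetch-usb-key.sh"
    inst_multiple cryptsetup
}
```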
As for debugging, the cmdline manpage has a few decent enough options. rd.break=cmdline or similar can force a shell before Dracut goes through a specific phase of hooks. You should be able to manually test doing things similar to your script at that point.
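Something along these lines; the device nodes are placeholders you’d swap for whatever blkid shows you:

```
# kernel cmdline: stop before the cmdline hooks run
# (rd.break=pre-mount is a later stop where devices have usually settled)
rd.break=cmdline

# inside the emergency shell:
blkid                                     # is the USB stick / LUKS partition visible?
mkdir /keydev; mount /dev/sdX1 /keydev
cryptsetup open /dev/sdY2 luks-<uuid> --key-file /keydev/awesome.key
```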
You’d be looking for /usr/share/mkinitfs/initramfs-init. I’ve never customized that myself, but it looks like there’s already some support for a keyfile if you look for KOPT_cryptroot and check that block of code. That looks like it’s mostly set up for a keyfile embedded into the initramfs, but I guess it should be possible to replace that code with something that grabs the keyfile off a USB drive.
I suppose you’d make a copy of it, put it somewhere in /etc or wherever, and change the mkinitfs.conf to point to it. init="/etc/whatever/myinitramfs-init" should do the trick since the config file just gets sourced in. That said, you’re definitely heading into unknown territory here. It might be easier to just use Dracut or the like instead.
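Very roughly, with all paths as examples only (and the init= setting based on my reading of the config being sourced, not something I’ve tested):

```
# keep a private copy of the stock init and point mkinitfs at it
cp /usr/share/mkinitfs/initramfs-init /etc/mkinitfs/initramfs-init.custom
vi /etc/mkinitfs/initramfs-init.custom     # adjust the KOPT_cryptroot keyfile handling

# in /etc/mkinitfs/mkinitfs.conf:
#   init="/etc/mkinitfs/initramfs-init.custom"

mkinitfs                                   # rebuild the initramfs for the running kernel
```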
mkinitfs doesn’t support running custom shell hooks. mkinitfs is very, very, very bare-bones custom code, and the whole features concept exists only to pull extra files and kernel modules into the initramfs, not for extra logic. You’d either have to customize the init script itself (not impossible, it’s 1000 lines) and pass -i / set init= in the .conf, or install Dracut/Booster instead (which should “just work” if you apk add them, but I’ve had no need to do so).
All of the cool development-related Nix things like pinning a project to known-good library versions (for regression tests or otherwise) don’t really need you to run NixOS. If you like NixOS then it’s a perfectly usable distro for development work, but all of the powers come from Nix itself, and that can be installed anywhere you feel comfortable with.
The only real pro of running full NixOS is that everything you work on will test a relatively uncommon *nix setup by its nature. Things like developer-only scripts with hardcoded shebangs are more likely to break on NixOS than they would on a conventional Linux distro with Nix installed. That’s something potentially worth fixing as it might also hurt the developer experience on *BSD/Mac systems.
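For instance, the “pin to known-good library versions” trick needs nothing more than Nix installed on whatever distro you already run; a throwaway example, where the nixpkgs revision and the packages are placeholders you’d fill in:

```
# drop into a shell with dependencies from one specific nixpkgs commit,
# regardless of what the host system has installed
nix-shell -p python3 ffmpeg \
  -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz
```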
That already happens constantly and I’d consider this the consequence of it, rather than the cause. You can only issue so many vetoes before people no longer want to deal with you and would rather move on.
The recent week of Wayland news (including the proposal from a few hours ago to restate NACK policies) is starting to feel like the final attempt to right things before a hard fork of Wayland. I’ve been following wayland-protocols/devel/etc from the outside for a year or two and the vibes have been trending that way for a while.
Digging into the GitLab & related discussions, the main takeaway I got is that FFmpeg’s API supposedly meshes better with what Wine needs to provide to Windows code, simplifying things overall. GST is pretty heavy on asynchronous/background processing, which is normally something I’d consider good for media, but if the API you’re expected to implement is synchronous then I guess it only adds complexity.
The modern breed of CAPTCHAs is mostly only trying to verify that it’s a full-fat browser. undetected-chromedriver, camoufox, pydoll, patchright and a million other libraries/tools exist. Nothing’s perfect and it’s a cat & mouse game, but this single incident is a sample size of one as well.