Joined 1 year ago
Cake day: April 3rd, 2024

  • Jesus_666@lemmy.world to Linux@lemmy.ml: “Linux Users- Why?”
    27 days ago

    I run Garuda because it’s a more convenient Arch with most relevant things preinstalled. I wanted a rolling release distro because in my experience traditional distros are stable until you have to do a version upgrade, at which point everything breaks and you’re better off just nuking the root partition and reinstalling from scratch. Rolling release distros have minor breakage all the time but don’t have those situations where you have to fix everything at the same time with a barely working emergency shell.

    The AUR is kinda nice as well. It certainly beats having to manually configure/make obscure software myself.

    For the desktop I use KDE. I like the traditional desktop approach and I like being able to customize my environment. Also, I disagree with just about every decision the Gnome team has made since GTK3 so sticking to Qt programs where possible suits me fine. I prefer Wayland over X11; it works perfectly fine for me and has shiny new features X11 will never have.

    I also have to admit I’m happy with systemd as an init system. I do have hangups over the massive scope creep of the project but the init component is pleasant to work with.
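As an illustration of why the init component is pleasant to work with, here is a minimal, hypothetical service unit (the binary path and names are made up; real units often need more hardening options):

```ini
# Hypothetical example: /etc/systemd/system/myapp.service
[Unit]
Description=Example background service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the service can be started and enabled with `systemctl enable --now myapp` and its logs read via `journalctl -u myapp`.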

    Given that after a long spell of using almost exclusively Windows I came back to desktop Linux only after Windows 11 was announced, I’m quite happy with how well everything works. Sure, it’s not without issues but neither is Windows (or macOS for that matter).

    I also have Linux running on my home server but that’s just a fire-and-forget CoreNAS installation that I tell to self-update every couple months. It does what it has to with no hassle.


  • To quote that same document:

    Figure 5 looks at the average temperatures for different age groups. The distributions are in sync with Figure 4 showing a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution. What stands out are the 3 and 4-year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced.

    That’s what I referred to. I don’t see a total age distribution for their HDDs so I have no idea if they simply didn’t have many HDDs in the three-to-four-years range, which would explain how they didn’t see a correlation in the total population. However, they do show a correlation between high temperatures and AFR for drives after more than three years of usage.

    My best guess is that HDDs wear out slightly faster at temperatures above 35-40 °C, so if your HDD is going to die of an age-related problem it’s going to die a bit sooner if it’s hot. (Also note that we’re talking about average temperature, so the peak temperatures might have been much higher.)

    In a home server where the HDDs spend most of their time idling (probably even below Google’s “low” usage bracket) you probably won’t see a difference within the expected lifespan of the HDD. Still, a correlation does exist and it might be prudent to have some HDD cooling if temps exceed 40 °C regularly.


  • Hard drives don’t really like high temperatures for extended periods of time. Google did some research on this way back when. Failure rates start going up at an average temperature of 35 °C and become significantly higher if the HDD is operated beyond 40 °C for much of its life. That’s HDD temperature, not ambient.

    The same applies to low temperatures. The ideal temperature range seems to be between 20 °C and 35 °C.

    Mind you, we’re talking “going from a 5% AFR to a 15% AFR for drives that saw constant heavy use in a datacenter for three years”. Your regular home server with a modest I/O load is probably going to see much less in terms of HDD wear. Still, heat amplifies that wear.

    I’m not too concerned myself despite the fact that my server’s HDD temps are all somewhere between 41 and 44. At 30 °C ambient there’s not much better I can do and the HDDs spend most of their time idling anyway.


  • “Legally required”, so they’re seeing it in the local laws. Some countries require websites to disclose who operates them.

    For example, in Germany, websites are subject to the DDG (Digitale-Dienste-Gesetz, “digital services law”). Under this law they are subject to the same disclosure requirements as print media. At a minimum, this includes the full name, address, and email address. Websites operated by companies or for certain purposes may need to include considerably more.

    Your website must have a complete imprint that can easily and obviously be reached from any part of the website and is explicitly called “imprint”.

    These rules are meaningless to someone hosting a website in Kenya, Australia, or Canada. But if you run a website in Germany you’d better familiarize yourself with them.



  • Jesus_666@lemmy.world to xkcd@lemmy.world: “xkcd #3089: Modern”
    3 months ago

    Wasn’t there a Dark Age after Bronze? The one where everyone was scowling the whole time and the stories were so tryhard edgy you could use a typical Youngblood issue as a letter opener?

    (Basically the “pouches” era the sibling comments talk about. Rob Liefeld’s contributions to fashion will never be forgotten.)


  • Of course you wouldn’t use an existing database engine as the foundation of a new database engine. But you would use an existing database engine as the foundation of an ERP software, which is a vastly different use case even if the software does spend a lot of time dealing with data.

    If I want to build an application I don’t want to reimplement everything. That’s what middleware is for. The use case of my application is most likely not to speak a certain protocol; the protocol is just the means to what I actually want to do. There’s no reason for me to roll my own implementation from scratch and keep up with current developments except if I’m unhappy with all current implementations of that protocol.

    Of course one can overdo it with middleware (the JS world is rife with this) but implementing a communication protocol is one of the classic cases where it makes sense.
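A sketch of that point using Python’s standard library: both ends of HTTP are already implemented, so the application code only deals with its actual job (here: returning a greeting). Names like `GreetingHandler` are made up for the example:

```python
# The protocol is just the means: stdlib middleware speaks HTTP on both ends,
# and our logic never touches sockets, request parsing, or header encoding.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the application layer"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), GreetingHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    response_text = resp.read().decode()
print(response_text)

server.shutdown()
```

Rolling your own HTTP parser here would add maintenance burden for zero benefit, which is exactly the argument above.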






  • That does make encryption way less appealing to me. On one of my machines / and /home are on different drives and parts of ~ are on yet another one.

    I consider the ability to mount file systems in random folders or to replace directories with symlinks at will to be absolutely core features of unixoid systems. If the current encryption toolset can’t easily facilitate that then it’s not quite RTM for my use case.
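For the sake of illustration, here is the kind of layout flexibility meant above, sketched with a temp directory standing in for a second drive (all paths are hypothetical; a real setup would mount a block device instead of symlinking):

```shell
set -eu
tmp=$(mktemp -d)

# Pretend this directory lives on a second drive:
mkdir -p "$tmp/drive2/games"

# Replace a directory under ~ with a symlink onto that other filesystem:
mkdir -p "$tmp/home/user"
ln -s "$tmp/drive2/games" "$tmp/home/user/games"

# Data written through the symlink lands on the other drive:
echo "savegame" > "$tmp/home/user/games/save.txt"
cat "$tmp/drive2/games/save.txt"

# With a real second drive you would mount it there instead, e.g. (not run here):
#   mount /dev/sdb1 /home/user/games
```

Any encryption scheme that can’t transparently handle directories resolving onto other filesystems like this breaks that workflow.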


  • If you use a .local domain, your device MUST send its query to the mDNS multicast address (224.0.0.251 or FF02::FB) and MAY also ask a regular DNS server. Successful resolution without mDNS is not an intended feature but something that just happens to work sometimes. There’s a reason why the user interfaces of devices like Ubiquiti gateways warn against assigning a name ending in .local to any device.

    I personally have all of my locally-assigned names end with .lan, although I’m considering switching to a sub-subdomain of a domain I own (so instead of mycomputer.lan I’d have mycomputer.home.mydomain.tld). That would make the names much longer but would protect me against some asshat buying .lan as a new gTLD.
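To make the mDNS point concrete, here is what “asking the mDNS address” looks like at the wire level: the query is an ordinary DNS question, just sent to the multicast group instead of a unicast resolver. The hostname is a made-up example, and this sketch only builds the packet rather than sending it:

```python
# Sketch: build an mDNS query for an A record (RFC 1035 wire format).
# In real resolution this packet goes via UDP to 224.0.0.251:5353 and the
# multicast response is parsed; unicast DNS servers are never involved.
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353

def build_mdns_query(hostname: str) -> bytes:
    """Encode a DNS A-record question for `hostname`."""
    # Header: id=0, flags=0 (standard query), 1 question, 0 other records
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # Question name: length-prefixed labels, null-terminated
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    qname += b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + struct.pack("!HH", 1, 1)

packet = build_mdns_query("mycomputer.local")
print(len(packet))
```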




  • You are insanely naive for saying this. If you’d used non-corporate email servers, like the much smaller email providers out there (which are basically extinct at this point), you’d know just how wrong this actually is. Most smaller email providers are blocked or limited by the big ones, and when your provider is blocked your mail will never reach the inboxes of people on the big servers, not even their spam folders. It won’t bounce back to you either, so it just goes into the void.

    Most email these days goes through the almighty trinity: Gmail, Outlook, and Yahoo, with a few on Hotmail and AOL. And while there are a few smaller companies out there like Proton, when it comes to something that isn’t a company or is self-hosted you can expect a lot of problems with domains being blacklisted, IPs being blacklisted, or both. And it’s actually much worse than defederation.

    I’ve been using a self-administered mail server (running on a root server at a major hosting provider) as my main email provider for well over a decade. I think I’ve encountered one website where that actually led to issues. Heck, the server once got on Spamhaus’s bad side for a week and once we were off the list everything was back to normal.

    Self-hosted mail works very well once you’ve jumped through all of the appropriate hoops (DKIM, SPF, etc.). Sure, running a mail server out of your bedroom probably won’t work very well but if you’re with any kind of reputable hosting provider you should be fine.
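For reference, those hoops mostly amount to publishing a few DNS TXT records. A sketch of the kind of records involved, using a made-up domain, a truncated DKIM key, and illustrative policy values (real values depend on your server’s IP and signing setup):

```
; Hypothetical zone-file fragment for example.org's outgoing mail
example.org.                 IN TXT "v=spf1 mx ip4:203.0.113.25 -all"
mail._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
_dmarc.example.org.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

SPF names which hosts may send for the domain, DKIM publishes the public key used to verify message signatures, and DMARC tells receivers what to do when the first two fail.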

    You’re beginning to realize why the decision was made to limit spam and illegal shit rather than cater to the people who want the whole federated world instead of what they’re allowed access to. Ultimately it is better for everyone if the depraved shit and spam gets blocked than it is for the people who want the whole world to have their way. If you want the world, go to Nostr; you’ll learn why most people do not want the world.

    The problem is that defederation leads to confusing situations. Being told about a response to your post/comment/toot and then finding nothing when you look is bad UX. Better UX would be a notice that what you’re looking for comes from a defederated instance and can’t be viewed – but that’s obviously impossible because your instance doesn’t even know anything is there.

    Not wanting all the content on your instance is perfectly reasonable. But the way defederation works exposes details of the underlying technology to the user in a way many users don’t want to have to deal with, serving as an impediment to growing the fediverse.

    It’s not easy to keep unwanted stuff off your instance while also being user-friendly about it. That’s why I called it tricky.



  • Honestly, this suggests to me that the ability to defederate might be a bug rather than a feature.

    If my instance doesn’t talk to the instance at foobar.example, I might be unable to see (parts of) relevant discussions. This is worse for a microblog like Mastodon than it is in the threadiverse but it’s still something to keep in mind even over here. And most non-enthusiasts don’t want to have to do that.

    Email is an example of a successful federated platform and it barely has defederation support. In general, all mail servers can talk to all other mail servers as long as they provide the right look-at-me-I’m-legitimate signaling. That makes email easy to use for regular people no matter if they use Gmail or their cousin’s self-hosted mail server.

    Perhaps that is how at least the non-threaded fediverse should work… However, that would also mean that some instance hosting heinous shit would keep being visible to everyone. It’s a tricky problem.