• 4 Posts
  • 27 Comments
Joined 5 months ago
Cake day: November 8th, 2025




  • Then why did they lock the fucking thread as controversial if it was such an innocent change?

    It’s paving the way to implementing a Californian law that could very easily end up meaning ID verification for everything.

    They could have done nothing, at zero cost, but instead they decided to go to multiple projects, at this specific time, which is obviously no coincidence, and actively work on implementing this on Linux. I guess “Contributed to systemd” on their CV was more valuable than resisting the USA taking control of the whole internet and ending all sense of privacy.


  • Wow, that’s an insane level of bootlicking. It was completely free for them to do absolutely nothing about this nonsense law and give the middle finger if asked by the US.

    I didn’t care before, but it turns out the systemd haters were onto something for a long time. Fuck these owners for even considering this, and for locking the PR to avoid valid criticism. I hope all the contributors create a fork, jump ship, and never let the previous owners commit a single line of code to it.







  • A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.

    That’s still on the human who opened the PR without making the slightest effort to test the AI’s changes, though.

    I agree there should be a lot of caution overall, I just think the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legit but are actually crap, but the root cause here is humans doing this for GitHub rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect users who repeatedly make zero-effort contributions like that and ban them.








  • Quoted from the Arch wiki:

    The current situation of anti-malware products on Linux is inadequate due to several factors:
    
        - Limited Variety: Compared to Windows, there are fewer users/clients resulting in limited interest for companies to develop products for Linux.
    
        - Complacency: Many believe Linux is inherently secure, leading to a lack of awareness and focus on malware protection. This creates a gap in proactive defense mechanisms.
    
        - Lack of Features: Existing tools often lack advanced features which are common in Windows anti-malware products, making them less effective on Linux.
    
    This is especially bad because the amount of malware on Linux is increasing, just as the possible attack surface is, due to the growing number of Linux-based servers and IoT devices.
    Currently on Linux one of the few existing and actively developed anti-malware solutions is ClamAV.
    

    There is no inherent mechanism that makes your system immune to viruses just because it’s Linux. The claim mostly persists because, with Linux being a small percentage of desktop users, it’s not yet common for attackers to target Linux systems: it’s not worth the hassle when you can target a much larger audience on Windows that is, on average, much less tech-literate too.

    But as Linux popularity grows, viruses will start popping up on Linux as well, so it’s never a bad idea to use ClamAV. You are already more protected when you use package repositories instead of downloading executables from websites like you do on Windows, and Linux has better file system permissions, but you still need to be careful about what you download and run.
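    For anyone wanting to try it, here’s a minimal sketch of routine on-demand scanning with ClamAV’s standard CLI (package name and exact commands may differ per distro; adapt as needed):

    ```shell
    #!/bin/sh
    # Sketch: occasional manual ClamAV scan; assumes the clamav package
    # is installed from your distro's repositories.
    set -eu

    if command -v clamscan >/dev/null 2>&1; then
        # Refresh the virus signature database first
        # (may require root, or may already run as a service/timer)
        freshclam || true
        # Recursively scan the home directory, printing only infected files
        clamscan --recursive --infected "$HOME"
    else
        echo "clamscan not found; install clamav from your package manager"
    fi
    ```

    Note that `clamscan` exits non-zero when it finds something, so drop `set -e` if you want the script to continue past a detection.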




  • It’s one thing for a company to train a model on your code and then create and sell a better copy of what you made (which I think is unrealistic if their codebase depends on AI slop code). It’s another thing for an AI to provide access to public information (the code) that you previously monetized by helping people understand it better. I really don’t see how that monetization model would have worked regardless of AI existing; at some point there will be enough people who understand the code to build documentation of their own for free. I’m not a lawyer, but I don’t see how this violates a GPL license either.

    The only thing FOSS projects have to be wary of about AI is slop pull requests, but code review still had to be done before LLMs existed anyway.

    Also, my two cents on the threads regarding Tailwind: what FOSS devs who want to make a living doing what they do should really hate is not AI making it harder for them to monetize their projects in odd ways, but capitalism requiring them to monetize everything they do in order to live while doing it. FOSS devs should be able to hand out their creations to society without worrying about putting food on the table; their work is no less valuable than that of any engineer working for the big corporations.