

I think you’re thinking of SerenityOS (although it isn’t actually a Linux):
I’m still using an iPhone mini and I haven’t experienced any bad layouts, broken websites, or any difficulty like that. It has the same resolution as the biggest iPhone I’ve ever had (the iPhone X), so things are smaller, which would make it a poor fit for someone with poor vision, but for me it’s an absolutely perfect phone. It’s frustrating to know that the perfect phone for me could easily exist, and yet Apple refuses to make it. I’ll be stuck with phones I don’t like for the rest of my life, it seems.
Back in the olden days, if you wrote a program, you punched machine code onto punch cards, which were fed into the computer and executed directly on the CPU. The machine was effectively yours while your program ran; then you (or more likely, someone who worked for your company or university) noted your final results, things would be reset, and the next stack of cards would go in.
Once computers got fast enough, though, it was possible to have a program replace the computer operator, an “operating system”, and it could even interleave execution of programs to effectively run more than one at the same time. However, now the programs had to share resources; they couldn’t just have the whole computer to themselves. The OS helped manage that: a program now had to ask for memory, and the OS would track what was free and what was in use, as well as interleaving programs to take turns running on the CPU. But if a program messed up and wrote to memory that didn’t belong to it, it could screw up someone else’s execution and bring the whole thing crashing down. And in some systems, programs were given a turn to run and were then supposed to return control to the OS after a bit, but it was basically an honor system, and the problem with that is likely clear.
Hardware and OS software added features to enforce more order. OSes got more power, and help from the hardware to wield it. Now, instead of relying on programs to politely give back control, the hardware would enforce limits, forcing control back to the OS periodically. And when it came to memory, the OS no longer handed out addresses matching the RAM for the program to use directly; instead it could hand out virtual addresses, with the OS tracking the relationship between each virtual address and the real location of the data, and the hardware providing a Memory Management Unit (MMU) that can store translation tables and do the virtual-to-physical translation on its own, returning control to the OS when it doesn’t know a mapping.
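To make that concrete, here’s a toy sketch of the lookup in C. The flat table and the sizes are made up for illustration; real MMUs use multi-level tables and cache hot entries in a TLB, but the split-and-look-up idea is the same:

    #include <stdio.h>
    #include <stdint.h>

    enum { PAGE_SIZE = 4096, NUM_PAGES = 16 };

    /* Toy single-level page table: maps a virtual page number to a
     * physical page number, or -1 if there is no mapping. */
    static int page_table[NUM_PAGES];

    /* Split a virtual address into page number + offset, look the page
     * up, and rebuild the physical address. Returns -1 on a missing
     * mapping, which is where real hardware would trap to the OS. */
    static long translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr / PAGE_SIZE;  /* which virtual page */
        uint32_t offset = vaddr % PAGE_SIZE;  /* position inside it */
        if (vpn >= NUM_PAGES || page_table[vpn] < 0)
            return -1;                        /* no mapping: fault */
        return (long)page_table[vpn] * PAGE_SIZE + offset;
    }

    int main(void) {
        for (int i = 0; i < NUM_PAGES; i++) page_table[i] = -1;
        page_table[0] = 3;  /* virtual page 0 lives in physical page 3 */
        page_table[1] = 7;  /* virtual page 1 lives in physical page 7 */

        printf("%ld\n", translate(42));    /* 12330 = 3*4096 + 42   */
        printf("%ld\n", translate(5000));  /* 29576 = 7*4096 + 904  */
        printf("%ld\n", translate(9000));  /* -1: unmapped, "fault" */
        return 0;
    }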
This allows things like swapping, where a part of memory that isn’t being used can be taken out of RAM and written to disk instead. If the program tries to read an address that was swapped out, the hardware catches that it’s a virtual address it doesn’t have a mapping for, wrenches control from the program, and instead runs the code the OS registered for handling memory faults. The OS can see that this address has been swapped out, swap it back into real RAM, tell the hardware where it now is, and then control returns to the program. The program’s none the wiser that its data wasn’t there a moment ago, and it all works. If a program messes up and tries to write to an address it doesn’t have, it doesn’t go through because there’s no mapping to a physical address, and the OS can instead tell the program “you have done very bad and unless you were prepared for this, you should probably end yourself” without any harm to others.
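Continuing the toy model, the swap-in path looks roughly like this. Everything here (the “disk”, the way a free frame is picked, the struct layout) is invented for illustration; real kernels are vastly more involved:

    #include <stdio.h>
    #include <string.h>

    enum { PAGE_SIZE = 8, RAM_FRAMES = 2, NUM_PAGES = 4 };

    /* Each virtual page is either resident in a RAM frame or swapped
     * out to a disk slot. A sketch of the bookkeeping, nothing more. */
    struct pte { int in_ram; int frame; int disk_slot; };

    static char ram[RAM_FRAMES][PAGE_SIZE];
    static char disk[NUM_PAGES][PAGE_SIZE];
    static struct pte table[NUM_PAGES];

    /* The "page fault handler": copy the page back from disk into a
     * RAM frame and fix up the mapping, then let the access retry. */
    static void swap_in(int vpn, int free_frame) {
        memcpy(ram[free_frame], disk[table[vpn].disk_slot], PAGE_SIZE);
        table[vpn].in_ram = 1;
        table[vpn].frame  = free_frame;
    }

    static char read_byte(int vpn, int offset) {
        if (!table[vpn].in_ram) {      /* no mapping: "page fault" */
            printf("fault on page %d, swapping in...\n", vpn);
            swap_in(vpn, 0);           /* pretend frame 0 is free */
        }
        return ram[table[vpn].frame][offset];
    }

    int main(void) {
        /* Pretend virtual page 1 was swapped out to disk slot 2. */
        strcpy(disk[2], "hello");
        table[1] = (struct pte){ .in_ram = 0, .frame = -1, .disk_slot = 2 };

        printf("%c\n", read_byte(1, 0));  /* faults, swaps in: 'h' */
        printf("%c\n", read_byte(1, 1));  /* already resident: 'e' */
        return 0;
    }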
Memory is handed out to programs in chunks called “pages”, and the hardware has support for certain page size(s). How big they should be is a matter of tradeoffs; since pages are indivisible, pages that are too big will result in a lot of wasted space (if a program needs 1025 bytes on a 1024-byte page size system, it’ll need 2 pages even though that second page is going to be almost entirely empty), but lots of small pages mean the translation tables have to be bigger to track where everything is, resulting in more overhead.
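The arithmetic behind that example is just ceiling division; a quick check in C:

    #include <stdio.h>

    int main(void) {
        unsigned long need  = 1025, page = 1024;
        unsigned long pages = (need + page - 1) / page;  /* ceiling division */
        unsigned long waste = pages * page - need;       /* unused tail */
        printf("%lu pages, %lu bytes wasted\n", pages, waste);  /* 2, 1023 */
        return 0;
    }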
This is starting to reach the edges of my knowledge, but I believe what this is describing is that RISC-V and ARM chips have the ability for the OS to say to the hardware “let’s use bigger pages than normal, up to 64K”, and the Linux kernel is getting enhancements to actually use this functionality, which can come with performance improvements. With bigger pages, the MMU can cover the same amount of memory with fewer entries, doing more of the work directly and relying on the OS less, for example.
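If you’re curious what your own system uses, you can ask the kernel (this is standard POSIX, so it should work anywhere):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Report the page size the running kernel is using. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }

On most x86-64 Linux machines this prints 4096; on a kernel built for 64K pages it would print 65536. (getconf PAGESIZE from a shell gives the same answer.)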
Archive Team often uses the Internet Archive to share the things they save, and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. The Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is going to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have some other projects, like trying to brute force all those awful link shorteners so that when they inevitably die, people can still figure out where each link should’ve pointed.
The .bin and .cue files are the parts of the actual game disc that you want. The .bin file contains almost all of the data, and the .cue file contains some extra information about the structure of the CD. All the rest is Internet Archive stuff (and an image of the game cover, of course).
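If you’re curious, the .cue file is just a small text file; a minimal one looks something like this (the filename and the track mode vary from disc to disc):

    FILE "game.bin" BINARY
      TRACK 01 MODE2/2352
        INDEX 01 00:00:00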
To open it, you can convert it to a .iso disk image instead, which any Linux distribution can open as if it were a real CD. This blog post talks about how to do that. The last paragraph about mount you can probably replace with double-clicking the .iso file in the GUI, I would guess.
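For what it’s worth, the usual tool for this conversion is bchunk; assuming your files are named game.bin and game.cue, something like this should do it:

    bchunk game.bin game.cue game

which should produce a file like game01.iso (one per track on the disc).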
It’s very good for navigating and editing text quickly, and fantastic for situations like “I need to do the same thing 100 times” thanks to things like macros. Coders are frequently opening a big, complex file, jumping around it a lot, changing big and small parts of it, and doing repetitive tasks. For something more like writing out thoughts for an email, editing them slightly, then being done with that text forever, there aren’t as many advantages; you’re spending most of your time in “insert” mode, which is effectively “normal text editor that people are used to” mode. That said, it’s one of those things where, once you do get used to it and start to enjoy it instead of being frustrated by how different it is, you start wanting it everywhere you have to type anything.
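As a taste of the macro thing: in vim you record keystrokes into a register and replay them. Say you wanted to append a semicolon to each of 100 lines (the text after the keystrokes is annotation, not something you type):

    qa          start recording into register a
    A;<Esc>j    append “;” to the end of the line, then move down one line
    q           stop recording
    100@a       replay the recording 100 times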