
It’s weird I said “men” here. In the US women usually change their names once and men almost never. But adoption, taking a grandmother’s name, and going by initials are all changes I’ve seen and understand.
The story reads as totally normal. Not many men change their names that much, but “I want the same last name as my grandma who I love very much and not my second dad who I don’t talk to” feels right and good and maybe even sweet. The man’s a monster. This is fine.
In projects I work on we use NOCOMMIT markers for these blockers, and they fail the build. It’s honestly lovely to have a way to leave yourself a note that the build catches for you.
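I don’t have our actual hook handy, but the shape of it is roughly this: a small script that scans tracked files for the marker and exits nonzero, so a pre-commit hook or an early CI step refuses to go further. A minimal sketch, with the script name and layout made up:

```python
#!/usr/bin/env python3
"""Fail the build if any tracked file contains the blocker marker.

Hypothetical sketch: wire it in as a pre-commit hook or an early CI step.
"""
import subprocess
import sys

# Built from two pieces so this checker doesn't flag its own source.
MARKER = "NO" + "COMMIT"


def tracked_files() -> list[str]:
    # Ask git for every tracked file; skips build output and .git itself.
    out = subprocess.run(
        ["git", "ls-files"], capture_output=True, text=True, check=True
    )
    return out.stdout.splitlines()


def main() -> int:
    hits = []
    for path in tracked_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if MARKER in line:
                        hits.append(f"{path}:{lineno}: {line.strip()}")
        except OSError:
            continue  # deleted or unreadable paths aren't our problem here
    if hits:
        print(f"Found {MARKER} markers; refusing to build:", file=sys.stderr)
        print("\n".join(hits), file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```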
Many years ago the Unicode Consortium held a fundraiser where you sponsored an emoji. Someone at my company sponsored one and posted about it to the internal mailing list. Long story short, a couple dozen of us sponsored stuff, the company paid us back, and they wrote a cute blog post. Cheap marketing. Felt good.
Try your local library.
I think it was the EPA’s National Computer Center. I’m guessing based on location, though.
When I was in high school we toured the local EPA office. They had the most data I’ve ever seen accessible in person. I’m going to guess how much.
It was a dome with a robot arm that spun around and grabbed tapes. It was 2000 so I’m guessing 100 GB per tape. But my memory on the shape of the tapes isn’t good.
Looks like tapes were four inches tall. Let’s round up to six inches for housing and easier math. The dome was taller than me. Let’s go with 14 shelves.
Let’s guess a six-foot shelf diameter. So, like 20 feet of circumference. Tapes were maybe 0.8 inches a pop. With space between for robot fingers and stuff, let’s guess 240 tapes per shelf.
That comes out to about 300 terabytes. Oh. That isn’t that much these days. I mean, it’s a lot. But these days you could easily get that in spinning disks. No robot arm seek time. And with modern tape capacities it’d be more like 60 petabytes.
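For anyone checking the envelope math, it goes roughly like this; every number is a guess from memory, and the 18 TB figure for a modern tape is my assumption (something like an LTO-9 cartridge):

```python
import math

# Dome estimate; all inputs are guesses.
shelves = 14
shelf_diameter_ft = 6
circumference_in = math.pi * shelf_diameter_ft * 12       # ~226 in, i.e. "like 20 feet"
tape_width_in = 0.8
tapes_per_shelf = 240                                      # circumference / width, minus robot-finger room
tapes = shelves * tapes_per_shelf                          # 3,360 tapes

tape_capacity_gb_2000 = 100                                # guess for a circa-2000 tape
total_tb_2000 = tapes * tape_capacity_gb_2000 / 1000       # ~336 TB, "about 300 terabytes"

tape_capacity_tb_modern = 18                               # assumption: roughly an LTO-9 cartridge
total_pb_modern = tapes * tape_capacity_tb_modern / 1000   # ~60 PB

print(f"{tapes} tapes, ~{total_tb_2000:.0f} TB then, ~{total_pb_modern:.0f} PB now")
```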
I’m not sure how you’d transfer it these days. A truck, presumably. But you’d probably want to transfer a copy rather than disassemble it. That sounds slow too.
I’m not looking at the man page, but I expect you can limit it if you want, and that the parser for that parameter knows about these suffixes. If it were me, it’d be one parser for byte-size values, and it’d work for chunk size and limit and sync interval and whatever else dd takes.
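Something like this is what I mean by one shared parser. This is not dd’s actual code, and the suffix table is just the usual binary K/M/G/T convention, simplified:

```python
# Hypothetical shared parser for byte-size values ("512", "64K", "1M", "2G").
_SUFFIXES = {
    "": 1,
    "K": 1024,
    "M": 1024 ** 2,
    "G": 1024 ** 3,
    "T": 1024 ** 4,
}


def parse_size(text: str) -> int:
    """Turn a human-readable size string into a byte count."""
    text = text.strip().upper()
    # Check longer suffixes first, in case multi-character ones get added later.
    for suffix, multiplier in sorted(_SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if suffix and text.endswith(suffix):
            return int(text[: -len(suffix)]) * multiplier
    return int(text)  # plain byte count, no suffix


# The same parser then backs every size-shaped parameter:
block_size = parse_size("1M")    # 1048576
copy_limit = parse_size("2G")    # 2147483648
sync_every = parse_size("64K")   # 65536
```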
It’s also probably limited by the width of whatever counter tracks progress. I think dd reports the number of bytes copied at the end even when you don’t cap it.
I used Gerrit and Zuul a while back at a place that really didn’t want to use GitHub. It worked pretty well, but it took a lot of care and maintenance to keep it all ticking along for a bunch of us.
It has a few features I loved that GitHub took years to catch up to. Not sure there’s a moral to this story.
Windows -> RedHat -> Windows -> Gentoo -> Ubuntu -> RHEL -> Ubuntu -> Debian -> Arch
We’re two years out from the API apocalypse. I think. That’s how I got here.