What caused the jump in the first place? I only just opened an account myself because the folks who run my home instance lemmy.ca started up pixelfed.ca about a week ago and I decided to check it out.
Oh I see. Syncthing is more for keeping two directories synced between machines. I guess in a sense it’s more of a Dropbox competitor, while croc and sharr are for one-off file transfers. For a while, I ran an ownCloud server at work for internal use. It was pretty good for file syncing, but it required some port forwarding through the router. The solutions mentioned here all seem to have a public host somewhere to eliminate that need.
Cool. I’ve been using croc lately to move files around, but this differs in that they are supplying temporary off-site storage so that the host doesn’t have to remain online.
That’s pretty sad when you have to pirate a game just to make it playable. I probably shouldn’t admit to having cracked games in the past for this reason. Yarrr!
This is actually the first I’ve heard of Denuvo, but if Wikipedia’s page on it is any indication, it sounds pretty awful! Anyway, I’ve checked the dealbreaker box on the survey.
I suppose the same could be said on the lemmy side. There’s no reason someone couldn’t write a lemmy app that lets you do what an RSS client does in terms of only showing content from a selected subgroup of communities.
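To make that concrete, the core of such an app could be as simple as the sketch below: pull the newest posts from a hand-picked group of communities and merge them into one feed. I’m going from memory of Lemmy’s v3 HTTP API here (the `/api/v3/post/list` endpoint, its `community_name` parameter, and the response field names), and the instance and community names are just examples, so treat the details as assumptions.

```python
# Rough sketch: pull the newest posts from a hand-picked "group" of
# communities and merge them into one feed. Endpoint, parameters, and field
# names are from memory of Lemmy's v3 API -- check your instance's docs.
import requests  # third-party: pip install requests

INSTANCE = "https://lemmy.ca"                              # example instance
TECH_GROUP = ["technology", "programming", "linux_gaming"]  # made-up grouping

def fetch_group(communities, limit=10):
    posts = []
    for name in communities:
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"community_name": name, "sort": "New", "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        posts.extend(resp.json()["posts"])
    # newest first across the whole group (ISO timestamps sort lexicographically)
    posts.sort(key=lambda p: p["post"]["published"], reverse=True)
    return posts

for p in fetch_group(TECH_GROUP):
    print(p["community"]["name"], "::", p["post"]["name"])
```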
You raise a good point that it would be nice to have more control over which group of communities you are drawing from at a given time. (Is there a way to group subscriptions and switch between them?) It’s a bit disconcerting to see 5 tech headlines and then suddenly something about the war in Ukraine or whatever. It jars my train of thought. With an RSS client, you can group feeds however you want.
That said, my experience with RSS readers is not quite so idyllic. In the end, rather than having nicely partitioned feed groups by topic, I wind up having to separate the ones that produce content frequently but with a poor signal-to-noise ratio from those that post once in a while but are generally worth your time. With something like lemmy, people are helping you do the work of finding the more interesting content from that site that posts every 10 minutes.
Anyways, did I miss anything?
I think the big problem in link aggregation is how to sort/prioritize content for the end user. RSS does not provide a way to do this, nor should it as far as I’m concerned. It should simply be about public content being tagged in a standardized way for any app to come along and organize it using whatever algorithm.
A simple RSS reader has the problem that the more prolific sites tend to flood your feed and make it tedious to scroll through miles of links. Commercial news portals try to learn your tastes through some sort of machine-learning algorithm and direct content accordingly. This sounds like a good idea in theory, but it tends to build echo chambers around people that reinforce their biases, and that hasn’t done a lot of good for the world, to put it mildly.
The lemmy approach is to use one of a number of sorting algorithms built atop a crowd-sourced voting model. It may not be perfect, but I prefer it to being psychoanalyzed by an AI.
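To give a flavour of what I mean, here’s a toy version of a vote-based “hot” sort: a post’s net votes decayed by its age, so a fresh post with decent votes can outrank an older one with more. The constants below are invented for illustration; the real formula lives in the Lemmy source.

```python
# Toy illustration of crowd-sourced ranking: log of the net votes, decayed by
# the post's age. Constants are made up; the real formula is in Lemmy's code.
import math
from datetime import datetime, timedelta, timezone

def hot_rank(upvotes: int, downvotes: int, published: datetime) -> float:
    score = max(upvotes - downvotes, 1)                    # crowd-sourced signal
    hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return math.log10(score + 2) / (hours + 2) ** 1.8      # log votes, decay by age

now = datetime.now(timezone.utc)
print(hot_rank(5, 0, now))                          # brand new, modest score
print(hot_rank(500, 20, now - timedelta(days=1)))   # popular but a day old
```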
Btw there was a post from about a month ago where someone was offering to make any RSS feed into a community. I’ve subscribed to a few of them and it’s actually pretty awesome.
Didn’t kbin have a separate mechanism for supporting a post in a more public way? I can’t remember how that worked now, but it was in addition to the regular voting I think?
The thing about the MPW Shell is that it was sort of the only game in town if you actually wanted a command line with the classic Mac OS. (There’s an awesome little emulator called SheepShaver if you ever want to explore it, btw.) Well, I suppose there was A/UX. I thought it was a miracle when that came out. You have to realize that in those early days, a good chunk of the operating system itself was actually baked into ROM. (You had to do desperate things to squeeze a GUI out of such limited resources as existed back then!) So to this day I have no idea how they managed to spin off a 'nix despite that.
Anyways. I wonder, if you made some sort of template format today, to what extent you could write some sort of conversion tool that would scrape a man page or whatever to rough it in, and then you could tweak it to get what you want? man pages aren’t super standardized in their format I guess, so it’s probably more trouble than it’s worth. I like to use Python’s `argparse` when rolling out scripts myself, and its `--help` format is pretty rigid given that it’s algorithmically generated. Might be more plausible with something like that? I had a quick look just now to see if you can drill down into the `argparse.ArgumentParser` class itself to pull out the info more directly, but it seems a rather opaque thing that doesn’t expose public APIs for that. Oh well…
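For what it’s worth, there is a back door: `ArgumentParser` keeps every registered option in a private `_actions` list, so something like the sketch below can dump them as structured data to seed a template. It’s undocumented, though, so it could change between Python releases, and the demo parser here is made up.

```python
# Sketch only: poke at argparse's *private* _actions list to dump a parser's
# options as structured data a dialog/template generator could consume.
# _actions is undocumented, so this could break in a future Python release.
import argparse
import json

parser = argparse.ArgumentParser(description="demo tool")   # invented example parser
parser.add_argument("-v", "--verbose", action="store_true", help="chatty output")
parser.add_argument("-o", "--output", metavar="FILE", help="where to write results")
parser.add_argument("paths", nargs="+", help="files to process")

def describe(p: argparse.ArgumentParser) -> list[dict]:
    spec = []
    for action in p._actions:                 # private attribute, see caveat above
        spec.append({
            "flags": action.option_strings,   # empty list for positionals
            "dest": action.dest,
            "help": action.help,
            "takes_value": action.nargs != 0, # store_true/-h style flags have nargs == 0
            "default": action.default,
        })
    return spec

print(json.dumps(describe(parser), indent=2))
```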
This reminds me of something from my ancient past. Back in the early-ish days of Apple, there was a development system called MPW (Macintosh Programmer’s Workshop) which included its own little kludgy shell.
The weird thing about it though was that while you could enter commands on the command line like in any shell, you could prefix them with the word `commando` (presumably a portmanteau of “command” and “window”) and this window would pop up showing various buttons, checkboxes, etc. corresponding to the command-line options. When you ok’d the window, it would generate the command line for you.
I’m rather hazy about how all this worked, but I think there was some sort of template language to define the window layout if you wanted to add commando support for your own tool? And presumably, as you say, you could restrict what’s possible with the window interface as you deemed fit?
I have some vague recollection of a hacker convention from the 90s where people were challenged to come up with wireless networking in a one night coding marathon. (This was long before wifi.) So some dude used speech synthesis to get a machine to say “one zero one one zero…” and another to assemble the binary data into packets using speech recognition. It was hilarious, and the dev had to keep telling people to shut up and stop laughing so he could complete the demo.
But anyways… what I’m trying to suggest here is you might have the best luck if your notification sounds contain spoken commands and you use speech recognition to trigger scripts? That tech is pretty mature at this point.
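If it helps, here’s a throwaway sketch of what that could look like: keep listening, and when a known spoken phrase turns up in a notification sound, run the matching script. It assumes the third-party SpeechRecognition package (plus PyAudio for microphone input) and an offline engine like pocketsphinx; the phrases and script paths are made up.

```python
# Throwaway sketch: listen continuously, and when a recognized phrase appears
# in the audio, run the matching script. Assumes the SpeechRecognition package
# with PyAudio and pocketsphinx installed; phrases and paths are invented.
import subprocess
import speech_recognition as sr

COMMANDS = {
    "backup complete": "/home/joe/bin/rotate-backups.sh",
    "build failed": "/home/joe/bin/nag-me.sh",
}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    while True:
        audio = recognizer.listen(mic, phrase_time_limit=5)
        try:
            heard = recognizer.recognize_sphinx(audio).lower()
        except sr.UnknownValueError:
            continue  # couldn't make out any words; keep listening
        for phrase, script in COMMANDS.items():
            if phrase in heard:
                subprocess.run([script], check=False)
```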
I was astonished to find the other day that LibreOffice has no problem opening ClarisWorks files. That is an ancient Mac format that even Apple’s Pages has long since abandoned.
Oh yeah, that’s another way to make a subshell. But don’t forget to stick the `find` in there also:
sudo sh -c 'cd ./testar && find . -maxdepth 1 -type d,f' | ...
The commands within the parentheses run in a temporarily created subshell with its own environment. So you can change the working directory within it and it won’t affect your main shell’s working directory.
Let’s say you’re in the home directory that’s called `/home/joe`. You could go something like:
> (cd bin && pwd) && pwd
/home/joe/bin
/home/joe
If `find` had something equivalent to `tar -C`, you wouldn’t need to do this, but I don’t think it does?
Ok, I actually tried something like this at a terminal. You do still need the `-C ./testar` if you use the subshell, since `tar` won’t know where to look otherwise.
(cd ./testar && sudo find . -maxdepth 1 -type d,f) | sudo tar -czvf ./xtractar/tar2/testbackup2.tgz -C ./testar -T -
This will still give you a listing with `./text.tz` and so on, because `find` prints `./whatever` when you search `.`. I think this is harmless? But I suppose you could remove them if it bothers you.
(cd ./testar && sudo find . -maxdepth 1 -type d,f) | cut -c3- | sudo tar -czvf ./xtractar/tar2/testbackup2.tgz -C ./testar -T -
I think that since you’re piping in the file list from `find`, the `-C ./testar` in the `tar` command is basically irrelevant? You probably need to `cd ./testar` before the `find`. Maybe you could do that in a subshell so that the `cd` doesn’t affect your `tar` archive path? So something like:
(cd ./testar && sudo find ./ -maxdepth 1 -type d,f) | sudo tar -czvf ./xtractar/tar2/testbackup2.tgz -T -
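Tangent: if the cd/subshell gymnastics get annoying, the same top-level-only backup can be written with Python’s standard tarfile module, since you can name each archive member explicitly instead of fighting relative paths. The paths below are the ones from this thread, and you’d still need enough privileges to read everything (the shell versions used sudo).

```python
# Same job as the shell pipelines above, but with Python's standard tarfile:
# archive only the top-level entries of ./testar, stored without the leading
# ./testar/ prefix. Run with enough privileges to read everything.
import os
import tarfile

SRC = "./testar"
DEST = "./xtractar/tar2/testbackup2.tgz"

with tarfile.open(DEST, "w:gz") as tar:
    for entry in os.scandir(SRC):                    # top level only, like -maxdepth 1
        if entry.is_file() or entry.is_dir():        # mirror find's -type d,f
            tar.add(entry.path, arcname=entry.name)  # directories are added recursively
```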
I’m not a web dev but was chatting with a friend who is, lamenting web 2.0 for pretty much the same reasons as OP. He’s like “2.0?!? Where have you been? It’s all about web3 and blockchains.” Now where was that comfortable old rock I had been hiding under again?
When the www was in its infancy, I thought there needed to be a standardized way to classify content. Something Dewey Decimal System-ish, I suppose? But it would need to be easy for casual content providers to use, since the only way it could work would be at a grass-roots, decentralized level where each provider would be responsible for classifying their own content.
Perhaps there could be tools like expert systems that would ask you a number of questions about your data and then link it up appropriately. It could usher in a golden age of library science!
But then everyone went fuck that. Search engines.