I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.

journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes the output of journalctl (the systemd journal, which includes boot logs) into grep to find ‘foo’, and prints 10 lines of context before and after each match.

  • jim3692@discuss.online · 2 minutes ago

    docker run --rm -it --privileged --pid=host debian:12 nsenter -a -t1 "$(which bash)"

    If your user is in the docker group, and you are not running rootless Docker, this command opens a bash shell as root.

    How it works:

    • docker run --rm -it creates a temporary container and attaches it to the running terminal
    • --privileged disables most of the container’s security restrictions and gives it broad access to the host’s devices
    • --pid=host attaches the container to the host’s PID namespace, allowing it to access all running processes
    • debian:12 uses the Debian 12 image
    • nsenter -a -t1 enters all the namespaces of the process with PID 1, which is the host’s init since we use --pid=host
    • "$(which bash)" finds the path of the host’s bash and runs it inside the namespaces (plain bash may not work on NixOS hosts)
  • Lettuce eat lettuce@lemmy.ml · 2 hours ago

    The watch command is very useful. For those who don’t know, it re-runs whatever command you place after it in a loop, refreshing every two seconds by default.

    It allows you to actively monitor systems without having to manually re-run your command.

    So for instance, if you wanted to see all storage block devices and monitor what a new storage device shows up as when you plug it in, you could do:

    watch lsblk
    

    And see the drive show up as you plug it in. Technically not “real time” because the default refresh is 2 seconds, but you can specify shorter or longer intervals with -n, e.g. watch -n 0.5 lsblk.

    Obviously my example is kind of silly, but you can combine this with other commands or even whole bash scripts to do some cool stuff.

  • eli@lemmy.world · 2 hours ago

    There are a lot of great commands in here, so here are my favorites that I haven’t seen yet:

    • crontab -e
    • && and || operators
    • The > and >> redirection operators, and input/output redirection in general
    • for loops, while/if/then/else
    • Basic scripts
    • Stdin vs stdout vs /dev/null
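
    A runnable mini-demo of a few of these (the paths under /tmp are made up for the example):

```shell
# && runs the next command only on success; || only on failure.
mkdir -p /tmp/ops-demo && echo "created" || echo "failed"

# > truncates the file, >> appends to it.
echo "first"  >  /tmp/ops-demo/out.txt
echo "second" >> /tmp/ops-demo/out.txt
wc -l < /tmp/ops-demo/out.txt        # reads the file via stdin; prints 2

# Errors (stderr) can be discarded into /dev/null.
ls /no/such/dir 2>/dev/null || echo "stderr silenced"
```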

    Need to push a file out to a couple dozen workstations and then install it?

    for i in $(cat /tmp/wks.txt); do echo "$i"; rsync -azvP /tmp/file "$i":/opt/dir/; ssh -qo ConnectTimeout=5 "$i" "touch /dev/pee/pee"; done

    Or script it with if/else statements: pull info from the remote machines to see if an update is needed, and push the update only if it’s out of date. And if it’s in a script file, you don’t have to search through days of old history to find that one command.

    Or just throw that script into crontab and automate it entirely.
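
    A local, hypothetical sketch of that check-then-push pattern (cp stands in for rsync over ssh so it runs anywhere; all paths are made up):

```shell
#!/usr/bin/env bash
# Hypothetical: refresh the "deployed" copy only when it differs.
src=/tmp/payload.txt
dst=/tmp/deployed.txt
echo "v2" > "$src"
echo "v1" > "$dst"

if ! cmp -s "$src" "$dst"; then
    echo "out of date, updating"
    cp "$src" "$dst"    # real version: rsync -azvP "$src" "$host":/opt/dir/
else
    echo "already current"
fi
```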

  • AllHailTheSheep@sh.itjust.works · 2 hours ago

    I’m a big enjoyer of pushd and popd

    so if you’re in a working dir and need to go work in a different dir, you can pushd ./, cd to the new dir and do your thing, then popd to go back to the old dir without typing the path again
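
    For example (directory names arbitrary; pushd/popd are bash builtins):

```shell
cd /usr
pushd /etc > /dev/null   # saves /usr on the directory stack, moves to /etc
pwd                      # /etc
popd > /dev/null         # pops the stack: back to /usr
pwd                      # /usr
```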

    • donkeyass@lemmy.sdf.org · 2 hours ago

      Nice! I didn’t know that one.

      You can also cd to a directory and then do cd - to go to the last directory you were in.

  • harsh3466@lemmy.ml · 4 hours ago
    find /path/to/starting/dir -type f -regextype egrep -regex 'some[[:space:]]*regex[[:space:]]*(goes|here)' -exec mv {} /path/to/new/directory/ \;
    

    I routinely have to find a bunch of files that match a particular pattern and then do something with those files, and as a result, find with -exec is one of my top commands.

    If you’re someone who doesn’t know wtf that above command does, here’s a breakdown piece by piece:

    • find - cli tool to find files based on lots of different parameters
    • /path/to/starting/dir - the directory at which find will start looking for files recursively moving down the file tree
    • -type f - specifies I only want find to find files.
    • -regextype egrep - In this example I’m using regex to pattern match filenames, and this tells find what flavor of regex to use
    • -regex 'regex.here' - The regex to be used to pattern match against the filenames
    • -exec - an action that tells find to run the command that follows once for every file it matches. find itself spawns the command; this isn’t shell redirection.
    • mv {} /path/to/new/directory/ - mv is just an example, you can use almost any command here. The important bit is {}, which is the placeholder for the parameter coming from find, in this case, a full file path. So this would read when expanded, mv /full/path/of/file/that/matches/the/regex.file /path/to/new/directory/
    • \; - This terminates the command. The semi-colon is the actual termination, but it must be escaped so that the current shell doesn’t see it and try to use it as a command separator.
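
    A self-contained version you can run safely (the paths and the regex are made up for the demo):

```shell
# Set up some throwaway files.
mkdir -p /tmp/find-demo/src /tmp/find-demo/dst
touch /tmp/find-demo/src/report1.txt /tmp/find-demo/src/notes.md

# Move only the files whose full path matches the regex.
# Note: find's -regex matches the whole path, hence the leading .*
find /tmp/find-demo/src -type f -regextype egrep \
    -regex '.*report[0-9]+\.txt' \
    -exec mv {} /tmp/find-demo/dst/ \;

ls /tmp/find-demo/dst    # report1.txt
```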
  • utopiah@lemmy.ml · 4 hours ago
    fabien@debian2080ti:~$ history  | sed 's/ ..... //' | sort | uniq -c | sort -n | tail
    # with parameters
         13 cd Prototypes/
         14 adb disconnect; cd ~/Downloads/Shows/ ; adb connect videoprojector ;
         14 cd ..
         21 s # alias s='ssh shell -t "screen -raAD"'
         36 node .
         36 ./todo 
         42 vi index.js 
         42 vi todo # which I use as metadata or starting script in ~/Prototypes
         44 ls
        105 lr # alias lr="ls -lrth"
    fabien@debian2080ti:~$ history  | sed 's/ ..... //' | sed 's/ .*//' | sort | uniq -c | sort -n | tail
    # without parameters
         35 rm
         36 node
         36 ./todo
         39 git
         39 mv
         70 ls
         71 adb
         96 cd
        110 lr
        118 vi
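
    The pipeline can be tried on canned input (a script doesn’t see your interactive history, so a printf stands in for it; the first sed strips history’s leading line number, the second keeps only the command word):

```shell
printf '  101  ls -l\n  102  cd /tmp\n  103  ls -l\n' \
  | sed 's/ ..... //' \
  | sed 's/ .*//' \
  | sort | uniq -c | sort -n | tail
```

    uniq -c prepends the count, so this prints `1 cd` and `2 ls` lines, least-used first.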
    
  • some_guy@lemmy.sdf.org · 5 hours ago

    Search for github repos of dotfiles and read through people’s shell profiles, aliases, and functions. You’ll learn a lot.

  • InFerNo@lemmy.ml · 11 hours ago

    I use $_ a lot; it expands to the last argument of the previous command, so you can reuse it in your current command

    mkdir something && cd $_

    nano file
    chmod +x $_

    As a simple example.

    If you want to create nested folders, you can do it in one go by adding -p to mkdir

    mkdir -p bunch/of/nested/folders

    Good explanation here:
    https://koenwoortman.com/bash-mkdir-multiple-subdirectories/q
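
    Both tricks together in one runnable line (bash; the directory name is made up):

```shell
# $_ holds the last argument of the previous command, i.e. /tmp/a/b/c here.
mkdir -p /tmp/a/b/c && cd $_ && pwd    # prints /tmp/a/b/c
```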

    Sometimes starting a service takes a while and you’re sitting there waiting for the terminal to be available again. Just add --no-block to systemctl and it will do it on the background without keeping the terminal occupied.

    systemctl start --no-block myservice

      • wheezy@lemmy.ml · 5 hours ago

        I have my .bashrc print useful commands with a short explanation. This way I see them regularly when I start a new session. Once I use a command enough that I have it as part of my toolkit I remove it from the print.

  • ☂️-@lemmy.ml · 11 hours ago

    ctrl+r in bash lets you quickly search and execute previous commands, usually by typing just the first few characters.

    it’s much more of a game changer than it first appears.

    • eli@lemmy.world · 3 hours ago

      And Ctrl+S will search forward through history if you spam Ctrl+R too fast and overshoot whatever you’re looking for (you may need to run stty -ixon first, since Ctrl+S normally freezes terminal output)

  • Gary Ghost@lemmy.world · 7 hours ago

    ps -ef | grep <process_name>

    kill -9 <process_id>

    I googled it: -15 is better. I forgot what -9 even did; I’ve been using it for years.

    • black0ut@pawb.social · 5 hours ago

      The number is the signal you send to the program. There are a lot of signals you can send (not just 15 and 9).

      The difference between them is that 15 (called SIGTERM) tells the program to terminate itself, so it can flush its cached data, save without losing or corrupting anything, drop all its open connections gracefully, etc. 9 (called SIGKILL) forcefully kills the program without waiting for it to close properly.

      You normally should send signal 15 to a program, to tell it to stop. If the program is frozen and it’s not responding or stopping, you then send signal 9 and forcefully kill it. No signal is “better” than the other, they just have different usecases.
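
      The difference is easy to see with a trap: SIGTERM can be caught and handled, SIGKILL can’t. A bash sketch (the script path is made up; `sleep 30 & wait` is used because bash only runs the trap once the current foreground command returns, and the wait builtin is interruptible):

```shell
cat > /tmp/sig-demo.sh <<'EOF'
#!/usr/bin/env bash
trap 'echo cleaning up; kill "$child" 2>/dev/null; exit 0' TERM
sleep 30 &
child=$!
wait "$child"
EOF

bash /tmp/sig-demo.sh &
pid=$!
sleep 0.5
kill -15 "$pid"           # SIGTERM is caught: prints "cleaning up"
wait "$pid"
echo "exit status: $?"    # 0, because the trap called exit 0
```

      Replace the -15 with -9 and the trap never runs: the process just disappears.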

  • qjkxbmwvz@startrek.website · 14 hours ago

    nc is useful. For example: if you have a disk image downloaded on computer A but want to write it to an SD card on computer B, you can run something like

    user@B: nc -l 1234 | pv > /dev/$sdcard

    And

    user@A: nc B.local 1234 < /path/to/image.img

    (I may have syntax messed up–also don’t transfer sensitive information this way!)

    Similarly, no need to store a compressed file if you’re going to uncompress it as soon as you download it—just pipe wget or curl to tar or xz or whatever.
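
    A local stand-in for that pipe-to-tar idea (tar reads the archive from stdin; in real use the left side of the pipe would be curl or wget -O -; paths are made up):

```shell
mkdir -p /tmp/pipe-demo/src /tmp/pipe-demo/out
echo "hello" > /tmp/pipe-demo/src/file.txt

# Create a tar.gz stream and unpack it on the fly; no archive file is stored.
tar -czf - -C /tmp/pipe-demo/src . | tar -xzf - -C /tmp/pipe-demo/out

cat /tmp/pipe-demo/out/file.txt    # hello
```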

    I once burnt a CD of a Linux ISO by piping wget straight into cdrecord. It was actually kinda useful because it was on a laptop that was running out of HD space. Luckily the university internet was fast and the CD burnt successfully :)