• 2 Posts
  • 14 Comments
Joined 1 year ago
Cake day: August 22nd, 2024

  • I have been using frp to expose one port of my private server through a public server. On the public server, I’m using nginx as a reverse proxy to enable HTTPS.

    This works great for my use case. Regarding security: if the application has a vulnerability, it is still an open door into your private server. My app runs in rootless Podman, so only the container and the data it contains would be compromised.
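    A minimal sketch of this setup, assuming frp’s classic INI config; the hostname and ports are hypothetical:

    ```ini
    # frps.ini — runs on the public server
    [common]
    bind_port = 7000

    # frpc.ini — runs on the private server (server_addr and ports are hypothetical)
    [common]
    server_addr = public.example.com
    server_port = 7000

    [myapp]
    type = tcp
    local_ip = 127.0.0.1
    local_port = 8080
    remote_port = 18080
    ```

    nginx on the public server then proxies to the forwarded port, e.g. `proxy_pass http://127.0.0.1:18080;` inside a TLS-enabled server block.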



  • kwa@lemmy.zip OP · Selfhosted@lemmy.world · Podman rootless and ufw · 7 months ago

    I wanted to do something similar, but I grouped some containers using pods and it seems that broke the networking.

    Eventually I kept the pods and exposed everything to the host, where Caddy can reach the services. Not the cleanest way, especially as my firewall is turned off.
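    For example (names, hostnames, and ports are hypothetical): a pod that publishes its ports only on the host’s loopback, with a host-level Caddyfile terminating TLS and proxying in:

    ```
    # podman pod create --name myapp-pod -p 127.0.0.1:8080:80
    # podman run -d --pod myapp-pod docker.io/library/nginx

    # Caddyfile on the host: Caddy reaches the pod via localhost
    app.example.com {
        reverse_proxy 127.0.0.1:8080
    }
    ```

    Binding the published port to 127.0.0.1 keeps it unreachable from outside even with the firewall off; only Caddy is exposed.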


  • kwa@lemmy.zip OP · Selfhosted@lemmy.world · Podman rootless and ufw · 7 months ago

    I switched at work because of the license changes Docker made. I noticed that for my work workflow, Podman was a drop-in replacement for Docker.

    For my homelab, I wanted to experiment with rootless containers, and I also prefer to have my services handled by systemd. I also really like Podman’s built-in auto-update.
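    The systemd + auto-update combination can be sketched with a Quadlet unit (file name and image are hypothetical); `AutoUpdate=registry` opts the container into `podman auto-update`:

    ```ini
    # ~/.config/containers/systemd/myapp.container — rootless Quadlet unit
    [Container]
    Image=docker.io/library/myapp:latest
    PublishPort=127.0.0.1:8080:8080
    # pull and restart when the registry has a newer image
    AutoUpdate=registry

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target
    ```

    systemd generates the service from this file; the periodic check is enabled with `systemctl --user enable --now podman-auto-update.timer`.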




  • kwa@lemmy.zip OP · Selfhosted@lemmy.world · Podman rootless and ufw · 7 months ago

    I should have clarified this. It does not open the ports, but I have set up my firewall to allow a range of IPs, and the traffic is still blocked.

    I have noticed some inconsistency in the behavior: traffic would sometimes work right after enabling ufw, but never after a reboot. Knowing how Docker works, I assumed Podman would also mess with the firewall, but maybe the issue comes from something else.
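    A debugging sketch (subnet and port are hypothetical). One possible culprit: rootless Podman forwards published ports through a userspace process, so incoming connections may appear to originate from localhost, which can defeat source-IP-based ufw rules:

    ```shell
    # allow a LAN range to reach the published port (values are hypothetical)
    sudo ufw allow from 192.168.1.0/24 to any port 8080 proto tcp
    sudo ufw status verbose

    # rootless Podman's default port forwarding can lose the original source IP;
    # the slirp4netns port handler preserves it (at some throughput cost)
    podman run -d --network slirp4netns:port_handler=slirp4netns \
      -p 8080:80 docker.io/library/nginx
    ```

    If the source IP is being rewritten, the `allow from` rule never matches, which would look exactly like "blocked despite the rule".
    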






  • I tried llama.cpp with llama-server and Qwen2.5 Coder 1.5B. Higher-parameter models just output garbage, and I can see an OutOfMemory error in the logs. With the 1.5B model, I have an issue where it just stops outputting the answer: it stops mid-sentence or in the middle of a class. Is it an issue with my hardware not being performant enough, or is there some parameter I can tweak?
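    If the cut-off is a token budget rather than hardware, raising the context size and the per-request `n_predict` may help; a sketch with hypothetical paths and ports:

    ```shell
    # start the server with a larger context window (-c)
    llama-server -m ./qwen2.5-coder-1.5b-instruct-q4_k_m.gguf -c 8192 --port 8080

    # per-request generation limit: n_predict (-1 = generate until EOS or context is full)
    curl http://localhost:8080/completion \
      -d '{"prompt": "def quicksort(", "n_predict": 512}'
    ```

    A too-small `n_predict` (or a full context window) produces exactly this symptom: generation halts mid-sentence with no error in the logs.
    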