  • Thanks, had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing Wyoming endpoints (STT, TTS, wake word), right? (Un)fortunately, that’s the part that’s already working really well 😄

    However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but not actually putting an LLM behind it, is genius; I had not thought about that. That could really simplify stuff if I decide to write a custom intent handler. So, yeah, thanks for the link!!
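
    Just to sketch what I mean (purely hypothetical and untested against HASS): something like the snippet below could pretend to be Ollama while doing plain rule-based intent handling behind the scenes. The endpoint paths and JSON fields are my reading of the Ollama API, and HASS’s Ollama integration may well expect more than this (streaming, tool calls, …).

    ```python
    # fake_ollama.py - hypothetical sketch of an "Ollama" that is really just a
    # rule-based intent handler. Endpoint paths and JSON fields are assumptions
    # based on the Ollama API (/api/tags, /api/chat); verify against the docs.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MODEL_NAME = "fake-intent-handler"  # the "model" HASS will see

    @app.get("/api/tags")
    def list_models():
        # Model listing, so the integration has something to select.
        return jsonify({"models": [{"name": MODEL_NAME, "model": MODEL_NAME}]})

    @app.post("/api/chat")
    def chat():
        payload = request.get_json(force=True)
        # The last user message should be the transcribed voice command.
        text = payload["messages"][-1]["content"].lower()
        # Toy rule; this is where a real intent matcher would go.
        if "light" in text and "off" in text:
            reply = "Turning off the lights."
        else:
            reply = "Sorry, I did not understand that."
        return jsonify({
            "model": MODEL_NAME,
            "message": {"role": "assistant", "content": reply},
            "done": True,
        })

    if __name__ == "__main__":
        app.run(port=11434)  # Ollama's default port
    ```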


  • Thanks for your input! The problem with the LLM approach for me is mostly that I have so many entities that HASS exposing them all (or even just the subset I really, really want) already slows everything to a crawl and produces bad results with every model I’ve tried. I’ll give the model you mentioned another shot, though.

    However, I really don’t want to use an LLM for this. It seems brittle and like overkill at the same time. As you said, intent classification is a wee bit older than LLMs.

    Unfortunately, the sentence template matching approach alone isn’t sufficient, because the STT output is quite frequently imperfect. With Home Assistant, for example, the intent “turn off all lights” is currently not understood if the STT produces “turn off all light”. And sure, you can extend the template for that. But what about

    • turn of all lights
    • turn off wall lights
    • turnip off all lights
    • off all lights
    • off all fights

    A human would go “huh? oh, sure, I’ll turn off all lights”. An LLM might as well. But a fuzzy matching / closest Levenshtein distance approach should be more than sufficient for this, too.

    Basically, I generally like the sentence template approach used by HASS, but it just needs that little bit of additional robustness against imperfections.
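
    To illustrate the kind of robustness I mean (rough, untested sketch; the templates and threshold are made up, and difflib just stands in for a proper Levenshtein implementation):

    ```python
    # Rough sketch of fuzzy intent matching: score the STT output against
    # expanded sentence templates and pick the closest intent above a threshold.
    from difflib import SequenceMatcher

    # Made-up examples; in practice these would be the expanded HASS sentence templates.
    TEMPLATES = {
        "HassTurnOff": ["turn off all lights", "turn off the kitchen light"],
        "HassTurnOn": ["turn on all lights", "turn on the kitchen light"],
    }

    def match_intent(stt_text: str, threshold: float = 0.75):
        stt_text = stt_text.lower().strip()
        best_intent, best_score = None, 0.0
        for intent, sentences in TEMPLATES.items():
            for sentence in sentences:
                score = SequenceMatcher(None, stt_text, sentence).ratio()
                if score > best_score:
                    best_intent, best_score = intent, score
        return best_intent if best_score >= threshold else None

    # "turnip off all lights" or "off all fights" should still map to HassTurnOff.
    for garbled in ["turn of all lights", "turnip off all lights", "off all fights"]:
        print(garbled, "->", match_intent(garbled))
    ```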


  • Thanks for sharing your experience! I have actually mostly been testing with a good desk mic, and I expect recognition to get worse with room mics… The hardware I bought is a set of Seeed ReSpeaker mic arrays; I am somewhat hopeful about them.

    Adding a lot of alternative sentences does indeed help, at least to a certain degree. However, my issue is less with “it should recognize various different commands for the same action”, and more “if I mumble, misspeak, or add a swear word on my third attempt, it should still just pick the most likely intent”, and that’s what’s currently missing from the ecosystem, as far as I can tell.

    Though I must concede, copying your strategy might be a viable stop-gap solution to get rid of Alexa. I’ll have to play around with it a bit more.

    That all said, if you find a better intent matcher or another solution, please do report back, as I am very interested in an easier solution that does not require me to think of all possible sentences ahead of time.

    Roger.


  • Grew up on it. My dad set up an Ubuntu 4.10 PC for my brother and me when we were 3/5 (no internet, obv), and it stuck.

    Used Windows for a brief time in high school to be able to play online with friends.

    Went right back to Linux when going to university. Will never change back, both for ideological reasons and because Linux is just better.

    Next step: NixOS on a phone



  • TBH, it sounds like you have nothing to worry about then! Open ports aren’t really an issue in and of themselves; they are problematic because the software listening on them might be vulnerable, and the (standard) ports can provide knowledge about the nature of the application, making it easier to target specific software with an exploit.

    Since a bot has no way of finding out what services you are running, it could only attack Caddy - which I’d put down as a negligible danger.


  • My ISP blocks incoming data to common ports unless you get a business account.

    Oof, sorry, that sucks. I think you could still go the route I described though: For your domain example.com and example service myservice, listen on port :12345 and drop everything that isn’t requesting myservice.example.com:12345. Then forward the matching requests to your service’s actual port, e.g. 23456, which is closed to the internet.

    Edit: and just to clarify, for service otherservice, you do not need to open a second port; stick with the one, but in addition to myservice.example.com:12345, also accept requests for otherservice.example.com:12345 and proxy those to the (again, closed-to-the-internet) port :34567.
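
    In Caddyfile terms, that would look roughly like this (untested sketch; the domain, ports, and service names are just the placeholders from above):

    ```
    # Untested sketch: a single open port (12345), routed purely by hostname.
    # Requests for unknown hostnames on that port are simply rejected.
    # Note: with 80/443 blocked, automatic HTTPS will likely need the DNS challenge.

    myservice.example.com:12345 {
        reverse_proxy localhost:23456
    }

    otherservice.example.com:12345 {
        reverse_proxy localhost:34567
    }
    ```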

    The advantage here is that bots cannot guess from your ports what software you are running, and since Caddy (or any of the mature reverse proxies) can be expected to be reasonably secure, I would not worry about bots being able to exploit the reverse proxy’s port. Bots also no longer have a direct line of communication to your services. In short, the routine of “let’s scan ports; ah, port x is open, indicating use of service y; try automated exploit z” gets prevented.