Handy – Free open source speech-to-text app

(github.com)

83 points | by tin7in 4 hours ago

17 comments

  • blutoot 1 hour ago
    I have dystonia, which often stiffens my arms in a way that makes it impossible for me to type on a keyboard. STT apps like SuperWhisper have proven very helpful for me in such situations. I am hoping to get a similar experience out of "Handy" (very apt naming from my perspective).

    I do, however, wonder if there is a way all these STT tools can get to the next level. The generated text should not be just a verbatim copy of what I just said; depending on the context, it should elaborate. For example, if my cursor is actively inside an editor/IDE with some code, my coding-related verbal prompts should actually generate the right/desired code in that IDE.

    Perhaps this is a bit of combining STT with computer use.

    • sipjca 58 minutes ago
      I totally agree with you, and largely what you’re describing is one of the reasons I made Handy open source. I really want to see something like this, and to see someone experiment with making it happen. I’ve heard of people playing with small local models (Moondream, Qwen) to get more context from the computer itself.

      I initially had a ton of keyboard shortcuts in Handy for myself when I had a broken finger and was in a cast. It let me play with the simplest form of this contextual thing, since shortcuts could effectively be mapped to certain apps with very clear use cases.

    • hasperdi 58 minutes ago
      What you said is possible by feeding the output of speech-to-text tools into an LLM. You can prompt the LLM to make sense of what you're trying to achieve and create sets of actions. With a CLI it’s trivial: you can have your verbal command translated into working shell commands. With a GUI it’s slightly more complicated because the LLM agent needs to know what you see on the screen, etc.

      That CLI bit I mentioned earlier is already possible. For instance, on macOS there’s an app called MacWhisper that can send dictation output to an OpenAI‑compatible endpoint.
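      The STT-to-shell-command idea above can be sketched against any OpenAI-compatible endpoint. This is a minimal, hedged sketch: the endpoint URL, model name, and prompt wording are hypothetical stand-ins, not part of MacWhisper or Handy.

```python
import json

# Hypothetical local endpoint and model name; substitute whatever your
# OpenAI-compatible server (llama.cpp, Ollama, etc.) actually exposes.
ENDPOINT = "http://localhost:8080/v1/chat/completions"
MODEL = "local-llm"

def build_shell_command_request(dictation: str) -> dict:
    """Build a chat-completion payload asking the LLM to translate a
    dictated instruction into a single shell command."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "Translate the user's spoken instruction into one "
                        "shell command. Reply with the command only."},
            {"role": "user", "content": dictation},
        ],
        "temperature": 0,
    }

def extract_command(response: dict) -> str:
    """Pull the command text out of an OpenAI-style response body."""
    return response["choices"][0]["message"]["content"].strip()

# This payload would be POSTed to ENDPOINT as JSON by the dictation tool:
payload = build_shell_command_request("show me the five largest files here")
print(json.dumps(payload, indent=2))
```

      A dictation app wired to such an endpoint only needs to substitute the transcript into the user message and run (or display) the extracted command.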

      • sipjca 57 minutes ago
        Handy can post process with LLMs too! It’s just currently hidden behind a debug menu as an alpha feature (ctrl/cmd+shift+d)
  • kuatroka 39 minutes ago
    Love it. I had been searching for an STT app for weeks. Every single app was either a one-off purchase or had a monthly subscription. It felt a bit ridiculous having to pay when it’s all powered by such small models on the back end. So I decided to build my own. But then I found “Handy” and it’s been a really amazing partner for me. Super fast, super simple, doesn’t get in my way, and it’s constantly updated. I just love it. Thanks a lot for making it!

    P.S. The post-processing you’re talking about: wouldn’t that be awesome?

  • PhilippGille 1 hour ago
    Has anyone compared this with https://github.com/HeroTools/open-whispr already? From the description they seem very similar.

    Handy's first release was June 2025; OpenWhispr's was a month later. Handy has ~11k GitHub stars, OpenWhispr ~730.

    • kuatroka 34 minutes ago
      I did try it, but installing Handy as just a macOS app is so much simpler than needing to constantly run npm commands. I think at the time I was checking it, a couple of months ago, they did not have the Parakeet model (a non-Whisper model), so I decided against it. If I remember correctly, the UI was also not the smoothest.

      Handy’s UI is so clean and minimalistic that you always know what to do or where to go. Yes, it lacks some advanced features, but honestly, I’ve been using it for two months now and I’ve never looked back or searched for any other STT app.

  • frankdilo 2 hours ago
    This looks great! What’s missing for me to switch from something like Wispr Flow is the ability to provide a dictionary for commonly mistaken words (name of your company, people, code libraries).
    • tin7in 2 hours ago
      It has something called "Custom Words" which might be what you are describing. I haven't properly tested this feature yet.
    • sipjca 1 hour ago
      There’s a PR for this which will be pulled in soon enough. I can kick off a build of the PR if you want to download a pre-release version.
    • jauntywundrkind 2 hours ago
      I dig that some models have an ability to say how sure they are of words. Manually entering a bunch of special words is ok, but I want to be able to review the output and see what words the model was less sure of, so I can go find out what I might need to add.
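      That review pass could be as simple as flagging low-probability words for the user. A minimal sketch, assuming the STT engine exposes per-word confidence scores (some Whisper implementations report word-level probabilities); the threshold and the (word, probability) data shape here are made up for illustration:

```python
# Flag transcribed words whose confidence falls below a threshold, so a
# user can review them and decide what to add to a custom dictionary.
# The (word, probability) pairs are stand-ins for whatever the STT
# engine actually reports.

def flag_uncertain_words(words, threshold=0.6):
    """Return the (word, probability) pairs below the confidence
    threshold, in transcript order."""
    return [(w, p) for w, p in words if p < threshold]

def render_transcript(words, threshold=0.6):
    """Render the transcript with uncertain words bracketed for review."""
    return " ".join(f"[{w}?]" if p < threshold else w for w, p in words)

transcript = [
    ("deploy", 0.97), ("the", 0.99), ("Kubernetes", 0.41),
    ("manifest", 0.88), ("to", 0.99), ("staging", 0.52),
]
print(render_transcript(transcript))
# prints: deploy the [Kubernetes?] manifest to [staging?]
```

      The flagged words are exactly the candidates for a custom-dictionary entry: if the same proper noun keeps showing up below the threshold, it probably belongs in the word list.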
  • llarsson 1 hour ago
    A question because I'm not using speech-to-text, but find it intriguing (especially since it's now possible to do locally and for free).

    How have your computing habits changed as a result of having this? When do you typically use this instead of typing on the keyboard?

    • tin7in 1 hour ago
      I use it all the time with coding agents, especially if I'm running multiple terminals. It's way faster to talk than type. The only problem is that it looks awkward if there are others around.
      • johnisgood 1 hour ago
        Interesting. I can think and type faster, but not talk. I am not much of a talker.
        • stavros 41 minutes ago
          Same. Whenever I try to dictate something I always umm and ahhh and go back a bunch of times, and it's faster to just type. I guess it's just a matter of practice; I'm fine when I'm talking to other people, it's only dictation I have trouble with.
    • noneofyour 1 hour ago
      Part of my job is to give feedback to people using Word Comments. Using STT, it's been a breeze. The time saving really is great. Thing is, I only do this when working at home with no one around. So really only when WFH.
  • skor 32 minutes ago
    This is so handy, thank you very much. Good work!!
  • dumbmrblah 1 hour ago
    I just set this up today. I had Whispering app set up on my Windows computer, but it really wasn't working well on my Ubuntu computer that I just set up. I found Handy randomly. It was the last app I needed to go Linux full-time. Thank you!
  • bn-usd-mistake 57 minutes ago
    Does anyone have a similar mobile application that works locally and is not too expensive? Mostly looking to transcribe voice messages sent over Signal which does not offer this OOTB
    • bogtap82 54 minutes ago
      There is one single app I've been able to find that offers Parakeet-v3 for free locally and it's called Spokenly. They have paid cloud models available as well, but the local Parakeet-v3 implementation is totally free and is the best STT has to offer these days regardless. Super fast and accurate. I consider single-user STT basically a solved problem at this point.
  • Jack5500 2 hours ago
    The Parakeet V3 model is really great!
  • mrroryflint 1 hour ago
    On an M4 MacBook Air, there was enough lag to make it unusable for me. I'd hit the shortcut and start speaking, but there was always a 1-2 second delay before it would actually start transcribing, even when the icon was displayed.
    • kuatroka 26 minutes ago
      Yes, I’ve got the same situation. I’ve kind of learned to wait a second or two before talking. I am using it with AirPods, so maybe it is indeed a Bluetooth thing.
    • jborichevskiy 1 hour ago
      Curious if you were using AirPods or other Bluetooth headphones for this?

      If so, there should be a "keep microphone on" or similar setting in the config that may help with this. Alternatively, I set my microphone to my MacBook mic so that my headphones aren't involved at all and there is much less latency on activation.

    • sipjca 1 hour ago
      What microphone are you using?
  • miniwark 46 minutes ago
    Does this (or open-whispr) work well with languages other than English?
    • dawkins 43 minutes ago
      It works very well in Spanish.
  • vladstudio 2 hours ago
    Use it daily. Looks and works great.
  • chainmail2029 2 hours ago
    There's a slightly awkward naming overlap with an existing product.
    • unwind 1 hour ago
      Which one? I did a quick search but that didn't turn up anything so perhaps it's a partial word overlap or something.

      I did find the project's "user-facing" home page [1], which was nice. I found it rather hard to find a link from there to the code on GitHub, which was surprising.

      [1]: https://handy.computer/

      • DomB 1 hour ago
        It's the German word for a smartphone / mobile phone
      • zavec 1 hour ago
        There's also a sex toy
    • ensocode 1 hour ago
      This is a slightly German-centric comment.
  • jborichevskiy 2 hours ago
    Big Handy fan!
  • Dnguyen 1 hour ago
    Would be nice if the output can be piped directly into Claude Code.
  • blutoot 1 hour ago
    Crashes on Tahoe 26.3 Beta 1 :(
    • sipjca 1 hour ago
      Please send me a crash log!
  • dotancohen 2 hours ago
    Looks interesting. Why does it need a GUI at all?
    • tin7in 2 hours ago
      As an alternative to Wisprflow, Superwhisper and so on. It works really well compared to the commercial competitors but with a local model.
    • unwind 1 hour ago
      Ah, that was probably a typo: you meant "GPU" (Graphics Processing Unit), not "GUI" (which of course is Graphical User Interface), since that's what is listed in the system requirements. Explained implicitly by an existing comment, thanks!
    • sipjca 1 hour ago
      It doesn’t! It just makes it more accessible to more people, I feel. There’s a CLI version for Mac, handy-cli, which I wrote first.
    • Barbing 2 hours ago
      I hear a CLI request? There are tons of CLI speech-to-text tools, by the way; really glad to see this. The excellent competitors (Superwhisper, MacWhisper, etc.) are closed/paid.
    • satvikpendem 2 hours ago
      Because local AI models run well on a GPU, better than on a CPU.
    • kristianp 2 hours ago
      So more people can use it?