
  • alyt 3 days ago
    > Does this actually work? Yeah, somewhat! Could it create scripts that erase your drive? Maybe! Good luck!

    Love the transparency lol.

    Cool idea though! I imagine the "erase your hard drive" risk could be mitigated pretty well by adding an extra LLM step to double-check the generated script and explain each part of it (maybe add an `# explanatory comment` at the end of each line), and then an extra human step to confirm before executing the script.
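
    Something like this rough sketch, where `reviewScript` is just a stub standing in for the second LLM pass that annotates each line, and nothing runs until a human types "y":

      package main

      import (
          "bufio"
          "fmt"
          "os"
          "os/exec"
          "strings"
      )

      // reviewScript would send the generated script back to an LLM and return
      // a copy with an explanatory comment appended to each line. Stubbed here.
      func reviewScript(script string) string { return script }

      func main() {
          script := "#!/bin/sh\nfind . -type f | wc -l"

          fmt.Println("Generated script (annotated):")
          fmt.Println(reviewScript(script))

          // Human-in-the-loop gate: nothing executes without explicit confirmation.
          fmt.Print("Run this script? [y/N] ")
          answer, _ := bufio.NewReader(os.Stdin).ReadString('\n')
          if strings.TrimSpace(strings.ToLower(answer)) != "y" {
              fmt.Println("Aborted.")
              return
          }

          cmd := exec.Command("sh", "-c", script)
          cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
          if err := cmd.Run(); err != nil {
              fmt.Fprintln(os.Stderr, "script failed:", err)
          }
      }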

  • statico 3 days ago
    Hey folks, a few days ago I wondered: Given all this LLM availability, why can’t I write shell scripts like this?

      #!/usr/bin/env llmscript
      
      Count all files in the current directory and its subdirectories
      Group them by file extension
      Print a summary showing the count for each extension
      Sort the results by count in descending order
    
    So I made it a reality in an evening and it kinda works: https://github.com/statico/llmscript

    It generates a script and a test suite, and then it attempts to fix the script until it passes the tests.
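
    The core loop is roughly this (a simplified sketch, not the actual code; the generate/run/fix helpers below are stubs standing in for the LLM and shell calls):

      package main

      import (
          "fmt"
          "log"
      )

      // Stubs standing in for the real LLM and shell plumbing.
      func generateScript(prompt string) string              { return "#!/bin/sh\necho TODO" }
      func generateTests(prompt string) []string             { return []string{"output is non-empty"} }
      func runTests(script string, tests []string) []string  { return nil } // returns failing tests
      func fixScript(script string, failures []string) string { return script }

      func main() {
          prompt := "Count files by extension and print a sorted summary"
          script := generateScript(prompt) // 1. ask the LLM for a script
          tests := generateTests(prompt)   // 2. ask the LLM for a test suite

          const maxAttempts = 5
          for attempt := 1; attempt <= maxAttempts; attempt++ {
              failures := runTests(script, tests) // 3. run the tests against the script
              if len(failures) == 0 {
                  fmt.Println(script) // success: emit the working script
                  return
              }
              script = fixScript(script, failures) // 4. feed failures back and retry
          }
          log.Fatalf("script still failing after %d attempts", maxAttempts)
      }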

    It’s written in Go, but I hardly know Go, and used Cursor to generate most of it in a few hours. It works with Ollama and Claude, and I added support for OpenAI but haven’t tested it. You can also run it in Docker if you want to sandbox it.

    • skydhash 3 days ago
      > Given all this LLM availability, why can’t I write shell scripts like this?

      Because it’s more efficient to generate the script once and then run it everywhere, even on resource-constrained devices.