They don't mention a wrinkle that many task runners seem keen to omit: how do you handle workflows where human steps are involved and not everything is automated? How do you track what has worked and what is still left to do if things go sideways?
I built baker (https://github.com/coezbek/baker) for this some time ago (pre-LLM, mostly). It uses markdown with embedded bash and ruby commands to give you a checklist that runs automated steps directly and keeps a human in the loop for things that aren't automated (like logging in to some admin panel, generating a key, and copying it over).
The checklist gets checked off both by human actions (you confirm that you did it) and automatically, e.g. when a bash command exits successfully. So you keep a markdown artifact of where you are in your project and can continue later.
You can wrap commands to run via SSH (clunkier than what scotty does here, of course, but you can pick the SSH port).
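The resumable human-in-the-loop idea can be sketched in plain bash (to be clear, this is not baker's actual format; the step names and state-file name below are made up):

```shell
#!/usr/bin/env bash
# Sketch of a resumable checklist mixing automated and manual steps.
# Completed steps are recorded in a state file so you can re-run and continue.
STATE="${STATE:-.checklist.state}"
touch "$STATE"

done_already() { grep -qx "$1" "$STATE"; }
mark_done()   { echo "$1" >> "$STATE"; }

# Automated step: checked off when the command succeeds.
auto_step() {
  local name="$1"; shift
  done_already "$name" && { echo "[x] $name (already done)"; return 0; }
  if "$@"; then
    mark_done "$name"; echo "[x] $name"
  else
    echo "[ ] $name FAILED"; return 1
  fi
}

# Manual step: checked off only when the human confirms it on stdin.
manual_step() {
  local name="$1" ans
  done_already "$name" && { echo "[x] $name (already done)"; return 0; }
  read -r -p "Did you do: $name? [y/N] " ans
  if [ "$ans" = "y" ]; then
    mark_done "$name"; echo "[x] $name"
  else
    echo "[ ] $name still pending"
  fi
}

auto_step   "create build dir" mkdir -p build
manual_step "generate API key in the admin panel"
```

Re-running the script skips anything already in the state file, which is the "continue later" part.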
This is such a neat idea. I am going to adopt this for my own workflows as well, right now I just write private blog entries for stuff I do that I may forget how to do later (provisioning a server, networking, caddy setup, etc etc)
We need a term like Potemkin-ware or something to express "I just built a 3-week project in 3 hours and, although it looks nice, there are probably a ton of problems with it because I couldn't possibly review everything Claude puked out properly; use at your own risk".
Many years ago I wrote my own "cloud instance bootstrapper" that would pull a tar off of S3 based on EC2 instance tags / metadata, untar it, then run a script. I never got into Ansible and I hated having to rebuild AMIs for minor changes.
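That bootstrap pattern can be sketched roughly as below (the bucket name, the `role` tag key, and the `run.sh` convention are assumptions for illustration, not the actual setup):

```shell
#!/usr/bin/env bash
# Sketch of a tag-driven EC2 bootstrap: read a tag from instance metadata,
# pull the matching tarball from S3, unpack it, and run its entry script.
set -euo pipefail

bootstrap() {
  local bucket="$1"   # hypothetical bucket holding per-role tarballs
  # Ask the EC2 instance metadata service who we are...
  local instance_id
  instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  # ...then read a "role" tag to decide which tarball to fetch.
  local role
  role=$(aws ec2 describe-tags \
    --filters "Name=resource-id,Values=$instance_id" "Name=key,Values=role" \
    --query 'Tags[0].Value' --output text)
  aws s3 cp "s3://$bucket/$role.tar.gz" /tmp/bootstrap.tar.gz
  mkdir -p /tmp/bootstrap
  tar -xzf /tmp/bootstrap.tar.gz -C /tmp/bootstrap
  /tmp/bootstrap/run.sh   # convention: each tarball ships a run.sh entry point
}

# bootstrap my-bootstrap-bucket   # would run on the instance at boot
```

The appeal over baked AMIs is that changing a step only means re-uploading a tarball.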
I've been writing my own "task runner" which seems to have some of the same features. Some pros: a nice view of what has run (and what has failed), which would otherwise be drowned out by stderr and stdout; timing information for each "task"; the ability to organize nested tasks; and everything saved in a structured log.
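A minimal sketch of that shape in bash (task names and the log format here are made up):

```shell
#!/usr/bin/env bash
# Sketch of a task runner that records status + timing per task in a
# structured log, and only shows a task's output when it fails.
LOG="${LOG:-tasks.log}"
: > "$LOG"

run_task() {
  local name="$1"; shift
  local start end status out
  start=$(date +%s)
  if out=$("$@" 2>&1); then status=ok; else status=fail; fi
  end=$(date +%s)
  # One tab-separated line per task: status, duration, name.
  printf '%s\t%ss\t%s\n' "$status" "$((end - start))" "$name" >> "$LOG"
  if [ "$status" = fail ]; then printf '%s\n' "$out"; fi
}

run_task "say hello" echo hello
run_task "will fail" false
cat "$LOG"   # the summary table
```

Because each task's stdout/stderr is captured, the summary stays readable even when the tasks themselves are noisy.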
I founded and developed a similar concept many years back: a web-based SSH dashboard and management console (Commando.io, which I sold). Nowadays I use Semaphore UI [1], which uses Ansible playbooks under the hood, in my homelab. Pretty happy with it, though setup and configuration did take a bit to get up and running.
Thanks for sharing!
Ok.
> run them from your terminal and watch every step as it happens
> and watch every step as it happens
Yes, this is usually how scripts work.
> When everything finishes, you get a summary table with timing for each step.
> If a task fails, its output is shown and execution stops right there so you can investigate.
Yes, I write my larger scripts to do such things...
> Writing plain bash instead of Blade
Yes, probably a good idea.
Call me crazy (you're crazy!) but I'm not seeing the point.
https://github.com/spatie/scotty/issues/1
That would literally have been one of the first things I tested personally!
> Scotty was built with the help of AI
So it sounds like my heuristic worked. =)
I named mine "Ansible for the Frugal":
https://github.com/mikemasam/nyatictl
It looks nicer.
I use good old GNU Make.
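Make does cover the basics of a task runner: phony targets, dependency ordering, and stopping at the first failing command. A sketch (target names, ports, and hosts are made up; recipe lines are tab-indented):

```make
# Hypothetical deploy tasks as phony Make targets.
# `make deploy` runs build first; make stops at the first failed command.
.PHONY: build deploy

build:
	tar -czf app.tar.gz src/

deploy: build
	scp -P 2222 app.tar.gz user@host:/srv/app/
	ssh -p 2222 user@host 'systemctl restart app'
```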
[1] https://github.com/semaphoreui/semaphore