Does it support dependency handling between targets, and efficient partial remakes of only the changed subtree of the dependency graph? That is what many people miss about make: they think it is just a way to define "commands" for single-level recipes and auto-complete their names in the shell. A simple shell script would already be a trivial solution for that.
Make does (see the sketch after this list):
- topological sorting based ordering of the dependency tree
- skipping of already up to date targets
- supports parallel execution of independent dependency subtrees
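A rough Python sketch of the first two bullets (parallel execution left out), with invented file names and rules, just to make the behaviour concrete: walk the dependency graph depth-first, which yields a topological order, and rebuild a target only when it is missing or older than one of its prerequisites.
```python
# Illustrative sketch, not make itself: topological ordering via a
# depth-first walk, plus mtime-based skipping of up-to-date targets.
# The targets, prerequisites and actions are invented for the example.
import os

rules = {
    "app":    (["main.o", "util.o"], lambda: print("link app")),
    "main.o": (["main.c"],           lambda: print("compile main.c")),
    "util.o": (["util.c"],           lambda: print("compile util.c")),
    "main.c": ([], None),   # source files have no action
    "util.c": ([], None),
}

def mtime(path):
    return os.path.getmtime(path) if os.path.exists(path) else 0.0

def build(target):
    prereqs, action = rules[target]
    for p in prereqs:           # prerequisites first: topological order
        build(p)
    out_of_date = not os.path.exists(target) or any(
        mtime(p) > mtime(target) for p in prereqs
    )
    if action and out_of_date:  # skip targets that are already up to date
        action()

build("app")  # only the changed subtree of the graph actually re-runs
```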
The webpage is totally unclear on this, and to me it looks like it only allows for a named entrypoint to some script snippets.
I'm a literate-programming fan though, and this is a nice start, but I recommend clarifying the docs on this.
It’s interesting because this is the point of make, but it seems really common for make alternatives to miss this.
And that’s frustrating, because the place where a make alternative could really shine is not in making the syntax for specifying scripts nicer, but in figuring out how to design the dependency APIs so that you aren’t almost guaranteed to get them wrong in subtle ways.
Great point. My take is that most inexperienced people know make only as a bundle of shell scripts with semi-nice unification (not really nice, because there is no easy way to pass arguments to your scripts, no automatic help, etc.).
Make also supports a ton of other features that a lot of people don't use (or don't directly use) or understand.
It's possible to create Makefile targets which say "Don't run this other target on my behalf, but if you're going to run it anyway then run it before me" (make calls these order-only prerequisites); for example, don't try to create the build output directory every time you run this target, but if it does need to be created then create it first.
Every time I've tried to look for a clean, effective Makefile replacement there have either been obvious (to me) missing features that make anything but the most basic use cases tricky, or the system is so clearly designed to solve one specific problem (e.g. "compile a bunch of .c files into a binary") and as a result is unsuitable for most use cases that Makefiles would be good for.
Love the makedown name, renaming..
As for a short name available in the terminal, we can add `alias m="makedown"` as part of the zsh completion script.
Both:
$ makedown deploy-to-production
and
$ m deploy-to-production --help
will work.
Additionally, we can generate HTML out of the (now) `makedown.md` right in the tool:
$ makedown --html makedown.html
or
$ m --pdf makedown.pdf
I appreciate the quickness in renaming the project (x.md wasn't great), but you're missing the main point that Make provides: building a project based upon changing input files (DAGs and all of that). Makefiles are great in that they are flexible and allow this type of usage (especially with .PHONY).
But without the “building” component to it, I found the name makedown confusing. Maybe that’s just me.
But I like the project overall. I’m still trying to get my head around the syntax, but it looks nice. I’m also not sure I’ll switch from keeping scripts like this in my $HOME/.local/bin directory, but I can see how this is an appealing way to work.
I use Makefiles all the time for dependency management, not necessarily for compiling code. For example, in a data analysis workflow, I'll use a Makefile to manage the processing from ETL, to extracting whatever data I need, and finally to generating a figure. Whenever one step in the process is updated, the downstream steps are automatically re-run.
For one of my projects, I tried something similar where I had code blocks in README.md like:
Usage
-----
`pip install bar` and import `foo`:
```python
from bar import foo
```
Run `foo.alice` with default arguments:
```python
foo.alice()
```
Run `foo.bob` while specifying `baz`:
```python
foo.bob(baz=0)
```
It’s great to see more tools taking advantage of the markdown syntax.
I’m the creator of Mask[0], a very similar tool built with Rust. I was originally inspired by Maid[1], which is an older take on this idea built with Node and no longer maintained I believe.
I see this is based on Node as well, and I appreciate that it currently has zero dependencies. Nice work!
[0]: https://github.com/jacobdeichert/mask
[1]: https://github.com/egoist/maid
That's nice, but a tool like this *HAS* to be distributed as a single binary, otherwise it's just too much hassle to bootstrap it, especially on Windows or in stripped-down Docker containers.
I see, good point.
Currently it is implemented in Python for portability reasons, since Python is available on most POSIX systems by default.
To my understanding, stripped Docker containers are mostly used at runtime; building is done in a different container where more build tools are available.
In the case of a single binary (right now it is a single Python file), what would be the best way to distribute it to users, since PyPI and npm could not be used?
How is the cross-compilation of these 3 languages compared to Golang? I suggested Golang because it’s super easy, just set the correct GOARCH and you’re good to go.
It'd be nice if you could make them level-2 headers. Reason: if we want to render the file as HTML to display as a webpage, we won't end up with multiple H1s, and we can have an H1 for the name of the app or something.
- renamed to `makedown`
- rewrote in python, since it is available out of the box on most POSIX systems and in GitHub Actions
- Updated to `### [my-command]() Explanation of command` syntax, this way GitHub highlights the commands nicely
The growing popularity of Jupyter Notebooks has made people realize the power and convenience of mixing executable script/code with documentation, in the inverted way from normal. Normally we have code with embedded docs/comments, but in this new approach we have documents with embedded code. Opposite way around. I like where all this is going, but I have a question:
Maybe we can make the embedded code "format independent", or have different embedding syntaxes to extract the code from various forms of docs? Technically Markdown is already a special case of plain ASCII text, so that's cool. But since Emacs Org Mode (which is supported even in VSCode via a plugin) already does this, could we have a way that's compatible with Org Mode as well? Or would that be replicating existing Org Mode features too much? I'm not experienced in Org Mode beyond confirming that the VSCode plugin works, though, which is why I have to ask.
It seems like it would be really helpful for AI/LLMs to understand code better if it was surrounded with tons of inline documentation too right?
One "hole" I've seen in all of modern software development is that you also normally have just documentation in the code comments, but no real linking of each code method (or function or class) to various other places in external documentation.
I know we have URLs for that, but it's usually too difficult to get URLs put into code that points to specific areas in the docs, and vice versa. And if you ask a developer a question like "What's the URL for the docs for this method" you'll get a blank stare because generally that concept doesn't exist.
I do something like this with https://speedrun.cc except it runs in the browser on top of your markdown in GitHub. This lets you prompt for inputs and run JavaScript and use a toolbar to context switch. For command lines it copies the command to the clipboard so you can run it.
Really like the simplicity of the project (this is a compliment, the root is not overflowing with files). Nice that this tool uses only system libraries. It's way easier to distribute a single file which leverages already-installed languages.
For myself, I just make a directory `workflows` and put all my scripts in it, and organise related scripts into subdirectories, so that I can use tree and filesystem tools to check what subcommands are available.
Thinking about the suggestion regarding command dependencies,
possibly we could add something similar to Makefile:
## [clean]() Cleans the generated files
```bash
rm -rf ./build
```
## [init]() Initializes the build folder
```bash
mkdir -p ./build
```
## [build](clean, init) Builds the project
This command depends on clean and init, which are executed in that order beforehand.
```bash
gcc magic.c -o ./build/magic
```
This reminds me a bit of org-babel’s support for running blocks in any language.
I like the idea and the execution. This bit though:
> makedown.sh
> npm install -g ...
> #!/usr/bin/env python
Gives me a bit of whiplash. I get wanting to use npm to install, since 1) lots of people have it installed and 2) it’s reasonably cross-platform and it seems like makedown is as well.
I don’t see a reason for it to be named makedown.sh instead of just makedown, though. Make itself doesn’t depend on sh to my knowledge, and you could have a makedown file with no shell build rules at all.
That's an interesting idea you had; it makes me think of a mix between a Jupyter notebook and a Makefile, sort of, based on md files. I like the concept, but I need to test it to see if it fits my needs. Just a question about python and zsh: do they need to be pre-installed on your OS and accessible from PATH, and that's it?
Instead of "People spent years trying to reimplement Unix, poorly", the new (older actually, as ITS/Emacs predate Unix) motto should be "People still spend decades reimplementing Emacs, poorly". Now, with Org-Mode :)
When --help is provided, the help text from x.md is printed.
Otherwise all the command line parameters are passed to the actual script that implements the command.
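A minimal sketch of that dispatch, assuming a hypothetical helper that is already given the command's script path and help text (this is not the actual implementation, only the behaviour described above):
```python
# Hypothetical dispatch sketch: print the markdown-derived help text on
# --help, otherwise forward all remaining arguments to the command's script.
import subprocess

def dispatch(script_path, help_text, argv):
    if "--help" in argv:
        print(help_text)          # help text extracted from the .md file
        return 0
    return subprocess.call([script_path, *argv])  # args passed straight through

# e.g. dispatch("./deploy-to-production.sh", "Deploys the app", ["--env", "staging"])
```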
published 0.3 version
pnpm install -g @tzador/x.md
- better --help messages with or without command
- ## level 2 headers are used
- the temp file is created in the current folder, so that importing npm modules from the current project works
Imagine both ideas as inspired by Jupyter Notebooks and their kernels.
Persistent kernel: e.g. each ```python``` code block is the same interpreter process with all the same variables. Same for bash, etc. The process is kept open and the lines to exec are piped to it. Or use a wrapper like Jupyter kernels (example [1]). Have the ability to run/re-run a specific code block.
Output capture: Yes, if you echo something it could appear in an output block following the code block. Very much like a Jupyter Notebook cell showing its output. The simplest output capture is just text, but you could also figure out how to hook into things like matplotlib and show plots (e.g. an option to allow the output to be shown as a markdown sub-section).
[1] https://github.com/digitalsignalperson/comma-python
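To make the persistent-kernel plus output-capture idea concrete, here is a rough Python sketch (the BashKernel class and the sentinel trick are invented for illustration, and it handles bash only): one long-lived bash process is shared by all blocks, code is piped into it, and each block's output is read back so it could be rendered under the block.
```python
# Sketch only: a single long-lived bash process acting as a "kernel".
# State (variables, cwd) persists across code blocks, and each block's
# output is captured by echoing a sentinel after the block.
import subprocess

class BashKernel:
    def __init__(self):
        self.proc = subprocess.Popen(
            ["bash"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )

    def run_block(self, code, sentinel="__BLOCK_DONE__"):
        # send the block, then echo a sentinel so we know where its output ends
        self.proc.stdin.write(code + "\necho " + sentinel + "\n")
        self.proc.stdin.flush()
        lines = []
        for line in self.proc.stdout:
            if line.strip() == sentinel:
                break
            lines.append(line)
        return "".join(lines)

kernel = BashKernel()
kernel.run_block("x=41")                   # state set in one block...
print(kernel.run_block("echo $((x+1))"))   # ...is visible in the next: prints 42
```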
Thank you for the feature suggestion.
I just do mine in bash (make.sh) and it runs scripts from make.d/, which are in whatever (Python, JS, bash, PHP).
It’s not just for C!
(Although, it’s mainly used for C)
Which language would you suggest?
> Which language would you suggest?
Golang
Pretty sure Rust is only slightly less easy. No idea about Nim.
### [my-command-name]() My command short description
A longer command help
```typescript
#!/usr/bin/env deno run
console.log("hi")
```
If the hashbang is missing, it is inferred from the markdown code block's language spec, with a fallback to bash.
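For illustration only, a small Python sketch of that selection rule (the mapping table and function name are invented, not taken from the makedown source): an explicit hashbang wins, otherwise the fence's language tag decides, otherwise fall back to bash.
```python
# Invented sketch of the rule above (not the real makedown code):
# explicit hashbang > fence language tag > bash fallback.
LANG_TO_INTERPRETER = {            # illustrative mapping only
    "python": "/usr/bin/env python3",
    "typescript": "/usr/bin/env deno run",
    "zsh": "/usr/bin/env zsh",
}

def pick_interpreter(code, fence_lang=None):
    first_line = code.lstrip().splitlines()[0] if code.strip() else ""
    if first_line.startswith("#!"):
        return first_line[2:].strip()           # explicit hashbang wins
    if fence_lang in LANG_TO_INTERPRETER:
        return LANG_TO_INTERPRETER[fence_lang]  # infer from the fence language tag
    return "/usr/bin/env bash"                  # fallback

print(pick_interpreter('#!/usr/bin/env deno run\nconsole.log("hi")', "typescript"))
print(pick_interpreter('console.log("hi")', "typescript"))
print(pick_interpreter('echo hi'))
```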
Thank you for all the feedback
I used XC for a bit, which does a similar thing, but have since reverted to make. The self-documenting nature of these tools can be very useful.
https://xcfile.dev/
If you have a README with a list of build tasks, it'll pull its content and give nicer names to whatever is placed in the package.json scripts field.
(I'm using comments for that which feels clunky)
I don't update it often, but I still use it almost every day.
Thanks for sharing!
I like what you're exploring, but it's not "make".
Recipes are run through sh by default, though it can be overridden to anything using the SHELL variable (including, say, python).
Agree with the idea already stated here to use <h2> elements instead of <h1>.
I assume that there is no support for the scripts having their own command-line arguments? Or how do you disambiguate?
Anyway, this seems like an interesting demo, but it's hard to imagine the use case.
What would be the pros and cons of implementing it in:
Two feature suggestions:
- can capture output and be updated & displayed in the markdown doc.
- persistent kernels
What do you mean by capture output? Like if I have ```bash echo "hello" ```, would "hello" appear in the rendered markdown?
At the same time, it is easy to edit and easy to enhance with different build languages as needed.