crucial jq insight which unlocked the tool for me: it's jsonl, not json.
it's a pipeline operating on a stream of independent json terms. The filter is reapplied to every element from the stream. Streams != lists; the latter are just a data type. `.` always points at the current element of the stream. Functions like `select` operate on separate items of the stream, while `map` operates on individual elements of a list. If you want a `map` over all elements of the stream: that's just what jq is, naturally :)
stream of a single element which is a list:
echo '[1,2,3,4]' | jq .
# [1,2,3,4]
unpack the list into a stream of separate elements:
echo '[1,2,3,4]' | jq '.[]'
# 1
# 2
# 3
# 4
echo '[1,2,3,4]' | jq '.[] | .' # same: piping into `.` is a NOP
only keep elements 2 and 4 from the stream, not from the array--there is no array left after `.[]`. You can also keep the array, or map over individual elements of the stream instead; see the commands below. This is how you can do things like filtering: `select` creates a nested "scope" for the current element in its parens, but restores the outer scope when it exits. Hope this helps someone else!
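For instance (the even-number filter here is my own illustration, not the original commenter's):
echo '[1,2,3,4]' | jq '.[] | select(. % 2 == 0)' # filter the stream
# 2
# 4
echo '[1,2,3,4]' | jq 'map(select(. % 2 == 0))' # same filter, but keep the array
# [2,4]
echo '[1,2,3,4]' | jq '.[] | . * 10' # map over elements of the stream: just apply a filter to each
# 10
# 20
# 30
# 40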
Doesn't the command-line utility `jq` already define a protocol for this? How do the syntaxes compare?
(LLMs are already very adept at using `jq`, so I would think it preferable to be able to prompt a system that implements querying inside of source code with "this command uses the same format as `jq`")
Oh wow, it got undeleted. Some editor insisted on deleting it because it was a "personal project" (Stephen Dolan's) even though it has a huge user base. I guess now that it has a proper "org" in GitHub it's different. What nonsense.
It's likely because there's a citation in a paper. That's apparently the bar you need to reach to get Wikipedia to see something as significant enough. I tried to get a draft article about SourceHut ( https://sourcehut.org/ ) to be published after extensive improvements and they refused because there weren't enough third party links. This is despite the fact there's like a dozen pages in Wikipedia about software that is hosted on SourceHut, so it seems notable enough?
Seriously, Wikipedia has been of immense value to society and education.
Yes there are issues with ideologically motivated moderators, poorly cited articles, etc. But even with its flaws, it's an amazing resource provided to the public for free (as in coffee and maybe as in speech also).
You just have to wrap your mind around jq. It's a) functional, b) has pervasive generators and backtracking. So when you write `.a[].b`, which is a lot like `(.a | .[] | .b)` what you get is three generators strung together in an `and_then` fashion: `.a`, then `.[]`, and then `.b`. And here `.a` generates exactly one value, as does `.b`, but `.[]` generates as many values as are in the value produced by `.a`. And obviously `.b` won't run at all if `.a` has no values, and `.b` will run for _each_ value of `.a[]`. Once you begin to see the generators and the backtracking then everything begins to make sense.
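For instance (input is my own):
echo '{"a":[{"b":1},{"b":2}]}' | jq '.a[].b'
# 1
# 2
And if `.a` were an empty list, `.[]` would generate nothing, so `.b` would never run and there would be no output at all.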
I read the man page of `jq` and learned how to use it. It's quite well-written and contains a good introduction.
I've observed that too many users of jq aren't willing to take a few minutes to understand how stream programming works. That investment pays off in spades.
I'm a big fan of jq but won't credit its man page with much. There were (ineffable) insights that I picked up through my own usage over time, that I couldn't glean from reading the man page alone. In other words, it's not doing its best to put the correct mental model out for a newish user.
Also, LLMs are good at spitting out filters, and you can learn what those filters do by then going and looking up each piece in the docs. They often apply things in far more interesting and complex ways than the docs at jqlang.org do, which are often far too “foo bar baz” tier to truly convey the power of things.
Maybe the author would be in a better place to do that, having the expertise already. Also, as a user I'm quite happy with jq already, so why expend the effort?
I use `jsonata` currently at work. I think it's excellent. There's even a limited-functionality Rust lib (https://github.com/Stedi/jsonata-rs). What I particularly like about `jsonata` is its support for variables; they're super useful in a pinch when a pure expression becomes ugly, unwieldy, or redundant. It also lets you "bring your own functions", which lets you do things like:
```
$sum($myArrayExtractor($.context))
```
where `$myArrayExtractor` is your custom code.
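As for the variables, a sketch of what they look like (the field names here are invented) when you want to avoid repeating a subexpression:

```
(
  $shipped := [ $.orders[status = "shipped"] ];
  $count($shipped) = 0 ? 0 : $sum($shipped.total)
)
```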
---
Re: "how did it go"
We had a situation where we needed to generate EDI from json objects, which routinely required us to make small tweaks to data, combine data, loop over data, etc. JSONata provided a backend framework for data transformations that reduced the scope and complexity of the project drastically.
I think JSONata is an excellent fit for situations where companies need to do data transforms, for example when it's for the sake of integrations from 3rd-party sources; all the data is there, it just needs to be mapped. Instead of having potentially buggy code as integration, you can have a pseudo-declarative jsonata spec that describes the transform for each integration source, and then just keep a single unified "JSONata runner" as the integration handler.
It made my life a lot easier.
It's nice because we can just put the JSONata expression into a db field, and so you can have arbitrary data transforms for different customers for different data structures coming or going, and they can be set up just by editing the expression via the site, without having to worry about sandboxing it (other than resource exhaustion for recursive loops). It really sped up the iteration process for configuring transforms.
I have a similar use case in the app I'm working on. Initially I went with JSONata, which worked, but resulted in queries that indeed felt more like incantations and were difficult even for me to understand (let alone my users).
I then switched to JavaScript / TypeScript, which I found much better overall: it's understandable to basically every developer, and LLMs are very good at it. So now in my app I have a button wherever a TypeScript snippet is required that asks the LLM for its implementation, and even "weak" models one-shot it correctly 99% of the times.
It's definitely more difficult to set up, though, as it requires a sandbox where you can run the code without fears. In my app I use QuickJS, which works very well for my use case, but might not be performant enough in other contexts.
Most alternatives being talked about operate on query strings (like `$.phoneNumbers[:1].type`), which is fine but cannot be easily modeled or modified by code.
Things like https://jsonlogic.com/ work better if you wish to expose a REST API with a defined query schema, instead of accepting a query `string`. This seems better in that you have both a string format and a concrete JSON format, plus APIs to convert between them.
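For instance, a filter like "age over 21 and city is New York" is itself just JSON (example mine):

```
{"and": [
  {">": [{"var": "age"}, 21]},
  {"==": [{"var": "city"}, "New York"]}
]}
```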
Also, if you are building a filter interface, having a structured representation helps:
https://react-querybuilder.js.org/demo?outputMode=export&exp...
`mapValues(mapKeys(substring(get(), 0, 10)))`

This is all too cute. Why not just use JavaScript syntax? You can limit it to the exact amount of functionality you want for whatever reason it is you want to limit it.
Cool idea! Although without looking closer I can't tell if "meme" is in reference to the technical or the colloquial meaning of meme.
Admittedly I don't know that much about LLM optimization/configuration, so apologies if I'm asking dumb questions. Isn't the need to copy/paste that prompt in front of your queries a huge drag on net token efficiency? Like wouldn't you need to do some hundred/thousand query translations just to break even? Maybe I don't understand what you've built. Cool idea either way!
Thank you. That script prompt is just for development and exploration. A production model needs to be trained/fine-tuned on Memelang first. We're working on this now. The math says we can deliver a model 1/2 the size of an equivalent model for SQL.
If you prefer JSONPath as a query language, oj from https://github.com/ohler55/ojg provides that functionality. It can also be installed with brew. (disclaimer, I'm the author of OjG)
Helpful when querying JSON API responses that are parsed and persisted for normal, relational uses. Sometimes you want to query data that you weren’t initially parsing or that matches a fix to reprocess.
speaking of classic databases: can anyone explain to me, a dummy, why any syntax like this or even GraphQL is preferable to "select a.name, a.age from friends a where a.city = 'New York' order by a.age asc"?
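(For comparison, the rough jq spelling of that query, assuming `friends` is a top-level array, would be something like `[.friends[] | select(.city == "New York") | {name, age}] | sort_by(.age)`.)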
There is a standard in RFC 9535 (JSONPath)[1]. But as far as I can tell, it isn't very widely used, and it has more limited functionality than some of the alternatives.
the issue with JSONPath is that it took 17 years for it to become a properly fleshed-out standard. The original idea came from a 2007 blog post [0], which was then extended and implemented subtly differently dozens of times, with the result that almost every JSON Path implementation out there is incompatible with the others.
The AWS CLI supports JMESPath (https://jmespath.org) for the `--query` flag. I don't think I've run into anything else that uses it. Pretty similar to JSONPath IIRC.
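Typical usage looks something like:
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'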
Plus, I feel like most, if not all, higher-level languages already come with everything you need to do that easily. Well, except for Go, which requires you to write your own filter function.
jq is good but its syntax is strangely unmemorizable. Have used it for a decade and always need to look at the manual or at examples to refresh my knowledge.
Interesting. But it looks like it requires a JSON object. My query language works on top of LINQ, which makes it compatible with ORMs, IEnumerable, and IQueryable.
I hate jq as much as the next guy but it’s ubiquitous and great for this sort of thing. If you want a single path style query language I’d highly recommend JsonPath. It’s so much nicer than jq for “I need every student’s gpa”.
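For that example, the JsonPath is roughly `$.students[*].gpa` (or just `$..gpa` with recursive descent), assuming a top-level `students` array.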
Just use jq. None of the other ones are as flexible or widespread and you just end up with frustrated users.
Which isn't to say jq is the best or even good, but it's battle-tested and just about every conceivable query problem has been thrown at it by now.
Kudos for all the work, it's a nice language. I find writing parsers a very mind-expanding activity.
am I missing something?
To your point, abstractions often multiply and then hide the complexity, creating a facade of simplicity.
[1] https://datatracker.ietf.org/doc/html/rfc9535
[0] https://goessner.net/articles/JsonPath/
it might just be a very limited subset?
I implemented one day of advent of code in jq to learn it: https://github.com/ivanjermakov/adventofcode/blob/master/aoc...