People here are laughing, of course, but I do think there's a deeper truth behind this that's worth exploring:
> A Docker image is a piece of executable code that produces some output given some input.
The ideas behind containerization and sandboxing are rather closely related to functional programming and controlling side effects. If binaries always only read stdin and wrote to stdout, we wouldn't need sandboxes – they would be pure functions.
In the real world, though, binaries usually have side effects, and I really wish we could control those in a more fine-grained manner. Ideally, binaries couldn't just do anything by default but would have to declare all their side effects (e.g. accessing env variables, config, state, caches, logs, DBus/X server/Wayland sockets, user data, shared libraries, system state, …), so that I could easily put them in a sandbox tailored to them.
Conversely, I'm waiting for the day when algebraic effects are so common in programming languages that I can safely execute an untrusted JavaScript function because I have tight control over what side effects it can trigger.
I too really want algebraic effects, but the majority are panicans and exceptionauts. PLs will always cater to the majority. We still don't have proper tail calls because the Luddites want their stack metaphors. Javanese & Gophers can't conceive of a world of pattern matching, sum types, and null/nil-safe code. Now we have machines writing code for us. It's over.
I'm sure that would work just like how phone apps asking for permission to do each thing has resulted in no phone user ever getting pwned and doxxed by their apps - phone apps are completely safe now, yay!
I think we're talking about fundamentally different things. :) You're talking about the UX of granting permissions, I'm talking about how permissions get implemented at the technical level, irrespective of how you arrived at them.
Surely your proposed solution is not "Don't implement a permission system to begin with"?
I guess what I am saying is at the end of the day you need the program to do the thing. Whatever mutation it needs to do to accomplish the task, that's what you're going to allow. That's exactly what happens with phone app permissions. Everybody just lets Facebook use their microphone (not me of course, but most people).
What you describe would be super cool though. If every program let you know ahead of time what it was going to try to read and write in the world. That does indeed sound useful!
The best kind of absurd experiment, pushing the limits of technology and reason, just to see if it can be done. The adventurous spirit of "What if?" and "Why not!" I love when such an idea is implemented seriously, like having a CI action to test a factorial function. I shudder at its monstrous beauty.
Is there a spark of practical potential? It's intriguing to imagine how a Docker-like container could be a language primitive, as easy to spin up as a new thread or worker. Not sure what advantage that'd bring, or any possible use case. It reminds me of…
2.1 Xappings, Xets, and Xectors
All parallelism in Connection Machine Lisp is organized around a data structure known as the xapping (pronounced “zapping,” and derived from “mapping”). Xappings are data objects similar in structure to arrays or hash tables, but they have one essential characteristic: operations on the entries of xappings may be performed in parallel.
well, sure, that uses a large number of processing cycles for each small operation. But asking a frontier LLM to evaluate a Lisp expression is more or less on the same scale (it's an interesting empirical question which uses more). And if we count the brain-neuron-level operations it would take to evaluate one mentally…
honestly the isolation here is kind of interesting. each call gets a clean env - no shared state, no "function A leaked into B's heap" bugs. pure functions taken to the logical extreme.
cgi was mocked for fork-per-request. lambda said sure, vm-per-invocation. this is just the next step down that road
I get that it's a shitpost, but if you want to take this at all seriously: a Linux container is just a Linux process in its own namespaces, separate from the namespaces of its parent (or at least from PID 1's). If you're not actually doing anything that requires OCI base images and layering (i.e., like any other sane program, all your functions have the same dependencies), then spawn everything in the same mount namespace and just use the host. Then you don't need to mount the Docker socket recursively; you don't need Docker or a socket at all. This isn't really as crazy as developers think it is, because they think containers on Linux are just Docker. You can make system calls from within the Lisp runtime itself, including unshare, and bam: you've got a container per function call without needing to shell out and accept all the overhead of a separate container runtime.
Also why are the image builds hard-coded for amd64? Are you really doing anything here that can't be done on arm?
This loses the "feature" of being able to write builtins in different languages/operating systems/whatever. Either way, I think a serious version of this would use threads. Concurrency is the real potential benefit imo.
I was getting warnings without that line and don't know how else to fix it (this is my first time using Docker). A PR would be welcome if there's a better way.
Thinking Machines Technical Report PL87-6, Connection Machine Lisp: A Dialect of Common Lisp for Data Parallel Programming. https://archive.org/details/tmc-technical-report-pl-87-6-con...
https://github.com/a11ce/docker-lisp/actions/runs/2216831271...
500+ container invocations to compute factorial(3)