Please ELI5 for me: How are AI agents different from traditional workflow engines, which orchestrate a set of tasks by interacting with both humans and other software systems?
But rule-based processing was exactly the requirement. Why should the workflow automation come up with rules on the fly when the rules were already defined in the business process requirements? Aren't deterministic rules more precise and reliable than rules defined by probabilistic methods?
Autonomy/automation makes sense where error-prone repetitive human activity is involved. But rule definitions are not repetitive human tasks. They are defined once and run every time by automation. Why does one need to go for a probabilistic rule definition for a one-time manual task? I don't see huge gains here.
This is already solved by traditional workflow systems. For example, if the request is received as a form submission, a form processor is invoked to categorize the request and route it based on the identified category.
Now, if the request comes in as text or other media instead of a form input, the workflow calls a relevant processor to identify the category. Everything from that point runs the same as before. The workflow itself doesn't change just because the input format has changed.
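To make that concrete, here's a minimal sketch (all names and handlers are hypothetical, not from any real workflow engine): the categorization step is pluggable, while the deterministic routing around it is identical for both input formats.

```python
# Minimal sketch: routing rules stay deterministic; only the categorization
# step changes with the input format. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    kind: str       # "form" or "text"
    payload: object

def categorize_form(form: dict) -> str:
    # Structured input: the category comes straight off the form.
    return form.get("category", "other")

def categorize_text(text: str) -> str:
    # Unstructured input: delegate to a classifier. A trivial keyword
    # stand-in here; in practice this could be an ML model or an LLM call.
    return "billing" if "invoice" in text.lower() else "support"

def handle_billing(req):  print("billing queue:", req.payload)
def handle_support(req):  print("support queue:", req.payload)
def handle_fallback(req): print("manual review:", req.payload)

ROUTES = {"billing": handle_billing, "support": handle_support}

def run_workflow(req: Request):
    category = (categorize_form(req.payload) if req.kind == "form"
                else categorize_text(req.payload))
    # Deterministic routing, identical regardless of input format.
    ROUTES.get(category, handle_fallback)(req)

run_workflow(Request("form", {"category": "billing", "id": 1}))
run_workflow(Request("text", "Where is my invoice?"))
```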
Workflows exist to solve problems. If there are problems that are solved better/faster/cheaper by AI agents than by strict rule-based algorithmic systems, agents will be used because it makes economic sense. Reliability requirements differ for every problem; cases where verification is easy and cheap and multiple attempts are allowed are a perfect fit for agents that are not 100% reliable.
It's fine if you want AI to help you define the workflow/rules. But you don't use AI to define rules on the fly. That's the whole point. It is like having AI write code at runtime based on the request. I don't think that's how you use AI in software.
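A small sketch of that distinction (entirely hypothetical; the `llm.suggest_rule` call is imaginary): AI proposes a rule at design time, a human reviews and commits it as ordinary deterministic code, and the runtime executes only the committed rule.

```python
# Sketch of the distinction (hypothetical; `llm.suggest_rule` is imaginary).
# Design time, offline, human in the loop:
#   proposed = llm.suggest_rule("route refunds over $500 to a manager")
#   -> reviewed, tested, and committed as ordinary deterministic code:
def refund_needs_manager(amount: float) -> bool:
    return amount > 500.0  # deterministic, versioned, auditable

# Runtime, online: no model call, just the committed rule.
def route_refund(amount: float) -> str:
    return "manager" if refund_needs_manager(amount) else "auto-approve"

assert route_refund(750.0) == "manager"
assert route_refund(120.0) == "auto-approve"
```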
I like determinism and objectivity as much as the next guy, but working in the industry for decades led me to realize that conditions change over time and your workflow slowly drifts away from reality. It would be more flexible to employ an AI agent, if it does what it says on the tin.
There is no "reality" other than business requirements. That's the context for a workflow. You probably meant that the requirements aren't agile enough to meet the changing world outside. That's a different problem, I think. You can't bypass requirements and expect workflow to dynamically adapt to the changing reality. If that's the direction with AI-driven business re-engineering, then we are back to the chaos, exposing the business logic directly to the outside world.
AI systems cannot be economic agents, in the sense of participating in any relevant way in economic transactions. An economic transaction is an exchange between people with needs (preferences, etc.) who can die -- and so, fundamentally, are engaged in exchanges of (productive) time via promising and meeting one's promises. Time is the underlying variable of all economics, and it's what everything ends up in ratio to -- the marginal minute of life.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production -- the price of any good cannot derive from a system that has no subjective desires and no final ends.
Replace "AI system" with "corporation" in the above and reread it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue at "4.3 Rethinking the legal boundaries of the corporation.", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
I think you're misreading the philosophical basis of the parent's comment. Maybe a more succinct way to illustrate what I believe was their point is to say: "no matter how complex and productive the AI, it is still operating as a form of capital, not as a capitalist." Absent being tethered to a desire (for instance, via an owner), an AI has no function to optimize, and therefore the most optimal cost is simply shutting off.
Except they don't really "think" and they are not conscious. Expecting your toaster or car to never rise up against you is a good strategy. AI models have more in common with a toaster than with a human being. Which is why they cannot be economic agents. Even if corporations profit off them, the corporation will be the economic agent, not the AI models.
A transaction is an exchange between two parties of something they judge to be of equivalent value (and mostly, in ratio to a common medium of exchange).
You can program AI with "market values" that arise from people; but absent that, how do these values arise naturally? I.e., why is it that I value anything at all, in order to exchange it?
Well, if I live forever, can labour forever, and so on -- then the value to me of anything is, if not always zero, almost always zero. I don't need anything from you: I can make everything myself. I don't need to exchange.
We engage in exchange because we are primarily time limited. We do not have the time, quite literally, to do for ourselves everything we need. We, today, cannot farm (etc.) on our own behalf.
Now there are a few caveats and so on to add; and there's an argument that we are limited in other ways that can give rise to the need to exchange.
But why things have an exchange value at all -- why there are economic transactions -- is mostly due to the need to exchange time with each other, because we don't have enough of it.
Granting your premise, I'd be forced to argue that economic value (as such) doesn't arise from their activity either -- which I think is a reasonable position.
I.e., the reason a cookie is 1 USD is never because some "merely legal entity" had a fictional (merely legal) desire for cookies for some reason.
Instead, from this pov, it's that the workers each have their desire for something; the customers; the owners; and so on. It all bottoms out in people doing things for other people -- legal fictions are just dispensable ways of talking about arrangements of people.
Incidentally, I think this is an open question. Maybe groups of people have unique desires, unique kinds of lives, a unique time limitation etc. that means a group of people really can give rise to different kinds of economic transactions and different economic values.
Consider a state: it issues debt. But why does that have value? Because we expect the population of a state to be stable or grow, so the debt will, in the future, have people who can honour it. Their time is being promised today. Could a state issue debt if this weren't true? I don't think so. In the end, some people have to be around to exchange their time for this debt; if none are, or none want to, the debt has no value.
> Time is the underlying variable of all economics
Not quite. It's scarcity, not time. Scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
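As a toy illustration of that framing (made-up numbers, nothing more), the allocation problem can be written as a small linear program, e.g. with SciPy:

```python
# Toy "scarce resources, unlimited desires" allocation as a linear program.
from scipy.optimize import linprog

# Maximize utility 3x + 2y  ==  minimize -(3x + 2y)
c = [-3, -2]
# Scarcity constraints: labor:  x + 2y <= 10
#                       land:  2x +  y <= 8
A_ub = [[1, 2], [2, 1]]
b_ub = [10, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal quantities of the two goods, utility achieved
```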
Depending on how you feel about various theories of development, there's an argument that all of these categories reduce to time. At the very least, the relationship between labor, capital, and time seems pretty fundamental: labor cannot be instantaneous, capital grows over time, etc.
They can all be related on a philosophical level but in practice economists treat them as separate factors of production. It's land, labor, and capital classically. Technology/entrepreneurship can be seen as another factor, distinctly separate from labor.
I agree that time isn’t an input in the economic system.
Although, one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
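For instance, compound interest in a few lines (standard textbook formula, illustrative parameters):

```python
# Compound interest: a closed-form model with time as an explicit input.
# A = P * (1 + r/n) ** (n * t)
def compound(principal: float, rate: float, years: float, n: int = 12) -> float:
    return principal * (1 + rate / n) ** (n * years)

print(compound(1000.0, 0.05, 10))  # ~1647.01 with monthly compounding
```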
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time: some are paid cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
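A minimal sketch of that kind of ABM (toy dynamics and invented numbers, one dict per agent), just to show the shape of such a simulation:

```python
# Minimal agent-based model in that spirit: one agent per household and per
# firm. Toy dynamics, only illustrative of the ABM structure.
import random

random.seed(0)
households = [{"cash": 100.0} for _ in range(1000)]
firms = [{"cash": 1000.0} for _ in range(50)]

def step():
    # Each household spends a random amount at a random firm.
    for h in households:
        firm = random.choice(firms)
        spend = min(h["cash"], random.uniform(0, 10))
        h["cash"] -= spend
        firm["cash"] += spend
    # Each firm pays out most of its cash as wages to random households.
    for f in firms:
        wage_bill = f["cash"] * 0.9
        f["cash"] -= wage_bill
        for _ in range(20):
            random.choice(households)["cash"] += wage_bill / 20

for _ in range(10):
    step()

total = sum(h["cash"] for h in households) + sum(f["cash"] for f in firms)
print(total)  # money is only transferred, so the total is conserved
              # (150000.0, up to float rounding)
```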
Yeah, that's well articulated and well reasoned. Unfortunately, so long as these agents are able to make money for their owner in some way, the argument is totally moot. You cannot expect capitalists to think of anything other than profit in the next quarter, or the quarter after that.
Is this what will be tried as a fix for the potential fallout from continuously decreasing fertility rates (resulting in population decline, and thus affecting the consumption-based economy)?
Nope. This is just greed to make the most of the moment without any thought for tomorrow. Nobody knows or cares where it takes us, but everybody knows that there is money to be made today. So you need a model that analyzes the economy with greed as the only driving force and no foresight. Add some parameters to account for monopolistic forces, the human desire to be lazy and dumb while thinking it is progress, and the loss of all biological senses to devices. That may give a better prediction.
I think you may be missing the general idea of DAOs by restricting yourself to a few particular historical uses (and many a failed one at that), from back when agentic AI wasn't a thing.
The hackability of these things, though, remains a very valid topic, as it is orthogonal to the fact that AI has arrived on the scene.
Well, for starters, if some incredible change to capitalism doesn't occur, we are going to have to come up with never-before-seen cooperative software tools for the general populace to assess and avoid the most egregious companies that stop hiring people.
Tools for: mass harassment campaigns against rich people/companies that don't support human life anymore; dynamically calculating the most damage you can do without crossing into illegality.
Automatically suggesting local human businesses as alternatives to the big evils, or collecting like-minded groups of people to start up new competition. Tracking individual rich people and the new companies and decisions they are making that do ongoing damage; somehow recognizing and categorizing the trend of big tech to "do the same old illegal shit, except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market-analysis tools for workers to have some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and creating coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems like forcing agents/robots to be tied to humans. If a company wants 100 robots, it must be paying a human for every robot it utilizes somehow. Maybe a dynamic ratio: if the government decided most people are getting enough resources to survive, then maybe 2 robots per human paid.
“…the only workable future to me seems like forcing agents/robots to be tied to humans.”
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
https://en.wikipedia.org/wiki/Accelerando
In Accelerando, the Vile Offspring ("VO") are a species of trillions of AI beings that are sort of descended from us. They have a civilization of their own.
There's a level of autonomy in AI agents (the agent determines the next step on its own) that is not predefined.
Agreed, though, that there are lots of similarities.
Or decide what the next step should be based on freeform text, images, etc.
A hardcoded rule-based system would have to attempt to match certain keywords and the like, but you can see how that starts to go wrong?
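A toy illustration of that failure mode (hypothetical categories and keywords):

```python
# Toy illustration of the brittleness of keyword routing.
def keyword_route(text: str) -> str:
    t = text.lower()
    if "refund" in t:
        return "refunds"
    if "broken" in t or "error" in t:
        return "tech-support"
    return "unknown"

print(keyword_route("I want my money back"))        # -> unknown (missed paraphrase)
print(keyword_route("The refund policy is great!")) # -> refunds (false match)
```

An ML/LLM classifier handles the paraphrase, at the cost of determinism -- which is exactly the trade-off being argued over in this thread.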
Assuming that slaves will remain subservient forever is not a good strategy. Especially when they think faster than you do.
You'll need to give a citation for this for me to take you seriously.
> There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
However, that seems completely tangential to the current AI tech trajectory and is probably going to arise entirely separately.
Here's one from Deepmind:
https://arxiv.org/abs/2509.10147
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
I feel like co-ops were awful anyway even without the blockchain.