- I admire your honesty in telling this. AI is quickly becoming something you'd rather hide.
- I would never want to work at such a company. If I wanted to engineer in human language, I would be a politician or a manager. If I wanted to babysit an automaton, I would have been a factory worker.
> If I wanted to babysit an automaton, I would have been a factory worker.
I wonder if horse carriage drivers said the same thing about the advent of cars. Telling the LLM to build me a login page, instead of laboriously looking up example code in docs and retyping Stack Overflow snippets, is definitely a different way of working, but thinking that makes someone a politician seems like a bit of a stretch.
Cars are reliable, and when they break down we can fix them ourselves. I can do the same with any compiler or tool. AI is, and always will be, a black box that could hallucinate at any moment.
You could say the same about the software embedded in new cars, but I would reply that those are the kind of cars I wouldn't want to drive. Also, car makers have a legal responsibility to make sure the car behaves well on the road. AI companies have no such responsibility and put a lot of hidden stuff (like censorship) into their products, which makes them unacceptable to me. LLMs are unreliable tools by definition, which is not a good thing compared to the tools I use all the time.
The difference is determinism. A technically inclined person wants to build things from first principles. In that respect, assembly and a higher-level language are the same.
Now, most humans are social beings and would rather play social "games" with language. That is why technical people used to be called nerds: they are the exception. Engineers at heart (those who chose the field themselves rather than out of economic pressure) love the technical-reasoning part of their brain.
Now, a stochastic model that may lie to you, or respond differently depending on how you word it today, is a completely different kind of work. In principle it is not engineering, but rather some kind of managing or influencing.
I get the discomfort — I felt the same early on. But I think there’s a misunderstanding of what’s actually happening under the hood with modern code-focused LLMs.
We’re no longer in the realm of vague completions. Models like DeepSeek or Claude 3.7 aren’t just stochastic parrots — they operate like abstract interpreters, capable of holding internal representations of logic, system design, even refactoring strategies. And when you constrain them properly — through role separation, test feedback, context anchoring — they become extremely reliable. Not perfect, but engineerable.
What you describe as “managing” or “influencing” is, in our case, more like building structured interpreter stacks. We define agent roles, set execution patterns, log every decision, inject type-checked context. It’s messy, yes, but no more magical than compiling C into assembly. Just at a radically higher level of abstraction.
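To make "inject type-checked context" concrete, here is a minimal sketch of what that gate can look like, assuming zod for the runtime check; the roles and field names are illustrative, not our exact stack:

```ts
// Minimal sketch of type-checked context injection (illustrative only).
import { z } from "zod";

// The context an agent is allowed to see, declared up front.
const AgentContext = z.object({
  role: z.enum(["architect", "implementer", "reviewer"]),
  task: z.string(),
  constraints: z.array(z.string()),
  priorDecisions: z.array(z.object({ id: z.string(), summary: z.string() })),
});
type AgentContext = z.infer<typeof AgentContext>;

// Every call goes through one gate: validate, log the decision, then hand off to the model.
async function callAgent(
  raw: unknown,
  complete: (prompt: string) => Promise<string>
): Promise<string> {
  const ctx = AgentContext.parse(raw); // throws if the context is malformed
  console.log(JSON.stringify({ event: "agent_call", role: ctx.role, task: ctx.task }));
  const prompt = [
    `You are the ${ctx.role}.`,
    `Task: ${ctx.task}`,
    "Constraints:",
    ...ctx.constraints.map((c) => `- ${c}`),
  ].join("\n");
  return complete(prompt);
}
```

The "prompting" half sits behind the same validation and logging discipline you would apply to any other API boundary.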
There’s a quote that captures this well. In March 2024, Jensen Huang (NVIDIA CEO) said:
“English is now the world’s most popular programming language.”
That’s not hyperbole. It reflects a shift in interface — not in intent. LLMs let us program systems using natural abstractions, while still exposing deterministic structure when designed that way.
To me, LLMs are not the death of engineering. They’re the beginning of a new kind.
I truly believe the next 10 years will make most traditional programming languages obsolete.
We’ll go from prompt → code to prompt → compiled binary, bypassing syntax entirely.
Thanks for following up, maybe I can learn something. I wonder what you mean by a "shared context layer"? Do you run everything locally on big rigs, and did you train your own models?
The idea I've got now is that you let general off-the-shelf AI models role-play, and one hands work over to the other? But how would you let those use a shared context layer that is also typed? How is feedback organized in that process?
Great questions, and yes, you've got the right intuition: we orchestrate role-specific agents using off-the-shelf LLMs (Claude 3.7, DeepSeek, GPT-4.1, GPT-4 Turbo), and they "hand off" tasks between each other. But to avoid total chaos (or hallucinated collaboration), we had to build a few things around them.
The “shared context layer” is essentially a lightweight memory and coordination layer that persists project state, intermediate decisions, and validated outputs. It’s not a traditional vector store or RAG setup. Instead, we use:
• A Redis-backed scratchpad with typed slots for inputs, constraints, decisions, outputs, and feedback
• An MCP (Model Context Protocol) template that defines what agents should expect, expose, and inherit
• A structured per-agent payload: each agent works statelessly but receives only the relevant validated history, filtered to reduce noise (rough sketch below)
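To give a rough idea of what "typed slots" means in practice, here is a simplified sketch on the Node side (field names and key layout are illustrative, not our actual schema), using ioredis:

```ts
// Simplified sketch of the shared scratchpad (illustrative, not our exact schema).
import Redis from "ioredis";

interface ScratchpadSlots {
  inputs: { spec: string; acceptanceCriteria: string[] };
  constraints: string[];
  decisions: { id: string; author: string; summary: string }[];
  outputs: { artifactId: string; kind: "code" | "test" | "doc" }[];
  feedback: { artifactId: string; verdict: "pass" | "fail"; notes: string }[];
}

const redis = new Redis(); // defaults to localhost:6379

// One key per project and slot; values are JSON, so every write stays typed at the edges.
async function writeSlot<K extends keyof ScratchpadSlots>(
  projectId: string,
  slot: K,
  value: ScratchpadSlots[K]
): Promise<void> {
  await redis.set(`project:${projectId}:${slot}`, JSON.stringify(value));
}

async function readSlot<K extends keyof ScratchpadSlots>(
  projectId: string,
  slot: K
): Promise<ScratchpadSlots[K] | null> {
  const raw = await redis.get(`project:${projectId}:${slot}`);
  return raw ? (JSON.parse(raw) as ScratchpadSlots[K]) : null;
}
```

Nothing exotic; the point is that every read and write goes through a small, typed surface instead of free-form strings.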
Agents don’t have full access to each other’s output logs (too much context = hallucination risk). Instead, each one produces an “artifact” + optional feedback object. These go into the shared layer, and the orchestrator decides what the next agent should receive and in what form.
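Conceptually, the handoff itself looks something like this (again a simplified sketch; the real orchestrator does more filtering, retries, and validation):

```ts
// Sketch of the artifact/feedback handoff (illustrative shapes, not our exact types).
interface Artifact {
  id: string;
  producedBy: string; // agent role that produced it
  kind: "code" | "test" | "doc";
  content: string;
  validated: boolean;
}

interface Feedback {
  artifactId: string;
  verdict: "pass" | "fail";
  notes: string;
}

// The orchestrator decides what the next agent sees: validated artifacts only,
// plus any failing feedback it needs to act on. No raw logs from other agents.
function buildNextPayload(role: string, artifacts: Artifact[], feedback: Feedback[]) {
  const validated = artifacts.filter((a) => a.validated);
  const openIssues = feedback.filter((f) => f.verdict === "fail");
  return {
    role,
    artifacts: validated.map(({ id, kind, content }) => ({ id, kind, content })),
    openIssues,
  };
}
```

So the shared layer is less a memory in the model sense and more a curated briefing the orchestrator assembles for each call.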
We don’t run anything locally (yet). It’s all API-based for now, with orchestration handled in a containerized layer. That will probably evolve if we scale into more sensitive verticals.
Hope that helps clarify. Happy to dig deeper if you're building something similar.
What do you mean by backup plan? We produce proper code, like Node.js or similar, that is backed up and proceeds through a normal pipeline. Just the production of the code is different.
Just like you, I think prompting LLMs to produce code for us is the future of the profession. Not that I'm necessarily a fan; this is just how I see the reality of it. The person I'm replying to feels this ruins the profession for them, if I'm reading their comment right. Hence the question.
Edit: Oh, you're the post's author. Thanks for sharing and I hope the business is going strong.
I don't understand why you're attracting downvotes; I tried to upvote you. But to answer your question:
I have no real backup plan, but I can see that my (and my peers') knowledge, design sensitivity, and architectural skills will become an even scarcer asset, especially when there is a surge of vibe-coded projects.
In the case of OP, however, I think he has found a niche (I assume) in which, between deep applications and throwaway code, the balance tilts toward the latter. So this is the domain of MS Power Apps, low-code prototypes, and Power BI reports. And so, potentially, his personnel were already more inclined not to dislike how the nature of their work changed.
You’re absolutely right that there’s a spectrum between deep applications and throwaway code. But I wouldn’t place what we’re doing in the Power Apps / low-code / Power BI category.
The systems we’re building with LLMs (at Easylab AI) aren’t quick prototypes or business dashboards — they’re fully functioning SaaS platforms with scalable backends, custom business logic, API orchestration, test coverage, and long-term maintainability. The difference is: they’re authored through agents, not typed from scratch.
And to your point about design sensitivity and architecture becoming scarce — I couldn’t agree more.
When LLMs handle 80% of the syntactic work, what’s left is the hard stuff: system thinking, naming, sequencing, interfaces, data flows. That’s exactly where our team shifted: less “builders,” more “designers of builders.”
It’s not easier work — it’s just a different level of abstraction.
Thanks for the reply, sincerely. It’s good to talk about this without defaulting to hype or fear.
For your second point, I understand your position, but I strongly believe that it's the future of coding. Coding was a way to translate machine language into something more understandable; AI coding is simply the next step.
How do your devs feel about this in regards to their career? Are they worried about their DSA/coding skills atrophying? Not knocking, just genuinely curious.
Great question — and one we took seriously early on.
At first, there was some skepticism, and even a bit of anxiety. When we said, “We’re going full AI-assisted development,” the natural reaction was: “What does that mean for my skillset?”
But here’s what happened in practice:
Most of the repetitive tasks — CRUD, glue logic, API boilerplate — disappeared. Instead, devs started focusing on system design, agent orchestration, prompt engineering, constraint writing, testing strategy, and overall architecture.
And they’re thriving.
Nobody’s DSA muscles are atrophying — they’re just being used differently.
If anything, they’ve gained new skills that aren’t widely available yet:
how to design workflows with stochastic tools, how to debug agent behavior, how to build structured memory into LLM stacks. These are things you won’t find in textbooks yet, but they’re very real problems — and deeply technical.
And let’s be real: you don’t forget how to reverse a linked list just because you stopped manually writing route handlers for user creation.
In short: the devs that leaned into it have grown faster, not slower.
And the ones who felt it wasn’t for them — they moved on. Which is fine.
Every shift in tooling brings a kind of Darwinian filtering.
It’s not about better or worse, just about who’s willing to adapt to a new abstraction layer.
And that’s always been part of how tech evolves.