As a /senior/ developer I really dislike blanket statements. I've seen the same number of failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as by experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach to trying out new things would be different than it would be for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways eager/very open seniors can drive systems into hard-to-get-out-of corners. But then there are people who claim PHP5 is all you need.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
That doesn't sound as good in meetings. The person who can cut scope and get everyone to the "we did it" back patting phase makes everyone feel warm and cozy.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
This is where good leadership in the dev team is needed.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close off this sprint due to this engineering initiative.
If the development team's and product team's goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
For a large enough problem you need a combination of enough skill (to do the job), enough foresight (to know what will likely go wrong and how much error budget you need), and skin in the game (so you don't just cut the things that sound good, but keep what is truly needed). If you don't have all three of these, you are usually just talking out of your ass.
Agreed. Context matters. As a senior developer you need to understand complexity, risk, upsides and downsides. Understand the business side.
Whether you are a startup or a big company that is already a cash cow makes a difference when changing a core feature of the product, etc... context, context, context.
A sort of survivorship bias. A VP ordered us to use Elasticsearch, because it worked well at his company before. Turned out it worked well for us. Listen to the VP to make technical decisions. And use Elasticsearch.
Reminds me of when the ELK stack was called just ELK (idek what it is now). We had a server we put it on, and after making the additional dashboards my manager wanted, we learned the limits of ES / ELK. It needs a ridiculous amount of memory, because it will shove everything in memory. Same thing when I learned that MongoDB indexing puts every item in memory as well, which is a yikes: why would you not want to index?
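To make the index-memory trade-off concrete, here is a minimal illustrative sketch, not actual MongoDB or Elasticsearch internals: a hash index answers lookups in roughly constant time, at the cost of extra memory proportional to the data size; without it, every query is a full collection scan.

```python
# Illustrative only: why engines keep indexes in RAM.
import random
import time

docs = [{"_id": i, "sku": f"sku-{i}"} for i in range(200_000)]

# The "index": extra memory, roughly one entry per document.
index = {d["sku"]: d for d in docs}

target = f"sku-{random.randrange(200_000)}"

t0 = time.perf_counter()
by_scan = next(d for d in docs if d["sku"] == target)  # collection scan
scan_s = time.perf_counter() - t0

t0 = time.perf_counter()
by_index = index[target]                               # index lookup
index_s = time.perf_counter() - t0

assert by_scan is by_index
print(f"scan {scan_s:.4f}s vs index lookup {index_s:.6f}s")
```

Run it a few times: the scan cost grows with the collection, while the index lookup stays flat, which is exactly why "shove it all in memory" is the default.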
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
There's no high-performance database that won't take all of your memory (at least up to the size of your data) if you let it.
That's because it's much, MUCH faster to do it that way, though if you can accept certain types of latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
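The latency-for-memory trade being described can be sketched in a few lines (illustrative only, not any particular database's design): bound the cache, and every eviction turns a fast hit into a slower recompute or fetch.

```python
# A bounded cache caps RAM usage; the price is extra misses.
from functools import lru_cache

calls = {"expensive": 0}

@lru_cache(maxsize=2)  # small bound: less memory, more misses
def fetch(key: str) -> str:
    calls["expensive"] += 1  # stands in for a disk or network round trip
    return key.upper()

for k in ["a", "b", "a", "c", "a"]:
    fetch(k)

# With maxsize=2 the pattern above misses on a, b, c and hits twice.
print(calls["expensive"])  # 3 expensive calls instead of 5
```

A larger `maxsize` would drive the miss count down at the cost of memory, which is the whole dial systems like the ones mentioned above let you turn.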
Most proofs of concept I've seen get traction have turned into production.
A rewrite?
I recall a few times when everyone promised that if this got promoted, we would rewrite it from zero. It never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right. There are companies, two of them are very popular these days, they took it to an extreme. You ship something fast, and since it only scales linearly you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks, how's that sustainable, what's the way out of this? Those are fired, so whoever's left is a believer.
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
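One kind of safety net meant here can be sketched as a randomized, property-based-style test: cheap to draft with AI assistance, cheap to run in CI, and it checks invariants over many inputs rather than a handful of hand-picked cases. `normalize_tags` is a hypothetical helper standing in for real business logic.

```python
# Sketch of an AI-draftable stabilisation test: assert invariants
# over hundreds of randomized inputs (stdlib only, seeded for CI).
import random

def normalize_tags(tags):
    """Hypothetical production helper: strip, drop empties, lowercase, dedupe, sort."""
    return sorted({t.strip().lower() for t in tags if t.strip()})

def test_normalize_tags_invariants(rounds: int = 500) -> None:
    rng = random.Random(42)  # fixed seed keeps CI deterministic
    alphabet = ["Api", " api", "DB", "db ", "", "Web"]
    for _ in range(rounds):
        tags = [rng.choice(alphabet) for _ in range(rng.randrange(0, 8))]
        out = normalize_tags(tags)
        assert out == sorted(out)                # always sorted
        assert len(out) == len(set(out))         # no duplicates
        assert all(t == t.lower() for t in out)  # lowercased
        assert normalize_tags(out) == out        # idempotent

test_normalize_tags_invariants()
print("ok")
```

The point is the shape, not this particular function: invariants plus generated inputs are the kind of review-friendly, high-volume checking a careful senior can delegate to AI without delegating judgment.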
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation done on the senior side, red-team/blue-team style.
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500-person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity"; it's "add complexity when it makes sense", and even that question has a lot of nuance, because sometimes the business just needs to survive for another couple of months.
Right, prioritization and transparency allow you to change the variables that people should be using to solve a problem (and if it doesn't, they are not good at the job) - if you have two hours before a storm comes, you will be asking "will it take on enough water that I can't bail it out?" instead of thinking about your architecture.
The problem I see is that management plays games by not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of what I consider the differentiators between a senior and a staff+ developer.
“Complexity” understood as the immediate first impression a junior gets looking at some arbitrary facet is always “too much” and always bad.
“Complexity” understood as what’s going to make development on this system fly easy and fast for the next 10 man-years de facto means side-stepping where naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
TRADEOFFS! I think this is IT. Non programmers imagine there aren't tradeoffs. As a programmer one should eventually realise that every possible aspect of design is a tradeoff.
They all influence each other to one extent or another.
And, the Cynefin Framework defines “complexity” a bit differently than the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefin labels as “complicated”.
The Cynefin complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefin complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefin is understanding that each of the domains has its own way of solving things, and that oftentimes people use solutions and methods from one domain to solve problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
Even with AI, there is a clear difference between juniors and seniors.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
The polarization of speed vs scale concerns on a team is interesting.
Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast but developers are more important than ever in making sure the non-functional aspects are taken care of
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
The best strategy is to frame your argument from the perspective of the customer:
> This will allow us to deploy the feature in only X days supporting Y use case with Client W, who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
Are much less likely to succeed in the business contexts that I have experienced so far.
What I've found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your LinkedIn profile; they don't look at you as a possible source of knowledge and expertise.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
Exactly my experience. You describe it more diplomatically than I do hah.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts and none of history, politics, communication etc
I think a lot of it is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen.
I found that the proposers of features "want everything" because they don't know what is critical - they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
Unfortunately that's not the case. There are many senior and above level engineers out there who are unskilled communicators but very technically skilled.
This is well-put, but the problem comes when you’ve got leadership looking at what appears to be a fully-functioning version of the product that the market is clearly indicating to them is sufficient to drive revenue. Budgeting the 6 weeks or whatever to translate from “the working version” to “the trustworthy version” is a hard pitch.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
I agree with the author's premise - that one feedback loop optimizes for speed, and the other for scale - but I don't think the market is bearing out the conclusion - that AI should be utilized to enable more rapid experimentation, where we better scale what works.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that hastily generated AI features cause customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
The unspoken observation about why this happens: it is almost always political, people in the organization making themselves more valuable and harder to fire or lay off.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company an extra amount of $$$. So it is tempting to lay them off in favour of a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
I’m inclined to take the author at their word that they’re a copywriter by trade.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
The written word is how people interact with LLMs. Clarity and precision in writing results in more effective prompting of LLMs. It is just as possible that leaning heavily on AI writing will be seen as a marker of not being natively skilled enough at writing to prompt LLMs effectively, because of the GIGO principle.
There's no fundamental reason that I have to read random blogposts from people I don't know. I do it today because I find it to be an enjoyable way to learn more about my profession and explore various perspectives on it. If I stop finding it enjoyable because too many people write their posts with AI, I'll stop reading these kind of blogs altogether, in the same way that I (and I suspect many commenters here) do not read even the most lovingly crafted Linkedin posts.
I'm either the biggest idiot in the world or this person is a terrible "copywriter". I found this post to be nearly unintelligible: "You can’t explain away someone else’s problem using your own problems." WTF does that mean? This would be a good place to put some very simplistic examples of what they mean, but they don't. Is that because they're trying to be succinct? Clearly not, as the post rambles on and on anyway. I hate posts that both 1. don't explain their concept and 2. are super long-winded. That's a problem.
Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature, use senior devs to develop and maintain the real products"? You could just say that then...? Which I also disagree with as how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).
FTA: “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers that wanted to complain about IT, cybersecurity, DevOps, cloud architects for getting in their way, and if they only had administrator access then they could get everything done themselves because they are experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
> Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice in a matter of a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
> Ah, well, it can’t yet do the one thing senior developers still do. Take responsibility.
If only higher-ups would recognize that. Instead we see left and right mass layoffs, restructurings and clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
> Yes, yes, of course this is simplistic.
It's an example, put to the extreme, to clearly communicate the ideas. As with all things, the golden mean applies, which I understand the article argues for:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
The qualities were highlighted because they can all lead to better stability.
Old quote: "There is nothing so permanent as a temporary hack."
Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast but developers are more important than ever in making sure the non-functional aspects are taken care of
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for awhile, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
Are much less likely to succeed in the business contexts that I have experienced so far.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts and none of history, politics, communication etc
I think a lot of is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily-generated AI features are causing customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
That includes gate-keeping behaviour such as not handing off knowledge, running sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage no longer works in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company extra $$$. So it is tempting to lay them off and replace them with a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
There will be different shades of usage and maybe we draw a line somewhere in there.
So even if AI was not used to write an article, it could "smell" like AI to someone who consumes less of it.
Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature; use senior devs to develop and maintain the real products"? You could just say that, then...? Which I also disagree with as a prescription for how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers who wanted to complain about IT, cybersecurity, DevOps, and cloud architects getting in their way, insisting that if they only had administrator access they could get everything done themselves because they are experts in networking and everything else? Well, those developers are about to have the worst day ever, now that every single person on the planet can generate code and will be an "expert" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice in a matter of a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
If only higher-ups would recognize that. Instead we see mass layoffs and restructurings left and right, driven by clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.