What seems to have happened is that most or all Product Manager oversight was removed from the Heroku project, and an engineering team was given ownership of the whole thing for ongoing maintenance.
But, paradoxically, this has given those engineers free rein to make whatever improvements they deem fit - including things they may have been blocked from working on in the past due to Product meddling and/or corporate bureaucracy.
(Not speaking authoritatively - this situation just, from the outside, appears to have a lot of parallels to teams I've been on that owned "Legacy" services.)
I built Cuber (https://github.com/cuber-cloud/cuber-gem) a few years ago as a replacement for Heroku, and now we use it to deploy all our Rails applications on DigitalOcean Kubernetes. Dramatically lower cost, better performance, fewer bugs, better support...
The blog author doesn't understand it, but it's quite simple: the product only matters in the context of large enterprise customers.
The large customers still get what they want as long as the ask isn’t too big and that’s why you see new features even though the product is in maintenance mode.
Today, you get the more streamlined experience of push, 3 clicks to restart CI & container build, push 1000 yamls, click to restart the build again, cry when it all fails.
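To make that contrast concrete, the two workflows look roughly like this; the image name, registry, and manifest directory below are made up, but the commands themselves are the standard ones:

    # Heroku-style: one push, build and release handled for you
    git push heroku main

    # Roll-your-own Kubernetes equivalent (illustrative names and paths)
    docker build -t registry.example.com/myapp:abc123 .
    docker push registry.example.com/myapp:abc123
    kubectl apply -f k8s/        # the "1000 yamls": Deployment, Service, Ingress, ConfigMap, ...
    kubectl rollout status deployment/myapp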
I understand Judoscale is a customer with apprehensions and is asking for clarity. That kind of uncertainty will definitely raise anxiety.
However, Heroku said they were changing focus. It's entirely possible to change focus away from something and still do some of it. A focus on things other than new features doesn't necessarily mean no new features at all. Heroku could probably save their customers and partners a lot of anxiety by being clearer and more explicit about what they mean.
What a weird article that's microanalysing language in Heroku's blog posts. I mean, times are such that pivot-churn is becoming business as usual for most outfits these days, so I wouldn't put any stock in C-suite verbiage.
In my experience, generally Salesforce takes a little while before they notice that they bought you and start imposing uniformity and forcibly regressing you to their mean.
This was a(n internally-)famously hard and lengthy process for them with ExactTarget (read: Marketing Cloud) because ExactTarget employees identified strongly with "ExactTarget orange" culture rather than "Salesforce blue", which mostly meant being appalled at the technical and process swamp that Salesforce represented and pushing hard to keep their own tech stack and their own culture and standards as long as possible.
Heroku had an interesting arc, as they were the bright spot people would point at internally as the place where actually good engineering somehow happened, even at Salesforce. There was a whole effort to let Heroku be the business unit that paved the path to AWS and PaaS for the entire company (which at the time was operating its own datacenters), and so Heroku got a bunch of investment and freedom for a bit.
Then there was some weird power struggle, and the executives inexplicably decided not only to take that out of Heroku's hands despite their expertise, but also to basically shove Heroku in a corner to be ignored, except when strip-mined of its customer base through upsells or of its staff through headcount reallocations.
I think the downhill slide started when they introduced "Private Space Peering". It is a wrapper on top of AWS VPC, but it cost something like $1000 a month several years ago. It also gated larger instances and other important features.
So few people used it. I guess this provided a negative signal to their management about the adoption rate of new features. And then everything eventually just died.
It's just in a coma, slowly dying away on a respirator. Some relatives irrationally keep paying the hospital to keep the patient alive, but the doctors are just waiting until they can finally pull the plug and use the bed for someone with an actual chance of survival.
I think it's impossible for the Herokus and DigitalOceans of the world to survive in the cloud world. They might be able to create a better experience for customers, but no one can match the networking that AWS, GCP and Azure can provide. Low latency will always win over better developer experience.
True, it can't compete with AWS/GCP/Azure if you're large scale. But most of us are not large scale, we just need a no frills experience instead of dealing with 27 nested panels just to spin up a VM.
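For what it's worth, the no-frills version really is one CLI call. doctl is DigitalOcean's own CLI; the droplet name and the region/size/image slugs below are just example values:

    # Spin up a small VM without touching a single panel
    doctl compute droplet create my-box --region syd1 --size s-1vcpu-1gb --image ubuntu-22-04-x64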
Heroku runs on AWS though, doesn’t it? They just package it.
I don't think it's impossible for them to survive. Salesforce bought them more than 10 years ago and did little to support Heroku's growth. And yet they're still around, and people still ask "is there something new with a comparable customer experience?" because they don't mind paying more.
On the other hand, modern tech stacks can process insane amounts of req/s for typical websites/services on a single shared vserver core. Not your 2010 Ruby snoozefest anymore. Plus, I can't even remember when my few decade-old droplets last needed anything from me, and they still host some things just fine with zero issues, friction, or nagging. DO is still the number one pick for me in 2026 when the problem fits a droplet-style deployment, full stop.
I've never found cloud anything to beat the speed (and price) of a well placed server.
DO has always been a bit rich for my blood though, and even a low-cost Hetzner VPS has fewer cores than I remember seeing at the same price a decade ago. I could be wrong there, though; I usually use Vultr for their SYD region.
A slower background transcode usually doesn't matter, but a faster transcode that stops important processes running in the meantime might. This is usually fixable with effort, but sometimes it's nice to not have to configure everything to the nth degree.
I don't really buy the idea that getting one fewer core but faster per-core speeds in the same pricing bracket makes any difference to this imagined problem.
There are many different VPS configurations available with different numbers of cores. If you are picking a VPS configuration specifically to have more cores than some transcoding software uses by default, just to avoid configuring a thread limit for that software, then you are still configuring things to the nth degree, only at the objectively wrong level of abstraction.
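As a sketch of what "configuring a thread limit" amounts to in practice, with ffmpeg standing in for the transcoder (the thread count and file names are arbitrary):

    # Cap the encoder's worker threads so it can't saturate every core
    ffmpeg -threads 2 -i input.mkv -c:v libx264 -preset slow output.mp4

    # Or leave the threads alone and just deprioritise the whole job
    nice -n 19 ffmpeg -i input.mkv -c:v libx264 output.mp4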
Management: “we’re going into maintenance mode”
Devs: “You mean we get to work on whatever we want?!”
An update of Heroku
https://news.ycombinator.com/item?id=46913903
Maybe we could say they went uphill for a while instead of downhill? Or something.
Fewer cores, but probably 5x more performance per core now than a decade ago.