I liked this idea when it came out, and there was some software that implemented it. Mr Schedule by Andrew Pietschy added outliner functionality to Joel's idea, so you could see how much time a group of subtasks would take (and if you should maybe drop that feature group to make your deadline). It had some keyboard driven shortcuts that made it faster to move around in than Excel, while making things simpler.
Unfortunately Mr Schedule and the pietschy.com website disappeared. I made my own recreation in REALbasic / Xojo at the time, but never released it and eventually stopped using it myself.
Joel Spolsky expanded the idea later with Evidence Based Scheduling: https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...
That takes the estimates from Painless Software Schedules and runs a Monte Carlo simulation over your estimates and data on actual time taken, producing a confidence distribution of when you'll be finished.
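For anyone who hasn't seen that post, the core of the idea fits in a few lines. Here is a toy sketch of how I understand it (made-up numbers, and a simplification rather than Joel's exact algorithm): divide each past estimate by the actual time taken to get a "velocity", then repeatedly resample those velocities against the remaining estimates to get a distribution of completion times you can read confidence levels from.

```python
import random

# Toy evidence-based-scheduling sketch (a simplification, not Joel's exact
# algorithm). Velocity = estimated hours / actual hours on past tasks;
# resampling velocities turns the remaining estimates into a distribution.

past_velocities = [1.0, 0.8, 0.5, 1.2, 0.6, 0.9, 0.4, 1.1]  # invented history
remaining_estimates = [4, 8, 2, 16, 6, 3]                   # hours left, per task

def simulate_totals(estimates, velocities, rounds=10_000):
    """Simulate total remaining hours `rounds` times and return them sorted."""
    totals = []
    for _ in range(rounds):
        # Each task's simulated duration = estimate / a randomly drawn past velocity.
        totals.append(sum(est / random.choice(velocities) for est in estimates))
    return sorted(totals)

totals = simulate_totals(remaining_estimates, past_velocities)
for pct in (50, 75, 90):
    done_within = totals[int(len(totals) * pct / 100) - 1]
    print(f"{pct}% confidence: done within {done_within:.0f} hours")
```

The useful output isn't a single date; it's the gap between the 50% and 90% numbers, which tells you how much schedule risk the estimates are hiding.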
I have done the Monte Carlo thing in practice with a team, and it works well under some conditions.
The most important one is that the team actually uses the task board (or whatever data source feeds your inputs) to track their work actively. It cannot be an afterthought that gets looked at every now and then; it needs to be something the team genuinely uses.
My current team kind of doesn't like task boards because people tend to work in small groups on projects where they can keep that stuff in their own heads. This requires some more communication but that happens naturally anyway. They are still productive, but this kind of forecasting doesn't work then.
I hate having to use some tool to track the work (usually Jira, which is a PoS). My entire output is data, so why can't a tool automatically summarise what I'm doing? It seems like an ideal task for an AI, actually.
Here's a real schedule:
CEO: we need to launch x end of Q2
PM: Here are the four monthly milestones
Engineer Mgr: Let's estimate the stories. Now put them into eight sprints
Go!
> Netscape has seen its browser share go from about 80% to about 20% during this time, all the while it could do nothing to address competitive concerns, because their key software product was disassembled in 1000 pieces on the floor and was in no shape to drive anywhere. That single bad decision, more than anything else, was the nuclear bomb Netscape blew itself up with.
This post from Spolsky is always amusing to me because it came 6 months after Microsoft was found liable for antitrust violations aimed at crushing Netscape. So it's funny that he claims Netscape killed themselves when the courts effectively said that Microsoft killed Netscape. Obviously Netscape made critical bad decisions, but Microsoft's illegal behavior was what actually killed them.
Netscape made mistakes, but they didn't lose 60% of their market share in just two years because they didn't ship a major update. They lost it because Microsoft bundled a "good enough" browser with the operating system that already came installed on computers out of the box.
Well, first off, I remember the Netscape of that time. It was a disaster, and this was when most people's browser setup was handled by their nerdy relative. There were plenty of people whose computers I could have put Netscape on, but I didn't, because it was just such a shitshow.
So I'm not sure that loss of market share was just due to MS. IE at the time was simply better than Netscape. You had to be a masochist to use Netscape: it would crash badly at the silliest little things, and since websites were built to even lower professional standards than nowadays, those silly little things were quite frequent.
You might have gotten IE preinstalled, but even for devs who went and installed Netscape it just made more sense to use IE, because it was better.
MS preinstalled IE, but Netscape made sure only the truly dedicated would actually download and use it.
Without Netscape's mess-up I can totally see them only losing 30% of their share, and being in a good place to recuperate when MS got slapped down in court.
As always, the only way anybody has ever thought of to "plan" software is:
1) write down everything you're going to do
2) write down how long that's going to take
3) add them all up and voila! You have a schedule!
The ways this breaks down in practice would be comical if not for the fact that everybody takes it so seriously. The biggest problem is that step 1 takes longer than the actual software development work, every single time. That might not be _so_ bad if it weren't for the fact that the result is also always completely wrong.
I actually did this (around 2006) after reading this article by Joel. I was skeptical, but I used Excel, wrote down all the tasks that needed to be done, and kept breaking them down until each task was measured in hours.
It took me a few hours to do and, as Joel says in the article, it was not a fun thing to do (jumping right into code was more fun), but I stuck with it and did the whole thing.
Then I followed that list of tasks, kept track of when each one started and ended, and was pleasantly surprised when, after a few weeks, the project was done right on schedule as predicted by the Excel sheet. So my experience (a data point of one) was that it works if you do it exactly the way he says to in the blog post.
I did it only that one time so take that for what it is.
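If anyone wants to repeat the experiment, the bookkeeping really is that small. A minimal sketch (my own column names and invented example tasks, not the exact sheet from the article or the comment above): each task carries an original estimate in hours plus elapsed and remaining time that you update as you go, and the totals tell you how the schedule is drifting.

```python
from dataclasses import dataclass

# A minimal sketch of the bookkeeping described above (my own column names
# and invented example tasks, not the exact layout from Joel's article).

@dataclass
class Task:
    name: str
    original_estimate: float  # hours, from the initial breakdown
    elapsed: float = 0.0      # hours actually spent so far
    remaining: float = 0.0    # current guess at what's left

tasks = [
    Task("DB schema for invoices", original_estimate=4, remaining=4),
    Task("Invoice list view", original_estimate=6, elapsed=3, remaining=2),
    Task("PDF export", original_estimate=12, elapsed=10, remaining=6),
]

planned = sum(t.original_estimate for t in tasks)
projected = sum(t.elapsed + t.remaining for t in tasks)
print(f"Planned: {planned}h  Projected: {projected}h  Slip: {projected - planned:+}h")
```

A spreadsheet does the same job, of course; the important habit is updating elapsed and remaining as you work rather than only estimating once at the start.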
I was wondering how the Stack Overflow guy was doing these days, and it turns out he sold the company for about $2B in 2021. What's the saying? Time in the market vs timing the market. Good for him, but imagine being one of the investors.
What additional data is worth paying for that wasn't already freely given away? Right now you can download the entire corpus of Stack Exchange content for local review from the Kiwix library. Because it's primarily text, the dataset isn't even that large.
The article mentions milestones twice and assumes their existence, but the scheduling methodology described has nothing to say about where those milestones come from or how to think about them. So the method is missing something, which makes it at least a little less simple than advertised.
For many of us, the way we manage software projects has changed so much since the days when Joel wrote this.
It was a different age, with different products. I’m sure there are still products built the old ways, but Joel was writing before SaaS and CI/CD and endless roadmaps.
Reading into Joel, he was building SaaS. Fogbugz to name one.
He seems to have other posts on the lifecycle of software and product building. Maybe it wasn't mainstream then, but some folks were doing meaningful parts of it.
Fogbugz, if the first version even existed in 2000, was not a SaaS. Nor was Jira, by the way.
Both products were initially once-off purchases that you had to install and run on your own infrastructure, with major new versions packed with new features that you could buy if you wanted or ignore if you didn't.
The move to a SaaS model came years later for both products.
Agreed. Just a month ago I told my team to read https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev... and note how Spolsky knew the details of his application (the weird date issues in Excel and VB). If you want to be a senior engineer, you need to know where the odd edge cases are in your app. I don't want to be the only one on the team who remembers that stuff.
Don Reinertsen did some nice work on what he calls lean 2.0. Part of that is basically doing cost estimations for work, and cost estimations basically boil down to hours times cost per hour. The nice thing about thinking in dollars instead of hours is that it suddenly becomes a money game. Now there is a stake. Companies are usually budget constrained, and while they can pretend there are more than 24 hours in a day, pretending there are more dollars in the bank is a lot harder. The tradeoffs get a lot more real.
One of the points he makes is that a bad estimate is better than no estimate. If you have no estimates, you literally can't plan. Even if you are going to be off by 3x, that's better than not knowing. A lot of companies have no clue about the cost of what they are doing, so he fixes that by making them predict the cost of their plans, which in turn forces them to do time estimates. Like Joel says, breaking things down helps you make better estimates.
Another point he makes is that different people can come up with wildly different cost predictions for the same thing. That's still a lot better than having no cost at all. Whenever you get wild divergence in cost estimates, it signals that there's no collective understanding of what the team is doing. That's a problem that needs fixing, or somebody (e.g. a manager) needs a reality check on their expectations. If they are lowballing an expensive thing, they are going to look pretty bad when it repeatedly fails to come in at that number.
And then he introduces a concept called "cost of delay", a simple potential-revenue-based mechanism for calculating what it would cost if feature X ships three months late. Now you get money-based prioritization: we make more money if we do X before Y.
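To make that concrete with invented numbers (a toy sketch of my own, not a worked example from Reinertsen): give each feature a build duration, a team burn rate, and the revenue it is expected to unlock per month once shipped. The cost of delay is the forgone revenue per month of lateness, and dividing it by the build duration gives a money-based ordering.

```python
# Toy cost-of-delay comparison with made-up numbers (my own sketch of the
# idea, not an example from Reinertsen's book).

TEAM_COST_PER_MONTH = 80_000  # invented burn rate: hours times cost per hour, rolled up

features = {
    # name: (build duration in months, revenue unlocked per month once shipped)
    "X": (2, 50_000),
    "Y": (1, 30_000),
    "Z": (3, 40_000),
}

def build_cost(name):
    """What the feature costs to build at the team's burn rate."""
    duration, _ = features[name]
    return duration * TEAM_COST_PER_MONTH

def cost_of_delay(name, months_late):
    """Revenue forgone if the feature ships `months_late` months later."""
    _, monthly_value = features[name]
    return monthly_value * months_late

# Shipping X three months late forgoes ~$150k of revenue, roughly as much
# as it costs to build it in the first place.
print(f"X build cost: ${build_cost('X'):,}, cost of a 3-month delay: ${cost_of_delay('X', 3):,}")

# Prioritize by cost of delay per month of build time (often called CD3 or
# weighted shortest job first): do the work that bleeds the most money per
# month it occupies the team.
order = sorted(features, key=lambda n: features[n][1] / features[n][0], reverse=True)
print("Suggested order:", " -> ".join(order))
```

Whether you use CD3 or something cruder, the point is the same one made above: once everything is in dollars, "X before Y" stops being a matter of taste.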
And a final point he makes is that empowering people to come up with money-saving measures can be hugely beneficial. Some things get cheaper if you rethink a design, maybe re-implement something, etc. Instead of making people beg for permission to do that, it's much more cost effective to let them figure things out, up to a certain dollar amount. That amount can go up or down as people gain experience. But the point is that rewarding people for things that are profitable is a very sane thing for companies to do, and usually the experts have the best understanding of where the potential gains are.
All very simple ideas conceptually. But the thing is, many software shops have no clue about any of this. They don't understand their own costs. They don't understand the dollar impact of the choices they make, including important things like prioritization.
I don't actually practice any of this. But it's an intriguing way to look at estimations. Well worth checking out his work.
> 4) Only the programmer who is going to write the code can schedule it.
This item makes Joel's scheduling idea a no-go at most companies. Schedules are set by management or sales, and programmers are expected to meet the date or get PIP'd.
This was written at a time when Software Engineers (not "Developers") were valued more.
I had my first programming job around this time, and there wasn't Scrum and all that crap. I was a junior engineer, still in the last semesters of university. And yet we were treated like you read in the post: we were handed a feature and asked to do it. First estimate it, then ask the design guys for UI, and finally start coding it.
Now software dev feels like a sweatshop: business people think we are sewing jeans, and software developers have become code monkeys.
I've been in the industry since before this article was written. Notice I said most companies. Back when programmers were valued more, we still didn't always get much say in schedules. Certainly more than we do now, though.
Your term "sweatshop" is on the mark, too. Since the advent of "open plan" offices, we even look like rows of tailors sitting at sewing machines stitching together jeans.
The companies don't always fail, but the software projects frequently do. When was the last time you saw a headline about a massive software project coming in early and under budget with all planned features working?
> runs a Monte Carlo simulation over your estimates and data on actual time taken, producing a confidence distribution of when you'll be finished
That process isn't free. For many features, it's the largest share of the work.
Even for features that stay on the cutting-room floor. Especially for features that stay on the cutting-room floor.
> Time in the market vs timing the market.
Seems like he managed both.
> It was a different age, with different products. I’m sure there are still products built the old ways, but Joel was writing before SaaS and CI/CD and endless roadmaps.
It has to be interpreted through a modern lens sometimes to account for the changes, but overall his stuff feels really solid.
> Now software dev feels like a sweatshop: business people think we are sewing jeans, and software developers have become code monkeys.
It's quite sad.