The author calls it a 'joke' that Heroes are just unpaid Amazon employees, but reality doesn't become a joke just because it's funny. The asymmetry here is staggering. I find myself holding back private research because I don't want to provide free R&D for a value-extraction machine that is already efficient enough.
The author was at least dependency-driven in their contribution, but outside that kind of dependency, it's hard to justify contributing even 'in the open' when the relationship is this one-sided. Amazon in particular has done enormous damage to the economic assumptions that permissive open source once relied on. More and more projects are adopting 'Business Source Licenses', precisely to prevent open work from becoming a free input into hyperscaler monetization.
These devs know Amazon is grabby, and at some point the dominant outcome of their community contribution is unpaid labor for a trillion-dollar entity that also diverts support and community engagement away from the original projects by funneling users into managed versions of the same software.
I am saying this is exactly what's happening, just in more robust language. If you disallow Amazon, maybe there is a third party that offers your software as a service to Amazon. So Amazon-the-string is not the bogeyman; the concern is the resale or hosted-service arrangement they can access.
So you see formulations that target infrastructure resale rather than specific entities, such as:
"For the avoidance of doubt, the following scenarios are not permitted under the license:
* A managed service that lets third party developers ... register their own [SERVICE] service endpoints and invoke them through that managed service."
"You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software."
"If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License [...] where 'Service Source Code' is defined broadly to include the entire hosting stack (monitoring, backups, etc.) to ensure a level playing field"
> I find myself holding back private research because I don't want to provide free R&D for a value-extraction machine that is already efficient enough.
If someone wants to release technology in a way that makes it publicly viewable but restricts its use, they can do that.
If they don't want to release it, they don't have to.
Additionally, publicly released technology destroys patentability, if that's the objective.
I don't understand what one would want to achieve that can't be achieved here.
> If you disallow Amazon, maybe there is a third party that offers your software as a service to Amazon. So Amazon-the-string is not the bogeyman; the concern is the resale or hosted-service arrangement they can access
That's some acrobatics I suspect Amazon won't engage in, because communicating to the customer that your FooBarDB is managed in AWS but hosted by a third party is awkward.
Amazon will happily reimplement your API with their backend, as they've done before.
> More and more projects are adopting 'Business Source Licenses', precisely to prevent open work from becoming a free input into hyperscaler monetization.
They could use AGPL or GPL3; typically those licenses are verboten at hyperscalers.
The truth is that the sort of company opting for BSL never really wanted to do OSS, and in truth only did so for the optics of it, for the goodwill it buys among developers, etc.
I know this is true of AGPL, but GPL3? I thought the people who objected to GPL3 were those distributing software to their users (e.g. it was a reason Apple switched from bash to zsh). I cannot think of anything in GPL3 that would be a problem for hyperscalers.
> They could use AGPL or GPL3; typically those licenses are verboten at hyperscalers.
Laws are only as good as their enforcement, in business at least. Unfortunately I have seen firsthand that no one cares about licensing if they can't get caught.
Business licenses are good because you can offer support and other benefits to encourage payment.
The claim is that those licenses are deemed no-touch within those companies—it's the companies themselves that insist on the software and their business not mixing, e.g. Apple continuing to ship old versions of GNU programs like Bash and then eventually moving to zsh rather than provide updated versions that are GPLv3.
Neither GPLv3 nor AGPLv3 say anything about businesses not being able to use the software.
Hey, nothing wrong with closed source, BSL, etc. I am fine with it. I am the last person that will say someone should give out their work for free.
What I object to is companies releasing software with permissive licenses, and then getting butthurt that others profit from it, or trying to rug pull the permissive licenses after a community adopted and contributed to it.
If you want to play the OSS game, then play it right.
I'm "lucky" to not be smart enough or important enough to think about this. Regardless, I wholeheartedly agree: at this point, anything I personally could release publicly will either be fully open source or completely private. And I'm only choosing open source if I'm relatively sure it's not gonna make some asshole tons of money.
That's in the ballpark of how big corps use open source strategically. They try to kill everyone's value-extraction moat at any layer other than the ones they dominate.
So they commoditize their complement [0]. They don't care if you make money based on their OSS, as long as you race to the bottom against everyone else who also has access to it, turning anything but the corp's profit center into a ubiquitous commodity. So they make the "asshole"'s incentives line up with their own.
That link was a great read and makes a strong point! Another reason corps invest in OSS is to develop something they rely on (a special driver, etc.), and capitalizing on that in the form of OSS maintainers charging consulting fees has been successful. Exactly in agreement with making the incentives line up with their own.
> in fact in one of Jeff Barr's AWS user meetups in Second Life
There's so much about that phrase that makes me smile. Easy to forget that Second Life was also one of the earliest users of AWS, S3 first. Jeff Bezos had personally invested in our 2005 round (a round that made Linden Lab a unicorn before that was a thing) and pointed us at Jeff Barr and the work coming from AWS.
In return, Jeff Barr started hosting AWS meetups in Second Life -- this was the era of lots of groups setting up Second Life outposts, from Jonathan Coulton to Reuters.
I'll never forget seeing Second Life for the first time at a conference, in Flagstaff I think. You guys had a single folding-table booth (as we all did) and a computer running Second Life. Our team thought it was pretty cool and we talked about it quite a bit back at the office later. It was either 2002 or 2003.
We were with Evolution Robotics and were showing off the ER1, a new hobbyist robot.
I understand people have a viewpoint here about not giving time to large behemoths. I'll counter with a story and perhaps a larger point.
Back in 2006/7 I had an idea for a project for which, in all enthusiasm, I setup a mailing list, but ended up never pursuing it. It's a very unique name.
In 2012, another developer landed on the same name for their project, but saw that the mailing list was taken and reached out inquiring if he could take it over, and I obliged, because here was another person doing something in cryptography and open source, 2 of my favorite things then (and now).
The project was "scrypt" and the developer was Colin! :) I knew nothing about Colin or tarsnap then, IIRC.
Sometimes you just do the kindnesses you're able to, for people you feel a sense of community with, without expectation of anything commercial. Karma adds up, and its benefits are large, though hard to always articulate.
> In April 2024 I confided in an Amazonian that I was "not really doing a good job of owning FreeBSD/EC2 right now" and asked if he could find some funding to support my work, on the theory that at a certain point time and dollars are fungible
> I received sponsorship from Amazon via GitHub Sponsors for 10 hours per week for a year
For whatever reason, I remember being shocked that you were only charging $300/hr [1], which was what a mere L6 Google engineer would make salaried. I hope they are paying you more nowadays.
American hourly rates in IT are truly nuts. I wonder if the value-add of hiring American is really worth it; in German-speaking EU you'd get real top-notch engineering for 120€/h. Even less further eastwards.
> German-speaking EU you'd get real top-notch engineering for 120€/h
No disrespect to German-speaking engs, but Colin isn't merely "top-notch", he's "the top".
Huge salaries (like those paid to "top" athletes in "top" professional team sports) aren't unheard of in Tech anymore. For instance, Google paid $2b+ to acquihire Noam Shazeer back from c.ai. Meta was rumoured to be paying $20m+ salaries to poach OpenAI researchers based in Zurich.
The going rate for 1099 work tends to be higher than this to account for risk, unbillable work, and increased tax rate. Agencies that lend out their developers to clients charge 2-3x this. Remember that engineers can work remotely now which makes regional rates much fuzzier.
I strongly disagree with the part about IAM roles for EC2
> a useful improvement (especially given the urgency after the Capital One breach) but in my view just a mitigation of one particular exploit path rather than addressing the fundamental problem that credentials were being exposed via an interface which was entirely unsuitable for that purpose.
What alternative interface does the author propose we use to securely exchange credentials? The only other approaches I can come up with involve allowing monkey hands to come into direct contact with secret materials. Outlook, Slack, and Teams cannot possibly be more secure than IMDSv2. I think if you are manually passing around things like PFX files you've already lost the game.
The entire point of the IAM roles is to make everything a matter of policy rather than procedure. The difference here is insane when you play through all of the edges. IAM policy management is significantly easier to lock down than the alternative paths. I can prove to an auditor in 5 minutes that it is mathematically impossible for a member of my team to even see the signing keys we use for certain vendors without triggering alerts to other administrators. I've got KMS signing keys that I cannot delete with my root account because I applied inappropriate policies at creation time. This stuff can be very powerful when used well. Azure has a similar idea that makes accessing things like mssql servers way less messy.
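For what it's worth, the mechanics of why IMDSv2 is a real mitigation are easy to show: v2 requires a session token obtained via an HTTP PUT before any metadata (including role credentials) can be read, which is what defeats the GET-based SSRF proxying used in the Capital One breach. Below is a minimal sketch that only builds the requests, so it runs anywhere; the endpoint and header names are the documented AWS values, but the helper functions and role name are illustrative:

```python
import urllib.request

# Documented IMDSv2 values; only request *construction* happens here,
# nothing touches the network.
IMDS_BASE = "http://169.254.169.254"
TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds"
TOKEN_HEADER = "X-aws-ec2-metadata-token"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # Starting a session needs a PUT, which a naive SSRF proxy that can
    # only issue GETs cannot produce.
    return urllib.request.Request(
        f"{IMDS_BASE}/latest/api/token",
        method="PUT",
        headers={TOKEN_TTL_HEADER: str(ttl_seconds)},
    )

def credentials_request(token: str, role: str) -> urllib.request.Request:
    # Every subsequent read must echo the session token back.
    return urllib.request.Request(
        f"{IMDS_BASE}/latest/meta-data/iam/security-credentials/{role}",
        headers={TOKEN_HEADER: token},
    )
```

Note that the complaint upthread is not with this flow but with the transport: any local process that can open a TCP connection can still perform it, because the HTTP endpoint carries no notion of which local user is asking.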
> What alternative interface does the author propose we use to securely exchange credentials?
If you read the linked post you'll see that at the time I suggested using XenStore to pass credentials to the OS kernel. Obviously a different approach would be needed with Nitro but if anything it would be easier now.
Once the kernel had them they could be exposed to applications via a synthetic filesystem which, crucially, can have ownership and permissions set on it.
I'm absolutely not arguing against IAM Roles for EC2. I'm arguing that they picked the worst possible interface over which to transmit those role credentials.
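A hedged sketch of what the filesystem approach buys: once credentials live in a (synthetic) filesystem, the kernel checks the caller's UID against the file's owner and mode on every open, which a link-local HTTP endpoint simply cannot do. The paths and helper names below are hypothetical, not any real interface:

```python
import os
import stat
import tempfile

def write_credentials(path: str, creds: str) -> None:
    # 0o600 plus O_EXCL: only the file's owner can read it, and we refuse
    # to follow a pre-existing file planted at the same path.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, creds.encode())
    finally:
        os.close(fd)

def readable_only_by_owner(path: str) -> bool:
    # True when group and other have no permission bits at all.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Demo in a throwaway directory (early boot would use root-owned paths).
with tempfile.TemporaryDirectory() as d:
    creds_path = os.path.join(d, "role-credentials")
    write_credentials(creds_path, "hypothetical-key:hypothetical-secret")
    print(readable_only_by_owner(creds_path))  # True
```

If the file is written by root during early boot, "may this process read the role credentials?" becomes an ordinary DAC question instead of "can this process open a TCP connection?".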
Scaleway's equivalent only allows connections from ports <1024. This is cute and means only processes with CAP_NET_BIND_SERVICE can retrieve the tokens.
You can do similar with vsock(7) sockets. This also has the advantage that it's harder to trick an application into making a connection to a vsock socket.
Both of these have the weakness that it is not entirely atypical to give processes CAP_NET_BIND_SERVICE so they can listen on "privileged" sockets, but they work against anything without that.
Even better, you could put bootstrap credentials in DMI data or similar, where it'll end up (on Linux) inside a sysfs directory which can only be read by root.
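The privileged-source-port idea above reduces to a one-line policy on the service side: an unprivileged process cannot bind a local port below 1024 without CAP_NET_BIND_SERVICE, so a low peer port is (weak) proof of privilege. A sketch of the check, with the function name and constant being mine rather than anything Scaleway documents:

```python
# Binding a source port at or below this requires root or CAP_NET_BIND_SERVICE.
PRIVILEGED_PORT_MAX = 1023

def may_fetch_token(peer_addr) -> bool:
    # peer_addr is the (host, port) pair a server obtains from
    # socket.getpeername() on an accepted connection.
    _host, port = peer_addr
    return port <= PRIVILEGED_PORT_MAX
```

A privileged client opts in by binding its socket to a low source port before connecting, e.g. `sock.bind(("", 601))`, which is exactly the operation CAP_NET_BIND_SERVICE gates.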
Fantastic piece of lore. Fascinating to read the journey. But also hearing some of the names here (Tavis Ormandy is famous for his role on Project Zero, for instance) and knowing that even top engineers can bomb interviews for making poor choices.
Nothing useful to add except that I like these blog posts from someone who actually did a bunch of things. Nice round-up of the past.
A lot of the "free labor for Amazon" framing in this thread misses the core dynamic here. Colin wasn't doing charity work, he was making FreeBSD run on EC2 because Tarsnap literally depends on it. That's probably the healthiest model for open source contribution: you fix the infrastructure your own product sits on, and everyone downstream benefits too. The alternative is waiting for Amazon to care about your niche platform, which could mean waiting forever. It's a different calculus than, say, an indie dev writing a library that AWS wraps into a managed service.
I remember many of these events as I was running FreeBSD a lot and subscribed to the mailing lists.
Why on earth would you give this monstrosity of a company so much free labour?
I get that volunteering is fun, but donating your time and competence to a hyper capitalist company is short sighted. I hope there was appropriate compensation, and I'm not including "early access".
Colin, if I remember correctly, you first ran Tarsnap servers on Ubuntu before you made FreeBSD work on EC2. At what point were you confident enough to switch to FreeBSD?
Netflix is a big FreeBSD user and a big AWS user, do they run FreeBSD on AWS? Would be the obvious sponsor to me as they rely heavily on the infrastructure built by volunteers like Colin
Netflix uses FreeBSD specifically for their custom-built CDN/streaming servers, which are hosted directly with ISPs, not on AWS. Their user-facing catalog app, however, runs on Ubuntu servers hosted on AWS. At least that's what I recall reading here on HN.
AWS was the clear, undisputed leader for years, but it feels like it's lost its way now.
It knew how to be the market leader and first to market with big launches. It’s now struggling to navigate a world where in more and more areas it’s falling behind. The big early misses on GenAI seem to have accelerated that.
A ton of momentum from earlier years keeps it moving, but that playbook only lasts so long.
No, it was really not. His tale is from the mid-2000s, not from the mid-1980s.
In the mid-2000s these companies were already operating in the billions and their engineers were already well compensated, and it was known.
Hell, "Cracking the Coding Interview" came out in 2008. Getting a job at those companies at the time was already something coveted because of how well they paid.
Interesting how this history is about the edge cases and the unlikely risks that turn into real incidents. The systems scale faster than our thinking about their safety.
Two companies have functionally similar products, but behave completely differently. One company makes technical decisions with security as the fundamental principle, while for the other, security is not a consideration.
> More and more projects are adopting 'Business Source Licenses', precisely to prevent open work from becoming a free input into hyperscaler monetization.
It's perfectly legal to say: "except for Amazon [and whoever], anyone can use this for any purpose, provided..."
Amazon won't intentionally use that software. It's not worth the potential legal liability.
That doesn't mean Amazon won't write their own version though if they think they need to at some point.
> They could use AGPL or GPL3; typically those licenses are verboten at hyperscalers.
Only the AGPL is remotely close to forcing hyperscalers to release the source code of what they provide.
From "The SSPL is Not an Open Source License" <https://opensource.org/blog/the-sspl-is-not-an-open-source-l...>
[0] https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
> We were with Evolution Robotics and were showing off the ER1, a new hobbyist robot.
Good memories for sure!
[1] https://news.ycombinator.com/item?id=30188512
100-200k is what you'd expect elsewhere. Which is still pretty good, just not astronomical.
> Getting a job at those companies at the time was already something coveted because of how well they paid.
Perhaps in the USA, but in many other countries that certainly does not hold.
At some stage I realised AWS is extremely expensive, extremely slow, ridiculously complex, and has a parasitic attitude to open source.
I realised I should instead go all in on Linux on virtual machines on other platforms.
AWS, I'm done.
> Two companies have functionally similar products, but behave completely differently. One company makes technical decisions with security as the fundamental principle, while for the other, security is not a consideration.
Azure engineers absolutely considered security.
They just chose other priorities: growth at any cost to catch up with AWS.