This is all way too much. If you see a duplicate idempotency key, skip the replay and always return 409. This becomes a client problem. Clients already need to help enforce idempotent contracts; "check for conflict response" is not an onerous imposition.
I've built multiple ecommerce APIs with this approach and they work great. No heroic measures required. You can often satisfy this contract with a unique constraint; if not, a simple presence check in redis. No hashing or worrying about PII.
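A minimal sketch of the unique-constraint approach, using an in-memory SQLite table for illustration (the table and column names are made up, not from any real API):

```python
import sqlite3

# In-memory DB for illustration; in production this is your payments table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payments (
        idempotency_key TEXT PRIMARY KEY,  -- the DB enforces uniqueness
        amount_cents    INTEGER NOT NULL
    )
""")

def create_payment(key: str, amount_cents: int) -> int:
    """Insert the payment; a repeated key surfaces as a 409 Conflict."""
    try:
        with conn:  # commits on success
            conn.execute(
                "INSERT INTO payments (idempotency_key, amount_cents) VALUES (?, ?)",
                (key, amount_cents),
            )
        return 201  # created
    except sqlite3.IntegrityError:
        return 409  # duplicate key: now it's the client's problem
```

The database does all the work: no hashing, no replay cache, no PII stored beyond the row you were inserting anyway.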
But that's not idempotent? If I'm a client and I don't know if the original request went through, getting a 409 on any subsequent requests tells me nothing about whether the original request was successful or not.
If the idempotency key was seen, then send back the stored response.
The client's intention is outside the scope. If the contract says "idempotent on key", then the idempotent response is keyed on the key. If the contract says "idempotent on body hash", then the response is keyed on the body hash (which might or might not include extra data).
APIs are contracts. Not the pinky promise of "I'll do my best guess"
IMHO it's more: fix problems, or at least mitigate them, regardless whose problem it is.
I've been in this situation, a clientside bug meant that different requests arrived with the same idempotency key.
In my case, updating the client would have taken weeks, in the best case scenario. Updating the backend to check for a matching request body would have taken minutes, maybe hours.
It took me a surprising amount of arguing to convince people that, even if it was a clientside bug, we couldn't let users suffer for weeks in name of "correctness".
Well.. it was ~6 years and ~10 billion payments ago, the clients have been fixed but the "hack" is still there, it has caused no harm as far as I can tell. Worst case scenario it's useless, best case scenario it prevents regressions.
The issue with things that clients must not do is that they might still do them, and users don't care whose fault it is. It's important to have auxiliary mechanisms to mitigate these.
Sometimes "best guess" is the contract. Obviously many applications can't tolerate that, but surprisingly many can.
The user just needs to know what the trade-off is. And "best guess" can be hard to characterize, so you need to be extremely careful. But sometimes it's a big win for a low price.
That just leads to bigger fools. I don't just mean that as clever wordplay, but as a serious point. No matter how sloppy you make your API someone else will use it even more sloppily. Now you've got an enormous sloppy surface you can't properly contain or maintain, and people still transgress its boundaries even so.
The robustness principle has its times and places, but the general consensus that it should be applied everywhere to everything was a big mistake. The default should be that you are very rigid and precise and only apply the robustness principle in those times and places where it applies, and I'm perfectly comfortable deploying something precise and waiting to find out that this was one of them. The vast majority of APIs are not the time and place for the robustness principle. They are the time and place for careful precision about exactly what is provided, and for detailed and descriptive error messages, logging, and metrics for when the boundaries are transgressed.
> APIs are contracts. Not the pinky promise of "I'll do my best guess"
You have never had to work with PHP backends, have you?
JSON in PHP is a flustercluck. Undefined, null, "" or "null", that is always the question.
If you use a typed Go/Rust client and schemas, you usually end up with "look ahead schemas" that try to detect the actual types behind the scenes, either with custom marshallers or with some v1/v2/v3 etc schema structs.
It's so painful to deal with duck-typed languages ... that's something I wouldn't wish on anyone.
This is an excellent article, I’ve seen almost all of the issues it calls out in production for various APIs. I’ll be saving this to share with my team.
I’ve seen two separate engineers implement a “generic idempotent operation” library which used separate transactions to store the idempotency details without realizing the issues it had. That was in an organization of less than 100 engineers less than 5 years apart.
I once wrote about inherent, irreducible complexity and how we try to deal with it. The draft has sections on how complexity can be hidden, spread out, localized, passed off, or recreated from scratch. Unfortunately, people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
> people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
Isn't the opposite true? The more people are messing with complexity, the more they could benefit from a model of complexity. And if they generate complexity with external tools, then maybe a theoretical take on it will be the only way for them to learn. I mean, we learn these things through struggle and pain, but if all of that becomes an LLM problem, then you just stop learning? But at some point complexity will strike back; at some point there will be so much of it that the LLM will be no help.
OTOH, if LLMs still win and the skills of managing complexity are lost to future generations, if we are at the peak of our skills of dealing with complexity, then shouldn't we try our best to imprint our hard-won lessons into history? Maybe for some later generations the tide will turn and they will write textbooks on complexity, and with your article you'll get your portrait in a textbook, and each bored pupil will decorate it with mustaches. You have a chance to immortalize yourself. xD
Or maybe you can become someone like Ramanujan for math? Someone who honed obsolete math skills to an unimaginable level? Maybe a time will come when students will pore over Ramanujan's works, because his skills have become useful again, and they will try to find out how Ramanujan thought?
...
Sorry, I just couldn't resist. Seriously, it is hard to predict with LLMs, maybe we will not need intellect or any intellectual skills at all after AGI.
How would you even know the second request is different? Hash every request? That's a waste of resources. The only sensible policy is to trust the key.
No, the sensible policy is to have the code operate idempotently for every request with an idempotent method. This is a design decision, not something you slap on top afterward with a special key.
I skimmed the article earlier, but going back and looking, it doesn't appear that the article mentions the spec at all, or links to it. In fact, the first paragraph is literally "People talk about..."
Some help for others to understand the history of this (which apparently Stripe, Paypal, Dwolla, and others use): https://github.com/mdn/content/issues/41497 There are links to the RFC and prior art.
That aside, my first impulse is to say that the server should specify that the key includes a hash of the important parts of the request, checked on receipt, so that only the key itself need be stored. However, FF's implementation apparently(?) adds the header automatically to POST and PATCH if it's not already present, which means that it's not able to comply with such a decision, and the RFC (currently expired) recommends using a UUID, so.
I'm guessing the original motivation of this is "Browser JS might not be able to send a PUT, or proxies may not handle a PUT correctly".
A couple of years ago, we experienced a silent data corruption incident in our checkout process due to this specific edge case.
A user would generate the idempotency key by loading the front-end application, adding item(s) to their cart, and submitting their order, which timed out. The user would then navigate back to the front-end application, add another item, and submit the order again. Since the user was submitting an identical idempotency key with a different transaction, our payment gateway would look up the request by idempotency key and see in its cache that there was a successful (200 OK) response to the previous request. The user now believes they purchased three items; however, our system only charged for and shipped two of them.
Consequently, the lesson we take away from the aforementioned incident is idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
If a system receives an identical idempotency key (but with a different request payload) the idempotency key should be rejected with a 409 Conflict response with a message similar to "Idempotency key already used with different request payload". Alternatively, some teams argue it should be returned with a 400 Bad Request response. Systems should never return a failed cache response or replace old entries of data.
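The composite-key rule above can be sketched like this (the in-memory store and field names are purely illustrative):

```python
import hashlib
import json

_store = {}  # idempotency key -> (payload_hash, cached_response)

def _hash_payload(payload: dict) -> str:
    # Canonicalize so JSON key order doesn't change the hash.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def handle(key: str, payload: dict):
    payload_hash = _hash_payload(payload)
    if key in _store:
        stored_hash, cached_response = _store[key]
        if stored_hash != payload_hash:
            # Same key, different body: reject, never replay or overwrite.
            return 409, "Idempotency key already used with different request payload"
        return 200, cached_response  # a true retry: replay the original response
    response = {"status": "created", "items": payload["items"]}
    _store[key] = (payload_hash, response)
    return 201, response
```

The checkout incident above falls into the 409 branch instead of silently replaying the old success.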
This article explains how to unblock your flow. The tricky part is the in-flight window: the idempotency record cannot appear only after the first request completes; it must already exist while the request is in progress.
To safely accomplish your goal, follow these steps:
1. Acquire a distributed lock on the idempotent key.
2. Check for the existence of a key in your persistent store.
3. If an existing key is found, verify the hash of the incoming payload against the stored payload hash. If the hashes do not match, return a 409 error.
4. If the hashes match, look up the status of the request. If the status shows COMPLETED in the persistent store, return the cached response. If the status shows PENDING in the persistent store, return a 429 Too Many Requests to the user or hold the connection open until the request reaches a COMPLETED state.
5. After processing the request, save the response to the persistent store before releasing the lock.
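The five steps above can be sketched roughly as follows, with a single in-process lock standing in for the distributed one (all names are illustrative, not from any real library):

```python
import hashlib
import threading

_lock = threading.Lock()  # stand-in for a distributed lock on the key
_records = {}             # persistent store: key -> {"hash", "status", "response"}

def do_work(payload: bytes) -> str:
    """Placeholder for the actual business logic."""
    return "ok:" + payload.decode()

def handle(key: str, payload: bytes):
    payload_hash = hashlib.sha256(payload).hexdigest()
    with _lock:                                    # 1. acquire the lock
        record = _records.get(key)                 # 2. check the store
        if record is not None:
            if record["hash"] != payload_hash:
                return 409, None                   # 3. payload mismatch
            if record["status"] == "COMPLETED":
                return 200, record["response"]     # 4. replay cached response
            return 429, None                       #    still PENDING elsewhere
        _records[key] = {"hash": payload_hash, "status": "PENDING", "response": None}
    response = do_work(payload)                    # business logic, lock released
    with _lock:                                    # 5. persist the result
        _records[key].update(status="COMPLETED", response=response)
    return 201, response
```

Note the sketch deliberately releases the lock during the business logic and relies on the PENDING status to fend off concurrent retries; holding the lock across the work is also possible but serializes all retries on that key.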
While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
I wholeheartedly agree. Luckily, it is a lot easier to reliably run a distributed KV store that only needs to lock idempotency keys for relatively short times than to do the same for a whole database with millions of records, or to make arbitrary systems idempotent.
Congrats on destroying the purpose of Idempotency Keys.
Ask yourself, why not just `Hash(Request_Payload)`? That'll give you half of what you need to know about why the Idempotency Key header is useful in the first place.
The other half you already know: you just described your bug. It's a bug on your front-end; this has nothing to do with idempotency. If anything, the system is performing as expected.
If your requests do something different, they should have different Idempotency Keys. <- this brings down TFA and most of the comments here. I guess those are the perils of vibecoding.
Sounds like an interesting case of incorrectly trusting user input.
The idempotency key should have been viewed as the untrustworthy hint it really is. Then you can decide whether an untrustworthy hint is what you really need. At that point I'd hope someone on the team says "This is ordering - I think we need something trustworthy"
> Consequently, the lesson we take away from the aforementioned incident is idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
Did the postmortem result in any other (wider) changes/actions, out of curiosity?
No idea if this was anything like what happened in your case, and probably going off on a tangent, but I've seen so many cases where teams are split into backend and frontend, and they stop thinking about the product as a single distributed system (or it exacerbates that lack of thinking from before). Frontend often suggest "Oh we can just create an idempotency key" and any concerns from backend are dismissed. If they implement it incorrectly, backend are on the wrong 'team' to provide input.
The real problem is that sticking an idempotency key onto an operation doesn’t make it idempotent.
It may improve efficiency where a protocol doesn’t assure exactly-once delivery of messages, but it cannot help you with problems other than deduplication of identical messages.
Creating a payment is not an idempotent operation. If the economics of the operation can differ when the “idempotency” key remains the same then you’ve just created a foot-gun in your API.
You can document that you’re going to ignore “duplicate” requests that share an idempotency key but that’s just user-hostile. The system as a whole is broken as designed.
I don't disagree with the problem, but this sort of Idempotency-Key header is kind of outsourcing the de-duplication to the client. If the client sends a different request with the same Idempotency-Key header, it's the user's (client's) fault. It also circumvents the fact that it is the effect that should apply to give the same state; you could design the API itself to be idempotent with respect to some other property, such as the transaction id. The designs I have seen using an explicit Idempotency-Key header have usually been added on after launch.
I do not disagree with their definition of idempotency, but they silently assume resending the same result is the default. They discuss this later on in the article, but they do not seem to question why that might not be a good idea in the first place.
Edit: Perhaps it is my mental model that is different. I think it makes most sense to see the idempotency key as a transaction identifier, and each request as a modification of that transaction. From this perspective it is clearer that the API calls are only implying the expected state that you need to handle conflicts and make PUTs idempotent. Making it explicit clarifies things.
The article actually ends up creating the required table to make this explicit, but the API calls do not clarify their intent. As long as the transaction remains pending you're free to say "just set the details to X" and just let the last call win, but making the state final requires knowing the state and if you are wrong it should return an error.
If you split this in two calls there's no way to avoid an error if you set it from pending to final twice. So a call that does both at once should also crash on conflicts because one of the two calls incorrectly assumed the transaction was still pending.
Right. An operation is idempotent only if doing it twice has the same result as doing it once. If you have to worry about whether an operation has already been done, it's not idempotent. If you have to worry about order of operations, it's not idempotent.
What's being asked for here is eventual consistency. If you make the same request twice, the system must settle into the same state as if it was done only once.
That's the realm of conflict-free replicated data types, which the article is trying to re-invent.
x = 1
is idempotent.
x = x + 1
over a link with delay and errors is a problem that requires the heavy machinery of CRDTs.
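The difference can be made concrete with a toy example, assuming nothing beyond plain functions:

```python
def apply_set(state: int, value: int) -> int:
    return value          # idempotent: a replayed "set" changes nothing

def apply_increment(state: int, delta: int) -> int:
    return state + delta  # not idempotent: every replay shifts the state

# Simulate the same message delivered twice over a flaky link:
x = apply_set(0, 1)
x = apply_set(x, 1)        # still 1
y = apply_increment(0, 1)
y = apply_increment(y, 1)  # now 2: the duplicate changed the outcome
```

The "set" converges no matter how many times the message is replayed; the "increment" needs deduplication machinery (or a CRDT) to survive duplicates.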
> Send the same payment twice and one of them should respond "payment already exists".
You are hiding the relevant complexity in the term "same". What is the same here? I mean, if I accidentally buy only one item of a product instead of two, and then afterwards buy one more item, is that the same payment or not?
How, and based on what, is the idempotency key calculated which the client sends with its request? In my double-purchase example above: when would the second purchase be requested with the same key or not?
For idempotency you literally just want f(state) = f(f(state)). Whether you achieve this by just doing the same thing twice (no external effects) or doing the thing exactly once (if you do have side effects) is not important.
But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
> But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
I think it depends on whether the sender needs to know whether the thing was done during the request, or just needs to know that the thing was done at all. If the API is to make a purchase then maybe all the caller really needs to know is "the purchase has been done", no matter whether it was done this time or a previous time.
And in terms of a caller implementing retry logic, it's easier for the caller to just retry and accept the success response the second time (no matter if it was done the second time, or actually done the first time but the response got lost triggering the retry).
Not that it's a bad TFA but in addition to what you said, many/most of the edge cases mentioned in TFA are just as problematic whether idempotency is desired or not.
I mean:
> Maybe the first request created a local payment but crashed before publishing an event ...
I mean, yeah, sure. That's a problem. I can come up with another one:
"Maybe the ZFS disk array for the DB caught fire and died a horrible death and you now need to restore from backups".
You keep the hash of the request so that you can reject a subsequent request with a different body. This has helped me surface bugs and data issues in other systems.
I think this article (and the author's previous articles on their blog) is quite clearly AI written. It has such a frustratingly punctuated cadence and really does not serve the reader anything valuable.
What really does not serve the reader any value is this comment now appearing on nearly every single HN thread. (And neither does my comment, sorry about that.)
If you like the article, upvote. If you don’t, don’t.
This article seems to be missing an example use-case for this functionality, it is very unclear to me that this is a good idea. Your job is hopefully to build an API with the simplest and most reliable contract to meet the needs of the client. Sometimes that involves saying, "human technology does not have a reliable way to support this expectation, but here's what else could work."
It's just the horrible misapplication of the term 'stateless' to a wrapper around something very-much stateful. It's here to stay.
(Though I do disagree with the original premise too. Putting on a 'stateless' boxing glove won't mean there's no difference between punching a guy once or twice)
Here x is interpreted as state and f an action acting on the state.
State is in practice always subject to side effects and concurrency. That's why, if x is state, then f can never be purely idempotent, and the term has to be interpreted in a hand-wavy fashion, which leads to confusion regarding attempts to handle that mismatch, which again leads to rather meandering, confusing, and way too long blog posts such as the one we are seeing here.
I wonder how you can write such a lengthy text and not once mention this. If you want to understand idempotency in a meaningful way, then you have to reduce the scenario to a mathematical function. If you don't, then you are left with a fuzzy concept, and there isn't much point in philosophizing beyond just accepting how something is practically implemented, like this Idempotency-Key.
Idempotence is a semantically overloaded term in computer science: in functional programming it refers to the same concept as mathematical idempotence, while elsewhere it refers to any function whose repeated calls leave the system in the same state as the first call.
And yes, in real machines we can't ever have true same states between multiple calls as system time, heat and other effects will differ but we define the state over the abstracted system model of whatever we are modelling and we define idempotency as the same state over multiple calls in that system.
not just heat and system time. the context is state handled by databases. database content can never be assumed to be identical between two identical operations involving it.
"delete record with id 123" is only idempotent if there is no chance that an operation like "create record with id 123" happened in between.
Half of the mentioned issues are issues of atomicity, not idempotency. If I make a request, and the server crashes midway and doesn't send some crucial events, that's an issue whether or not I send a second request.
From a cursory read, only the part up to "what if the second request comes while the first is running" is an idempotency problem, in which case all subsequent responses need to wait until the first one is generated.
Everything else is an atomicity issue, which is fine, let's just call it what it is.
If the atomic action is idempotent, you don't need a layer for repeating yourself. You hit the nail on the head. So many idempotency efforts are made because the actions were never made idempotent in the first place.
The point of idempotency is safe retries. Systems are completely fallible, all the way down to the network cables.
The user wants something + the system might fail = the user must be able to try again.
If the system does not try again, but instead parrots the text of the previous failure, why bother? You didn't build reliability into the system, you built a deliberately stale cache.
That's why you need to separate work from actual input.
It's not about trying again but about making sure you get consistent state.
Imagine a request for a payment. You made one and it timed out. Why did it time out? Your network, or a payment service error?
You don't know, so you can't decide between retry and not retry.
Thus the practice is: make a request - ack the request with a status request id (idempotent: the same request gives the same status id) - status checks might or might not be idempotent, but they usually are - each request needs a unique id to validate whether the caller even tried to check (idempotency requires state registration).
If you want to try again you give new key and that's it.
There might of course be bug in implementation (naive example: idempotency key is uint8) but proper implementation should scope keys so they don't clash. (Example implementation: idempotency keys are reusable after 48h).
If the same calls result in different responses (whether you saw them or not), then the API isn't idempotent.
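That submit/ack/status pattern might look like this (all names and statuses are hypothetical):

```python
import uuid

_requests = {}  # idempotency key -> {"id": ..., "status": ..., "payload": ...}

def submit(idempotency_key: str, payload: dict) -> str:
    """Ack the request: the same key always yields the same request id."""
    if idempotency_key not in _requests:
        _requests[idempotency_key] = {
            "id": str(uuid.uuid4()),
            "status": "PENDING",
            "payload": payload,
        }
    return _requests[idempotency_key]["id"]

def check_status(request_id: str) -> str:
    for record in _requests.values():
        if record["id"] == request_id:
            return record["status"]
    return "UNKNOWN"

# A client that timed out retries with the SAME key and gets the same id back;
# a genuinely new attempt uses a NEW key and therefore gets a new request.
rid_first = submit("key-1", {"amount": 100})
rid_retry = submit("key-1", {"amount": 100})
rid_new   = submit("key-2", {"amount": 100})
```

The client can now poll the status id to resolve the "network or service error?" ambiguity instead of blindly retrying.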
> You don't know, so you can't decide between retry and not retry
I'm well aware that the first order went through, even though the dumb system fumbled the translation of the success message and gave me a 500 back.
I do retry because I wanted the outcome. I'm not giving it a new key (firstly because I'm a user clicking a form, not choosing UUIDs for my shopping cart) but more importantly, if I did supply a second key, it's now my fault for ordering two copies.
"Idempotency" feels like "encapsulation" all over again.
Take a good principle like 'modules should keep their inner workings secret so the caller can't use it wrong', run it through the best-practise-machine, and end up with 'I hand-write getters and setters on all my classes because encapsulation'.
> Maybe the first request called a payment provider, the provider accepted it, and your process died before recording the result. Now your database cannot infer whether money moved.
This entire example is bad design. It's bad, bad design. I'm sorry, but if this is your example, you are doing it wrong in every way. There are ways to handle these sorts of things, well-known and well-established patterns. You are using none of these here.
I get it, it's an example, but it's a poor example. You should change it before someone assumes what you are talking about is sensible or reasonable in a production environment. Or at least put a warning.
I really hate the POST verb for RESTish APIs because it cannot be idempotent without implementing an idempotency layer. Other verbs are naturally idempotent. Has anyone tried foregoing POST routes entirely? Theoretically you can let the client generate an ID and have it request a PUT route to create new entities. This would give you a tiny amount of extra complexity on the client, but make the server simpler as a trade-off.
GET is not supposed to make changes on the server. The usual idempotent verbs for making changes are PUT and DELETE.
One thing that's confusing, here, is that idempotency only applies for the same request, but the article implies that idempotency is about whether the request contains a specific "idempotency key".
How can you tell from the server if that's a retry (think e.g. some reverse proxy crashed and the first request timed out, but the payment already went through to the user's CC)... or if the user just trying to purchase another item 123 because they forgot they needed 2?
There is simply no way to make the requests idempotent without an idempotency key. The only way to tell both situations apart is to key the requests by some UID. The HTTP verb is irrelevant.
Yes, I always thought it was an easy thing, but I changed my mind recently when I had to deal with it. There are a lot of little things you need to think of. For example:
Client sends a request. The database is temporarily down. The server catches the exception and records the key status as FAILED. The client retries the request (as they should for a 500 error). The server sees the key exists with status FAILED and returns the error again, forever. You've effectively "burned" the key on a transient error.
Others, like:
- you may have namespace collisions between users... (data leaks)
- when not using transactions, only redis locking, you have a different set of problems
- the client needs to be implemented correctly; e.g. the client sees a timeout and generates a new key, and exactly-once processing is broken
- you may have race conditions with resource deletes
- using UUIDs vs. keys built from object attributes (a different set of issues)
I mean, the list can get very long with little details.
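One way to avoid the "burned key" failure described above is to persist FAILED only for definite errors and release the key on transient ones (a sketch; the error classes and names are made up):

```python
_keys = {}  # idempotency key -> {"status": ..., "response": ...}

class TransientError(Exception):
    """e.g. database temporarily down: a retry may well succeed."""

class DefiniteError(Exception):
    """e.g. invalid card: retrying the same request cannot succeed."""

def handle(key: str, do_work):
    record = _keys.get(key)
    if record is not None:
        if record["status"] == "COMPLETED":
            return 200, record["response"]
        if record["status"] == "FAILED":
            return 422, None  # definite failure: safe to replay
        return 429, None      # still PENDING
    _keys[key] = {"status": "PENDING", "response": None}
    try:
        response = do_work()
    except TransientError:
        del _keys[key]  # release the key instead of burning it
        return 500, None
    except DefiniteError:
        _keys[key] = {"status": "FAILED", "response": None}
        return 422, None
    _keys[key] = {"status": "COMPLETED", "response": response}
    return 200, response
```

The retry after a transient 500 then gets a genuinely fresh attempt instead of an echo of the old error.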
None of those are really unsolvable problems. I think, though, the issue everyone in this thread seems to be having is that you can't wrap a non-idempotent function to make it idempotent; no matter how hard you try, you have to design your system around it.
> If you’re still in school, here’s a fact: you will learn as much or more every year of your professional life than you learned during an entire university degree—assuming you have a real engineering job.
This rubs me the wrong way. It's stated as fact without any trace of evidence, it is probably false, and it seems to serve no purpose but to make struggling students feel worse (and make the author feel superior).
What you learn at a uni is not really about learning a trade; sure, it gives you a taste of the basics in many areas, but you will never be a superb developer (or any other professional) when you get out by only attending classes. However, what uni teaches you is how to learn: how to think critically, how important sources are, what to look for to get the most knowledge out of what you read. Or at least that is what it has always been about for me, the process of learning effective learning.
That's what university is supposed to be about, but it doesn't achieve that for many people. Which is a shame, because those skills are extremely important.
"assuming you have a real engineering job" does a lot of work there. You could also do a lot of work the other way by stating "assuming you are getting a real education". I studied physics when I was young and that field is a lot deeper than my current work in programming. Computer science can also be quite deep if one considers things like the halting problem, type theory and proof assistants.
I think it's that the things learned in school are academic (red-black trees, dynamic programming, writing toy OS and programming languages, etc.)
In the real world you're faced with building five nines active-active systems that interface across various stakeholders, behaviour has to be eventually consistent, you've got a long list of requirements and deadlines, etc. It's practical, hands on, and people are there to build the thing with you at a scale that far exceeds the university undergraduate setting.
It's not a bad thing, it's just different.
Students shouldn't be afraid of it. Your job and coworkers, if it's a good workplace, are there to help you succeed as you succeed together. You learn and grow a lot.
You also learn how to deal with people, politics, changing requirements, etc., which I would imagine is difficult or impossible to teach without just throwing yourself into the fire.
My rant about this: https://github.com/stickfigure/blog/wiki/How-to-%28and-how-n...
"Best guess" can be bad if it is not well-defined, but you can still make error detection obvious rather than hidden.
Ideally you already send the client version in requests (or have an API version prefix), so you can add the workaround only for legacy clients. The next client version must distinguish itself from its predecessor and must not require the bodge to work.
One other thing I would augment this with is Antithesis’ Definite vs Indefinite error definition (https://antithesis.com/docs/resources/reliability_glossary/#...). It helps to classify your failures in this way when considering replay behavior.
I once wrote about inherent, irreducible complexity and how we try to deal with it. The draft has sections on how complexity can be hidden, spread out, localized, passed off, or recreated from scratch. Unfortunately, people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
Isn't the opposite true? The more people are messing with complexity, the more they could benefit from a model of complexity. And if they generate complexity with external tools, then maybe a theoretical take on it will be the only way for them to learn. I mean, we learn these things through struggle and pain, but if all of that becomes an LLM problem, then do you just stop learning? But at some point complexity will strike back; at some point there will be so much of it that the LLM will be no help.
OTOH, if LLMs still win, and the skills of managing complexity are lost in future generations, if we are at the peak of our skill in dealing with complexity, then shouldn't we try our best to imprint our hard-won lessons into history? Maybe for some later generation the tide will turn and they will write textbooks on complexity, and with your article you'll get your portrait in a textbook, and each bored pupil will decorate it with mustaches. You have a chance to immortalize yourself. xD
Or maybe you can become someone like Ramanujan was for math? Someone who honed obsolete skills to an unimaginable level? Maybe a time will come when students pore over Ramanujan's works, because his skills became useful again, and they try to find out how Ramanujan thought?
...
Sorry, I just couldn't resist. Seriously, it is hard to predict with LLMs, maybe we will not need intellect or any intellectual skills at all after AGI.
Some help for others to understand the history of this (which apparently Stripe, Paypal, Dwolla, and others use): https://github.com/mdn/content/issues/41497 There are links to the RFC and prior art.
That aside, my first impulse is to say that the server should specify that the key includes a hash of the important parts of the request, checked on receipt, so that only the key itself need be stored. However, FF's implementation apparently(?) adds the header automatically to POST and PATCH if it's not already present, which means that it's not able to comply with such a decision, and the RFC (currently expired) recommends using a UUID, so.
I'm guessing the original motivation of this is "Browser JS might not be able to send a PUT, or proxies may not handle a PUT correctly".
A user would generate the idempotency key by loading the front-end application, adding item(s) to their cart, and submitting their order, which timed out. The user would then navigate back to the front-end application, add another item, and submit the order again. Since the user is submitting an identical idempotency key with a different transaction, our payment gateway would look up the request/transaction by idempotency key and see in its cache that there was a successful (200 OK) response to the previous request. The user now believes they purchased three items; however, our system only charged for and shipped two of them.
Consequently, the lesson we take away from the aforementioned incident is that idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
If a system receives an identical idempotency key (but with a different request payload) the idempotency key should be rejected with a 409 Conflict response with a message similar to "Idempotency key already used with different request payload". Alternatively, some teams argue it should be returned with a 400 Bad Request response. Systems should never return a failed cache response or replace old entries of data.
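A minimal sketch of that composite-key check, with an in-memory dict standing in for the persistent cache (all names and the example response are illustrative, not from any particular framework):

```python
import hashlib
import json

# idempotency_key -> (payload_hash, cached_response); a dict stands in
# for a real persistent store here.
seen = {}

def payload_hash(body: dict) -> str:
    # Canonicalize the JSON so key order doesn't change the hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def handle(idempotency_key: str, body: dict):
    h = payload_hash(body)
    if idempotency_key in seen:
        stored_hash, cached = seen[idempotency_key]
        if stored_hash != h:
            # Same key, different payload: refuse rather than replay.
            return 409, {"error": "Idempotency key already used with different request payload"}
        return 200, cached  # replay the original response
    response = {"order_id": 42, "status": "created"}  # pretend we processed it
    seen[idempotency_key] = (h, response)
    return 201, response
```

With this, the retry in the incident above would replay the cached response, while the genuinely new three-item order (same key, different payload) would be rejected with a 409 instead of silently swallowed.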
One thing the article glosses over is how to lock your flow. The idempotency record cannot be created only once the first request completes; it has to exist while the request is still in progress.
To safely accomplish this, you have to follow these steps:
1. Acquire a distributed lock on the idempotent key.
2. Check for the existence of a key in your persistent store.
3. If an existing key is found, verify the hash of the incoming payload against the stored payload hash. If the hashes do not match, return a 409 error.
4. If the hashes match, look up the status of the request. If the status in the persistent store is COMPLETED, return the cached response. If the status is PENDING, return a 429 Too Many Requests to the user, or hold the connection open until the request reaches a COMPLETED state.
5. After processing the request, save the response to the persistent store before releasing the lock.
While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
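The five steps above can be sketched in a single process, with a threading.Lock and a dict standing in for the distributed lock and persistent store (all names and the fake processing step are illustrative):

```python
import hashlib
import json
import threading

lock = threading.Lock()
store = {}  # key -> {"hash": ..., "status": ..., "response": ...}

def handle(key: str, body: dict):
    h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    with lock:                                       # 1. acquire the lock on the key
        record = store.get(key)                      # 2. check the persistent store
        if record is not None:
            if record["hash"] != h:                  # 3. same key, different payload
                return 409, {"error": "idempotency key reused with a different payload"}
            if record["status"] == "COMPLETED":      # 4a. replay the cached response
                return record["response"]
            return 429, {"error": "original request still in progress"}  # 4b
        # Mark PENDING before doing the work, so concurrent retries see it.
        store[key] = {"hash": h, "status": "PENDING", "response": None}
        response = (201, {"order_id": 1})            # ... actually process the request ...
        store[key]["status"] = "COMPLETED"           # 5. persist the response
        store[key]["response"] = response            #    before releasing the lock
    return response
```

In a real system the lock would be distributed (and leased, so a crashed holder doesn't wedge the key forever), and the store would be durable; this only illustrates the state machine.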
Congrats on destroying the purpose of Idempotency Keys.
Ask yourself, why not just `Hash(Request_Payload)`? That'll give you half of what you need to know about why the Idempotency Key header is useful in the first place.
The other half you already know? You just described your bug, it's a bug, on your front-end, this has nothing to do with idempotency; if anything, the system is performing as expected.
If your requests do something different, they should have different Idempotency Keys. <- this brings down TFA and most of the comments here. I guess those are the perils of vibecoding.
The idempotency key should have been viewed as the untrustworthy hint it really is. Then you can decide whether an untrustworthy hint is what you really need. At that point I'd hope someone on the team says "This is ordering - I think we need something trustworthy"
> Consequently, the lesson we take away from the aforementioned incident is idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
Did the postmortem result in any other (wider) changes/actions, out of curiosity?
No idea if this was anything like what happened in your case, and this is probably going off on a tangent, but I've seen so many cases where teams are split into backend and frontend, and they stop thinking about the product as a single distributed system (or it exacerbates that lack of thinking from before). Frontend often suggest "Oh we can just create an idempotency key" and any concerns from backend are dismissed. If they implement it incorrectly, backend are on the wrong 'team' to provide input.
It may improve efficiency where a protocol doesn’t assure exactly-once delivery of messages, but it cannot help you with problems other than deduplication of identical messages.
Creating a payment is not an idempotent operation. If the economics of the operation can differ when the “idempotency” key remains the same then you’ve just created a foot-gun in your API.
You can document that you’re going to ignore “duplicate” requests that share an idempotency key but that’s just user-hostile. The system as a whole is broken as designed.
Idempotency is about state, not communication. Send the same payment twice and one of them should respond "payment already exists".
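A sketch of how a plain unique constraint can enforce exactly that, using an in-memory sqlite table (the table and column names are made up for illustration):

```python
import sqlite3

# A unique constraint on the idempotency key makes the database itself
# reject the second insert, so a duplicate payment can't be created.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (idem_key TEXT PRIMARY KEY, amount INTEGER)")

def create_payment(key: str, amount: int) -> str:
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO payments VALUES (?, ?)", (key, amount))
        return "created"
    except sqlite3.IntegrityError:
        return "payment already exists"
```

The nice property is that the check and the insert are one atomic operation, so there's no race between "look up the key" and "write the payment".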
"Idempotency is about the effect. An operation is idempotent if applying it once or many times has the same intended effect."
Edit: Perhaps it is my mental model that is different. I think it makes most sense to see the idempotency key as a transaction identifier, and each request as a modification of that transaction. From this perspective it is clearer that the API calls are only implying the expected state that you need to handle conflicts and make PUTs idempotent. Making it explicit clarifies things.
The article actually ends up creating the required table to make this explicit, but the API calls do not clarify their intent. As long as the transaction remains pending you're free to say "just set the details to X" and just let the last call win, but making the state final requires knowing the state and if you are wrong it should return an error.
If you split this in two calls there's no way to avoid an error if you set it from pending to final twice. So a call that does both at once should also crash on conflicts because one of the two calls incorrectly assumed the transaction was still pending.
What's being asked for here is eventual consistency. If you make the same request twice, the system must settle into the same state as if it were done only once. That's the realm of conflict-free replicated data types, which the article is trying to re-invent.
Making the same request twice settle into the same state over a link with delay and errors is a problem that requires the heavy machinery of CRDTs.

You are hiding the relevant complexity in the term "same". What is "the same" here? I mean, if I accidentally buy only 1 item of a product instead of 2, and then afterwards buy 1 more item, how is that the same or not the same payment?
The idempotency key of the request
If the client sends the same key but a different payload that’s a 400 or 409 in my eyes.
2) Client's choice
I can choose to purchase a 2nd item, or I can choose to retry purchasing the 1st item. The server making that choice for me is not idempotency.
Idempotency is the server supporting my ability to retry purchasing the 1st item, safe in the knowledge that they won't send me a 2nd one.
For idempotency you literally just want f(state) = f(f(state)). Whether you achieve this by just doing the same thing twice (no external effects) or doing the thing exactly once (if you do have side effects) is not important.
But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
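The f(state) = f(f(state)) property is easy to check directly; here's a toy example (the "mark the order paid" operation is made up for illustration):

```python
# An operation is idempotent when applying it twice leaves the state
# exactly as applying it once does: f(state) == f(f(state)).
def mark_paid(state: dict) -> dict:
    return {**state, "status": "paid"}

order = {"id": 7, "status": "pending"}
assert mark_paid(order) == mark_paid(mark_paid(order))
```

Whether the second application is a real re-execution or a short-circuited no-op is, as noted above, an implementation detail as far as the state is concerned.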
I think it depends on whether the sender needs to know whether the thing was done during the request, or just needs to know that the thing was done at all. If the API is to make a purchase then maybe all the caller really needs to know is "the purchase has been done", no matter whether it was done this time or a previous time.
And in terms of a caller implementing retry logic, it's easier for the caller to just retry and accept the success response the second time (no matter if it was done the second time, or actually done the first time but the response got lost triggering the retry).
I mean:
> Maybe the first request created a local payment but crashed before publishing an event ...
I mean, yeah, sure. That's a problem. I can come up with another one:
"Maybe the ZFS disk array for the DB caught fire and died a horrible death and you now need to restore from backups".
But that's going to be a problem anyway.
Idempotency or not, many points in the article are about atomic transactions.
Not well organized, but not zero value.
If you like the article, upvote. If you don’t, don’t.
Auth, logging, and atomicity are all isolated concerns that should not affect the domain specific user contract with your API.
How you handle unique keys is going to vary by domain and tolerance-- and its probably not going to be the same in every table.
It's important to design a database schema that can work independently of your middleware layer.
(Though I do disagree with the original premise too. Putting on a 'stateless' boxing glove won't mean there's no difference between punching a guy once or twice)
Here x is interpreted as state and f an action acting on the state.
State is in practice always subject to side effects and concurrency. That's why, if x is state, f can never be purely idempotent, and the term has to be interpreted in a hand-wavy fashion. That leads to confusion in attempts to handle the mismatch, which in turn leads to rather meandering, confusing, and way too long blog posts such as the one we are seeing here.
*: I wonder how you can write such a lengthy text and not once mention this. If you want to understand idempotency in a meaningful way then you have to reduce the scenario to a mathematical function. If you don't, you are left with a fuzzy concept, and there isn't much point in philosophizing beyond just accepting how something is practically implemented, like this idempotency key.
That is simply not true. f could be, for example, “set x.variable to 7”, which is definitely idempotent.
> State is in practice always subjected to side effects and concurrency.
There was never any claim or assumption regarding f. Maybe the way you interpreted it is what they meant, but it is not what was stated.
And yes, in real machines we can't ever have truly identical states between multiple calls, as system time, heat, and other effects will differ, but we define the state over the abstracted system model of whatever we are modelling, and we define idempotency as reaching the same state over multiple calls in that model.
"delete record with id 123" is only idempotent if there is no chance that an operation like "create record with id 123" happened in between.
I wondered about this too. Also, why was it framed in the context of JSON based RPC over HTTP ?
In that mathematical notation typically there is no side effects and those are meant to be pure functions.
From a cursory read, only the part up to "what if the second request comes while the first is running" is an idempotency problem, in which case all subsequent responses need to wait until the first one is generated.
Everything else is an atomicity issue, which is fine, let's just call it what it is.
The user wants something + the system might fail = the user must be able to try again.
If the system does not try again, but instead parrots the text of the previous failure, why bother? You didn't build reliability into the system, you built a deliberately stale cache.
It's not about trying again but about making sure you get consistent state.
Imagine a request for payment. You made one and it timed out. Why did it time out? Your network, or a payment service error?
You don't know, so you can't decide between retry and not retry.
Thus the practice is: make a request; ack the request with a status request id (idempotent: the same request gives the same status id); status checks might or might not be idempotent, but they usually are; each request needs a unique id to validate whether the caller even tried to check (idempotency requires state registration).
If you want to try again you give new key and that's it.
There might of course be a bug in the implementation (naive example: the idempotency key is a uint8), but a proper implementation should scope keys so they don't clash. (Example implementation: idempotency keys are reusable after 48h.)
If same calls result in different responses (doesn't matter if you saw it or not) then API isn't idempotent.
I'm well aware that the first order went through, even though the dumb system fumbled the translation of the success message and gave me a 500 back.
I do retry because I wanted the outcome. I'm not giving it a new key (firstly because I'm a user clicking a form, not choosing UUIDs for my shopping cart) but more importantly, if I did supply a second key, it's now my fault for ordering two copies.
Take a good principle like 'modules should keep their inner workings secret so the caller can't use it wrong', run it through the best-practise-machine, and end up with 'I hand-write getters and setters on all my classes because encapsulation'.
This entire example is bad design. It's bad, bad design. I'm sorry, but if this is your example, you are doing it wrong in every way. There are ways to handle these sorts of things, well-known and well-established patterns. You are using none of these here.
I get it, it's an example, but it's a poor example. You should change it before someone assumes what you are talking about is sensible or reasonable in a production environment. Or at least put a warning.
The GET/POST split is the defence (even it's only advisory).
GET-only means every time you hit the back button during an order flow, you might double-order.
One thing that's confusing, here, is that idempotency only applies for the same request, but the article implies that idempotency is about whether the request contains a specific "idempotency key".
Don't do that, and this problem evaporates.
Don't do that, and you solved nothing.
Either I'm missing what you mean, or half the comments here are missing the point of idempotency.
Let's say your server received this request twice within one minute:
How can you tell from the server whether that's a retry (think e.g. some reverse proxy crashed and the first request timed out, but the payment already went through to the user's CC)... or whether the user is just trying to purchase another item 123 because they forgot they needed 2?

There is simply no way to make the requests idempotent without an idempotency key. The only way to tell both situations apart is to key the requests by some UID. The HTTP verb is irrelevant.
Did I misunderstand what you meant?
A lot of little things you need to think of. For example:
Client sends a request. The database is temporarily down. The server catches the exception and records the key status as FAILED. The client retries the request (as they should for a 500 error). The server sees the key exists with status FAILED and returns the error again, forever. You've effectively "burned" the key on a transient error.
others like:
- you may have namespace collisions between users (data leaks)
- when not using transactions, only redis locking, you have a different set of problems
- the client needs to be implemented correctly; e.g. the client sees a timeout and generates a new key, and exactly-once processing is broken
- you may have race conditions with resource deletes
- using UUIDs vs. keys built from object attributes (a different set of issues)
I mean, the list of these little details can get very long.
This is the bug regardless of idempotency, right? It should be recording something like RESOURCE_UNAVAILABLE.
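Right: the fix is to persist only terminal outcomes, so transient errors leave the key reusable. A minimal sketch (the exception class and return shapes are made up for illustration, echoing the definite vs. indefinite distinction mentioned elsewhere in the thread):

```python
store = {}  # idempotency_key -> terminal outcome only

class TransientError(Exception):
    """An indefinite failure (DB down, timeout): the retry must be allowed to run."""
    pass

def handle(key: str, do_work):
    record = store.get(key)
    if record is not None:
        return record  # replay a previously recorded terminal outcome
    try:
        result = do_work()
    except TransientError:
        # Do NOT record FAILED here: the key is not burned,
        # and the client's retry will attempt the work again.
        return ("error", 503)
    store[key] = ("ok", result)  # only definite outcomes are persisted
    return store[key]
```

A definite failure (e.g. card declined) could still be persisted as a terminal FAILED record; the point is that the classification happens before anything is written under the key.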
This rubs me the wrong way. It's stated as fact without any trace of evidence, it is probably false, and it seems to serve no purpose but to make struggling students feel worse (and make the author feel superior).
In the real world you're faced with building five nines active-active systems that interface across various stakeholders, behaviour has to be eventually consistent, you've got a long list of requirements and deadlines, etc. It's practical, hands on, and people are there to build the thing with you at a scale that far exceeds the university undergraduate setting.
It's not a bad thing, it's just different.
Students shouldn't be afraid of it. Your job and coworkers, if it's a good workplace, are there to help you succeed as you succeed together. You learn and grow a lot.
You also learn how to deal with people, politics, changing requirements, etc., which I would imagine is difficult or impossible to teach without just throwing yourself into the fire.