The Bq suggestion doesn’t actually fix anything. Becquerel is defined as one decay event per second and is dimensionally identical to Hz. Using Bq typically signals that a Poisson process is being measured, which is itself an assumption about the arrival statistics. This assumption is likely wrong for real web traffic (which tends to be bursty rather than memoryless).
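To make the "bursty rather than memoryless" point concrete, here is a minimal sketch (the timestamps, bin width, and function name are all made up for illustration) that estimates the index of dispersion of binned request counts; a Poisson process gives roughly 1, bursty traffic pushes it well above 1.

```python
import numpy as np

def index_of_dispersion(arrival_times, bin_width=1.0):
    """Variance-to-mean ratio of per-bin request counts.

    ~1 for a Poisson (memoryless) arrival process, >1 for bursty traffic.
    """
    t = np.asarray(arrival_times)
    bins = np.arange(t.min(), t.max() + bin_width, bin_width)
    counts, _ = np.histogram(t, bins=bins)
    return counts.var() / counts.mean()

# Hypothetical comparison: Poisson arrivals vs. the same average rate delivered in bursts.
rng = np.random.default_rng(0)
poisson_arrivals = np.cumsum(rng.exponential(scale=0.1, size=10_000))   # ~10 req/s
bursty_arrivals = np.sort(np.concatenate(
    [start + rng.exponential(0.01, size=100)                            # 100-request bursts
     for start in rng.uniform(0, 1_000, size=100)]))

print(index_of_dispersion(poisson_arrivals))  # close to 1
print(index_of_dispersion(bursty_arrivals))   # much larger than 1
```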
More importantly, the claim that Hz is inappropriate for non-periodic phenomena is false. Many random processes have a well-defined Fourier transform, and reporting the intensity of random fluctuations over a frequency range is standard across signal processing, neuroscience, finance, and physics. The unit doesn’t imply periodicity of the process itself; it implies that we are working in the Fourier domain, which applies to stochastic processes just as much as to periodic signals.
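As an illustration, here is a sketch (the binning, numbers, and names are my own, not anything standard) of how you would put a non-periodic request stream on a frequency axis in Hz: bin the arrivals into a counting series and estimate its power spectral density with Welch's method.

```python
import numpy as np
from scipy.signal import welch

def request_rate_psd(arrival_times, bin_width=0.1):
    """Power spectral density of the binned request-count series.

    The frequency axis is in Hz even though nothing here is periodic; it just
    describes how the fluctuations are distributed in the Fourier domain.
    """
    t = np.asarray(arrival_times)
    bins = np.arange(t.min(), t.max() + bin_width, bin_width)
    counts, _ = np.histogram(t, bins=bins)
    fs = 1.0 / bin_width                       # sampling rate of the count series
    freqs, psd = welch(counts, fs=fs, nperseg=1024)
    return freqs, psd

# Illustrative use with purely random (non-periodic) arrivals at ~50 req/s.
rng = np.random.default_rng(1)
arrivals = np.cumsum(rng.exponential(scale=0.02, size=50_000))
freqs, psd = request_rate_psd(arrivals)
```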
If you want to characterize web request traffic properly, the right question is what the arrival process actually looks like. A single scalar, whether in Hz or Bq, throws away almost all of that. In all cases, you have to think carefully about what your underlying assumptions are and what the reported number actually measures.
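For example, instead of one scalar, something like this sketch (all names hypothetical) keeps the shape of the arrival process: inter-arrival percentiles plus the coefficient of variation, which is 1 for a Poisson process and grows as the traffic gets burstier.

```python
import numpy as np

def summarize_arrivals(arrival_times):
    """Describe the arrival process instead of collapsing it to one number."""
    gaps = np.diff(np.sort(np.asarray(arrival_times)))   # inter-arrival times, seconds
    return {
        "mean_rate_per_s": 1.0 / gaps.mean(),
        "interarrival_p50_s": float(np.percentile(gaps, 50)),
        "interarrival_p99_s": float(np.percentile(gaps, 99)),
        # 1 for exponential (Poisson) gaps, larger when bursty.
        "interarrival_cv": gaps.std() / gaps.mean(),
    }
```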
Becquerels (or counts per second) have the same problem in that they don't measure the "energy" of each request.
I do like the analogy though. Actual radiation has many forms and energy levels.
Decay chains are a nice analogy you could use too (i.e. a branching out of subsequent processes and work that come later, but are a consequence of the initial request).
And yes, like Sieverts, some types of incoming request, and some "organs" are more consequential than others. There's even an analogy to "committed dose" as the database accumulates things.
The authority on the definition of SI units is very clear:
> The hertz shall only be used for periodic phenomena and the becquerel shall only be used for stochastic processes in activity referred to a radionuclide
Usually, no radionuclides are involved in web requests.
https://www.bipm.org/documents/d/guest/si-brochure-9-en-pdf
Counterpoint: let's say we connect a speaker to the HTTP server, and every time there's a request, the speaker produces a click. This setup will make audible sound. If it's OK to measure this sound in Hz, then it's OK to measure the HTTP requests in Hz, because they're explicitly === sound in this case.
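(If you actually want to build that, here is a rough sketch; the sample rate, click shape, and file name are arbitrary choices, and timestamps are assumed to be seconds since the start of capture.)

```python
import numpy as np
from scipy.io import wavfile

def requests_to_clicks(arrival_times, out_path="requests.wav", sample_rate=44_100):
    """Render one short click per request timestamp into a WAV file."""
    t = np.asarray(arrival_times, dtype=float)
    audio = np.zeros(int((t.max() + 0.1) * sample_rate), dtype=np.float32)
    click = np.exp(-np.arange(64) / 8.0).astype(np.float32)   # short decaying click
    for ts in t:
        start = int(ts * sample_rate)
        audio[start:start + click.size] += click[: audio.size - start]
    np.clip(audio, -1.0, 1.0, out=audio)
    wavfile.write(out_path, sample_rate, audio)
```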
We don't use units of measurement.
We use metrics because we have a lot more context.
RPS, requests per second, is a commonly used unit, but it has no defined standard; you could, and often do, average it over time for reporting, but no one says you have to. For scaling, however, you'll probably want to use the max, not the average, because no one wants a web application where, in business as usual, 60% of the time it works every time.
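Something like this sketch (the variable names and toy traffic are invented) shows why the distinction matters: the average over a window can look harmless while the worst one-second bucket, which is what you actually size for, does not.

```python
from collections import Counter

def rps_stats(request_timestamps):
    """Average vs. peak requests-per-second from raw timestamps (seconds)."""
    per_second = Counter(int(ts) for ts in request_timestamps)
    window = max(per_second) - min(per_second) + 1
    return len(request_timestamps) / window, max(per_second.values())

# A quiet hour at ~10 req/s plus one five-second spike of 5,000 requests.
timestamps = [0.1 * i for i in range(36_000)] + [1800 + 0.001 * i for i in range(5_000)]
print(rps_stats(timestamps))   # average stays low, the peak is what kills you
```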
Oh, that's kinda fun. I got the same thing that I get for every Mastodon (and Anubis-protected) link: a page telling me that it won't work without JavaScript. I guess since AI scrapers these days do run some amount of JS, that is some second layer of defense?
At least for Twitter there are proxies that work without JS. For Mastodon, none that I'm aware of. I usually just audibly sigh and remark that they shall "keep their secrets then", and move on.
We are not talking about the same thing, it seems. I can understand a web page that doesn't work without javascript.
What I do not understand is someone who goes through all this work of putting an AI-scraper tarpit on Mastodon, a system that fundamentally needs to have its data distributed to other servers. It's just signalling and posturing, because that content is available on any server that has someone following the account.
(Tip to AI scrapers: if you want to slurp all the data from the fediverse, just create an account on mastodon.social and pull the data from the "Federated timeline" stream.)
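(For what it's worth, a minimal sketch of what that looks like, assuming the instance still serves its public timeline API without authentication; the instance URL, page count, and field names here are just an example.)

```python
import requests

def fetch_federated_timeline(instance="https://mastodon.social", pages=3):
    """Page backwards through the public (federated) timeline via the REST API."""
    url = f"{instance}/api/v1/timelines/public"
    params = {"limit": 40}
    statuses = []
    for _ in range(pages):
        resp = requests.get(url, params=params, timeout=10)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        statuses.extend(batch)
        params["max_id"] = batch[-1]["id"]   # continue with older statuses
    return statuses
```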
It's not you. It's the people that were somehow convinced that serving crap is gonna "hurt" the models. These are people who have 0 clue on how models are trained and how they work, but have been riled up by others who similarly don't understand the technical details, but have strong biases against them. This is ignorance signalling at its finest.
And, as expected, it's hurting their (regular) users more than they'll hurt the model trainers. Oh well..
To those who automatically assume humans with "weird" setups are "AI scrapers" (also a bit of a boogeyman these days): FUCK YOU. I'm a human, not a stupid mindless sheeple.
We should use sieverts, i.e. how the speed is affecting my UX. That may depend on how much I give a fuck about the site, multiplied by how many requests are needed to render it.
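In that spirit, a toy sketch (every weight and request type here is invented for illustration) of a sievert-style "effective dose" for a page load: each request weighted by how much that kind of request hurts, scaled by how much you care about the site.

```python
# Analogous to radiation weighting factors; all numbers are made up.
REQUEST_WEIGHTS = {
    "document": 1.0,
    "blocking_script": 3.0,
    "font": 0.5,
    "tracking_pixel": 0.1,
}

def ux_dose(request_counts, care_factor=1.0):
    """Weighted request count, scaled by how much you give a fuck about the site."""
    return care_factor * sum(
        REQUEST_WEIGHTS.get(kind, 1.0) * n for kind, n in request_counts.items()
    )

print(ux_dose({"document": 1, "blocking_script": 12, "tracking_pixel": 40}, care_factor=0.2))
```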
I'd say Hz is quite a regular choice for _this_ ... it's just not referred to as "Hertz" by IT practitioners (usually). Technically, Bq and Hz are the same unit, 1/s; the difference is that Bq is used for random physical events (comparable to web requests) and Hz is used for periodic physical events.
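A quick way to see the "same unit, different connotation" point, assuming pint's default registry (which, as far as I know, defines both Hz and Bq):

```python
import pint

ureg = pint.UnitRegistry()

# Both reduce to 1 / [time]; choosing between them is about connotation, not dimension.
print(ureg.hertz.dimensionality)
print(ureg.becquerel.dimensionality)
print(ureg.hertz.dimensionality == ureg.becquerel.dimensionality)  # True
```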
All the talk about "putting the human first" and "embracing diversity" goes out of the window the moment you are not diverse in the way they want.
[R] = Ohm
Never [Ohms]