13 comments

  • RossBencina 1 day ago
    It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).

    The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.

    I first learnt of this technique from Phil Burk; we've been using it in PortAudio forever. The technique is also widely known in FPGA/hardware circles, see:

    "Simulation and Synthesis Techniques for Asynchronous FIFO Design", Clifford E. Cummings, Sunburst Design, Inc.

    https://twins.ee.nctu.edu.tw/courses/ip_core_04/resource_pdf...
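
    For reference, a minimal C11 sketch of that SPSC scheme, in the classic
    one-slot-wasted variant (names are illustrative, not PortAudio's API);
    the conditional wrap is what frees the size from being a power of two:

      #include <stdatomic.h>
      #include <stddef.h>

      #define SIZE 50                   /* any size >= 2 works */
      static int buf[SIZE];
      static _Atomic size_t rd, wr;     /* consumer / producer indices */

      static size_t next(size_t i) { return (i + 1 == SIZE) ? 0 : i + 1; }

      int push(int v) {                 /* producer thread only */
          size_t w = atomic_load_explicit(&wr, memory_order_relaxed);
          size_t n = next(w);
          if (n == atomic_load_explicit(&rd, memory_order_acquire))
              return 0;                 /* full: one slot deliberately unused */
          buf[w] = v;
          atomic_store_explicit(&wr, n, memory_order_release);
          return 1;
      }

      int pop(int *v) {                 /* consumer thread only */
          size_t r = atomic_load_explicit(&rd, memory_order_relaxed);
          if (r == atomic_load_explicit(&wr, memory_order_acquire))
              return 0;                 /* empty */
          *v = buf[r];
          atomic_store_explicit(&rd, next(r), memory_order_release);
          return 1;
      }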

    • hinkley 1 day ago
      I think unfortunately we sometimes ascribe supernatural powers to powers of two when the real story is that caches are built in powers of two.

      Intel still uses 64-byte cache lines, as they have for quite a long time, but they also do some shenanigans on the bus where they try to fetch two lines when you ask for one. So there's ostensibly some benefit to aligning data to 128 bytes, particularly for cold-cache linear scans.

      • rcoveson 1 day ago
        But there's a reason that caches are always sized in powers of two as well, and that same reason is applicable to high-performance ring buffers: division by powers of two is easy, and easy is fast. It's reliably a single cycle, compared to division by arbitrary 32-bit integers, which can take 8-30 cycles depending on the CPU.

        Also, there's another benefit downstream of that one: powers of two work as a Schelling point for allocations. Picking powers of two for resizable vectors maximizes "good luck" when you malloc/realloc in most allocators, in part because e.g. a buddy allocator is probably also implemented using power-of-two allocations for the above reason, but also for the plain reason that other users of the same allocator are more likely to have requested power-of-two allocations. Spontaneous coordination is a benefit all its own. Almost supernatural! :)
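
        Concretely (a toy illustration, not from the article): a power-of-two
        wrap is a single AND, while a runtime divisor needs a real divide:

          unsigned wrap_pow2(unsigned i)            { return i & (256u - 1); } /* one AND */
          unsigned wrap_any(unsigned i, unsigned n) { return i % n; }          /* hw divide */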

        • hinkley 11 hours ago
          CPU caches are powers of two because retrieval involves a logarithmic number of gates that have to fire within a clock cycle. There is a saddle point where adding more cache makes instructions per second start to go back down again, and that number will be a power of two.

          That has next to nothing to do with how much of your 128 GB of RAM should be dedicated to any one data structure, because the working memory for a task is the sum of a bunch of different data structures that have to fit into both the caches and main memory. Those sizes used to be powers of two, but now main memory is often 2^n x 3.

          And as someone else pointed out, the optimal growth factor for resizable data structures is not 2, but the golden ratio, 1.61. But most implementations use 1.5 aka 3/2.

        • kevin_thibedeau 1 day ago
          Powers of two are problematic with growable arrays on small heaps. You risk ending up with fragmented space you can't reuse unless you keep growth below 1.61x, which would necessitate data structures that can deal with arbitrary sizes.
        • loeg 1 day ago
          Fwiw in this application you would never need to divide by an arbitrary integer each time; you'd pick it once and then plumb it into libdivide and get something significantly cheaper than 8-30 cycles.
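
          For instance (a sketch against libdivide's C API, as I understand
          it; the struct is generated once per ring size):

            #include <stdint.h>
            #include "libdivide.h"

            struct ring_mod {
                uint32_t size;
                struct libdivide_u32_t d;   /* precomputed multiply/shift */
            };
            /* init once: r->d = libdivide_u32_gen(r->size); */

            /* i % size without a hardware divide */
            uint32_t slot(const struct ring_mod *r, uint32_t i) {
                return i - libdivide_u32_do(i, &r->d) * r->size;
            }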
      • KeplerBoy 16 hours ago
        It's not just Intel, is it? AMD also uses 64-byte cache lines afaik.
    • waffletower 10 hours ago
      Regardless of correctness, as a DSP dork I really identified with the question: "What kind of a monster would make a non-power of two ring anyway?" I remember thinking similarly when requesting a power-of-two buffer from a 3rd-party audio hardware device and having it corrected to a nearby non-power-of-two size. Latency-adding ring buffer to the rescue.
    • aidenn0 1 day ago
      Non-power-of-two is only really feasible if the total number of inserts will fit in your post/ack counters. Otherwise you have to implement overflow manually, which may or may not be possible to do with the available atomic primitives on your architecture.

      I first encountered this structure at a summer internship at a company making data switches.

    • tom_ 1 day ago
      A couple of the comments to the article suggest using 64-bit numbers, which is exactly the right solution. 2^64 nanoseconds = 584.55 years; overflow is implausible for any realistic use case. Even pathological cases will struggle to induce wraparound at a human timescale.

      (People will probably moan at the idea of restarting the process periodically rather than fixing the issue properly, but when the period would be something like 50 years I don't think it's actually a problem.)

      • ale42 19 hours ago
        > using 64-bit numbers, which is exactly the right solution

        On a 64-bit platform, sure. When you're working on ring buffers with an 8-bit microcontroller, using 64-bit numbers would be such an overhead that nobody would even think of it.

      • thaumasiotes 22 hours ago
        > but when the period would be something like 50 years I don't think it's actually a problem

        I think you have that backwards. If something needs to be done every week, it will get done every week. That's not a problem.

        If something needs to be done every fifty years, you'll be lucky if it happens once.

        • reincarnate0x14 19 hours ago
          I agree with that sentiment in general, but even though I've seen systems in continuous operation for 15 years, I've never seen anything make it to 20. I wouldn't write something with the explicit expectation that it never makes it that far, but in practical terms, that's probably about as safe as it gets. Even embedded medical devices expect to get restarted every now and again.

          Just as an example, the Voyager computers have been restarted, and those have been running for almost 50 years.

        • tom_ 22 hours ago
          My parting shot was slightly tongue in cheek, apologies. Fifty years is a long time. The process, whatever it is, will have been replaced or otherwise become irrelevant long before the period is up. 64 bits will be sufficient.
          • gpderetta 16 hours ago
            At some point a random bitflip becomes more likely than the counter overflowing.
    • azemetre 1 day ago
      Your link has an invalid cert FYI, but do appreciate the knowledge drop. Ring buffers are some of the cooler data structures out there.
      • RossBencina 17 hours ago
        Unfortunately the original source is now behind a sign-in-wall.
    • zephen 1 day ago
      > It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).

      That may or may not be part of the actual definition of a ring buffer, but every ring buffer I have written had those goals in mind.

      And the first method mentioned in the article fully satisfies this, except for the one missing element mentioned by the author. Which, in practice, often is not only not a problem but simplifies the logic so much that you make up for it in code space.

      Or, for example, say you have a 256-character buffer. You really, really want to make sure you don't waste that one character. So you increase the size of your indices. Now they are 16 bits each instead of 8 bits, so you've gained the ability to store 256 bytes using 260 bytes of memory, rather than 255 bytes using 258 bytes of memory.

      Obviously, if you have a 64 byte buffer, there is no such tradeoff, and the third example wins (but, whether you are doing the first or third example, you still have to mask the index data off at some point, whether it's on an increment or a read).

      > The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.

      There's "not possible" and then "not practical."

      Sure, you could have a 50 byte buffer, but now, if your indices are ever >= 50, you're subtracting 50 before accessing the array, so this will increase the code space (and execution time).

      > The [index size > array size] technique is also widely known in FPGA/hardware circles

      Right, but in those hardware circles, power-of-two _definitely_ matters. You allocate exactly one extra bit for your pointers, and you never bother manually masking them or taking a modulo or anything like that -- they simply roll over.

      If you really, really need to construct something like a 6 entry FIFO in hardware, then you have techniques available to you that mere mortal programmers could not use efficiently at all. For example, you could construct a drop-through FIFO, where every element traverses every storage slot (with a concomitant increase in minimum latency to 6 clock cycles), or you could construct 4 bit indices that counted 0-1-2-3-4-5-8-9-10-11-12-13-0-1-2 etc.
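
      In software terms that skip-counting index is just (a toy model of the
      6-entry/4-bit example, not real hardware code):

        /* counts 0..5, 8..13, then wraps; low 3 bits = slot (0..5),
           bit 3 = lap parity, i.e. the "extra bit" */
        unsigned inc6(unsigned i) {
            return ((i & 7) == 5) ? ((i & 8) ^ 8) : i + 1;
        }
        /* empty: wr == rd;  full: (wr ^ rd) == 8 (same slot, opposite lap) */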

      Most ring buffers, hardware or software, are constructed as powers of two, and most ring buffers either (a) have so much storage that one more element wouldn't make any difference, or (b) have the ability to apply back pressure, so one more element wouldn't make any difference.

    • ErroneousBosh 18 hours ago
      > The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.

      I don't see why it wouldn't be; it's just computationally expensive to take the modulo of the pointer rather than just masking off the appropriate number of bits.

      • myrmidon 15 hours ago
        Replacing just the mask operation is not enough.

        The problem is incrementing past the index integer type limit.

        Consider a simple example with ring buffer size 9 and 16-bit indices:

        When you increment the write index from 0xffff to 0, your "masked index" jumps from 6 (0xffff % 9) to 0 (instead of 7).

        There is no elegant fix that I'm aware of (using a very wide index type, like possibly a uint64, is extremely non-elegant).
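
        A two-assert demonstration of the jump, for the curious:

          #include <assert.h>
          #include <stdint.h>

          void demo(void) {
              uint16_t i = 0xffff;
              assert(i % 9 == 6);   /* masked write index is 6 */
              i++;                  /* 16-bit wraparound: i is now 0 */
              assert(i % 9 == 0);   /* jumps to 0 instead of 7 */
          }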

        • ErroneousBosh 14 hours ago
          Yes, that's what I'm saying. You can't just use a quick and easy mask; you have to use a modulo operator, which is computationally expensive enough that it probably kills the time savings you made elsewhere.

          There's probably no good reason to make your buffer sizes NOT a power of two, though. If memory's that tight, maybe look elsewhere first.

          • myrmidon 12 hours ago
            What I mean is: this ring buffer implementation (and its simplicity) relies on the index range being a multiple of the buffer size (which is only true for powers of two, when the index is e.g. a 32-bit unsigned integer).

            If you swap bitmasking for modulo operations then that does work at first glance, but breaks down when the index wraps around. This forces you to abandon the simple "increment" operation for something more complex, too.

            The requirement for a power-of-two size is more intrinsic to the approach than just the bitmasking operation itself.
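
            One escape hatch (a sketch of the wrap-at-2*size idea, not the
            article's code): keep the free-running index in [0, 2*size) with
            a conditional, which makes the index range a multiple of the
            buffer size again:

              #include <stdint.h>

              uint32_t inc(uint32_t i, uint32_t size) {
                  i++;
                  return (i == 2 * size) ? 0 : i;
              }
              /* slot:  (i >= size) ? i - size : i
                 empty: wr == rd
                 full:  wr == (rd + size) % (2 * size) */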

  • cuno 18 hours ago
    This stuff dates from way, way before 2004.

    For non-power-of-two, I just checked our own very old circular byte buffer library code; using the notation from this article, it is:

      int  entriesAllocated() { return (wrPtr - rdPtr + 2*bufSize) % (2*bufSize); }
      int  remainingSpace()   { return bufSize - entriesAllocated(); }
      bool isEmpty()          { return entriesAllocated() == 0; }
      bool isFull()           { return entriesAllocated() == bufSize; }
      void incWr(int n)       { wrPtr = (wrPtr + n) % (2*bufSize); }
      void incRd(int n)       { rdPtr = (rdPtr + n) % (2*bufSize); }
    
    The 2*bufSize gives you an extra bit (beyond representing bufSize) that lets you disambiguate empty vs full. And if it is a constant power of two (e.g. via C++ template), then you can see how this just compiles into a bitmask instead, like the author's version. You read and write the buffer at (rdPtr%bufSize) and (wrPtr%bufSize) respectively.
  • atq2119 18 hours ago
    It's a silly offhand remark at the end of the article, but anybody who is genuinely interested in whether they've been tying their shoes wrong will enjoy Ian's shoelace site: https://www.fieggen.com/shoelace/
  • Someone 1 day ago
    > So there I was, implementing a one element ring buffer. Which, I'm sure you'll agree, is a perfectly reasonable data structure.

    It is, but, IMO, shouldn't use the code for "an n-element ring buffer, with n set to 1", similarly to how an array of booleans in many languages shouldn't be implemented as "an array of Foos, with Foo set to bool".

    C++ has std::bitset and std::vector<bool> and Java similarly has BitSet and Array because using the generic code for arrays of bits is too wasteful.

    Similarly, a one-element ring buffer is either full or it is empty. Why use two indexes to encode a single boolean?

    • cpgxiii 1 day ago
      > C++ has std::bitset and std::vector<bool> and Java similarly has BitSet and Array because using the generic code for arrays of bits is too wasteful.

      Rather infamously, C++ tried to be clever here and std::vector<bool> is not just a vector-of-bools but instead a totally different vector-ish type that lacks many of the important properties of every other instantiation of std::vector. Yes, a lot of the time you want the space efficiency of a dynamic bitset, rather than wasting an extra 7 bits per element. But also quite often you do want the behavior of a "real" std::vector for true/false values, and then you have to work around it manually (usually via std::vector<uint8_t> or similar) to get the expected behavior.

    • jsnell 1 day ago
      It was for a dynamically growing ring buffer that also did short-object optimization. The natural implementation was to have the capacity and the offsets stored in fixed locations and with a fixed width, and have the variable part be a union of pointer or inline byte buffer.

      Depending on the element width, you'd have space for different amounts of data in the inline buffer. Sometimes 1, sometimes a few more. Specializing for a one-element inline buffer would be quite complex with limited gains.
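
      Roughly this shape, I mean (field names and the inline size are
      illustrative, not the actual code):

        #include <stdint.h>

        struct ring {
            uint32_t capacity;            /* element count, fixed location */
            uint32_t rd, wr;              /* offsets, fixed width */
            union {
                uint8_t *heap;            /* large buffers: heap pointer */
                uint8_t  inline_buf[8];   /* small buffers: stored in place */
            } u;                          /* the variable part */
        };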

      In retrospect trying to use that as a running gag for the blog post did not work well without actually giving the full context, but the full context would have been a distraction.

    • andrepd 1 day ago
      > C++ has std::bitset and std::vector<bool>

      Notably, this is not the case. C++ std::vector is specialised for bools to pack bits into words, causing an untold array (heh) of headaches.

      And "wasteful" is doing a lot of lifting here. In terms of memory usage? Yes. In terms of CPU? The other way around.

      • mbel 1 day ago
        > In terms of CPU? The other way around.

        That depends on your architecture and access pattern. In the case of sequential access, packed bools may perform better, since arithmetic is usually way cheaper than memory operations.

  • ekropotin 1 day ago
    I'm jealous of people who have to write ring buffers for work.

    It feels like 90% of SWE jobs these days are about writing CRUD wrappers.

    • avadodin 1 day ago
      Sorry.

      Mostly Type 1, and overflow is a diagnostic log at most. Losing all stale unprocessed data and leaving a ready empty buffer behind is often the desired outcome.

      Type 3 is probably banned in most codebases because of the integer overflow.

      • zephen 21 hours ago
        > Losing all stale unprocessed data and leaving a ready empty buffer behind is often the desired outcome.

        Yeah, the Type 3 example could conceivably make it so that you intermix old and new data if you overflow, rather than just dumping a whole buffer.

        Especially when your full() function checks for exact equality, like the one in the article does.

        And if you remove the asserts, and then somehow underflow? God help you. You'll be pulling 4 billion entries you never actually stored out of the buffer, just repeating previously stored garbage over and over.

        > Type 3 is probably banned on most codebases because of the integer overflow.

        Not only this, but the purported code reduction benefits associated with type 3 are only superficial, and won't actually appear in any assembly listing.

      • Krssst 1 day ago
        Unsigned integer arithmetic operations don't overflow but are done modulo 2^n (https://en.cppreference.com/w/c/language/operator_arithmetic...). The author does use unsigned integers so I don't think there is a problem there.

        Signed integer overflow is definitely a problem however. Something as simple as incrementing a user-provided int can lead to UB (if the user provides INT_MAX).
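
        A quick illustration:

          #include <limits.h>

          void demo(void) {
              unsigned u = UINT_MAX;
              u++;              /* well-defined: wraps to 0 (mod 2^N) */

              int s = INT_MAX;
              s++;              /* undefined behavior: signed overflow */
          }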

      • RealityVoid 23 hours ago
        Banned is a bit strong; maybe discouraged. MISRA might yell, but it's a valid technique, IMO; unsigned integer overflow will be fine.
    • Neywiny 1 day ago
      And yet here I sit, writing ring buffers, and never thinking about this idea. Probably because of the power-of-two issue. Which isn't actually a problem because, as he points out, who would do that? But it makes me treat it as a restriction when it just isn't one.

      But in all honesty, look for more embedded jobs, then. We can certainly use the help.

      • nathan_douglas 23 hours ago
        What do you work with (if you don't mind answering)? I'm looking for a change and like low-level stuff about as much as I like any other level. I've done some cycle-accurate NES emulation and VM implementation stuff - I'm not much of a DSA guy but performance and efficiency appeal to me.
        • Neywiny 15 hours ago
          I work with pretty much everything (except GPUs I guess). Embedded is extremely relative. To some, embedded means a rack-mount server that's, idk, embedded in a vehicle instead of a datacenter. That's not me. To others, embedded means a 4-bit, low-power, mask-ROM-fed micro inside a sensor IC. That's also not me.

          So I work with microcontrollers from various vendors, I do FPGA with hard and soft processors, recently got just past the smoke test with embedded Linux on a SoC, and I've done plenty of desktop code on Linux and Windows for interfacing. I get to work with a wide range of devices and a wide range of tasks for them. Might not pay as much but my goodness is it fun

      • ekropotin 1 day ago
        For some unexplainable reason, CRUD jobs pay better than embedded, on average.
        • Neywiny 1 day ago
          I mean idk, I'm living comfortably and as the adage says, not working a day in my life. But if you're at a spot where you need the pay more than you want to write ring buffers, I understand.
        • IsTom 15 hours ago
          Probably there are fewer people excited to make another CRUD app than to write embedded code.
    • RealityVoid 1 day ago
      Joke's on me: when I need one, I don't feel like writing it, so I just pick up an old one and tweak it. Or I just tell Claude to build me one and it one-shots it.
  • dang 1 day ago
    Related. Others?

    I've been writing ring buffers wrong all these years - https://news.ycombinator.com/item?id=13175832 - Dec 2016 (167 comments)

  • codeworse 3 days ago
    As far as I know, the last approach is the only way to implement an efficient lock-free ring buffer
    • zephen 1 day ago
      The middle approach is the only one that is not lock-free.

      The first approach is lock-free, but as the author says, it wastes an element.

      But here's the thing. If your element is a character, and your buffer size is, say, 256 bytes, and you are using 8-bit unsigned characters for indices, the one wasted byte is less than one percent of your buffer space, and also is compensated for by the simplicity and reduced code size.

      • fullstop 23 hours ago
        I've used the "Waste an element" one for ages on microcontrollers where I don't want to deal with the overhead in an ISR.
        • zephen 21 hours ago
          Agreed.

          The article author claims that the "don't waste an element" code is also more efficient, but that claim seems to be based on a hard-on about the post-increment operator, rather than any kind of dive into the cyclomatic complexity, or even, y'know, just looking at the assembler output from the compiler.

    • mrcode007 1 day ago
      There is one more way that is truly lock free. Most lock-free implementations relying on atomic compare-and-swap instructions are not lock free afaik; they take a lock on the cache line in the CPU (in a way, you go from one global lock to many distributed locks).

      There is one more mechanism that allows implementing ring buffers without having to compare head and tail pointers at all (and doesn't rely on counters or empty/full flags etc.) that piggybacks on the cache coherency protocol.

      • dooglius 1 day ago
        That's not how "lock free" is defined/used. If you are considering the MESI M state to be a "lock" then you also have to grant that any write instruction is a "lock".
        • mrcode007 2 hours ago
          In fact this is a crux of the problem in low-latency code, and there are ways to combat this.

          I know there are academic wait-free and lock-free definitions, but folks often use those incorrectly as a slogan that something is magically better because it's "lock-free".

          Imagine how _you_ would implement a read-modify-write atomic in the CPU and why E stands for exclusive (sort of like exclusive in a mutex)

      • wat10000 1 day ago
        Those hardware-level locks are typically not considered because they work quite differently. A standard software mutex can cause other threads to block indefinitely if, for example, the thread holding the mutex gets preempted for a long time. "Lock free" isn't really about the locks, it's about a guarantee that the system makes progress.

        In this sense, the hardware locks used for atomic instructions don't really count, because they're implemented such that they can only be held for a brief, well defined time. There's no equivalent to suspending a thread while it holds a lock, causing all other threads to wait for an arbitrary amount of time.

      • spockz 1 day ago
        Interesting! Do you know of an example implementation of this?
  • nly 18 hours ago
    Most people implement them now in my field using mmap tricks so the CPU can do the wraparound for you in virtual memory.

    Makes the code trivial
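
    For the curious, a minimal Linux sketch of the double-mapping trick
    (using memfd_create; error handling omitted, size must be page-aligned):

      #define _GNU_SOURCE
      #include <stddef.h>
      #include <stdint.h>
      #include <sys/mman.h>
      #include <unistd.h>

      /* map the same pages twice, back to back, so any read or write that
         runs off the end of the first mapping lands at the start of it */
      void *ring_map(size_t size) {
          int fd = memfd_create("ring", 0);
          ftruncate(fd, size);
          uint8_t *p = mmap(NULL, 2 * size, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          mmap(p,        size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_FIXED, fd, 0);
          mmap(p + size, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_FIXED, fd, 0);
          close(fd);
          return p;   /* p[i] and p[i + size] alias the same byte */
      }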

    • thrtythreeforty 13 hours ago
      Not only that, but you can also always form a normal (ptr, size) slice reference to any piece of the ring buffer, even when it wraps. This is really helpful for Eigen arrays that you need to rotate.
  • Mikhail_Edoshin 21 hours ago
    Technically each side needs an index plus a single bit. The bit is a counter; you increment it on every wrap. It overflows, but this is correct: we only need the last bit. Initially it is 0. By comparing the indexes and the bit you can tell apart all cases and not lose an entry.

    (I think this was published in one of Llang's papers but in a rather obscure language.)
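
    A sketch of what I mean (my rendering in C, names invented):

      #include <stdbool.h>
      #include <stdint.h>

      struct side { uint32_t idx; uint32_t lap; };   /* lap: the single bit */

      void advance(struct side *s, uint32_t size) {
          if (++s->idx == size) { s->idx = 0; s->lap ^= 1; }   /* flip on wrap */
      }

      /* same index, same bit -> empty; same index, different bit -> full */
      bool is_empty(struct side w, struct side r) { return w.idx == r.idx && w.lap == r.lap; }
      bool is_full (struct side w, struct side r) { return w.idx == r.idx && w.lap != r.lap; }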

    • grumbelbart2 20 hours ago
      One of the comments in the article proposes that: just wrap both counters at 2*capacity (instead of capacity or UINT_MAX).
      • Mikhail_Edoshin 20 hours ago
        Which does that, and that's what Llang suggests as well; I remember 2 raised to some power in his formula. I myself find the separate one-bit counter easier to understand: each side counts pages, but they don't need the full count, only the difference between their counters, and the difference can be at most one, so one bit is sufficient. If the counters are the same, the actors are on the same page; if they are different, the writer is one page ahead.
  • kybernetikos 1 day ago
    Every implementation of the LMAX Disruptor I've come across uses this trick.
  • z3t4 19 hours ago
    Why would you ever want a data structure that wraps around!? What a headache! Is it a memory constraint or optimization!? All I can think about is a physical knob where you want to know what position it is in.
    • danhau 18 hours ago
      They are efficient FIFOs (queues). You'll find them in many places. I know them from multimedia/audio, where you often have unsynchronized readers and writers.

      In the audio domain, the reader and writer are usually allowed to trample over each other. If you've ever gamed on a PC, you might have heard this: when a game freezes, sometimes you hear a short loop of audio playing until the game unfreezes. That's a ring buffer whose writer has stopped, but the async reader is still reading the entire buffer.

      Zig's "There are too many ring buffer implementations in the standard library" might also be interesting:

      https://github.com/ziglang/zig/issues/19231

    • aldonius 15 hours ago
      It's a somewhat different kind of ring buffer, because there's just one index, but I used it in my signal processing class for a finite-impulse-response filter.

      Choose N to be a power of two >= the length of your filter.

      Increment index i mod N, write the sample at buffer position x[i], output the sum of x[(i-k) mod N] * a[k], where the a[k] are your filter coefficients, and repeat with the next sample at the next time step.
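
      In C, one step looks something like this (a sketch; TAPS and the names
      are mine):

        #define N 16                          /* power of two >= TAPS */
        #define TAPS 11

        static float x[N];                    /* sample history ring */
        static unsigned wi;                   /* free-running write index */

        float fir_step(float sample, const float a[TAPS]) {
            x[wi & (N - 1)] = sample;         /* overwrite the oldest slot */
            float acc = 0.0f;
            for (unsigned k = 0; k < TAPS; k++)
                acc += x[(wi - k) & (N - 1)] * a[k];   /* x[i-k mod N] */
            wi++;
            return acc;
        }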

  • gpderetta 18 hours ago
    That's how TCP sequence numbers work as well.
  • spacechild1 17 hours ago
    > All of those seem like non-issues. What kind of a monster would make a non-power of two ring anyway?

    Huh? Anytime you want to restrict the buffer to a specific size, you will have to support non-power-of-two capacities. There are cases where the capacity of the ring buffer determines the latency of the system, e.g. an audio ringbuffer or a network jitter buffer.