Parameterized types in C using the new tag compatibility rule

(nullprogram.com)

154 points | by ingve 36 days ago

13 comments

  • fuhsnn 35 days ago
    The recent #def #enddef proposal[1] would eliminate the need for backslashes to define readable macros, making this pattern much more pleasant. Fingers crossed for its inclusion in C2Y!

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3531.txt

    • cb321 35 days ago
      While long-defs might be nice, even back in ANSI C89 you can get rid of the backslash pattern (or the need to cc -E and run through GNU indent/whatever) by "flipping the script" and defining whole files "parameterized" by their macro environment, like https://github.com/c-blake/bst or https://github.com/glouw/ctl/

      Add a namespacing macro and you have a whole generics system, unlike that in TFA.

      So, it might add more value to have the C std add an `#include "file.c" name1=val1 name2=val2` preprocessor syntax where name1, name2 would be on a "stack" and be popped after processing the file. This would let you do types/functions/whatever "generic modules" with manual instantiation which kind of fits with C (manual management of memory, bounds checking, etc.) but preprocessor-assisted "macro scoping" for nested generics. Perhaps an idea to play with in your slimcc fork?
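
      For a concrete flavor of the existing pattern, here is a minimal sketch (the file name and the T/NAME macros are made up, not the actual bst/ctl conventions):

          /* slice.h: a generic "module", parameterized by its macro environment.
             It expects the includer to define T (element type) and NAME(x) (namespacing). */
          typedef struct { T *ptr; int len; } NAME(slice);
          static T NAME(get)(NAME(slice) s, int i) { return s.ptr[i]; }

          /* client.c: one manual instantiation per element type. */
          #define T       int
          #define NAME(x) int_ ## x
          #include "slice.h"
          #undef T
          #undef NAME
          /* ...now int_slice and int_get() exist; repeat with another T/NAME for more. */

      The #define/#include/#undef dance per instantiation is exactly what the proposed one-line #include syntax above would collapse.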

      • fuhsnn 35 days ago
        > `#include "file.c" name1=val1 name2=val2`

        That's an interesting idea! I think D or Zig's C header importer had similar syntax, I'm definitely gonna do it.

        • cb321 35 days ago
          Glad you like it! Not sure what separator you would pick between the name(args)=expansion stuff. I could imagine some generic files/modules might have enough or long enough params that people might want to backslash line continue. So, maybe '@' or '`' { depending on if you want many or few pixels ;-) } ?

              #include "file.c" _(_x)=myNamePrefix ## _x `\
                                KEY=charPtr VAL=int `\
                                ....
          
          The idea being that inside any generic module your private / protected names are all spelled _(_add)(..).

          By doing that kind of namespacing, you actually can write a generic module which allows client-code manual instantiators a lot of control to select "verb noun" instead of "noun verb" kinds of schemes, like "add_word" instead of "word_add", and potentially even change snake_case to camelCase with some _capwords.h file that does `#define _get Get`-like moves, though of course name collisions can bite. That bst/ thing I linked to does not have full examples of all the optionality. E.g., to support my "stack popping" of macro defs, without that but just with ANSI C89 you might do something like this instead to get "namespace nesting":

              #ifndef CT_ENVIRON_H
              #define CT_ENVIRON_H
              /* This file establishes a macro environment suitable for instantiation of
                 any of the Map/Set/Seq/Pool or other sorts of generic collections. */
              
              #ifndef _
              /* set up a macro-chain using token pasting *inside* macro argument lists. */
              #define _9(x)    x    /* an identity macro must terminate the chain. */
              #define _8(x) _9(x)
              #define _7(x) _8(x)   /* This whole chain can really be as long as   */
              #define _6(x) _7(x)   /* you want.  At some extreme point (maybe     */
              #define _5(x) _6(x)   /* dozens of levels) expanding _(_) will start */
              #define _4(x) _5(x)   /* to slow-down the Cpp phase.                */
              #define _3(x) _4(x)   /* Also, definition order doesn't matter, but  */
              #define _2(x) _3(x)   /* I like how top->bottom matches left->right  */
              #define _1(x) _2(x)   /* in the prefixing-expansions.               */
              #define _0(x) _1(x)
              #define _(x)  _0(x)   /* _(_) must start the expansion chain */
              #endif
              
              #ifndef CT_LNK
              #   define CT_LNK static
              #endif
              #endif /* CT_ENVIRON_H */
          
          and then with a setup like that in place you can do:

              #define _8(x) _9(i_ ## x)  /* some external client decides "i_"          */
              _(_foo)                    /* #include "I" -> i_foo at nesting-level 8   */
              #define _6(x) _7(e_ ## x)  /* impl of i_ decides "e_"                    */
              _(_foo)                    /* #include "E" -> i_e_foo at level 6         */
              #define _3(x) _4(c_ ## x)  /* impl of e_ decides "c_"                    */
              _(_foo)                    /* #include "C" -> i_e_c_foo at level 3       */
              #define _0(x) _1(l_ ## x)  /* impl of c_ decides "l_"                    */
              _(_t)
              _(_foo)                    /* #include "L" -> i_e_c_l_foo at level 0     */
              #define _0(x) _1(x)        /* c impl uses _(l_foo) to define _(bars)     */
              _(_foo)                    /* i_e_c_foo at nesting level 3 again         */
              #define _3(x) _4(x)        /* e impl uses _(c_foo) to define _(bars)     */
              _(_foo)                    /* i_e_foo at nesting level 6 again           */
              #define _6(x) _7(x)        /* i impl now uses _(e_foo) to define _(bars) */
              _(_foo)                    /* i_foo at nesting level 8 again             */
          
          Yes, yes. All pretty hopelessly manual (as is C in so many aspects!). But that smarter macro def semantics across parameterized includes I mentioned above could go a long way towards a quality of life improvement "for client code" with good "library code" file organization. I doubt it will ever be easy enough to displace C++ much, though.

          Personally, I started doing this kind of thing in the mid-1990s as soon as I saw people shipping "code in headers" in C++ template libraries and open source taking off. These days I think of it as an example of how much you can achieve with very simple mechanisms and the trade-offs of automating instantiation at all. But people sure seem to like to "just refer" to instances of generic types.

      • glouwbug 35 days ago
        I've been thinking of maybe doing CTL2 with this. Maybe if #def makes it in.
        • cb321 35 days ago
          I think the #include extension could make vec_vec / vec_list / lst_str type nesting more natural/maybe more general, but maybe just my opinion. :-)

          I guess ctags-type tools would need updating for the new possible definition location. Mostly someone needs to decide on a separation syntax for stuff like `name1(..)=expansion1 name2(..)=expansion2` for "in-line" cases. Compiler programs have had `cc -Dname(..)=expansion` or equivalents since the dawn of the language, but they actually get the OS/argv idea of separation from whatever CL args or Windows APIs or etc.

          Anyway, it might make sense to first get experience with a slimcc/tinycc/gcc/clang cpp++ extension. ;-) Personally, these days I mostly just use Nim as a better C.

    • hyperbolablabla 35 days ago
      I really don't think the backslashes are that annoying? Seems unnecessary to complicate the spec with stuff like this.
      • cb321 35 days ago
        FWIW, https://www.cs.cornell.edu/andru/ Andrew Myers had some patch to gcc to do this back in the late 90s.

        Anyway, as is so often the case, it's about the whole ecosystem not just of tooling but the ecosystem of assumptions about & around tooling.

        As I mentioned in my other comment, if you want you can always cc -E and re-format the code somehow, although the main times you want to do that are for line-by-line stepping in debuggers or maybe for other cases of "lines as source coordinates" like line-by-line profilers.

        Of course, a more elegant solution might be just having more "adjustable step sizes/source coordinates" in debuggers, like "single ';'-statement" or maybe "single sequence point", rather than just line orientation. This is, in fact, so natural an idea that it seems a virtual certainty some C debugger has an "expressional step/next", especially if written by a fan more of Lisp than assembly. Of course, at some point a library is just debugged/trusted, but if there are "user hooks" those can be buggy. If it's performance-important, it may never be unwelcome to have better profile reports.

        While addr2line has been a thing forever, I've never heard of an addr2expr - probably because "how would you label it?" So, pros & cons, but ease for debuggers/profilers is one reason I think the parameterized-file way is lower friction.

        • kreco 35 days ago
          This Facebook repository also uses a new "extension" to do a similar thing:

          https://github.com/facebookresearch/CParser#multiline-macros

        • core-explorer 35 days ago
          Debugging information is more precise than line numbers; it usually conveys both line and column in a source file.

          Some debuggers make use of it when displaying the current program state, but the major debuggers do not allow you to step into a specific sub-call on a line (e.g. skip function arguments and go straight to the outermost function call). This is purely a UI issue; they have enough information. I believe the nnd debugger has implemented selecting the call to step into.

          Addr2line could be amended. I am working on my own debugger and I keep re-implementing existing command line tools as part of my testing strategy. A finer-grained addr2line sounds like a good exercise.

          • cb321 35 days ago
            Our exact context here is not just column numbers, but also about backslash line continuations joined by the C preprocessor. That makes the #line directives emitted refer to columns within a (large) "virtual line assembled by the tooling", not an "actual source" coordinate.

            So, a column number would not be very meaningful to a programmer (relative to some ';' or '{}' expressional label leveraging internal language syntax/bracketing, which would definitely still be a bit to muck about with). As per my Lisp mention, it is really a >1-dimensional idea, and there are various ways to flatten/marshal that parse tree. "next/over" and "step/into" are enough, incrementally/dynamically/interactively, to build up that 2d navigation, but are also harder to work with "cumulatively" and with grammars more complex than Lisp's. Maybe most concretely, how "subexpression numbers" (in addr2x or other senses) are enumerated might still be a thing programmers need to "learn" from their "debugger UI".

            Another option might be to "reverse preprocess it" or maintain forward-meta-data to go from the "virtual line column number" back to the "true source (line,column)".

            I don't mean to discourage you, but just to explain more what problem I meant to refer to by "how would you label it?" and to highlight limitations of your new test. { But many are probably limited somehow! :-) }

      • kreco 35 days ago
        The backslashes themselves make the preprocessor way more complicated for no real advantage (apart from when it's unavoidable, like in macros).

        For every single symbol you need to actually check whether there is a splice (backslash + newline) in it. For a single-pass compiler, this contributes to a very slow lexing phase, as the splice can appear anywhere in C/C++ code.
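
        For example, splices are removed before comments and tokens even exist, so a stray trailing backslash silently extends a // comment onto the next line (a tiny illustration; gcc/clang's -Wcomment warns about this):

            // enable extra checks \
            int debug_checks = 1;   /* still part of the comment above, so never compiled */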

        • jcelerier 35 days ago
          I don't think this is optimizing for the right thing. I've sat in front of hundreds of gcc & clang compile-time traces, and lexing is a minuscule percentage of the time spent in the compiler.
          • kreco 34 days ago
            My point is that it would make things simpler for the lexer and for the human being.
  • JonChesterfield 35 days ago
    Not personally interested in this hack, but https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3037.pdf means struct foo {} defined multiple times with the same fields in the same TU now refers to the same type instead of being UB, and that is a good bugfix.
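
    For instance, a minimal sketch of what N3037 makes legal (previously a redefinition error or incompatible types):

        struct foo { int a, b; };
        struct foo { int a, b; };   /* C23: same tag + same members, so the same (compatible) type */

        /* struct foo { int a; };      still an error: members differ            */
        /* struct bar { int a, b; };   still a distinct type: different tag      */
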
  • Arnavion 35 days ago
    Neat similarity to Zig's approach to generic types. The generic type is defined as a type constructor, a function that returns a type. Every instantiation of that generic type is an invocation of that function. So the generic growable list type is `fn ArrayList(comptime T: type) type` and a function that takes two lists of i32 and returns a third is `fn foo(a: ArrayList(i32), b: ArrayList(i32)) ArrayList(i32)`
  • IAmLiterallyAB 35 days ago
    If you're reaching for that hack, just use C++? You don't have to go all in on C++-isms, you can always write C-style C++ and only use the features you need.
    • pton_xd 35 days ago
      Yeah, as someone who writes C in C++, every time I see posts bending over backwards trying to fit parameterized types into C I just cringe a little. I understand the appeal of sticking to pure C, but... why do that to yourself? Come on over, we've got lambdas, and operator overloading for those special circumstances... the water's fine!
      • spc476 35 days ago
        So maybe you can answer the following question I have: what is a "protected abstract virtual base pure virtual private destructor," and when was the last time you needed one? At least with C, I understand the feature set and how the features interact.
        • pjmlp 35 days ago
          Just because a feature is there doesn't mean you have to use it.

          Additionally, the example isn't even possible; at least make up ridiculous examples that compile.

        • jjmarr 35 days ago
          Don't use inheritance and you won't have to find out.
        • rramadass 35 days ago
          This is just silly. C++ gives you a smorgasbord of multi-paradigm features. Everything has its place and you can mix and match your needed featureset based on project needs, team skillset etc. You don't have to know or learn everything.
      • pjmlp 35 days ago
        Some people will do as much as they can to hurt themselves, only to avoid using C++.

        Note that the newer versions are basically a "C++ without Classes" kind of thing.

        • glouwbug 35 days ago
          I think the main appeal is subset lock-down and compile times. ~5000 lines in C gets me sub-second iteration times, while ~5000 lines in C++ hits the 10-second mark. Including both iostream and format in C++ gets any project up into the ~1.5 second mark, which kills my iteration interests.

          Second to that I'd say the appeal is just watching something you've known for a long time grow slowly and steadily.

          • kilpikaarna 35 days ago
            This, and the two pages of incomprehensible compiler spam you get when you make a typo in C++.
            • pjmlp 35 days ago
              Depends pretty much on where you make such a typo.

              If you mean templates, that's kind of a solved problem since C++17 with static_assert and enable_if, and more so in C++20 with concepts.

          • pjmlp 35 days ago
            Use binary libraries and modules, alongside incremental compilation and linking.
            • glouwbug 34 days ago
              I can't really afford the link time optimization losses
              • pjmlp 34 days ago
                It is called link time optimization for a reason.
                • glouwbug 33 days ago
                  Which kills my iteration interests ;)
        • uecker 35 days ago
          I see it the other way round. People hurt themselves by using C++. C++ fans will never understand it, but if you can solve your problem in a much simpler way, that is far better.
          • pjmlp 35 days ago
            We won't, because C++ is Typescript for C.

            It offers us safety features for arrays and strings that apparently WG14 will never add to C.

            It didn't do so in 40 years, and it still remains to be seen what will be done given the current trend of cybersecurity laws.

            Then there is the whole basic stuff like proper namespaces instead of the ridiculous prefix convention.

            This is from the point of view of the C++ ARM de facto standard back in the 1990s, not even considering anything else.

            I see more possibilities for people to hurt themselves using C than C++, since 1993 when I added C++ to my toolbox.

            • uecker 35 days ago
              The STL is also unsafe by default and not actually safer than what you can do in C.

              I debugged enough problematic C++ code to know that people can hurt themselves badly with it.

              • pjmlp 35 days ago
                Contrary to the C standard library, all C++ compilers have provided safe versions of their standard libraries, predating C++98, enabled in debug mode.

                Even if non-standard, all major C++ compiler vendors have provided similar features in their standard library, and this is now officially supported in C++26.

                I have debugged enough C memory corruption issues with strings and arrays that I would have thought by now WG14 would actually care to fix the root cause, 40 years in.

                • uecker 34 days ago
                  The C standard library does not have containers, so I do not see how this sentence makes any sense. The reality is that C++ STL is in practice not really safer than C arrays, and although you can activate bounds checking, there remain many gotchas. But I am happy to see that bounds checking is now becoming official with C++26. For C arrays you get bounds checking in practice with -fsanitize=bounds. For containers, you would need a library in C that does bounds checking. So in both languages it is possible to get bounds checking if you want to.
                  • pjmlp 34 days ago
                    A compiler extension only available in clang is not C, so nope, there is no solution available in ISO C, and apparently never will be one.

                    Also note that said extension only exists because Apple did the work WG14 did not bother to do for the last 40 years, and as a way to improve interop with safe Swift.

                    • uecker 34 days ago
                      The compiler extension is also available in GCC at least, and it was you who cited extensions.
                      • pjmlp 34 days ago
                        Doesn't change the fact that isn't on ISO C.

                        At least WG21 eventually did the correct thing and placed those extensions into the standard, even if due to governmental pressure.

                        Also while enabling bounds checking has been a common configuration option in all C++ compilers, clang and GCC aren't all C compilers.

                        This kind of discussion is also quite telling that nothing will change at WG14. Maybe eventually, who knows: C2y might finally get fat pointers if voted in, and then we will see during the following decades whatever fruits that will bear.

                        • uecker 34 days ago
                          That WG21 finally made its containers safer is great, but the C standard library does not have containers. You can still have your own bounds-checked containers just fine in C, as you could in the past. ISO C simply does not matter as much as you like to pretend - for trolling, I must assume.

                          When we will have a standard for bounds checking arrays and pointers remains to be seen, but this does not stop anyone from using the non-standard tools available today.

          • TuxSH 35 days ago
            IMHO C++ scales far better for large, self-contained, personal projects, though it requires slightly more initial investment.

            And if you're targeting PC, you might be better off using Python to begin with (if perf is not a concern)

            • uecker 35 days ago
              What specifically makes it scale better in your opinion?
              • TuxSH 34 days ago
                - "All" C libraries use some form of namespacing (the typical mylib_dosomething kind of name); actual namespaces mean you don't write these prefixes over and over again when in the same namespace

                - "Most" C projects do basic OOP, many C projects even do inheritance via composition and a fair few of these do virtual dispatch too

                - Templates (esp. since C++20), lambda functions, overloads and more recently coroutines (which are fancy FSM in their impl), etc. reduce boilerplate a lot

                - Containers (whether std:: or one's own) are far easier to work with in C++, a lot less boilerplate overall (GDB leveraged this during their migration iirc)

                - string_view makes non-destructive substring manipulation a lot easier; chrono literals (in application code) make working with durations a lot more readable too

                In the past decade or two, major projects like GCC and GDB have migrated from C to C++.

                Obviously, C retains advantages over C++, but they are fairly limited: faster build times, not having to worry about exposing "extern C" interface in libraries, not having to worry about controversial features like exceptions and (contextually) magic statics and so on...

                • rramadass 34 days ago
                  Nicely said!

                  One other key thing is the encapsulation provided via various C++ syntax, which is missing in C (where only file scope is possible).

    • waynecochran 35 days ago
      Not always a viable option -- especially for embedded and systems programming.
      • Too 35 days ago
        In embedded you are typically stuck on some ancient proprietary compiler and can't take advantage of the latest C versions. Even less so if you need safety standards like MISRA.

        That of course doesn't help you with the switch away from C. The question is why they keep updating the language. The only ones with valid reasons to not upgrade to some more sane language can't take advantage of the new features.

      • _proofs 35 days ago
        I work in an embedded space in the context of devices and safety. If it were as simple as "just use C++ for these projects", most of us would use a subset, and our newer projects try to make this a requirement (we roll our own ETL, for example).

        However, for some niche OS-specific things, and existing legacy products where oversight is involved, simply rolling out a C++ port of it on the next release is, well, not a reality, and often not worth the bureaucratic investment.

        While I have no commentary on the post because I'm not really a C programmer, I think a lot of comments forget some projects have requirements, and sometimes those requirements become obsolete, but you're stuck with what you've got until gen2, or lazy-loading standardization across teams.

    • sim7c00 35 days ago
      You are so right.. though historically I would have disagreed just by being triggered.

      Templates are the main thing C++ has over C. It's trivial to circumvent or escape the things you don't like about C++, like new and delete (a personal obstacle), and write good, nice, modern C++ with templates.

      C's _Generic can help but ultimately, in my opinion, the need for templating is a good reason to go from C to C++.

  • uecker 35 days ago
    Here is my experimental library for generic types with some godbolt links to try: https://github.com/uecker/noplate
  • wsve 35 days ago
    Sometimes I look at the way C macros are used to simulate generics and wonder to myself... Why don't y'all just put templates into the standard? If the way you're writing C code is by badly imitating C++, then just imitate C++! There's no shame in it!
    • jimbob45 35 days ago
      C++ doesn’t force you to pay for anything you don’t use so you can just use the C++ compiler at that point and change the few incompatibilities between C and C++.

      That said…I agree that there is a lot of syntactic sugar that could be added for free to C.

    • uecker 35 days ago
      Maybe you could try to formulate in what sense this approach is actually inferior? IMHO it is superior to C++ templates by being far simpler.
  • rwmj 35 days ago
    Slightly off-topic, why is he using ptrdiff_t (instead of size_t) for the cap & len types?
    • foobar12345quux 35 days ago
      Hi Rich, using ptrdiff_t is (alas) the right thing to do: pointer subtraction returns that type, and if the result doesn't fit, you get UB. And ptrdiff_t is a signed type.

      Assume you successfully allocate an array "arr" with "sz" elements, where "sz" is of type "size_t". Then "arr + sz" is a valid expression (meaning the same as "&arr[sz]"), because it's OK to compute a pointer one past the last element of an array (but not to dereference it). Next you might be tempted to write "arr + sz - arr" (meaning the same as "&arr[sz] - &arr[0]"), and expect it to produce "sz", because it is valid to compute the element offset difference between two "pointers into an array or one past it". However, that difference is always signed, and if "sz" does not fit into "ptrdiff_t", you get UB from the pointer subtraction.

      Given that the C standard (or even POSIX, AIUI) doesn't relate ptrdiff_t and size_t to each other, we need to restrict array element counts, before allocation, with two limits:

      - nelem <= (size_t)-1 / sizeof(element_type)

      - nelem <= PTRDIFF_MAX

      (I forget which standard header #defines PTRDIFF_MAX; surprisingly, it is not <limits.h>.)

      In general, neither condition implies the other. However, once you have enforced both, you can store the element count as either "size_t" or "ptrdiff_t".
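
      As a minimal sketch of such a check before allocation (alloc_array is a made-up helper, not from the article):

          #include <stdint.h>   /* PTRDIFF_MAX lives here */
          #include <stdlib.h>

          void *alloc_array(size_t nelem, size_t elem_size)
          {
              if (elem_size == 0)
                  return NULL;
              if (nelem > (size_t)-1 / elem_size)   /* byte count would overflow size_t     */
                  return NULL;
              if (nelem > PTRDIFF_MAX)              /* end - begin could overflow ptrdiff_t */
                  return NULL;
              return malloc(nelem * elem_size);
          }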

    • r1chardnl 35 days ago
      From one of his other blogposts. "Guidelines for computing sizes and subscripts"

        Never mix unsigned and signed operands. Prefer signed. If you need to convert an operand, see (2).
      
      https://nullprogram.com/blog/2024/05/24/

      https://www.youtube.com/watch?v=wvtFGa6XJDU

      • poly2it 35 days ago
        I still don't understand how these arguments make sense for new code. Naturally, sizes should be unsigned because they represent values which cannot be unsigned. If you do pointer/size arithmetic, the only solution to avoid overflows is to overflow-check and range-check before computation.

        You cannot even check the signedness of a signed size to detect an overflow, because signed overflow is undefined!

        The remaining argument, from what I can tell, is that comparisons between signed and unsigned sizes are bug-prone. There is, however, a dedicated warning to resolve this instantly.

        It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.

        Given this, I can't understand the justification. I'm currently using unsigned sizes. If you have anything contradicting, please comment :^)

        • sparkie 35 days ago
          C offers a different solution to the problem in Annex K of the standard. It provides a type `rsize_t`, which like `size_t` is unsigned, and has the same bit width, but where `RSIZE_MAX` is recommended to be `SIZE_MAX >> 1` or smaller. You perform bounds checking as `<= RSIZE_MAX` to ensure that a value used for indexing is not in the range that would be considered negative if converted to a signed integer. A negative value provided where `rsize_t` is expected would fail the check `<= RSIZE_MAX`.

          IMO, this is a better approach than using signed types for indexing, but AFAIK, it's not included in GCC/glibc or gnulib. It's an optional extension and you're supposed to define `__STDC_WANT_LIB_EXT1__` to use it.

          I don't know if any compiler actually supports it. It came from Microsoft and was submitted for standardization, but ISO made some changes from Microsoft's own implementation.

          https://www.open-std.org/JTC1/SC22/WG14/www/docs/n1173.pdf#p...

          https://www.open-std.org/JTC1/SC22/WG14/www/docs/n1225.pdf
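
          A rough sketch of the idea, with the Annex K names spelled out by hand since most libcs don't ship them (checked_at is a made-up helper):

              #include <stddef.h>
              #include <stdint.h>

              typedef size_t rsize_t;               /* Annex K spelling, done manually here */
              #define RSIZE_MAX (SIZE_MAX >> 1)

              int *checked_at(int *arr, rsize_t len, rsize_t i)
              {
                  /* A negative value converted to rsize_t lands above RSIZE_MAX,
                     so it fails this check the same way a huge index does. */
                  if (len > RSIZE_MAX || i >= len)
                      return NULL;
                  return &arr[i];
              }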

          • poly2it 35 days ago
            This is an interesting middle ground. As ncruces pointed out in a sibling comment, the sign bit in a pointer cannot be set without contradicting the ptrdiff_t type. That makes this seem like a reasonable approach to storing sizes.
        • int_19h 35 days ago
          > It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.

          You can, since the number of bits is the same. The mapping of pointer bits to signed integer bits will mean that you can't then do arithmetic on the resulting integers and get meaningful results, but the behavior of such shenanigans is already unspecified with no guarantees other than you can get an integer out of a pointer and then convert it back later.

          But also, semantically, what does it even mean to convert a single pointer to a size? A size of an object is naturally defined as the count of chars between two pointers, one pointing at the beginning of the object, the other at its end. Which is to say, a size is a subset of pointer difference that just happens to always be non-negative. So long as the implementation guarantees that for any object that non-negative difference will always fit in a signed int of the appropriate size, it seems reasonable to reflect this in the types.

          • ncruces 35 days ago
            The correct integer type to use to store pointers (the only one specified to actually work) is uintptr_t, not size_t.
        • foldr 35 days ago
          Stroustrup believes that signed should be preferred to unsigned even for values that can’t be less than zero: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p14...
          • poly2it 35 days ago
            I've of course read his argument before, and I think it might be more applicable to C++. I exclusively program in C, and in that regard, the relevant aspects as far as I can tell wouldn't be clearly in favour of a signed type. I also think his discussion on iterator signedness mixes in issues with improper bounds checking and attributes them to the size type's signedness. What remains, I cannot see justifying the use of a signed type other than "just because". I'm not sure it's applicable to C.
            • uecker 35 days ago
              I also prefer signed types in C for sizes and indices. You can screen for overflow bugs easily using UBSan (or use it to prevent exploitation).
        • sim7c00 35 days ago
          I don't know either.

          int somearray[10];

          new_ptr = somearray + signed_value;

          or

          element = somearray[signedvalue];

          this seems almost criminal to how my brain does logic/C code.

          The only thing i could think of is this:

          somearray+=11; somearray[-1] // index set to somearray[10] ??

          If I saw my CPU execute that, I'd want it to please stop. I'd want my compiler to shout at me like a little child, and be mean until I do better.

          -Wall -Wextra -Wpedantic <-- that should flag, I think, any of these weird practices.

          As you stated though, I'd be keen to learn why I am wrong!

          • windward 35 days ago
            In the implementation of something like a deque or merge sort, you could have a variable that represents offsets from pointers but which could sensibly be negative. C developers culturally aren't as particular about theoretical correctness of types as developers in some other languages - there's a lot of implicit casting being used - so you'll typically see an `int` used for this. If you do wish to bring some rigidity to your type system, you may argue that this value is distinct from a general integer which could be used for any arithmetic and definitely not just a pointer. So it should be a signed pointer difference.

            Arrays aren't the best example, since they are inherently about linear, scalar offsets, but you might see a negative offset from the start of a (decayed) array in the implementation of an allocator with clobber canaries before and after the data.
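
            For instance, a minimal sketch of the canary idea (names made up), where the header placed before the user pointer is reached by stepping backwards:

                #include <stdlib.h>

                typedef struct { size_t size; unsigned canary; } hdr;

                void *c_alloc(size_t n)
                {
                    hdr *h = malloc(sizeof *h + n);   /* header sits just before the user data */
                    if (!h) return NULL;
                    h->size = n;
                    h->canary = 0xC0FFEEu;
                    return h + 1;                     /* caller only ever sees this pointer */
                }

                int c_check(void *p)
                {
                    hdr *h = (hdr *)p - 1;            /* negative pointer arithmetic to reach it */
                    return h->canary == 0xC0FFEEu;    /* clobbered canary = buffer underflow */
                }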

            • sim7c00 32 days ago
              Thanks a lot for this. Definitely gonna look at these sorts to see what's going on :D Interesting!
          • mandarax8 35 days ago
            Any kind of relative/offset pointers require negative pointer arithmetic. https://www.gingerbill.org/article/2020/05/17/relative-point...
            • poly2it 35 days ago
              I don't think you can make such a broad statement and be correct in all cases. Negative pointer arithmetic is not by itself a reason to use signed types, except if you are:

              1. Certain your added value is negative.

              2. Checking for underflows after computation, which you shouldn't.

              The article was interesting.

        • ncruces 35 days ago
          > It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.

          Why?

          By the definition of ptrdiff_t, ISTM the size of any object allocated by malloc cannot be out of bounds of ptrdiff_t, so I'm not sure how you can have a useful size_t that uses the sign bit?

        • uecker 35 days ago
          "Naturally, sizes should be unsigned because they represent values which cannot be unsigned."

          Unsigned types in C have modular arithmetic; I think they should be used exclusively when this is needed, or maybe if you absolutely need the full range.

          • TuxSH 35 days ago
            So do signed types (guaranteed to be 2's complement since C23, though all sane targets have been using that representation for a long time).

            Two's complement encodes -x as ~x + 1 = 2^n - x = -x (mod 2^n) and can therefore be mixed with unsigned for (+, -, *, &, |, ^, ~, <<).
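
            A quick demo of that mixing (assuming the typical 32-bit int; the printed product depends on the width):

                #include <stdio.h>

                int main(void)
                {
                    unsigned u = 10;
                    int      s = -3;
                    /* s converts to UINT_MAX + 1 - 3, and the results agree mod 2^n: */
                    printf("%u\n", u + s);   /* 7 */
                    printf("%u\n", u * s);   /* 4294967266 == -30 mod 2^32 with 32-bit int */
                    return 0;
                }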

            > I think they should be used exclusively when this is needed

            The opposite: signed type usage should be kept to a minimum because signed type (and pointer) overflow is UB and will get optimized as such.

            • uecker 35 days ago
              For signed types you can also get a run-time trap on overflow, making them safe to use. Bugs caused by unsigned wraparound are extremely hard to find.
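
              For example (a sketch; -fsanitize=signed-integer-overflow is the GCC/Clang spelling, and the trap-on-error variant turns the report into a hard trap):

                  /* cc -fsanitize=signed-integer-overflow overflow.c */
                  #include <limits.h>
                  #include <stdio.h>

                  int main(int argc, char **argv)
                  {
                      (void)argv;
                      int len = INT_MAX;
                      len += argc;   /* argc >= 1, so this overflows: UBSan reports/traps here */
                      printf("%d\n", len);
                      return 0;
                  }
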
          • enqk 35 days ago
            Yeah, unsigned is really about opting in to modular arithmetic, or for mapping hardware registers.

            C is weakly typed; the basic types are really not there to maintain invariants or detect their violation.

        • windward 35 days ago
          Pointer arithmetic that could overflow would probably involve a heap and therefore be less likely to require a relative, negative offset. Just use the addresses and errors you get from allocation.
          • poly2it 35 days ago
            Yes, but there are definitely cases where this doesn't apply, for example when deriving an offset from a user pointer. As such this is not a universal solution.
        • user____name 34 days ago
          FWIW If you don't care about portability and target x64 you only get 48 bits worth of virtual address in the MMU.
    • rurban 35 days ago
      Skeeto and Stroustrup are a bit confused about valid index types. They prefer signed, which will lead to overflows on negative values, but have the advantage of using only half of the valid ranges, so there's more heap for the rest. Very confused
  • unwind 35 days ago
    I think this is an interesting change. Even though I (as someone who has loved C for 30+ years and uses it daily in a professional capacity) don't immediately see a lot of use-cases, I'm sure they can be found, as the author demonstrates. Cool, and a good post!
    • glouwbug 35 days ago
      Combined with C23's auto (see vec_for) you can technically backport the entirety of C++'s STL (of course with skeeto's limitation in his last paragraph in mind). gcc -std=c23. It is a _very_ useful feature for even the mundane, like resizable arrays:

        #include <stdlib.h>
        #include <stdio.h>
        
        #define vec(T) struct { T* val; int size; int cap; }
        
        #define vec_push(self, x) {                                                 \
            if((self).size == (self).cap) {                                         \
                (self).cap = (self).cap == 0 ? 1 : 2 * (self).cap;                  \
                (self).val = realloc((self).val, sizeof(*(self).val) * (self).cap); \
            }                                                                       \
            (self).val[(self).size++] = x;                                          \
        }
        
        #define vec_for(self, at, ...)             \
            for(int i = 0; i < (self).size; i++) { \
                auto at = &(self).val[i];          \
                __VA_ARGS__                        \
            }
        
        typedef vec(char) string;
        
        void string_push(string* self, char* chars)
        {
            if(self->size > 0)
            {
                self->size -= 1;
            }
            while(*chars)
            {
                vec_push(*self, *chars++);
            }
            vec_push(*self, '\0');
        }
        
        int main()
        {
            vec(int) a = {};
            vec_push(a, 1);
            vec_push(a, 2);
            vec_push(a, 3);
            vec_for(a, at, {
                printf("%d\n", *at);
            });
            vec(double) b = {};
            vec_push(b, 1.0);
            vec_push(b, 2.0);
            vec_push(b, 3.0);
            vec_for(b, at, {
                printf("%f\n", *at);
            });
            string c = {};
            string_push(&c, "this is a test");
            string_push(&c, " ");
            string_push(&c, "for c23");
            printf("%s\n", c.val);
        }
      • int_19h 35 days ago
        What I don't quite get is why they didn't go all the way and basically enable full-fledged structural typing for anonymous structs.
        • uecker 35 days ago
          That was my plan, but the committee had concerns about type safety.
          • int_19h 35 days ago
            This would probably need some special rules around stuff like:

               typedef struct { ... } foo_t;
               typedef struct { ... } bar_t;
               foo_t foo = (bar_t){ ... };
            
            i.e. these are meant to be named types and thus should remain nominal even though it's technically a typedef. And ditto for similarly defined pointer types etc. But this is a pattern regular enough that it can just be special-cased while still allowing proper structural typing for cases where that's obviously what is intended (i.e. basically everywhere else).
            • uecker 35 days ago
              I agree, that is also the solution I suggested. I will try to bring this back for C2y.
  • o11c 35 days ago
    Are we getting a non-broken `_Generic` yet? Because that's the thing that made me give up in disgust on the last project I tried to write in C. Manually having to do `extern template` a few times is nothing in comparison.
    • uecker 35 days ago
      What is a non-broken `_Generic`?
      • o11c 35 days ago
        A `_Generic` that only requires its expressions to be valid for the type associated with them, rather than spewing errors everywhere.
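
        The classic pain point, plus the usual workaround of selecting a function designator so only the chosen arm ever sees the argument (length() is a made-up macro, not a real API):

            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>

            /* Rejected today: with an int argument, the strlen(x) arm is still
               type-checked, so it is a constraint violation even though unselected.
            #define length(x) _Generic((x), int: abs(x), char *: strlen(x))
            */

            /* Workaround: pick the function first, then call it. */
            #define length(x) _Generic((x), int: abs, char *: strlen)(x)

            int main(void)
            {
                printf("%d\n",  length(-5));        /* 5 */
                printf("%zu\n", length("hello"));   /* 5 */
                return 0;
            }
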
        • uecker 35 days ago
          This is intentional. But my idea for _Generic is to have

          _Generic(x, int i: i + 1, float f: f + 1.);

          where the i and f then have the correct type, so you do not need to refer to 'x' in those expressions.

  • Surac 35 days ago
    I fear this will make sloppy code compile OK more often.
    • poly2it 35 days ago
      Dear God I hope nobody is committing unreviewed LLM output in C codebases.
      • pests 35 days ago
        No worries, the LLM commits it for you.
      • pjmlp 35 days ago
        Eventually they will generate executables directly.
    • ioasuncvinvaer 35 days ago
      Can you give an example?
  • tialaramex 35 days ago
    It seems as though this makes it impossible to do the new-type paradigm in C23? If Goose and Beaver differ only in their name, does C now think they're the same type, so too bad, we can tell a Beaver to fly even though we deliberately required a Goose?
    • yorwba 35 days ago
      "Tag compatibility" means that the name has to be the same. The issue the proposal is trying to address is that "struct Goose { float weight; }" and "struct Goose { float weight; }" are different types if declared in different locations of the same translation unit, but the same if declared in different translation units. With tag compatibility, they would always be treated as being the same.

      "struct Goose { float weight; }" and "struct Beaver { float weight; }" would remain incompatible, as would "struct { float weight; }" and "struct { float weight; }" (since they're declared without tags.)

      • tialaramex 35 days ago
        Ah, thanks, that makes sense.