Swift is a more convenient Rust

(blog.namangoel.com)

242 points | by mpweiher 255 days ago

30 comments

  • noelwelsh 255 days ago
    Hmmm...I disagree with a number of statements in the post but I think the following two hypotheses will make for more interesting discussion than some nitpicks:

    1. A large part of why many people love Rust is that it's the first time they've used an ML family language. One of the innovations of Rust was to create a community that felt like home to Unix hackers who weren't programming language nerds.

    2. Rust is the first language to bring non-GC automatic memory management to the mainstream. It won't be the last and it might well be the worst. Other languages in this space include Swift (as in OP's post), OCaml[1], and Scala[2]. These alternatives take the approach that tracking ownership is not the default, which is fine for the majority of programs and far more ergonomic.

    There's also a corollary

    3. The age of Smalltalk is over. It's now the age of ML. (Amusingly, going backwards in time, from 1983 to 1975.) The languages that dominated the 2000s (Ruby, Python, JavaScript, PHP) were all more or less derived from Smalltalk (everything is an object! dynamic types! runtime metaprogramming!). The new languages (Rust, Scala, Swift, Kotlin, etc.) are ML family languages. Similarly, back in the day you could learn, say, Ruby, and still be reasonably proficient in Python or JS. Now you can learn, say, Scala, and pick up Rust or Swift reasonably easily.

    [1]: https://blog.janestreet.com/oxidizing-ocaml-locality/

    [2]: https://dl.acm.org/doi/full/10.1145/3618003

    • Loic 255 days ago
      In this context, ML is Meta Language: https://en.wikipedia.org/wiki/ML_(programming_language)
      • kingofthehill98 255 days ago
        Thank you!

        The whole time I was like "what in the absolute hell is an ML language".

        • high_priest 255 days ago
          I've heard so much about Machine Learning in recent years that I couldn't fathom ML being used in any other context and still being understood.
      • mlindner 254 days ago
        ML is such an overloaded acronym in software that people really need to stop using it or ALWAYS clarify it.

        ML = Machine Learning

        ML = Machine Language

        ML = Markup Language

        ML = Meta Language

        And ML in the Meta Language sense is the biggest culprit: writers assume the reader knows it, even though it's probably the least widely known expansion.

      • noobr 255 days ago
        thanks, i'm bad at abbreviations and don't like it when people just throw them around like this without first using the whole term at least once
        • plorkyeran 255 days ago
          Expanding this abbreviation conveys zero information. It’s a programming language named ML, and if you haven’t heard of it then giving the etymology of the name isn’t going to help.
          • duckmysick 255 days ago
            It clarifies it's not something else I've heard of (markup language, machine learning). The ML abbreviation shares the space with other terms so it can be confusing. Compare it to HTML which is 1) more popular and 2) unique in the context of programming.
          • juneyi 250 days ago
            > Expanding this abbreviation conveys zero information. It’s a programming language named ML, and if you haven’t heard of it then giving the etymology of the name isn’t going to help.

            Honestly kind of shocked to see this type of mentality...wow.

          • jamwil 254 days ago
            Huh? If I search ‘Meta Language’ I get the exact information I need. If I search ‘ML’ I get less than nothing.
        • thiht 255 days ago
          This is indeed an abbreviation, but it doesn't convey meaning. At this point ML is just the name of the thing; it could expand to nothing and still convey the same meaning.

          If you don’t know ML, knowing what it stands for won’t help you. It’s like JS, either you know what JS is or you don’t. It’s not about what it means, it’s about what it is.

          • BlueTemplar 255 days ago
            You are forgetting that looking up 3-character abbreviations is already very hard, and it becomes almost impossible for 2-character ones.
            • thiht 254 days ago
              It’s usually not when there’s some context.

              > A large part of why many people love Rust is that it's the first time they've used an ML family language.

              You wouldn’t search "ML" but "ML family language".

              Or copy paste the whole thing and ask ChatGPT what ML means in this context.

    • arghwhat 255 days ago
      > Rust is the first language to bring non-GC automatic memory management to the mainstream

      To nitpick, the first language to bring non-GC automatic memory management to the mainstream would be either Forth or C through stack memory.

      For heaps, we have reference counting, often handled automatically through the stack (e.g., C++ smart pointers).

      The thing Rust brought to the mainstream is not automatic memory management - those are a dime a dozen, with and without GC. It's an ownership model that is strict, within the type system, about whether you have read or write access. At the same time, this is also where most developer friction stems from: it's quite unique, and you're often faced with a trade-off between specifying exactly what you want at compile time at a higher design cost, or bailing out to runtime checks with e.g. RefCell while getting fewer correctness guarantees from the compiler.
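
      As a minimal sketch of that trade-off (illustrative code): the borrow checker proves exclusive access at compile time, while RefCell defers the very same check to runtime and panics instead of failing to compile.

        use std::cell::RefCell;

        fn main() {
            // Compile-time route: at most one &mut at a time, checked statically.
            let mut x = String::from("hi");
            let r = &mut x;
            r.push_str(" there");

            // Runtime route: RefCell performs the same exclusivity check at runtime.
            let y = RefCell::new(String::from("hi"));
            let mut b = y.borrow_mut();
            b.push_str(" there");
            drop(b);

            // Two live borrow_mut()s would compile fine but panic at runtime:
            // let _a = y.borrow_mut();
            // let _b = y.borrow_mut(); // panic: already mutably borrowed
        }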

      • pjmlp 255 days ago
        Not at all, considering the decade of high-level systems programming languages that predate the very conception of C.

        All with stack allocation.

        • arghwhat 255 days ago
          The keyword is mainstream, so it's not about when the idea was invented - I think Forth popularized it?

          I can’t really comment on what counted as mainstream before C though. Not old enough for that.

          • pjmlp 255 days ago
            Mainstream is whatever people could use between 1958 and 1969, as high level programming language, in whatever computers, universities and companies could afford.

            Google, Bing, DuckDuckGo, ChatGPT, ... will gladly provide a list.

            • arghwhat 255 days ago
              What people could use is not at all what mainstream is. You can use D, OCaml, F# and brainfuck, but they are not mainstream.

              To define what was mainstream we'd need to talk to people coding at the time and ask them about popularity, adoption, tooling availability, etc. and what languages people were likely to use rather than what was available.

              And no, I have no need to research that, as I have no use for the information - the point is that automatic memory management in mainstream languages predates Rust by decades and is present in most languages, not "this specific language should get credit".

              • pjmlp 254 days ago
                Then you should not have mentioned Forth or C at all, only to show not having a clue about programming language history.

                Besides, any Assembly programmer is well aware of what stack memory is all about, bringing us even further back than 1958.

                • arghwhat 252 days ago
                  > Then you should not have mentioned Forth or C at all, only to show not having a clue about programming language history.

                  The whole point was that these well-known mainstream languages provided the feature claimed to first have arrived in Rust, to make it clear that it was not just theoretical or some obscure language.

                  It's not my fault that you misunderstood the topic or the idea of what makes something mainstream, and instead read this as an elaborate history class stating the absolute first discovery of the idea.

                  > Besides, any Assembly programmer is well aware of what stack memory is all about, bringing us even further back than 1958.

                  The instructions you refer to were designed to support stack-based languages, not the other way around.

      • shakow 255 days ago
        > C through stack memory.

        Stack memory is decades older than C.

      • Ygg2 255 days ago
        > non-GC *automatic memory* management to the mainstream would be either Forth or C

        Emphasis mine.

        It's not C. You do manual memory management there. You have to call malloc/free.

        Didn't use Forth, so I can't say anything about it.

        • xolox 255 days ago
          malloc/free are for heap-based allocations. The grandparent explicitly mentioned he was referring to stack-based allocations, which are kind of automatic (implicit).
          • Ygg2 255 days ago
            Sure, however you still have to do them manually. That's what the manual in manual memory management stands for.

            Stack-based allocations are essentially registers, right?

            • arghwhat 255 days ago
              The stack is fully automatic arbitrary memory and has nothing to do with registers. You can allocate as much as you want (including e.g. bytearrays), until you run into the allocated stack limit. That limit can be arbitrary, and some languages even dynamically scale the stack.
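
              For instance, a Rust rendition of the same point (a minimal sketch): a sizable buffer can live entirely on the stack, with no explicit allocation or deallocation anywhere.

                fn fill() -> u8 {
                    // 4 KiB allocated on the stack just by declaring it; no malloc, no free.
                    let mut buf = [0u8; 4096];
                    buf[0] = 42;
                    buf[0]
                } // the whole buffer is reclaimed automatically here

                fn main() {
                    println!("{}", fill());
                }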

              That you also have access to manual memory on the heap doesn’t matter. You can also do manual memory management in Rust if you want, as one has to do at times.

              • Ygg2 255 days ago
                Ok. Then manual memory applies to just heap memory management.

                It's not rocket science. If you are calling malloc/free to maintain your memory you're doing manual memory management.

                > You can also do manual memory management in Rust

                You can do Garbage Collection in C, it doesn't make it Garbage Collected language.

                You're confusing default memory management with what's possible.

                By your logic, Arena allocators can be used in many different languages, including GC ones like Java. So that means GC languages like Java are manually memory managed. Which defeats the definition of it.

                • arghwhat 255 days ago
                  > It's not rocket science. If you are calling malloc/free to maintain your memory you're doing manual memory management.

                  Sure, and when I do the same in Rust I'm also doing manual memory management. So by your definition, both Rust and C are manual memory languages.

                  > You're confusing default memory management with what's possible.

                  Ah, so we care about the default, which I presume is what the language semantics themselves provide, rather than focusing on what the standard libraries can provide you?

                  In that case C is an automatic memory language, because the language semantics only provide you stack memory. malloc/free are just random functions in the standard library after all, just like Rust's `Box::new` and `std::alloc`.

                  See, the point is that you're opting into manual memory management in C from an automatic model. We are so used to the stack that we forget that it's the OG fully automatic zero-cost memory management system, and in case of C++, can be used to implement fully automatic heap memory as well - in which case you never need to call malloc/free/new/delete.

                  (And no, it doesn't count that your smart pointer calls new/delete, because then you also need to count Box::new calling std::alloc)

                  > By your logic, Arena allocators can be used in many different languages, including GC ones like Java.

                  Uh, even in bog standard Java without any shenanigans you are using an "arena allocator". A GC doesn't change how allocators work, it's just responsible for calling free.

                  (Caveat about moving vs. non-moving garbage collectors and ones that have multiple arenas, but that's not relevant here and an entire topic of its own.)

                  • Ygg2 254 days ago
                    > Sure, and when I do the same in Rust I'm also doing manual memory management. So by your definition, both Rust and C are manual memory languages.

                    You're being very thick on purpose. In Rust you need to reach for foreign functions to implement malloc/free.

                    > So by your definition, both Rust and C are manual memory languages.

                    No. By my definition what is the default semantics determines if it's manual or automatic. It's CS 101.

                    But if you want to play these semantics games, you just admitted C is a GC language and thus unsuitable for kernel development.

                    > Uh, even in bog standard Java without any shenanigans you are using an "arena allocator".

                    No, no you aren't. At least not explicitly. I assume you mean GC; whether it has arenas or not is an implementation detail.

                    It also hinges on Wikipedia being correct that Arena IS manual memory management, which is unsubstantiated at best.

                    • arghwhat 252 days ago
                      > You're being very thick on purpose. In Rust you need to reach for foreign functions to implement malloc/free.

                      I think you're mistakenly thinking of calling out to the (rust-lang maintained) libc crate's malloc/free functions. That's not the case - the standard library provides `std::alloc`, which is the allocator also backing Box and Vec.
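
                      To make that concrete, a minimal sketch of fully manual heap management in plain Rust, using only the standard library:

                        use std::alloc::{alloc, dealloc, Layout};

                        fn main() {
                            // std::alloc is the same allocator that backs Box and Vec.
                            let layout = Layout::new::<u64>();
                            unsafe {
                                let p = alloc(layout) as *mut u64; // the "malloc"
                                assert!(!p.is_null());
                                p.write(42);
                                println!("{}", p.read());
                                dealloc(p as *mut u8, layout); // the "free"; forgetting it leaks
                            }
                        }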

                      > No. By my definition what is the default semantics determines if it's manual or automatic. It's CS 101.

                      Your definition of default semantics - "in C it's not what the language does but what happens when you call random library functions, while in Rust it's the opposite" - makes no sense at all.

                      C isn't considered a manual language because of default semantics, but because people have chosen to mainly rely on that paradigm.

                      > No, no you aren't. At least not explicitly. I assume you mean GC; whether it has arenas or not is an implementation detail.

                      Yes, whether any allocator has an arena is an implementation detail.

                      Whether you call `malloc` or `new`, or you have Go or Java do a heap allocation for you (which, to nitpick, is not actually the job of the garbage collector), the use of an arena is an implementation detail. In case of GC, the availability of optimizations also depend heavily on whether it's a moving GC.

                      • Ygg2 252 days ago
                        > That's not the case - the standard library provides `std::alloc`, which is the allocator also backing Box and Vec.

                        Again, emphasis on the word default. How are you using Box and Vec? Are you by default encouraged to drop them manually via an Allocator?

                        No, you aren't. You're heavily discouraged via the need for `unsafe`, and you're discouraged because the compiler does automatic `alloc`/`drop` for you.

                        You have to go out of your way to do manual memory management.

                        > Your definition of default semantics

                        I showed you what happens when you apply the non-default semantics of a language to C. You end up with nonsensical statements like C is a GC language. Just because a style is possible in language X doesn't mean language X is of that style.

                        > C isn't considered a manual language because of default semantics, but because people have chosen to mainly rely on that paradigm.

                        And why is that? Because the default affordances of the language make that way of usage the most natural for most users. You could put everything on the stack, but that would be extremely torturous for most users, so they use malloc/free.

                        You give people a knife, of course they will grab it by the handle.

            • eska 255 days ago

                {
                  int8_t x = 1;
                }
              
              allocates a byte on the stack, binds it to a variable named x, sets it to 1, then deallocates it from the stack. There is no explicit allocation or deallocation.

              As an optimization it can be put into a register instead of the stack, again without explicit allocation and deallocation (this is done by the compiler playing musical chairs with registers).

              I would not consider this manual. Manual would instead be something like in embedded programming

                int8_t* y = (int8_t*)0xB4DC0FF3;
                *y = 1;
              
              because one needs to keep track of used memory locations (do allocation management)
    • bad_user 255 days ago
      > first language to bring non-GC automatic memory management to the mainstream ... Other languages in this space include Swift

      Ugh, ARC, aka (automatic) reference counting, is an implementation of GC as well. It may have different characteristics, like worse throughput and corner cases but better predictability/latency, but it's GC nonetheless. That's not manual memory management. Swift is not a non-GC language.

      And that's the big issue that people are missing — Rust tackles some hard problems regarding memory management, and a friendlier alternative exists: garbage collection.

      • rcruzeiro 255 days ago
        Both ARC and garbage collection are memory management techniques. ARC does not equal garbage collection though. Garbage collection runs at intervals and is triggered by certain signals (memory pressure, etc), pauses all threads and scans for objects that can be deallocated. It is a very different concept than ARC.
        • twoodfin 255 days ago
          https://courses.cs.washington.edu/courses/cse590p/05au/p50-b...

          This paper makes a strong argument that “pure” tracing garbage collection and “rote” reference counting are effectively duals at the opposite ends of a spectrum of implementation choices.

          For example, GC’s often optimize differently for young vs. old objects. How do they distinguish these? By “counting” (one bit) their references across one or more traces in a series of sweeps.

          • 01HNNWZ0MV43FF 255 days ago
            And a hot wheels car is a car, but I would be perplexed to see one offered to me at a car dealership
        • Tuna-Fish 255 days ago
          ARC is a method of implementing garbage collection, and is discussed as such in academic literature.

          What you are equating with "garbage collection" is Mark-Sweep, which is another way to implement GC, with a very different set of tradeoffs. (Broadly, better throughput at the expense of higher latency.)

          • 7jjjjjjj 255 days ago
            In common usage everyone understands that GC = tracing GC.
        • 112233 255 days ago
          Deleting a large hierarchy of objects when the last reference gets decremented also is triggered by a certain signal (reference decrement) and pauses all threads while it collects garbage. There are even tunable GC that become ARC if you set particular parameter to "N=1"
          • zozbot234 255 days ago
            I'm not sure that collecting a large hierarchy of objects requires pausing all threads. What can definitely require pausing threads (other than in specialized concurrent collectors) is the tracing step, which relies on some invariants that in simple GC implementations (not the more specialized concurrent ones mentioned earlier) are only ensured at very specific points in the program (known as GC "safepoints").
            • 112233 255 days ago
              My bad, thanks for correcting! Of course a single thread freeing a lot of heap allocations will not pause other threads by itself. What I should have written was: calling the destructor of an object that in turn calls the destructors of other objects (because a large tree of reference-counted objects loses the last reference to its root) stops that thread until all deallocation is done.
          • pessimizer 255 days ago
            > There are even tunable GC that become ARC if you set particular parameter to "N=1"

            You can call a pointer to a value a "one-element static array."

      • pessimizer 255 days ago
        It's not "automatic" reference counting, it's "atomic" reference counting. And it's not automatic, it's entirely manual and implemented with traits as a fat pointer. Reference counting types in Rust are just defaults that give certain guarantees.

        This feels like people arguing that atheism is a kind of religion. Borrow checking and a library of safe default primitives make it so you don't really have to think about memory management; except to the extent that you generally stick to those primitives, and that you have to install the borrow checker into your own head to write Rust comfortably (although the compiler will help.)

        Rust doesn't have any garbage collection, but it can just assume how you would want to deal with memory by forcing you to be very specific about when values should be created or destroyed.

    • est31 255 days ago
      > Rust is the first language to bring non-GC automatic memory management to the mainstream. It won't be the last and it might well be the worst. Other languages in this space include Swift

      Rust's memory management is not automatic, you have to explicitly manage memory. It's just made extremely easy for you by the language thanks to stuff like RAII, and there are checks that prevent memory safety violations like double free. But those checks won't prevent leaks, although the many lints do make it harder to forget about a value.
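
      A small sketch of both halves of that: drops are inserted automatically, yet leaking remains possible in safe code.

        fn main() {
            {
                let _v = vec![1, 2, 3]; // heap allocation
            } // RAII: the compiler inserts the free here; a double free can't compile

            // But leaks are not prevented - forgetting a value is safe Rust:
            let leaked = vec![4, 5, 6];
            std::mem::forget(leaked); // destructor never runs, memory never freed
        }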

      Also, Swift's refcounting can be seen as a form of gc.

      • lolinder 255 days ago
        > Rust's memory management is not automatic, you have to explicitly manage memory. It's just made extremely easy for you by the language thanks to stuff like RAII, and there are checks that prevent memory safety violations like double free. But those checks won't prevent leaks, although the many lints do make it harder to forget about a value.

        By this logic no language on earth has automatic memory management. I've spent time troubleshooting a memory leak in JavaScript in the past month, caused by someone keeping a pointer around longer than necessary.

        Rust's memory management is automatic in that you can write entire Rust programs without once calling `free` manually. I'm not sure what definition of automatic you're working off of, but if it excludes JavaScript it doesn't seem especially useful.

        • Measter 255 days ago
          I've found it's kinda helpful to think of Rust/modern C++ style management as semi-automatic memory management. You control when things are allocated and freed, but you don't bother with the minutia of it unless you're writing a low-level container type.

          This is in contrast to something like C/Zig, where things are fully manual, or something like Python or JavaScript where things are fully handled for you.

        • pessimizer 255 days ago
          > you can write entire Rust programs without once calling `free` manually.

          If you mean "drop," it's just syntactic sugar for calling a trait that manually deallocates the memory and that you're free to reimplement.

          It feels like people are equating "manual memory management" with "onerous memory management." People are usually going to want to do the boring thing with memory, and everything in the standard library by default does the boring thing with memory. If you write boring structs and enums, you'll derive boring memory management. But it's not part of the language, it's part of the library.

          • lolinder 255 days ago
            > If you mean "drop," it's just syntactic sugar for calling a trait that manually deallocates the memory and that you're free to reimplement.

            I meant `free` because I'm contrasting with C.

            > It feels like people are equating "manual memory management" with "onerous memory management."

            Manual memory management is onerous, but I'm very specifically talking about manual. If I can write a whole program without thinking about memory, the language does not require manual memory management. It may support it, but it doesn't require it the way that C or Zig do. You do not "have to explicitly manage memory" as OP claimed.

          • tialaramex 255 days ago
            You can't call Drop::drop for type T at all. Try it if you don't believe me.

            It would be unacceptable to allow this because Drop::drop says it only takes a mutable reference &mut T (and indeed it does) but now the thing is destroyed, so, that wasn't just a mutable reference at all!

            You can call core::mem::drop<T> but well, look at it, here's the code:

            pub fn drop<T>(_x: T) {}

            Like, duh, we give it a T and then it doesn't give anything, the T is gone. That's not magic library code by the way, if we make our own:

            pub fn vanish<T>(_x: T) {}

            Now we can call our vanish function and the same happens.

            So yeah, automatic memory management.

            • consteval 255 days ago
              To be fair I don't think this is enough to say automatic memory management, otherwise C++ would be automatic memory management too (and maybe it is, just poorly implemented?)
              • tialaramex 255 days ago
                Yes, C++ has automatic memory management.

                As with the rest of the language, the automatic memory management requires that your program has no mistakes - which requires inhuman amounts of care during implementation and testing. So, that's obviously a spectacularly bad idea, but it's still automatic memory management.

        • est31 255 days ago
          > Rust's memory management is automatic in that you can write entire Rust programs without once calling `free` manually.

          Personally I'd put the bar at not requiring people to distinguish different pointer types (owned vs shared). Languages like Java, JS, Python, Swift, Go, etc. all have this "everything is a smart pointer" paradigm (or at least as the strong and widely used default). I'd say Rust is manually managed because you need to think about which pointer type to use, and using gc'd pointer types like Arc has a syntax overhead (like, say, when a pointer is cloned).
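
          For example, a minimal sketch of that distinction: an owned pointer moves, while every new handle to a shared one must be spelled out.

            use std::rc::Rc;

            fn main() {
                // Owned pointer: exactly one owner; assignment moves.
                let owned = Box::new(vec![1, 2, 3]);
                let moved = owned; // `owned` is unusable from here on

                // Shared pointer: sharing works, but each new handle is explicit.
                let shared = Rc::new(vec![1, 2, 3]);
                let another = Rc::clone(&shared); // the syntax overhead a GC'd language hides
                println!("{} {} {}", moved.len(), shared.len(), another.len());
            }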

          • consteval 255 days ago
            Most GC langs have these features too, they're just not in your face. Technically C# has true value types like C++.

            Rust makes the distinction between stack and heap references, like C++. Other, more high-level languages don't - there's only one kind of reference, and you can't take a reference to a stack object. Maybe you implement that by making all objects heap-allocated (Java), or you just say they have to be copied every time (C# struct). That's really where the difference is.

            There's a lot of juicy, juicy performance there. The problem is that taking references to stack variables is hard to make safe. Tracking heap objects with a GC or a ref counter is really trivial in comparison IMO, at least when you try to combine the systems.

            • neonsunset 255 days ago
              The assessment of C# does not match the language spec at all.

              Not only are instance methods on C# structs implicitly byref; you can easily pass structs by reference via ref, out and in keywords. On top of that, ref structs can hold `byref` pointers aka 'ref' keyword which can point to arbitrary memory, or have references to other structs/variables/anything.

              There is also regular C syntax with &T and T* for unmanaged references/pointers. On top of that, .NET's compiler has gotten very good at struct optimizations and pretty much ensures they stay in registers all the time unless address-taken, including SIMD registers for Vector<T> and Vector128/256/512<T>, even in their "deconstructed" form when the specified width is not supported, in which case they get handled as e.g. 256x2. There was a big jump in codegen quality in .NET 8, which can now sometimes trade blows with GCC and Clang on struct optimizations.

              All these features are first-class and are heavily used by all kinds of performance-sensitive code.

              Also, structs can implement interfaces and can be generic arguments that satisfy interface constraints, which works exactly like generics with trait bounds in Rust - you get a generic instantiation aka monomorphized function body, making the abstraction zero-cost.
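
              For reference, the Rust side of that analogy looks like this (a minimal sketch, names illustrative):

                trait Area {
                    fn area(&self) -> f64;
                }

                struct Square(f64);

                impl Area for Square {
                    fn area(&self) -> f64 { self.0 * self.0 }
                }

                // One monomorphized body is generated per concrete T, so the
                // call is static and inlinable - no vtable lookup.
                fn print_area<T: Area>(shape: &T) {
                    println!("{}", shape.area());
                }

                fn main() {
                    print_area(&Square(3.0)); // instantiates print_area::<Square>
                }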

              • consteval 255 days ago
                Wow, I was not aware of the monomorphization of structs in C#. That's very interesting, I wonder how you're able to mix generic structs and generic classes seamlessly.

                > you can easily pass structs by reference via ref, out and in keywords. On top of that, ref structs can hold `byref` pointers aka 'ref' keyword which can point to arbitrary memory

                These are not features I've encountered. I wonder how you solve dangling references when those references could point to automatic stack variables.

                • afdbcreid 254 days ago
                  > I wonder how you solve dangling references when those references could point to automatic stack variables.

                  By restricting them, to, essentially, disallow storing `ref`s on the heap.

                • neonsunset 255 days ago
                  > Wow, I was not aware of the monomorphization of structs in C#. That's very interesting, I wonder how you're able to mix generic structs and generic classes seamlessly.

                  The handling is transparent, in a way. As implemented by CoreCLR, class-type generic arguments have shared representation named __Canon. This means that, for example, a `Dictionary<int, string>` has a generic instantiation indicated as `Dictionary<int, __Canon>` where __Canon is an implicit generic type argument passed alongside relevant calls. Statics referencing that do get exact address, and there is quite a bit of complexity regarding runtime handling of this as far as virtual calls and other edge cases are involved, but as a programmer you are never exposed to that directly. It's an implementation detail, and even un-monomorphized cases work rather fast in most situations, like standard data containers.

                  > These are not features I've encountered. I wonder how you solve dangling references when those references could point to automatic stack variables.

                  Not sure what you mean by automatic stack variables, but the idea behind byref pointers / managed references / 'ref's is that they are not allowed to be boxed or otherwise placed on the heap.

                  This lifetime restriction enables key scenarios:

                  - byrefs can point to object interiors without hindering GC throughput

                  - byrefs can point to stack memory, allowing `var span = (stackalloc byte[32]);` and more

                  - byrefs can point to any unmanaged memory without requiring excessive range checks by GC

                  This way you can use `ref T` to represent any memory location and `Span<T>` to represent any contiguous memory range up to 2B elements, without having to carry around bespoke overloads and types that disambiguate between containers and memory sources, much like you would usually see in most other GC-based languages.

                  There is also additional lifetime analysis in Roslyn to prevent you from returning 'scoped' 'ref's to an outer scope, like having a ref point to an integer in the current method body and returning it to the caller - this will not compile, unless you override it with unsafe [UnscopedRef], which you should never do unless you are absolutely certain (every time I used it and was absolutely certain, the compiler was right and I was not :D). This also works "through" ref structs and other tricky scenarios, which means you can return a ref that points into the middle of a heap-allocated array - the scope of the object exceeds the current method, and the byref can keep the array rooted even if no other reference to it exists. The main restriction is that byrefs can mostly flow "downward" so as not to escape the scope they originate from, but that's a given in most scenarios in Rust just as much.

                  There is a basic walkthrough about byrefs here: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

                  C# is good at systems programming - it also has portable SIMD and intrinsics, static linking, native compilation and zero-cost FFI. Engineers getting surprised by this fact rather than upset is the happier but unfortunately less frequent outcome :)

            • est31 255 days ago
              > Most GC langs have these features too, they're just not in your face.

              That's what I meant by "strong and widely used default". Defaults and syntactic sugar do matter and influence the type of code written.

          • iknowstuff 255 days ago
            Swift has "weak" references for when you don't want to bump the reference count. Does that make it manual?
        • bryancoxwell 255 days ago
          Pointers in JavaScript? I’m far from a JS expert, but didn’t think the language had pointers. Could you explain what you mean here?
          • lolinder 255 days ago
            I'm not aware of any mainstream language that doesn't have pointers, the question is whether they expose pointers as a first-class language construct (i.e. you can choose to not dereference them or to do pointer arithmetic) or use them as an implementation detail.

            In JavaScript's case it's an implementation detail, but one that is extremely relevant when maintaining something in production.

            • bobajeff 255 days ago
              Correction about JavaScript: all non-primitive data types are passed by reference. So it's not just an implementation detail, as no valid JS interpreter will pass those by value.
              • lolinder 255 days ago
                It's not up to the implementation to decide whether to pass by reference or not, but it is up to the implementation whether to use pointers under the hood to model passing by reference. Since there's no pointer arithmetic, other models could theoretically be used to accomplish the same semantics [0], it just happens to be that pointers are by far the most logical choice.

                [0] As a silly example: a JavaScript implementation could technically store object references as a URL of an API that allows interacting with the object, as long as this is transparent to the user and the program behaves the same as a pointer implementation (minus non-functional characteristics like performance).

            • bryancoxwell 255 days ago
              Makes sense, thanks!
          • mind-blight 255 days ago
            Most variables in JavaScript are (essentially) pointers. A common mistake people make in the language is keeping references to objects in a global map, which prevents them from being garbage collected (often a bad caching implementation).

            You can use things like WeakMap or WeakRef as one solution, but there are usually better options

      • kaba0 255 days ago
      Yeah, the correct statement would be 'Rust is the first mainstream language without GC that guarantees memory safety' (with obviously the caveat of unsafe blocks, but then you can also sun.misc.Unsafe yourself into a segfault in Java).
    • tgv 255 days ago
      Rust is not really an ML language, is it? &mut is a very noticeable difference. Calling Rust, Scala, Swift and Kotlin ML seems to be taking it too far. Scala and Kotlin even have all the traditional OOP features. You might just as well put C++ in the ML list.
      • tialaramex 255 days ago
        It's not the Standard ML of New Jersey of course, like I was taught last century, but it looks like an ML to me. Rust has a sound type system, whereas a language like C++ inherits C's YOLO approach to typing.

        In Rust Vec<Option<Infallible>> is just a counter. Really, that's not theory, it'll just happen by default because of how type arithmetic works.

        In C++ you can't even write down an equivalent type, let alone say how it would be represented, the type system blows up long before we get there.

        • wsabihi 255 days ago
          I've looked for this optimisation, and while it makes sense to me (Infallible is uninhabited ==> s: Option<Infallible> can only exist if s = None ==> all values in the vector must be the same value None that is known ahead of time ==> store a counter of how many Nones are in the vector instead of each None as an entry in a traditional vec), I cannot find any trace of such an optimisation, whether by reading the bytes backing the vector (with rustc -O / -C opt-level=3 to ensure this opt is triggered), or by calling `mem::size_of::<Vec<Option<std::convert::Infallible>>>()`.
          • Measter 255 days ago
            What's happening is that `Infallible` has no values, and cannot be instantiated. This results in `Option<Infallible>` only having one possible variant (None) which has no payload, and therefore being a zero-sized type.

            Separately `Vec<T>`, if `T` is a zero-sized type, never allocates and has a capacity of usize::MAX. Then, when you push into the Vec, because the value you push is zero-sized, there's no allocation and no data to write anywhere. Therefore the only effect is to increment the length counter.
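
            This is directly observable (a small sketch; note that size_of on the Vec itself only measures the ptr/capacity/length header, which is why the measurement above showed nothing):

              use std::convert::Infallible;
              use std::mem::size_of;

              fn main() {
                  // Option<Infallible> has exactly one value (None): zero bytes needed.
                  assert_eq!(size_of::<Option<Infallible>>(), 0);

                  // For a zero-sized element type, Vec never allocates: capacity
                  // starts at usize::MAX and push just bumps the length counter.
                  let mut v: Vec<Option<Infallible>> = Vec::new();
                  assert_eq!(v.capacity(), usize::MAX);
                  v.push(None);
                  v.push(None);
                  assert_eq!(v.len(), 2);
              }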

        • koverstreet 255 days ago
          Could you elaborate? I would think Vec<Option<Infallible>> would have to be a bit vector.
          • zozbot234 255 days ago
            Infallible is a clunky name for the Never type in Rust, i.e. a type that has no values and cannot be instantiated. Thus, Option<Infallible> only has a single value, viz. None. Then Vec<Option<Infallible>> is isomorphic to Vec<()> which reduces to a length field - there's no other data associated with it.
        • jpc0 255 days ago
          Could you explain the semantics of Vec<Option<Infallible>>? What would exist in the memory pointed to by the Vec? What would be the use case for this type?
          • tialaramex 254 days ago
            The Vec itself carries four items, a unique pointer to T, a capacity, an Allocator A, and a current length. It is a generic type, generic over the type T and the allocator A.

            1) Let's dispense with the Allocator: for a typical Vec the global Allocator is used, so this type has no size; every Vec is talking about the same global Allocator, whose state is tracked internally. We need not consider this object further†

            2) Now let's dispense with that unique pointer. Its semantics would be crucial if T had non-zero size, because this is how Rust knows the associated memory pointed to is "owned" by the Vec. However, for a zero-size T this pointer is entirely unused.

            3) Capacity, at last, actually is used. It's a machine-word-sized integer, so on a modern computer that's 64 bits: 8 bytes to store the capacity of the Vec, which will be the maximum possible unsigned integer of that kind, usize::MAX. It is set to this value when the Vec is created (because the size of T is zero) and never changes.

            4) Length is also used: despite not needing to store any data for T, the Vec is finite, and this tracks how many of the zero-size items are in the Vec. Thus, it's a counter.

            † In C++ they use the "Empty Base Optimisation" to avoid needing space for such things, in Rust their size is just Zero and so they won't be stored.

            What is the use case? Vec<T> is a generic type (Rust's generic growable array of T) so although Vec<Option<Infallible>> seems somewhat useless as a concrete type, it is likely to sometimes occur in generic code.

            Example: generic code to do a bunch of potentially fallible operations and remember whether and how they failed for later summarisation may make a type Vec<Option<E>> where E is the failure type. When the operation wasn't actually fallible E is Infallible and instead of an actual growable array type we're just making a trivial counter, our summary is going to inevitably say that all N operations were successful, any code for the "List of failures" summary should even get trimmed as dead code in this case since it depends on an Infallible object and the compiler knows Infallible cannot exist.

            • jpc0 254 days ago
              So from what I've played with, Option<Infallible> in Rust would be directly equivalent to std::nullopt_t and has similar semantics in vectors, i.e. it just increments length.

              There seems to be slightly more compile-time work done in Rust, but for the use case you described, the concept of a Vec<Option<Infallible>> can just as easily be represented as std::vector<std::nullopt_t> with very similar semantics, although possibly slightly less optimisation. I would think that if it was actually used in the language extensively, I would see library implementors making it entirely compile time just like in Rust.

              I don't disagree that algebraic data types are nice and that Rust has an interesting implementation of them, however this specific example doesn't seem unrepresentable in C++ using std::optional and std::variant, although the actual semantics relating to usage are easier in Rust.

              • Measter 254 days ago
                Bear in mind that the optimization on `Option<Infallible>` isn't specific to that type; it applies to anything that results in one possible value. For example, `Foo` in this example is also zero-sized:

                    pub enum Foo {
                        Foo,
                        Bar(A),
                        Baz(B),
                        Qux(C),
                    }
                
                    pub enum A {}
                    pub enum B {}
                    pub enum C {}
                
                The optimization applied to Vec also applies to any zero-sized type.
              • tialaramex 254 days ago
                How do you get to the idea that std::vector<std::nullopt_t> "just increments length" ?

                I spent some time trying this out in Godbolt†, and then by hand, and then reading the Microsoft STL, and exactly as I'd assumed before I read your comment, it was a growable array of one-byte objects of type std::nullopt_t. 400 objects? 400 bytes. 4 million objects? 4 million bytes.

                † Including longer than I'd like to admit forgetting that C++ silently copies large objects without telling you and so my diagnostic was actually making a new std::vector because I forgot to write an ampersand to make it a reference...

                It is still possible I missed something and so I invite you to tell me what that was. But overall my thought is that you've missed the whole point and are in the same place as when C++ programmers get confused and think Empty types (like Infallible) and Zero Size Types (like Option<Infallible>) are somehow the same, which is like confusing zero (The additive identity) with one (the multiplicative identity).

                • jpc0 253 days ago
                  [dead]
      • pornel 255 days ago
        Rust is its own thing, but the use of algebraic data types + pattern matching and type inference makes it feel closer to OCaml than C++. The original Rust compiler was written in OCaml, so that's definitely an influence.
      • foldr 255 days ago
        ML has mutable references:

        https://saityi.github.io/sml-tour/tour/02-09-mutable-refs.ht...

        (OCaml has something similar too.)

      • lolinder 255 days ago
        If &mut makes Rust not in the ML family then none of the languages they list are in the Smalltalk family either—there's no concept of an image, everything is in files!

        It's pretty clear that we're talking about very broad families, which is okay—Rust has more in common with ML than it does with Smalltalk or ALGOL (though the ALGOL heritage is definitely present).

      • pjmlp 254 days ago
        The O in OCaml is for Objective, aka objects, and ML languages do support mutable types, aka ref cells.
    • lucideer 255 days ago
      > The age of Smalltalk is over. It's now the age of ML [...] The languages that dominated the 2000s (Ruby, Python, JavaScript, PHP) were all more or less derived from Smalltalk [...] The new languages (Rust, Scala, Swift, Kotlin, etc.) are ML family languages.

      What drew people to a lot of your examples (Javascript, PHP especially) was the runtimes rather than the language features. Your example sets aren't just demarcated by Smalltalk-ishness/ML-ness but more by runtime type (Scala & Kotlin are odd examples given the VM but for the most part your latter examples have build-time compilation while the former have plaintext interpreters).

      We're definitely in a tools-heavy era of programming where, unless you're writing bash scripts, even very basic applications in interpreted languages are layered with a slew of transpilation/compilation/etc., but generally speaking I still don't see the majority of people moving wholesale away from plaintext runtime interpreters. Where are the ML-ish competitors in that space?

      • noelwelsh 255 days ago
        Good point.

        The nitpicky take: a lot of these languages come with a repl / console. E.g. the most recent version of Scala builds Scala CLI[1] into the language. You can run `scala repl` and just type code into it, or run `scala SomeFile.scala` and it will compile and run `SomeFile.scala`. There is special syntax for writing dependencies so that a single file can pull in the libraries it needs.

        The 5head thought leader take: the traditional model for typed languages has two phases: compile time and run time. Types exist at compile time. This is inadequate for many applications, particularly interactive ones. E.g. a data scientist doesn't know the shape (type) of the data until it is loaded. It should be possible to infer the type from the data once it is loaded and make that type available to the rest of the program. We know how to do this (it's called staging) but it's just not available in the vast majority of languages. Staging, and metaprogramming in general, is perhaps the next great innovation in programming languages (which will take us from ML to Lisp).

        In general, the challenge for these new languages is to reach "down" into the simpler scriptier applications, instead of the "serious" programming they are usually built for.

        [1]: https://scala-cli.virtuslab.org/

    • the_duke 255 days ago
      Re: 3)

      Scala is quite a bit older than the others, I'd put it in a previous generation.

      I always saw Kotlin as a more convenient Java, without Scala's heavy tilt towards FP.

      Java also added the equivalent of sum types with sealed interfaces and exhaustive pattern matching, does that make Java an ML language?

      Many older languages by now have incorporated valuable parts of FP/ML (including Javascript, Java, C++, ...)

      Borrowing some concepts from ML doesn't put these languages in the ML family.

      • valenterry 255 days ago
        OCaml is older than Scala though.

        > Java also added the equivalent of sum types with sealed interfaces and exhaustive pattern matching, does that make Java an ML language?

        I don't think so. Java heavily relies on mutation and you can see this throughout the whole ecosystem and even the JVM whereas OCaml doesn't. Kotlin also relies on mutation (it relies on Java's stdlib after all) whereas Scala has its own stdlib with both mutable and immutable classes but defaulting to immutability almost everywhere.

        So if anything, Scala can be called an ML language imho.

      • SSLy 255 days ago
        > does that make Java an ML language?

        not really, the type-system-sublanguage is still modelled like classy OOP, not more like MLy modules.

        • nextaccountic 255 days ago
          But Kotlin is like this too.
          • SSLy 255 days ago
            Indeed, and Scala (at least the version I've tried 10 years ago) tries to do both, somewhat poorly. shrug
            • valenterry 255 days ago
              Personally I found Scala's module/import system to be really really good. There are some annoying things with it, but they mostly come from JVM/Java compatibility.
              • SSLy 255 days ago
                Yeah, it's the compat that I vaguely recall being kludgy.
    • hyperbrainer 255 days ago
      About the time-travel-to-the-past aspect: I wonder if we will have an age of Lisp now? I know Lisp is very popular in some niches (e.g. Emacs), but I really want to see innovations in syntax beyond the current standard of too many brackets. (M-expressions?)
    • dgellow 255 days ago
      Swift does have garbage collection via reference counting
      • zerr 255 days ago
        Yes, usually GC vs non-GC discussion is actually about deterministic vs non-deterministic memory management. Swift's ref-counting is deterministic.
    • benreesman 255 days ago
      I think if Linear/Affine typing was a default anyone wanted, Haskell hackers (who are usually pretty concerned with performance) would regard it as more than a research oddity.

      When people bitch about the borrow checker, what they usually mean is “std::move is a great wristwatch but god damn does it chafe as a chastity belt”.

      • zozbot234 255 days ago
        The whole point of Haskell as a programming language is to have lazy evaluation be the default. This comes with boxing data objects everywhere, which is the opposite of a focus on low-level performance. Haskell does support using strict, unboxed data but it's clunky and not the default. (Also, recent versions of Rust have now added support for "plug-in" laziness via the type constructors LazyCell<> and LazyLock<>.)
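
        For example, a minimal sketch of that opt-in laziness (LazyLock is stable since Rust 1.80; the name GREETING is illustrative):

          use std::sync::LazyLock;

          // The closure runs at most once, on first dereference.
          static GREETING: LazyLock<String> = LazyLock::new(|| {
              println!("computed once");
              "hello".to_string()
          });

          fn main() {
              println!("{}", *GREETING); // triggers the computation
              println!("{}", *GREETING); // cached
          }
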
        • benreesman 255 days ago
          I’m aware of how boxing works in GHC, I learned it from Simon Marlow.

          I don’t mean to be obtuse but I don’t see what that has to do with linear typing as a fucking weird mandatory default?

          • the_duke 255 days ago
            Why should every language occupy the same design space?

            Rust was shaped by Mozilla wanting a language to replace parts of Firefox with. A language that is low-level enough to give very high performance, enforces correctness through the type system, and yet has many abstractions so it feels modern and convenient to use.

            Rust doesn't have any automatic allocations. You can wrap every type in `Arc<Mutex<T>>` and treat it like a very verbose Python, but that's the programmers choice.

            The design decisions have been validated by the success of Rust in certain domains.

            There is a reason why more than one programming language exists...

            • benreesman 255 days ago
              I think Rust is cool!

              If it wasn’t actively seeking world domination I’d be like, cool let’s use that sometimes.

              Rust is so hell bent on a Rust monoculture ranging from stuff like TRACTOR to actively opposing interop that a few of us who know our shit have to be like “easy now” once in a while.

              I’m trying to decide between tiktoken and sentencepiece for a new vocabulary at the moment, and it would be easier in some ways to go with tiktoken.

              But I want it to run faster than a fart at inference time, which means I’m linking libtorch, which means I’m writing C++.

              And fuck, link to Rust from C++? Let’s take another look at the Google stuff.

              • tialaramex 255 days ago
                > If it wasn’t actively seeking world domination I’d be like, cool let’s use that sometimes.

                Hylo specifically says it intends "world domination" (next year in fact, having apparently completed all the delayed 2023 goals and also all its 2024 goals this year, I guess maybe they're going to do it all in December?)

                But I don't really see this from Rust. I'm sure Hylo's "world domination" is a joke, but, so is the "Rust Evangelism Strike Force". Rust people I've interacted with strike me as very much more accepting of the "better tool" theory than of some weird cult or panacea. When somebody in a Rust forum says (e.g. about JPEG XL) "Ooh we should use Rust" and I say "No, WUFFS. This problem is what WUFFS is for" the responses tend to be a mix of "Yeah, I guess" and "I didn't know that existed. Thanks for the link" rather than cultist denial.

                • benreesman 255 days ago
                  Haha we’ve tangled before and in my experience when the Rust evangelism strike force is on the case discretion is the better part of valor.

                  I admire both the technical sophistication and passion you folks bring to the table, the one true language thing just isn’t my bag. I pine for the Halcyon days when HN was run by the Rust people rather than the LLM idiots.

                  Keep doing what you’re doing, one hacker to another.

              • zozbot234 255 days ago
                > to actively opposing interop

                Rust just uses the C ABI for interop (including interop with Rust code that might not be part of the same build, such as dynamically-linked program objects). So you just have to come up with a plain C API for the interface and write wrappers on both sides of the divide that reference the C API. There are crates that will help with this, even achieving something like a "stable ABI" for Rust interop.
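
                A minimal sketch of the Rust side of such a wrapper (the function name is illustrative; assumes the crate is built as a cdylib or staticlib):

                  /// Callable from C or C++ as:
                  ///   uint64_t add_u64(uint64_t a, uint64_t b);
                  #[no_mangle]
                  pub extern "C" fn add_u64(a: u64, b: u64) -> u64 {
                      // Only C-compatible types cross the boundary; richer Rust
                      // types get wrapped on both sides of the divide.
                      a.wrapping_add(b)
                  }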

                (And we'll probably see some work into interop with Swift itself (which has a stable ABI of its own) once there's enough interest in that as a memory safe language.)

                • benreesman 255 days ago
                  It’s doable!

                  But bindgen/cbindgen/cxx/etc exist for a reason: it hurts like a root canal.

                  In my particular case, tiktoken is substantially like 600 lines of Rust. If I have a deadline do I fuck with the linker or just write it again? I’m marshaling std::string here, fuck let’s just write it.

                  What I’m not going to re-write because I’m not a magnificent maniac like geohotz? I’m not going to rewrite libtorch. Which is written in C++ like all the software you can’t really live without.

          • zozbot234 255 days ago
            I'm not sure what's supposed to be especially weird about linear typing (or rather, uniqueness typing which is what Rust ultimately relies on). If you want to see the use of such types in a language that's clearly even less "convenient" and more principled than Rust, you can look at Austral https://austral-lang.org/
            • benreesman 255 days ago
              I think I’m going to be a hard pass on a language more militant about BDSM type system level memory management than Rust.
              • farmeroy 255 days ago
                Is their code linter called whippy?
        • bananapub 255 days ago
          perhaps a dumb question, but why does laziness imply boxing? or does 'boxing' in haskell mean something other than 'embed a simple bit of data in a fancy thing on the heap'?
          • benreesman 255 days ago
            A dramatic over-simplification is that a lazy value referenced by some continuation/thunk is hard to stack allocate in-situ.

            Now GHC does stack allocate, but it’s hard to count on without really aggressive hinting.

            • the_duke 255 days ago
              Funnily enough Rust supports exactly that pretty well with async.

              An `async {}` block is not evaluated, but converted into a generator/state machine that lives on the stack, and that has to be advanced by calling a poll method. You can move it to the heap, but that does not happen automatically.

              Sharing of values is obviously much more awkward due to mutability and no automatic cloning though.
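
              A sketch of driving one by hand (requires Rust 1.85+ for Waker::noop):

                use std::future::Future;
                use std::pin::pin;
                use std::task::{Context, Poll, Waker};

                fn main() {
                    // Nothing runs yet: the block becomes a state machine on the stack.
                    let fut = async { 40 + 2 };

                    // Advance it by calling poll manually.
                    let mut fut = pin!(fut);
                    let mut cx = Context::from_waker(Waker::noop());
                    if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
                        println!("result: {v}");
                    }
                }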

              • benreesman 255 days ago
                I know how it works though I remain unconvinced that rustc is dealing with the same scope of problem as GHC in the general case.

                But to be clear, anyone with the username the_duke is probably really cool, and you clearly know your stuff as well as having style, so count me as a fan.

            • bananapub 255 days ago
              ah, right, laziness can cause the control flow to become extremely complicated - makes sense, thanks!
              • benreesman 255 days ago
                Also true! And in a very real sense literally what I meant.

                But not so much in the conditional branch sense of the phrase “control flow”.

                More like, I’ve got a bunch of references and one of them is a reference to a function I might evaluate, damn, I need that reference and the transitive closure of everything it knows about, all of which are probably on the heap.

                • zozbot234 255 days ago
                  The proper name for this is a "space leak". Also known as: "you wanted a banana, but what you got was a gorilla holding the banana and the entire jungle". This is of course pretty much the opposite of low-level "performance" - and it's also the kind of problem that tracing garbage collection was actually designed to address when first developed.
                  • benreesman 255 days ago
                    Fake news and FUD and other balderdash.

                    A space leak, much like any memory leak, is when you or the compiler or the runtime or whoever misplaces such a reference.

                    GHC having a closure in scope in no way immediately means there is a space leak. Plenty of Haskell code that compiles to thunks will run all day in an amortized fixed-size heap.

                    Much like Erlang, which is what Joe Armstrong had designed before he said the thing about jungles you’ve quoted.

                    And it is in no way difficult or even really rare to drop an Arc into a table and leak all over the place.

                    Rust has all the same computer science problems as anything else. Full stop.
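
                    To make that concrete, a minimal sketch of how a reference-counted cycle leaks in perfectly safe Rust (Rc here, but Arc behaves the same way):

                        use std::cell::RefCell;
                        use std::rc::Rc;

                        struct Node {
                            next: RefCell<Option<Rc<Node>>>,
                        }

                        fn main() {
                            let a = Rc::new(Node { next: RefCell::new(None) });
                            let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
                            // Close the cycle: a -> b -> a. Neither count can ever
                            // reach zero, so neither node is ever freed.
                            *a.next.borrow_mut() = Some(b.clone());
                        }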

          • throwaway17_17 255 days ago
            I seem to be missing the implication as well. I am under the impression that Haskell relies on boxed values primarily due to its parametric polymorphism. There is a Simon Peyton Jones talk somewhere discussing the acrobatics GHC requires for the type theory to deal with unboxed types and their implicit requirement for an alternative Kind at the type level.

            The laziness as a default just makes boxing closure values a sensible default, when the type system requires the boxing of everything in general.

            • benreesman 255 days ago
              On everything from the JVM to GHC to v8 there is a critical distinction between the semantics of a boxed type and the practical outcome on the gear. Every managed language runtime done by pros (and those all are, I should have said the CLR too) will quietly unbox in all the easy cases and some of the hard ones.

              Parametric polymorphism is kind of orthogonal to that. It’s related I guess. It can involve some kind of runtime dispatch, but usually that gets JITed or inlined or whatever unless you really insult the compiler.

              And even with full-frontal invokedynamic it’s still like, is it in the BTB and the I-cache? Ok, well, we’re still doing business.

      • mananaysiempre 255 days ago
        GHC Haskell is a fairly large language with a considerable legacy already, so integrating linear or affine types into it may well be a very different question from making a language centered around them. That language exists in the form of Clean[1,2]. Admittedly it’s not really well-known.

        [1] https://clean.cs.ru.nl/

        [2] https://clean-lang.org/

        • benreesman 255 days ago
          Thank you for making me aware of Clean, on quick glance it looks cool.

          I like a lot of things about Rust: it’s got compiler-checked exhaustive pattern matching, a good initialization syntax, Either baked in, fucking correct package management, and a ton of other things.

          I’ll look at Clean to see if linear-only memory management can be done gracefully. Rust is a lot of cool things but not the language that demonstrated that.

    • hollerith 255 days ago
      You seem to assert that OCaml doesn't use a run-time garbage collector:

      >Rust is the first language to bring non-GC automatic memory management to the mainstream. . . . Other languages in this space include . . . OCaml[1]

      But your own link[1] says,

      >The OCaml compiler does not statically track lifetimes. Instead, it relies on a garbage collector to figure out a suitable lifespan for each value at runtime. Values are collected only after they become unreferenced, so OCaml programs are memory-safe. To a first approximation, this model requires allocating all values on the heap. Fortunately, OCaml’s generational GC can efficiently handle . . .

      Scala does run-time garbage collecting, too (or rather the JVM does, which Scala depends on at run-time) unless I'm very much mistaken.

    • timeon 255 days ago
      > These alternatives take the approach that tracking ownership is not the default, which is fine for the majority of programs and far more ergonomic.

      If you want non-default safety you do not need to leave C++. That is why, unlike all these new languages, Rust makes a difference.

      • zozbot234 255 days ago
        Swift 6 is expected to be memory safe, including for concurrent programs. That's a lot more than you can say about C/C++. (This is achieved by tracking ownership at runtime when required. Rust allows for this via library types such as Rc<> and RefCell<>, which is more principled but obviously adds some boilerplate compared to Swift.)
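
        A tiny sketch of what that runtime tracking looks like on the Rust side:

            use std::cell::RefCell;

            fn main() {
                let cell = RefCell::new(vec![1, 2, 3]);
                let reader = cell.borrow();   // shared borrow, counted at runtime
                // cell.borrow_mut();         // would panic here: already borrowed
                drop(reader);
                cell.borrow_mut().push(4);    // fine now that `reader` is gone
            }
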
    • IsTom 255 days ago
      > (Ruby, Python, Javscript, PHP) were all more or less derived from Smalltalk

      I'd argue that Erlang (with gen_servers being "objects") has more in common with Smalltalk than these.

    • alex_smart 255 days ago
      > The languages that dominated the 2000s (Ruby, Python, Javscript, PHP) were all more or less derived from Smalltalk

      One of these is not like the others. Javascript derives from Lisp (first-class functions, lambdas, closures), not Smalltalk.

      • robertkrahn01 255 days ago
        Brendan Eich used both Scheme and Self as inspirations. Self is a dialect of Smalltalk.

        https://en.wikipedia.org/wiki/Brendan_Eich

        • pessimizer 255 days ago
          Did you link his biography? The claim is that he used Self as an inspiration, not that there was once a man named Brendan Eich.
      • astrobe_ 255 days ago
        More than features, JS derives from Lisp because its author was a Scheme implementor. Otherwise function pointers are trivially done even in assembly language, and Smalltalk does have closures.

        Or maybe you think of the fact that JS uses prototype-based OOP, which makes it closer to what one would do with Scheme (something inspired from CLOS I guess) than Smalltalk ?

      • panzi 255 days ago
        Python has those too, although with less convenient syntax (multiline functions (closures) can only be at statement locations).
        • alex_smart 255 days ago
          Sure, but use of those patterns is not idiomatic in Python.
  • afavour 255 days ago
    As someone that’s recently been working on integrating Rust into an iOS Swift app I do agree with a lot of this. I love Rust but the more I’ve used Swift the more I find myself wishing I was just using Swift all the time.

    That said, the difference between the two has a lot less to do with the language itself than the world surrounding it. You can use Swift cross platform but it’s very obvious that Apple platforms are the primary target. Rust has a rich and varied package system; about the only assumption most packages make is that you’re using the Rust standard library. By comparison, a lot of Swift packages (which is a much smaller ecosystem anyway, one that’s slowly transitioning away from CocoaPods, Carthage etc) will lean on OS APIs that won’t work if you compile for Linux or WASM[1].

    I want Swift to be a more convenient Rust but it just isn’t there, e.g. look at IBM abandoning Swift on the server not that long ago.

    [1] for example, this blog post:

    https://swiftrocks.com/weak-dictionary-values-in-swift

    Discusses making a dictionary with weak values in Swift. It has a homegrown Swift version then discusses using NSMapTable… which isn’t available on Linux. But you wouldn’t know that reading the article because the assumption is that you’re running on an Apple platform.

    • 369548684892826 255 days ago
      Swift is in its .NET Framework stage. I’m looking forward to Swift Core!
      • lolinder 255 days ago
        The difference is that Microsoft very intentionally decided it was going to make .NET cross-platform as a strategic play. Apple has recently shown a willingness to get out of the way of cross-platform Swift, but I haven't yet seen evidence that they're throwing any significant weight behind it. They have an even worse track record than Microsoft did of investing heavily in locking developers into their ecosystem, so I'll have to see quite a bit of evidence to believe they're actually serious about cross-platform Swift.
      • afavour 255 days ago
        I like that comparison! But it might require Apple’s perspective as a company to shift in the same way Microsoft’s did. We’ll see, I guess…
      • pzo 255 days ago
        It might be too late. IMHO Microsoft also transitioned too late, and these days .NET is less relevant than it used to be - even many Microsoft apps are built using React Native or Electron.
        • pjmlp 254 days ago
          As someone mostly focused on the Java and .NET ecosystems for the last 20 years, the problem isn't the transition, rather that the team is hindered by upper management goals.

          While they do their best to open source .NET and related SDK tooling, the whole IDE story has been a bit of a mess: rewriting VS4Mac only to kill it shortly after the rewrite reached 1.0, C# DevKit under the same license as VS, the stuff that will never leave Windows or VS for Mac/Linux, the Xamarin.Forms rewrite not taking GNU/Linux into consideration and using Catalyst for the macOS MAUI backend, the dotnet watch drama, some ASP.NET features seen as a way to sell Azure stuff, ...

          If you look closer, you will notice that usually the React Native or Electron based applications are from business units that are mostly C++ focused, that never had great love for .NET in first place.

          While the team tries to cater to new generations by making .NET development a great experience, even with some constraints in cross-platform deployments, I think they would gain more by fixing the bad perception .NET still gets in UNIX shops, which those upper management decisions don't make any better, regardless of how great .NET happens to be.

        • nicce 255 days ago
          > even many microsoft apps are build using React Native or Electron.

          I doubt that .NET has much to do with it. There are just many more JavaScript developers who know how to make UIs.

          • consteval 255 days ago
            That's because the web is cross platform, truly. It's not a separate issue - the prevalence of the web is exactly due to platforms like .NET being limited. If .NET (and others) were cross platform much earlier, I think many applications would be true apps instead of web apps right now.
    • lukeh 255 days ago
      Well, that article also presents a pure Swift (non-Foundation) solution, WeakRef.
      • afavour 255 days ago
        Yes, I mentioned that. There’s probably a better example but this is off the top of my head as something I experienced myself. My point is that the blog post suggests the Swift native version then discusses NSMapTable without ever calling out the fact that you can’t use the latter cross-platform. I find that endemic in the Swift community, there’s a base assumption you’re coding on an Apple platform (and often specifically on iOS!), it adds an additional barrier to getting things done.
      • naman34 254 days ago
        And that article is from 2020. A lot has changed since.
    • naman34 254 days ago
      I added a new section to the post with links and information to dispel this myth that Swift is only for Apple platforms.
    • jnrk 252 days ago
      > look at IBM abandoning Swift on the server not that long ago

      That was back in 2019..

  • nicce 255 days ago
    > Rust invented the concept of ownership as a solution memory management issues without resorting to something slower like Garbage Collection or Reference Counting.

    They did it well, but they didn't invent it?

    There were very many kinds of influences: https://www.reddit.com/r/rust/comments/le7m54/is_it_fair_to_...

    Especially Cyclone, I think: https://en.m.wikipedia.org/wiki/Cyclone_(programming_languag...

    • mananaysiempre 255 days ago
      The best review of the relevant research that I know is Pottier’s presentation[1]. It’s from 2007, but then again, as far as fundamental concepts go, it doesn’t seem to me that Rust is state-of-the-art even as of then. (To be fair, that’s not due to ignorance, Rust’s type system is deliberately conservative.)

      [1] https://pauillac.inria.fr/~fpottier/slides/fpottier-2007-05-...

  • K0nserv 255 days ago
    I love both Rust and Swift, they have their respective strengths. I would say Swift has a less noisy surface syntax, but instead uses more dedicated keywords and compiler magic. This is nicer, but means some areas are "compiler only territory".

    In many cases Swift would be a better choice than Rust, when the convenience and developer experience is worth trading for some performance. However, Swift's biggest problem is that any usage outside of the Apple ecosystem is a second(or third) class citizen. Until this is solved, Swift will remain mostly an Apple only language, regardless of how nice it is.

    • galangalalgol 255 days ago
      In my toy usage of Swift I found it to run dramatically slower than Rust. To even get something close to Go or Java speeds I had to compile with `-Ounchecked`, losing a lot of safety. Other than developing for an Apple product, I don't know why I would ever pick Swift, and I wouldn't ever find myself deciding between Rust and Swift. It would be Swift vs Go. When considering Rust it would be vs C++ or Zig.
      • attractivechaos 255 days ago
        In my limited experience in both languages, it is easier to write inefficient programs in Swift. With Rust, follow your instinct: as long as it compiles, the performance is usually close to your expectation. With Swift, the result may surprise you more often.
      • K0nserv 255 days ago
        It depends on the axis you are considering. Swift might be slower than Rust, but it has an equally powerful type system. Swift and Go are much less similar than Swift and Rust in this regard.
        • galangalalgol 255 days ago
          I compare tools based on what they do, not how I use them. I use MIG and hot glue guns similarly, but I'd compare MIG to stick or TIG, and a glue gun to tubes or glue sticks.
    • zer0zzz 255 days ago
      > However, Swift's biggest problem is that any usage outside of the Apple ecosystem is a second(or third) class citizen. Until this is solved, Swift will remain mostly an Apple only language, regardless of how nice it is.

      I think as time goes on this is more and more only a perception and not a reality. However, perception is really important, because it’ll mean that even if it’s a great language with a great set of tools and libraries for all platforms, it will not have a community of developers outside of the Apple ecosystem.

      • knighthack 255 days ago
        The issue is not "as time goes on".

        The issue is now - the use of Swift outside of the Apple ecosystem is remarkably stifled, given the lack of Apple support.

        A language truly acquires mainstream mindshare and community only if it can extend its use beyond Apple's OSes.

        • zer0zzz 255 days ago
          > given the lack of Apple support.

          It’s an open source project that’s more and more detached from Apple itself every release, with lots of contributors making it better and better on Android and windows all the time.

          It is possible to get a VSCode IDE environment working for cross-platform development and debugging today, which was not that easy or possible even a few years ago.

          There is at least one pretty sizable software project that has shipped a product on Android and windows using swift.

          I think swift 6 is going to be great for cross platform uses, but convincing folks to try it still won’t be easy.

  • robjwells 255 days ago
    One way that Swift is certainly not more convenient than Rust is the tooling. I use a 2018 MacBook Air, using macOS 12, which is now unsupported by Xcode. Meanwhile SourceKit-LSP is treated as very much a second-class citizen. But Rust 1.81 and rust-analyzer build and run just fine.
    • arghwhat 255 days ago
      Not to praise Swift, but Swift and Xcode are unrelated projects. Saying that Swift is inconvenient because of Xcode issues is like saying C++/C# is inconvenient because of Visual Studio.

      You can install Swift on Linux if you want and code away, just like with Rust - but as it hasn't really caught on for anything other than building apps for the Apple ecosystem, it's not a particularly normal thing to do.

      • x3ro 255 days ago
        The de facto standard matters. The vast majority of Swift programmers will be using Xcode, and I definitely immediately think of the pains I've had with Xcode when I hear Swift. For example, I don't know of any other good IDE environment for Swift, though maybe there is one.

        You could also argue that e.g. Rust is not cargo, but almost every Rust programmer will be using cargo. Sure I could use something else, but why would I? Of course the analogy is not perfect, because the "why would I?" is clear for Xcode: it's bad and macOS only :D

        All I'm trying to say is: defaults matter. Most people will not be writing Swift in VS Code (on macOS).

        • nelup20 255 days ago
          Yep, I've only started using Swift a couple of months ago, but Xcode just isn't pleasant to use imo (though I'm probably spoiled by Jetbrains' IDEs), and the Swift extension for VS Code is clunky.

          I'm still really sad Jetbrains decided to sunset AppCode :(

          So the quality of tooling/IDEs is definitely a factor, I just don't see myself using Swift outside of the Apple ecosystem when there are so many other alternatives.

        • arghwhat 255 days ago
          You don't use Xcode to write Swift, you use Xcode to write Apple apps which you happen to do in Swift. No matter the language, you need Xcode to write those apps.

          Equating Xcode and Cargo makes no sense. A similar situation to Xcode and Swift would be Visual Studio and C# on Windows. Many developers use these tools, but they are not the language ecosystem, and the Cargo equivalents are entirely separate (NuGet, CocoaPods).

          • eptcyka 252 days ago
            If you argue about nomenclature, yes, Xcode and cargo fit into entirely different categories. However, they are still comparable: you can't feasibly escape either to develop Swift or Rust apps the way most people develop them.
      • yunohn 255 days ago
        I’m pretty sure you need Xcode to compile Swift on macOS.
        • robjwells 255 days ago
          You can install a Swift toolchain on macOS without Xcode, but Xcode is the “blessed” route and listed first on the swift.org getting started instructions. https://www.swift.org/install/macos

          However, it can be a bumpy ride. For example: “swift test” (via the CLI) doesn’t work if Xcode is not installed. https://github.com/swiftlang/swift-package-manager/issues/43...

          I would just emphasise that the OP was about _convenience_ and Swift-without-Xcode on macOS is not smooth sailing.

        • latexr 255 days ago
          Everyone who has installed Homebrew has the official CLI Developer Tools (Homebrew installs them for you), and everyone who has those has a way to compile Swift.

          Even if you don’t have Homebrew, the first time you try to run `swift` in a Terminal you’ll get a GUI prompt which lets you install the Developer Tools in two clicks.

          Or you can run `xcode-select --install` (comes preinstalled, no Xcode needed).

        • acar_rag 255 days ago
          Absolutely not, you just need to use swiftc: https://theswiftdev.com/the-swift-compiler-for-beginners/
    • dlachausse 255 days ago
      It looks like macOS Sonoma (and the latest Xcode) should work on your Mac.

      https://support.apple.com/en-us/105113

    • mrtksn 255 days ago
      [flagged]
      • homebrewer 255 days ago
        Is upgrading every 5-6 years to be able to write code considered normal in the Apple world? I'd been comfortably writing code in several languages using latest toolchains on a decade-old Intel machine just a few months ago, and was forced to upgrade because one of the components died. Otherwise I'd be using it for at least five more years.
        • jdmoreira 255 days ago
          > Is upgrading every 5-6 years to be able to write code considered normal in the Apple world?

          Yes, pretty much

        • zer0zzz 255 days ago
          I thought this was pretty normal for anyone writing code in larger code bases in these slow to compile languages in the first place…
        • SSLy 255 days ago
          frankly, late Intel Macs just sucked with their power-to-capability ratios. progress since M1 hasn't been that incredible.
        • sgt 255 days ago
          I mean.. upgrading every 5-6 years is pretty normal outside the Apple world too, if you are a professional programmer. I don't expect my mom to upgrade her computer that much (she's happy with her 2011 (!) model iMac).

          But for those of us who use a computer every day, surely it makes sense to stay a bit cutting edge and it won't break the bank either since it is literally our income.

          • timeon 255 days ago
            Is it sustainable?
            • kaba0 255 days ago
              The curve has flattened a lot - on mobile phones it is extremely noticeable, but I would say that apple’s M series is also the last significant jump in the laptop world (because laptops weren’t really ‘mobile’ before with such a bad battery life). So, yeah, I would say it can be reasonably sustainable.
        • mrtksn 255 days ago
          I don't see why anybody would complain that the tools they bought 10 years ago don't magically do the things that the new tools are doing. It's not like the old tools stopped doing the things they do, right?

          Anyway, you sell the old machine and get the new one; that's pretty normal, as Apple devices tend to hold value.

          • tcfhgj 255 days ago
            We are talking about Software though.

            It's not really sustainable to throw things away and buy new stuff all the time when there's no technical reason for it

            • mrtksn 255 days ago
              That still doesn't mean that the tooling is bad
          • gr__or 255 days ago
            let the record show that we're talking about 6 years
            • mrtksn 255 days ago
              Which is about the same time it takes a high school student to get his masters degree in computer science, get into a FAANG and get promoted.

              6 years is a lot. It runs about $14 a month if you throw away your tool at the end. It's probably more like $10, since you are likely to be able to sell that tool at a reasonable price, since it's Apple.

      • dannersy 255 days ago
        Buying a computer is not a reasonable response to a comment about bad tooling.
        • mrtksn 255 days ago
          The comment says that the old tool doesn't do this new thing. The new tools do the new things.

          How is it that buying the new tool to do the new thing is unreasonable and expecting the old tool to do the new thing is reasonable?

          IMHO if the new tools are good and the tool you have is not good it doesn't mean that the tooling is bad, it means that the tool you have is bad and the solution for this is to obtain a new tool.

      • dgellow 255 days ago
        I can write rust on a decade old machine without issues
        • pjmlp 255 days ago
          Depends on how much time you have to wait for clean builds, recompiling the same crates from scratch.
          • tcfhgj 255 days ago
            On my 15 years old machine I learned to do clean builds only when necessary
            • pjmlp 255 days ago
              Basically working around the problem.
              • tcfhgj 255 days ago
                Without any effort, because great tooling
                • pjmlp 255 days ago
                  Great tooling is able to depend on binary libraries without requiring to build the world.
              • timeon 255 days ago
                Constantly making clean builds is made-up problem.
                • pjmlp 255 days ago
                  Better yet is not doing any at all, binary libraries are a thing in some systems programming languages.
      • bhaney 255 days ago
        The modern tooling not being available for common hardware that's only a few years old is a problem with the tooling
        • mrtksn 255 days ago
          It can be a bad value depending on what you do with your tools, but I wouldn't say that this is bad tooling.

          The new tooling is amazing: it's fast, it's snappy, it runs the code you write for your mobile device directly on your tool. If the value that brings you is less than $200 a year, then it can be a bad deal, but it's not bad tooling.

  • seanalltogether 255 days ago
    I keep trying to get into rust but I always hit a brick wall when looking at examples and the code looks so complicated. Examples like this straight from the rust website just make my eyes glaze over

        struct Task {
            future: Mutex<Option<BoxFuture<'static, ()>>>,
            task_sender: SyncSender<Arc<Task>>,
        }
    • MSFT_Edging 255 days ago
      Something I like about Rust is that it gets very specific and explicit.

      On first glance you're like wtf is going on, but you can derive backwards what is happening without having to look for a discussion on language behavior under the hood. Automagic is nice sometimes but I'm the kind of person that needs to say each word of an acronym to process it.

      Rust is like that in a way. You have a mutex on an option type; the option has a heap-allocated future that contains data that can live for the lifetime of the program.

      This is clear to me because I don't need to fill in blanks. My memory is terrible, I've forgotten so many things I've learned in the past. I could pick them back up quite quickly but I don't have the little facts ready to go. If I wanted to use that future, I know I need to check the mutex lock, check if the option contains a Some(), etc.

      Sure this isn't for everyone, but I'm glad we have a tool like this gaining popularity. I have little interest in studying the arcane knowledge of C++ and sorting out what is current and what is obsolete, then arguing with a 30 year veteran that their technique is 20 years stale.

    • swiftcoder 255 days ago
      I'll grant you that is probably too complicated an example to appear early in the docs, but how often are you actually building multithreaded job dispatch systems from scratch (which appears to be what this example is doing), and how simple would it be in other languages?
      • sorentwo 255 days ago
        Transparently parallel, lightweight tasks are part of the language in Elixir. Fully multithreaded and able to utilize all cores out of the box.
        • foldr 255 days ago
          Yes, but the Rust code is the kind of thing you'd write if you wanted to build that kind of thing yourself. I do think normal Rust can get too typey, but this is probably not the best example of that.
      • diggan 255 days ago
        This is how simple it would be in Clojure, for comparison:

            (defrecord Task [future task-sender])
        
        Probably used something like this:

            (defn create-task []
              (let [future (atom nil)
                    task-sender (async/chan)]
                (->Task future task-sender)))
        
        
        Not sure it makes sense, but it's a pretty much direct translation as far as it goes.
        • galangalalgol 255 days ago
          Having never used a lisp outside an AI class a quarter century ago, that isn't comprehensible to me, and I don't think it is just my lack of caffeine. The rust example makes more sense, but I've used rust for almost a decade at this point and c++ for over three. Familiarity matters.
          • Sharlin 255 days ago
            But honestly only for a very short time. It took me two weeks of learning for Clojure to start looking completely natural to me. Surface syntax is such a trivially minor thing, which is why it seems ridiculous to me that the OP author even mentions things like "Swift uses switch rather than match, such familiarity" among other much more solid points.
          • diggan 255 days ago
            Yeah, of course. If you show me Brainfuck or APL code I won't be able to make heads or tails of it either. But familiarity should never stop one from learning new things, otherwise we miss out on a lot of simple, interesting and useful stuff :)
        • speed_spread 255 days ago
          So it's the same but without type annotations? Meaning that the clojure IDE has a much harder time providing completion and that you need to write all sorts of tests to get the same guarantees as the rust compiler provides for free. Type systems aren't just about memory representation, they also tell the next programmer about the intent and performance characteristics of the code at hand.
          • tmtvl 255 days ago
            That's why I moved from Scheme to Common Lisp, it's nice being able to do...

              (defstruct Task
                (future nil :type Future)
                (task-sender nil :type Sender))
            
            And have SBCL warn me when I try to jam the wrong type in.
          • diggan 255 days ago
            If you really badly want types, you'd slap something like this below it:

                (s/def ::future atom?)
                (s/def ::task-sender async/chan?)
                (s/def ::task
                  (s/keys :req-un [::future ::task-sender]))
            
                (s/fdef create-task
                  :ret ::task)
            
                (stest/instrument `create-task)
            
            But then I don't think you'd reach for something like Clojure if static typing is something that you require.
      • nurettin 255 days ago
        It would be ridiculously simple in go with channels.
        • kenmacd 255 days ago
          Would it though? Just running things 'async' is simple, but it's also simple in Rust. I wouldn't figure writing an Executor would be any easier in Go than the Rust example at https://rust-lang.github.io/async-book/02_execution/04_execu...
          • nurettin 255 days ago
            As I understand, the idea of an executor is to emulate an event queue using threadsafe (hopefully lock-free) queues. Since the language already has goroutines, channels and group waits, I don't think this is something that needs to be built in Go. I mean you can do your rate limiting and retry handling logic outside the language primitives. If you do have to make a task spawning executor, you can do that with channels as well. But it would just be a very thin wrapper.
      • jppittma 255 days ago
        In golang I think just

            type Task struct {
                Sender chan Task
            }
        
        no?
    • bitexploder 255 days ago
      The only tricky thing there is the BoxFuture and ‘static lifetime. Arc is pretty simple. Option is simple. Rust is just forcing you to acknowledge certain things and understand lifetimes explicitly before you can move data around. And 'static is close to a “plz make this lifetime error go away” thing cause it means it can live forever. But they are also fine, and often the right choice. This may seem contradictory but it will make sense if the Rust compiler hurts you enough times :)
    • smodo 255 days ago
      This may demonstrate the intricacy of the type system but isn’t a very intuitive example… I feel like the Rust Book does a good job of explaining the different types in std. For one thing it shows these types in their most idiomatic use cases. Rust is my first real low level language and I learned a lot by reading the book.

      And by reading it I mean I actually read the thing front to back before touching a keyboard. Then I started to experiment with some simple code and after a couple of months I had some actual programs that I use often. Obviously I rewrote them after a year but hey ho, it’s all a learning experience.

    • j-pb 254 days ago
      It looks complicated but if you think about it as a series of property descriptions it becomes much easier.

      I'm gonna make it worse first by substituting the `BoxFuture` type alias with its definition but that makes it easier to explain.

        future: Mutex<Option<Pin<Box<dyn Future<Output = ()> + Send + 'static>>>>,
      
      What this means is that the future field stores something that is a:

      - Dynamically dispatched object (with a V-Table) that implements methods from the Future trait, and returns nothing () = void. (dyn Future<Output = ()>)

      - That thing needs to be sendable between Threads (+ Send)

      - And must potentially live forever (+ 'static)

      - The dynamic dispatch stuff only works with heap allocations because the V-Table pointer is stored in the double wide smart-pointer. In this case we use a Box which means only one thing/thread can hold it at a time (unlike reference counted Rc/Arc).

      - The heap allocation itself must not be moved around in memory. (Pin<...>)

      - that thing might not be there (it can be None), but replacing/taking/putting the thing will be secured by the mutex, so that's a nice guarantee for multithreading (Option<...>)

      - access to a thing is synchronised via a mutex (Mutex<...>)

      So while it looks horrible at first sight, it is simply telling you a lot about the guarantees of that particular type.

    • XorNot 255 days ago
      Honestly that one I can read (I am just now in earnest trying to learn Rust). What I've been banging my head into is trying to get a suite of boilerplate libraries together to make basic things happen.

      Got directed from slog to tokio-tracing for logging, so dove into setting up my basic "--log-level", "--log-format" CLI options I add to every non-trivial program in Rust and ran into...whatever is going on with configuring it (namely, tracing-subscriber seems to encode all the configuration information into types, so you can't just return a Subscriber or Subscriber builder from a function and then add it as a logger...the documentation gives no hint on what trait or type I'm meant to return here).
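
      (One workaround that appears to work is erasing the concrete type behind a trait object. A sketch, assuming a recent `tracing-subscriber` with the `json` feature enabled, and not necessarily the intended API:)

          use tracing::Subscriber;

          // Pick an output format at runtime; the concrete builder types
          // differ, so box them behind the common Subscriber trait.
          fn build_subscriber(json: bool) -> Box<dyn Subscriber + Send + Sync + 'static> {
              if json {
                  Box::new(tracing_subscriber::fmt().json().finish())
              } else {
                  Box::new(tracing_subscriber::fmt().finish())
              }
          }

          fn main() {
              tracing::subscriber::set_global_default(build_subscriber(false))
                  .expect("failed to set subscriber");
              tracing::info!("logging initialized");
          }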

      • guitarbill 255 days ago
        Not just you. Best case the tracing ecosystem is weird and the documentation is lacking, but all of the weirdness makes it very performant. Worst case it is weird and badly documented and over-engineered.

        Since rustc uses tracing I really hope it's the first...

    • hu3 255 days ago
      It's not just you. I watched yesterday [1] how Bun's author initially tried to use Rust but wasn't as productive. Then he switched to Zig, and here they are, innovating in the JS/TS landscape.

      [1] https://www.youtube.com/watch?v=eF48Ar-JjT8&t=670s

    • junon 255 days ago
      Yeah, this is a bad example to introduce Rust with, namely because async Rust is something I'd consider advanced.

      However the type explicitness is, IMO, one of its strengths. It lets you build up types that, e.g. in C++, were not a given, with properties about the behavior buried in docs, etc.

      • pjmlp 255 days ago
        It would be, if there wasn't a trend to expose async all over the place in the Rust ecosystem.

        Thus even newbies get to bump into it quite fast.

        • kenmacd 255 days ago
          They run in to async, but do they run in to the internals of how that code is executed?

          The chapter that example is from includes the disclaimer:

          > In this section, we'll cover the underlying structure of how Futures and asynchronous tasks are scheduled. If you're only interested in learning how to write higher-level code that uses existing Future types and aren't interested in the details of how Future types work, you can skip ahead to the async/await chapter.

          • pjmlp 255 days ago
            Naturally, as soon as they get some kind of async related compilation error.
        • junon 254 days ago
          Depends on the use case. Rust has more explicit types, yes. It's kind of a weak argument against the language though, in my opinion.
    • draw_down 255 days ago
      [dead]
  • agubelu 255 days ago
    > In fact, Swift treats enums as more than just types and lets you put methods directly on it

    You can do the exact same thing in Rust:

      enum Coin { Penny, Nickel, Dime, Quarter }

      impl Coin {
          fn value_in_cents(&self) -> u8 {
              match self {
                  Self::Penny => 1,
                  Self::Nickel => 5,
                  Self::Dime => 10,
                  Self::Quarter => 25,
              }
          }
      }
    • tialaramex 255 days ago
      Also they probably have their understanding upside down. What's going on here isn't that either Swift or Rust treats the enum types as "more than just types" but that they are indeed first class types, whereas in C and C++ what you get isn't a type at all, it's just a strange way to spell an integer.

      I don't think Swift has union types, but Rust does and so does C++, and in both languages the unions, just like their main product type (C++ class, Rust struct), can have methods defined on them. The enumeration "types" in the C-like languages are special, and that's because they really aren't proper types at all; they're just integers wearing a funny hat.

      One of Rust's most important post 1.0 types MaybeUninit<T> is actually a union, lots of Rust's important types, in 1.0 and since, are enums.
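
      For instance, a tiny sketch showing that a Rust union takes methods just like a struct does (type and method names invented):

          union Bits {
              f: f32,
              u: u32,
          }

          impl Bits {
              fn float_to_bits(f: f32) -> u32 {
                  // SAFETY: both fields are Copy and have the same size;
                  // reading the other field reinterprets the bytes.
                  unsafe { Bits { f }.u }
              }
          }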

      • secondcoming 255 days ago
        C++'s scoped enums are strict types.
        • tialaramex 255 days ago
          Scoped enums ("enum class") are a very small improvement on the previous enum, the main thing they deliver is that they don't pollute your namespace as badly thanks to the scoping.

          Their notional status as "strict types" means nothing in a language which doesn't really care anyway, that's why memory_order::relaxed < cv_status::timeout is true

          In Rust if you write nonsense like that it doesn't compile, these aren't comparable things. In C++ they're just integers, so of course you can see that relaxed (the integer zero) is less than timeout (the integer one) ...

          They are just integers wearing funny hats.

          • tcbrindle 255 days ago
            > memory_order::relaxed < cv_status::timeout is true

            No it isn't. https://godbolt.org/z/bz7EhMhMT

            • tialaramex 255 days ago
              I knew I should have checked that in Godbolt before posting it. You're correct, the types don't coerce and so this actually doesn't work.
          • consteval 255 days ago
            Oh come on now, everyone who programs in C++ knows operators don't work like that. Operators are tied to types, those enums don't have operator < defined.

            You also can't add them, or subtract them.

            Yes, behind the scenes they're integers, but almost certainly this is the case in Rust too. You can SOMETIMES coerce an enum class instance back to an int if you do unsafe casts (on purpose).

            • tialaramex 255 days ago
              Both those scoped enums do in fact have operator< but, as your sibling comment points out, they insist on type matching and so my example won't actually compile because it uses dissimilar types. The analogous Rust types implement only Eq not PartialOrd, so we can't make this error.

              However whether you can add or subtract them is harder to guess than you seem to have assumed

              https://godbolt.org/z/3vdazcr7E

              Of course the bit pattern representation is the same in Rust. The point isn't the representation or we'd be talking about machine code. The point is the ergonomics.

              These types aren't (shouldn't be) integers, but in C and C++ they are anyway.

        • jb1991 255 days ago
          They are still a long way from the features of enums in Swift.
  • SneakerXZ 255 days ago
    > Swift use value-types by default with copy-on-write semantics.

    This isn't true. Copy-on-write semantics are implemented only for arrays, dictionaries, and strings. Swift value-types are copied immediately.

    https://docs.swift.org/swift-book/documentation/the-swift-pr...

    • naman34 251 days ago
      The documentation uses “copied” for value types that are copy-on-write.

      The documentation has oversimplified a bunch of the mental model and it causes incorrect understanding.

      Swift’s documentation is quite bad TBH. The biggest thing that still needs to be improved a lot.

  • mogoh 255 days ago
    Always, when I read those postings that praise Swift, I wonder how good the developer experience is if you don't use any of the Apple/macOS ecosystem (except Swift, of course). I have not met any Swift developer who is not developing on macOS, usually for macOS. That makes me suspect that non-Mac devs are treated as second-class citizens. I am not only talking about the standard library, but also about the tooling, LSP, libraries, tutorials and so on. I totally believe that Swift is a good language, but I guess it is only good if you are on macOS.

    If you are developing with Swift but not using a Mac at all, I would love to hear how your experience has been.

    • pzo 255 days ago
      And even on macOS you are limited - you pretty much cannot avoid using Xcode. So in practice you cannot just use VSCode or JetBrains IDEs. There was a talk at WWDC 2024 about Apple investing in Swift for embedded systems and some language servers, but I'm not sure how mature that is. Apple doesn't have cross-platform in their DNA.
    • bryancoxwell 255 days ago
      I recently went through the Swift tutorial for building a REST API using Vapor[0]. I used VSCode, compiled and ran on Ubuntu. It’s the only Swift I've ever written, and obviously far from a production app, but I did enjoy it enough that it made me want to continue playing with Swift outside of the Xcode/Mac ecosystem. [0]: https://www.swift.org/getting-started/vapor-web-server/
      • mogoh 255 days ago
        It's not much, but it's something. Thanks
  • lukeh 255 days ago
    I don’t know Rust, but I’m loving Swift for systems programming. Most recent project was an 802.1Q SRP implementation and I couldn’t imagine going back to C. Higher up the stack, I’ve also used it with a custom runner to build the business logic for an embedded Flutter app.

    My only beef is that the binaries are quite large. I’m hoping the new Foundation will improve things, in the interim I’m trying to eliminate my Foundation dependencies.

    • neonsunset 255 days ago
      You can get much smaller native binaries with .NET's NativeAOT nowadays: https://github.com/MichalStrehovsky/sizegame

      Funnily enough, I'm using C# to solve quite similar types of tasks. It's a really pleasant experience.

      • pjmlp 255 days ago
        Native AOT needs to get better tooling ergonomics though, the whole publish process is a bit convoluted versus toolchains that have AOT compilation as their default.

        Not counting having to learn about IL trimming, and the whole AOT compatible libraries.

        • neonsunset 255 days ago
          What kind of issues did you have with the build process? (as in, if you had a specific case, it might be worth submitting an issue or updating documentation)

          It's a very straightforward process - you pass a single flag, maybe specify optimization preference and instruction set target, and it gives you the binary upon 'dotnet publish'ing the project.

          The process of static linking (if you care about this scenario), with other static dependencies written in C/C++/Rust is not too different from other toolchains - you specify `DirectPInvoke` and `NativeLibrary` properties in .csproj, and they are linked into the final product as a part of the build process. You may need to forward linker arguments[0] for the imports referenced by those however, but this is expected regardless of .NET.

          I think it's fair to criticize the additional compatibility effort required for high-level user libraries that rely on un-analyzable reflection, reflection emit or assembly loading, but none of these features usually have any relevance in the domain of systems programming.

          When you write a project from scratch, you never have to think about whether it's native compilation or "JIT+CIL assemblies sandwich" executable, or anything else. It just works.

          For example https://github.com/codr7/sharpl - the author was pretty much learning C# on the go and it needed exactly 0 changes besides adding `PublishAot` property to make it output a native executable.

          [0]: https://github.com/U8String/U8String/blob/main/Examples/Inte...

          • pjmlp 255 days ago
            As you are well aware, I know .NET since it was "partners only beta software", so dealing with "dotnet publish" plus csproj configuration, is still way easier than something like NGEN.

            However this is somehow cumbersome versus doing a plain "aot-lang-compiler source -o binary".

            Which is the experience I think the team should strive for as a goal, especially for newcomers.

            More so when comparing IDE experience of something like Delphi or Swift to keep it in context (press build), versus Visual Studio (besides csproj, create publish properties file, followed by a solution publish, which is hardly documented still).

            • neonsunset 255 days ago
              > However this is somehow cumbersome versus doing a plain "aot-lang-compiler source -o binary".

              But...it works in this exact way?

                dotnet publish -o folder -p:PublishAot=true
              
              Does not need -p argument either, as noted in the previous comment, if `PublishAot` is specified in .csproj, like with any AOT template (e.g. dotnet new console --aot or dotnet new webapiaot).

              On Visual Studio, I have started to recommend to newcomers to avoid its publishing UI which is convoluted and is easy to get side-tracked with. CLI offers much cleaner UX. Not that it matters outside of Windows. But you don't need to create publish file either, just tick the boxes you care about in the modal window.

              Edit: you can't be serious, no one in their sane mind would consider a single extra keyword to get the final product once you're done writing code a learning curve too steep, unless you're in a bad mood and want to make a bad faith argument. Consider what you make the comparison against - C and C++ with their CMake, Ninja, or even raw MSBuild, Swift is barely better too. Rust with Cargo is about the same amount of effort as .NET CLI. Cmon, Pjmlp.

              • pjmlp 255 days ago
                Publish versus build, yet another step, another concept to learn.

                CLI only isn't the answer, if you want better adoption.

          • Measter 255 days ago
            How are build times for AOT these days? Last time I tried it (~2019) it was slower than Rust at building the same project.
            • neonsunset 255 days ago
              If you refer to R2R, then it's more like pre-JIT - it embeds pre-JITed code into the executable that still very much uses JIT. It will also re-JIT R2Rd code if it's deemed hot enough and in need for further optimization.

              This is an older mechanism that works differently to NativeAOT, it's also used by host-installed runtime to improve startup latency, and you can do your own R2R, either full or granular, to further improve this: https://learn.microsoft.com/en-us/dotnet/core/deploying/read... It can slow down the build quite a bit from what I've seen.

              NativeAOT is different - when you use it, ILLink (which previously was the Mono Linker and now has evolved) trims all unreachable and links together all of the remaining CIL bytecode and metadata from the CIL assemblies that the application/library consists of. After that, ILC (IL AOT Compiler) compiles everything (and performs AOT-specific optimizations) into a single static bundle emitted as COFF or PE (or Mach-O?) file containing machine code. Then the toolchain invokes a host-provided linker which produces the final native executable or library, much like it happens with C, C++ or Rust. It even has the OS-specific symbol format, so you can feed it to the same tools that work with native code.

              Technically speaking, .NET's compiler still retains the name "RyuJIT", but in practice it's not JIT-specific and ILC drives the same back-end as JIT compilation at runtime, but with different optimizations and options.

              This is a new feature that was introduced in .NET 7, and has substantially improved further in 8 and upcoming 9. Completely different output aside, NAOT builds take less time than the ones that have R2R from my experience.

              Your mileage may vary depending on how big the application is, how complex linking it requires and how much the native code to compile there is.

              The main difference with Rust is that compilation time whether it's good or bad has much less relevance because for development you can use JIT (both debug and release, i.e. plain 'dotnet run') or even hot-reload which can recompile actively running code without restarting the application with 'dotnet watch' which works surprisingly well.

              • pjmlp 255 days ago
                The last point is exactly the value of having compiler toolchains with JIT/AOT in the box as the standard toolchain, which only a few languages have.

                We get to enjoy both worlds, and picking the best deployment options as per scenario.

                Rust's development experience would be much better if they adopted a similar approach, which, by the way, is common in other ML-inspired languages.

    • ahlCVA 255 days ago
      Fascinating that somebody is using Swift "in production" for that!

      I work on small-ish embedded Linux systems and I've been looking for an alternative to C for a long time. Rust does not fit the bill, both because I don't enjoy the ergonomics of it (but I could get over that were it not for the other issues) and because the binaries are huge, so unless you go for a busybox-style multicall binary (and even then) you'll be wasting a lot of space. Both the standard size reduction techniques and splitting out things into shared objects (the ABI instability is not an issue on an embedded system which always gets compiled as a whole anyway) don't really move the needle compared to what you can achieve using plain C.

      I've been meaning to give Swift a shot for applications like this. From what I've seen I had assumed that it would have less of a tendency to monomorphize everything (like Rust and C++ like to do when you use them idiomatically), leading to less binary size bloat.

      You also mention binary size, but in relation to a library - is it that the absolute cost of including the library in the image is too high or is there a per-binary effect here as well?

      Could you maybe share some of your experiences with Swift on embedded Linux in the context of that project? What is working well, what are the warts? What kind of distribution (like buildroot, Yocto, ...) are you using?

      Sorry for the ton of questions, it's just something that has been on my bucket list for quite a while and it's very exciting to hear that somebody is already doing this.

      • naman34 251 days ago
        Sadly, Swift also produces large binaries unless you use Embedded Swift which is still early.
  • hardwaresofton 255 days ago
    Am I the only one that dislikes the dot syntax for variants?

    Zig and Swift do it and I feel like it makes things harder to read, not easier.

    `.variant` vs `Type::Variant`

    IIRC the syntax is optional (you can include the type name) but it seems obvious that in any sufficiently long or complex code, not having the type name close would be annoying, especially if you didn’t have IDE like capabilities in your editor.

    • Klonoar 255 days ago
      I've said it in other comments on HN over the years, but yes - I agree.

      That `.variant` syntax is annoying as hell to dig through when you don't have an IDE to rely on for jumping around. Rust (or your choice of other more explicit language) is just generally way more clear about what's being used.

      The way I usually settle on describing it is that this feature solved for people writing code, but code is read more than it's written, and thus I don't believe this tradeoff was worthwhile.

      • afdbcreid 254 days ago
        You will probably be sad to hear that allowing the type to be omitted in Rust has been considered, and proposed many times.
        • Klonoar 253 days ago
          I can only hope that the fact it's been proposed many times without being acted on means there's been enough pushback, considering one of the entire points of the language is its explicit nature.

          Do you have a link for an RFC or GitHub issue or anything? Would be curious to see.

    • robjwells 255 days ago
      Fortunately in Rust you can have it both ways, as in `Type::Variant` most of the time and `use Type::Variant; … Variant …` in cases where the type name just causes noise.

      Example in the playground: https://play.rust-lang.org/?version=stable&mode=debug&editio...

      • hardwaresofton 255 days ago
        I’ve never done this but this is a great point. I think I’d avoid doing this though because variants often have really short, reusable/general names (leaning into the overall type name being close)
        • GrantMoyer 255 days ago
          You can put the use statement in the function with the match, or even limit the scope further:

            {
              use MyEnum::*;
              match my_enum {…}
            }
  • pjmlp 255 days ago
    What Rust did great was bringing affine type systems into mainstream culture.

    However outside very specific use cases, where no kind of automatic resource management is allowed, no matter what, like in high integrity computing, or critical kernel code, approaches that mix and match both solutions are much more ergonomic.

    Swift isn't the only one going down this route; we see it as well in D, Chapel, Linear Haskell, OCaml Effects, Mojo, Hylo, Koka, Verona, and probably many other research languages born during the last 5 years.

    Because in the end, programming language ergonomics and expressiveness really matters for wide scale adoption.

    • DasIch 255 days ago
      Rust is more popular than all of these languages (except Swift) combined. So it seems empirically that ergonomics and expressiveness don't matter that much, or these languages don't manage to do significantly better than Rust.
      • pjmlp 255 days ago
        It appears more popular, which isn't the same thing.
        • tialaramex 255 days ago
          If you have better metrics then that's interesting, otherwise it just looks bitter.

          Yeah, maybe chicken nuggets only appear more popular than broccoli. But I think since we don't have any actual evidence to say otherwise we can assume that's because they are in fact more popular than broccoli.

    • naman34 251 days ago
      There are lots of interesting ideas in languages, but most of them fail to go mainstream at all. When comparing new languages at the level of Rust, Swift, Go and maybe Kotlin are the only real competitors.
  • rich_sasha 255 days ago
    My interest in Rust stems from its very good Python interop. There's a very small handful of compiled languages with that property: Rust, C++, Nim (which is itself quite niche)... That's basically it as far as I know. Swift doesn't seem to tick that box.

    I would happily have something 50-100% slower than C++/Rust with good interop, alas there seems to be very little / nothing. Cython for various reasons isn't ideal.

    • pzo 255 days ago
      There used to be Swift for TensorFlow, but Google killed it and then Chris Lattner created Mojo. AFAIR the punchline was that Swift was either too complicated for data scientists, or they just preferred to use the Python they already knew.
    • naman34 251 days ago
      Swift has great C, C++ and Rust interop.

      And despite Swift for Tensorflow being killed, there’s still decent python interop.

      Now, setting up a project for that is probably quite hard because the documentation is non-existent.

  • prmoustache 255 days ago
    Unrelated to the blog post in question, but I don't understand why the author is not using his own certificate if he is using his own domain.

    Using a browser set to auto-redirect to HTTPS by default, all I get is a huge warning, because the host uses a certificate for Svbtle, the service used to host this blog.

    I know some people think that because a blog post is public it can be served without SSL, but I think it is still nice to leave readers the choice of whether or not to disclose what they are reading to their provider and everyone in between, without horrible warnings.

    • naman34 251 days ago
      Good to know. It’s an old blog and I’ll be moving it away soon. This just makes that work more urgent for me.
    • alabhyajindal 255 days ago
      Exactly. Very annoying!
  • lame-lexem 255 days ago
    Rust's `Vec` already allocates its values on the heap, so there is no need for another indirection with `Box`.

    This works:

      enum TreeNode<T> {
          Leaf(T),
          Branch(Vec<TreeNode<T>>),
      }
    
    https://play.rust-lang.org/?version=stable&mode=debug&editio...

    Otherwise, if it were just a tuple of `TreeNode`s, you'd get E0072: https://doc.rust-lang.org/stable/error_codes/E0072.html
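
    As a quick usage sketch, continuing the `TreeNode` above (the values are arbitrary):

      fn main() {
          let tree = TreeNode::Branch(vec![
              TreeNode::Leaf(1),
              TreeNode::Branch(vec![TreeNode::Leaf(2), TreeNode::Leaf(3)]),
          ]);
          // Vec's heap allocation is the indirection that gives the
          // recursive enum a known size.
          if let TreeNode::Branch(children) = &tree {
              println!("root has {} children", children.len()); // 2
          }
      }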

    • nicce 255 days ago
      Box takes less space on the stack, and you can use Box to move the pointer to the Vec's contents around more easily, without mem::take. So if you want to min-max things, you could still use Box.
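
      A small sketch of that size difference (the type names are hypothetical; exact numbers vary by target, but Box is one pointer while Vec is three words):

        use std::mem::size_of;

        enum BoxedTree { Leaf(u32), Branch(Box<Vec<BoxedTree>>) }
        enum VecTree   { Leaf(u32), Branch(Vec<VecTree>) }

        fn main() {
            // Box<_> is a single non-null pointer; Vec<_> carries a
            // pointer, a length, and a capacity.
            println!("BoxedTree: {} bytes", size_of::<BoxedTree>());
            println!("VecTree:   {} bytes", size_of::<VecTree>());
        }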
  • tromp 255 days ago
    > it gives you utilities such as Rc, Arc and Cow to do reference counting and “clone-on-right”

    Not quite right:

    [1] https://en.wikipedia.org/wiki/Copy-on-write

    • steveklabnik 255 days ago
      Cow in Rust does mean “Clone on write,” because Copy is a term of art in Rust (as is Clone).
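
      For illustration, a minimal clone-on-write sketch (the function is hypothetical): the borrow is free in the common case, and cloning happens only on the "write" path.

        use std::borrow::Cow;

        fn ensure_trailing_slash(s: &str) -> Cow<'_, str> {
            if s.ends_with('/') {
                Cow::Borrowed(s)            // common case: no allocation
            } else {
                Cow::Owned(format!("{s}/")) // clone/allocate only on "write"
            }
        }

        fn main() {
            assert!(matches!(ensure_trailing_slash("a/"), Cow::Borrowed(_)));
            assert!(matches!(ensure_trailing_slash("a"), Cow::Owned(_)));
        }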
  • __jonas 255 days ago
    Does Swift make it as easy as Go or Rust to produce a single binary for any target platform? Would it be a good choice for a CLI application?

    It sounds kind of intriguing but I know very little about the language.

    • naman34 251 days ago
      The feature exists, but it’s not so easy because there is a lack of documentation.
  • BiteCode_dev 255 days ago
    Swift won't be able to compete with Rust until it has a great scripting story.

    Rust is currently the de facto choice if you want to make something with JS/Python bindings, or if you want to speed up a dynamic language's bottleneck, because it makes that super easy.

    Swift doesn't even have a good FFI.

    • lukeh 255 days ago
      Swift can import C (and C++) modules directly, and it's fairly straightforward to wrap existing C APIs with callbacks to be usable from structured concurrency. Here's an example I just worked on. [1]

      [1] https://github.com/PADL/NetLinkSwift

      • BiteCode_dev 255 days ago
        This is not what's holding Swift back.

        What's holding Swift back is not having a good story for calling a compiled Swift extension from Python/JS/Ruby/PHP.

        Python's community is huge and very active, Python is everywhere, and it needs compiled extensions. And the tooling that makes them BFFs, like pyo3 and maturin, is great.

        And that's how we got cryptography, pydantic-core, Polars... which motivates even tooling to be written in Rust as a side effect, and is how we got uv and ruff.

        The scripting community is the most active; if you get them on your side, you get access to a huge pool of devs.
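
        For context, a minimal pyo3 sketch (the module and function names are hypothetical; assumes pyo3 0.21+ and a build via maturin):

          use pyo3::prelude::*;

          #[pyfunction]
          fn double(x: i64) -> i64 {
              x * 2
          }

          // `maturin develop` builds this into an importable Python module.
          #[pymodule]
          fn fastmath(m: &Bound<'_, PyModule>) -> PyResult<()> {
              m.add_function(wrap_pyfunction!(double, m)?)?;
              Ok(())
          }

        After which, from Python: `import fastmath; fastmath.double(21)`.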

        • w10-1 253 days ago
          I'm not sure scripting-language interop is holding back Swift, but ...

          > call a swift compiled extension from Python/JS/Ruby/PHP

          The story there is excellent.

          Here's a Swift function callable from C

              // @_cdecl (the underscored, unofficial attribute) exposes a
              // top-level Swift function under a C symbol name.
              @_cdecl("myfunc")
              public func f() { ... }
          
          C/C++ types are mostly handled automatically (including sharing memory management), but it can be tricky.

          To build a library to load and call from any language:

              swiftc mine.swift -emit-library -o libmine.dylib 
          
          E.g., calling from Java 22's FFI:

          https://foojay.io/today/java-panama-polyglot-swift-part-2/

          But most people instead use a slower JSON-over-the-network interface, because (sadly) JSON is the lingua franca for scripting and Swift's Codable makes it trivial to map the JSON to a Swift type.

        • pjmlp 254 days ago
          What Python needs is to catch up with Self, Smalltalk, and Common Lisp, and either embrace PyPy or finally get a mature JIT into CPython.

          Thankfully, Facebook and Microsoft are making it happen.

          • BiteCode_dev 254 days ago
            Yes, but that's a completely unrelated matter: even with a JIT you'll want compiled extensions, since you can't beat hand-crafted SIMD for the scientific stack or machine learning.
            • pjmlp 253 days ago
              Ever heard of intrinsics?

              Maybe spend some time reading about how other JIT languages expose SIMD to developers without requiring folks to go down to C.

              Which, incidentally, is what all those Python DSLs for GPGPU APIs also do, with the caveat of not being that useful for general-purpose programming outside machine-learning algorithms.

    • pjmlp 255 days ago
      Try to make Rust interop with C++, C and Objective-C as easy as Swift does it.
      • sgt 255 days ago
        This is a big deal. In reality, most of the code that powers your systems (phones, tablets, any computers) is built using C and C++, regardless what OS you use.
        • galangalalgol 255 days ago
          Agreed, and Rust-C interop is as good as any other I've used. The C++ interop was clunky but is getting much better with crates like zngur. That's what was holding back its use in Chrome for a while. Oddly, its Python interop is way better, with pyo3 and maturin making it super easy.
      • Klonoar 255 days ago
        C++ is (IME) an order of magnitude more annoying from Rust than C or Objective-C. I've dealt with interop of the latter two on a number of projects and frankly found them to be fine.

        (Though I wouldn't fault anyone for making an argument that Swift is still more ergonomic here)
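
        For comparison, plain C interop from Rust looks roughly like this (a hand-written binding to a libc function; larger projects usually generate such bindings with bindgen):

          use std::os::raw::c_int;

          extern "C" {
              fn abs(input: c_int) -> c_int; // declared in C's <stdlib.h>
          }

          fn main() {
              // Foreign calls are unsafe: the compiler can't verify the C side.
              let x = unsafe { abs(-5) };
              println!("{x}"); // 5
          }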

    • naman34 251 days ago
      SwiftWasm exists
  • dist1ll 255 days ago
    The tree example is quite contrived. Most Rust programmers would use indices for indirection, pointing into flat buffers.
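
    Roughly, that index-based style looks like this (the names are hypothetical): nodes live in one flat Vec and children are referenced by index rather than by owned pointer.

      struct Tree<T> {
          nodes: Vec<Node<T>>,
      }

      enum Node<T> {
          Leaf(T),
          Branch(Vec<usize>), // indices into Tree::nodes, not Box pointers
      }

      impl<T> Tree<T> {
          fn push(&mut self, node: Node<T>) -> usize {
              self.nodes.push(node);
              self.nodes.len() - 1 // the new node's index
          }
      }

      fn main() {
          let mut tree = Tree { nodes: Vec::new() };
          let a = tree.push(Node::Leaf(1));
          let b = tree.push(Node::Leaf(2));
          let _root = tree.push(Node::Branch(vec![a, b]));
      }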
    • MereInterest 255 days ago
      It’s also incorrect. There’s no need to use both Vec and Box, as either is sufficient to provide indirection.
  • high_priest 255 days ago
    Yes! And C# is just a more convenient C.
    • dbfa 255 days ago
      JavaScript is just a more convenient x86-assembly. :P
    • neonsunset 255 days ago
      Closer to convenient C++ probably :)
      • pjmlp 255 days ago
        Definitely. Windows development would be so much more fun had Native AOT been there since day one, and had WinDev not been so siloed into their C++ toys.
  • StewardMcOy 255 days ago
    As someone who has worked on large projects in both Swift and Rust, I have to disagree with the author about error handling. It's not perfect, but Rust's use of Result and syntactic sugar is probably the best solution to error handling I've ever used. try/catch (or in Swift's case do/catch) is more disruptive to the program flow. And prior to Swift 6, throws in Swift were untyped.

    (Yes, there are arguments that untyped throws are better for library code because they leave you more room to add errors as you become aware of the need for them without breaking the contract with your users. But really, client code is going to be written against the errors you're throwing anyway (i.e. Hyrum's Law). I much prefer my errors to be typed, so that if an error is added, the library version has to be bumped to indicate the new incompatibility with old code. And for my own, non-library code, typed errors make sure I'm handling all the error cases.)
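
    To make the comparison concrete, a small sketch of Result plus the ? sugar (the file name and function are hypothetical); errors propagate upward without disrupting the happy path, and the signature documents the failure mode:

      use std::fs;

      fn parse_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
          let text = fs::read_to_string(path)?;   // io::Error propagates via ?
          let port = text.trim().parse::<u16>()?; // so does ParseIntError
          Ok(port)
      }

      fn main() {
          match parse_port("port.txt") {
              Ok(port) => println!("listening on {port}"),
              Err(e) => eprintln!("bad config: {e}"),
          }
      }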

  • singularity2001 255 days ago
    'Swift is the better rust' inspired by the why-ruby-is-an-acceptable-lisp debate: http://www.randomhacks.net/2005/12/03/why-ruby-is-an-accepta...
  • lostmsu 255 days ago
    The article is outright wrong at times (like on the need for Box, or the special status of enums), and it misses the 80% reason to use Rust: it helps ensure the correctness of multi-threaded synchronization at compile time.

    So no, Swift is not a "more convenient Rust". It is not Rust at all. It is more of a variant of C#/Java. And I haven't seen any indication that it is an improvement over either. The only reason to use Swift is having to deal with Apple.
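
    A minimal sketch of that compile-time guarantee: moving a non-thread-safe type into another thread is rejected by the compiler rather than caught at runtime.

      use std::rc::Rc;
      use std::sync::Arc;
      use std::thread;

      fn main() {
          let rc = Rc::new(42);
          // Rc's reference count isn't atomic, so Rc is not Send.
          // Uncommenting the next line is a compile error, not a data race:
          // thread::spawn(move || println!("{rc}"));

          let arc = Arc::new(42); // Arc is Send: its count is atomic
          let handle = thread::spawn(move || println!("{arc}"));
          handle.join().unwrap();
          drop(rc);
      }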

    • Terretta 255 days ago
      > missing 80% reason to use Rust: it helps to ensure correctness of multi threaded synchronization at compile time

      The article may have overlooked this, since it's been available in recent 5.x releases and becomes the default in 6:

      Swift 6 brings complete concurrency enabled by default

      By far the biggest change is that complete concurrency checking is enabled by default. Unless you're very fortunate indeed, there's a very good chance your code will need some adjustment – it's no surprise the Swift team made it optional in earlier versions to give folks time to evaluate what's changing.

      Swift 6 improves concurrency checking further, and the Swift team say it "removes many false-positive data-race warnings" that were present in 5.10. It also introduces several targeted changes that will do wonders to make concurrency easier to adopt – if you tried with 5.10 and found things just too gnarly to figure out, hopefully some of the changes in Swift 6 will help.

      Easily the biggest is SE-0414, which defines isolation regions that allow the compiler to conclusively prove different parts of your code can run concurrently.

      https://www.hackingwithswift.com/articles/269/whats-new-in-s...

      • lostmsu 254 days ago
        Is there a better description of this change? How does it work internally? Is thread safety guaranteed, or is it a best-effort warning?

        How does it work with immutable objects?

  • jb1991 255 days ago
    > But when you need extra speed you can opt into an ownership system and “move” values to avoid copying.

    I've been using Swift a long time and don't quite get what this is referring to.

    Also:

    > Swift too gives you complete type-safety without a garbage collector.

    Type-safety and memory-safety are two entirely different things; this is an odd sentence.

    • nicce 255 days ago
      > I've been using Swift a long time and don't quite get what this is referring to.

      The latest Swift release (June 11th) added an ownership system.

      • iknowstuff 255 days ago
        Seems odd for an article comparing Rust and Swift not to delve into the new Swift ownership system...
    • naman34 251 days ago
      That second thing is a typo.
  • andrewstuart 255 days ago
    If only it wasn't so heavily tied to Apple.
  • neonsunset 255 days ago
    And C# is a cross-platform and faster Swift :)
    • pjmlp 254 days ago
      Unfortunately, without any mobile OS to call its own, and if only WinDev were as fond of adopting .NET as Apple keeps embracing and pushing Swift, or Google with Java/Kotlin on Android.

      With the recent C# focus given to WinUI and WinAppSDK, despite the C++/WinRT underpinnings, maybe caused by the failure to deliver VS tooling for C++/WinRT after almost a decade, or by ongoing cybersecurity considerations, maybe WinDev's attitude will change, although I see them more eager to adopt Rust than C# across the Microsoft-sphere blogs and technical notes.

    • naman34 251 days ago
      C# is a faster Java.
  • tkz1312 255 days ago
    I mean, basically everything is a more convenient Rust...
  • dangoodmanUT 255 days ago
    Clone-on-write...
  • mangeshbankar21 255 days ago
    [flagged]