The Modern CLI Renaissance

(gabevenberg.com)

130 points | by theshrike79 127 days ago

15 comments

  • llm_trw 126 days ago
    To summarize: The people who ruined native GUIs moved to HTML pages. After ruining HTML pages they are now moving to terminals.

    In this very thread we're seeing people say "Well sure, but why not add ...".

    The reason why the CLI is good is because it _can't_ do most things people want it to do. Which means you have to think when making an application.

    Please, if you're one of the people 'modernizing' the terminal, stop and think about why the terminal is valuable. Don't make it into another in the long line of UIs which have been destroyed by modern developers.

    • tazu 126 days ago
      This is why Textualize[1] concerns me. I've tried a few of the applications using their framework, and they have noticeable keypress latency. I didn't think it was possible to make a bloated TUI, but they have somehow succeeded. This might just be a Python thing (the GIL, perhaps), since VisiData[2] has the same problem. It's physically jarring to use VisiData because it's such a cool idea, but it takes 500ms to register key presses in my 120 Hz terminal emulator.

      It's reminiscent of scroll-jacking, excessive animation, and other web GUI bloat, just translated to TUIs.

      [1]: https://www.textualize.io/

      [2]: https://www.visidata.org/

      • saulpw 125 days ago
        (Author of VisiData here)

        I don't know what the issue is on your computer, but it definitely doesn't take my computer 500ms to register a key press in VisiData in the general case. That would be an issue I'd have to take seriously. I've spent way too much time optimizing VisiData's startup time to be under 300ms to let something like that slide. It's a point of pride for me that VisiData doesn't incorporate that "web GUI bloat" as you mention.

        If you're willing to help out, please contact me so we can file an informative issue and see if we can figure out what's going on in your setup. I have a hunch it's generalized computer/OS bloat that we (VisiData) can't do anything about, but at least we can try to understand it so that hand-crafted software like VisiData isn't unnecessarily maligned.

    • contrarian1234 126 days ago
      I think it's a general issue of developers with tricked-out machines and fast internet connections developing software for everyone else. The reason the CLI is fast is not because it was carefully engineered that way, but because most of the software was developed when computers were slower than the chip you'd find in a washing machine these days.

      Example: Flatpak. Last I tried, tab-completing a package name to install makes a network request for the application list. This of course means you don't have to maintain a cache like `apt` does, and it certainly makes things easier. I'm sure this makes sense if you're on your MacBook Pro in a cafe in Cupertino. But it was nearly unusable (the whole CLI locks up and hangs) from my Core m3 computer tethered to a shitty network in a 2nd-tier city in China.

      • pxc 124 days ago
        > I think it's a general issue of developers with tricked-out machines and fast internet connections developing software for everyone else. The reason the CLI is fast is not because it was carefully engineered that way, but because most of the software was developed when computers were slower than the chip you'd find in a washing machine these days.

        I think in the context of the ongoing 'CLI renaissance', what this misses is that many of the most recognizable tools which emerged from (or perhaps kicked off) the contemporary 'CLI renaissance' are very carefully engineered with performance in mind. Ripgrep is, to me, the most vital of all the 'next-gen' CLI programs. GNU grep is already incredibly fast, and ripgrep manages to be faster, to take common-sense shortcuts, and to add a few extra conveniences (like --replace, so you don't always need to pipe to sed or awk when you want to do a very simple transformation) without going nuts and losing focus. Its author, burntsushi, has written here on HN about its implementation as well as GNU grep's and other regex implementations, and it's clear that the predecessors have been studied with great care and respect.
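
        As a small illustration of that --replace convenience (per ripgrep's docs, the replacement only changes what's printed, never the files on disk; the paths here are stand-ins):

            # print matches with the replacement applied; nothing is written back
            rg 'behaviour' --replace 'behavior' src/

            # capture groups are available in the replacement as $1, $2, ...
            rg '(\w+)@example\.com' --replace '$1@example.org' contacts.txt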

        I love many newer CLI tools, and this pattern holds for all of them. Consider, for instance, `jq`, which has become a 21st-century complement to vital staples like awk. This is even the case for many tools whose purposes are largely 'cosmetic', like `bat`, which more or less exists to be a more performant replacement for older colorizers like `ccat`. As cringey as the buzzword 'blazing fast' has become, Rust CLI tools are typically in tune with the Rust community's general cultural emphasis on speed. And many Go CLI tools today are rewrites or clones of Python or Ruby tools, written expressly to achieve improved performance.
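
        To make the jq comparison concrete, here's the kind of one-liner that occupies the niche awk holds for columnar text, but over JSON (a self-contained toy example):

            # select and project over structured records, awk-style
            echo '[{"name":"alice","age":31},{"name":"bob","age":19}]' \
                | jq -r '.[] | select(.age > 21) | .name'
            # prints: alice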

        This also seems to hold for next-gen CLI environments: one of the main advancements most next-gen shells (e.g., Elvish, Nushell, Oil) offer over PowerShell (which introduced structured pipelines to the shell world) is performance!

        This doesn't mean that every single new CLI tool is well thought-out, or (if it has direct antecedents) better than its predecessors. But I don't think the idea that new CLI tools are generally written without care for performance or consideration of the past is borne out when you look at what's out there.

    • wiseowise 126 days ago
      > The reason why the CLI is good is because it _can't_ do most things people want it to do.

      -1. You’ve lost me here. The reason why the CLI is good is that it solves problems that even modern GUIs can’t, not because of some artificial limitations or lack of convenient defaults.

      > Which means you have to think when making an application.

      “Think” and “remember obscure syntax that greyheads keep on a paper under their mattress” are two different things. I agree with the author that sensible defaults are severely lacking in some of the tools.

      • ulbu 126 days ago
        one great property of the terminal is that it’s not a continuous canvas. it’s a grid of discrete cells, just as it is natural for computers to be. instead of laying interactive objects out in virtual space, you’re pushed to think in discrete states and discrete actions. so I say the cli is, indeed, good in part due to its limitations.

        edit: i’ll add that continuousness of guis is actually its limitation. a terminal interface is indexable, a gui is not. so what looks like freedom is actually a different structure. a gui is more general, as real numbers are more general than whole numbers. but to get to whole numbers, you have to re-constrain real numbers. this is left up to the developer to create a constraining framework. and it’s a path of less resistance to not do this. most data we deal with is discrete. so a continuous representation of it is a mismatch of structures. terminals force you to stay in that same discrete space. so your solutions are from the getgo closer in fit to your data.

        it’s important to match up the structure of your interactive surface with the structure of your domain. it’s a common principle we talk about too little.

        • cdrini 125 days ago
          I think that's it, although I don't think I'd describe it as a grid, but as a 1d list. The core concept of CLIs is linear. That's in part why streaming works so effectively in CLIs: the linear medium clicks really well with the idea of streaming data. And the fact that a lot of UNIX commands actually operate on a graph of sorts under the hood is where you can see some of the limitations of this linear approach (e.g. trying to pipe one stream to multiple paths, with each path having its own processing, requires a branch, which is inherently a little difficult to represent in a linear system).
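
          (For what it's worth, bash can fake that branch with tee plus process substitution, though the contortion rather proves the point; app.log is a stand-in:)

              # one linear stream, forked into two processing paths
              cat app.log | tee >(grep ERROR > errors.log) \
                                >(grep WARN > warnings.log) > /dev/null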

          You can lay things out in a grid, but the same can be said of GUIs. You can consider each character a cell in a grid, but I think that's so small that at that point it's not conceptually sufficiently different from being continuous. The computational paradigm that's a grid at its core is spreadsheets.

      • oulipo 126 days ago
        Exactly, they conflate the fact that the CLI tool is just a "programmable API view" of an app, and that the GUI is more of an "interaction view", with the fact that it can or cannot do some stuff.

        Now with AI and better design, nothing in theory would prevent us from "scripting UI apps" like we do in the terminal; you could say

        "take a screenshot of my webpage with Chrome | set it in isometric with nice shadow using Photoshop | send it on my iPhone Twitter app"

        and you would get the same pipeline

        of course "UI apps" just have too many input / output possibilities, and you wouldn't want to have "--flags" everywhere for every possibility

        But that's where "natural language as a universal API" applies, or perhaps just a better design for interaction with UI apps

        • tazu 126 days ago
          > "take a screenshot of my webpage with Chrome | set it in isometric with nice shadow using Photoshop | send it on my iPhone Twitter app"

          This is basically Apple Shortcuts. It would be nice to have a CLI interface for Shortcut "actions".

      • pjmlp 126 days ago
        Yes they can; since Xerox PARC and Genera, it's been called a REPL and scripting.
    • poincaredisk 126 days ago
      >The people who ruined native GUIs moved to HTML pages. After ruining HTML pages they are now moving to terminals.

      I know this is (probably) just a figure of speech, but I really don't like this way of writing. It has strong "us vs. them" vibes. There is no team of people that has spent the last 30 years actively ruining GUIs, then HTML, and now terminals. There are just people who have different tastes than you and I. I find that framing more productive.

    • kvark 126 days ago
      Having a standard CLI is definitely helpful. But does it have to be so hard to use? At what point will we acknowledge that we are being held hostage by some random ideas put into ancient software, which just happened to survive?

      I don't think calling for a stop of experimentation is the way. Nobody is going to take away your vi and bash any time soon.

      • immibis 126 days ago
        We shouldn't be held hostage, but the evolution of interactive CLIs is GUIs, and the evolution of batch CLIs is scripting languages, and we don't need to render the CLI concept meaningless by integrating both. I've seen a terminal emulator that supported pixel graphics - what's the point? Just make a GUI using your favorite framebuffer library!

        A TUI is not a CLI, either.

    • Zababa 126 days ago
      I don't think this is what the article says, and I don't think terminals are getting "ruined". The trend of TUIs is a bit worrying (usually you get a better-looking but worse GUI), but the tools are great. fzf is great, fd is great, rg is great. Some TUIs are great too, lazygit and lazydocker being good examples.

      > The reason why the CLI is good is because it _can't_ do most things people want it to do. Which means you have to think when making an application.

      I don't think this is true at all. If I had to say why the CLI is good, I would mention:

      - speed

      - composability, as in I can grep for something and then do other operations

      - ease of development, it's very easy to make a small tool for my usage

    • hnlmorg 126 days ago
      As someone who is working on modernizing the command line, I think it is entirely possible to strike a balance.

      - the command line is just byte streams

      Shells like Elvish, Nushell, PowerShell and my own shell, Murex, support typed pipelines, and that brings a great number of enhancements, like native support for JSON, CSV and other data formats. This means you can have a unified syntax for handling different file formats rather than remembering a dozen different tools and their specific idiosyncrasies (e.g. `jq`, `awk`, etc.).
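
      For example, in Nushell (to pick one of the shells just named; people.csv and people.json are hypothetical files), `open` parses a file into structured data based on its extension, so the same query verbs apply regardless of format:

          open people.csv  | where age > 30 | sort-by name   # CSV parsed into a table
          open people.json | where age > 30 | sort-by name   # the identical query over JSON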

      - readline is dated

      We can do better than the typical shell interface. The popularity of tools like `fzf` and `starship` shows that people do actually find modern TUIs useful.

      - rendered text is static

      Let's say you view a JSON file in Firefox: you can expand and collapse specific branches of that JSON file. I'd love to be able to do that with JSON in the terminal. And not just JSON; tools like `tree` could benefit from that too.

      Collapsible trees are a feature I'm working on implementing in my terminal emulator, and they're completely optional, i.e. by default the tree renders the same way any other terminal would render that clump of text, except you can then optionally collapse branches just like you can with code in an IDE or comments on HN.

      ---

      I love the command line. I've been using it for 30+ years and written so many tools for fellow CLI enthusiasts. But even I think that in an ideal world we would move away from grid-based virtual teletype interfaces communicating via raw bytes and in-band control codes, and instead switch to something more robust. But that's never going to happen. Heck, even things like job control (i.e. when you hit ^Z) are non-trivial to implement at the shell level. It requires multiple signals (each originating from a different sender) and 3 different hierarchies of process ID registration. Frankly, it's amazing any of this stuff still works. And that's before we touch on the plethora of competing standards to test which client (i.e. terminal emulator) is connected to that "virtual typewriter".

      • aragilar 126 days ago
        But how many of these new tools are robust when there's high latency? The advantage of good old POSIX shell is that it works on cheap routers, the latest HPC systems, the system next to me and the system on the literal other side of the world, and everything in between.
        • maccard 126 days ago
          Being realistic - how often is that a problem? For some people working in specific domains, yes. But why are a large number of developers who _don't_ have those restrictions held back by ensuring that your terminal works properly over 9600 baud?
          • quotemstr 126 days ago
            > Being realistic - how often is that a problem?

            I have 800ms RTTs on airplanes. Character-by-character editing is painful. Old-fashioned line-buffering makes the experience much better. It'd be nice not to give up this capability.

            • maccard 126 days ago
              How much time do you spend coding on airplanes on remote systems?
              • RhysU 125 days ago
                As a thought experiment: send your next email 1 character at a time by regular post.

                One doesn't have to do something often for it to be an awful experience. We choose tools, in part, because of the worst case experience not only the average case.

                • maccard 125 days ago
                  We shouldn’t optimise for the least frequent use case at the expense of the most common use case. Reversing a car is awkward, but we don’t reverse the steering wheel so that you get to turn the right direction while reversing.
                  • RhysU 124 days ago
                    Correcting reversed steering only while backing up is entirely possible. Drive-by-wire allows it.

                    Car designers realize the user base knows the older UI and uses the older UI without issue. The "install base" of existing user skills is a compelling reason to not change.

          • aragilar 126 days ago
            Uh, I live in Australia, it's always a problem ;)

            More seriously, unless you're doing stuff locally (in which case, go ham doing cool things on your system), the tools you use need to work for others, with their requirements and restrictions (some of which can be changed, but others which are inherent).

            • maccard 126 days ago
              The problem is that we cater to the absolute lowest common denominator and completely ignore the vast, vast majority of people who don't require that feature. Of course there's an xkcd for it [0]. I haven't owned a device in the last decade with a screen resolution less than 1920x1080, and most devices are significantly higher these days. My primary work monitor is 27 inches with a 144 Hz adaptive refresh rate - these are available at the bottom end of the scale, and are widely available at the mid level of the market.

              And yet, I’m forced to use tools that adhere to standards from when latency to disk was higher than my latency to a remote server these days, that can’t handle resizing, that can’t reliably handle Unicode input. Imagine if I gave you a car designed in 2024 that had a hand crank to start, but you could configure it to use a battery if you wanted to. That’s how I feel about the restrictions of terminals, shells and TUIs these days.

              [0] https://xkcd.com/1172/

              • camgunz 125 days ago
                The strength of these tools is that they are the lowest common denominator. I don't need to worry about `ls` not fitting in an 80 character terminal because its devs "haven't owned a device in the last decade with a screen resolution less than 1920x1080". I don't need to worry about `find` not working because it can't resolve DNS. I know when I SSH into my router, my raspberry pi running pi hole or emulation, my laptop, and my server that my `#!/bin/sh` script works exactly the same.

                It was so, so hard to get here. Imagine chaos so maddening that autotools was somehow an improvement.

                > The problem is that we cater to the absolute lowest common denominator and completely ignore the vast, vast majority of people who don't require that feature.

                Not for nothing, but for most people in the world computers and internet access are an unaffordable luxury. I'm typing this on a machine that cost me $3,700, and in some ways I'm sympathetic to what you're saying. But average world GDP per capita is something like $12,500. Electricity isn't free. Internet access isn't free. Before we start making arguments about catering to the 1% of people fortunate enough to use the fastest machines and networks, we should consider who our actions may close the door on.

                • maccard 125 days ago
                  > The strength of these tools is that they are the lowest common denominator.

                  I disagree - the strength is that they're standardised. The weakness is that those standards are ancient.

                  > Not for nothing, but for most people in the world computers and internet access are an unaffordable luxury. I

                  Are we really doing "there are starving children in Africa"?

                  There's a vast, vast middle ground between a western developer with a stack of multi-thousand-dollar devices and low-latency ssh access, and someone in a rural village in Kenya using a 2G SIM card on a 25-year-old OS. The line shouldn't be "you need an ARM MacBook to run a terminal", but maybe, just maybe, we could realise that pretty much any device it's been possible to buy and run a shell on in the last 20 years is not restricted to being driven by control codes, and that I'd wager pretty much every device actually using a terminal emulator sold in the last 18 years (I'm going to draw the line at 2006, when the Core 2 Duo appeared) has had an amount of hardware that was just not considered 35 years ago when these interfaces were designed.

                  • camgunz 125 days ago
                    > I disagree - the strength is that they're standardised. The weakness is that those standards are ancient.

                    Which standards do you have a problem with? Handling terminal escape codes is pretty easy; dozens of terminal emulators do it great. Are you saying it should be easier to build a TUI? It's already very easy: just use a library like Textualize or Bubbletea or rview (there are also dozens of these). I'm not saying every TUI app performs great, but maybe you remember an article that made it to the HN front page about how centering in web apps is impossible? Or how web apps can't both confine text to a reasonable center column of text and take full advantage of all the screen space when a browser window is fully maximized on a wide monitor? TUIs don't have a monopoly on layout weirdness is all I'm saying here.

                    > Are we really doing "there's starving children in africa"?

                    Feel free to correct me here, but it sounds like your point is that we should move the baseline from what was prevalent in the 70s to what was prevalent ~2006. I'd love to hear some examples of what you think that would enable beyond what we currently have, because thinking about it, I'm not coming up with too many that wouldn't really irritate me. Maybe tab completion could query external services because we have pretty fast networks and CDNs now, but I don't always have internet, and I'm not thrilled by the idea of my ISP knowing whenever I try to tab complete a Docker image or whatever. Maybe we could cache it, but I'm not wild about every tool I use keeping its own cache or history database. Maybe we could have big beautiful TUIs, in fact we already do, but I prefer to have a lot of splits in my terminal and therefore like it when things work fine at 80 columns.

                    So I can't think of significant improvements, but on the other hand I can easily envision this being a bigger barrier to expanding computer/internet use to more humans. Or to be maybe too frank about it, I'm completely fine with you being annoyed by yet another Unicode or layout bug if 1,000 more people--let's just say in rural Washington--get the chance to be charmed by Linux.

                    • maccard 125 days ago
                      > Handling terminal escape codes is pretty easy; dozens of terminal emulators do it great

                      And yet some of the most common ones fail - here’s a thread about VSCode from this morning.

                      https://x.com/thdxr/status/1833727037074227603?s=46

                      > just use a library like Textualize or Bubbletea or rview (there are also dozens of these)

                      I’m more of a user of these TUI apps, so I don’t have control over what framework they use.

                      > Feel free to correct me here, but it sounds like your point is that we should move the baseline from what was prevalent in the 70s to what was prevalent ~2006.

                      We should move the baseline to represent how computers are used today. That might be 2004 or 2008, I don’t really care. One really perfect example of crazy behaviour that is still widespread is “tools that dump binary data to terminal emulators that are parsed as escape codes and cause the emulator to hang” - there are dozens of these foot guns, and removing these surprise behaviours would be far more likely to help those 1000 people in rural Washington than letting them run a terminal on a device that is possibly from before they were born.

                      > I'd love to hear some examples of what you think that would enable beyond what we currently have

                      I think the fact that we serialize everything to text to allow interoperability between tools is insane in this day and age. We spend so much computing time on this. Powershell has it right. The fact that in my hands I have a device that is many orders of magnitude faster than the device I first wrote code on (and this phone is probably an order of magnitude weaker than my primary workstation) and we’re still waiting for basic operations on files because of APIs and programs that were designed in the 80s when we needed to page text files

                      > I'm completely fine with you being annoyed by yet another Unicode or layout bug if 1,000 more people--let's just say in rural Washington--get the chance to be charmed by Linux.

                      One of the earliest memories I have of programming is having a computer say hello to me, and printing ASCII art. I'd guess Gen Z is far more familiar with Unicode than with ASCII art, and there are millions of people out there with non-ASCII characters in their names - shouldn't they be able to have the same access that we do, or are we saying that only poor Americans are entitled to have that magic?

                      • camgunz 125 days ago
                        > And yet some of the most common ones fail - here’s a thread about VSCode from this morning.

                        Well, not being an X user I can't see anything past "the vscode terminal is so poorly implemented it cannot deal with the ansi i am throwing at it", but as a Vim user you won't catch me defending VSCode. I will say if you're looking for perfect classes of applications you won't find any, i.e. if your standard for implementing things is that no implementation of a thing can have a bug, you'll never implement anything.

                        >> just use a library like Textualize or Bubbletea or rview (there are also dozens of these)

                        > I’m more of a user of these TUI apps, so I don’t have control over what framework they use.

                        I'm just trying to get at the outlines of your argument. If this wasn't what you meant, OK then.

                        > One really perfect example of crazy behaviour that is still widespread is “tools that dump binary data to terminal emulators that are parsed as escape codes and cause the emulator to hang”

                        This shouldn't do anything that running `reset` can't fix. If it does, that emulator has a bug. But further, that's not really a standard or anything to do with ancient computing. I can easily send you to a page that locks up your computer with bonkers JavaScript (it's called realclearpolitics lol). Terminal escape codes don't have a monopoly on locking up your apps.

                        > removing these surprise behaviours would be far more likely to help those 1000 people in rural Washington than letting them run a terminal on a device that is possibly from before they were born

                        My premise is that these people don't have computers. Also using a terminal on a device from before I was born sounds awesome (you can imagine what my YouTube recs look like haha)

                        > I think the fact that we serialize everything to text to allow interoperability between tools is insane in this day and age.

                        I think about this all the time, but primarily in the realm of JSON vs. msgpack. There's something very cool about being able to interpret a data stream just by looking at it (more or less). But it's also wildly inefficient, 99.999999999% of the time only computers are ever looking at it, and in the CLI case the general lack of structure or standards leads to annoying extra work and pretty evil bugs.

                        > One of the earliest memories I have of programming is having a computer say hello to me, and printing ascii art. I’d guess GenZ is far more familiar with Unicode than with ascii art, and there are millions of people out there with non-ascii characters in their names - shouldn’t they be able to have the same access that we do, or are we saying g that only poor Americans are entitled to have that magic?

                        There's no way I can dig it up now, but I remember reading some article about everything you'd have to go through to assuredly (more or less) print Unicode characters to a screen in C, like you're dealing with wchar_t, defining special things on Windows, making sure you have a supporting font, blah blah.

                        In fairness though, this is pretty easy in modern languages like Python or Go, plus I don't know how common it is for terminals to not render Unicode (if you're using a non-Unicode-aware terminal these days I'd be amazed, I mean how will npm display emojis??).

                        I don't know if Unicode is a good example for us to work with though. It is more computationally taxing so it works for your end, but it also extends computing to way more people so it works for my end too. A win-win doesn't really exercise the tension you and I are getting at.

                        • hnlmorg 124 days ago
                          Your last point is the key thing imo. The difference between features and bloat is simply whether people need them.

                          However we won’t always agree on what we “need”.

                • SilverRubicon 125 days ago
                  > But average world GDP per capita is something like $12,500. Electricity isn't free. Internet access isn't free.

                  These people are big users of the command line?

                  • camgunz 125 days ago
                    If you read on, you'll find that my point is that if even the most basic apps (CLIs and TUIs) have hefty system requirements, we stand little chance expanding the use of computers to the average person. I'm not sure what your point is though.
        • hnlmorg 126 days ago
          The biggest problem with the latency of these new tools isn’t the tools themselves but rather the huge amount of back-and-forth chatter that happens with SSH.

          If Bash works then these should work too. And if Bash doesn’t work then you’ll need something that supports local echo like Mosh instead of SSH.

          • aragilar 126 days ago
            I never really got mosh working nicely with the VPNs and ssh jumphosts I needed to use.

            I would say vim is excellent over high latency connections, as you can slow down your typing and plan movements and queue up commands. Readline's ability to pop open an editor makes running complex shell commands so much nicer.

      • quotemstr 126 days ago
        > - readline is dated

        But at least it was --- pre-GPLv3, pre-prompt_toolkit, etc. --- a standard. It would be nice to again have a single de-facto standard line editing system that I could

        1) customize once, for all my programs, and

        2) compose with other programs (e.g. shells embedding in other programs).

        It'd also be nice for it to be friendly to high-latency connections.
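
        (Point 1 still partly works today: any program linked against GNU readline reads ~/.inputrc, so a single file configures bash, gdb, psql, and friends at once:)

            # ~/.inputrc -- honored by everything that links GNU readline
            set editing-mode vi
            set completion-ignore-case on
            # make the arrow keys search history for the prefix already typed
            "\e[A": history-search-backward
            "\e[B": history-search-forward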

        But alas, we're heading towards a world in which the terminal is just a GUI with Minecraft pixels.

        • hnlmorg 126 days ago
          > But alas, we're heading towards a world in which the terminal is just a GUI with Minecraft pixels

          This has been the case for decades already. Some TUIs use ncurses, some don’t. Some use 7-bit escape sequences for switching to drawing characters, others use 8-bit control codes. Neither fish nor (and correct me if I’m wrong on this one) zsh uses readline.

          There never was a standard way. Some de facto standards exist, but even these were ignored as often as they were used. And the reason for this is that there never was a standard VT in the hardware days (DEC, Tektronix, etc. all did things slightly differently), let alone in the xterm, VTE, et al. days. And all of these different terminals have been driven by different operating systems. For example, ‘ps’ on macOS uses different escape sequences from GNU ‘ps’, yet they mostly achieve the same output. So imagine how inconsistent things were when you had fundamentally different time-sharing systems.

        • nemoniac 126 days ago
          You can use rlwrap to wrap many command-line commands with readline. It is my single de-facto standard line editing system.
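
          For example (the wrapped programs are just illustrations):

              rlwrap sqlite3 mydb.sqlite   # adds history and line editing to the sqlite3 shell
              rlwrap nc localhost 6379     # even a raw socket session gets line editing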
      • Zababa 126 days ago
        > rendered text is static

        The way I see it is that you're making it dynamic by entering other commands. This doesn't really feel dynamic when you don't know what to enter, but the more fluency you achieve, the more dynamic/in the flow it feels.

        • hnlmorg 126 days ago
          It’s not dynamic though. Take my tree folding example and compare the speed of collapsing a tree in a GUI like Firefox vs doing the same with iteratively updated jq queries.

          Much as I prefer command line tools for most tasks, it’s silly to say that they’re always just as dynamic as GUIs.

          You can already recreate my tree folding example as a TUI using block characters and terminal mouse input. There’s nothing exotic about that. So all I’m suggesting is that this convenience should be built in as standard.

          • Zababa 123 days ago
            The issue is that you can't use the other tools of the terminal like grepping and friends on a TUI. You can get semi-dynamic behavior with stuff like piping into fzf, maybe there's something like this for JSON collapsing too?
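
            (One known trick in that direction uses fzf's live query as a jq filter, which gets surprisingly close to interactive JSON exploration; data.json is a placeholder:)

                # the preview pane re-runs jq with whatever query you've typed so far
                echo '' | fzf --print-query --preview 'jq --color-output {q} data.json'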
      • pjmlp 126 days ago
        > Shells like Elvish, Nushell, Powershell and my own shell, Murex......

        Except none of this is really new; this is how REPLs on Xerox PARC workstations, at ETHZ, and on Genera used to work.

        The novelty of these shells is a side effect of UNIX wiping out the alternatives.

        • hnlmorg 126 days ago
          You’re calling them “new”, not me.

          Besides, if you want to go down that line of thinking then nothing in technology is truly original. We all stand on the shoulders of giants.

          • pjmlp 126 days ago
            I clearly pointed out these are 1970s-80s ideas from outside Bell Labs; one just needs to check the dates of those listed systems.

            Those that call them new, only know UNIX and Windows.

            • hnlmorg 126 days ago
              …or they are aware but are instead using the term “new” to refer to the release date of these tools rather than to the invention of the core concepts.

              I feel like you’re picking a fight for the sake of an argument.

              • pjmlp 125 days ago
                I wasn't the one that decided to drill down on the meaning of new....
    • lifthrasiir 126 days ago
      The terminal is valuable precisely because it has applications that are easy to do, or even only possible, in it. Specifically, full-duplex interactivity at a low data rate. I don't think any current "modernization" proposal has threatened that so far; the biggest contender would be image support, but that isn't meant to be used all the time anyway.
      • flohofwoe 126 days ago
        TBH I would love a pixel framebuffer standard for terminals that's fast enough to run things like Doom, emulators, or simple UI applications in the terminal. The Sixel standard exists, but its pixel encoding is too complicated.
        • immibis 126 days ago
          That's every GUI system, though. /dev/fb0, or X11. We don't have to force it to fit into a character stream abstraction for some silly reason. Different abstractions are allowed to be different.
    • oulipo 126 days ago
      You conflate the fact that the CLI tool is just a "programmable API view" of an app, and that the GUI is more of an "interaction view", with the fact that it can or cannot do some stuff. Now with AI and better design, nothing in theory would prevent us from "scripting UI apps" like we do in the terminal; you could say

      "take a screenshot of my webpage with Chrome | set it in isometric with nice shadow using Photoshop | send it on my iPhone Twitter app"

      and you would get the same pipeline

      of course "UI apps" just have too many input / output possibilities, and you wouldn't want to have "--flags" everywhere for every possibility

      But that's where "natural language as a universal API" applies, or perhaps just a better design for interaction with UI apps

      • anthk 126 days ago
        KDE's Konqueror + Krita + a KDE3 Twitter tool would be able to do that with DCOP (DBUS's grandpa, but for KDE3).

        Ditto with Haiku OS (and Be) with the 'Hey' tool:

        https://www.haiku-os.org/blog/humdinger/2017-11-05_scripting...

        With pure CLI tools, that's trivial: have some of them send commands to Chromium, pipe the output to ImageMagick, and use something like ttytter (though now it wouldn't work, given the API changes). Replace Twitter with Mastodon and I'm sure you could do it with 'toot'.

      • oulipo 126 days ago
        in theory, if computers were really "smart" (and I'm not talking about AI, but just better design), you could define that pipeline and it would JIT-build an optimized binary that does the equivalent of your pipeline for each app, so that it would still be an optimized process

        Eg. each "UI app" would just become a big API of all input/output possibilities, and "interactive mouse / keyboard / visual interaction" would just be one possibility among others

        Your computer would know how to pipe between each app, optimize it if possible, and give you what you want

    • pjmlp 126 days ago
      Pretty much so, I really don't get those rainbow terminals full of fluff running after each command.

      Then, to top that, we had TUIs up to the mid-1990s because we couldn't afford anything else. They were the best many of us could have, given the budget and the astronomical cost of anything with a graphics display attached to the computer.

      I really don't get the coolness of reproducing this experience, I was more than happy to move away from this as soon as I could afford to.

  • cyberax 126 days ago
    The stale field of terminals is also getting new developments. My particular favorite is the Kitty input protocol, which allows terminals to offer such 21st-century functionality as accurate key press reporting: https://sw.kovidgoyal.net/kitty/keyboard-protocol/
    • sweeter 126 days ago
      Kitty has done great work in this regard. The keyboard protocol and the image protocol are top notch. Accurate key reporting was a much-needed change, and the image protocol blows sixel and ASCII art out of the water. The thing that baffles me is that so many people are resistant to these two things because they don't like the lead dev.
      • wiseowise 126 days ago
        > The thing that baffles me is that so many people are resistant to these two things because they don't like the lead dev.

        Who’s the developer of kitty… oh.

        It’s the guy who said he was single-handedly going to keep Python 2 alive?

        • aumerle 125 days ago
          And that's just what he did, until someone showed up to do the grunt work of porting to Python 3, at which point he co-operated with that person and helped make the port happen, just as he said he would. I suggest a more facts-based narrative in the future.

          A list of python3 PRs, merged by the developer within minutes to hours: https://github.com/kovidgoyal/calibre/pulls?q=is%3Apr+py3

        • pxc 124 days ago
          It's fine if his software is not for you, but Kovid Goyal is the real deal.

          And his claim wasn't that he would keep a generally-viable fork of Python 2 going, but that he would maintain the parts of it that he used in his Python 2 application. He wasn't trying to out-code the whole Python core team or something silly like that.

          • wiseowise 122 days ago
            I’m not saying he’s not a solid dev. But I can also understand why some of his statements might raise an eyebrow, especially if taken out of context.
        • tasuki 125 days ago
          > It’s the guy who said he’ll single-handedly going to keep Python 2 alive?

          I don't know about that, but he's extremely competent, so I wouldn't be too surprised.

        • dustypotato 126 days ago
          I mean he's also the chief dev of calibre so that's a gold star
      • eviks 125 days ago
        How do you measure this personal resistance, in a field notorious for its snail's pace of change?

        (also keyboard protocol isn't top notch, don't know about the image one)

        • aumerle 125 days ago
          Ah, you are back with your "the kitty protocol doesn't support my pet feature that nothing else supports, aka left and right distinct modifier states" so it must be bad. I was wondering when you would show up again.
    • goodpoint 126 days ago
      The "security" practices of kitty are dubious: https://github.com/kovidgoyal/kitty/pull/3544
      • dustypotato 126 days ago
        It just checks for updates and notifies you of them. That's a standard feature in many modern applications.
      • yencabulator 124 days ago
        • aumerle 124 days ago
          Sigh. Go read the docs:

          remote control has both a socket-only mode and the ability to allow tty connections only with a password: https://sw.kovidgoyal.net/kitty/conf/#opt-kitty.allow_remote...

          Furthermore, remote control in kitty has capability based security where you can lock down the protocol to allow individual actions with arbitrary granularity:

          https://sw.kovidgoyal.net/kitty/conf/#opt-kitty.remote_contr...

          • yencabulator 124 days ago
            No thanks, I prefer software with a more principled security architecture.
            • aumerle 124 days ago
              Then stick to commenting on posts about such software instead of trying to mislead people about software you know nothing about.
              • yencabulator 124 days ago
                Oh yes, that's the other reason I don't use Kitty -- personalities. Enjoy your day.
                • aumerle 124 days ago
                  And we are all delighted to be spared from your personality in the kitty community. Enjoy your day as well.
      • cyberax 125 days ago
        You actually don't have to use Kitty. I'm using iTerm2, and it also supports the Kitty input protocol. Several other terminals also support it.
  • terminaltrove 126 days ago
    Excellent article on what is going on in the terminal space. We agree with the TUI section: we're seeing lots of terminal tools being built in Rust and Go, with libraries such as Ratatui [1] and Bubble Tea [2] becoming modern alternatives to ncurses for building TUIs.

    Python has Textualize, which is also very popular for building terminal user interfaces [3].

    And we've noticed this renaissance as well of new CLI and TUI tools that we list on Terminal Trove [4].

    [1] https://ratatui.rs/

    [2] https://github.com/charmbracelet/bubbletea

    [3] https://textual.textualize.io/

    [4] https://terminaltrove.com/

    • euroderf 126 days ago
      Ratatui is a great name. The landing page needs a rat!
    • quotemstr 126 days ago
      It's been disappointing me lately that we've all given up on terminfo --- or at least forgotten what it was for. Most of these new libraries just hardcode escape sequences. It'd be more future-proof and customizable for users to consult a terminal database like programs used to do, yes? Maybe it doesn't matter: the world has mostly converged on xterm-based control sequences. But what about newer capabilities? Without either a central terminfo database or some kind of standard way to query feature availability, we end up with an explosion of ad-hoc and opaque logic.
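
      (For reference, the old discipline looked like this; tput consults the terminfo entry for whatever $TERM claims to be, instead of hardcoding xterm sequences:)

          # query terminfo instead of hardcoding escape sequences
          bold=$(tput bold)    # enter bold mode, per this terminal's entry
          sgr0=$(tput sgr0)    # reset attributes
          cols=$(tput cols)    # current width
          printf '%sstatus:%s %s columns\n' "$bold" "$sgr0" "$cols"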
      • kragen 126 days ago
        i don't think so. for future-proofness all you need is a standard; it doesn't matter very much whether that standard is vt100 escape sequences or terminfo names. i think vt100 escape sequences are a little better, and certainly programs written using them are easier to debug

        as for customizability, terminfo never provided real customizability, nor does it help with newer capabilities

  • JimDabell 126 days ago
    I would be very happy to see shells shed the idea that everything needs to be done in the context of emulating a terminal from the 70s. But even though new shells exist, I always end up writing shell scripts in ancient Bourne because I want them to run out of the box instead of requiring a third-party shell to be installed. Is there no appetite for a new default shell that could be adopted by Linux, macOS, and BSDs?
    • setopt 126 days ago
      > I would be very happy to see shells shed the idea that everything needs to be done in the context of emulating a terminal from the 70s.

      In Emacs, there is a big difference between M-x vterm (which emulates a 70s terminal), M-x shell (which doesn’t emulate a terminal, but still runs e.g. bash underneath), and M-x eshell (which replaces bash etc. completely, and thus offers abstractions like “cd into an ssh: path”, “cp this file from an ssh: path”, “pipe this file through an Emacs function”, etc. that you normally don’t expect). If you’re into alternative ways of working with shells this might be interesting.
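
      For instance, inside M-x eshell the first command below "cd"s into a remote directory over ssh, and the second copies a remote file into the local home directory; TRAMP handles the /ssh: paths (the host name is made up):

          cd /ssh:me@remote-host:/var/log
          cp /ssh:me@remote-host:/etc/hosts ~/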

      (With that said, I myself went back to a “real” terminal. These days I’m using Zsh in iTerm2.)

    • hiAndrewQuinn 126 days ago
      You could be the first! Fork Ubuntu, leave everything else the same, but have it run `fish` as the default shell with `bash` as a fallback.

      Alternatively, you could install default Ubuntu and run https://github.com/hiAndrewQuinn/shell-bling-ubuntu to switch your terminal to fish, kitty, and get a whole slew of other niceties in there by default. I found myself doing this a lot at my last job in VMs, which is why I have this set of shell scripts lying around and easy to audit.

      • dotancohen 126 days ago
        You've really modernized the CLI there. I've been resisting doing that because I SSH into other machines often enough, but with your script I might just start. Thank you, nice work!

        As another big rg and tree (and ncdu, do you know of this) user, I highly suggest rotating one monitor 90 degrees. These tools with long output really benefit, as does web browsing, coding, and even the ability to see an entire page of a PDF without scrolling or making it tiny.

    • sakjur 126 days ago
      Having been recently exposed to PowerShell, I’m quite impressed by the balance they’ve struck in developing a new shell. I wouldn’t want to switch out Bash for PowerShell, but that’s mostly based on the same reasoning as you have (I can’t be bothered to install new shells everywhere).

      Though in a sense I think the most viable solution would be almost the opposite for UNIX: a reduced (and strict) shell syntax intended to be the target of cross-compilation rather than manual use. If it were possible to make that a subset of standardized sh, there'd be automatic compatibility from the start, and adoption wouldn't need to happen on the consumer side.

      Still, we have binaries already. Binaries are much more capable than what I’m suggesting, though they are platform dependent. A variant of WASM, maybe?

      • Timwi 126 days ago
        > Binaries are much more capable than what I’m suggesting, though they are platform dependent.

        Not if they're .NET/IL/CLR (which is an ECMA standard).

        • sakjur 126 days ago
          Fair, though that’s still dependent on having a platform that follows that spec; in this case the common denominator for the platforms in question is POSIX rather than the CLR.
    • viraptor 126 days ago
      Switching away from being bash-compatible would be really unexpected; there are too many existing scripts and websites explaining how to do things. Maybe something like http://www.oilshell.org/ has a chance though?

      If we were breaking away from the old style shells completely, then https://www.nushell.sh/ would be my preferred upgrade.

      • poincaredisk 125 days ago
        I use fish personally. I don't see the value in using a bash-compatible shell for interactive use (I can run bash scripts anyway, just like I can run python and perl scripts).
        • setopt 120 days ago
          The benefit is if you often have to work on remote servers where only Bash is available.

          Then, using the same interactive syntax locally and remotely is simply less jarring, since switching between e.g. the Fish and Bash for-loop syntax every hour causes lots of mix-ups. (I write a lot of loops interactively in the CLI.)

          I used Fish for ~5 years and loved it, but I ended up going back to Zsh myself.

        • fragmede 125 days ago
          The value is in using the same language for scripting as for interacting with the computer as a shell, so that you're more fluent in that language when it comes to writing things with it.
    • jrimbault 126 days ago
      The Redox project uses its own Ion shell, https://doc.redox-os.org/ion-manual/, which isn't POSIX.
    • aragilar 126 days ago
      Over what timeframe do you expect this to happen? It'll take at least 10 years for a new default on all systems (which is a challenge in and of itself) to propagate to the point where you could rely on it.

      There's also the question of what new language is worth the cost of the transition period (no new shell I've seen justifies the change).

    • flohofwoe 126 days ago
      IMHO instead of offering a builtin programming language, shells should make it easy to launch scripts written in 'foreign' languages (maybe via a more informative shebang block at the top). I want to write my 'shell scripts' in Python, Typescript or Lua and those scripts should work on any shell and on any platform (including Windows outside WSL).
      • dotancohen 126 days ago
        That's what /usr/bin/env is for: a standard path to reference interpreters.

        There is still the issue of the proper interpreter actually being installed, however.

        • flohofwoe 126 days ago
          > There is still the issue of the proper interpreter actually being installed, however.

          Yeah, that's what I mean by "make it easy". The script header should announce requirements (like what language and language version it is written in), and the shell should be able to ensure those requirements are met before running the script (without ignoring security aspects, of course). E.g. every script should be able to bootstrap itself, no matter what programming language is used (and that might even include compiled languages).

          • poincaredisk 125 days ago
            This is already possible with #!/usr/bin/env nix-shell
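
            A minimal sketch of that pattern (the interpreter and package choice are illustrative); the second shebang line tells nix-shell which interpreter and packages to provision before the script runs:

                #!/usr/bin/env nix-shell
                #!nix-shell -i python3 -p python3
                import platform
                print("running under Python", platform.python_version())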
      • kragen 126 days ago
        unix tried that in the 70s, ms-dos batch files tried it in the 80s. turns out offering a builtin programming language is good actually
  • apitman 126 days ago
    As far as I know, TUI is the only way to build a statically linked app that can be controlled with a mouse. If you're running in a terminal that supports sixel, that covers a huge class of apps. And you get lots of bonuses like excellent cross platform support and running over a network. It's honestly a compelling platform even without the nostalgia factor.
    • actionfromafar 126 days ago
      Interesting... never thought of that angle. Mouse control is pretty important.
    • kragen 126 days ago
      i think you are probably going to be a sad panda if you are running sixel apps over a network. vnc is passable, xpra is excellent

      inspired by your implicit challenge, i just compiled a statically linked app that can be controlled with a mouse on amd64 debian, which is at http://canonical.org/~kragen/tmp/static-μpaint (2.3 megabytes, but remember that this is just code some random stranger on the internet linked for you to download and run, much like a random npm module). it was a little harder than i expected, but not really very hard. with respect to its staticness, i did get these warnings:

          warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
          warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
      
      but those are from functions buried deep in xlib that i may not actually be using. if you do dare to run my executable on your system, i'm interested to hear whether it works and what version of glibc you're using (i'm on 2.36-9+deb12u7)

      the program is my example paint program, source code in https://gitlab.com/kragen/bubbleos/blob/master/yeso/%CE%BCpa..., explained line by line in https://gitlab.com/kragen/bubbleos/blob/master/yeso/README.m...

      the compilation command line was

          cc -static -Wall -Wno-cpp -g -Os -I. -std=gnu99 μpaint.o \
              -L. -lyeso-xlib -lX11 -lXext -lpng -ljpeg -lm -lbsd -lz \
              -lxcb -lX11 -lXau -lXdmcp -o μpaint
      
      which i was later able to reduce to

          cc -static -Wall -Wno-cpp -g -Os -I. -std=gnu99 μpaint.c \
              -L. -lyeso-xlib -lXext -lX11 -lxcb -lXau -lXdmcp \
              -lpng -ljpeg -lm -lbsd -lz -o μpaint
      
      and you should be able to build it from source if you clone my bubbleos repo and run `make` in the yeso directory. `make` will build a dynamically linked version, but the compilation command line above will build a static one instead

      this is using my minimal gui library `yeso` but i'm pretty sure the same approach would work for things like fltk which are designed for static linking. maybe it would work with more expansive libraries like gtk too

      • apitman 125 days ago
        Worked for me (glibc 2.40)! I'm honestly surprised (and impressed).

        Every time I've tried to statically compile a major glib project like X11, the answer the internet has given back is "don't do it".

        In terms of practicality, what would the runtime requirements of this be? Obviously compatible glibc, but also compatible X api? Though I'm assuming that hasn't changed much in years.

        • kragen 125 days ago
          thank you! i very much appreciate the information

          the x11 library doesn't use glib; maybe you meant 'a major glib project like gtk'

          this binary requires neither compatible glibc nor compatible x api, because it's statically linked. (i just tried it in a chroot with neither glibc nor any x api installed.) it does require a compatible x protocol, including, in particular, the shared memory extension, and yeso also (rather nonportably) assumes that the server's pixel format is 32-bit bgra (i.e. little-endian argb, with the blue byte first). so it won't work over a network (but works fine talking to a local vnc server which can then talk over the network), it won't work in a 16-bit video mode (unless you interpose vnc or something), it won't work in a paletted video mode (ditto), and it won't work in a monochrome video mode. but i don't think it will have any software incompatibilities with any x-windows server from the last 30 years or so

          the procedure for setting up the testing chroot was to build a static μpaint as above, install the debian `bash-static` package, and:

              : yeso; mkdir chroot
              : yeso; cp μpaint /bin/bash-static ~/.Xauthority chroot/.
              : yeso; XAUTHORITY=/.Xauthority sudo chroot chroot /bash-static
              bash-static-5.2# 
          
          it's really a very bare chroot, with very little of the usual unix amenities:

              bash-static-5.2# ls
              bash-static: ls: command not found
              bash-static-5.2# echo *
              bash-static μpaint
              bash-static-5.2# echo /*
              /bash-static /μpaint
              bash-static-5.2# echo /.*
              /.Xauthority
              bash-static-5.2# cd ..           # this does nothing
              bash-static-5.2# echo *
              bash-static μpaint
              bash-static-5.2# ./\316\274paint # bash defaults to not supporting utf-8
          
          at this point the program opens a window and i can run it, painting in the window with the mouse, until i close the window

              X connection to :0.0 broken (explicit kill or server shutdown).
              bash-static-5.2# echo $DISPLAY
              :0.0
              bash-static-5.2# exit
             : chroot;
          
          this suggests that in fact it does not use getaddrinfo() or dlopen(), for which the linker warned me it would need to have the corresponding glibc version installed at runtime; as you can see, no glibc version at all was installed at runtime, nor any /etc, /dev, /proc, /bin, /usr, or /home. just three files

          it took me a while to figure out how to get it to find the xauth file correctly; it doesn't use $HOME or /etc/passwd

  • godelski 126 days ago
    I'm loving how coreutils is getting improved and how TUIs are exploding. It's just been a breath of fresh air. I hate to reach for my mouse.

    Side note: in vim you can press K to see the help page for a specific function. The default is the man page, but some plugins will fix this, though I'm not aware of one that lets me get away from documents and help files completely. (I'd love to pull up the docs of the codebase I'm working on. Anyone know of something that will?)

    But one thing I want to stress, the defaults of these tools should be the same as the originals as much as possible. This makes them more effective and adoption seamless.

    To give two examples: fd and ripgrep default to respecting your .gitignore file. If you're not expecting this you can easily land yourself in trouble. Sure, it's in the readme, but people are installing through their package managers so it doesn't matter. I'm glad the option exists, but don't make it the fucking default! We have aliases for that stuff. It's always better to err in the direction of too much output than too little. Output can be parsed and searched, but you can't parse what isn't there.
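
    (With today's actual flags the aliasing cuts the other way: anyone who wants unfiltered behavior can put something like the following in their shell config, using fd's and ripgrep's real --no-ignore/--hidden flags. The same mechanism would work in reverse if the defaults were flipped.)

      alias fd='fd --no-ignore --hidden'
      alias rg='rg --no-ignore --hidden'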

    Btw, here's what pacman says

      extra/fd 10.2.0-1
        Simple, fast and user-friendly alternative to find
      extra/ripgrep 14.1.0-1
        A search tool that combines the usability of ag with the raw speed of grep
    
    So I'm surprised people are surprised that people think you can just alias find or grep, since they are advertised as such and this is also what's said through word of mouth. (Yes, people should open man pages and visit the GitHub, but let's be real about our expectations)
    • kstrauser 126 days ago
      I strongly disagree. Neither of those tools claim to be drop-in replacements. They’re more like a reinvention of the older tools given what we’ve learned in the many decades since the originals came out, and tailored for the common use cases today. It’s rare that I want to grep, say, the node_modules directory in a JS project. The default of showing the results that git says are my own code is very useful. And as you say, there are aliases if you want to get off the beaten path.
      • godelski 126 days ago
        Disagree all you want, you still shouldn't be surprised people aren't expecting this

        fd: https://github.com/sharkdp/fd/issues?q=gitignore+

        And I stand by it: it is strictly better for the __default__ to be unfiltered. This is what people expect in general.

          > tailored for the common use cases today.
        
        *YOUR* common use cases. Grep, sure, it is rare that I want to grep a binary file. But I do want to grep configs and secrets. I do want to find binaries all the time. There are many different ways to program, and expecting everyone to program like you, or even remotely like you, is naive. There are so many things under the sun.

        Failure analysis is critical when designing anything. So let me rephrase

          Which do you think is a better mode of failure?
          1) Not finding files you are expecting to find
          2) Finding more files than you expect to find
        
        Or for ripgrep, grepping fewer files than you expect to vs grepping more files than you expect to.

        It is very hard to argue for one considering we have regex to filter which files we want... not to mention pipes and all sorts of common tools including the tools themselves.

        • kstrauser 125 days ago
          No, the most common use cases of the people using a tool designed to search source code more easily. Notice that the examples on its site show searching the Linux source code. For that matter, the very first sentence on that site says

            “ripgrep recursively searches directories for a regex pattern while respecting your gitignore”
          
          Grep still exists, unchanged. It does what you want it to. Ripgrep is a new tool designed for a different use case. It’s not about failure modes. Ripgrep isn’t failing when it doesn’t show you the files it tells you it’s going to skip in the very first line of its description.

          Again, you don’t have to use it if its advertised purpose doesn’t line up with what you want.

          • godelski 125 days ago

              > For that matter, the very first sentence on that site says
            
            I'd rather not go in circles. Please refer back to the original post as I addressed this. See package managers and word of mouth. Do you really visit the website or GitHub page for every tool you install? I'd be surprised if you did.

            Also, why is it not okay that the design was wrong? That doesn't make it a bad project. But we can recognize that choices could have been better. Those are different things. It's learning from the past

            https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...

            • kstrauser 125 days ago
              Do I read the manual of every tool I use before counting on it? Yes, absolutely. The first sentence of ripgrep's man page's description section is:

                ripgrep (rg) recursively searches the current directory for a regex pattern.  By default, ripgrep will respect your .gitignore and automatically skip hidden files/directories and binary files.
              
              You cannot pick up a tool, start using it without understanding what it does, then tell the tool that it's doing things wrong.
              • godelski 125 days ago

                  > Do I read the manual of every tool I use before counting on it? Yes, absolutely. 
                
                Sorry, I'm not sure I believe you, and even if you are that rare case, I think you know most people don't. I'm a manual lover myself but haven't read all the manuals of all the tools I use. That would be extremely cumbersome given the sheer number of tools and the complexity of some of them.

                  > You cannot pick up a tool, start using it without understanding what it does
                
                I don't disagree with the exact words but I do with what I think you're arguing. Lots of tools have a natural design where you can intuit their usage. A hammer or screwdriver is a great example. Many videogames also teach you how to play via level design without the need for tutorials (there's also many many terrible examples from video games and many sub par ones too). And that's what I'm arguing about when I'm talking about the direction of error. There's a natural signal coming back to the user so they know if something is wrong or isn't. But letting users think things are good when they aren't is an extremely costly error.
        • poincaredisk 125 days ago
          >Grep, sure, it is rare that I want to grep a binary file.

          Case in point - I grep binary files all the time (mainly to look for strings in compiled executables).

          • godelski 125 days ago
            A hacker on hacker news‽

            But yeah this is why I'm arguing for no filtering by default and why I believe the best designs are the most flexible ones. Because any time I think "nobody would need that" I find a world of people I didn't know about. You can't ever predict the usage, so flexibility and defaults aren't about what fits most people's use cases but what can be easily modified to use cases of any group. I'm definitely too dumb to figure out what everyone actually needs lol

        • setopt 126 days ago
          Perhaps the best of both worlds would just be to write a warning to stderr by default that files in .gitignore have been ignored (only if such a file is applied)?
          • burntsushi 125 days ago
            This won't happen because it would imply writing a warning for almost every run of ripgrep. It's the kind of warning that pisses people off because 1) it's the intended behavior and 2) since it would be shown almost all the time, folks would immediately start ignoring it.

            ripgrep does try to emit a warning if you run `rg foo` and it doesn't search any files because of gitignore. This can happen in some cases, e.g., when you have a `.gitignore` in your `$HOME` with a `*`.

            • setopt 125 days ago
              Makes sense – thanks for the explanation and for ripgrep.

              Personally, I love this default and find it way more usable than grep in most cases. It’s a more important feature to me than the per-file search speed, since grep is often fast enough.

          • Timwi 126 days ago
            Or just have a command-line option to specify an ignore file. That's strictly more useful because then you're not arbitrarily limited to files named “.gitignore”.
            • setopt 125 days ago
              rg, fd, etc. all use not only .gitignore, but also merge in rules from a global gitignore in your home directory, as well as a local .ignore file that takes precedence over .gitignore.

              Setting up all that with CLI flags is a lot of work for what is arguably a sensible default.
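
              For example (a sketch; the patterns are made up), a project-local .ignore can both tighten and loosen what .gitignore says, since it takes precedence for these tools:

                # .ignore
                *.log    # hide logs from searches without touching git
                !dist/   # re-include a directory your .gitignore excludes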

              • godelski 125 days ago

                  > Setting up all that with CLI flags is a lot of work for what is arguably a sensible default.
                
                  alias fd="fd --include-gitignore --include-global-gitignore"
                  alias rg="rg --include-gitignore --include-global-gitignore"
                
                
                I would not consider even a long line like this in your shell's config file "a lot of work". In fact, it seems pretty standard.
            • burntsushi 125 days ago
              Why assume that ripgrep or fd is "arbitrarily" limited to files named `.gitignore`?
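
              (They aren't: ripgrep, for one, takes extra ignore files explicitly via its --ignore-file flag. A usage sketch, with a made-up file name:)

                rg --ignore-file ~/.config/search.ignore 'pattern'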
    • burntsushi 125 days ago
      > I'm glad the option exists, but don't make it the fucking default!

      I very intentionally made it the default. It is one of ripgrep's most important innovations (shared with earlier tools like ack and ag).

      What you say is absolutely a cost. But it's a cost worth paying because it improves the user experience so much for tons of folks. Other than perf, the defaults are by far the most critical difference between ripgrep and typical POSIX grep programs. So if the defaults were the same as grep, it would be removing the most visible innovation!

      This default is half the reason for ripgrep's existence in the first place. The docs are very clear and upfront about this. It's not reasonable for me to change the tool just because some users won't understand what they're installing. That's absurd.

      And of course, changing how it works now would be a massive breaking change that would confuse and piss off way more users. So guess what. It is never ever never going to happen.

      • godelski 125 days ago

          > it's a cost worth paying because it improves the user experience so much for tons of folks.
        
        You can have the benefit by making it a flag while also maintaining minimal surprise.

          > if the defaults were the same as grep, it would be removing the most visible innovation!
        
        At the cost of increased surprise and decreased utility. I think you're going to disagree with the utility part, but you have all the utility when the defaults are off and you need flags to turn features on. There's a reason the coreutils keep their defaults simple. That's a feature, not a bug.

          > now would be a massive breaking change
        
        You're right, it would be a big change. I'm not sure how breaking, but it would be a big change and would increase users' surprise, which I'm against. I'm not asking that you change it. The damage is done. But I don't think you need to double down on saying this was the right way to do things. You built a tool for things you needed, it got popular, and you didn't expect it to reach as far as it did. That's great! But it's also not surprising that what's an advantage to you is a disadvantage to others.
        • burntsushi 125 days ago
          > You can have the benefit by making it a flag while also maintaining minimal surprise.

          You're ignoring the benefits of good defaults. So no, I strongly disagree. Your approach does not give the same benefit.

          > At the cost of increased surprise and decreased utility.

          I specifically and explicitly acknowledged it as a cost.

          > But I don't think you need to double down on saying this was the right way to do things.

          I 100% believe it was the right way to go. As in, it is my opinion that it is a better user experience overall. More explicitly, if I could go back in time, I would make exactly the same design choice even knowing what I know now.

          > You built a tool for things you needed

          I didn't build it just for me. ripgrep started out as a tool to benchmark the regex crate. But when I noticed it was as fast as GNU grep in common cases, I decided to turn it into a tool for others to use. At this point, I specifically chose the design I did not just because of its impact on me, but the impact that older tools with a similar design (ack and ag) had on untold legions of programmers. I thought it was the better UX then and I think it is the better UX now.

          > But it's also not surprising what are advantages to you are disadvantages to others.

          I don't understand why you're so focused on "surprise." I'm not surprised at all by anything you said. And this insistence on ignoring my acknowledgment of my design having a cost is pretty frustrating.

      • Y_Y 125 days ago
        I happily use ripgrep, but naively believed it was just looking at whatever files it was pointed at, like grep does.

        I get it, and it's probably more often than not the desired behavior, but it certainly violated the Principle of Least Surprise for me.

        Finally, being confused and pissed off is the default state for users. No config will ever be sane enough to avoid rancour.

        • burntsushi 125 days ago
          When I use `grep` in a code repository and it turns up pointless results in my `.git` directory, that also violates the "Principle of Least Surprise."

          There's a reason why I don't follow the "Principle of Least Surprise" in any rigorous way, because everyone can be surprised by just about anything. I essentially never use that phrase in my writing or thinking because it's so vague and literally anyone can disagree with it about anything.

          As I've already acknowledged, the default skipping behavior is a cost. And this is precisely why this behavior is mentioned extremely prominently in all of my materials about ripgrep.

          The GitHub description:

          > ripgrep recursively searches directories for a regex pattern while respecting your gitignore

          The first two sentences of the README:

          > ripgrep is a line-oriented search tool that recursively searches the current directory for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files.

          The first two sentences of `rg -h` and `rg --help`:

          > ripgrep (rg) recursively searches the current directory for lines matching a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files.

          The first two sentences of the description in the man page:

          > ripgrep (rg) recursively searches the current directory for a regex pattern. By default, ripgrep will respect your .gitignore and automatically skip hidden files/directories and binary files.

          Near the top of every release of ripgrep:

          > In case you haven't heard of it before, ripgrep is a line-oriented search tool that recursively searches the current directory for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files.

          Like, I've gone out of my way to make it superbly clear what it's doing by default. I can't control how other people write about ripgrep or talk about it, and just because I can't control that doesn't mean I should just give up that default behavior entirely.

          I'm not surprised if users miss this. I understand it. This is why I've acknowledged it as a cost. But I think the benefits outweigh the costs. I cannot tell you how many times folks are delightfully surprised (there's that principle again, but in the opposite direction, demonstrating again just how useless of a metric it is) when ripgrep automatically ignores all the junk they don't care about without them needing to think about it at all.

          ripgrep is an object lesson in the idea that defaults matter. This doesn't mean it's an unmitigated win from every possible perspective. And this is why ripgrep also makes it extremely easy to disable the automatic filtering: `rg -uuu`.
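
          Each `u` peels back one layer of filtering; per ripgrep's own --help, `-u` is `--no-ignore`, `-uu` adds `--hidden`, and `-uuu` adds `--binary`:

            rg -u   pattern   # --no-ignore
            rg -uu  pattern   # --no-ignore --hidden
            rg -uuu pattern   # --no-ignore --hidden --binary: search everything, like grep -r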

          • godelski 125 days ago

              > because everyone can be surprised by just about anything.
            
            I don't think you're being honest here and I'm not sure why. Yes, everyone can be surprised by anything, but no, it's pretty clear that it's worse to not get files you're expecting than to get more files than you're expecting. In the latter case there's a pretty clear indication you need to filter, while in the former there's no signal that anything is wrong. This is objectively a worse case.

              >  Near the top of every release of ripgrep
            
            I addressed this in the OP. People are installing through their package managers and will never see this. I'm glad you do this and it helps because people will go to open an issue and half of them will read the readme, but it's not surprising people don't.

            Look, I get you're frustrated. Especially since you get people opening issues all the time. I know that this is so frustrating to you __because__ of how frequently it comes up. But that doesn't mean you need to backtrack and change things, nor does it mean it was the best design. It's okay to have design flaws. The point of my post isn't to argue that rg should change, but that others shouldn't follow this as the default design (again, I like the feature as it simplifies filtering, but no filtering by default is best, especially for filtering tools).

              > extremely easy to disable the automatic filtering: `rg -uuu`.
            
            I'm happy you did this. Thanks. But it is cumbersome to have a default alias like this and then add back gitignore and the global gitignore when you do want it.
            • burntsushi 125 days ago
              > I don't think you're being honest here

              Really? Wow. You are quite quick to leap to accusations of dishonesty!

              But no, I wasn't being dishonest. I'm serious. I do not rigorously use the "Principle of Least Surprise" when thinking about UX. I understand the thinking behind it, and I understand that bad surprises should be avoided and that not all bad surprises are equally bad, but that's exactly the point. It isn't sufficient just to say something is bad according to the Principle of Least Surprise. You had to use additional reasoning to talk about one surprise being worse than the other.

              > Yes, everyone can be surprised by anything, but no, it's pretty clear that it's worse to not get files you're expecting than to get more files than you're expecting. In the latter case there's a pretty clear indication you need to filter, while in the former there's no signal that anything is wrong. This is objectively a worse case.

              "objectively" is a terribly misused word and I think you are misusing it here. I don't agree with your conclusion here. In information retrieval, the trade-offs between false negatives and false positives aren't so clear cut. A sea of false positives can be just as devastating as false negatives. The .git directory is perhaps an obvious case to filter out, but there are many that are less obvious but are captured by .ignore or .gitignore. They can waste a ton of time, and it is not so obvious in those cases to say that false negatives are "objectively" worse than false positives.

              In other contexts, sure, false negatives are worse than false positives. For example, a bug in ripgrep that leads to false positives is probably better than a bug that leads to a false negative. It depends.

              The key here is to teach users what the expectation is. This is why all of my blurbs introducing ripgrep in any documentation mention the filtering. That some people miss this and get surprised by the results does not mean I have failed. It just means that it is a cost to be weighed against the benefits of filtering by default.

              I think I've mitigated the surprise factor sufficiently and I think the benefits of filtering by default are immense. They are easy to discount from your perspective because you think it's just as simple as adding a flag. But that presupposes you know about it! This is one of those defaults where I think a lot of folks don't even realize they could benefit from it in the first place.

              I encourage other tools to follow ripgrep's design and perhaps to even make bolder choices. Because guess what. The existence of new tools does not mean the old tools no longer exist. You can keep doing what you're doing.

              > I addressed this in the OP. People are installing through their package managers and will never see this.

              This is why I acknowledged that it's a cost! My goodness. I acknowledged that it's a cost because not everyone will see the blurb.

              > but it's not surprising people don't.

              Why do you keep saying this? Do you think I'm surprised that people don't read the docs, even the first few sentences? I ain't no spring chicken. I've been around for a while. And I'm guilty of not reading the docs too.

              > The point of my post isn't to argue that rg should change, but that others shouldn't follow this as the default design

              That's not what I took from it. But okay.

          • kstrauser 125 days ago
            I almost feel like we're being trolled now. I will not reply to this person anymore, and I'd recommend the same.

            I'm one of the people delightfully surprised by ripgrep. If I wanted a tool that behaved exactly like grep, I'd use grep. It's already fast enough for my uses, so if the only difference were that rg were a bit faster, I'd ignore it and stick to the default tool. That's not why I use rg though! Thanks for your hard work, and yes, the excellent defaults.

            • Y_Y 125 days ago
              Are you referring to me? I only posted once in this thread and it was just my genuine opinion, which I considered relatively anodyne.

              Surprises are rarely delightful when I'm coding and I'm not always assiduous enough to read all the man pages. That's not burntsushi's fault, and not necessarily a design flaw. On the other hand I think it's fair to mention that it violates a popular (but not universal) software design principle.

              If you just wanted the gitignore magic then you can use `git grep` or the `--exclude` flag and save yourself installing ripgrep.

              • kstrauser 125 days ago
                No. There are other people in the thread who were more, uh, assertive about their complaints.
            • godelski 125 days ago
              You're not being trolled. You just do a different kind of programming, and likely haven't experienced how disastrous it is to have things filtered out when you weren't expecting it.

              I addressed the point more clearly here

              https://news.ycombinator.com/item?id=41515601

              And I don't think rg needs to change. The cure would be worse than the disease at this point. But that doesn't mean people making new tools should do the same thing.

          • Y_Y 125 days ago
            I appreciate your thorough explanation. Maxims like minimizing surprise are fine, but if you understand why it's there but still disagree then it only makes sense to make a better product at the expense of surprising people. Thanks for the great application, and sorry that morons like me keep making wrong guesses about its behaviour.
            • burntsushi 125 days ago
              To be clear, I don't think you are a moron for making a wrong guess. I make wrong guesses about software all of the time.
    • camgunz 125 days ago
      This kind of thing isn't without precedent; ls famously ignores files/folders that start with a dot.

      ripgrep in no way tries to be a drop-in replacement for grep. Not only is it super explicit about it in all the docs, a major raison d'etre for it is its UX, which differs substantially from grep's.

      Also, it's fine if people make mistakes, flounder, or get stuck. When I installed RedHat 5.0 for the first time it took forever to figure out how to run programs (gotta use ./), then it took forever to figure out how to use emacs, then it took forever to figure out how to save a file and exit emacs. Along the way I learned a ton about man and info pages, how to page up and page down, the names of the keys... skills and knowledge that have served me literally for decades (OK, maybe not how to use info pages). Not everything needs to be optimized for like, minimal number of clicks to purchase.

      • godelski 125 days ago

          > ls famously ignores files/folders that start with a dot.
        
        This isn't related. There's a specific system standard. It's been that way for decades and is now expected. Last I checked, git isn't part of the unix system, isn't ubiquitous in file systems, and gitignores don't exclusively specify files that are junk or binary (I always exclude build artifacts, like most people, but build artifacts are useful to look at. And I do ML, so I'm never gonna include my checkpoints because git isn't for binaries, but I still want to search for them).

          > Also, it's fine if people make mistakes,
        
        Sure! And we should learn from the mistakes. I'm not arguing fd or rg should change. Momentum is a bitch. But if you're going to build a new tool, don't make this mistake.

        We've talked about the principle of least surprise, but let's also talk about the way you have to handle this issue. In my dotfiles, along with thousands of other people, I have

          alias fd="fd --no-ignore"
        
        But now we have an issue. What happens when I want to add back the ignore? Sure, with this example I can just do `\fd`, but that doesn't work if I add any of the other likely flags to my alias. So now you have to add complexity to your argument parser to handle precedence. Which dominates? Ignore or no ignore? Does order matter?[0] You now have to make choices to handle things you wouldn't have had to if you did it the other way around. This design choice requires more complexity overall to meet user demands. Given that aliases are the norm for these kinds of situations, it is not a hard ask for users to alias if they want this as the default.
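
        (For completeness, the escape hatches are standard shell:)

          \fd pattern          # bypass the alias for one invocation
          command fd pattern   # same thing; also works inside scripts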

        But let's talk about that generalization more, because I see this error in software all the time. People think

          Defaults should be what most people will use
        
        This is wrong, for multiple reasons.

        First, your tools will always take on a life beyond what you expect and you will always be bad at predicting what people need or want (therefore use flexibility).

        Second, while you should strive for this, there are many conditions that take higher priority. For example the principle of least surprise.

        Good design should strive for users not requiring a manual. It's almost always impossible to actually achieve, but you still do your best. One key aspect to this is understanding natural feedback. In our example of too much or too little we can see that there's natural feedback so the user can understand. And yes, the feedback will cause the user to likely reach for the manual, but it won't let them think something is right when something is actually wrong!

        Even if you know 90% of your users would be happy with a specific default, this does not override the importance of natural feedback signals. It's why people will still push on a door labeled pull: the design, not the label, is telling them what to do. Labels are just a patch over a bad design.

        Look, lots of design is ambiguous and there's almost never an optimal solution. But our specific example is a software version of a Norman door.

        [0] For our specific example with fd it's probably safe to let the ignore flag strictly dominate, but I'm sure you can imagine examples where this won't be as obvious. The conversation at hand generalizes far beyond unix tools; it's a general design principle.

        • camgunz 125 days ago
          It's clear you feel strongly about this, so I'm not gonna try and talk you out of it; I think the viewpoint you're espousing is perfectly reasonable.

          But I think you're expressing a subjective aesthetic preference, while trying to give it the weight of objective superiority by appealing to the principle of least surprise, "system standards" and the like. As burntsushi's tried to explain to you, it's possible to be surprised by ignoring .gitignore. It's totally reasonable to configure a source folder and expect coding tools to all use the same stuff. If I had to set up a .grepignore and a .findignore etc etc that would be surprising to me (and probably violate some design principles as well). And besides, system standards are just what we decide they are, they're no better or worse objectively and they're definitely not consistent (find, for example, does not ignore dotted files/folders). This is probably also a design principle.

          I made this point elsewhere, but it's fine to prefer what you prefer. No further justification necessary! But you're presumptive bordering on rude when you tell other people that their preferences for tools they built are objectively wrong or worse. We should all be free to build tools without worrying that our aesthetic, personal preferences will be assailed as objectively worse or downright harmful. It's cool that ripgrep does this and plenty of people love it (it's very helpful inside an editor, for example).

          > In my dotfiles, along with thousands of other people, I have...

          Just give it a different name. For years I had 'cgrep' which was grep with a lot of --exclude to avoid junk generated html/js/css (etc). That way when I wanted grep I got grep, and when I wanted code grep I could get code grep. Ez.

          > Good design should strive for users not requiring a manual.

          I can't disagree with this more. This works only for shallow tools meant for dilettantes. I have... probably tens of thousands of hours in Vim. There's no way you could make Vim not require documentation, and I'm grateful it doesn't limit itself to only what is self-documenting and that I don't need to disable tutorial mode or little helpers every time I configure it on a new machine. It's a power tool for power users. It's not for everyone, and that's super OK. Reading is a major way we get new information! We shouldn't be so allergic to it.

          • godelski 125 days ago

              > aesthetic preference
            
            I think this is the core of why we're not communicating and why I'm a bit frustrated with burntsushi. My argument has *nothing* to do with aesthetics (nor anything to do with preference). I think one reason poor design is so prolific is that people conflate design with aesthetics. Aesthetics is part of design, and it matters, but design is much more than that, and what should drive it is not aesthetics but error analysis[0].

            My argument has from the beginning been about failure. How things fail and the nature of how this is communicated to the user. If you think by "design" I mean "aesthetics" then we have no hope of understanding one another.

            This is why burntsushi's argument is not meaningful to me and has no potential to sway me, because it is not addressing the argument I'm making. My argument is that when you are doing failure analysis you have to look at the different ways things can fail (I am considering both, and both have been in all my examples). Too much output is annoying but is highly unlikely to be catastrophic (especially if our prior is grep). It can also be dealt with in other ways and sends a clear signal (excessive output) to the user to do something about it and WHAT to do about it (filter). This is "free" information. But too little output does not communicate to the user that something is amiss, and this is why it leads to catastrophic issues. That's why this is "more surprising." It isn't that too much output isn't surprising (it is), but that there's a natural signal back to the user.

            "Deadly" errors are the ones where there's no indication that something is wrong. In our case, there's even a signal telling the user that things are fine when it is not. It's extremely difficult to fix problems you're not aware of and even harder to fix problems that tell you there's no problem. That's why this mode of failure has higher "surprise" (we can call it perplexity if you want to use more formal language) and why I'm talking in a non-subjective way. This has absolutely nothing to do with aesthetics this has nothing to do with preference. This has everything to do with modes of failure.

            So arguments about preference are not addressing my argument, and if they are interpreted this way we will talk past one another. Maybe you want to argue that these cases are rare, but that is subjective to the type of programming you do. The issue here is not too different from rm or dd, where it is easy to make mistakes and they are quite costly. But at least those tools will give you VERY STRONG feedback (you destroyed things *AND you'll know it very quickly*). It naturally communicates to the user that they fucked up. But it is why many distros create a default alias with the -i/-v flags and why they'll check for rm -rf /. It's also why many admins alias rm to a trashing command. It is because rm (and dd) have poor design. Almost every Linux user has accidentally deleted something they didn't want to; it is almost a rite of passage. This doesn't mean they aren't essential tools and aren't useful, and I'm not saying rg is useless or a bad program itself. I'm just making the argument that this one aspect is.

              > Just give it a different name.
            
            I think you misread. I didn't alias fd to find, I aliased fd to fd. I didn't alias rg to grep, I aliased rg to rg. This is orthogonal to the issue though as that was about the added complexity (we need not rehash).

              > There's no way you could make Vim not require documentation,
            
            I think you misunderstood my point, so let me try to clarify. You should strive for this, but this doesn't mean abolish documents, nor that it is a replacement (you'll NEVER find me arguing that!). I did want to make clear that this will almost never happen fully. I've been using vim for well over a decade as well, and I think vim follows the design principles I'm trying to convey (I guess poorly). Yes, you need documentation for vim, but vim's beauty is that learning one thing results in learning a dozen things. The hard part of vim is learning the core aspects: command/operator keys and movement keys, a framework that most are not used to. For example, if all you knew was the movements h,j,k,l,w,e and operators d,y then you know 18 things (6 motions plus the 12 operator-motion pairs). We then learn that $ moves to the end of the line, and we didn't learn 1 thing, we learned 3 (the motion itself, plus d$ and y$)! We learn about a new operator c, and we've learned 7 things (c paired with each of the 7 motions)! This is why vim is so powerful. Because you gain knowledge for free. Learning vim doesn't have a linear learning curve, it is super-linear because of this communication that doesn't have to be explicitly stated.

            It's not about "no manual" but maximize the amount of information the user gets "for free". You get to exploit your prior knowledge, not that that prior knowledge is free. So I don't really disagree with your main argument, but I think we are miscommunicating due to priors. I'm an avid reader of docs myself and I highly encourage people to read them[1] and am passionate about documenting things as well. But I am *NOT* advocating for no documentation. Personally I abhor the idea of "don't document, make your code self documenting." It is a false dichotomy because they were never at odds. Document your code *and* make your code as readable as possible. These complement one another and together are more powerful than if either was done alone. I'm quite verbose and so I hope you do not think I'd argue we should avoid reading. That would be quite hypocritical haha. FWIW, I do appreciate you engaging in a longer more nuanced conversation. Even if we continue to disagree I appreciate this and wish more long conversations would happen on HN.

            [0] This is one of a number of things I wish were more formally discussed in software education, as they commonly are in traditional engineering education. Every engineer spends several classes learning to model what happens when things go wrong, and it is stressed how this drives the way you design things.

            [1] A big reason I think people should reach for documents over things like stackoverflow is specifically because documentation is much more likely to create compounding knowledge. Whereas I believe many times these other resources are more about a short and concise answer that's narrow to the problem. Though that isn't to say there aren't fantastic SO posts, blogs, and all that. It is always case dependent.

            • burntsushi 125 days ago
              > My argument is that when you are doing failure analysis that you have to look at the different ways things can fail (I am considering both, and both have been in all my examples).

              Lmao. I've done this. That's my whole point! I've weighed it against the upsides of filtering by default. And I have legions of users who like those defaults.

              Your comments are filled with the conceit of certainty instead of acknowledging trade offs. Your comments are so incredibly condescending.

              • godelski 125 days ago
                How about attacking my argument instead of my character?
                • burntsushi 124 days ago
                  I'm not attacking your character. I'm criticizing your chosen words and communication style. I was extremely careful about that. If you can't fix that, I don't see us moving forward. I'm pointing it out because I perceive your comments as carrying the implication that I'm not doing "real" engineering, given your ranting about programmers these days not being taught how to model "when things go wrong." The idea that I didn't do that is insulting.

                  Besides, I've raised several points that you've completely ignored from my perspective as well. So engaging with you further seems pointless.

                  I will continue to advocate in favor of making bold UX choices by looking at trade offs.

            • camgunz 125 days ago
              > FWIW, I do appreciate you engaging in a longer more nuanced conversation. Even if we continue to disagree I appreciate this and wish more long conversations would happen on HN.

              Same! I really feel like HN should have like a "branch topic" or "move to DM" or something. Is that email (I'm on a big "everything is email/NNTP" thing lately haha)?

              > This is why burntsushi's argument is not meaningful to me and has no potential to sway me, because it is not addressing the argument I'm making. My argument is that when you are doing failure analysis that you have to look at the different ways things can fail.

              Well I think he did this at least as much as you have. If there were some kind of usage study or telemetry on how many times people run rg and then run rg -uuu I guess we'd have something approaching good data on this, but I don't think we do. In the absence of that, it seems like we just have a couple of different viewpoints on ripgrep UX, neither more objective or correct than the other. That doesn't make them invalid, it just means neither is measurably wrong. You might point to IDK, GitHub issues or something, but silent majority and all that.

              > This has everything to do with modes of failure.

              Ooh I think the miscommunication here is when you say "failure". It's not a failure that ripgrep honors .gitignore. It would be if it advertised that it honors .gitignore, but then scanned everything you listed to ignore--that would be a bug. Maybe your issue with these tools is that the names of them shadow their coreutils counterparts, but don't aspire to be drop-in replacements and have significantly different UX? I can understand that. IMO "grep" is both the very specific tool and behavior of grep, but also a generic "look for something in the contents of things" verb. Like, I'd be with you if ripgrep were "ripawk" or whatever, but I think grep has a certain broad abstract meaning at this point. But yeah, fuzzy.

              > Too much output is annoying but is highly unlikely to be catastrophic (especially if our prior is grep).

              Well, grep is really parsimonious by default. I don't worry about putting it in systems where output ends up in log files. I really dislike the modern trend of being maximally verbose by default; IMO that's what -v and -vv and -vvvvvv are for. I get pretty annoyed having to pipe output through grep, and the nuclear irony of piping grep output also through grep would probably destroy me.

              > I think you misread. I didn't alias fd to find, I aliased fd to fd.

              Oh yeah that's what I meant. Whenever I have stuff that has annoying CLI args I build aliases or functions in my shell config, so like:

                  alias cgrep='grep --exclude-dir=.git --exclude-dir=node_modules --exclude-dir=frontend_build --exclude=.gitignore'
              
              That way I still have regular grep hanging around, but when I'm grepping in my source code folders I just use cgrep (actually I think it was pgrep because of where I was working, which is also a little easier to type). Maybe it's a little imperfect, but engineering is the religious practice of celebrating imperfection.

              > it is super-linear because of this communication that doesn't have to be explicitly stated

              Oh I'm 100% with you on this one. Vim has a mental model where once you gain the tiniest bit of fluency you just take off. At this point I can rarely even tell you what commands I'm inputting, or like when things go wrong it's like I tripped over my tongue or stuttered or something haha. I think tools that show you a way of thinking (Lisp is another) are amazing. And moreover, I think they create _culture_. Getting trapped in Vim (or emacs... it doesn't even tell you how to get out!) is cultural, rm -rf'ing / is cultural, reading tons of man pages is cultural, dd -of'ing the wrong device is cultural, and you take different things away from those experiences. I learned that when you're using dd you're in "I'm not fucking around I will destroy everything" mode, and also that that mode exists--in contrast to more gentle or verbose CLI/TUI/GUI tools, and I learned to appreciate both (love dd, love OpenBSD's wizard installer). It also made me realize you can create culture through your work, choosing which values and aesthetics to showcase or mental models to build a tool/app/platform around. In the same way every act is a political act, they're all also cultural, and by necessity spiritual. When I build I put something of my spirit into it, not something ineffable like a soul or whatever, but the culmination of my experience, taste and ethos.

              Maybe that's why engineers get into super heated arguments over the tiniest of things. They're not tiny to us! We feel them deeply.

              > Personally I abhor the idea of "don't document, make your code self documenting."

              Yeah Hillel Wayne had a great post about it [0]. I've been very anti comment a lot of my career, but I've been turned around. I can feel my brain start to sag when I think of a page of code full of huge identifiers like remoteControlWithSendOffExemptionEnabled or wtfever.

              > It's not about "no manual" but maximize the amount of information the user gets "for free".

              I personally don't love little helper affordances or overly verbose output. TFA has the perfect example of Helix showing possible commands as you input, because that would drive me absolutely bonkers. My fingers speak Vim the language, and when I speak English I don't have a box in the bottom right corner of my sight that tells me how the rest of my sentence might go. It's an entirely different part of my brain; it's non-visual.

              I don't want to be bombarded by information. I want the bare minimum to convey your point. My mind is already full of incredible amounts of junk and I am often struggling to marshal enough mental resources to do reasonably good work. If I need more information, I'm capable of asking for it. I'll write a tool myself if I have to.

              I do think there's a tension here between like, someone learning Vim and someone having used Vim for years. A feature like this is maybe really helpful for the former (I'm not super sure, it could end up being kind of a crutch, who knows). Maybe a useful way to put this is I don't want to start from a super loud "wall of sound" information environment. I want to start from silence and turn it up to where I'm comfortable. I want to adjust the mix or EQ different things to raise or lower them. I want to be able to configure different scenes where sometimes I'm very concerned about what things look like in memory, whereas others I'm very interested in the call graph, but I never want to see the call graph when I'm examining the memory because I want to focus on the memory.

              There's a value here of respecting a user's focus and attention that I feel like isn't getting enough play. My strong feeling is don't try to give me information unless I've specifically asked for it. Don't anticipate. Assume I know what I'm doing (at least, allow me that illusion haha).

              [0]: https://buttondown.com/hillelwayne/archive/why-not-comments/

              • godelski 124 days ago

                  > If there were some kind of usage study or telemetry 
                
                This still wouldn't convince me because it's not what my argument is about.

                (Fwiw, I think one of the reasons BS is upset is how frequently the issue is brought up. You can see his frustration in the GitHub issues too. And how he points to it being at the top of the readme and it still doesn't get through to people. Just like a sign on a door saying pull. There's a reason that doesn't work. Why literate people keep failing. Ask why and how it can be prevented.)

                I know I'm miscommunicating but I'm really not sure how. Because it isn't about what most people do. It isn't about what most people want. I know this sounds odd, and maybe that's what's getting in the way.

                Maybe I shouldn't say error because you're thinking the tool has errored? How about user mistake? When a user uses the tool wrong? When something unexpected happens? Any of these will substitute. It's all about when a user does something and it's not what they think they're doing. Similarly I understand how it appears I'm arguing for verbose by default, and in a way I am (I'm not blind to this lol), but I'm also not really. But when talking about verbosity in terms of programs I think we'd need to distinguish the number of words from what kind of words. Though I think it's not uncommon for debug or warning messages to be misclassified as info haha.

                Design is about communication. Any command you enter gives you feedback. Returning nothing or returning too much are feedback, the tool communicating to you. That's why this is less about the majority of cases and user preference. Most of the time I want to filter dotfiles. But I don't always, and that still won't convince me because it's orthogonal to my argument. I think if you can see why, you might understand my argument a bit more. Because you still seem to be focusing on it being about preferences when it's not. It's about the information users get back when they enter commands and what that means and tells the user. I know this sounds like preference, but it isn't. This is why I brought up Norman Doors before. I wouldn't fault someone for thinking a vertical bar on a door that you push is just preference, because we don't talk about this kind of language and just expect people to get it. But it isn't preference, it is bad design. I think if you watch this you'll understand my argument much better (it's 5 minutes haha). Because I think you'll understand why one could say that the way these doors are made is objectively bad even though it looks like it's all about aesthetics

                https://youtu.be/yY96hTb8WgI

                So design isn't just about aesthetics, it is about language. Albeit an unspoken and unwritten one.

                  > Maybe your issue with these tools is that the names of them shadow their coreutils counterparts,
                
                Please see my initial post. I think we've been talking so much that things got lost.

                  > I'm not fucking around I will destroy everything
                
                We shouldn't get rid of sudo. But yes, requiring an explicit action to enter this mode is the design principle I'm discussing.

                  > Hillel Wayne had a great post about it 
                
                I didn't like the post tbh. His issue is better solved by writing a better comment. The comment is terrible and doesn't communicate what it needs to. But most people think they're far better at writing and communicating than they actually are. If in doubt, let my own comments serve as examples lol.

                It also rests on poor assumptions. Ones that work for him if he is the only one using the code, but I think it's absurd to expect that a missing comment will be reliably interpreted as intentional, signaling that the function is knowingly inefficient. That only works if comments are otherwise highly prolific. A new person coming in will assume the comment is simply absent, because that is far more common and the more likely explanation.

                But he's right about not using self documenting code in that cumbersome way. That is, as you say, too much.

                I do apologize. This conversation has been exhausting for me. Not really because of you, but there have been several people I've talked with. I do appreciate your comments and the conversation. I will almost always be happy to engage in long-form and nuanced conversations. So when you see me again don't hesitate to word dump on me lol. Distilling words is very hard, takes a lot of time, and a lot of skill. But I come to HN because I hate Twitter

    • JNRowe 125 days ago
      When vim plugins change the effect of K, all they're likely doing is setting keywordprg¹. You can do the same yourself; for example, to display info on the commit under the cursor you'd use "setl kp=git\ show". You can write little script wrappers to fetch whatever custom resources you want, or you can up your game by writing vim commands so that you can use a count prefix to influence the behaviour of the fetch too.
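
      For instance, a wrapper along these lines (the kdoc name and the docs/ layout are made up, and it ignores K's optional count prefix) prefers project docs and falls back to man pages:

          #!/bin/sh
          # kdoc: hypothetical keywordprg wrapper; in vim: :setl kp=~/bin/kdoc
          topic="$1"
          if [ -e "docs/$topic.md" ]; then
              exec glow -p "docs/$topic.md"   # render the project's markdown docs
          else
              exec man "$topic"               # fall back to the man page
          fi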

      ¹ https://vimdoc.sourceforge.net/htmldoc/options.html#'keyword...

      • godelski 124 days ago
        I've actually found this before but found it hard to parse, and there's little third-party information on the subject. I've been able to get K to show things like the standard Python docs, but not modules, and not docs from the active working project. I'm sure there has to be a way, considering ctags and jumps exist, but I think I need a full weekend to at least unravel all the moving parts without a third-party explanation.
      • setopt 125 days ago
        It’s worth noting that Vim has a lot of these *prg settings you can customize per file type: makeprg, formatprg, equalprg, etc.
  • solatic 126 days ago
    One interesting idea, in the Platform Engineering space (inside companies), is using TUIs to take advantage of credentials that may already be available on the developer's laptop. If you serve an internal app as a webapp, then you either need the webapp to have a service user (icky audit logs) or waste lots of time setting up OAuth-style login flows so the webapp can authenticate as the user (and maybe IT doesn't like the idea anyway). Or, you write something that runs on the dev's laptop and make use of the credentials that are already available locally, easy-peasy. Auth is simple, and you use the audit mechanisms that are already in place.
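
    A minimal sketch of the shape of such a tool (the hostname and the token command are placeholders; substitute whatever identity mechanism your org already uses):

      #!/bin/sh
      # internal CLI reusing the developer's existing local credentials
      # instead of a service user or a bespoke OAuth flow
      TOKEN=$(gh auth token)   # or: gcloud auth print-identity-token, etc.
      curl -fsS -H "Authorization: Bearer $TOKEN" \
          "https://internal.example.com/api/v1/whoami"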
    • maccard 126 days ago
      > waste lots of time setting up OAuth-style login flows so the webapp can authenticate as the user

      Being real - if you can't manage an OAuth library in 2024, you probably shouldn't be deploying these kinds of apps. It's about 30 lines of code in Go, JavaScript or C#, and those are just the languages I've implemented it in.

      • solatic 123 days ago
        You realize adding the library is like maybe 10% of the work?

          * Talk to IT and get your library added to SSO. Make sure both staging and prod.
          * Ensure user deprovisioning works when IT initiates. Prove to IT that the underlying cached tokens no longer work when the user is deprovisioned, even if it's really the upstream system's responsibility
          * Convince InfoSec that the particular OAuth library you chose is legit and doesn't pose a security risk
        
        If you think 90% of implementation isn't getting approvals internally then you clearly don't work for a big company.
  • sandreas 126 days ago
    Great article, especially the awesome tools collection at the bottom. Only a few of my daily drivers are missing:

      # dra - automatically download release assets from github
      # example: dra download -a "dundee/gdu" -I "gdu" --output "$HOME/bin"
      devmatteini/dra
      
      # gdu - disk usage analyzer similar to ncdu but faster
      dundee/gdu
      
      # glow - terminal markdown reader
      charmbracelet/glow
      
      # jless - json viewer
      PaulJuliusMartinez/jless
      
      # lazydocker - terminal docker management ui
      jesseduffield/lazydocker
      
      # lazygit - terminal git management ui
      jesseduffield/lazygit
      
      # rga - ripgrep-all, grep for PDF
      phiresky/ripgrep-all
  • bradgessler 126 days ago
    I’ve been working on https://terminalwire.com to scratch my own itch making it easier to add command-line interfaces into my own web apps.

    If I pull this off, building out a CLI that’s as high quality as GitHub & Stripe’s should be trivial since it won’t require building out a web API and it can be dropped into existing web frameworks.

    It won’t be as fast as a CLI that runs locally, but that’s kinda not the point of terminal apps that primarily interact with web services.

    I have a private beta for folks working on commercial SaaS products that want to deploy a CLI, but don’t want to deal with building out an API.

    • edem 126 days ago
      Why ruby?
  • surfingdino 126 days ago
    The reason why CLI commands are written in C is the fact that the OS is written in C. There is a lot of inertia, because it is more efficient to have the operating system and its tools written in the same language. We may be entering a transitional phase, with new tools being written and old tools rewritten in Rust. Some of the innovation or simplification highlighted in the article reminds me of the chaos of the Unix wars, which led to POSIX. I wouldn't chuck out the tools we have, though. Modernise them, but keep their functionality. What many critics of CLI tools do not realise is that there is a lot of power hiding in that complexity. Familiarise yourself with xargs or parallel and you'll see that quite often learning them gives you results faster than reimplementing them.
  • kazinator 124 days ago
    My Bash prompt is now just $, and I have a status line at the bottom that is protected from scrolling.

    https://www.kylheku.com/cgit/basta/about/

    Something called cdlog for directory navigation:

    https://www.kylheku.com/cgit/cdlog/about/

    Both of the above things are new [2024].

  • yonisto 126 days ago
    In my case I found it 20x easier to hack together a CLI tool that I can easily move around an organization with a mixture of Windows, Macs and Linux. Installation is just a .zst file away.
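
    For example, the install step can be as small as (paths made up):

      zstd -d mytool.zst -o ~/bin/mytool && chmod +x ~/bin/mytool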
  • anthk 126 days ago
    Meh. If anything, for Make, clone this and read the files:

        git clone git://bitreich.org/english_knight
    
    On the rest, nvi/vim and vis (vis is nice, with Sam-like structural regexen) are more than enough: nvi as the uber-fast vi clone with Unicode support, and vim for Common Lisp with Slimv.

    On TUI tools, there's mc, ncdu... those are useful. A lot of them aren't. Compare Finch against an IRC client plus Bitlbee, for instance: swirc + bitlbee is far more usable, while with Finch I always had to do voodoo with the panes.

  • camgunz 125 days ago
    I understand where TFA is coming from here. A lot of these tools are built to handle complex tasks, and thus, while powerful, are complex themselves. It's also true that computing has changed a lot, and we've learned a lot since someone defined how `find` would work; maybe we'd do it the same or maybe we wouldn't, but it's definitely true that things are a lot different now.

    But I think we should be careful before dismissing the existing CLI/TUI landscape. It's a huge achievement that code written in over a dozen scripting languages will just run across different architectures and platforms. A #!/bin/sh script running on a Raspberry Pi, a WiFi router, a $20k server, a $300 Chromebook, or a $200 Pinephone will run exactly the same. That's because the standards and technologies that script relies on are ubiquitous. They don't cater to the top 1% of computer users, and they don't change every 5 years when average screen resolution increases.
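
    For instance, this trivial (made-up) script needs nothing beyond POSIX sh and POSIX utilities, so it behaves identically on every one of those machines:

        #!/bin/sh
        # Warn about any filesystem over 90% full; df -P and awk are both POSIX.
        df -P | awk 'NR > 1 && $5 + 0 > 90 { print $6 ": " $5 " used" }'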

    Which is to say, be careful what you wish for. It's pretty easy to modify your CLI or your semantics when you're not installed on millions of routers across the world [0]. With success comes backwards compatibility concerns, and it's not too long before people start writing blog posts about how your tool needs to "shed historical baggage".

    CLI tools are typically respectful of your resources, resources like:

    - network bandwidth

    - screen size

    - attention

    - CPU/RAM/disk

    We should keep this in mind when we're talking about what defaults make sense. Does enabling LSPs by default mean you have to download a bunch of LSPs you won't use? Does it mean you've gotta maintain a database of LSPs you use/installed? Does it mean I can't use it on a Pinebook Pro without taking 10% off my battery life? Is this core computing infrastructure like grep, find and xargs or something a little more niche like ripgrep or fzf? Does making this interface colorful respect a user's color configuration on their machine (colorblind users, users avoiding blue light, users who setup a desktop theme, etc.)? If this tool generates an error, will it dump 1 error line in my logs or 13? If it generates 100,000 errors because I was processing 100,000 things, will it dump 100,000 error lines in my logs or 1.3 million?
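
    On the color question in particular, the terminal world already has lightweight conventions a tool can honor; a sketch (NO_COLOR is the informal convention from no-color.org, and the -t test just checks for a terminal):

        # Only emit ANSI color when stdout is a terminal and NO_COLOR is unset:
        if [ -t 1 ] && [ -z "${NO_COLOR:-}" ]; then
            printf '\033[31merror:\033[0m %s\n' "something broke"
        else
            printf 'error: %s\n' "something broke"
        fi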

    I'm not saying there are clear answers here. My point is that while TFA argues there are clear answers, I'm saying there aren't. You have to target a use case. Andrew Gallant (ripgrep author, among other bonkers things) says ripgrep deliberately skips files matched by .gitignore by default because that's the use case he was targeting. That's great, and I can totally understand where he's coming from. I could also understand a different tool not doing it for different reasons. Neither is correct or incorrect (aside: as engineers, I think we could be taken more seriously if we stopped trying to argue our aesthetic preferences are correct or optimal or whatever--it's OK to just prefer things). Pick a use case. Pick an aesthetic. Pick a mental model.

    So, yeah write Helix, write new CLI and TUI tools. But don't do it because existing tools are old and busted or fundamentally incorrect (according to you). Do it because you have a different aesthetic preference (you like colors, you like emoji, you like autocomplete, you like WASD as cursor movement). You don't need the backing of righteous engineering gods before you build something you like. Let me have Vim and I'll let you have PyCharm. Let me have Gleam and I'll let you have Go. There's room enough on this disk for both of us.

    [0]: https://daniel.haxx.se/blog/2020/04/15/curl-is-not-removing-...

    • burntsushi 125 days ago
      > (aside: as engineers, I think we could be taken more seriously if we stopped trying to argue our aesthetic preferences are correct or optimal or whatever--it's OK to just prefer things)

      I agree. As the author of ripgrep, I don't view its default behavior as "correct." But I do have an opinion that it is a better user experience. But like, other tools can and do make different choices based on diverging opinions. Which is totally fine. I mostly just get pretty irked when someone tries to tell me that I shouldn't express my opinion at all (through the behavior of tools I build).

      • camgunz 125 days ago
        Yeah, I love that ripgrep has a different opinion on UX than grep (your "Can ripgrep replace grep" FAQ is great [0]), if only because the thought you put into it makes me start thinking about those issues too, which is fun. Like, maybe at first you balk at ripgrep not honoring locales, but then I was like, "wait, why would I ever, ever want that" (quick illustration below). This is the kind of, I don't know, joy? Epiphany? Expansion? ... that we get from people like you just building a thing you think is good.

        [0]: https://github.com/BurntSushi/ripgrep/blob/master/FAQ.md#can...
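
        To make the locale point concrete, something like this (a rough sketch; exact grep behavior varies by build, and the locale names are assumptions):

            # GNU grep's case folding depends on the active locale:
            echo 'NAÏVE' | LC_ALL=C grep -ci 'naïve'            # 0: the C locale can't fold Ï
            echo 'NAÏVE' | LC_ALL=en_US.UTF-8 grep -ci 'naïve'  # 1: the UTF-8 locale can

            # ripgrep is Unicode-aware regardless of locale:
            echo 'NAÏVE' | rg -ci 'naïve'                       # 1, always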

  • throwaway984393 126 days ago
    CLI renaissance, or new dark age? The advent of the Web as the modern application platform of choice destroyed the advancement of graphical user interfaces. We now live in a bizarre world of CLIs when we should be using GUIs.

    Before the Web, and during its rise, there was a vast array of productivity tools designed to allow users to do more work, faster, and better, through graphical interfaces. It would have been ridiculous to release a program to users with only a command line interface. We left the dark ages of terminals behind, and pushed into new territory, advancing what users could do with computers.

    But once browsers developed further capabilities, Web programming began to teach young programmers that the web was the only place that needed a graphical interface, because the web was a "universal" graphical application interface (lol, if you don't count the browser wars).

    This delighted programmers, as they never really liked making graphical interfaces. Logic and functions were more fun to write than user interfaces (which only made the users - not developers - happier).

    This was then hammered home when Markdown was widely adopted for its simplicity, inspiring a sort of text-based Stockholm syndrome. People started to claim bizarre things, like that the command line and Markdown were preferable (or even superior) to GUIs and WYSIWYGs in almost all cases. More languages with no inherent graphics capabilities were adopted, and the devs moved ever further towards text.

    So the web has unintentionally set back computer science and user productivity by decades. Until browsers lose the spotlight, this will probably continue, and non-web GUIs will remain the ugly thing you only write if you have to. Users will continue to languish in these half-baked solutions, slaves to whatever is presented to them. And devs will continue to create text interfaces that only they enjoy.

    Rather than rethinking old ideas and creating new ones, we are simply doubling down on the past.

    • mrkeen 126 days ago
      We figured out that text is the best way to interact. It can be piped between processes, checked into source control, tested, indexed & searched, compressed, archived, logged and diffed.
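
      A throwaway example of what that buys you (paths and log format are invented):

          # Filter, aggregate, and rank errors with nothing but standard tools:
          grep -h 'ERROR' /var/log/app/*.log | sort | uniq -c | sort -rn | head

          # And because it's all text, yesterday's report diffs cleanly against today's:
          diff errors-yesterday.txt errors-today.txt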

      It has accessibility benefits for those using screen-readers.

      It allows for interoperability: if a browser only has to display HTML, that HTML could have been produced by a service built in any programming language. That service in turn could send SQL text to database systems built by different companies.

      It allows for automation. The moment a process is controlled by a user clicking a button is the moment you can no longer automate or throw that action into a loop. Docker vs. InstallShield Wizard.

      As far as graphical testing goes, I tried selenium/webdriver (admittedly a long time ago now) and it was a dumpster fire.

      I think it's a mistake to conflate older with worse. The terminal continuing to last is a sign that it's useful, not that people are stubbornly sticking to it.

    • anthk 126 days ago
      I prefer the 9front/Plan 9 approach: text-bound, more so than Unix itself, but not tied to 80x24 terminals. The windows are resizable, and so is the text inside them, and the strong point is that everything is a file and almost everything can be network-bound.
    • bool3max 126 days ago
      This was all bound to happen anyway: if not due to the "Web", then due to some other platform that would've evolved the same capabilities eventually.
    • pjmlp 126 days ago
      Having been using computers since the mid-1980s: a new dark age in some sense, yeah.
    • Timwi 126 days ago
      Thank you, this really needed saying. I was wondering if I was the only one who saw it.