OpenClaw isn't fooling me. I remember MS-DOS

(flyingpenguin.com)

113 points | by feigewalnuss 3 hours ago

20 comments

  • piker 2 hours ago
    Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms and see no value there whatsoever. Probably negative.
    • stingraycharles 2 minutes ago
      This is being asked on pretty much every OpenClaw thread, and the use cases brought up seem roughly similar: a digital assistant.

      It of course depends heavily on your work, but my work is 50% communication / overseeing, and I simply lose track of everything.

      I don’t give it any credentials of any sort, but I run data pipelines on an hourly basis that ingest into the agent’s workspace.

    • TheDong 2 hours ago
      I find some value as kinda a better alexa.

      I have it hooked up to my smart home stuff, like my speaker and smart lights and TV, and I've given it various skills to talk to those things.

      I can message it "Play my X playlist" or "Give me the gorillaz song I was listening to yesterday"

      I can also message it "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay.

      It having a browser and the ability to run CLI tools, and also understanding English well enough to know that "Give me some Beatles" means to use its audio skill, means it's a vastly better Alexa.

      It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

      • swiftcoder 1 hour ago
        > It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.

        I have a hard time imagining how much better Alexa would have to be for me to spend $180/month on it...

        • miroljub 59 minutes ago
          Just to clarify to people focusing on the $180/month price tag.

          OpenClaw is not a CC-only product. You can configure it to use any API endpoint.

          Paying $180/month to Anthropic is a personal choice, not a requirement to use OpenClaw.

          • ThunderSizzle 35 minutes ago
            So that leads to a question: is there a physical box I could buy and amortize over 5-7 years to be half the API cost?

            In other words, assuming no price increase, 7 years of that pricing is $15k. Is there hardware I could buy for $7k or less that would be able to replace those API calls or alternative subs entirely?

            I've personally been trying to determine if I should buy a new graphics card for my aging desktop(s), since their current cards can't really handle LLMs.

            • TheDong 0 minutes ago
              You can buy a roughly $40k GPU (the H100), which will cost about $100/mo to run, for about 60% of the performance of OpenAI or Anthropic frontier models.

              Over 5 years, that works out to ~$45k vs ~$10k, and during that duration it's possible better open models will become available, making the GPU better, but it's far more likely that the VC-fueled companies advance quicker (since that's been the trend so far).

              In other words, the local economics do not work out well at a personal scale at all unless you're _really_ maxing out the GPU at close to 50% at all times, and you're okay accepting worse results.

              As long as proprietary models advance as quickly as they are, I think it makes no sense to try and run em locally. You could buy an H100, and suddenly a new model that's too large to run on it could be the state of the art, and suddenly the resale value plummets and it's useless compared to using this new model via APIs or via buying a new $90k GPU with twice the memory or whatever.
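              The amortization math above can be sketched quickly; all figures below are the commenter's assumptions (GPU price, power cost, API spend), not measured data:

```python
# Back-of-envelope cost comparison of a local H100 vs. frontier-model
# API spend over a 5-year horizon. All inputs are assumptions taken
# from the comment above, not real quotes.
H100_PRICE = 40_000        # one-time GPU purchase (USD)
POWER_PER_MONTH = 100      # estimated power/hosting cost (USD/month)
API_PER_MONTH = 180        # the $180/month API spend mentioned upthread
MONTHS = 5 * 12            # 5-year horizon

local_total = H100_PRICE + POWER_PER_MONTH * MONTHS   # 40,000 + 6,000
api_total = API_PER_MONTH * MONTHS                    # 10,800

print(local_total, api_total)  # 46000 10800
```

Even before quality differences, the local route costs over four times as much at this usage level, which is the commenter's point about personal-scale economics.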

            • ekidd 14 minutes ago
              You can't realistically replace a frontier coding model on any local hardware that costs less than a nice house, and even then it's not going to be quite as good.

              But if you don't need frontier coding abilities, there are several nice models that you can run on a video card with 24GB to 32GB of VRAM. (So a 5090 or a used 3090.) Try Gemma4 and Qwen3.5 with 4-bit quantization from Unsloth, and look at models in the 20B to 35B range. You can try before you buy if you drop $20 on OpenRouter. I have a setup like this that I built for $2500 last year, before things got expensive, and it's a nice little "home lab."

              If you want to go bigger than this, you're looking at an RTX 6000 card, or a Mac Studio with 128GB to 512GB of RAM. These are outside your budget. Or you could look at Mac Minis, a DGX Spark or Strix Halo. These let you run bigger models, mostly much slower.

            • rcxdude 14 minutes ago
              For something the size of Claude, probably not. But for smaller models, maybe (though they also are much cheaper to buy tokens for)
        • vovavili 52 minutes ago
          I do see how a very busy businessman or a venture capitalist would gladly pay $180/month to offload chores and mundane work from his schedule. That comes down to $6/day, which probably matches his daily coffee budget.
          • ThunderSizzle 33 minutes ago
            Chores, yes. If there was a $180/month service where ALL my family's chores could be accomplished, I'd consider it.

            That means picking up and cleaning the house after 3 kids and a dog. Grocery shopping. Dishes. Laundry. Chores.

            Tech crap? Nope.

            • vovavili 3 minutes ago
              I would imagine that the list of digital chores of a very busy businessman is a bit more extensive. Even in your list, grocery shopping is something that becomes digital once you're high enough in income.
      • retired 2 hours ago
        > It only costs me like $180 a month in API credits

        In The Netherlands you can get a live-in au-pair from the Philippines for less than that. She will happily play your Beatles song, download the Titanic movie for you, find your Gorillaz song and even cook and take care of your children.

        It's horrible that we have such human exploitation in 2026, but it does put into perspective how much those credits are if you can get a real-life person doing those tasks for less.

        • vovavili 6 minutes ago
          Machines don't get tired, don't have to sleep, don't face principal-agent problems and can accumulate Skill.md instructions for decades without getting replaced. I definitely see the potential of something like OpenClaw for those who can afford.
        • quietbritishjim 1 hour ago
          I'm surprised to read that. Here in the UK, having a live-in au pair doesn't excuse you from paying the minimum wage for all the hours that they're working (approx $2300/month for a 35 hour week). You can deduct an amount to account for the fact that you're providing accommodation but it's strictly limited (approx $400/month).
          • retired 1 hour ago
            From what I can see online, the average compensation that an au-pair in The Netherlands receives is 300 euro per month, with living expenses being covered by the family. There is no minimum wage requirement for au-pairs like in the UK or the US.
            • spockz 32 minutes ago
              The added cost of having an additional person to provide room and food for way exceeds that €300/month. Especially, when taking into consideration that you might have to extend/renovate the house to lodge another person. Adding an extra bedroom and possibly bathroom is not cheap.
              • jjcob 17 minutes ago
                Even if you assume the cost of lodging was 1000€ (which it isn't) then the au-pair would still be significantly underpaid.

                A normal full time employee costs at least 2000€ a month (salary, tax, pension plan, health insurance, etc). If you are paying less than that you are definitely exploiting them.

            • throwthrowuknow 22 minutes ago
              So in reality you’re paying for their food, electricity and heat, letting them rent a room for free, and allowing them the use of the other facilities in your home and on top of that you’re giving them a spending allowance of 300 euro.
            • aianus 58 minutes ago
              A semi-skilled English-speaking customer service agent in PH makes less than $700 a month to put this into perspective.

              Working abroad is a totally reasonable proposition compared to working in the Philippines.

          • swiftcoder 1 hour ago
            The Netherlands has a weird and exploitative setup where you can classify your au pair as a "cultural exchange", and then pay them literal peanuts (room and board plus a token amount of "pocket money")
            • __alexs 1 hour ago
              Another weird cultural quirk of the Dutch that will hopefully go the way of Zwarte Piet one day.
          • redsocksfan45 23 minutes ago
            [dead]
        • kombine 1 hour ago
          We shouldn't have to "import" people from poorer countries to do the mundane tasks we got too lazy to do ourselves.
          • grosswait 18 minutes ago
            The concept of having this kind of help is totally foreign to me, but with the exception of one, every family I’ve encountered that had an au pair has been two very busy, high-earning parents, neither of them lazy. I think you could argue that perhaps priorities have been misplaced, but not lazy.
        • cameronh90 31 minutes ago
          You're paying the au pair partly in accommodation, food, bills and a visa. The visa isn't coming out of your bank account, but it's definitely part of the incentive, so you could see it as a government subsidy.

          For comparison, a full time "virtual assistant" with fluent English from the Philippines costs upwards of $700/month nowadays.

        • DrewADesign 1 hour ago
          Surely that’s subsidized?

          A lot of people in the Silicon Valley area spend that much ($6/day) on coffee. What they don’t realize is how out of touch they are in thinking that makes sense for the rest of the fucking world. $180/mo is about 5% of the median US per capita income. It’s not going to pick your kids up from school, do your taxes, fix your car, or do the dishes. It’s going to download movies and call restaurants and play music. It’s a hobby: a high-touch leisure assistant that costs a lot of money.

          • duskdozer 53 minutes ago
            They aren't selling it to the median US earner. They're selling it (and trying to generate FOMO) to the out of touch people so that it becomes so entrenched that the median earner will be forced to use it in some capacity through their interaction with businesses, schools, the government, etc.
        • CalRobert 52 minutes ago
          How is that remotely possible without committing enormous violations of labor law?
        • _zoltan_ 1 hour ago
          I doubt this is true in .nl. 180 a month is low for a live-in au-pair.
        • huflungdung 1 hour ago
          [dead]
      • tikotus 2 hours ago
        I don't want to be judgemental, but I do find it funny that you're paying $180 for this convenience, and use it to pirate movies.
        • llmocallm 1 hour ago
          Then allow me to be judgemental in your stead. I've done a similar setup as the above and completely locally. I dunno how they're paying so much, but that's ridiculously overpriced.
        • TeMPOraL 1 hour ago
          It's not the only thing they're doing with it. I mean, the logic is sound - $180 goes into automating a bunch of manual processes in personal life, one of which is getting movies, which in some cases involves going out on the high seas.
        • LeCompteSftware 1 hour ago
          Let's also point out the $180 is going to a hideously evil AI company which pirated millions of books and movies.
      • puelocesar 2 hours ago
        180 grand a month for a PA is a lot of money. But I guess each person has their own priorities. I mean, I could pay for a very fancy gym at that price instead of the shitty popular one I go to, which would probably improve my well-being much more than asking it to play Gorillaz
        • quietbritishjim 2 hours ago
          "a grand" means a thousand (dollars or pounds or whatever). $180k / month really would be a lot of money. I'd be your PA for that!
      • bluedel 1 hour ago
        Am I right to be a little concerned by the phrase "it'll go straight to the pirate bay"?

        Not to be a narc or anything, but is OpenClaw liable to just perform illegal acts on your behalf just because it seemed like that's what you meant for it to do?

        • jappgar 13 minutes ago
          Seems like the only people using pirate bay in 2026 are "privacy obsessed" rich middle-aged guys.

          I think they do it mostly to feel young and edgy.

      • jappgar 16 minutes ago
        You're spending $180 a month on tokens and still refusing to buy media like Titanic?
      • qsera 57 minutes ago
        I have the almost same thing using a network connected raspberry-pi and no AI.
      • Hendrikto 1 hour ago
        $180/month to queue playlists does not “seem okay” at all. We must be living in different worlds.
      • bigger_fish 54 minutes ago
        [dead]
    • pizza234 10 minutes ago
      > Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks?

      Mostly (but of course, not exclusively), porn for the techies. Receiving a phone notification every time a PR is opened on a project of yours? Exciting or sad, depends on one's outlook on life.

      • moffkalast 2 minutes ago
        I thought emails from github already did that?
    • vbezhenar 1 hour ago
      Many wealthy people use human assistants to offload mundane work.

      This is a cheap replacement for ordinary people.

      It's going to be big. But probably it's best to wait for Google and Apple to step up their assistants.

      • piker 1 hour ago
        Yes, and that's because the workflow of those people generally requires managing a crazy, dynamic schedule including travel, meetings, comms, etc. Those folks need real humans with long-term memories and incentives to establish trust for managing these high-stakes engagements. Their human assistants might find these things useful, but there's zero chance Bill Gates is having an AI schedule his travel plans or draft his text messages.

        OTOH, this isn't an issue for "ordinary people". They go to work, school, children's sports events, etc. If they had an assistant for free, most of them would probably find it difficult to generate enough volume to establish the muscle memory of using them. In my own professional life, this occurred with junior lawyers and legal assistants--the juniors just never found them useful because they didn't need them even though they were available. Even the partners ended up consolidating around sharing a few of them for the same reason.

        Down in this thread someone mentions it being an advanced Alexa, which seems apt. Yes, a party novelty but not useful enough to be top of mind in the everyday workflow.

        • Terr_ 9 minutes ago
          Side rant: A disproportionate amount of AI assistant marketing involves scenarios that look middle class, but actually require customers wealthy enough to risk money on errors. Like buying the wrong thing, or even buying the right thing at the wrong price.
        • nainachirps_ 1 hour ago
          I am ordinary people. I have ADHD. I have been dying for assistance in scheduling and planning. Am not employed enough to afford hiring a human yet. Am hopeful these will reach maturity for me to be able to host one on my own device. Or find a private provider with a good security model and careful data handling.
          • user_7832 44 minutes ago
            Not +1, but +100 to your comment (fellow ADHD'er here). Even a virtual friend who'd help me stay on track would be excellent, and if I had a physical human assistant... that would legitimately make many aspects of my life much better. (Simple example: I could ask them to nag me to exercise.)
        • vbezhenar 1 hour ago
          Going to the shop and buying groceries is not hard work. But I don't do that since delivery became available. I'm lazy and delivery is free. Same for ordinary people needs. It's not a big deal to manage my life, but if I can avoid doing that for free, that's probably what I'll do. For $200? Not sure. For $20? Absolutely. So the question is already about price.
          • spockz 41 minutes ago
            Off-Topic: Are you sure delivery is free? When comparing prices online vs my local supermarket of the same brand, online prices trend higher. Locally the store also has more products on sale than available online. Only recently online shopping has become slightly cheaper because they now have “bulk” deals for 5-20% discount.
      • andai 40 minutes ago
        I'm not sure how solvable it is. It only takes one screw up to ruin the reputation, and a screw up is basically guaranteed.

        The tech has existed for a while but nobody sane wants to be the one who takes responsibility for shipping a version of this thing that's supposed to be actually solid.

        Issues I saw with OpenClaw:

        - reliability (mostly due to context mgmt), esp. memory, consistency. Probably solvable eventually

        - costs, partly solvable with context mgmt, but the way people were using it was "run in the background and do work for me constantly" so it's basically maxing out your Claude sub (or paying hundreds a day), the economics don't work

        - you basically had to use Claude to get decent results, hence the costs (this is better now and will improve with time)

        - the "my AI agent runs in a sandboxed docker container but I gave it my Gmail password" situation... (The solution is don't do that, lol)

        See also simonw's "lethal trifecta":

        >private data, untrusted content, and external communication

        https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

        The trifecta (prompt injection) is sorta-kinda solved by the latest models from what I understood. (But maybe Pliny the liberator has a different opinion!)

      • eloisant 47 minutes ago
        $180 a month is huge for "ordinary people".

        So I guess that leaves the in-between people who don't care about spending $180 every month but don't have any personal staff yet or even access to concierge services.

      • lionkor 22 minutes ago
        Those human assistants can be held accountable.
      • LeCompteSftware 25 minutes ago
        The problem is that if you're wealthy enough to hire someone to do your errands, those errands likely aren't very mundane - the exception is a socialite giving their friend a low-effort job, but executive assistants are paid well because their jobs are cognitively demanding.

        OTOH a lower-middle-class Joe like me really does have a lot of mundane social/professional errands, which existing software has handled just fine for decades. I suppose on the margins AI might free up 5 minutes here or there around calendar invites / etc, but at the cost of rolling snake eyes and wasting 30 minutes cleaning up mistakes. Even if it never made mistakes, I just don't see the "personal assistant" use case really taking off. And it's not how people use LLMs recreationally.

        Really not trying to say that LLM personal assistants are "useless" for most people. But I don't think they'll be "big," for the same reason that Siri and Alexa were overhyped. It's not from lack of capability; the vision is more ho-hum than tech folks seem to realize.

    • littlecranky67 35 minutes ago
      I can see value in a smarter email-inbox sorting algorithm - but only because all major players (except Google, which I don't trust with my mail) have abandoned Bayesian email filtering with training. This was standard in 2005 in clients as basic as the Opera browser, but somehow we lost this technology along the way.
      • easygenes 12 minutes ago
        I was an original Thunderbird pre-1.0 (from 2003) user and prior to that, Netscape Mail, and am quite certain it has had bayesian spam filtering all this time, at least since the late ‘90s. That was a headline feature in the early days. My first email account used POP3 through a shared web host for my own domain in that era.

        Edit: Yes it’s still there https://support.mozilla.org/en-US/kb/thunderbird-and-junk-sp...

      • Terr_ 15 minutes ago
        I can't recall the name, but I vaguely remember a Bayesian spam filter for arbitrary POP3 accounts in the 2000s that had a local web frontend, and how excited I was at its effectiveness.

        I believe that the shift from "my one computer" to multiple clients (computer + phone + webmail) probably has something to do with it. Even with IMAP sharing state, you still don't have a great way to see and control the filtering, except by moving things in/out of spam folders.
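        For anyone who never saw those 2000s-era filters from the inside, here is a toy naive-Bayes scorer in the spirit of that technique; the training strings and the spam/ham split below are invented purely for illustration:

```python
import math
from collections import Counter

# Toy naive-Bayes spam scorer, sketching the classic technique the
# comments above discuss. Training data is made up for illustration.
spam_docs = ["buy cheap pills now", "cheap viagra buy now"]
ham_docs = ["meeting notes attached", "lunch tomorrow at noon"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam_docs), train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(word, counts):
    # Laplace (add-one) smoothing so unseen words don't zero the score
    total = sum(counts.values())
    return math.log((counts[word] + 1) / (total + len(vocab)))

def is_spam(text):
    s = sum(log_prob(w, spam_counts) for w in text.split())
    h = sum(log_prob(w, ham_counts) for w in text.split())
    return s > h

print(is_spam("buy cheap now"))    # True
print(is_spam("meeting at noon"))  # False
```

Real filters of that era added per-user training from the spam/not-spam buttons, which is exactly the state-sharing problem across multiple clients that the comment describes.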

    • jstummbillig 45 minutes ago
      > letting an AI into my comms

      Idk, it's strange for me to think of it that way. It's tech. If it does something useful, that's cool.

      Data protection is always a consideration. I just don't consider an LLM to be a special case or a person, the same way that I don't have strong feelings about "AI" being applied in Google search since forever. I don't have special feelings or get embarrassed by the thought of an LLM touching my mails.

      Right now for me, agentic coding is great. I have a hard time seeing a future where the benefits that we experience there will not be more broadly shared. Exploration in that direction is how we get there.

      • piker 33 minutes ago
        My issues aren’t really with privacy so much as what the failure modes look like, and, more fundamentally, with becoming a passenger to my own life.
      • rowanG077 38 minutes ago
        The problem for me is not the LLM reading it. The problem is the company behind it can most likely recover the sessions. That is a problem since they could share it with whomever they want. Even if they are fully incorruptible, it's also not uncommon that they simply get hacked and all this data ends up on the open market.
    • ZeroGravitas 1 hour ago
      I see the appeal, but I also see the risks.

      If you ignore the risks I don't see why it's hard to see value.

      The AI can read all your email, that's useful. It can delete them to free up space after deciding they are useless. It can push to GitHub. The more of your private info and passwords you give it the more useful it becomes.

      That's all great, until it isn't.

      Putting firewalls in place is probably possible and obviously desirable but is a bit of a hassle and will probably reduce the usefulness to some degree, so people won't. We'll all collectively touch the stove and find out that it is hot.

    • surgical_fire 3 minutes ago
      No.

      But I am someone who, for example, dislikes home automation. You know that thing where you ask Alexa to open your curtains? I think that is cringe af.

      Maybe there's an overlap with the crowd that likes that.

    • andai 45 minutes ago
      It's pretty much just Claude Code, except hooked up to your Telegram / WhatsApp / iMessage.

      I don't know why they don't make an official integration for it. Probably cause they're already out of GPUs lol

    • pjmlp 52 minutes ago
      Same here, I care to the extent I am obligated to, and staying relevant for finding a job.
    • _pdp_ 2 hours ago
      There is value but it is hard to discover and extract outside of a few known areas - like coding, etc.
      • piker 2 hours ago
        Yes, I can see the (potential) value in working with agents in software development. The “claw” movement, as I understand it, suggests value in less constrained access to my inbox, personal messages, calendar etc, like some sort of PA. It’s hard to quantify how much damage a bad PA can do to someone’s personal and professional life, so if my understanding is correct, this seems like a dead end.
        • _pdp_ 1 hour ago
          I posted this comment in another thread so reposting it here because it seems to be on topic.

          ---

          IMHO, the biggest problem with OpenClaw and other AI agents is that the use-cases are still being discovered. We have deployed several hundred of these to customers and I think this challenge comes from the fact that AI agents are largely perceived as workflow automation tools so when it comes to business process they are seen as a replacement for more established frameworks.

          They can automate but they are not reliable. I think of them as work and process augmentation tools but this is not how most customers think in my experience.

          However, here are several legit use-cases that we use internally which I can freely discuss.

          There is an experimental single-server dev infrastructure we are working on that is slightly flaky. We deployed a lightweight agent in Go (a single 6MB binary) that connects to our customer-facing API (we have our own agentic platform), where the real agent is sitting and can be reconfigured. The agent monitors the server for various health issues. These could be anything from stalled VMs to unexpected errors. We use Firecracker VMs in a very particular way and we don't know yet the scope of the system. When such situations are detected, the agent automatically corrects the problems. It keeps a log of what it did in a reusable space (a resource type that we have) under a folder called learnings. We use these files to correct the core issues when we have the time to work on the code.

          We have an AI agent called Studio Bot. It exists in Slack. It wakes up multiple times during the day. It analyses our current marketing efforts and, if it finds something useful, it creates the graphics and posts to be sent out to several of our social media channels. A member of staff reviews these suggestions. Most of the time they need to follow up with subsequent requests to change things and finally push the changes to Buffer. I also use the agent to generate branded cover images for LinkedIn, X and Reddit articles in various aspect ratios. It is a very useful tool that produces graphics with our brand colours and aesthetics, but it is not perfect.

          We have a customer support agent that monitors how well we handle support requests in Zendesk. It does not automatically engage with customers. What it does is supervise the backlog of support tickets and chase the team when we fall behind, which happens.

          We have quite a few more scattered in various places. Some of them are even public.

          In my mind, the trick is to think of AI agents as augmentation tools. In other words, instead of asking how I can take myself out of the equation, the better question is how I can improve the situation. Sometimes just providing more contextually relevant information is more than enough. Sometimes you need a simple helper that owns a certain part of the business.

          I hope this helps.

    • onchainintel 2 hours ago
      It all depends on what you do, aka your use case. If you're in the content creation business, which is part of my responsibilities, then yes, it has been massively helpful. For other roles, I can see absolutely no use case or benefit. Context matters, like with everything.
    • rimliu 49 minutes ago
      I am also surprised by the number of people willing to outsource their lives.
    • mathgladiator 1 hour ago
      Agent environments like OpenClaw are in the toy phase, and OpenClaw is teaching people how to build things with agents in a toy-like and unreliable way. I used my understanding of OpenClaw to build scalable + secure + auditable agent infrastructure in my platform such that I can build products that other people can use.
      • bayindirh 1 hour ago
        We had better agent infrastructures (namely JADE) back in the day. I worked with them, and now these things look like flimsy 50¢ plastic toys to me, too.
    • dankobgd 1 hour ago
      no, it's only for scammers
    • iugtmkbdfil834 2 hours ago
      Eh, buddy says he uses them for his network and, apparently, some light IT maintenance for his family members. So far it seems to be working for him. I am not that brave.
  • stared 1 hour ago
    I don’t get this OpenClaw hype.

    When people vibe-code, usually the goal is to do something.

    When I hear people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleting emails or so), and security (LLM attacks).

    • Someone 9 minutes ago
      In the early 1980’s, what did people use home computers such as Ataris and Commodore 64s for? Mostly playing games; nerds also used their computer with the goal seeming to be… using their computer.

      It wasn’t (only) that, though; they also learned, so that, when people could afford to buy computers that were really useful, there were people who could write useful programs, administer them, etc.

      Same thing with 3D printers a decade or so ago. What did people use them for? Mostly tinkering with hard- and software for days to finally get them to print some teapot or rabbit they didn’t need or another 3D printer.

      This _may_ be similar, with OpenClaw-like setups eventually getting really useful and safe enough for mere mortals.

      But yes, the risks are way larger than in those cases.

      Also, I think there are safer ways to gain the necessary expertise.

    • d0gsg0w00f 41 minutes ago
      I have OC on a VPS. So far it's a way for me to play with non-Claude models and try to get them to get OC under control. So far I'm about $200 all in and OC is still not under control. Every few weeks it goes on an ACP bender and blows my credits in hidden sub-agents for no damn reason. I'm determined to break this horse though, it's like a fun video game with a glitchy end boss.
    • eloisant 42 minutes ago
      The idea is to get a virtual personal assistant. Like Siri or Gemini but with access to all of your accounts, computers, etc. (Well whatever you give it access to). Like having a butler with access to your laptop.

      From what I understand, the main appeal isn't the end result; the hobby of building that AI personal assistant is the appeal.

    • SlinkyOnStairs 48 minutes ago
      The main "sales pitch" appears to be "You can have the computer do things for you without having to learn how to use a computer" (at the cost of now having to learn how to use a massively overcomplicated and fundamentally unreliable system; it's just an illusion of ease of use.)

      The thread's linked article is about comparing MS-DOS' security, but the comparison works on another level as well: I remember MS-DOS. When the very idea of the home/office computer was new. When regular people learned how to use these computers.

      All this pretension that computers are "hard to use", that LLMs are making the impossible possible, it's all ahistoric nonsense. "It would've taken me months!" no, you would've just had to spend a day or two learning the basics of python.

      • stared 35 minutes ago
        I was one of those using MS-DOS (I still remember the blue Norton Commander). I didn't understand people mocking it later - it just worked. Enough to run Prince of Persia, Doom or so. Or edit text files. (As an excuse: I was only ~7 years old back then.)
    • leonidasrup 1 hour ago
      OpenClaw, the ultimate arbitrary code execution
    • thenthenthen 1 hour ago
      To me openclaw sounds like a software clickfarm?
  • pantulis 18 minutes ago
    This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went for some cloud hosted models. The thing works as advertised, but token consumption is through the roof. Of course YMMV depending on the LLM you choose.

    But my main takeaway is that from a security standpoint this is a ticking bomb. Even under Docker, for these things to be useful there is no way around giving the agent credentials and permissions stored on your computer, where it can access them. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter), and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents of unknown origin. And to top it off, don't forget that the agent itself can malfunction and, say, remove all your email inboxes by mistake.

    Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.

  • repelsteeltje 1 hour ago
    One could argue that the discussion is once again about tech debt.

    Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO / FOMO sentiments.

    And of course, neither will be able to evolve to match its eventual real-world context. But for some time (much longer than intended), that's where it will live.

    • TeMPOraL 1 hour ago
      OpenClaw was an inevitability. An obvious idea that predates LLMs. It took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was subscription pricing introduced with Claude Code; before, people were extremely conservative with tokens.

      But the point is, OpenClaw is just the first that got lucky and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.

    • leonidasrup 1 hour ago
      OpenClaw, the ultimate example of Facebook's motto "Move Fast and Break Things"
    • Schlagbohrer 1 hour ago
      It worked to launch the creator into a gig at OpenAI.

      Similar YOLO attitude to OpenAI's launch of modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative responsible thinking. That's where government regulation comes in.

    • Earw0rm 29 minutes ago
      MS-DOS and similar single-user OSes were not originally designed for networked computers with persistent storage. A different set of constraints.
  • nryoo 1 hour ago
    $180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.
    • UqWBcuFx6NV4r 1 hour ago
      This comparison is dishonest, and you know that it is. This is coming from someone that uses Home Assistant and wouldn’t touch OpenClaw with a 10 foot pole. If I had a horse in this race it’d be your horse, but to pretend that these achieve the same goals is just… not in the spirit of an actual discussion.
      • albatrosstrophy 1 hour ago
        Kindly elaborate? Coming from someone who still uses AI mainly to draft emails and a Raspberry Pi as a sandboxed automation project.
  • ymolodtsov 19 minutes ago
    I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.

    It's like an actual assistant. Now, most of this can be done inside ChatGPT/Claude/Codex. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.

  • saidnooneever 19 minutes ago
    DOS didn't have certain protections because the hardware it targeted didn't have them. UNIX ports to the same machines had no such protections either. On the 8086 there were no CPU rings, no virtual memory, and no other features to help there.

    Memory isolation is enforced by the MMU. It's not something software can provide on its own.

    Maybe you were thinking of Linux, which came later and landed in a soft 32-bit x86 bed with CPU rings and page tables / virtual memory ("protected mode", named for that reason...).

    That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.

  • electroglyph 2 hours ago
  • nopurpose 2 hours ago
    I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to have a closer look.

    I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.

    • mkesper 1 hour ago
      When the agent uses your GH credentials to nuke all your projects or put out a lot of crap, this separation will not save you.
      • nopurpose 1 hour ago
        whitelisting `gh` args should solve it. Even opencode's primitive permission system allows that.
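        To make the idea concrete (this is not opencode's actual permission syntax, which I haven't checked; just a generic shell sketch, and the whitelist entries are illustrative): a wrapper that only forwards approved read-only `gh` subcommands to the real binary.

        ```shell
        # gh_safe: forward only whitelisted read-only gh invocations;
        # anything else (repo delete, auth logout, ...) is blocked.
        gh_safe() {
          case "$1 $2" in
            "pr list"|"pr view"|"issue list"|"repo view")
              command gh "$@" ;;              # approved: hand off to the real gh
            *)
              echo "blocked: gh $*" >&2       # everything else is refused
              return 1 ;;
          esac
        }
        ```

        The agent gets `gh_safe` on its PATH instead of `gh`, so the credential never meets a destructive subcommand.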
  • teach 1 hour ago
    This isn't especially related to the article, but when I was at university my first assembler class taught Motorola 680x0 assembly. I didn't own a computer (most people didn't), but my dorm had a single Mac that you could sign up to use, so I did some assignments on that.

    Problem is, I was just learning and the Mac was running System 7. Which, like MS-DOS, lacked memory protection.

    So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.

    I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.

    Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.

    To this day, memory protection still feels a little luxurious.

  • falense 1 hour ago
    Very cool project! I am working on something similar myself. I call mine TriOnyx. It's based on Simon Willison's lethal trifecta. You get a star from me :D

    https://www.tri-onyx.com/

  • tomasol 41 minutes ago
    I believe the codegen must be separated from the runtime. Every time you ask AI for a new task, it must be deployed as a separate app with the fewest privileges possible, potentially with manual approvals as the app is executing. So essentially you need a workflow engine.
  • LudwigNagasena 1 hour ago
    And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.

    I remember Apple introducing sandboxing for Mac apps and extending the deadlines because no one was implementing it. AFAIK, many developers still don't release their apps there simply because of how limiting it is.

    Ironically, the author suggests installing his software by curl'ing it and piping it straight into sh.

  • Schlagbohrer 1 hour ago
    Why am I totally unable to understand this post? I have been a long-time computer user, but this has way too much jargon for me.
    • wccrawford 57 minutes ago
      There's a difference between using a thing and understanding how it works. There's a lot of stuff in this that references things only hardware and software creators are going to understand, and only if they're deep enough into their craft.

      "Interrupts", for example, are an old concept that is rarely talked about anymore until you get into low-level programming. At a high level, you don't even think about them, let alone talk about them.

      • khalic 32 minutes ago
        cries in rust interrupts
  • sriku 1 hour ago
    "Fast" is not always a virtue and "efficiency" is not always the only consideration.
  • TacticalCoder 2 minutes ago
    > curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.

    Packages shipped as part of Linux distros are signed. Official Emacs packages (though not the ones bundled with the default Emacs install) are all signed too.

    I thankfully see some projects released outside of distros that are signed by the author's private key. Some of these keys I have saved (and archived) for years.

    I've got my own OCI containers automatically verifying signed hashes against known authors' past public keys (i.e. I don't blindly trust a brand-new signing key the way I trust one I know the author has been using for 10 years).

    Adding SHA hashes pinning to "curl into bash" is a first step but it's not sufficient.

    Properly shipped software doesn't just pin hashes into shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.

    Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS as in "That Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".

    Is it hard to understand that signing a hash (that you can then PIN) with a private key that's on an airgapped computer is harder to hack than an online server?

    We see major hacks nearly daily now. The cluestick is hammering your head, constantly.

    When shall the clue eventually hit the curl-basher?

    Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10 years old private key that has never left an airgapped computer".

    Here, a fucking cluestick for the leftpad'ers:

    https://wiki.debian.org/Keysigning

    (btw Debian signs the hash of testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)
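    For reference, the `ssh-keygen -Y` flow the quoted installer describes looks roughly like this (a minimal sketch; the file names, namespace, and identity `release@example.com` are illustrative, not taken from any real project):

    ```shell
    set -e
    tmp=$(mktemp -d); cd "$tmp"

    # Author side: generate an ed25519 signing key and sign the release artifact.
    ssh-keygen -t ed25519 -f release_key -N '' -q
    echo 'echo "installer payload"' > install.sh
    ssh-keygen -Y sign -f release_key -n file install.sh     # writes install.sh.sig

    # User side: pin the author's public key in an allowed_signers file
    # (format: "principal key-type base64-key"), then verify fail-closed.
    printf 'release@example.com %s\n' "$(cut -d' ' -f1-2 release_key.pub)" > allowed_signers
    ssh-keygen -Y verify -f allowed_signers -I release@example.com -n file \
      -s install.sh.sig < install.sh && echo VERIFIED
    ```

    The point of the rant stands: the verification only helps if `allowed_signers` was obtained out of band, not from the same possibly-compromised site serving the script.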

  • trilogic 2 hours ago
    Great article. I've been skeptical since the beginning of these Python "CLI" agents. I've been looking for a local, AI-driven agentic GUI that offers real privacy but couldn't find one anywhere. Finally, what we'd call a truly local CLI agent pipeline, AI-driven with the llama.cpp engine, is done. Just pure Bash and C++: model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users, wrapping all the rest? This is exciting.
  • pointlessone 1 hour ago
    Wow. Much security.

    I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nvm

  • 2muchcoffeeman 1 hour ago
    [dead]
  • maxbeech 3 hours ago
    [dead]