33 comments

  • hardsnow 21 hours ago
    I’ve been developing an open-source version of something similar[1] and have used it quite extensively (well over 1k PRs)[2]. I’m definitely a believer in the “prompt to PR” model. It's very liberating to not have to think about managing agent sessions. It seems you've built a lot of useful tooling (e.g., session videos) around this core idea.

    Couple of learnings to share that I hope could be of use:

    1) Execution sandboxing is just the start. For any enterprise usage you also want fairly tight network egress control, to limit the chances of accidental leaks or malicious exfiltration if there's any risk of untrusted material getting into the model context. Speaking as a decision maker at a tech company: we do actually review stuff like this when evaluating tools.

    2) Once you have proper network sandboxing, you could secure credentials much better: give agent only dummy surrogates and swap them to real creds on the way out.

    3) Sandboxed agents with automatic provisioning of a workspace from git can be used for more than just development tasks. In fact, it might be easier to find initial traction with more constrained and thus predictable tasks, e.g., “ask my codebase” or “debug CI failures”.
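    The credential-surrogate idea in (2) can be sketched roughly like this (a minimal illustration with made-up names, not any particular product's API):

```python
# Sketch: the sandboxed agent only ever sees a dummy token; an egress
# proxy holds the vault (dummy -> real) outside the sandbox and swaps
# the credential on outbound requests.
def swap_credentials(headers: dict, vault: dict) -> dict:
    out = dict(headers)
    scheme, _, token = out.get("Authorization", "").partition(" ")
    if token in vault:
        # Replace the surrogate with the real credential at the egress boundary.
        out["Authorization"] = f"{scheme} {vault[token]}"
    return out

# The agent's request carries the dummy; only the proxy knows the real key.
sent = swap_credentials({"Authorization": "Bearer dummy-abc"}, {"dummy-abc": "real-xyz"})
```

    A leaked dummy token is then useless outside the sandbox, since only the egress proxy can resolve it.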

    [1] https://airut.org [2] https://haulos.com/blog/building-agents-over-email/

    • willydouhard 20 hours ago
      Willy from Twill here.

      I love the idea of emailing agents like we email humans! Thank you for sharing your learnings:

      1. Network constraints vary quite a bit from one enterprise customer to another, so right now this is something we handle on a case-by-case basis with them.

      2. We came to the same conclusion. For sensitive credentials like LLM API keys, we generate ephemeral keys so the real keys never touch the sandbox.

      3. Totally right, we support constrained tasks too (ask mode, automated CI fixes). We've gone back and forth on whether to go vertical-first or stay generic. We're still figuring out where the sweet spot is. The constrained tasks are more reliable today, but the open-ended ones are where teams get the most leverage.

  • blacksoil 4 hours ago
    Claude Code has a version that runs in the cloud. You grant it access to GitHub, and then you can tell it to make changes and create PRs from your phone, tablet, or desktop. I'm curious: what makes this different from that?
    • willydouhard 2 hours ago
      The main difference is that you can pick and combine coding agent CLIs (Claude Code, Codex, OpenCode). There is no vendor lock-in.
  • 2001zhaozhao 20 hours ago
    24/7 running coding agents are pretty clearly the direction the industry is going now. I think we'll need either on-premises or cloud solutions, since obviously if you need an agent to run 24/7 then it can't live on your laptop.

    Obviously cloud is better for making money, and some kind of VPC or local cloud solution is best for enterprise, but perhaps for individual devs, a self-hosted system on a home desktop computer running 24/7 (hybrid desktop / server) would be the best solution?

    • zingar 8 hours ago
      Optimising to keep the coding going 24/7 feels like a local optimisation trap. The amount of code that can be written by coding agents in normal working hours dwarfs what humans can productively describe and assess.

      My efforts will be in improving agentic requirements gathering and assessment.

    • piker 20 hours ago
      > 24/7 running coding agents are pretty clearly the direction the industry is going now.

      This assertion needs some support for those of us that don't have a macro insight into the industry. Are you seeing this from within FAANG shops? As a solo developer? What? Honest question.

      • 2001zhaozhao 19 hours ago
        I'm speaking from my daily experience. Sometimes I don't want to close my laptop before going to bed because there are still 1-2 tasks ongoing on my AI kanban board, so I just leave my laptop open (locked but not suspended) so that the agents keep working for a while. I don't even have things all that automated.

        I anticipate that once I have some more complex agentic scaffolds set up to do things like automatically explore promising directions for the project, then leaving the AI system on overnight becomes a necessity.

        • gabriel-uribe 7 hours ago
          100% this.

          I also have Claude Cowork automations running constantly. As-is, I can't shut down my laptop, and it gets frustrating when my laptop is in my backpack all day because of commutes or travel.

          • willydouhard 5 hours ago
            Yes, or when you get good feedback or an idea while talking to someone, being able to spawn tasks from your phone makes everything much faster.
    • danoandco 20 hours ago
      For a solo dev running one task at a time, a beefy desktop overnight is totally viable. We see a lot of this with the Mac Mini hype

      Cloud starts to matter when you want to (a) run a swarm of agents on multiple independent tasks in parallel, (b) share agents across a team, or (c) not worry about keeping a machine online

      • 2001zhaozhao 20 hours ago
        I would point out that a beefy desktop is probably faster at compiling code than a typical cloud instance simply due to more CPU performance. So maybe up to 10-ish concurrent agents it's faster to use a local desktop than a cloud instance, and then you start to get into the territory where multiple agents are compiling code at the same time, and the cloud setup starts to win. (That's assuming the codebase takes a while to compile and pegs your CPU at 100% while doing so. If the codebase is faster to compile or uses fewer threads, then the breakeven agent count is even higher.)

        Other than that, I agree with what you said. I don't know what the tradeoffs for local on-premises and cloud agents are in terms of other areas like convenience, but I do think that scalability in the cloud is a big advantage.
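        The breakeven intuition can be put into a toy model (all numbers here are made-up assumptions, not benchmarks):

```python
def per_build_minutes(total_cores, build_threads, concurrent_builds, core_minutes=64.0):
    """Minutes per build when `concurrent_builds` share one machine.

    `core_minutes` is the assumed CPU-minutes a single build needs;
    one build can use at most `build_threads` cores.
    """
    effective_cores = min(build_threads, total_cores / concurrent_builds)
    return core_minutes / effective_cores

# With these assumed sizes, a 24-core desktop matches an 8-core
# per-agent cloud sandbox once 3 builds compile concurrently.
desktop_solo = per_build_minutes(24, 16, 1)  # 4.0 minutes
desktop_busy = per_build_minutes(24, 16, 3)  # 8.0 minutes
cloud_each = per_build_minutes(8, 16, 1)     # 8.0 minutes
```

        A bigger desktop or a lighter build pushes the breakeven agent count higher, as the comment says.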

        • danoandco 19 hours ago
          Totally right on the compile time. CI has the same bottleneck, and the ecosystem is working on fixing this (faster CPUs, better caching) in both coding agents and CI to improve overall velocity.
      • zingar 8 hours ago
        And (d) not worry about toddler agents wrecking your single point of failure beefy desktop
    • _pdp_ 3 hours ago
      To do what exactly? If the work requires 24/7 then fair enough.
    • ragelink 18 hours ago
      The core issue for me is, I don't want to trust someone else with my code, or run my stuff on their computers. I don't see serious enterprise organizations offloading something as critical to security outside their own network perimeter.
  • sschlegel 2 hours ago
    Just checked the demo, it looks super interesting. How can I make sure it doesn't burn through endless tokens/credits when I let it work independently? Thanks
    • willydouhard 2 hours ago
      By default each plan has a limit (https://twill.ai/pricing). Then you can manually set an overage limit if you want to.

      The ralph loop mode also has the concept of a budget per task.

  • eranation 16 hours ago
    Edit: just noticed this is a semi-duplicate of https://news.ycombinator.com/item?id=47723506 so rephrasing my question: will you have computer use, and will you have a self-hosted runners option? (you being just the control plane / task orchestrator, which is the hardest problem apparently...)

    Additional question: what types of sandboxes do you use? (just Docker, or also Firecracker, etc.?)

    Original comment:

    Congrats on the launch!

    What's the benefit over Cursor cloud agents with computer use (other than preventing vendor lock-in)?

    https://cursor.com/blog/agent-computer-use

    Or the existing Claude Code Web?

  • Chrisszz 3 hours ago
    It's cool, and I completely get what you mean. I am curious how it differs from cloud agents by others like Cursor, Anthropic, and Warp.
    • willydouhard 2 hours ago
      Definitely the same category of product. The main difference with Cursor is that we reuse the raw harnesses from the AI labs (Claude Code, Codex), while Cursor rebuilds its own harness. We believe nothing will beat the "natural" harness of each model, because of RL.

      You are also free to swap and combine these harnesses as you please, which is something Anthropic can't offer. For instance, Claude Code implements and Codex reviews.

  • cocoflunchy 13 hours ago
    Great timing, as I'm exploring the space to get rid of Cursor in our stack. For local dev everyone is switching to Claude Code or Codex. The state of the art for cloud agents right now, in my opinion, is Cursor. But their per-user pricing model doesn't make sense when what I want is to enable anyone in the company to fix things in the product.

    Two things not immediately clear from your homepage:

    - do you support full computer use? Again, Cursor is the best I've tried there

    - what kind of triggers do you support? We have in particular one automation built with Cursor to auto-approve PRs that are low-risk. It triggers on a specific comment on a PR

    Finally, some advice from a user's POV: you need to invest a lot in the onboarding experience. I tried Devin today and couldn't get it to work after an hour of fiddling. How do you store the repo's setup scripts? Cursor cloud is pretty opaque and annoying to configure on that side. Anyway, I'll try it!
    • danoandco 12 hours ago
      On computer use: Yes. Sandboxes come with a computer-use CLI for driving Linux GUI apps via X11.

      On triggers: cron, GitHub (PRs, issues, @twill mentions in review comments), Slack, Linear, Notion, and Asana webhooks, plus the CLI and web. For our PR-comment workflow, you tag @twill with an instruction. That said, you can also set up a daily cron on Twill that checks PRs with a specific label (e.g. "Confidence Score: x/5") and tell it to auto-approve when it's 5/5, for example.

      On setup scripts: a per-repo entrypoint script, env vars, and ports, all accessible in the UI. There is a dedicated Dev Environment agent mode that you start with to set up the infra. You can steer the agent on how to set things up if it gets stuck, so this should be smooth. The agent can also rewrite the entrypoint mid-task.

      There is also a Twill skill you can add to your local agents to dispatch tasks to Twill. This means you can research and plan locally using your CLI and delegate the implementation to a sandbox on Twill.

    • wcdolphin 10 hours ago
      I’ve been hacking on something in this vein and would love your feedback. What if you could reuse your CI env by using GitHub Actions as your sandbox? You can reuse the caching and any OIDC-based roles, and self-host via runs-on.com for cost and performance. We expose a Claude Code web experience of interactive, low-latency chat. I have a working prototype I’m happy to share if you think it would be interesting.
      • willydouhard 6 hours ago
        This is very convenient but has limitations. GitHub Actions is not built to resume state (conversations, in our case) or to handle multiplayer experiences.

        However reusing the GitHub workflows out of the box feels really nice

      • crohr 8 hours ago
        I’m the founder of runs-on.com, we should talk!
    • cocoflunchy 13 hours ago
      Sent you some feedback from the app, I can't get GitHub to connect. Feel free to contact me over email to troubleshoot!
      • danoandco 12 hours ago
        Mmh this works on my end. Sending you an email. Ty
  • woeirua 11 hours ago
    I think Cloud Agents are the future, but I’ll be honest I don’t see how a third party provider survives in this space.

    1. It’s really not that hard to stand this up on your own. GitHub agentic workflows gets you 95% of the way there already.

    2. Anthropic and Cursor are already playing in this space and likely will eat your lunch.

    IMO, the only way you can survive is to make this deployable behind the firewall. If you could do that then I would seriously consider using your product.

    • danoandco 11 hours ago
      On gh-aw: it looks solid for the event-driven automation shape (triage, docs sync, CI fix). We're after a slightly different shape: interactive back-and-forth, steering from Slack or Linear, persistent sandboxes with a booted dev server for live previews. Thanks for the pointer, I'll dig into it more.

      On labs eating our lunch: it's definitely a risk. Our bet is that reusing lab-native CLIs is enough to position ourselves in the market

      On behind the firewall: it's something we're looking into. We open-sourced agentbox-sdk in that direction.

  • dennisy 19 hours ago
    Congrats on the launch, the agentbox-sdk looks interesting, but seeing as the first commit was 3 days ago - I feel a little wary to use it just yet!

    One question, do you have plans for any other forms of sandboxing that are a little more "lightweight"?

    Also, how do you add more agent types? Do you support just ACP?

    • willydouhard 19 hours ago
      Thank you! agentbox-sdk is very recent so it is not stable just yet indeed!

      For the lightweight sandbox, can you give an example?

      Currently we support the main coding CLIs; ACP support is not shipped yet.

      • dennisy 19 minutes ago
        I was thinking of something that runs in the same process and does not require Docker or a third-party API.

        For example, Monty by the Pydantic team, or the Anthropic sandbox, which I believe uses OS-level primitives.

  • kuzivaai 9 hours ago
    "The agent can't skip steps" is doing a lot of work in that sentence. What happens when the plan itself is wrong? Curious whether the approval gate is genuinely blocking or if teams end up rubber-stamping to avoid being the bottleneck.
    • willydouhard 5 hours ago
      There is room for plan adaptation but the agent has to justify and highlight it in the PR.

      Defining the plan/acceptance criteria for long-running tasks is the hard part.

      We recently added a Ralph loop mode in that spirit. The implementation won't start until the human and agent align on verifiable criteria and a different agent judges if criteria are met at the end of each run.

      Overall I think this problem is not yet completely solved, and improvements to both the UX and model judgment are needed
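      The loop described above can be sketched roughly like this (`run_agent`, `judge`, and the budget handling are hypothetical stand-ins, not Twill's actual implementation):

```python
def ralph_loop(task, criteria, run_agent, judge, budget=5):
    """Re-run the agent until a separate judge confirms the agreed
    acceptance criteria, or the per-task budget is exhausted."""
    for _ in range(budget):
        result = run_agent(task, criteria)
        if judge(result, criteria):
            return result  # criteria verified by a different agent
    return None  # budget exhausted without meeting the criteria
```

      The key property is that the implementing agent never declares its own success; a different agent checks the pre-agreed, verifiable criteria after each run.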

  • zackify 3 hours ago
    I strongly believe all of these projects are unnecessary.

    Install LXC on a server. Start a container called dev.

    Add 25 lines to your zshrc

    I say dev1 and it spins up fresh

    Dev2 copies from that and is a fresh container.

    Auto uses tmux.

    Claude code with bypass mode. Do anything. Close laptop. Come back later.

    Even have a lock mode blocking all internet access except to the LLM provider.

    SSH key agent forwarding through the 1pw CLI so it can't even push to GitHub unless I reconnect.

    I feel like the Dropbox quote from years ago, but it's a lot easier than people think, and it's weird to delegate to another service something that a dev should already understand how to do.

    All I have to do to get the same issue-to-PR flow is open dev1, open Claude, and have the GH CLI and a task-system MCP.

    Do /loop watch for new ticket assigned to me and complete it and push it up.

    • willydouhard 2 hours ago
      We see developers build their own setup over SSH with success, so in that sense I agree with you.

      However, once you want to trigger tasks from Slack, Linear, or GitHub issues or onboard teammates who aren't comfortable wiring up LXC + tmux + agent forwarding, a managed layer is needed.

      I think we're at a moment where builders with great setups like yours and products like ours are feeding each other good ideas. The patterns you figure out in your zshrc inform what we productize, and the workflows we ship give you new things to try. It's a virtuous circle. Everyone should use the right-sized solution for their situation.

    • MattGaiser 3 hours ago
      Millions of people using Claude code couldn’t write hello world in the language they are using, if they even know what it is.

      I am anecdotally aware of at least one project where the author can’t recall off the top of his head what the stack is.

  • qainsights 13 hours ago
    Cool. Tried my side project ai.dosa.dev to create a utility; it did well. PR: https://github.com/QAInsights/awesome-ai-tools/pull/23
    • danoandco 13 hours ago
      Awesome! Thanks for trying it.
  • ibrahimhossain 7 hours ago
    Cloud sandboxes for persistence and parallelization are a smart move. Local setups hit those exact walls very quickly. That feels like the right long-term bet as the underlying models keep improving
    • willydouhard 5 hours ago
      This definitely feels like the end state. We need to improve models and the agent/human UX, and make the transition from cloud work to local work seamless, to fully get there
      • ibrahimhossain 5 hours ago
        Cloud to local seamless transition is exactly where the real value lies. Local setups with zero API cost and full privacy make the biggest difference
  • eranation 16 hours ago
    HN hug of death probably, but your scorecard returns an error :(

    The analysis request failed.

    Hosted shell completed without parseable score_repo.py JSON output. 11 command(s), 11 output(s). (rest redacted)

    • willydouhard 16 hours ago
      Thank you for the feedback, we are working on it!
  • Mr_P 21 hours ago
    How does this compare to Claude Managed Agents?
    • danoandco 21 hours ago
      Claude Managed Agents is a general-purpose hosted runtime for Claude, while Twill focuses on SWE tasks.

      And so the SWE workflow is pre-built (research, planning, verification, PR, proof of work). Twill is also agnostic to the agent, so you can use Codex, for instance. Additionally, you have more flexibility on sandbox sizing with Twill

  • hmokiguess 21 hours ago
    > Run the same agent n times to increase success rate.

    Are there benchmarks out there that back this claim?

    • lmeyerov 10 hours ago
      We find it true in Louie.ai evals (AI for investigations): about a 10-20% lift, which is meaningful. It's measured here: botsbench.com

      Unfortunately, it's undesirable in practice due to people being token-constrained even before this. One option is retrying only on failure, but even that is a bit tricky...

    • danoandco 20 hours ago
      Yes, this is the pass@k metric from code-generation research. The relevant paper is Evaluating Large Language Models Trained on Code (Chen et al., 2021), which introduced the metric.
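      For reference, the unbiased estimator from that paper: generate n samples per task, count the c that pass, and estimate the probability that at least one of k random draws passes.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """pass@k estimator from Chen et al. (2021):
    1 - C(n-c, k) / C(n, k), the chance that k draws
    from n samples (c correct) include a correct one."""
    if n - c < k:
        return 1.0  # too few failures to draw k all-failing samples
    return 1.0 - comb(n - c, k) / comb(n, k)

# One correct sample out of two: a single draw succeeds half the time,
# while two draws always include the correct one.
pass_at_k(2, 1, 1)  # 0.5
pass_at_k(2, 1, 2)  # 1.0
```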
      • hmokiguess 20 hours ago
        Interesting, and how does Twill use it in that feature?
        • danoandco 18 hours ago
          On the Twill web app, you can run the same task across different agents and multiple attempts (each in its own sandbox). Then you pick the best result. This is super handy for UI work where you can open the live preview for each attempt and compare. Next step for us is adding a final pass where an agent evaluates the results and combines the best parts into one PR.
  • a_t48 19 hours ago
    Does it support running Docker images inside the sandbox?
    • willydouhard 19 hours ago
      Yes. For instance, Twill runs a local Postgres and Redis directly in the sandbox using docker compose when working on our own codebase.

      This is what enables Twill to self-verify its work before opening a PR

  • wordpad 18 hours ago
    How does this compare to Jules from Google?
    • danoandco 17 hours ago
      Jules is similar to Twill with the following differences:

      - Twill is CLI-agnostic, meaning you can use Claude Code, Codex or Gemini. Jules only works with Gemini.

      - We focus on the delegation experience: Twill has native integrations with your typical stack, like Slack or Linear. The PRs come back with proof of work, such as screenshots or videos.

  • senordevnyc 18 hours ago
    How does this compare to something like Cursor Cloud Agents with a solid set of skills and tools?
    • danoandco 18 hours ago
      Similar, but we reuse lab-native CLIs like Claude Code or Codex, which the labs perform RL on. And so in the long run, we believe this approach wins over custom harnesses.
  • auszeph 18 hours ago
    I built an internal version of this for my workplace.

    Something very useful that will most likely be harder for you is code search: having a proper index over hundreds of code repos so the agent can find where code is called from, or work out what the user means when they use an acronym or a slightly incorrect name.

    It's quite nice to use and I'm sure someone will make a strong commercial offering. Good luck

    • willydouhard 17 hours ago
      I agree and that is why I think monorepos are making a comeback.

      That said, there are workarounds, like cloning all repos and enabling LSP (coding CLIs have added that feature), or using a dedicated solution for codebase indexing and adding a skill/MCP.

      Super fast models spamming grep commands are also fun to watch!

      Curious to know how you implemented it in house.

      • auszeph 17 hours ago
        https://github.com/sourcegraph/zoekt

        Run a copy of this in the same VPC. Monorepos would definitely help, but that's not the structure we have. I didn't want to rely on API limits (or stability) at GitHub for such a core feature.

        Using this we've had agents find dead APIs across multiple repos that can be cleaned up and the like. Very useful.

    • woeirua 11 hours ago
      Why not connect to the GitHub MCP?
      • willydouhard 5 hours ago
        We still need to get an OAuth token to connect to GitHub. We started with the GitHub MCP but migrated to giving the gh CLI to the agent directly.

        One learning we had is that most of the time, CLI > MCP

  • gbnwl 20 hours ago
    So instead of using my Claude Code subscription, I can pay the vastly higher API rates to you so you can run Claude Code for me?
    • willydouhard 20 hours ago
      Anthropic recently killed the ability for third parties to use the Claude Code subscription, and it's assumed they're subsidising that price heavily. Which is fine, but it's a good reminder of the vendor lock-in risk. One policy change and your workflow breaks. Twill is agent-agnostic (Claude Code, Codex CLI, OpenCode), so you're not betting on any single vendor's pricing decisions.

      On the cost for solo devs, yeah, if you're one person running one agent at a time on your laptop, the sub is probably the better deal today. No argument there. The cloud agent model starts to make sense when you want to fire off multiple tasks in parallel.

      • gbnwl 20 hours ago
        Not sure if you've seen it yourself, but Claude Code can kick off parallel agents working in their own worktrees natively now. I do it all the time.
        • willydouhard 20 hours ago
          Yes, the difference is that Twill launches dedicated infra on each sandbox for each task. This means you can work on multiple tasks requiring a DB migration for instance.

          Also you can fire and forget tasks (my favorite) and don't have to keep your laptop running at night.

          • verdverm 18 hours ago
            See also Cowork and other upcoming Anthropic features.

            See also Show HN, where this exact product is frequently shown as a GitHub link.

            The paradigm shift in AI means what you are making is (1) filling a gap until the primaries implement it (most have it in their pipeline if not already) and (2) easy to replicate with said AI using my preferred tech stack

            • willydouhard 17 hours ago
              Cowork does not seem to be focused on engineering, but we fully expect Anthropic to catch up in this category.

              What Anthropic can't offer is letting you use Codex or combine it with Claude Code. That is why we think non-AI-lab players have a say in this market.

              To your last point: as always, there is a buy vs. build tradeoff, which ultimately comes down to focusing on your core business. We think that still remains important in the AI era

              • verdverm 17 hours ago
                > as always there is a buy vs build tradeoff

                it's a nonbinary decision now

                Google has a free, open-source take on what you are building; it looks more mature as well

                https://googlecloudplatform.github.io/scion/overview/

                My comment about Cowork is more about pointing out a different feature set that will cross over with Code. For example, they have the task-related things as an affordance; Code has this coming.

                • willydouhard 16 hours ago
                  I believe there is a difference between an open-source framework and a product. You would still have to manage and scale your infra, build the integration layer around it to make it accessible where your teams are, fix bugs, etc...

                  I am not saying that build is always the bad choice, but the tradeoff did not disappear imo

                  • verdverm 16 hours ago
                    [flagged]
                    • gbnwl 15 hours ago
                      I’m newer to knowing and caring about what YC does at all in terms of the companies it funds. The fact that this is YC makes me think the org has forfeited any sense of “taste” at all. Complete scattershot from people who have money to scatter I guess.
                      • verdverm 15 hours ago
                        You can read old Paul Graham essays and the early YC Startup School (which is probably when peak YC happened) to get a sense of the ethos. They increased batch size to scale (as context for the "stopped doing things that don't scale" comment)

                        https://www.startupschool.org/

  • telivity-real 19 hours ago
    [flagged]
    • danoandco 19 hours ago
      We’re focused on SWE use cases. Code is nice because there’s already a built-in verification loop: diffs, tests, CI, review, rollback. But you do quickly get to a state where the agent needs to take a risky action (a DB migration, or an infra operation). And this is where the permission features from the agents are handy: allowlist, auto mode, etc. So you approve/reject only the high-risk actions. And I think this risk model is valid for both technical and non-technical use cases
    • rytill 18 hours ago
      AI comments are against Hacker News rules.