Ask HN: Any AI / Agent power users out there? Do you have any tips?

At my large tech company, we're all being pushed to use AI. I, and most people I work with, have had success using the chatbots and Cursor-style tools and more recently Claude Code to accelerate the process of writing code.

Yet, with a few people in my network, it's like they're living 10 years ahead. They're automating everything in their jobs, spinning up 10 specialized agents at a time, running multi-agent pipelines, and doing all sorts of crazy things with this tech that I can't even fathom. It seems like it's making them way more productive.

I have found a way to fit code-writing and question-answering chatbots into my workflow. I have NOT done the same with these crazy Agent setups. There's clearly a way to leverage these tools to turbocharge your productivity, like at least 2x or maybe even 10x. But what is it?

Are there any Agentic power users out there who can enlighten me? What are the best ways to take advantage of these new tools?

7 points | by uejfiweun 1 day ago

10 comments

  • sak84 49 minutes ago
    yeah the gap between "chatbot that writes code" and "actual multi-agent workflow" is real

    built elvex to solve this with:

    - multi-provider access (Claude, GPT, Gemini, etc.) so different agents can use different models
    - actual team permissions so agents don't step on each other
    - workflow orchestration without duct-taping APIs together

    the parallel execution thing you mentioned - elvex handles that. you can spin up multiple agents with different contexts, they share a knowledge base, and you're not manually managing git worktrees or containers

    not saying it's magic but it definitely solves the "how do i go from 1 agent to 10 agents without chaos" problem.

    what workflows are you trying to automate?

  • ativzzz 2 hours ago
    First, don't do what people on the cutting edge are doing. They are AI hobbyists and their methods become obsolete within weeks. Many of the tricks they use become first-class features of frontier models/tooling, or are unnecessary two model versions later.

    What you can do is empower your agent to solve more complex problems fully on its own. See if you can write a plan (claude is great at writing plans) that encapsulates all the work needed for a feature. Go back and forth with the AI on the plan and spend some time thinking about it, including tests and how the AI can automatically validate that things work. Put it on auto-accept and tab away. Once it finishes, review the code, do your normal QA, follow-ups, etc.
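
    A plan skeleton might look something like this (purely illustrative; the feature name and headings are made up, adapt them to your stack):

      # Feature: rate-limit the /export endpoint
      ## Context: relevant files, existing patterns to follow
      ## Steps: small chunks, each independently verifiable
      ## Tests: what to add + the exact command that runs them
      ## Validation: how the AI proves it works on its own
      ##   (run the suite, curl the endpoint, check the logs)
      ## Out of scope: files/behavior the AI must not touch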

    While it's working, go work on another feature in the same vein. Git worktrees are a simple way to work on the same codebase in parallel (though you likely work on an app that isn't set up for multiple instances running at once, have fun with that). Containers are another way to run these in parallel. Vibe code yourself a local tool to manage these. This is somewhat built into the claude/codex desktop apps, but you'll likely need to customize it for your environment.
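
    For reference, the worktree dance is just a few commands (paths and branch names here are placeholders):

      git worktree add ../myapp-feature-x -b feature-x
      cd ../myapp-feature-x   # run the second agent from here
      # ...and once the branch is merged or abandoned:
      git worktree remove ../myapp-feature-x

    Each worktree is a separate checkout of the same repository, so two agents can edit and commit on different branches without clobbering each other.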

    Basically you do the architecture, code review & QA and let the model do as much of the code as possible, and try to do it in parallel. I still do manual coding when I need to explore what a solution might look like, but AI is much faster at experimenting with larger solutions, and if you don't like what it did, it's a `git checkout .` away from a clean slate.

    How much time you spend on validating is a tradeoff of speed vs correctness needs for your business.

  • adrianwaj 17 hours ago
    "There's clearly a way to leverage these tools to turbocharge your productivity, like at least 2x or maybe even 10x. But what is it?"

    Many will keep it a secret, a few will share, the remainder will turn it into a product. You could spend hours on HN and find best practices floating around. At some point a product will come along and equalize everything.

    • muzani 12 hours ago
      I don't think people mean to keep it a secret. It took me literally 2 hours to type my answer, because it's deeply ingrained as muscle memory and habit. It's probably much easier to explain in a two-way call, or better yet, a class.

      Much of the time people don't even appreciate these answers; they get angry because what they really wanted to hear was that it's impossible.

      But sometimes it's useful to rephrase your thoughts and put them out there for criticism.

  • muzani 12 hours ago
    First, why do you use AI? What are you using it for? How? Is it just autocomplete? Do you use it to probe the problem? Go deeper and do spikes? Are you bouncing ideas off it? How hard do the ideas bounce? Are you simply bolting on features? Are you debugging in a scientific method kind of way? Do you pair program with the AI?

    Think in combos, not as an individual step. What's your workflow?

    Compare to martial arts. You jab with the weaker arm. It sets up space. Rhythm. Forces a defense. Then you follow with a more committed hit with the stronger arm. Or if you were doing Muay Thai, a jab sets up the angle for a full kick or knee.

    It's your spirit. You decide whether you prefer a solid jab-tip-roundhouse kick or multiple safe jabs. Don't think in terms of "what prompts can I copy?" Figure out what your workflow is and fit the tools to it.

    Like martial arts, the foundations usually make more of a difference than the technicalities. Footwork. Stances. Guards. Balance. Pull the shoulder back that way when you kick, not the other way. All things that are difficult to learn from a video, and harder still if you only see each other for 10 minutes a day in standup.

    In code, it's your style guides, architecture, tech debt, documentation (specifically, where it lives), design systems.

    Multi-agent is simply combos. You know what your next steps will be, so you duplicate yourself, and automate yourself for those positions. Iterate over it. It's more like painting than printing.

  • sarbajitsaha 1 day ago
    What kind of work are you dealing with?

    Even if the agents are writing code (let's say) 10 times faster, some human has to review it, right? So there is still a bottleneck. And I would say that person should be the first to review it, before sending it on for review to other team members.

    Even if there are tests, I never feel comfortable committing code that I haven't understood at least on a broad level.

  • thiago_fm 1 day ago
    The people in your network are BS'ing about it.

    I've used all these tools since the very beginning, before ChatGPT was even widely available.

    If you are automating something that isn't high impact or important, sure just let it write the code and don't even verify it.

    But in a big org you'll need to validate it at every step: every generated line of code can have bugs, injections, and negative side effects.

    I believe you'll be more productive by using it and prompting well, but you'll need to invest much more time double-checking that everything works according to what you initially planned, or you might ship badly broken code to production that can, at times, be difficult to revert.

    2x productivity is possible, but it really depends on the kind of tasks you get. If your entire job is prototyping stuff, sure you are now 100x+.

    But if you need to write very complex business logic that will last for years, with lots of back and forth and discussions with PMs and whatnot, which is the majority of corporate SWE jobs... my bet is you'd be at 1.5x maximum!

  • chiengineer 1 day ago
    you should take the time to watch youtube videos on AI agents and coding in general

    also vs code + github copilot pro plus for more context
