Yep. I've given it tools (scripts, not MCP) that give it access to my bug tracker. It's able to handle requests like "please fix bug CO-1234" and off it goes. I have another script that uses Gemini to create map files, and that helps Claude find the right parts of the codebase quickly. It's able to do quite sophisticated bug fixes in a mature and unusual codebase (it's a deployment tool for desktop apps), and the quality of the produced code is high.
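The tracker scripts described above could look something like this minimal sketch — everything here is hypothetical: the Jira-style endpoint, `TRACKER_URL`, and the JSON field names are stand-ins for whatever tracker the author actually uses. The idea is just a plain script Claude can shell out to (`get_bug.py CO-1234`) and read the output of:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper Claude can invoke: `get_bug.py CO-1234`.

Assumes a Jira-style REST endpoint; TRACKER_URL and the JSON field
names below are placeholders for your actual tracker.
"""
import json
import re
import sys
import urllib.request

TRACKER_URL = "https://tracker.example.com/rest/api/2/issue/{key}"

def parse_bug_key(raw: str) -> str:
    """Validate and normalize a ticket reference like 'CO-1234'."""
    match = re.fullmatch(r"([A-Za-z]+)-(\d+)", raw.strip())
    if not match:
        raise ValueError(f"not a bug key: {raw!r}")
    return f"{match.group(1).upper()}-{match.group(2)}"

def fetch_bug(key: str) -> dict:
    """Fetch the issue JSON from the tracker's REST API."""
    with urllib.request.urlopen(TRACKER_URL.format(key=key)) as resp:
        return json.load(resp)

if __name__ == "__main__" and len(sys.argv) > 1:
    key = parse_bug_key(sys.argv[1])
    fields = fetch_bug(key).get("fields", {})
    # Print a compact summary for Claude to read back.
    print(f"{key}: {fields.get('summary', '')}")
    print(fields.get("description", ""))
```

Because it's just stdout, no MCP plumbing is needed — Claude treats it like any other shell command.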
One thing I miss from Aider is speech recognition integration. When you're alone at home it's great to be able to speak what you want instead of typing it.
I'm pretty happy with it. The next big upgrade would be deep IDE integration, complete with the ability to search the IDE indexes, navigate around using cross-references and the like.
It's still not great, but it's better than anything else.
Best results when:
1. Run /init and let it maintain a CLAUDE.md.
2. Ask it to run checks + tests before/after every task, and add those commands to the "no permission needed" list – this improves quality a lot.
3. Ask it to do TDD, but manually check that the test itself is correct.
4. Every time it finishes something solid: git commit manually, /compact the context (saves hella $$$ + seems to improve focus).
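The before/after gate in step 2 can be a single script you add to the allowed-commands list, so Claude runs it without a permission prompt each time. A minimal sketch — `make lint` and `make test` are assumed placeholders for your project's real check commands:

```python
#!/usr/bin/env python3
"""Hypothetical gate script: run before handing Claude a task, and
again after it claims the task is done.

The commands in CHECKS are placeholders; substitute your project's
actual lint/test invocations.
"""
import subprocess
import sys

CHECKS = [["make", "lint"], ["make", "test"]]

def run_checks(checks: list[list[str]] = CHECKS) -> bool:
    """Return True only if every check command exits 0."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

# Usage: call run_checks() before the task and after; only git commit
# (step 4) when both runs come back green.
```

A failing "before" run tells you the baseline was already broken, which saves you from blaming Claude for pre-existing failures.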
Honestly I treat it like a junior programmer I'm pairing with. If you pay attention, you can catch it being stupid early and get it back on track. It's best when you know exactly what you want and it's just boring work. It's really good with clear instructions, e.g. "Refactor X -> Y, using {design pattern}."
Much worse than Cursor with Claude models in my experience. I'm getting many useless changes and things being reimplemented from scratch instead of moving files. Not impressed at all.
I'm finding it to be a force-multiplier in my side-project work.
It needs careful oversight: if you're too generous with it, it'll happily add tons of code to your codebase that will make it horrendously difficult to understand and debug later. As capable as it is, I find it prudent to keep it on a short leash.
Sorry to give opposite anecdotes. It’s one of the things I find most irritating about AI right now.
https://www.anthropic.com/engineering/claude-code-best-pract...