Title, mostly. I'd wager most of us know what debugging is already, and a solid chunk of us have at least some hands-on experience using debuggers in any given language.
"AI Debugger" exposes familiar debugging capabilities to agents through an MCP interface. Think operations like:
- Breakpoints (basic breakpoints, conditional breakpoints, logpoints, etc.)
- Stepping (into, over, out of)
- Inspection (locals, globals, call stack, single stack frame, etc.)
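As a sketch of what an agent-driven session could look like, here is a hypothetical tool-call sequence. The tool names and argument shapes below are illustrative only, not the actual `aidb` MCP schema:

```python
# Hypothetical tool-call sequence an agent might issue over MCP.
# Tool names and argument shapes are illustrative, not aidb's real schema.
session = [
    {"tool": "set_breakpoint",
     "args": {"file": "app.py", "line": 42, "condition": "user is None"}},
    {"tool": "continue_execution", "args": {}},
    {"tool": "step_over", "args": {}},
    {"tool": "inspect", "args": {"scope": "locals", "frame": 0}},
]

for call in session:
    print(call["tool"])
```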
I built it on the debugger components VS Code already uses (mainly debug adapters), which keeps the code reusable and the stack 100% open source.
These are the key features I've shipped with `0.1.1`:
- VS Code `launch.json` support. Existing launch configurations can be used to start `aidb` sessions, which is helpful for cross-team sharing, complex debug entry points, or simply keeping familiar VS Code workflows.
- Remote debugging. I was able to debug worker nodes in a Dockerized Trino cluster by attaching to their exposed debug ports, so you can debug large codebases remotely. This should be useful for remote debugging generally and for CI integration.
- An extensible core API, built around the Debug Adapter Protocol (DAP) and designed to make adding support for any DAP-compliant adapter as simple as possible. More adapters are planned, likely Go, Kotlin (for my own use), and Rust.
- Tight integration with Claude. This is what made the project feasible for me, and it should help contributors going forward. The repo ships a skills system based on my other project (https://github.com/jefflester/claude-skills-supercharged), which has dramatically improved Claude's implementation cleanliness and overall codebase knowledge. The `dev-cli`, which is, perhaps unsurprisingly, the repo's internal developer CLI, also bootstraps many of Claude's capabilities, like CI failure analysis and running tests.
- 100% open source with fast CI/CD release times (CI run: https://github.com/ai-debugger-inc/aidb/actions/runs/2065017...). Every component in the stack is open source (core Python deps, debug adapter deps, etc.). GitHub CI builds and publishes the debug adapters, runs robust integration and unit tests, and ships everything in under 15 minutes, which is impressive given that many of the tests exercise the full stack against external language dependencies like Node, Spring, Maven, and Gradle.
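To make the `launch.json` and remote-debugging points concrete, here is a plausible config in the standard VS Code schema. The Trino attach entry is an assumption for illustration; it presumes the worker JVM was started with the JDWP agent (`-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005`) and that the port is published from the container:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "debugpy",
      "request": "launch",
      "program": "${file}"
    },
    {
      "name": "Attach: Trino worker (hypothetical)",
      "type": "java",
      "request": "attach",
      "hostName": "localhost",
      "port": 5005
    }
  ]
}
```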
My main goal is to make AI Debugger the go-to tool for agent-facing debugging. If that interests you, let me know: I would love to get a few contributors up to speed. This is a sizable codebase that still needs to grow, and maintaining it solo indefinitely would be rough. Strength in numbers!
Let me know if you have any questions, and thanks for taking a look at my project.
-----
*Relevant Links*
- Repository: https://github.com/ai-debugger-inc/aidb
- Documentation: https://ai-debugger.com/en/latest/
- PyPI Package: https://pypi.org/project/ai-debugger-inc/