1 comment

  • flopsy2 5 hours ago
    I got tired of copy-pasting code snippets into ChatGPT and losing context between different repositories. So I built OpenDeepWiki – an AI chat system that can ingest multiple GitHub repos (or ZIP uploads) simultaneously and reason across all of them.

    What makes it different:

    - Multi-repo awareness: Drop in 5 different repositories and ask "How do my microservices communicate?" – it understands the full picture, not just isolated files
    - Zero database setup: Leverages Google's Prompt Caching API ($0.01 per 1M tokens) instead of building yet another vector database (see the caching sketch after this list)
    - Universal model support: Type any model name (`gpt-4.1`, `claude-4-sonnet`, `o3`, `gemini-2.5-pro`) and it automatically routes to the right provider (routing sketch below)
    - One-command setup: `make setup` and you're running locally
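    To give a feel for the zero-database claim, here's a minimal sketch of the caching flow using the `google-generativeai` SDK – illustrative, not the exact code from the repo, and `repos_dump.txt` is a stand-in for the aggregated repo text:

      import datetime
      import google.generativeai as genai
      from google.generativeai import caching

      genai.configure(api_key="YOUR_API_KEY")

      # Stand-in: all repo files concatenated into one string
      # (see the aggregation sketch further down).
      repo_context_text = open("repos_dump.txt").read()

      # Upload the repo context once; Gemini stores it server-side, so
      # follow-up questions are billed at the much cheaper cached rate.
      # (Cached content has a minimum size; whole repos clear it easily.)
      cache = caching.CachedContent.create(
          model="models/gemini-1.5-flash-001",
          display_name="my-repos",
          contents=[repo_context_text],
          ttl=datetime.timedelta(hours=1),
      )

      model = genai.GenerativeModel.from_cached_content(cached_content=cache)
      answer = model.generate_content("How do my microservices communicate?")
      print(answer.text)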
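    And the model-name routing can be as simple as prefix matching – again a simplified sketch of the idea, not the repo's exact code:

      # Map model-name prefixes to providers; anything unrecognized
      # falls back to an OpenAI-compatible endpoint.
      ROUTES = {
          "gpt": "openai",
          "o3": "openai",
          "claude": "anthropic",
          "gemini": "google",
      }

      def route_model(name: str) -> str:
          for prefix, provider in ROUTES.items():
              if name.startswith(prefix):
                  return provider
          return "openai"  # sensible default

      assert route_model("claude-4-sonnet") == "anthropic"
      assert route_model("gemini-2.5-pro") == "google"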

    Real use cases I've tested:

    "Compare the authentication systems between my frontend and backend repos" "Generate integration tests for connecting service A to service B" "Find how instructor integrates langfuse"

    The architecture is surprisingly simple – FastAPI microservices, a React frontend, Docker containers. But the magic is in the multi-repo context aggregation pipeline, which keeps costs low while preserving a comprehensive view across repos (rough sketch below).
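    In case "context aggregation" sounds hand-wavy, here's roughly the shape of it – a simplified sketch; the real pipeline does more filtering and token budgeting:

      from pathlib import Path

      SKIP = {".git", "node_modules", "__pycache__", "dist"}

      def aggregate_repos(repo_dirs: list[str]) -> str:
          """Concatenate source files from several repos into one tagged
          context string, so the model sees all of them at once."""
          parts = []
          for repo in repo_dirs:
              root = Path(repo)
              for path in sorted(root.rglob("*")):
                  if path.is_dir() or any(p in SKIP for p in path.parts):
                      continue
                  try:
                      text = path.read_text(encoding="utf-8")
                  except (UnicodeDecodeError, OSError):
                      continue  # skip binaries and unreadable files
                  parts.append(f"=== {root.name}/{path.relative_to(root)} ===\n{text}")
          return "\n\n".join(parts)

    The per-file headers matter: they're what lets the model answer "where" questions ("which repo handles auth?") instead of just "what" questions.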

    Try it: Clone it, point it at your repos, and ask it to explain your own architecture back to you. It's... unsettling how well it works.

    GitHub: https://github.com/Flopsky/OpenDeepWiki

    Built this because I needed it for my own multi-service projects. Feedback welcome – especially if you try it on large/complex codebases.