24 comments

  • 6thbit 1 day ago
    This looks neat; we certainly need more ideas and solutions in this space. I work with large codebases daily and the limits on agentic contexts are constantly evident. I have some questions about how I would consume a tool like this one:

    How does this fare with codebases that change very frequently? I presume background agents re-indexing changes must become a bottleneck at some point for large or very active teams.

    If I'm working on a large set of changes modifying lots of files, moving definitions around, etc., meaning I've deviated locally quite a bit from the most up to date index, will Nia be able to reconcile what I'm trying to do locally vs the index, despite my local changes looking quite different from the upstream?

    • jellyotsiro 1 day ago
      great question!

      For large and active codebases, we avoid full reindexing. Nia tracks diffs and file-level changes, so background workers only reindex what actually changed. We are also building “inline agents” that watch pull requests or recent commits and proactively update the index ahead of your agent queries.
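
      Roughly, the incremental pass looks like this. A minimal, illustrative sketch in Python, assuming a content-hash manifest; it is not our production pipeline:

      ```python
      import hashlib
      from pathlib import Path

      def content_hash(path: Path) -> str:
          return hashlib.sha256(path.read_bytes()).hexdigest()

      def files_to_reindex(repo_root: Path, manifest: dict[str, str]) -> list[Path]:
          """Compare current file hashes against the last-indexed manifest and
          return only the files whose contents actually changed."""
          changed = []
          for path in repo_root.rglob("*.py"):
              key = str(path.relative_to(repo_root))
              digest = content_hash(path)
              if manifest.get(key) != digest:
                  changed.append(path)
                  manifest[key] = digest
          return changed  # background workers re-embed only these files
      ```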

      Local vs upstream divergence is a real scenario. Today Nia prioritizes providing external context to your coding agents: packages, provider docs, SDK versions, internal wikis, etc. We can still reconcile with your local code if you point the agent at your local workspace (Cursor and Claude Code already provide that path). We look at file paths, symbol names, and usage references to map local edits to known context. In cases where the delta is large, we surface both the local version and the latest indexed version so the agent understands what changed.
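
      For the reconciliation step, think of it as matching on stable identifiers rather than raw text. Another illustrative sketch; the names and the symbol-only matching are simplifications:

      ```python
      import ast
      from pathlib import Path

      def local_symbols(path: Path) -> set[str]:
          """Top-level function/class names in a locally edited file."""
          tree = ast.parse(path.read_text())
          defs = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)
          return {node.name for node in tree.body if isinstance(node, defs)}

      def reconcile(path: Path, indexed: dict[str, set[str]]) -> dict[str, set[str]]:
          """Map a locally edited file onto the indexed view by path and symbol
          overlap, so both versions can be surfaced when they diverge."""
          local = local_symbols(path)
          upstream = indexed.get(str(path), set())
          return {
              "shared": local & upstream,      # symbols the index already knows
              "local_only": local - upstream,  # added or renamed locally, not yet indexed
              "stale": upstream - local,       # indexed symbols no longer present locally
          }
      ```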

      • adam_patarino 9 hours ago
        Your FAQ says you don’t store code. But this answer sounds like you do? Even if you’re storing as an embedding that’s still storage. Which is it?
        • jellyotsiro 5 hours ago
          We don’t store your code or any proprietary local content on our servers. When we say “external context” we mean public or user-approved remote sources like docs, packages or APIs. Those are indexed on our side. Your private project code stays local
  • mritchie712 1 day ago
    Cursor promises to do this[0] in the product, so, especially on HN, it'd be best to start with "why this is better than Cursor".

    > favorite doc sites so I do not have to paste URLs into Cursor

    This is especially confusing, because Cursor has a feature for docs you want to scrape regularly.

    0 - https://cursor.com/docs/context/codebase-indexing

    • jellyotsiro 1 day ago
      The goal here is not to replace Cursor’s own local codebase indexing. Cursor already does that part well. What Nia focuses on is external context. It lets agents pull in accurate information from remote sources like docs, packages, APIs, and broader knowledge bases
      • jondwillis 1 day ago
        That’s what GP is saying. This is the Docs feature of Cursor. It covers external docs/arbitrary web content.

        `@Docs` — will show a bunch of pre-indexed Docs, and you can add whatever you want and it’ll show up in the list. You can see the state of Docs indexing in Cursor Settings.

        The UX leaves a bit to be desired, but that’s a problem Cursor seems to have in general.

        • jellyotsiro 1 day ago
          yeah, the UX is pretty bad, and so is the overall functionality. It still relies on a static retrieval layer and a limited index scope.

          + as I mentioned above, there are many more use cases than just coding. Think docs, APIs, research, knowledge bases, even personal or enterprise data sources the agent needs to explore and validate dynamically.

          • nrhrjrjrjtntbt 1 day ago
            As an AI user (Claude Code, Rovo, GitHub Copilot) I have come across this. In code, it didn't build something right where it needed to use up-to-date docs. Luckily those people have now made an MCP, but I had to wait. For a different project I may be SOL. Surprised this isn't solved; well done for taking it on.

            From a business point of view, I am not sure how you get traction without being 10x better than what Cursor can produce tomorrow. If you are successful, the coding agents will copy your idea, and then people, being lazy and using what works, have no incentive to switch.

            I am not trying to discourage. More like encourage you to figure out how you get that elusive moat that all startups seek.

            As a user I am excited to try it soon. Got something in mind that this should make easier.

            • jellyotsiro 1 day ago
              thanks! will be waiting for your feedback
    • bn-l 1 day ago
      This is different because of the background refresh, the identifier extraction and the graph. I know because I use cursor and am building the exact same thing oddly enough.
  • alex-ross 1 day ago
    This resonates. I'm building a React Native app and the biggest friction with AI coding tools is re-explaining context every time.

    How does Nia handle project-specific patterns? Like if I always use a certain folder structure or naming convention, does it learn that?

    • jellyotsiro 1 day ago
      Nia is focused on external context rather than learning the patterns inside your own codebase. Cursor and IDE-native tools are better for local project structure today. Where Nia helps is when the agent needs ground truth from outside your repo. For example, you can index React Native docs, libraries you depend on, API references or Stack for your backend and let the agent search and validate against those sources directly instead of losing context between prompts.
  • bluerooibos 4 hours ago
    Can you explain why I would pay almost the full price of Cursor, ChatGPT, or Claude again - just for your context layer, when these companies are already working on context?

    I don't see you justify this with an explanation of the ROI anywhere.

    • jellyotsiro 2 hours ago
      The economics are simple. When an agent guesses, it produces wrong code, failed runs, and wasted time. External context is the biggest source of those mistakes, because IDEs only index what’s in your repo. We are a complement to Cursor, ChatGPT, and Claude, not a replacement.
  • bluerooibos 4 hours ago
    Your landing page tells me a whole lot of nothing.

    How does this work? How does it differ from other solutions? Why do I need this? What does the implementation look like if I added this to my codebase?

  • djoldman 10 hours ago
    > The calling agent then decides how to use those snippets in its own prompt.

    To be reductionist, it seems the claimed product value is "better RAG for code."

    The difficulties with RAG are at least:

    1. Chunking: how large are chunks, and how are the beginning and end of a chunk determined?

    2. Given the above quote, how many RAG results are put into the context? It seems that the API caller makes this decision, but how?

    I'm curious about your approach and how you evaluated it.

    • jellyotsiro 4 hours ago
      Not quite “better RAG for code”. The core idea is agentic discovery plus semantic search. Instead of static chunks pushed into context, the agent can dynamically traverse docs, follow links, grep for exact identifiers, and request only the relevant pieces on demand.

      No manual chunking. We index with multiple strategies (hierarchical doc structure, symbol boundaries, semantic splitting) so the agent can jump into the right part without guessing chunk edges.

      Context is selective. The agent retrieves minimal snippets and can fetch more iteratively as it reasons, rather than preloading large chunks. We benchmark this using exact match evaluations on real agent tasks: correctness, reduced hallucination, and fewer round trips.
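
      A toy version of that loop, just to illustrate the shape of it; every name here (llm.plan, semantic_search, grep_identifier) is hypothetical, not Nia's actual API:

      ```python
      # The agent decides what to fetch next instead of receiving one big
      # pre-chunked context blob up front.
      def answer(question: str, llm, semantic_search, grep_identifier, max_steps: int = 5) -> str:
          context: list[str] = []
          for _ in range(max_steps):
              action = llm.plan(question=question, context=context)  # e.g. {"tool": "search", "query": "..."}
              if action["tool"] == "search":
                  context += semantic_search(action["query"], top_k=3)  # small, targeted snippets
              elif action["tool"] == "grep":
                  context += grep_identifier(action["symbol"])          # exact identifier lookup
              else:  # "answer": the agent decided it has enough context
                  break
          return llm.answer(question=question, context=context)
      ```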

  • marwamc 1 day ago
    I've no idea what their architecture/implementation looks like, but I've built a similar tool for my own use and the improvements are dramatic to say the least.

    Mine's a simple BM25 index for code keyword search (I use it alongside serena-mcp) and for some use cases the speeds and token efficiency are insane.

    https://gitlab.com/rhobimd-oss/shebe#comparison-shebe-vs-alt...
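
    The core idea is roughly this; a simplified sketch using the rank_bm25 package, not the actual shebe code:

    ```python
    from pathlib import Path
    from rank_bm25 import BM25Okapi

    paths = list(Path("src").rglob("*.py"))
    docs = [p.read_text(errors="ignore") for p in paths]
    tokenized = [d.replace("_", " ").replace(".", " ").split() for d in docs]  # crude code tokenizer

    bm25 = BM25Okapi(tokenized)
    query = "parse config file".split()
    scores = bm25.get_scores(query)
    top = sorted(zip(paths, scores), key=lambda pair: pair[1], reverse=True)[:5]
    for path, score in top:
        print(f"{score:7.2f}  {path}")  # hand just the top file paths to the agent
    ```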

  • qcqcqc 21 hours ago
    From the setup flow:

    > Configure MCP Server: One command to set up Nia MCP Server for your coding agent.
    > Select your coding agent: Cursor. Installation method: Local / Remote. Local: Runs locally on your machine. More stable. Requires Python & pipx.
    > Create API Key: test. Create Organization required to create API keys.

    I cannot create an API key? The create button is grey and cannot be pressed.

    • jellyotsiro 19 hours ago
      hey, what error does it throw?
  • chrisweekly 1 day ago
    This looks interesting and worthwhile. I did a double-take when I read "when (a few months ago) I was still in high school in Kazakhstan"
  • canopi 1 day ago
    Congrats on the launch. The problem is definitely there. I wonder how you are planning to differentiate yourselves from Cursor and the like. You mention you are complementary, but Cursor provides similar features, for instance to add external doc context to a prompt. I understand you do better in your benchmark, but with the amount of funding they have, they may be able to replicate and improve on it (unless you have a secret thing).
    • jellyotsiro 1 day ago
      as I mentioned above there are many more use cases than just coding (APIs, research, knowledge bases, even personal or enterprise data sources the agent needs to explore and validate dynamically)

      I started out with coding agents specifically because it came from the personal pain of how horrible they are at providing up-to-date context.

  • dhruv3006 13 hours ago
    So many coding tools. What makes you different?
    • jellyotsiro 4 hours ago
      We’re not a coding tool. We’re the context layer for agents.
  • RomanPushkin 1 day ago
    Having this RAG layer was always another thing I wanted to try. I haven't coded it myself, and I'm super interested in whether this gives a real boost while working with Claude. Curious to hear from anyone who has already tried the service: what's your feedback? Did you feel you're getting real improvements?
    • jellyotsiro 1 day ago
      Wouldn’t call it just RAG though. Agentic discovery and semantic search are the way to go right now, so Nia combines both approaches. For example, you can dynamically search through a documentation tree or grep for specific things.
      • zwaps 1 day ago
        We call it agentic RAG. The retriever is an agent. It’s still RAG
        • jellyotsiro 1 day ago
          Which would be much better than the techniques used in 2023. As context windows increase, combining them becomes even easier.

          There are a lot of ways you can interpret agentic RAG, pure RAG, etc.

  • bn-l 1 day ago
    Is the RAG database on your servers or is it local? If not local is there a local option?
    • jellyotsiro 1 day ago
      hey! I use multiple DBs, but the primary ones are turbopuffer and chroma for package search. They are really great.

      re local, I do local for certain companies!

      • bn-l 15 hours ago
        Nice. Huge congrats on the launch!!!
  • brainless 13 hours ago
    Very happy to see this since I am building in this domain. We need external and internal context though. I am aiming for always available context for current and related projects, reference projects, documentation, library usage, commands available (npm, python,...), tasks, past prompts, etc. all in one product. My product, nocodo (1), is built by coding agents, Claude Code (Sonnet only) and opencode (Grok Code Fast 1 and GLM 4.6).

    I just made a video (2) on how I prompt with Claude Code, ask for research from related projects, build context with multiple documents, then converge into a task document, share that with another coding agent, opencode (with Grok or GLM), and then review with Claude Code.

    nocodo is itself a challenge for me: I do not write or review code line by line. I spend most of the time in this higher level context gathering, planning etc. All these techniques will be integrated and available inside nocodo. I do not use MCPs, and nocodo does not have MCPs.

    I do not think plugging into existing coding agents works; that is not how I am building. I think building full-stack is the way, from prompt to deployed software. Consumers will step away from anything other than planning. The coding agent will be more a planning tool. Everything else will slowly vanish.

    Cheers to more folks building here!

    1. https://github.com/brainless/nocodo 2. https://youtu.be/Hw4IIAvRTlY

  • govping 1 day ago
    The context problem with coding agents is real. We've been coordinating multiple agents on builds - they often re-scan the same files or miss cross-file dependencies. Interested in how Nia handles this - knowledge graph or smarter caching?
    • jellyotsiro 1 day ago
      hey! knowledge graphs are also used at runtime but paired with other techniques, since graphs are only useful for relationship queries.
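
      For the relationship-query part, picture something like this; an illustrative dependency graph with networkx, not our actual runtime:

      ```python
      import networkx as nx

      # Edge A -> B means "A imports (depends on) B".
      g = nx.DiGraph()
      g.add_edges_from([
          ("auth/session.py", "auth/tokens.py"),
          ("api/routes.py", "auth/session.py"),
          ("api/routes.py", "db/models.py"),
      ])

      # Relationship query: everything that transitively depends on tokens.py,
      # i.e. what an agent should re-check if tokens.py changes.
      impacted = nx.ancestors(g, "auth/tokens.py")
      print(impacted)  # {'auth/session.py', 'api/routes.py'}
      ```
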
  • krisgenre 13 hours ago
    Is this similar to the indexing done by Jetbrains IDEs?
    • jellyotsiro 4 hours ago
      JetBrains is great at indexing your local codebase and understands it deeply. We don’t try to replace that. Nia focuses on external context: docs, packages, APIs and other remote sources that agents need but your IDE can’t index.
  • orliesaurus 1 day ago
    Benchmarks?
  • jacobgorm 1 day ago
    SOTA on internal benchmark?
  • kenforthewin 1 day ago
    Congrats. From my experience, Augment (https://augmentcode.com) is best in class for AI code context. How does this compare?
    • jellyotsiro 1 day ago
      Augment is a coding agent. Nia is an external context engine for coding agents that improves their code output quality.
      • kenforthewin 1 day ago
        Sure, but Augment’s main value add is their context engine, and imo they do it really well. If all they had to do was launch an MCP for their context engine product to compete, I think the comparison is still worth exploring.
  • mike1505 1 day ago
    How does it compare to Serena MCP? :)

    https://github.com/oraios/serena

    • jellyotsiro 1 day ago
      Serena is great for semantic code editing and symbol-level retrieval on your own codebase. It gives the agent IDE-like capabilities inside the repo. Nia focuses on a different layer. We target external context: remote code, docs, packages, APIs, research, enterprise knowledge, etc.

      With Nia, the agent can dynamically search, traverse, and validate information outside the local project so it never hallucinates against out-of-date or incomplete sources.

  • zwaps 1 day ago
    Absolutely insane that we celebrated coding agents getting rid of RAG, only for the next innovation to be RAG.
    • dang 1 day ago
      "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

      "Don't be snarky."

      https://news.ycombinator.com/newsguidelines.html

    • jellyotsiro 1 day ago
      Not exactly just RAG. The shift is agentic discovery paired with semantic search.

      Also, most coding agents still combine RAG and agentic search. See the Cursor blog post about how semantic search helps them understand and navigate massive codebases: https://cursor.com/blog/semsearch

    • choilive 1 day ago
      The pendulum swings back.
    • ModernMech 1 day ago
      This is happening over and over and over. The example of prompt engineering is just a form of protocol. Context engineering is just about cache management. People think LLMs will replace programming languages and runtimes entirely, but so far it seems they have been used mostly to write programs in programming languages, and I've found they're very bad interpreters and compilers. So far, I can't really pick out what exactly LLMs are replacing except the need to press the individual keys on the keyboard, so I still struggle to see them as more than super fancy autocomplete. When the hype is peeled away, we're still left with all the same engineering problems but now we have added "Sometimes the tool hallucinates and gaslights you".
  • Jakedismo 9 hours ago
    [dead]
  • dang 1 day ago
    [under-the-rug stub]

    [see https://news.ycombinator.com/item?id=45988611 for explanation]

    • ayushrodrigues 1 day ago
      congrats on the launch Arlan! Nia is a lifesaver when we're coding :)
    • rushingcreek 1 day ago
      Congrats on the launch! Definitely a problem I’ve run into myself.
    • agentastic 1 day ago
      Congrats on the launch, Nia looks great.
    • johnsillings 1 day ago
      super smart. congrats on the launch!
    • himike 1 day ago
      I love Nia, keep it up Arlan
    • ramzirafih 1 day ago
      Love it.
    • phegler 1 day ago
      Amazing product. As an individual heavy user I must say it does what it is supposed to do and it does it well.
  • ModernMech 1 day ago
    [flagged]
    • dang 1 day ago
      Please make your substantive points without being snarky or aggressive. You have a good point in there at the end, but the site guidelines ask you not to comment like this, and that's especially important in Show or Launch threads.

      https://news.ycombinator.com/newsguidelines.html

      • ModernMech 1 day ago
        Ok here is my less snarky and less aggressive take on the site and product: it doesn't leave me with confidence, it makes me feel uneasy about them as a company, it makes me not trust them, and it makes me feel like I'm being lied to. To fix this, provide proof. Otherwise, stop making the claims. Unfortunately with the frequency these kinds of products come out of YC, it seems like maybe YC coaches these companies as if this is an effective way to communicate.
        • dang 1 day ago
          The only coaching going on with Launch HNs is from me and tomhow, and I can tell you that we're constantly urging people to tone down grand claims, provide concrete examples, accessible demos, and so on—partly because it makes the posts more interesting, and partly to reduce surface area for the snarky and rigid objections that internet forums optimize for.

          We don't do a perfect job of this, because (1) Launch HN coaching is on top of our main jobs running HN and we only have so many hours; and (2) startup founders' priority is working on their startup (as it should be!). They only have so many cycles for reworking everything to suit HN's preferences, which are idiosyncratic and at times curmudgeonly or cynical. Curmudgeons and cynics can't be convinced in the first place so it's not a good idea for a founder to put too much time into indulging them.

          Some of what you're saying here boils down to that their home page shouldn't have any marketing tropes at all (e.g. testimonials, companies-using-us, etc.). I don't like those tropes either, but this is an example of what I mean by an idiosyncratic preference. Companies do that kind of thing because, obviously, it works. That's how the world is. The only thing that you accomplish by angrily blaming a startup founder for doing standard marketing is to make the discussion dyspeptic and offtopic. And yes, I do use the word "dyspeptic" too much :)

          • ModernMech 1 day ago
            I mean, you can dismiss me by calling me a cynical curmudgeon (which is not inaccurate), but from my perspective their website doesn't try to convince -- it tries to bamboozle. I don't think it's idiosyncratic at all to expect that claims made should be proven and supported, and that companies should present themselves with integrity and be genuine in their representations.

            > Curmudgeons and cynics can't be convinced in the first place so it's not a good idea for a founder to put too much time into indulging them.

            I'd say we're just not convinced by marketing lingo and puffery. I was convinced by the simple README containing code and transparent evidence that a fellow HNer put up in their personal capacity, so maybe you can direct the Nia team to that as an example of how to properly convince curmudgeons and cynics.

            • dang 23 hours ago
              Kudos to you for "which is not inaccurate" and the subtle shift to "we" - that made me smile.

              Personally my tastes are much the same as yours, but we're asking for too much if we want startups to stop doing normal marketing.

        • pdyc 1 day ago
          I am interested in knowing what the correct way to do it would be, according to your checklist. For example, you said testimonials from Twitter can be bots; which testimonials, according to you, would give you confidence that the product is genuine?
          • ModernMech 1 day ago
            Okay let's dissect it. First, I will say since this is a for-profit corporate website and they are trying to get something from me, I approach it with a fully skeptical, 0 benefit of doubt perspective.

            What's the gif supposed to tell me? It's supposed to demo the product and give me a feel for its capabilities. But it just flits around and goes so fast, offers zero explanation for anything, it just leaves me disoriented. So at minimum, this needs captions and it needs to go about 2x slower. But really, this one GIF should not be the most substantive element on the first page relating to the actual product and what it does. Trust lowered.

            Moving on to the "company carousel", which is trying to say "these other companies trust us so you should too". They're trying to ride on the reputations of Stanford, Cornell, Columbia, UPenn, Google, etc. as a sort of pseudo-endorsement, because they cannot post real endorsements from these institutions, because they do not exist (doesn't YC have legal counsel to tell them this is illegal?). How are engineers using Nia at Stanford? We don't know, Nia will not say, likely because no one at Stanford is using it in any real capacity that is impressive enough to put on the front page of the website. If they were, then why wouldn't Nia tell us about that rather than just flashing the Stanford logo? So the logo suffices, and I guess the more logos the better. Trust lowered.

            Next the investor list: who is this for and what does it communicate? It appears to be a list of Chiefs, VPs, Co-Founders, and various funds who are deemed to be "world class", which is just another parade of logos but for a different audience, likely other investors who know these people. Maybe this speaks to some people in terms of the project having a solid financial backing but that's a smokescreen to distract you from the fact there's no actual business plan here aside from running on the VC treadmill and hoping to get acquired by one of your customers and/or investors. Trust lowered.

            Then we get to the Twitter parade, which is a third instance of "just trust us bro". And it includes such gems as "Can confirm, coding agents go hard" and "go try Nia, go into debt if you have to". Testimonials are for products I can't try myself, this seems like something that can be demoed, so why isn't it? Why did they opt to devote all this space to show a bunch of random people saying random uninteresting things about their product, rather than use the space to say more interesting things about their product? Because the testimonials are a distraction from the actual product. Trust lowered.

            Again, I'm left asking: Why do I have to listen to and trust these other people if the technology is so good? Why am I halfway down the page reading this thing, and I've yet to hear any specifics about how this thing works or what it does for me. I was told other people are using it but not how, I was told other people invested in it but not how much, and I was told some companies are maybe using it but not in what capacity.

            So in summary, this page is: "Look how shiny! You trust us. No really, you can trust us! Seriously, look at all these people, who say you can trust us, you seriously can! Now give us money."

            So to answer your question:

            > what would be the correct way to do it according to your checklist.

            Don't do any of the things that were done, and instead lead with the product. Prove all claims made. If a claim can't be proven don't make it. Stand behind your technology rather than testimonials.

            • replwoacause 22 hours ago
              Well put. But what you've identified here is pretty much the norm for ~90% of startup/SaaS sites. They all just regurgitate this formula.
              • akcho 19 hours ago
                Maybe worth considering whether this formula is still effective.
            • fazkan 1 day ago
              @dang now that he has made his point, shouldn't the thread be made undead?
              • ModernMech 22 hours ago
                My snark shall never be resurrected, lest it thirst for brains.