LLM Inference Handbook

(bentoml.com)

331 points | by djhu9 1 day ago

8 comments

  • sherlockxu 22 hours ago
    Hi everyone. I'm one of the maintainers of this project. We're both excited and humbled to see it on Hacker News!

    We created this handbook to make LLM inference concepts more accessible, especially for developers building real-world LLM applications. The goal is to pull together scattered knowledge into something clear, practical, and easy to build on.

    We’re continuing to improve it, so feedback is very welcome!

    GitHub repo: https://github.com/bentoml/llm-inference-in-production

    • DiabloD3 11 hours ago
      I'm not going to open an issue on this, but you should consider expanding on the self-hosting part of the handbook and explicitly recommend llama.cpp for local self-hosted inference.
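      For what it's worth, local inference with llama.cpp can be a handful of lines via its Python bindings; a minimal sketch, assuming the llama-cpp-python package and a placeholder GGUF path:

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # model path is a placeholder; any GGUF checkpoint works here
        llm = Llama(model_path="./model.gguf", n_ctx=4096)

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Explain KV caching in one sentence."}],
            max_tokens=128,
        )
        print(out["choices"][0]["message"]["content"])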
      • leopoldj 8 hours ago
        The self-hosting section covers the corporate use case with vLLM and SGLang, as well as personal desktop use with Ollama, which is a wrapper over llama.cpp.
        • DiabloD3 5 hours ago
          Recommending Ollama isn't useful for end users; it's just a trap in a nice-looking wrapper.
          • nl 3 hours ago
            Strong disagree on this. Ollama is great for moderately technical users who aren't really programmers or proficient with the command line.
            • DiabloD3 2 hours ago
              You can disagree all you want, but Ollama does not keep its vendored copy of llama.cpp up to date, and it also ships, via its mirror, completely random, badly labeled models claiming to be the upstream ones, often misappropriated from major community members (Unsloth, et al.).

              When you get a model offered by Ollama's service, you have no clue what you're getting, and normal people who have no experience aren't even aware of this.

              Ollama is an unrestricted footgun because of this.

    • criemen 10 hours ago
      Thanks a lot for putting this together!

      I have a question. In https://github.com/bentoml/llm-inference-in-production/blob/..., you have a single picture that defines TTFT and ITL. That doesn't match my understanding (though you probably know more about this than I do): in the graphic, it looks like the model generates four tokens, T0 to T3, before outputting a single output token.

      I'd have expected that picture for ITL (except that the labeling of the last box would then be off), but for TTFT I'd have expected only a single token T0 from the decode step, which is then immediately handed to detokenization and arrives as the first output token (assuming a streaming setup; otherwise measuring TTFT makes little sense).
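      For reference, here is roughly how I'd measure both in a streaming setup (a sketch only; the base URL and model name are placeholders, and each streamed chunk is counted as one token, which is an approximation):

        import time
        from openai import OpenAI

        # placeholder endpoint and model for an OpenAI-compatible server
        client = OpenAI(base_url="http://localhost:3000/v1", api_key="n/a")

        start = time.perf_counter()
        ttft, token_times = None, []

        stream = client.chat.completions.create(
            model="my-model",
            messages=[{"role": "user", "content": "Hello"}],
            stream=True,
        )
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                now = time.perf_counter()
                if ttft is None:
                    ttft = now - start  # time to first output token
                token_times.append(now)

        # inter-token latency: mean gap between consecutive streamed tokens
        itl = sum(b - a for a, b in zip(token_times, token_times[1:])) / max(len(token_times) - 1, 1)
        print(f"TTFT: {ttft:.3f}s, mean ITL: {itl:.4f}s")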

    • armcat 21 hours ago
      Amazing work on this, beautifully put together and very useful!
    • sethherr 8 hours ago
      This seems useful and well put together, but splitting it into many small pages instead of a single scrollable page is frustrating, particularly on mobile, where the table of contents isn't shown by default. I stopped reading after a few pages because it annoyed me.

      At the very least, the sections should be a single page each.

  • subset 18 hours ago
    Ooh, this looks really neat! I'd love to see more content in the future on structured outputs/guided generation and sampling. Another great reference on inference-time sampling algorithms is here: https://rentry.co/samplers
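    To give a flavour of what those inference-time samplers do, here is a minimal sketch of temperature plus top-p (nucleus) sampling over a raw logits vector (numpy only; the values are illustrative):

      import numpy as np

      def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=None):
          rng = rng or np.random.default_rng()
          # temperature scaling followed by a numerically stable softmax
          scaled = logits / temperature
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          # keep the smallest set of tokens whose cumulative probability reaches top_p
          order = np.argsort(probs)[::-1]
          cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
          keep = order[:cutoff]
          return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

      # illustrative logits for a 5-token vocabulary
      print(sample_top_p(np.array([2.0, 1.0, 0.5, -1.0, -3.0])))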
    • larme 14 hours ago
      Wow that's really thorough
  • gchadwick 9 hours ago
    Very glad to see this. There is (understandably) much excitement and focus on training models in publicly available material.

    Running them well is very important too. As we get to grips with everything models can do and look to deploy them widely, knowledge of how best to run them becomes ever more important.

  • aligundogdu 21 hours ago
    It's a really beautiful project, and I'd like to ask something purely out of curiosity and with the best intentions: what's the name of the design trend you used for your website? I really love the site too.
    • Jimmc414 12 hours ago
      It appears to be using Infima, which is Docusaurus's default CSS framework, plus a standard system font stack [0].

      [0] font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;

  • qrios 18 hours ago
    Thanks for putting this together! From now on, I only need one link to point interested people to.

    Only one suggestion: on the "OpenAI-compatible API" page, it would be great to also have a simple example of the pure REST call, instead of needing to import the OpenAI package.
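    Something like this, presumably (a sketch only, with a placeholder endpoint, key, and model name; any OpenAI-compatible server should expose the same route):

      import requests

      resp = requests.post(
          "http://localhost:3000/v1/chat/completions",  # placeholder base URL
          headers={"Authorization": "Bearer n/a", "Content-Type": "application/json"},
          json={
              "model": "my-model",  # placeholder model name
              "messages": [{"role": "user", "content": "Hello"}],
              "max_tokens": 64,
          },
          timeout=60,
      )
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])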

  • srameshc 15 hours ago
    If I remember correctly, BentoML was about MLOps; I tried it about a year back. Did the company pivot?
    • fsjayess 15 hours ago
      There is a big pie in the market around LLM serving, so it makes sense for a serving framework to extend into that space.
  • holografix 18 hours ago
    Very good reference, thanks for collating this!