Hi everyone. I'm one of the maintainers of this project. We're both excited and humbled to see it on Hacker News!
We created this handbook to make LLM inference concepts more accessible, especially for developers building real-world LLM applications. The goal is to pull together scattered knowledge into something clear, practical, and easy to build on.
We’re continuing to improve it, so feedback is very welcome!

GitHub repo: https://github.com/bentoml/llm-inference-in-production
I'm not going to open an issue for this, but you should consider expanding the self-hosting part of the handbook and explicitly recommending llama.cpp for local self-hosted inference.
The self-hosting section covers corporate use cases with vLLM and SGLang, as well as personal desktop use with Ollama, which is a wrapper over llama.cpp.
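For the corporate use case, a minimal offline-inference sketch with vLLM looks roughly like this (the model name below is just a placeholder for whatever you actually deploy):

```python
# Minimal vLLM offline-inference sketch; the model id is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # swap in the model you deploy
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain KV caching in one sentence."], params)
print(outputs[0].outputs[0].text)
```

In production you'd more likely run vLLM's OpenAI-compatible server and hit it over HTTP, but the offline API is the quickest way to see what the engine does.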
You can disagree all you want, but Ollama does not keep its vendored copy of llama.cpp up to date, and it also ships, via its mirror, completely random, badly labeled models claiming to be the real upstream ones, often misappropriated from major community members (Unsloth, et al.).
When you get a model from Ollama's service, you have no clue what you're actually getting, and inexperienced users aren't even aware of this.
Ollama is an unrestricted footgun because of this.
I have a question. In https://github.com/bentoml/llm-inference-in-production/blob/..., you have a single picture that defines TTFT and ITL.
That does not match my understanding (though you probably know more about this than I do): in the graphic, it looks like the model generates four tokens, T0 to T3, before outputting a single output token.
I'd have expected that picture for ITL (except that the labeling of the last box would then be off). For TTFT, I'd have expected only a single token T0 from the decode step, which is then immediately handed to detokenization and arrives as the first output token (assuming a streaming setup; otherwise measuring TTFT makes little sense).
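Concretely, this is how I'd measure both from the client side; a rough sketch assuming a streaming OpenAI-compatible endpoint (the base URL, API key, and model name are placeholders):

```python
# Client-side TTFT/ITL measurement sketch; endpoint and model are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

start = time.perf_counter()
first_token_time = None   # TTFT: request sent -> first streamed token
gaps = []                 # ITL: gaps between consecutive streamed chunks
last = None

stream = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    # Skip keep-alive/empty chunks; note a chunk may batch several tokens,
    # so this only approximates per-token ITL.
    if not chunk.choices or not chunk.choices[0].delta.content:
        continue
    now = time.perf_counter()
    if first_token_time is None:
        first_token_time = now
    else:
        gaps.append(now - last)
    last = now

print(f"TTFT: {first_token_time - start:.3f}s")
if gaps:
    print(f"mean ITL: {sum(gaps) / len(gaps):.3f}s")
```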
This seems useful and well put together, but splitting it into many small pages instead of a single scrollable page is frustrating, particularly on mobile, where the table of contents isn't shown by default. I stopped reading after a few pages because it annoyed me.
At the very least, the sections should be a single page each.
Ooh, this looks really neat! I'd love to see more content in the future on structured outputs/guided generation and sampling. Another great reference on inference-time sampling algorithms is here: https://rentry.co/samplers
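To make the sampling part concrete, here's a toy top-p (nucleus) sampler in NumPy; purely illustrative, since real inference servers run this batched on the GPU:

```python
# Toy top-p (nucleus) sampling over a single logit vector; illustrative only.
import numpy as np

def nucleus_sample(logits: np.ndarray, top_p: float = 0.9,
                   temperature: float = 1.0) -> int:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]            # token ids, most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = order[:cutoff]                   # smallest set with mass >= top_p
    p = probs[nucleus] / probs[nucleus].sum()  # renormalize within the nucleus
    return int(np.random.choice(nucleus, p=p))

# Example: sample from a 5-token vocabulary
print(nucleus_sample(np.array([2.0, 1.0, 0.5, -1.0, -3.0])))
```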
Very glad to see this. There is (understandably) much excitement and focus on training models in publicly available material.
Running them well is very important too. As we get to grips with everything models can do and look to deploy them widely, knowledge of how best to run them becomes ever more important.
It's a really beautiful project, and I'd like to ask something purely out of curiosity and with the best intentions: what's the name of the design trend you used for your website? I really loved it.
Thanks for putting this together! From now on, I only need one link to point interested people to.
Only one suggestion: on the "OpenAI-compatible API" page, it would be great to also have a simple example of the raw REST call, without needing to import the OpenAI package.
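Something like this is what I mean; a minimal sketch assuming an OpenAI-compatible server (the URL, key, and model name are placeholders):

```python
# Raw REST call to an OpenAI-compatible endpoint; no OpenAI SDK needed.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={
        "Authorization": "Bearer dummy",   # many local servers ignore the key
        "Content-Type": "application/json",
    },
    json={
        "model": "my-model",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```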