Show HN: I made an Ollama summarizer for Firefox

(addons.mozilla.org)

132 points | by tcsenpai 71 days ago

8 comments

  • RicoElectrico 71 days ago
    I've found that, for the most part, the articles I want summarized are the ones that only fit into the largest-context models such as Claude. Otherwise I can just skim-read the article, possibly in reader mode for legibility.

    Is llama 2 a good fit considering its small context window?

    • tcsenpai 71 days ago
      Personally I use llama3.1:8b or mistral-nemo:latest, which have a decent context window (even if it is usually smaller than the commercial ones). I am also working on a token calculator / content-division method, but it is very early.
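A content-division method of the kind tcsenpai describes could be sketched as follows. This is a hypothetical illustration, not the extension's actual code: it uses a rough characters-per-token heuristic (~4 chars/token for English) instead of a real tokenizer, and splits on paragraph boundaries so each chunk fits the context window.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def split_into_chunks(text: str, max_tokens: int = 4096) -> list[str]:
    """Split text on paragraph boundaries so each chunk fits the budget.

    A single paragraph larger than the budget still becomes its own
    (oversized) chunk; a real implementation would split it further.
    """
    chunks, current = [], []
    current_tokens = 0
    for para in text.split("\n\n"):
        para_tokens = estimate_tokens(para)
        if current and current_tokens + para_tokens > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += para_tokens
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk would then be summarized separately and the partial summaries combined.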
      • garyfirestorm 71 days ago
        why not llama3.2:3B? it has a fairly large context window too
        • reissbaker 71 days ago
          I assume because the 8B model is smarter than the 3B model; it outperforms it on almost every benchmark: https://huggingface.co/meta-llama/Llama-3.2-3B

          If you have the compute, might as well use the better model :)

          The 3.2 series wasn't the kind of leap that 3.0 -> 3.1 was in terms of intelligence; it was just:

          1. Meta releasing multimodal vision models for the first time (11B and 90B), and

          2. Meta releasing much smaller models than the 3.1 series (1B and 3B).

    • reissbaker 71 days ago
      I don't think this is intended for Llama 2? The Llama 3.1 and 3.2 series have very long context windows (128k tokens).
    • tempodox 70 days ago
      What about using a Modelfile for ollama that tweaks the context window size? I seem to remember parameters for that in the ollama GitHub docs.
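The Modelfile route tempodox mentions does exist: Ollama's `PARAMETER num_ctx` sets the context window used at inference time. A minimal sketch (the base model and value here are just for illustration):

```
FROM llama3.1:8b
PARAMETER num_ctx 16384
```

Building a derived model from it would look like `ollama create llama3.1-16k -f Modelfile`, after which the extension could target `llama3.1-16k` instead.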
      • tcsenpai 69 days ago
        For now I've applied a pre-filled table with a 4096-token default limit. Users can also set a higher or lower limit directly from the UI now. I've added chunked and recursive summarization too.
    • htrp 69 days ago
      do multi-stage summarization?
      • tcsenpai 69 days ago
        Hi! This was a good suggestion! I implemented it in v 1.1 which is already out :)
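The multi-stage (recursive) summarization discussed here can be sketched with the summarizer injected as a callable, so the control flow is independent of the model backend. The function name and the character budget are hypothetical, not taken from the extension:

```python
from typing import Callable

def summarize_recursively(
    text: str,
    summarize: Callable[[str], str],
    max_chars: int = 8000,
) -> str:
    """Summarize text that may exceed the model's context window.

    Stage 1: split the text into windows that fit the budget and
    summarize each one. Later stages: if the joined partial summaries
    are still too long, recurse on them until one summary remains.
    """
    if len(text) <= max_chars:
        return summarize(text)
    # Stage 1: summarize fixed-size windows of the input.
    windows = [text[i : i + max_chars] for i in range(0, len(text), max_chars)]
    partials = [summarize(w) for w in windows]
    # Stage 2+: recurse on the concatenated partial summaries.
    return summarize_recursively("\n\n".join(partials), summarize, max_chars)
```

In the extension's case, `summarize` would presumably wrap a call to Ollama's `/api/generate` endpoint with a summarization prompt.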
  • asdev 71 days ago
    I built a chrome version of this for summarizing HN comments: https://github.com/built-by-as/FastDigest
    • larodi 70 days ago
      Thank you, I've been thinking about this for a long time while copying lots of conversations back and forth to Claude.
      • asdev 70 days ago
        no problem! hope it works out for you. currently only supports Ollama and OpenAI but should be pretty easily extended to Claude and other APIs
  • chx 71 days ago
    Help me understand why people are using these.

    I presume you want information of some value to you otherwise you wouldn't bother reading an article. Then you feed it to a probabilistic algorithm and so you can not have any idea what the output has to do with the input. Like https://i.imgur.com/n6hFwVv.png you can somewhat decipher what this slop wants to be but what if the summary leaves out or invents or inverts some crucial piece of info?

    • InsideOutSanta 70 days ago
      "Then you feed it to a probabilistic algorithm and so you can not have any idea what the output has to do with the input"

      This is theoretically true, but to me at least, practically irrelevant. In all cases, for most values of the word "all", the summary does tell you what the article contains.

      For me at least, the usefulness is not that the summary replaces reading the article. Instead, it's a signal telling me whether I should read it in the first place.

    • andrewmcwatters 71 days ago
      People write too much. Get to the point.
      • ranger_danger 71 days ago
        I think you just insulted every journalist on Earth.
        • seb1204 71 days ago
          Nowadays a lot of websites are written in a style that goes on and on, dancing around the topic and padding in historical context, all in a terrible writing style, only to lengthen the text for SEO. In such cases a summary can be a good thing.
        • Spivak 71 days ago
          It's really not that deep. There's writing you read for its aesthetic merits and writing you read for its contents. When you want the latter but the piece is written for the former a summary fixes the mismatch.
      • throwup238 71 days ago
        Even if I want to read the entirety of a piece of long form writing I'll often summarize it (with Kagi key points mode) so that I know what the overall points are and can follow the writing better. Too much long form writing is written like some mystery thriller where the writer has to unpack an entire storyline before they'll state their main thesis, so it helps my reading comprehension to know what the point is going in. The personal interest stories that precede the main content always land better that way.
      • chx 71 days ago
        any point? regardless of what's written? does that work for you?
        • 87m78m78m 71 days ago
          Why don't you try using these tools yourself so you have an understanding of them? People like to get shit summarized, it's really not as deep as you are trying to make it out to be.
          • chx 69 days ago
            Did you read my original point? https://news.ycombinator.com/item?id=41814310 given that, why on earth would I waste my time?

            Also note

            https://hachyderm.io/@inthehands/112006855076082650

            > You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

            > Alas, that does not remotely resemble how people are pitching this technology.

        • garyfirestorm 71 days ago
          sometimes you don't have time to read the entirety of a long article. You want a quick summary; some people are poor at summarizing things in their head as they go and can get lost in dense text. Extensions like these give me headings, a structure to follow, and a quick overview, and help me decide whether I want to dive deeper.
          • drdaeman 71 days ago
            Sometimes it's not even an article, but a video. And sometimes all you care about is a single tiny fact from that video.

            Although I don't think this particular summarizer works for videos. And I don't think Ollama API supports audio ingestion for transcription. There are some summarizers that work with YouTube specifically (using automatic subtitles).

    • KaiMagnus 70 days ago
      At least for me it’s less about the individual article, in that case I agree with you, but more about the case where you have 25 articles.

      Now you can’t possibly get through all of them and have to decide which of those could be worth your time. And in that case, the tradeoff makes sense.

  • tcsenpai 69 days ago
    Update: v1.1 is out!

    # Changelog

    ## [1.1] - 2024-03-19

    ### Added

    - New `model_tokens.json` file containing token limits for various Ollama models.
    - Dynamic token limit updating based on the selected model in options.
    - Automatic loading of model-specific token limits from `model_tokens.json`.
    - Chunked and recursive summarization for long pages.
    - Better handling of markdown returns.

    ### Changed

    - Updated `manifest.json` to include `model_tokens.json` as a web-accessible resource.
    - Modified `options.js` to handle dynamic token limit updates:
      - Added `loadModelTokens()` function to fetch model token data.
      - Added `updateTokenLimit()` function to update the token limit based on the selected model.
      - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
      - Added an event listener for model selection changes.

    ### Improved

    - User experience on the options page with automatic token limit updates.
    - Flexibility in handling different models and their respective token limits.

    ### Fixed

    - Potential issues with incorrect token limits for different models.
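A `model_tokens.json` file like the one described above would presumably map model names to context limits. A hypothetical shape (the keys and values here are illustrative, not taken from the extension):

```
{
  "llama3.1:8b": 131072,
  "mistral-nemo:latest": 131072,
  "llama2:7b": 4096
}
```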

  • oneshtein 71 days ago
    I've been using PageAssist with Ollama for two months, but I never used the "Summarise" option in the menu. :-/
    • tcsenpai 70 days ago
      TIL, I am experimenting with PageAssist right now
  • donclark 71 days ago
    Can we get this as the default for all newly posted HN articles? Please and thank you.