  • codezero 1 hour ago
    I am amazed, though not entirely surprised, that these models keep getting smaller while the quality and effectiveness increase. Z-Image Turbo is wild; I'm looking forward to trying this one out.

    An older thread on this has a lot of comments: https://news.ycombinator.com/item?id=46046916

    • roenxi 57 minutes ago
      There are probably some more subtle tipping points that small models hit too. One thing about a 100GB model is that downloading and running it is non-trivially difficult in a way a 4GB model isn't. At 4GB, I think it's reasonable to assume that most devs can just try it and see what it does.
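
      A minimal sketch of that "just try it" loop, using the Hugging Face diffusers API (the calls are real, but the repo id below is a placeholder, not a confirmed model card):

        from diffusers import DiffusionPipeline
        import torch

        # Placeholder repo id -- substitute the model's actual Hub name.
        pipe = DiffusionPipeline.from_pretrained(
            "black-forest-labs/FLUX.2-klein",
            torch_dtype=torch.bfloat16,
        ).to("cuda")

        # One prompt in, one image out; a ~4GB checkpoint makes this
        # practical on a single consumer GPU.
        image = pipe("a watercolor fox in the snow").images[0]
        image.save("fox.png")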
  • SV_BubbleTime 14 minutes ago
    Flux2 Klein isn’t some generational leap or anything. It’s good, but let’s be honest, this is an ad.

    What will be really interesting to me is the release of Z-Image. If that goes the way it’s looking, it’ll be a natural-language SDXL 2.0, which seems to be what people really want.

    Releasing the Turbo/Distilled/Finetune months ago was a genius move, really. It hurt the Flux and Qwen releases on the implication of a possible future release alone.

    If this was intentional, I can’t think of the last time I saw such shrewd marketing.

    • refulgentis 9 minutes ago
      I’m a bit confused: both you and another commenter mention something called Z-Image, presumably another Flux model?

      Your framing of it is speculative, i.e. it is forthcoming; theirs is present tense. Could I trouble you to give us plebs some more context? :)

      E.g., parsed as is, and setting aside the general confusion if you’re unfamiliar, it is unclear how one can observe “the way it is looking”, especially if Turbo was released months ago and there is some other model that is unreleased. I chose to bother you because the other commenter’s reply was less focused on lab-on-lab strategy.