Transformers Without Normalization

(jiachenzhu.github.io)

108 points | by hellollm 8 hours ago

8 comments

  • joshlk 3 hours ago
    When using low-precision formats like float8, you usually have to upcast the activations to BF16 before normalising, so the normalisation layers consume a proportionally larger share of compute as precision drops. Replacing these layers would help reduce the compute cost significantly.
  • kouteiheika 5 hours ago
    If true, this is a very nice incremental improvement. It looks like it doesn't meaningfully improve the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
    • rryan 4 hours ago
      RMSNorm is pretty insignificant in terms of the overall compute in a transformer, though -- usually the reduction work can be fused with earlier or later operations.
      • londons_explore 3 hours ago
        RMSNorm acts like a barrier: no compute in the next network layer can start before all compute in the previous layer is done.

        When splitting networks across multiple GPUs, this means you must wait for the slowest node and the longest-latency link.

        As soon as you can remove most of these barriers, compute over non-latency-guaranteed networks becomes more practical, as does non-homogeneous compute (i.e. mixing different GPU models).

        • elcritch 2 hours ago
          What are other barriers in transformers? Or is the normalization layer the primary one?
          • woadwarrior01 2 hours ago
            Dot-product attention is the biggest barrier. This is why there are so many attempts to linearize it.
      • atgctg 41 minutes ago
        The paper's Table 7 shows DyT reducing overall LLaMA 7B inference time by 7.8% and training time by 8.2%. That is not insignificant.
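  The trade-off discussed in this subthread can be made concrete. Per the paper, DyT is an elementwise tanh with a learnable scale, followed by the usual affine parameters; unlike RMSNorm it needs no reduction across the feature dimension, which is the "barrier" mentioned upthread. A minimal NumPy sketch (shapes, `alpha`, and the `eps` value are illustrative assumptions, not the paper's exact settings):

  ```python
  import numpy as np

  def rmsnorm(x, weight, eps=1e-6):
      # Reduction over the feature axis: every output element depends on
      # *all* elements of x along that axis, so this acts as a sync barrier.
      rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
      return (x / rms) * weight

  def dyt(x, alpha, weight, bias):
      # Dynamic Tanh: purely elementwise, no cross-feature reduction.
      return np.tanh(alpha * x) * weight + bias

  x = np.random.randn(2, 8).astype(np.float32)   # (batch, features)
  w = np.ones(8, dtype=np.float32)
  b = np.zeros(8, dtype=np.float32)

  out_rms = rmsnorm(x, w)
  out_dyt = dyt(x, alpha=0.5, weight=w, bias=b)
  ```

  The elementwise form is what makes fusion and pipelining easier: each output value can be computed as soon as its single input value is available.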
  • qmatch 5 hours ago
    Need to read the details, but removing the norm can be big. It’s always a pain to make sure that your network is normalized properly when trying new architectures. Likely there will still be other implications of the tanh, since the norm is sometimes solving a conditioning problem, but IMO more alternatives are welcome
  • Lerc 4 hours ago
    Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?

    Surely you would want to compare the output of the LayerNorm without the weight and bias to get an impression of their similarity.

    I guess it doesn't matter if the final result works, but I feel like looking at the bit they are changing in isolation might provide better insight into what is happening.

    • lukah 4 hours ago
      From their implementation, it looks like they're calculating tanh and then applying a weight and bias.
      • Lerc 3 hours ago
        Exactly, and that's what happens in LayerNorm too. So I figured the best basis for comparison would have been to leave that bit out when looking at their difference or similarity, because obviously the parts that share the same implementation will be the same.
  • blackbear_ 4 hours ago
    And so vanishing gradients are not a thing anymore?
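  The concern is real for a plain tanh, whose gradient 1 - tanh²(x) collapses once inputs saturate; presumably the learnable per-layer scale in DyT is what keeps pre-activations in the near-linear regime. A quick numerical illustration (the scale values here are chosen purely for demonstration):

  ```python
  import numpy as np

  def tanh_grad(x):
      # d/dx tanh(x) = 1 - tanh(x)^2
      t = np.tanh(x)
      return 1.0 - t * t

  # With a small scale the gradient stays near 1 (effectively linear)...
  g_small = tanh_grad(0.1 * np.array([1.0, -1.0]))
  # ...but with a large scale the inputs saturate and the gradient vanishes.
  g_large = tanh_grad(5.0 * np.array([1.0, -1.0]))
  ```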
  • gdiamos 5 hours ago
    What are the practical implications of this?
    • gricardo99 5 hours ago
      From the abstract:

        By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
  • adamnemecek 5 hours ago
    It feels like the end goal of this is energy-based models, Yann LeCun's favorite ML approach.

    We at Traceoid http://traceoid.ai have identified a promising approach for scaling EBMs. Join the discord channel https://discord.com/invite/mr9TAhpyBW

    • randomNumber7 4 hours ago
      I'll give you a call when I've finished building my Tesla tower. That was also unnoticed by the engineering/science communities.