If true, this is a very nice incremental improvement. It doesn't look like it meaningfully improves the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
rryan 52 days ago [-]
RMSNorm is pretty insignificant in terms of the overall compute in a transformer though -- usually the reduction work can be fused with earlier or later operations.
londons_explore 52 days ago [-]
RMSNorm acts like a barrier: no compute in the next network layer can start before all compute in the previous layer is done.
When splitting networks across multiple GPUs, this means you must wait for the slowest node and the longest latency.
As soon as you can remove most of these barriers, compute over non-latency-guaranteed networks becomes more practical, as does non-homogeneous compute (i.e. mixing different GPU models).
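As a rough illustration of the barrier point, here is a minimal sketch (my own code, not the paper's; function names are illustrative): RMSNorm needs a reduction over the hidden dimension before any output element can be produced, while DyT is purely elementwise.

    import torch

    def rmsnorm(x, weight, eps=1e-6):
        # Reduction over the hidden dimension: every element of a row must be
        # available (and summed) before any output element can be written.
        return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps) * weight

    def dyt(x, alpha, weight, bias):
        # Purely elementwise: each output depends only on its own input element,
        # so it can be fused or streamed with no synchronization point.
        return weight * torch.tanh(alpha * x) + bias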
elcritch 52 days ago [-]
What are the other barriers in transformers? Or is the normalization layer the primary one?
woadwarrior01 52 days ago [-]
dot-product attention is the biggest barrier. This is why there are so many attempts to linearize it.
amitport 52 days ago [-]
Attempts that fail, that is... linearization is a bad idea. But plenty of other optimizations are done.
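For readers wondering what "linearizing" attention means here, a rough sketch (illustrative only, not from any particular implementation; the ELU+1 feature map follows the usual linear-attention recipe): softmax attention requires a row-wise reduction over all keys, while the linearized form pushes the query outside so the key/value statistics can be accumulated incrementally.

    import torch
    import torch.nn.functional as F

    def softmax_attention(q, k, v):
        # The softmax is a reduction over the full key dimension: each output row
        # needs every score in that row before it can be normalized.
        scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ v

    def linear_attention(q, k, v):
        # phi(q) @ (phi(k)^T v): the (d x d) key/value summary and the key sum can
        # be accumulated token by token, removing the softmax barrier.
        phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1
        kv = phi_k.transpose(-2, -1) @ v
        z = phi_q @ phi_k.sum(dim=-2, keepdim=True).transpose(-2, -1)
        return (phi_q @ kv) / (z + 1e-6)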
atgctg 52 days ago [-]
The paper's Table 7 shows DyT reducing overall LLaMA 7B inference time by 7.8% and training time by 8.2%. That is not insignificant.
Herring 51 days ago [-]
But LLM performance scales according to the log of compute, so yeah it’s pretty insignificant. I think we’ve reached a bit of a plateau.
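A rough back-of-envelope for that claim (the exponent below is made up purely for illustration; it is not from the paper):

    # If loss follows a power law L = A * C**(-alpha) in compute C, spending an
    # ~8% compute saving on extra training moves the loss only slightly.
    alpha = 0.05                      # hypothetical scaling exponent
    saving = 0.08                     # ~8% cheaper training, per Table 7
    rel_loss_change = 1 - (1 + saving) ** -alpha
    print(f"{rel_loss_change:.2%}")   # ~0.38% lower loss for the same budget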
kouteiheika 52 days ago [-]
Okay, I just tried this on my pet transformer training benchmark and the results are very disappointing; it converges much more slowly than just using RMSNorm.
It either needs some significant hyperparameter tuning (besides tweaking alpha, which doesn't seem to do much for me), or some fancier initialization (I tried both the PyTorch default and orthogonal, no difference), or maybe my scalar optimizer doesn't work on it (I have a custom optimizer for scalars which speeds up convergence vs Adam, but for DyT layers it seems to be no better than Adam), or maybe it only catches up after billions of tokens (which I don't have the budget to test).
kouteiheika 51 days ago [-]
Slight update: a fancier initialization of the DyT weights (instead of having them all be ones) seems to help a lot in my case (although it's still not as good as just using RMSNorm). Do something like this on the very first training step (`x` is the input to the layer):
    # RMSNorm output we want to approximate, computed in float32:
    y = x.to(torch.float32)
    y = y * torch.rsqrt(y.pow(2).mean(-1, keepdim=True) + 1e-6)
    # What DyT produces before its elementwise weight is applied:
    z = torch.tanh(self.alpha * x)
    # Per-feature scale mapping the tanh output onto the RMSNorm output,
    # averaged over the token dimension:
    scale = (y / (z + 1e-6)).mean(dim=-2).flatten()
    self.weight.detach().copy_(scale)
This basically tries to initialize the weights so that the output of DyT is closer to what RMSNorm would have outputted, and it seems to help.
kadushka 51 days ago [-]
Which model are you training and on what dataset?
kouteiheika 51 days ago [-]
It's a fully custom architecture, heavily inspired by the modded-nanogpt speedrun (https://github.com/KellerJordan/modded-nanogpt) but written fully from scratch and further tweaked/modified. I use it for experiments and as a testbed when developing my training harness (which I use for training other models too, and which receives all of my non-LLM-specific improvements, e.g. better-than-Adam optimizers, a custom GPU memory allocator, custom gradient accumulation that accumulates directly into the optimizers' state without using extra VRAM for gradients, etc.).
For the dataset I just use FineWeb-Edu.
kadushka 51 days ago [-]
Wow, thank you for the link to the code - I haven't seen it before - it contains a ton of useful tricks. Lots to learn from there.
joshlk 52 days ago [-]
When using low-precision formats like float8 you usually have to upcast the activations to BF16 before normalising. So the normalisation layers proportionally use more compute as you go to lower precision. Replacing these layers would help reduce the compute cost significantly.
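A minimal sketch of that upcast pattern (assuming a PyTorch build with float8 dtypes; names are illustrative, not from any library):

    import torch

    def rmsnorm_from_fp8(x_fp8, weight, eps=1e-6):
        # The squared-mean reduction isn't done in float8, so the activations are
        # upcast first -- this upcast + reduction is the cost that grows
        # proportionally as the rest of the model gets cheaper.
        x = x_fp8.to(torch.bfloat16)
        return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps) * weight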
qmatch 52 days ago [-]
Need to read the details, but removing the norm can be big. It’s always a pain to make sure that your network is normalized properly when trying new architectures. Likely there will still be other implications of the tanh, since the norm is sometimes solving a conditioning problem, but IMO more alternatives are welcome
blackbear_ 52 days ago [-]
And so vanishing gradients are not a thing anymore?
tsurba 52 days ago [-]
Proper initialization of layers keeps gradient magnitudes from vanishing/exploding in deep networks. If you make sure the output of each layer has mean 0, std 1, the gradients will be reasonable as well, for example.
I recommend e.g. the original ResNet paper and its follow-up from Kaiming He et al.
For a modern take on RNNs, read https://arxiv.org/abs/2303.06349 by DeepMind.
There, essentially, the point is that the largest eigenvalue (spectral radius) needs to be around 1, meaning repeated application of a linear transformation doesn't cause the activations to grow or shrink.
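A quick sketch of that effect (a toy ReLU MLP stack, not a transformer; only meant to show that the activation scale neither vanishes nor explodes with depth under He initialization):

    import torch

    torch.manual_seed(0)
    depth, width = 50, 512
    h = torch.randn(1024, width)

    for _ in range(depth):
        w = torch.empty(width, width)
        torch.nn.init.kaiming_normal_(w, nonlinearity='relu')  # He init, gain sqrt(2)
        h = torch.relu(h @ w.t())

    # The RMS of the activations stays around 1 after 50 layers; with something
    # like w.normal_(0, 0.01) instead, the same number collapses toward zero.
    print(h.pow(2).mean().sqrt())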
blackbear_ 52 days ago [-]
Sure, initialization helps, but are there also results about long-term training dynamics? Even the paper you suggested had to use some sort of normalization to keep things stable.
tripplyons 51 days ago [-]
I think ResNet pretty much solved vanishing gradients. As for exploding gradients, those are typically handled with good parameter initialization and normalization. The paper in question proposes an alternative to normalization.
imjonse 52 days ago [-]
Good question. That was an issue with tanh as the activation function, before residual connections and normalization layers. Tanh as a normalization, but with other activations and residuals present, is apparently OK.
tsurba 52 days ago [-]
Proper initialization is more important.
Batch norm and the others are important for faster convergence because they force the model to focus on creating second- and higher-order nonlinearities: a simple shift in mean/std is normalized out, so the gradient does not point in a direction that would only change those properties of the output distribution.
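That "normalized out" point can be checked directly: the output of LayerNorm (without its affine) is invariant to shifting or rescaling a row of the input, so the gradient has no component along the per-row shift direction. A small sketch:

    import torch
    import torch.nn.functional as F

    x = torch.randn(8, 16, requires_grad=True)
    y = F.layer_norm(x, (16,))

    # Shifting and rescaling the input leaves the normalized output unchanged...
    print(torch.allclose(F.layer_norm(3.0 * x.detach() + 5.0, (16,)), y, atol=1e-4))

    # ...so for any loss, the gradient w.r.t. x sums to ~0 along each row: there
    # is no gradient direction that merely shifts a row's mean.
    y.square().sum().backward()
    print(x.grad.sum(dim=-1))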
toxik 51 days ago [-]
Transformers learn residuals, as you can see in the figure. y = x + f(x).
Lerc 52 days ago [-]
Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?
Surely you would want to compare the output of the LayerNorm without the weight and bias to get an impression of their similarity.
I guess it doesn't matter if the final result works, but I feel like looking at the bit they are changing in isolation might provide better insight into what is happening.
lukah 52 days ago [-]
From their implementation it looks like they’re calculating tanh and then applying a weight and bias
Lerc 52 days ago [-]
Exactly, and that's what happens in LayerNorm too. So I figured the best basis for comparison would have been to leave that bit out when looking at their difference or similarity, because obviously the bits that have the same implementation will be the same.
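For concreteness, a minimal sketch of the comparison being discussed (my own code, not the paper's repo; the weight * tanh(alpha * x) + bias form follows the paper's description, and the 0.5 alpha init is only an illustrative default):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DyT(nn.Module):
        def __init__(self, dim, alpha0=0.5):
            super().__init__()
            self.alpha = nn.Parameter(torch.full((1,), alpha0))
            self.weight = nn.Parameter(torch.ones(dim))
            self.bias = nn.Parameter(torch.zeros(dim))

        def core(self, x):
            # The part that actually replaces the normalization.
            return torch.tanh(self.alpha * x)

        def forward(self, x):
            # Elementwise affine, same as LayerNorm's weight/bias.
            return self.core(x) * self.weight + self.bias

    # The apples-to-apples comparison: DyT's core vs. LayerNorm *without* its
    # affine, since the weight/bias part is identical in both layers.
    x = torch.randn(4, 128, 768)
    print((DyT(768).core(x) - F.layer_norm(x, (768,))).abs().mean())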
gdiamos 52 days ago [-]
What are the practical implications of this?
gricardo99 52 days ago [-]
From the abstract:
By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
gdiamos 51 days ago [-]
Sure, but why would one prefer tanh instead of normalization layers if they have the same accuracy?
I suppose normalization kernels have reductions in them, but how hard are reductions in 2025?
adamnemecek 52 days ago [-]
[flagged]
randomNumber7 52 days ago [-]
I'll give you a call when I've finished building my Tesla tower. That was also unnoticed by the engineering/science communities.
adamnemecek 52 days ago [-]
I don’t follow.
qoez 51 days ago [-]
Don't advertise blatantly like this on HN please
adamnemecek 51 days ago [-]
It was relevant.