Does a 1-billion-token context window mean the extra tokens in context are actually useful, or do we just get more low-quality tokens?
RugnirViking 6 minutes ago [-]
the article is almost entirely about this, yes.
Current approaches require fancy tricks to fit tokens into memory, and attention gets spread thinner as token counts grow. The new approach tries to keep everything in a single shared memory and process the tokens in parallel across multiple GPUs.
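The comment doesn't spell out the article's exact scheme, but the general idea of processing tokens in parallel while the keys/values sit in one shared pool can be sketched like this (pure NumPy stands in for multiple GPUs; all function names here are illustrative, not from the article):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Standard scaled dot-product attention over the whole sequence
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

def sharded_attention(Q, K, V, n_devices=4):
    # K and V stay in one shared pool; the query rows are split into
    # shards, each processed independently (as if on its own GPU).
    shards = np.array_split(Q, n_devices)
    outputs = [full_attention(q_shard, K, V) for q_shard in shards]
    return np.vstack(outputs)

rng = np.random.default_rng(0)
Q = rng.normal(size=(16, 8))
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))

# Splitting the work this way changes nothing about the result,
# only how it is computed.
assert np.allclose(sharded_attention(Q, K, V), full_attention(Q, K, V))
```

Because each query shard only needs read access to the shared K/V pool, the shards never have to communicate with each other, which is what makes the parallelism cheap.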
schnitzelstoat 33 minutes ago [-]
Is such a large context window even desirable? It seems like it would consume an awful lot of tokens and, unless one was very careful to curate the context, could even result in worse performance.
withinboredom 27 minutes ago [-]
For larger codebases ... maybe it will cut down on "let me create a random number wrapper for the 15th time" type problems.