We are always looking for representations that can capture the meaning of information. However, most representations that compress information for retrieval are lossy. For example, embeddings are a form of lossy compression. Similar to the no-free-lunch theorem, no lossy compression method is universally better than another, since downstream tasks may depend on the specific information that gets lost. Therefore, the question is not which representation is perfect, but which representation is better aligned with an AI system. Because AI evolves rapidly, it is difficult to predict the limitations of the next generation of LLMs. For this reason, a good representation for information retrieval in future LLM systems should be closer to how humans represent knowledge.
When a human tries to retrieve information in a library, they first locate a book by category or by using a metadata keyword search. Then, they open the table of contents (ToC) to find the relevant section, and repeat this process as needed. Therefore, I believe the future of AI retrieval systems should mimic this process. The recently popular PageIndex approach (see this discussion: https://news.ycombinator.com/item?id=45036944) also belongs to this category, as it generates a table-of-contents–like tree for LLMs to reason over. Again, it is a form of lossy compression, so it has its own limitations worth discussing. However, this approach is the closest to how humans perform retrieval.
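That library-style loop (category, then ToC, then section, repeated) can be sketched as a tree walk. Everything below is an illustrative assumption, not any particular library's API: the Node structure and the toy word-overlap relevance function stand in for the judgment an LLM would make at each level.

```python
from dataclasses import dataclass, field

# Minimal sketch of ToC-style hierarchical retrieval.
@dataclass
class Node:
    title: str
    text: str = ""
    children: list = field(default_factory=list)

def retrieve(node, query, relevance):
    """Descend the ToC tree, picking the most relevant child at each level."""
    if not node.children:                      # leaf section: return its text
        return node.text
    best = max(node.children, key=lambda c: relevance(query, c.title))
    return retrieve(best, query, relevance)

# Toy stand-in for the relevance judgment an LLM would make
def relevance(query, title):
    return len(set(query.lower().split()) & set(title.lower().split()))

library = Node("Library", children=[
    Node("Cooking", children=[Node("Bread", "Knead, proof, bake.")]),
    Node("Physics", children=[Node("Optics", "Snell's law relates the angles.")]),
])

print(retrieve(library, "physics optics", relevance))
# → Snell's law relates the angles.
```

With an LLM as the relevance function, each step reads only titles, so the full text is never loaded until a leaf is reached.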
mingtianzhang 10 hours ago [-]
A follow-up question is: what is the lossless way to represent knowledge? That would mean reading all the knowledge at once, which is the most accurate but also the least efficient method. Therefore, for different applications, we need to find an appropriate trade-off between accuracy and efficiency. In systems like real-time recommendation, we prefer efficiency over accuracy, so vector-based search is suitable. In domain-specific QA, we prefer accuracy over efficiency, so a table-of-contents–based search may be the better choice.
jandrewrogers 4 hours ago [-]
This is the subject of the Hutter Prize and the algorithmic information theory that underpins it. There are some hard algorithm and data structure problems underlying lossless approximations of general learning even for relatively closed domains.
As an example, current AI is famously very poor at learning relationships between non-scalar types, like complex geometry, which humans learn with ease. That isn’t too surprising because the same representation problem exists in non-AI computer science.
mingtianzhang 10 hours ago [-]
It is also worth mentioning that compression and generative AI are two sides of the same coin. I highly recommend the book "Information Theory, Inference, and Learning Algorithms" by David MacKay, which explores these deep connections.
simne 10 hours ago [-]
It looks like people use a hybrid approach: all these ToCs, metadata, etc. are essentially semantic (executed on a neural network), but only the recognition of text and characters is neural.
mingtianzhang 10 hours ago [-]
Yeah, I guess another point is that with ToC and metadata, the representation is transparent: people know what information is lost. Vector-based representation, on the other hand, is a black box. Explainability and transparency are also important considerations in production-level AI systems.
quadhome 7 hours ago [-]
Humans only retrieve information in a library in that way due to the past limitations on retrieval and processing. The invention of technologies like tables of contents or even the Dewey Decimal Classification are strongly constrained by fundamental technologies like ... the alphabet! And remember, not all languages are alphabetic. And embeddings aren't alphabetic and don't share the same constraints.
I recommend Judith Flanders' "A Place for Everything" as both a history and a survey of the constraints on sorting and organising information in an alphabetic language. It's also a fun read!
tl;dr why would we want an LLM to do something as inefficiently as a human?
anothernewdude 9 hours ago [-]
> Similar to the no-free-lunch theorem, no lossy compression method is universally better than another,
The no-free-lunch theorem only works because it assumes you care about every single value of noise. Nobody does. There's a free lunch to be had, and it's order. You don't care about a single-pixel difference between two cat pictures; NFL does.
Lossy compression is precisely where NFL does not apply.
mingtianzhang 9 hours ago [-]
It's just similar in style to the theorem. I'm trying to emphasise that no lossy representation is universally (i.e., for all downstream tasks) better than another.
lunarmony 16 hours ago [-]
Researchers have discussed the limitations of vector-based retrieval from a rank perspective, in various forms, for a few years. It's further been shown that better alternatives exist; some low-rank approaches can theoretically approximate arbitrary high-rank distributions while permitting MIPS-level efficient inference (see e.g. Retrieval with Learned Similarities, https://arxiv.org/abs/2407.15462). Such solutions are already being used in production at Meta and at LinkedIn.
yorwba 13 hours ago [-]
I don't think Mixture of Logits from the paper you link circumvents the theoretical limitations pointed out here, since their dataset size mostly stays well below the limit.
In the end they still rely on Maximum Inner Product Search, just with several lookups for smaller partitions of the full embedding. The largest dataset is Books, where this paper suggests you'd need more than 512 embedding dimensions, and MoL with 256-dimensional embeddings split into 8 parts of 32 each has an abysmal hit rate.
So that's hardly a demonstration that arbitrary high-rank distributions can be approximated well. MoL seems to approximate it better than other approaches, but all of them are clearly hampered by the small embedding size.
Straw 19 hours ago [-]
In the theoretical section, they extrapolate assuming a polynomial from 40 to thousands of dimensions. Why do they trust a polynomial fit to extrapolate two orders of magnitude? Why do we even think it's polynomial instead of exponential in the first place? Most things like this increase exponentially with dimension.
In fact, I think we can do it in d=2k dimensions, if we're willing to have arbitrarily precise query vectors.
Embed our points as (sin(theta), cos(theta), sin(2 x theta), cos(2 x theta)..., sin(k x theta), cos(k x theta)), with theta uniformly spaced around the circle, and we should be able to select any k of them.
Using a few more dimensions we can then ease the precision requirements on the query.
namibj 18 hours ago [-]
In practice you're actually hitting further problems because you don't have those synthetic top-k tasks but rather open-domain documents and queries to support.
And if you hope to get better than "just" having the top-k correct and instead get some sort of inclusion/exclusion boundary between what should be matched and what should not be matched, you'll hit the same bounds as apply to context length limitations for kq dimensionality in a transformer's attention layers, as I mentioned about 6 weeks ago: https://news.ycombinator.com/item?id=44570650
yorwba 13 hours ago [-]
I'm not following your construction. In the k=2 case, how do you construct your 4-dimensional query vector so that the dot product is maximized for two different angles theta and phi, but lower for any other arbitrary angle?
Straw 12 hours ago [-]
Viewing our space as two complex points instead of four real:
Let H(z) = (z - exp(i theta)) (z - exp(i phi))
H(z) is zero only for our two points. We'll now take the norm^2, to get something that's minimized on only our two chosen points.
|H(exp(i x))|^2 = H(exp(i x)) x conj(H(exp(i x))) = sum from -2 to 2 of c_j x exp(i j x)
For some c_j (just multiply it out). Note that this is just a Fourier series with highest harmonic 2 (or k in general), so in fact our c_j tell us the four coefficients to use.
For a simpler, but numerically nastier construction, instead use (t, t^2, t^3, ...), the real moment curve. Then given k points, take the degree k poly with those zeros. Square it, negate, and we get a degree 2k polynomial that's maximized only on our selected k points. The coefficients of this poly are the coefficients of our query vector, as once expanded onto the moment curve we can evaluate polynomials by dot product with the coefficients.
The complex version is exactly the same idea but with z^k and z ranging on the unit circle instead.
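A numerical sanity check of the construction (a sketch in NumPy; the dataset size, k, and the chosen indices are arbitrary): embed N uniformly spaced angles in d = 2k dimensions, build H from the chosen roots, and read the query off the Fourier coefficients of |H|^2. The dot product then equals c_0 - |H(exp(i theta))|^2, which is maximized exactly at the chosen points.

```python
import numpy as np

k = 3                                          # number of points to select
N = 50                                         # dataset size
thetas = 2 * np.pi * np.arange(N) / N          # uniformly spaced angles

# d = 2k embedding: (sin t, cos t, sin 2t, cos 2t, ..., sin kt, cos kt)
def embed(t):
    return np.concatenate([[np.sin(j * t), np.cos(j * t)] for j in range(1, k + 1)])

X = np.array([embed(t) for t in thetas])       # N x 2k

chosen = [4, 17, 31]                           # arbitrary subset of size k
roots = np.exp(1j * thetas[chosen])
a = np.poly(roots)[::-1]                       # a[m] = coeff of z^m in H(z)

# Fourier coefficients of |H(e^{ix})|^2: c_j = sum_m a_m * conj(a_{m-j})
c = [sum(a[m] * np.conj(a[m - j]) for m in range(j, k + 1)) for j in range(k + 1)]

# Query q such that q . embed(t) = c_0 - |H(e^{it})|^2, maximized iff H = 0
q = np.zeros(2 * k)
for j in range(1, k + 1):
    q[2 * (j - 1)] = 2 * c[j].imag             # sin(j t) slot
    q[2 * (j - 1) + 1] = -2 * c[j].real        # cos(j t) slot

scores = X @ q
topk = set(np.argsort(scores)[-k:])
print(sorted(topk))                            # → [4, 17, 31]
```

The caveat from upthread applies: the query coefficients grow with k, which is exactly the "arbitrarily precise query vectors" requirement.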
gdiamos 22 hours ago [-]
Their idea is that the capacity of even 4096-wide vectors limits their performance.
Sparse models like BM25 have a huge dimension and thus don’t suffer from this limit, but they don’t capture semantics and can’t follow instructions.
It seems like the holy grail is a sparse semantic model. I wonder how SPLADE would do?
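A toy sketch of why BM25 escapes the dimension limit: its "vectors" live in a vocabulary-sized space, but each document only stores its non-zero terms, so scoring is a sparse dot product. The corpus and the k1, b values here are arbitrary illustrations.

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "the dog chased the cat",
]
k1, b = 1.5, 0.75                      # standard BM25 parameters (toy choice)
tokenized = [d.split() for d in docs]
N = len(docs)
avgdl = sum(len(t) for t in tokenized) / N
df = Counter(term for t in tokenized for term in set(t))

def bm25_vector(tokens):
    """Sparse document vector: one weight per term that actually occurs."""
    tf = Counter(tokens)
    return {
        term: (math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
               * f * (k1 + 1) / (f + k1 * (1 - b + b * len(tokens) / avgdl)))
        for term, f in tf.items()
    }

vectors = [bm25_vector(t) for t in tokenized]

def score(query, vec):
    # Dot product in a vocabulary-sized space; only non-zero dims are stored
    return sum(vec.get(term, 0.0) for term in query.split())

print(max(range(N), key=lambda i: score("cat mat", vectors[i])))  # → 0
```

The effective dimensionality is the vocabulary size, which is why the paper's capacity bound doesn't bite, but as noted above, exact-token dimensions carry no semantics ("cats" and "cat" are unrelated axes here).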
tkfoss 20 hours ago [-]
Wouldn't the holy grail then be parallel channels for candidate generation?
We already have "sparse" embeddings. Google's Matryoshka embedding schema can scale embeddings from ~150 dimensions to >3k, and it's the same embedding with layers of representational meaning. Imagine decomposing an embedding along principal components, then streaming the embedding vectors in order of their eigenvalues; that's kind of the idea.
miven 4 hours ago [-]
Correct me if I'm misinterpreting something in your argument, but as I see it, Matryoshka embeddings just sort the vector bases of the output space roughly by their importance for the task, PCA-style. So when you truncate your 4096-dimensional embedding down to, say, 256 dimensions, those are the exact same 256 vector bases doing the core job of encoding important information for each sample. You're back to dense retrieval on 256-dimensional vectors, just with all the minor miscellaneous slack, useful for a very low fraction of queries, trimmed away.
True sparsity would imply keeping different important vector bases for different documents, but MRL doesn't magically shuffle vector bases around depending on what your document contains. Were that the case, cosine similarity between the resulting document embeddings would simply make no sense as a similarity measure.
jxmorris12 20 hours ago [-]
Matryoshka embeddings are not sparse. And SPLADE can scale to tens or hundreds of thousands of dimensions.
CuriouslyC 19 hours ago [-]
If you consider the actual latent space to be the full higher-dimensional representation, and you take the first principal component, the other vectors are zero. Pretty sparse. No, it's not a linked-list sparse matrix. Don't be a pedant.
yorwba 13 hours ago [-]
When you truncate Matryoshka embeddings, you get the storage benefits of low-dimensional vectors with the limited expressiveness of low-dimensional vectors. Usually, what people look for in sparse vectors is to combine the storage benefits of low-dimensional vectors with the expressiveness of high-dimensional vectors. For that, you need the non-zero dimensions to be different for different vectors.
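One way to see the difference (toy numbers, random data): both schemes below store d values per document, but truncation fixes the same d coordinates for everyone, while sparsity lets each document keep its own best coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 4096, 64                    # full vs stored dimensionality (toy numbers)
docs = rng.normal(size=(100, D))

# Truncation (Matryoshka-style): every document keeps the SAME leading d dims
trunc = docs[:, :d]

# Sparsity: each document keeps its OWN d largest-magnitude dims
def sparsify(v, d):
    keep = np.argsort(np.abs(v))[-d:]          # per-document support
    out = np.zeros_like(v)
    out[keep] = v[keep]
    return out

sparse = np.array([sparsify(v, d) for v in docs])

# Fraction of the full vectors' energy each representation retains
trunc_frac = np.linalg.norm(trunc) ** 2 / np.linalg.norm(docs) ** 2
sparse_frac = np.linalg.norm(sparse) ** 2 / np.linalg.norm(docs) ** 2
print(trunc_frac, sparse_frac)                 # sparse keeps far more energy
```

(In practice the sparse version also has to store d indices per document, so it isn't quite free; the point is only that the supports differ per document, which is what truncated MRL cannot give you.)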
zwaps 13 hours ago [-]
No one means Matryoshka embeddings when they talk about sparse embeddings. This is not pedantic.
cap11235 6 hours ago [-]
Why?
CuriouslyC 13 hours ago [-]
No one means wolves when they talk about dogs, obviously wolves and dogs are TOTALLY different things.
3abiton 18 hours ago [-]
Doesn't PCA compress the embeddings in this case, i.e. reduce the accuracy? It's similar to quantization.
CuriouslyC 17 hours ago [-]
Component analysis doesn't fundamentally reduce information; it just rotates it into a more informative basis. People usually drop the components with small eigenvalues to do dimensionality reduction, but you don't have to do that.
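A quick check of that claim (random data, nothing model-specific): an orthogonal change of basis is exactly invertible, and information is only lost once low-eigenvalue components are dropped.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
Xc = X - X.mean(axis=0)

# PCA basis from the covariance eigendecomposition
cov = Xc.T @ Xc / len(Xc)
eigvals, V = np.linalg.eigh(cov)      # V is orthogonal; eigenvalues ascending
Z = Xc @ V                            # rotate into the PCA basis

# The rotation alone is lossless: we can rotate back exactly
assert np.allclose(Z @ V.T, Xc)

# Information is lost only if we zero out low-eigenvalue components
Z_reduced = Z.copy()
Z_reduced[:, :4] = 0                  # drop the 4 smallest-variance directions
err = np.linalg.norm(Z_reduced @ V.T - Xc)
print(err > 0)                        # True: truncation, not rotation, loses info
```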
ArnavAgrawal03 20 hours ago [-]
We used multi-vector models at Morphik, and I can confirm their real-world effectiveness, especially compared with dense-vector retrieval.
bjornsing 11 hours ago [-]
Can you share more about the multi-vector approach? Is it open source?
codingjaguar 20 hours ago [-]
Curious, is that ColBERT-like?
zwaps 13 hours ago [-]
How does it look with ColBert style late interaction embeddings?
yorwba 13 hours ago [-]
The dashed brown line in figures 3, 4, and 6 is labeled GTE-ModernColBERT. So better than all single-vector models, but worse than BM25.
simne 6 hours ago [-]
Could somebody suggest a good introduction to simulating complex behavior with neural networks?
I mean, I hear about experiments running Turing machine simulations on NNs, or even simulating some physics on NNs, but I haven't seen any good survey of these topics, and they could be very interesting.