NHacker Next
Lessons from building an AI data analyst (pedronasc.com)
PeterStuer 3 hours ago [-]
I gradually came to the conclusion that RAG is just a new term for ye olde Knowledge Management techniques.
pil0u 11 hours ago [-]
Maybe I haven't found the right tool yet, but every time I see a product trying to solve data analysis with AI, the first example often deals with aggregated revenue over time or something similar.

This can be solved by a student after 3 days of learning SQL from scratch.

The article, while technical, remains pretty vague about implementation and what real, business problem they managed to solve with such a framework.

Of course building on top of a semantic layer is good for LLMs, but that assumes 1. the semantic layer exists and 2. it is not a freaking mess. While tools like dbt have helped with 1, I have yet to see a clean, well-documented, lineage-perfect semantic layer.

djoldman 8 hours ago [-]
Agreed.

Data curation is barely on the radar of most non-tech industries. Even in tech, it's rare to have any metadata.

This is a huge blocker to many efforts.

Someone somewhere has to go through every table and field and document where it came from, when, and what it actually means.

Very very few places do this.

"Oh yeah I go to gold.inventory. I think it updates every night. Columns? Should be pretty intuitive, just look at the names."
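The "go through every table and field" step above can at least be bootstrapped mechanically: a tool can enumerate tables and columns and leave placeholders for the parts only a human knows. A minimal sketch using the stdlib sqlite3 module (the inventory table and the TODO fields here are hypothetical, not a real product's schema):

```python
# Sketch: auto-generate a data-dictionary stub for a human to fill in.
# Uses only stdlib sqlite3; the example table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER, updated_at TEXT)")

def data_dictionary_stub(conn):
    """List every table/column with TODO placeholders for meaning and lineage."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    rows = []
    for table in tables:
        # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
        for _, name, col_type, *_ in conn.execute(f"PRAGMA table_info({table})"):
            rows.append({
                "table": table,
                "column": name,
                "type": col_type,
                "meaning": "TODO",   # the part only a human can write
                "source": "TODO",    # where the data came from
                "refresh": "TODO",   # when it updates
            })
    return rows

for row in data_dictionary_stub(conn):
    print(row["table"], row["column"], row["type"])
```

The stub does not solve curation; it just turns "nobody knows what gold.inventory means" into an explicit list of TODOs someone can be assigned.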

mrklol 2 hours ago [-]
But IF there's documentation, an LLM could help and find things faster than a "new" human who isn't familiar with all the tables. I would rather see it as a helper, maybe even more so with good architecture and docs.
mrtimo 11 hours ago [-]
Very cool to see Malloy mentioned here. Great stuff. There is an MCP server built into Malloy Publisher[1]. Perhaps useful to the author or others trying to do something similar to what the author describes. Directions on how to use the MCP server are here [2]. [1] https://github.com/malloydata/publisher [2] https://github.com/malloydata/publisher/blob/main/docs/ai-ag...
pedromnasc 11 hours ago [-]
One big problem right now is that LLMs are not great at writing Malloy, so it is important to have an intermediate DSL. In the future, as language models evolve or someone creates a fine-tuned model that can write Malloy well, we will be able to have more autonomous agents.
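The intermediate-DSL idea above can be sketched as: the LLM emits a small, constrained JSON spec that is validated against the semantic layer and then compiled into the target query language. Everything below is an illustrative assumption (field names, allowed measures, and the Malloy-ish output syntax are all hypothetical, not the author's actual DSL):

```python
# Sketch of an intermediate DSL between an LLM and a semantic layer.
# The LLM outputs JSON; we validate it and compile to a query string.
import json

# Hypothetical semantic layer: the only names the LLM may use.
ALLOWED_MEASURES = {"revenue", "order_count"}
ALLOWED_DIMENSIONS = {"region", "month"}

def compile_spec(raw: str) -> str:
    """Validate an LLM-produced JSON spec and compile it to a query."""
    spec = json.loads(raw)
    measures = spec.get("measures", [])
    dims = spec.get("group_by", [])
    # Reject hallucinated fields instead of passing them downstream.
    if not set(measures) <= ALLOWED_MEASURES:
        raise ValueError(f"unknown measure in {measures}")
    if not set(dims) <= ALLOWED_DIMENSIONS:
        raise ValueError(f"unknown dimension in {dims}")
    # Emit a Malloy-flavored query; real Malloy syntax would differ.
    return (f"run: orders -> {{ group_by: {', '.join(dims)} "
            f"aggregate: {', '.join(measures)} }}")

print(compile_spec('{"measures": ["revenue"], "group_by": ["region"]}'))
```

The point of the extra layer is that validation errors can be fed back to the model, so hallucinated column names fail fast instead of producing a silently wrong query.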
loganfrederick 3 days ago [-]
The "Short Story" section definitely matches my experience at most companies, startups and bigger non-tech companies alike: They already have more data than they're aware of and know what to do with, and understanding what they have is the starting point before most analysis can be done.

Glad I read the post as I hadn't heard of Malloy before. Excuse me if I missed the answer to this, but: How much do you as Findly/Conversion Pattern implement the Semantic Layer on behalf of your users (and if so, I assume you have some process for auto-generating the Malloy models), or do your users have to do something to input the semantics themselves?

pedromnasc 3 days ago [-]
> The "Short Story" section definitely matches my experience at most companies, startups and bigger non-tech companies alike: They already have more data than they're aware of and know what to do with, and understanding what they have is the starting point before most analysis can be done.

Exactly: most of them are worried about the data they don't have, while in practice they already have plenty to generate good insights.

> Glad I read the post as I hadn't heard of Malloy before. Excuse me if I missed the answer to this, but: How much do you as Findly/Conversion Pattern implement the Semantic Layer on behalf of your users (and if so, I assume you have some process for auto-generating the Malloy models), or do your users have to do something to input the semantics themselves?

We do have an automatic semantic-layer generation framework that works as a great starting point, but in the generic case you still have to manually edit and improve it based on the customer's internal context. Users can edit it themselves in our UI too, but it usually requires some level of help from us.

We also have a vertical product for commodity trading and shipping: https://www.darlinganalytics.ai/ -> in that case the semantic layer is much better defined, which makes setup way easier.

attogram 3 days ago [-]
Great TL;DR section. Context is indeed the product.
blef 11 hours ago [-]
I would also add that context and tools are the product. It is super important to tune the tools correctly, and details matter (OK, you can also argue that tools are context in some way).
pedromnasc 11 hours ago [-]
Exactly. The main thing is that it is easy to underestimate the impact of context and proper tools. Narrowing down the search space by adding inductive biases into the system not only makes the multi-agent system more correct, but also faster.
pedromnasc 3 days ago [-]
Hi all,

I wrote a post on some lessons from building an AI data analyst. The gap from a nice demo to a real production system is big -> with a lot of yet-to-be-solved challenges.

Would love to share ideas with other builders in the space, and I'm keen to learn more about it.
