Maybe not obvious to users to collapse the panel.
Follow-up: how are you handling the actual calls to the LLMs?
eugenegusarov 4 days ago [-]
Is this a full screenshot of the page? You can not only collapse the panel, you can also resize it however you want. Just drag the top edge of the panel.
In terms of calls to LLMs: I don't use any frameworks or LLM proxies like OpenRouter. Instead, I make the calls directly to the LLM providers through a thin proxy endpoint I created in Supabase.
One of the problems I had with other tools was the difficulty of understanding the actual responses those playgrounds returned, especially when it came to error responses. My guess is that they are either built on proxy providers like OpenRouter, which handle and interpret errors internally before returning a response to the user, or on frameworks like LangChain with their abstraction hell.
In our case at Yola, it was crucial to have a playground that provided this raw kind of experience, which is what I've built in.
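A hypothetical sketch of that "thin proxy" idea (my own naming and provider URLs, not the author's actual Supabase code): the browser sends `{ provider, model, messages }`, the server attaches its own API key and forwards the body to the provider as-is, so raw responses, including raw error payloads, reach the client untouched.

```typescript
// Sketch of a thin LLM proxy: keys live only in server-side env vars,
// never in the JS bundle shipped to the browser.

type ChatMessage = { role: string; content: string };

type ChatRequest = {
  provider: "openai" | "anthropic";
  model: string;
  messages: ChatMessage[];
};

type UpstreamRequest = { url: string; headers: Record<string, string>; body: string };

// Build the request to forward upstream; the secret key is attached here,
// on the server, and the body is passed through without interpretation.
function buildUpstreamRequest(req: ChatRequest, apiKey: string): UpstreamRequest {
  if (req.provider === "openai") {
    return {
      url: "https://api.openai.com/v1/chat/completions",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: req.model, messages: req.messages }),
    };
  }
  // Anthropic uses a different auth header and requires max_tokens.
  return {
    url: "https://api.anthropic.com/v1/messages",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: req.model, max_tokens: 1024, messages: req.messages }),
  };
}

// The actual endpoint would then do roughly:
//   const upstream = buildUpstreamRequest(await req.json(), apiKeyFromEnv);
//   return fetch(upstream.url, { method: "POST", headers: upstream.headers, body: upstream.body });
```

Because the proxy only forwards, a 4xx/5xx from the provider comes back with its original error body, which is exactly the "raw experience" described above.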
jaredsohn 5 days ago [-]
Some feedback when I tried to share:
1. Think it should prepopulate the name like various AI apps do.
2. Got an error:
"Unexpected Application Error! Cannot read properties of null (reading 'slice') TypeError: Cannot read properties of null (reading 'slice') at Hv (https://langfa.st/main.1510e80706059046a306.js:2:11907588) at hi (https://langfa.st/main.1510e80706059046a306.js:2:10922009) at Xa (https://langfa.st/main.1510e80706059046a306.js:2:10941715) at fs (https://langfa.st/main.1510e80706059046a306.js:2:10952350) at $c (https://langfa.st/main.1510e80706059046a306.js:2:10997432) at Gc (https://langfa.st/main.1510e80706059046a306.js:2:10997360) at Zc (https://langfa.st/main.1510e80706059046a306.js:2:10997202) at Nc (https://langfa.st/main.1510e80706059046a306.js:2:10993991) at yd (https://langfa.st/main.1510e80706059046a306.js:2:11006790) at Cd (https://langfa.st/main.1510e80706059046a306.js:2:11005523) Hey developer
You can provide a way better UX than this when your app throws errors by providing your own ErrorBoundary or errorElement prop on your route."
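For reference, the fix that error page hints at is a small piece of route configuration. A sketch, assuming react-router-dom v6.4+ (the `ErrorFallback`/`App` names are invented for illustration):

```typescript
// An errorElement turns render-time crashes into a friendly fallback
// instead of react-router's default "Unexpected Application Error!" page.
import { createElement } from "react";
import { createBrowserRouter, useRouteError } from "react-router-dom";

// Stand-in for the real root component.
const App = () => createElement("div", null, "app");

function ErrorFallback() {
  const error = useRouteError() as Error;
  // Show something humane, and ideally report the error to a tracker here.
  return createElement("p", null, `Something went wrong: ${error.message}`);
}

const router = createBrowserRouter([
  {
    path: "/",
    element: createElement(App),
    // Catches errors thrown while rendering, loading, or acting on this route.
    errorElement: createElement(ErrorFallback),
  },
]);
```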
eugenegusarov 4 days ago [-]
For sure, man. This is absolutely unexpected. I will check what went wrong and fix the issue.
Cheer2171 5 days ago [-]
Is this open source? Is it local browser API calls, or routing through your server?
eugenegusarov 4 days ago [-]
It's not open source yet. Do you think it should be?
API calls are routed through a thin proxy on my side; that's how you get access to all the models with my API keys. I definitely wouldn't want to ship those keys in the JS bundle in the browser (:
owebmaster 4 days ago [-]
> It's not OpenSource yet. Do you think it should be?
Only if you want to spend more time managing an open source project than building a real world project. It is not easy and it can be a big distraction.
eugenegusarov 4 days ago [-]
I'd definitely like to avoid that. It's just me for now.
alansammarone 4 days ago [-]
Very cool, congrats!
Just one minor suggestion: it seems that the responses are not saved anywhere? Even after signup, opening a new tab or just navigating within the app makes the responses disappear. If they're really not being stored, it might be worth considering storing them; if they are, maybe the UX could make that more obvious (I couldn't find it). In any case, very useful project!
eugenegusarov 4 days ago [-]
You're absolutely right. Currently they are not stored. This is one of the things on my short list.
grandimam 5 days ago [-]
> Then came the pricing. The last quote I got for one of the tools on the market was $6,000/year for a team of 16 people in a use-it-or-lose-it way. For a tool we use maybe 2–3 times per sprint.
What tool was this?
eugenegusarov 4 days ago [-]
Get to a sales call with Velum, Basalt and others to find out.
From the BAML team, so it uses the BAML syntax (open source).
eugenegusarov 4 days ago [-]
As far as I understand, it's a much more robust syntax that allows complex logic?
redhale 3 days ago [-]
BAML is a language that gives you jinja-like templating. But it also provides structured output parsing and bindings in Python and TypeScript that allow you to execute prompts as typed functions in your code. They also have a VS Code "playground" extension that provides a way to quickly iterate on prompts. There's a bunch of other good stuff too, but these are the main reasons we use it.
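For readers who haven't seen it, a minimal sketch of what that looks like (my own example written against the BAML docs; the `Resume`/`ExtractResume` names are invented):

```baml
// A typed prompt function: jinja-style templating in the prompt body,
// and the model's output is parsed into the Resume class.
class Resume {
  name string
  skills string[]
}

function ExtractResume(resume_text: string) -> Resume {
  client "openai/gpt-4o"
  prompt #"
    Extract the candidate's details from this resume:

    {{ resume_text }}

    {{ ctx.output_format }}
  "#
}
```

The generated bindings then expose `ExtractResume` as a typed function from Python or TypeScript, which is the "execute prompts as typed functions" part described above.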
piterrro 5 days ago [-]
Nice tool! I'm working on something similar, but focused on repeatability and testing across multiple models/test data points.
eugenegusarov 4 days ago [-]
Do you have a link? I'd like to see it.
Any specific feedback so far?