This is really good! It would be really cool to somehow get human designs in the mix to see how the models compare. I bet there are curated design datasets with descriptions that you could pass to each of the models and then run voting as a "bonus" question (comparing the human and AI generated versions) after the normal genAI voting round.
grace77 11 days ago [-]
wow, this is a super interesting idea, and the team loves it! we'll fast-follow and post an update here when we add it, thanks for the suggestion!
debesyla 11 days ago [-]
This would be extra interesting for unique designs - something more experimental and new. As for now, even when you ask AI to break all the rules, it still outputs standard BS.
muskmusk 11 days ago [-]
This is a surprisingly good idea. The model vs model is fun, but not really that useful.
But this could be a legitimate way to design apps in general if you could tell the models what you liked and didn't like.
grace77 11 days ago [-]
yes! that's the hope: /play is our first attempt at building out the utility side. We'd love your feedback and will work hard to ship it!
a2128 11 days ago [-]
I tried the vote and both results always suck; there's no option to say neither is a winner. Also, from the network tab it looks like you're sending 4 (or 5?) requests but only displaying the first two that respond, which biases things toward the small models that respond more quickly and usually means you're shown two bad results.
grace77 11 days ago [-]
Yes, great point. We originally waited for all model responses and randomized the vote order, but that made for a very bad user experience: some models, especially open-source ones, took over 4 minutes to respond, leading to a high voter drop-off rate.
To preserve the voter experience without introducing bias, our current approach waits for the slowest model within each binary comparison — so even if one model is faster, we don’t display until both are ready. You're right that this does introduce some bias for the two smallest models, and we'd love to hear suggestions for how to make this better!
As for the 5th request: we actually kick off one reserve model alongside the four randomly selected for the tournament. This backup isn’t shown unless one of the four fails — it’s not the fastest or lowest-latency model, just a randomly selected fallback to keep the system robust without skewing results.
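Roughly, the gating works like this (a simplified TypeScript sketch, not our production code; generate() and showPair() are hypothetical stand-ins):

    type Result = { model: string; html: string };

    declare function generate(model: string, prompt: string): Promise<Result>;
    declare function showPair(left: Result, right: Result): void;

    async function runRound(models: string[], reserve: string, prompt: string) {
      // Kick off all four tournament models plus the reserve immediately.
      const inflight = new Map<string, Promise<Result>>(
        [...models, reserve].map((m) => [m, generate(m, prompt)] as [string, Promise<Result>])
      );

      for (const [a, b] of [[models[0], models[1]], [models[2], models[3]]]) {
        // Gate on BOTH models in the pair: nothing is shown until the
        // slower one finishes, which removes the fastest-first bias
        // within a pair.
        const [ra, rb] = await Promise.allSettled([inflight.get(a)!, inflight.get(b)!]);
        // If a model errors out, substitute the reserve instead of dropping the pair.
        const left = ra.status === "fulfilled" ? ra.value : await inflight.get(reserve)!;
        const right = rb.status === "fulfilled" ? rb.value : await inflight.get(reserve)!;
        showPair(left, right);
      }
    }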
ethan_smith 11 days ago [-]
Adding a "neither is good" option would improve data quality by preventing forced choices between two poor designs.
grxxxce 11 days ago [-]
this is a great note — will be sure to add!
yamatokaneko 8 days ago [-]
I never thought comparing different models could be this fun!
Generating a new image is great, but it would be even better if I could see multiple images from different models in the /feed, just to explore how other prompts look without having to generate and wait.
jjani 11 days ago [-]
How about adding "mobile"? A lot of the time models tend to default to designs that don't make sense on mobile, even when instructed to design it as such.
anonzzzies 10 days ago [-]
Really? When I use a 'mobile-first design' system prompt it works perfectly, 100 times out of 100. What sort of things are you trying?
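For reference, I just prepend it as a system message, roughly like this (an illustrative sketch using the openai Node SDK; the model name and prompt text are placeholders):

    import OpenAI from "openai";

    const client = new OpenAI();

    async function designMobileFirst(userPrompt: string) {
      const res = await client.chat.completions.create({
        model: "gpt-4o", // placeholder; any chat model works
        messages: [
          // The mobile-first instruction goes in the system message.
          { role: "system", content: "mobile-first design" },
          { role: "user", content: userPrompt },
        ],
      });
      return res.choices[0].message.content;
    }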
jjani 10 days ago [-]
The designs are passable for a mobile version of a simple website, but really sub-standard compared to the average app on the Play/App Store, whether native (Swift/Kotlin) or hybrid (Flutter/RN). In B2B SaaS you can get away with the 5000th shadcn UI, not so much for B2C mobile. The days that stock Material UI actually saw usage there are a decade behind us.
If you have a tool/mode/prompt that creates good mobile UI designs, I'd love to know. Doesn't even have to generate code!
ppyyss8 10 days ago [-]
As a UX/UI designer in Korea, I love seeing related products being released. I hope they become even more advanced in the future.
justusm 11 days ago [-]
nice! Training models using reward signals for code correctness is obviously very common; I'm very curious to see how good things can get using a reward signal obtained from visual feedback
grace77 11 days ago [-]
As are we; it seems like the natural next step.
iJohnDoe 11 days ago [-]
Very cool! Can the generated code and designs be used?