Y'all have got to check out the color palette widget wizardry of David Aerne. Seriously, the guy's prolific. The first link is similar to OP's, an image color palette extractor:
I've been working on an adjacent problem (extracting website branding data from a URL) for the past year, and previously had to rely on procedural techniques such as these -- props to the author!
However, models are now getting to the point where we are starting to learn the bitter lesson[0] even with stuff like color-palette generation. Nano Banana 2 [gemini-3.1-flash-image-preview] especially is adept at performing arbitrary operations on images. Before then, you would have to use a model such as Gemini Flash to perform segmentation[1] and then post-analyze those segments.
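The segment-then-post-analyze pipeline mentioned above can be sketched in a few lines of NumPy, assuming you already have a per-pixel label mask back from the segmentation model (the mask and image here are synthetic stand-ins):

```python
import numpy as np

def palette_from_segments(image, labels):
    """Average the RGB values inside each segment label.

    image:  (H, W, 3) uint8 array
    labels: (H, W) integer array, one id per segment
    Returns a dict mapping segment id -> hex color string.
    """
    palette = {}
    for seg_id in np.unique(labels):
        mean_rgb = image[labels == seg_id].mean(axis=0)
        r, g, b = (int(round(c)) for c in mean_rgb)
        palette[int(seg_id)] = f"#{r:02X}{g:02X}{b:02X}"
    return palette

# Tiny synthetic example: left half red-ish, right half blue-ish.
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = (200, 40, 40)
img[:, 4:] = (30, 30, 180)
mask = np.zeros((4, 8), dtype=int)
mask[:, 4:] = 1
print(palette_from_segments(img, mask))  # {0: '#C82828', 1: '#1E1EB4'}
```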
Here's a prompt I used with Nano Banana 2 in AI Studio:
> Derive a coherent, designer's color palette from this image alone.
> Provide 5 distinct HEX color codes as your response.
[Attachment == the picture of the car, first in the author's article]
[Settings: Output .. images & text; Thinking level .. minimal]
Response:
> I have extracted five distinct hex color codes directly from the key elements in this image, representing the colorful facade and the vintage car:
> #FF96C5 (The main pink wall)
> #38C6F1 (The light blue car)
> #AEF6A5 (The green wall)
> #E51988 (The dark pink trim and railing)
> #5F432B (The dark wood of the door and windows)
And they all pretty much check out. Not hyper-accurate, but really not far off anymore. I didn't even have to try!
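One way to sanity-check a model's palette without eyeballing it is to measure how far each proposed hex code sits from the nearest actual pixel in the image. This is a rough RGB-distance check of my own, not anything from the article, and the "photo" below is a synthetic stand-in:

```python
import numpy as np

def hex_to_rgb(code):
    code = code.lstrip("#")
    return np.array([int(code[i:i + 2], 16) for i in (0, 2, 4)], dtype=float)

def nearest_pixel_distance(image, hex_code):
    """Euclidean RGB distance from hex_code to its closest pixel in image."""
    pixels = image.reshape(-1, 3).astype(float)
    diffs = pixels - hex_to_rgb(hex_code)
    return float(np.sqrt((diffs ** 2).sum(axis=1)).min())

# Synthetic "photo": a pink block and a blue block.
img = np.zeros((2, 4, 3), dtype=np.uint8)
img[:, :2] = (250, 148, 195)   # near, but not exactly, #FF96C5
img[:, 2:] = (56, 198, 241)    # exactly #38C6F1

print(nearest_pixel_distance(img, "#38C6F1"))  # 0.0 -- present verbatim
print(nearest_pixel_distance(img, "#FF96C5"))  # ~5.74 -- close but not exact
```

A small distance means the colour genuinely appears in the image; "not hyper-accurate, but not far off" corresponds to small-but-nonzero values like the second one.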
This might be the best color palette generator I’ve ever seen. I used to work in operating systems, and getting a good color palette from a photo is HARD. A lot of very smart, very well-paid people have dedicated years of their lives to this kind of problem. Really fantastic work.
If the author of the blog post ever comes across this thread/comment: bravo. I hope you feel pride in your work, and I’d go so far as to call it a discovery.
assimpleaspossi 49 minutes ago [-]
Can you show the link to it? I just don't see it.
altmanaltman 6 hours ago [-]
Can you talk a bit more about what makes this hard, from your experience/POV? Asking as someone who doesn't have much experience with this type of work.
jamesfinlayson 6 hours ago [-]
Agreed - I remember implementing colour quantisation in MATLAB at university and it seemed simple enough, though we only used it for some simple cases (to learn the theory more than anything). Looking at some of the example images there it looks like it's easy to hit edge cases.
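The coursework version really can be a few lines. Here's a rough Python equivalent of the simplest scheme, uniform quantisation (an illustration, not the actual course code), which also hints at why the edge cases bite:

```python
import numpy as np

def uniform_quantise(image, levels=4):
    """Snap each RGB channel to `levels` evenly spaced values.

    The classic failure modes show up fast: banding in smooth
    gradients, and no guarantee that the colours kept are the
    ones that matter perceptually.
    """
    step = 256 / levels
    idx = np.minimum((image / step).astype(int), levels - 1)
    # Map each bin to its centre value.
    return ((idx + 0.5) * step).astype(np.uint8)

# A smooth grayscale gradient collapses to just 4 flat bands.
grad = np.tile(np.arange(256, dtype=np.uint8), (3, 1)).T.reshape(1, 256, 3)
quantised = uniform_quantise(grad, levels=4)
print(len(np.unique(quantised.reshape(-1, 3), axis=0)))  # 4 distinct colours
```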
gedy 6 hours ago [-]
I agree with the kudos, but back when I was an interesting person, in the early 2000s, I stumbled on this same/similar approach of using K-means clustering in LAB color space for a painting algorithm in my master's project. RGB was not effective.
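For anyone curious what "K-means in LAB" looks like concretely, here's a minimal dependency-free sketch with a hand-rolled sRGB-to-CIELAB conversion and plain k-means (real code would reach for scikit-image's `rgb2lab` and scikit-learn's `KMeans`; the deterministic init below is a simplification of k-means++):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Minimal sRGB (0-255) -> CIELAB conversion, D65 white point."""
    c = rgb / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = c @ m.T / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def kmeans_palette(points, k=5, iters=20):
    """Plain k-means on LAB points; returns the k cluster centres."""
    # Deterministic spread-out init (a stand-in for k-means++).
    centres = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centres[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centres[j] = points[assign == j].mean(axis=0)
    return centres

# Two obvious colour groups: pure red and pure blue pixels.
pix = np.concatenate([np.tile([[255, 0, 0]], (10, 1)),
                      np.tile([[0, 0, 255]], (10, 1))]).astype(float)
centres = kmeans_palette(srgb_to_lab(pix), k=2)
print(sorted(centres[:, 0]))  # L* values near 32.3 (blue) and 53.2 (red)
```

Distances in LAB roughly track perceived colour difference, which is exactly why it beats RGB for this: equal numeric steps in RGB can be wildly unequal to the eye.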
This is quite amazing. The quality of the created palettes is surprisingly good.
For the fourth iteration (guarding against phantom blue from shadow pixels), I wonder if it may help to also take into account how close the pixels in each cluster actually cluster together in the actual photo. (None of the heuristics used here seem to be interested in the position of the pixels at all, only in their values - as-is, it seems one could sort the photo's pixels before running the program and get the same result.) Actual objects usually form connected areas, whereas at least in the fruit image, the phantom shadows are spread across the entire photo in largely disconnected chunks.
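That spatial-coherence idea can be tested cheaply. One hypothetical heuristic (mine, not the article's) compares the spread of each cluster's pixel coordinates against the spread you'd get from a uniform scatter over the whole frame:

```python
import numpy as np

def spatial_compactness(labels):
    """For each cluster id in an (H, W) assignment map, return the std-dev
    of its pixel coordinates relative to the whole image's std-dev.

    Values well below 1.0 suggest a tight, object-like region; values
    near 1.0 suggest pixels scattered across the frame, like shadows.
    """
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    base = coords.std(axis=0).mean()  # spread of a full-frame scatter
    scores = {}
    for cid in np.unique(labels):
        pts = coords[labels.ravel() == cid]
        scores[int(cid)] = float(pts.std(axis=0).mean() / base)
    return scores

# A compact blob (id 1) vs. a checkerboard scatter (id 2) on background 0.
lab = np.zeros((20, 20), dtype=int)
lab[2:6, 2:6] = 1  # tight 4x4 block
lab[::2, ::2] = np.where(lab[::2, ::2] == 0, 2, lab[::2, ::2])
s = spatial_compactness(lab)
print(s[1] < 0.5 < s[2])  # True: the blob is compact, the scatter is not
```

A connected-components pass would be stricter, but even this coordinate variance would separate the fruit from the phantom shadow cluster.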
wkoszek 5 hours ago [-]
Very cool. Looks very nice. I have a small utility that I made for myself and just added your algo to it: https://github.com/wkoszek/imgstat -- the palette looks much better than what I had before.
We just threw Cursor/Claude at the images and it dug out the colours we wanted.
comradesmith 9 hours ago [-]
This is cool, I’m going to try to write my own implementation to follow along as a learning exercise.
firebot 9 hours ago [-]
Interesting design choices.
Have you ever tried allrgb.com? The idea is to use every 24-bit RGB triplet once and only once. Many people naturally choose 4096x4096 as the final image size.
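The arithmetic behind that 4096x4096 choice: 2^24 distinct 24-bit colours need exactly 2^24 pixels, and 4096 * 4096 = 16,777,216 = 2^24. A downscaled sketch of the same idea, packing every 12-bit colour once into a 64x64 image:

```python
import numpy as np

# 12-bit analogue of allrgb: 16 levels per channel -> 16**3 = 4096 colours,
# which exactly fills a 64x64 image (64 * 64 == 4096).
levels = np.arange(16) * 17  # 0, 17, ..., 255: 16 values spread over 0-255
r, g, b = np.meshgrid(levels, levels, levels, indexing="ij")
colours = np.stack([r, g, b], axis=-1).reshape(64, 64, 3).astype(np.uint8)

# Every colour appears exactly once.
flat = colours.reshape(-1, 3)
print(len(np.unique(flat, axis=0)))  # 4096
```

The full-size version is the same construction with 256 levels per channel; the artistry on allrgb.com is entirely in how you *arrange* those pixels.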
https://okpalette.color.pizza
https://meodai.github.io/RYBitten
https://rybitten.space
[0] https://en.wikipedia.org/wiki/Bitter_lesson
[1] https://ai.google.dev/gemini-api/docs/image-understanding#se...