Because people don’t want to listen to robots. A radio station here in Norway was caught playing AI music to save on royalties, and it did not go well for them.
Flemlord 50 seconds ago [-]
The concept of AI music is extremely polarizing. One friend I played it for got visibly angry. Oddly, none of the anti-AI folks were musicians themselves.
codethief 23 minutes ago [-]
From the OP:
> For complex AI generated music, tools like Suno and Udio are obviously in a different league as they're trained specifically on audio and can produce genuinely impressive results. But that's not what this experiment was about.
kypro 17 minutes ago [-]
I agree. It's shockingly good.
It's not just good at producing complete songs, though; AI has made it trivial to take garbage and make it sound good.
I largely stopped making music because imo unless you're in the top 5% of musicians AI is probably able to write better music than you.
I guess it's the same with visual artists. Unless you're really, really good, it's hard to understand why anyone would produce art by hand these days.
jader201 3 minutes ago [-]
> I largely stopped making music because imo unless you're in the top 5% of musicians AI is probably able to write better music than you.
It won't be long before this becomes:
> I largely stopped making _____ because imo unless you're in the top 5% of making _____ AI is probably able to make _____ better than you.
Especially where _____ is anything that can be created digitally.
ramon156 56 minutes ago [-]
> Recently I was listening to music and doing some late night vibe coding when I had an idea. I love art and music, but unfortunately have no artistic talent whatsoever. So I wondered, maybe Claude Code does?
Do I need to read further? Seriously, everyone has talent. If you're not ready to create things, just don't do it at all. Claude will not help you here. Be prepared to spend >400 hrs on just fiddling around, and be prepared to fail a lot. There is no shortcut.
altmanaltman 40 minutes ago [-]
Yeah, it's just weird to expect people to find AI-generated art interesting when the person generating it has no unique take or talent. This is the worst case, where there is absolutely zero creativity in the process, and the created "art" reflects that, imo.
Marha01 33 minutes ago [-]
I don't find it interesting in an artistic way, but I do find it very interesting from an "AI experiment" angle.
altmanaltman 20 minutes ago [-]
I don't get what the "AI experiment" angle is here. The fact that AI can write Python code that makes sounds? And if the end product isn't interesting or artistically worthwhile, what is the point?
josters 34 minutes ago [-]
While the author explicitly wanted Claude to take the creative lead here, I recently also thought about how LLMs could bring their coding abilities to music production workflows, leaving the human as the composer and the LLM as the tool-caller.
Especially with Ableton and something like ableton-mcp-extended[1], this can go quite far. After adapting it a bit to use fewer tokens in tool-call outputs, I could get decent results from a local model telling me what the current device settings on a given track were. Imagine this with a more powerful machine, where requests like "make the lead less harsh" or "make the bass bounce" set off a chain of automatically added devices with new and interesting parameter combinations to adjust to your taste.
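To make the "LLM as tool-caller" idea concrete, here is a minimal sketch of an MCP server exposing two such tools, written with the MCP Python SDK's FastMCP helper. The tool names and the in-memory DEVICES dict are illustrative stand-ins for a real Ableton connection; ableton-mcp-extended's actual tools and API will differ.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("ableton-sketch")

    # Stand-in for a real Live set; a real bridge would talk to Ableton itself.
    DEVICES = {("Lead", "Operator"): {"Filter Freq": 0.8, "Volume": -6.0}}

    @mcp.tool()
    def get_device_parameters(track: str, device: str) -> dict:
        """Return the current parameter values of a device on a track."""
        return DEVICES.get((track, device), {})

    @mcp.tool()
    def set_device_parameter(track: str, device: str, name: str, value: float) -> str:
        """Set one device parameter, e.g. lower a filter cutoff to soften a lead."""
        DEVICES.setdefault((track, device), {})[name] = value
        return f"{track}/{device}/{name} set to {value}"

    if __name__ == "__main__":
        mcp.run()  # exposes the tools over stdio for an MCP-capable client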
In a way this becomes a bit like the inspiration-inducing setting of listening to a song which is playing in another room with closed doors: by being muffled, certain aspects of the track get highlighted which normally wouldn’t be perceived as prominently.
[1]: https://github.com/uisato/ableton-mcp-extended
Curious to see how this worked, I tried it with DeepSeek via Claude Code Router, following the author's guide, with two small changes: make it an emo song that uses acoustic guitar (or, obviously, an equivalent), and allow it to install one text-to-speech tool using Python.
It double-tracked the vocals like freaking Elliott Smith, which cracked me up.
fassssst 37 minutes ago [-]
Related: ChatGPT Canvas apps can send/receive MIDI in desktop Chrome. A little easter egg. You can use it to quickly whip up an app that controls GarageBand or Ableton or your op-1 or whatever.
It can also just make sounds with tone.js directly.
Very interesting experiment! I tried something related half a year ago (LLMs writing MIDI files, musical notation, or guitar tabs), but directly creating audio with Python and sine waves is a pretty original approach.
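For anyone curious what "directly creating audio with Python and sine waves" can look like in practice, here is a minimal sketch (not the author's actual code, and the notes are arbitrary): render a short arpeggio as sine waves with numpy and write it out as a 16-bit WAV.

    import wave

    import numpy as np

    SAMPLE_RATE = 44100

    def sine_note(freq_hz: float, seconds: float, volume: float = 0.3) -> np.ndarray:
        """Render one note as a plain sine wave."""
        t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
        return volume * np.sin(2 * np.pi * freq_hz * t)

    # A tiny "melody": A4, C#5, E5 (an A major arpeggio), half a second each.
    melody = np.concatenate([sine_note(f, 0.5) for f in (440.0, 554.37, 659.25)])
    samples = (melody * 32767).astype(np.int16)

    with wave.open("melody.wav", "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(samples.tobytes())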
bgirard 38 minutes ago [-]
I like how the author shared the prompt + conversation transcripts. I wish OAI / Anthropic would do that when they share content demos.
jonathaneunice 37 minutes ago [-]
_Neon Dreams_ is ELO × Daft Punk.
This song was generated from my 2-sentence prompt about a botched trash pickup: https://suno.com/s/Bdo9jzngQ4rvQko9