
I'm a musician, but I'm also pretty amused by this anti-AI wave.

There was recently a post referencing Aphex Twin and old-school IDM and electronic music stuff, and I can't help being reminded how every new tech kit always got demonized until some group of artists came along and made it their own. Even if it's just creative prompting, or perhaps custom-trained models, someday someone will come along and make a genuinely artistically viable piece of work using AI.

I'd pay for some app which allows me to dump all my Ableton files into it and train some transformer on them, just to synthesize new stuff out of my unfinished body of work. It will happen and all lines will get blurred again, as usual.


Also a musician and I don't think it's that amusing. IMO this isn't an "AI can't be art" discussion. It's about the fact that AI can be used to extract value from other artists' work without consent, and then out-compete them on volume by flooding the marketplace.


And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.

Electronic music made it so you didn’t have to learn to play an instrument. Auto tune made it so you didn’t have to learn how to sing on key. There are many innovations in music over time that make it easier and less gatekeepy to make music.

We are just moving from making music as a rote activity similar to code, to making music like a composer in much the way that you can create software without writing code. It’s moving things up a level. It’s how the steady march of innovation happens.

It won’t work to put the genie back in the bottle, now it’s to find what you love about it and makes it worth it for you and to focus on that part. Banning the new types of art is only going to last as long as it takes for people to get over their initial shock of it and for good products to start being produced with it.


>And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

Personally, I don't buy this "AI models are learning just like we do." It's an appeal to ignorance. Just because we don't fully understand how a human brain learns, one can't claim it's the same as a statistical model of ordered tokens.

But even if it were true, I'm alright with drawing a line between AI learning and human learning. The law and social conventions are for humans. I want the ability to learn from others and produce original works that show influences. If this right is allowed to all humans, there is a chance someone will learn from and outperform me. That would suck for me, but I can accept it because it came from a universal human right I also enjoy. But an AI model doesn't have human rights. For models, the law and social conventions should still favor humans. The impact on the creative community and future creative endeavors should be balanced against the interests of the people who create and use the models.

I don't know how to do that with LLMs in a way that doesn't prevent the development of these amazing models. Maybe the government should distribute a portion of the revenue generated by the models amongst all citizens, to reflect how each model's value came from the written works of those citizens.


> If this right is allowed to all humans, there is a chance someone will learn from and outperform me. That would suck for me,

This is a rather sad take. If someone learned from my art or music and did something new and more popular, I would be happy! I had influence, I mattered. That new more popular work takes nothing away from my previous work. In fact, when I do science I'm doing it explicitly for this reason, to build on.

For me, creating music is not about "being the best" or "making more money than some other artist." It's about telling the stories I want to tell. An AI would not tell my stories, ever. It might produce things that are somewhat similar, but it won't tell a human story, just a shallow imitation.

On the flip side, AI can be immensely useful. For example, stemming means that DJs or visualizer applications can do more with music. Perhaps AI can be used to create interesting new effects, or interesting new instruments or sounds. It can give ideas and help with inspiration.

I honestly have a hard time seeing AI actually driving musicians out of business because it can't tell a story. And it can't do that because it hasn't lived a life. Yes, I can see it producing low quality ad-jingles or low quality filler tracks like you see in spotify, so some people will be impacted. But we're long past time for some form of universal basic income to deal with this. It's not just artists that need a basic income at this point.


You didn't finish the sentence:

>That would suck for me, but I can accept it because it came from a universal human right I also enjoy.


>The law and social conventions are for humans.

I don't know about that. America shows us that laws and social conventions are for corporations. Humans are just entities to extract profit from.


We don't talk about it much in these AI topics, but there's definitely the elephant in the room of the whole "low trust society" aspect, which makes a lot of actions by corporations, especially American ones, automatically suspect.

But I've seen the discussion here on that, and we're pretty far from being able to have a good discussion on it, let alone bridging the two topics together.


> Electronic music made it so you didn’t have to learn to play an instrument.

This is a cliché. Most celebrated artists in the electronic music world can play several instruments, if not expertly, then at least with enough familiarity to understand the nuances of musical performance.

Electronic musicians are more akin to composers and probably have more in common with mathematicians and programmers in the way that they practice their craft, whereas musical performers probably have more in common with athletes in the way that they practice their craft.


You also need to understand how instruments make sound at an engineering level if you want to make timbre-perfect synthesizers which sound like said instrument, for instance.


Electronic music is also very closely related to computer animation. Animated film technology is much more advanced, but a lot of techniques are similar.


Probably a good analogy too. Pixar's creative process is quite different from drawing it frame by frame, and at least some aspects of it will have used some sort of generative process, but it's incredibly involved and conscious in a way that typing "video of cute cartoon cat, Pixar style" into a prompt isn't.

Same applies to Bandcamp not having any issues with people making music in a DAW


I watched some youtube video where they got complete beginners to animate a character jumping across a canyon gap. No skills, no muscle memory, constant struggling. The character looks like a rag doll. Then the professional does it and she's playing with the arc of the jump, adding emotion to the jump, adding little details like turning the head back for a reaction shot. She's playing with it, and explaining her thoughts and having fun. That really shows how much artistic skill there is involved. It's not just "automation". It's like brush strokes, but applied to splines and velocity curves and shaders.

People don't understand that about music either. We may use sequencers and automation, but the work happens in real time, and it is an instrument that we are playing. It's just that we work at a higher level than just playing something on a keyboard.


Yeah, but we also haven't seen what making actually decent music or movies or whatever with AI will look like. Maybe it simply won't be possible and there will not be a market for it.

But if it is possible it's probably going to be a lot more involved than just '"video of cute cartoon cat, Pixar style" into a prompt'.


You might be interested in this article: https://www.economist.com/culture/2023/05/24/art-made-by-art...

Though relatively old in the AI world (2023), it's still quite interesting.

In case you can't access the article, the prompt used is:

> 35mm, 1990s action film still, close-up of a bearded man browsing for bottles inside a liquor store. WATCH OUT BEHIND YOU!!! (background action occurs)…a white benz truck crashes through a store window, exploding into the background…broken glass flies everywhere, flaming debris sparkles light the neon night, 90s CGI, gritty realism


> Electronic music made it so you didn’t have to learn to play an instrument. Auto tune made it so you didn’t have to learn how to sing on key.

Neither of those things is really true, though. They made it possible to make poor music without learning those things, I suppose, but not to make good music.

> Banning the new types of art

Nobody is seriously talking about banning AI generated music. What you're seeing is a platform deciding that AI generated music isn't something that platform is into. There are a lot of different platforms out there.


What is "good" music?


Perhaps music that at least the author would listen to? To this day I haven't heard an AI song that made me want to press rewind/play to listen to it again. Granted, most human-generated songs are crap too, but at least they are not crap to their authors.


But aren't many crap songs popular too?

Doesn't seem like a good way to measure a "good song".


The eternal question.

I think in this context, the term "intentional music" or "earnest music" applies better. People who just want "music that sounds good" already have mainstream stuff. Many who want a more niche sound deliberately look to support humans in that endeavor, not yet another billionaire label that puts out "safe" but "boring" stuff. Except it's worse now.


Humans are humans, computer programs aren't. A computer program learning doesn't matter, and it's not comparable to human learning. I have no empathy, sympathy or any sort of allegiance to computer programs.

I would imagine the vast majority of other humans agree with me. I'm not just gonna betray humankind because some 1s and 0s "learned" how to write music. Who cares, it's silicon.


> AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.

I guess the difference is that proprietary code is mostly not used for training; models are trained on code that's public. It's the inverse for music, where they're being trained on commercial work, not work that has been licensed freely.


LLMs are absolutely trained on commercial work. You just need to look at the lawsuits coming out against the AI companies.


> Or are you also extracting other artist’s work and using it as inspiration for what you do?

Yes, when I make music, I am taking inspiration from all of the other artists I've listened to and using that in my music. If someone listens to my music, they are getting some value from my contribution, but also indirectly from the musicians that inspired me.

The difference between that and AI is that I am a human being who deserves to live a life of dignity and artistic expression in a world that supports that while AI-generated music is the product of a mindless automaton that enriches billionaires who are actively building a world that makes it harder to live a life of stability, comfort, and dignity.

These are not the same thing any more than fucking a fleshlight is the same as being in a romantic relationship. The physical act may appear roughly the same, but the human experience, meaning behind it, and societal externalities are certainly not.


100%. I think there are some clear distinctions between AI training and human learning in practice that compound this. Humans learning requires individual investment and doesn't scale that efficiently. If someone invests the time to consume all of my published work and learn from it, I feel good about that. That feels like impact, especially if we interact and even more if I help them. They can perhaps reproduce anything I could've done, and that's cool.

If someone trains a machine on my work and it means you can get the benefit of my labor without knowing me, interacting with my work, or understanding it, with really no effort beyond some GPUs, that feels bad. And it's much more of a risk to me, if that means anything.


> If someone invests the time to consume all of my published work and learn from it, I feel good about that.

Agreed. My goal, my moral compass, is to live in a world populated by thriving happy people. I love teaching people new things and am happy to work hard to that end and sacrifice some amount of financial compensation. (For example, both of my books can be read online for free.)

I couldn't possibly care less about some giant matrix of floats sitting in a GPU somewhere getting tuned to better emulate some desired behavior. I simply have no moral imperative to enrich machines or their billionaire owners.


> I am a human being who deserves to live a life of dignity

Sure, but so does the homeless guy living on the streets right now because computers and the internet automated his job - and yet here you are using the very tools ("mindless automatons") that put him out of work.


That's a good observation, but it doesn't cancel out the GP's point, or its author's dignity. On the contrary, actually, it provides more depth and force to their argument.


A given technology may benefit some while harming others. And it may have harms and benefits that operate on different time scales.

The invention of the shipping container put nearly every stevedore out of a job. But it made it radically cheaper to ship things and that improved the quality of life of nearly everyone on Earth.

I suspect that for most stevedores, it was a job where the wages provided dignity and meaning in their life, but where the work itself wasn't that central to their identity. I hope that most were able to find other work that was equally dignified.

That's certainly less true for musicians, poets, and painters where what they do is central to the value of the work and not just how much they can get paid.

There's no blanket technology-independent answer here. You have to look at a technology and all of its consequences and try to figure out what's worth doing and what isn't.

I think shipping containers are a pretty clear win. I think machine learning for classification is likely a win.

It's not at all clear to me that using generative AI to produce media is a win. I suspect it is a very large loss for society as a whole. Automating bullshit drudgery is fine. Most people don't want to do that shit anyway. But automating away the very acts that people find most profoundly human seems the height of stupidity to me.

Do you really want to live in a world where more people have to be Uber drivers and fewer people get to make art? Do you want to live in that world when it appears that the main people who benefit are already billionaires?


You say that as if creative jobs haven't been obsoleted by technology in the past. How many sign painters or weavers do you see around today?

In fact, the theoretical turn in 20th century art was due in part to the invention of the camera. What's the point in continuing down the path of representational art if the camera can recreate a scene with infinitely more realism than the best painter?

Many of the same criticisms that people have of photography as art are being used against AI today, like that it's too easy, that it's soulless, or that the machine is the real artist.


> You say that as if creative jobs haven't been obsoleted by technology in the past.

You say that as if it's a given that that's a good thing.

> Many of the same criticisms that people have of photography as art are being used against AI today, like that it's too easy, that it's soulless, or that the machine is the real artist.

I made none of those criticisms.


I think it's pretty insulting to posit that artists are some special "dignified" profession and that, by implication, there is "no dignity" or no meaning to be found in being an Uber Driver. I know plenty of people who love the opportunity to be useful, socialize, and get to know a broad slice of the local populace.

Plenty of people miss taking care of their horses, but we still drive cars.

The vast majority of humans do not, in fact, think making art is "the most profoundly human" thing. They care about socializing, they care about their family, they want to go on fun vacations and have fun experiences. Most people do not spend their free time painting.


Nowhere did I posit that being an Uber driver has no dignity.

I observed, which is entirely likely to be true, that on average people probably find more personal fulfillment in the work of being an artist than the work of hauling crates off a ship.

Yes, we humans are clever creatures and will extract as much upside and value as we can out of any situation. That does not at all mean that all jobs are thus equivalent in all respects.

> they want to go on fun vacations and have fun experiences.

And how many of those vacations are to places with incredible architecture and rewarding art museums? How many of those fun experiences are music, plays, and movies?

Certainly, family and socializing are important avenues of meaning as well. Those aren't mutually exclusive with wanting to live in a world full of art made by others who care about it.


Spot on Sir


> AI music is the same as AI code. It’s derived from real code, but it’s not just regurgitated wholesale. You still as a person with taste have to guide it and provide inputs.

Not necessarily apples-to-apples here. Full songs generated from AI prompts don't crash like a computer program would. You could simply upload the garbage to Spotify and reap the rewards until it gets removed (if it ever is).


Some of the worst (best?) AI "artists" on Spotify have millions of views. It's tragic what it says about us. That most of us not only can't tell, but actually prefer this kind of uni-tone, blase, on-the-nose, emotionally manipulative crap.


There's music and there's Music. When I want to listen to Music, I pick an artist and album manually. But 99% of the time, I just need something to play in the background when I'm working or cooking or cleaning - then it just has to sound pleasant; the artistic value of that for me is exactly zero. Some of the best mixes I find for that are AI generated, because they have a uniform pleasant sound for a long time, without anyone trying to impart anything on them.


The sterility of AI generated music will lead to a sterility in creativity of humans if "AI" generated music ever becomes dominant. The world is messy, and human music reflects that. But good for you if your life is so uncomplicated that human-created music seems offensive to you, I guess?


Well let me ask you this - if you want to listen to the sounds of rain in a forest or waves crashing on the beach as you fall asleep (as many people do), do you care if someone actually sat on a beach with a microphone for 4 hours straight, or is it ok if what is effectively white noise is computer generated?

It's the same with background music when I work. As a specific example, here's a track I quite like listening to, and it's 100% AI generated. Can you really say it doesn't have any character?

https://www.youtube.com/watch?v=RJUvNVCqtpI

>> The world is messy, and human music reflects that

Are you familiar with the term "elevator music"? It doesn't need to be messy or have any character to it - it just has to cover the noises of the elevator moving up and down its shaft.

>> that human-created music seems offensive to you,

I literally never said that, please stop implying so.


Yes there's hold music, and yes there's pink noise for falling asleep (it has a falloff), but in either case I personally don't think it should be on Spotify or another generic music streaming service.

Put a different way: if I'm listening to music on random and Led Zeppelin finishes, do I want there to be a chance of pink noise or elevator music playing after that song? Not really, but if "it's all on Spotify," then it could happen.


Sure, but if Spotify gives you that after Led Zeppelin, then that sounds like a Spotify problem, not a problem that this music is on Spotify in the first place.


>sounds of the rain in a forest

Not music.

>waves crashing on the beach

Also not music.

>It's the same with background music when I work.

You do you. I like good music when I work, not "background music". The better the music, the more fun it is to work. YMMV.

>But like as a specific example, that's a specific track I quite listening to, and it's 100% AI generated. Can you really say it doesn't have any character?

Maybe it's not only the AI-generated music that is lacking in character.

>Are you familiar with the term "elevator music"? It doesn't need to be messy or have any character to it - it just has to cover the noises of the elevator moving up and down its shaft.

And if it's pretty bad music, then it makes me anticipate getting out of the elevator even more, but most likely I'll be listening to music that I like in my earbuds while I'm in the elevator. And I've been in some fancy elevators with actually nice non-AI generated music.

>>>without anyone trying to impart anything on them.

>> that human-created music seems offensive to you,

>I literally never said that, please stop implying so.

Okay, maybe I read more into it than you were expressing, but it seems like having a human put effort into relating an experience is just too distracting for some people, or something... I took it as "offensive" because you seem to just want a machine to sanitize what someone else wrote and regurgitate it out in a non-distracting way. If that's what you want, nobody here is stopping you from having it, but we can form opinions based on what you write about yourself. You are free to do the same, and yes, I'm sure I can be seen as kind of an asshole sometimes. Maybe I should write a song about it, I'd call it "Ballad of an Internet Asshole", and I'm sure a lot of people would relate to it.


Stop trying not to be a strawman!!!1


> That most of us not only can't tell, but actually prefer this kind of uni-tone, blase, on-the-nose, emotionally manipulative crap.

This is already what pop music, EDM and some other genres have been about for decades. Most of it is slop made with overused, similar chord progressions and beats. The very fact that we can easily separate music into genres is proof that most of the music we produce nowadays is super generic and follows very basic repetitive patterns.

There is AI slop but there is human slop too and it tends to be very successful.


> And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

But the parent poster is, presumably, human! Humans have the right to take inspiration like that from other humans (or machines)! Why do we seem so keen on granting machines the right to take from us? Are we not supposed to be their masters?


Couldn't you just as well say it's a human taking inspiration from other humans through a machine?


Only if the human is actually making the music. If a machine is just generating the song at a human's request, then the human isn't making music, the machine is.


No. Because the inspiration does not pass through the human, only through his machine.


> And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

The volumes of production are orders of magnitude apart between a human producing music and a computer.

With a script and a generator, one individual could oversaturate the whole marketplace overnight, rendering it impossible for other individuals to be found, let alone extract any value.

Also, I don't know if you've ever done music production for fun, but you don't really just set up a prompt. It takes a significant amount of time to actually produce something. Time setting up a DAW system just to export an empty track and submit it. An empty track.

Let alone actually doing all the micro-optimizations by ear and trial to produce any catchy tune. Meanwhile, a statistical approach doesn't even have to understand what it's doing; it could as well be white noise for all it matters.


>We are just moving from making music as a rote activity similar to code

From this statement, I doubt you've written any music worth listening to, or any code that's not trivial.

Don't confuse music with muzak. What you get from an "AI" is muzak. It will never, ever have the same depth, warmth, or meaning as a human translating human emotions and experience into music and lyrics.


Where did I claim in my post to have written music worth listening to or nontrivial code? Seems like you’re just insulting me in particular instead of providing a counter-argument.

There have already been AI-created #1 hits.

Sure, there’s music that has all of the attributes you lay out as “requisite” for “good” music, but this is classic moving of the goalposts. It’s how people always justify that AI is not here yet, because there’s this facet of it that’s not human enough.

A lot of the music people listen to is devoid of the depth, warmth and meaning you mention, even without AI involvement. It’s written and produced by tens or hundreds of people and there’s no single visionary behind it. It’s a product.

Similarly, there can be AI assisted music that has just as much depth, warmth and meaning as a human, BECAUSE a human is involved in the decision-making of that music.

Do you believe that if someone uses a sample, or uses a prebuilt drum loop, that their music automatically is bad? What level of assistance is acceptable? Where do you draw the line?


> There have already been AI-created #1 hits.

It's an old story, but it was a fabricated one.

The only reason this sort of thing tracks is that a lot of people today don't listen to music; they just put it on as background noise to drown out the silence. It seems to pay off for some producers, but I don't think there's big money there, or a real threat of replacing artists.

By and large, the general public has shown that they notice the vapidness, blandness, and incongruity of GenAI music, and don't much care for it apart from seeing it as an interesting curiosity.


>Where did I claim in my post to have written music worth listening to or nontrivial code? Seems like you’re just insulting me in particular instead of providing a counter-argument.

You didn't, and I never claimed that you did - I wrote that I doubt you have. If you had written non-trivial code, or written any music worth listening to, then I doubt you would have the same conclusions.

>A lot of the music people listen to is devoid of the depth, warmth and meaning you mention, even without AI involvement.

I agree, and it will be forgotten, and that's fine. Not every song is a winner. I guarantee that #1 AI-generated hit will not be thought about a year after it comes out. Yet we're still listening to hits from the 1960s that real people created, because they express human experience that isn't easily fabricated by a machine.


>Similarly, there can be AI assisted music that has just as much depth, warmth and meaning as a human, BECAUSE a human is involved in the decision-making of that music.

AI-boosting nonsense

>Do you believe that if someone uses a sample, or uses a prebuilt drum loop, that their music automatically is bad?

Generally, yes. I abhor Kanye and his ilk. YMMV.


I think the analogy here is with Grok generating images of (real) people wearing bikinis. It could always be done in Photoshop before (and with hand-made photo montages before that), but it's now accessible at scale to people with zero skill. That's when a quantitative change becomes qualitative.


Actually, to me this is the perfect argument to make AI music not have copyright.

Normally the copyright is owned by the creator. Algorithms can't own copyrights, so there is no copyright. There is already legal history on this.


> And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

For me, one key difference is that I can cite my stylistic influences and things I tried, while (to my knowledge) commercial music generation models specifically avoid doing that, and most don't provide chord/lead sheets either -- I would find it genuinely sad to talk to a musician about their arrangement/composition choices, only to find they couldn't.


> I would find it genuinely sad to talk to a musician about their arrangement/composition choices, only to find they couldn't

So much of music composition is what "feels right" and is instinctual. Artists aren't consciously aware of probably most of their influences. They can cite some of the most obvious ones, but the creative process is melding a thousand different vibes and sounds and sequences you've heard before, internalized, and joined into something new, in a way only your particular brain could.

Let music historians work on trying to cite and trace influences. That's not something artists need to worry about.


> They can cite some of the most obvious ones

Thus already doing much better than the average Suno producer

E: More seriously, this strikes me as a motte-and-bailey where "Artists cannot list every single influence they have or provide an explicit motivation for every single creative choice" is treated the same as "artists cannot list influences or justify creative choices at all"


I am 100% sure you can't cite all of them


Depends - how long do you have, and do you accept answers in CSV, Arrow or Parquet?


> It won’t work to put the genie back in the bottle

It's not about putting the genie back in the bottle, it's about helping folks realize that the vague smell of farts in the air IS the genie--and this particular genie only grants costly monkey paw wishes that ultimately do more harm to the world than good.


> less gatekeepy to make music

Is "gatekeepy" how we're referring to skill now? "Man I'd like to make a top-quality cabinet for my kitchen, lame how those skilled carpenters are gatekeeping that shit smh"


Gatekeepy to not like something that's not to your taste


> And you create music without ever having heard music before? Or are you also extracting other artist’s work and using it as inspiration for what you do?

This is an argument that the AI should be allowed to benefit, not the person prompting it.


> Electronic music...

Your instrument is the computer and designing sound. You still have to have talent and musical ear to make this music.


It's really only about the flooding-the-marketplace part, not the extracting-value-without-consent part. The current set of GenAI music models may involve training a black box model on a huge data set of scraped music, but would the net effect on artists' economic situations be any different if an alternate method led to the same result? Suppose some huge AI corporation hired a bunch of musicians, music theory Ph.D.s, Grammy-winning engineers, signal processing gurus, whatever, and hand-built a totally explainable model, from first principles, that required no external training data. So now they can crowd artists out of the marketplace that way instead. I don't think it would be much better.


but if no one is making Linkin Funk, can't I enjoy it just because it's made with AI?

https://www.youtube.com/watch?v=fH-BNwBV4EI


Wasn't it Picasso who said "good artists borrow, great artists steal"?

I've never heard an artist confident in their own ability complain about this, because they're not threatened by other competent human artists knocking them off, never mind an AI that's even worse at it.

AI is not going to out-compete anyone on volume by flooding the marketplace, because switching costs are effectively zero. Clever artists can probably squeeze controversy and publicity out of cases where they are knocked off, taking it as a compliment and juicing it for marketing.

But I liked the Picasso quote when I was younger and earlier on in my journey as a musician because it reminded me to be humble and resist the desire to get possessive -- if what I was onto was really my own, people would like it and others could try to knock it off and fail. That is a lesson that has always served me very well.


I'm starting to think more and more in my older age that being 'great' isn't a good thing. I might actually prefer being good. We'll see how that thought plays out though; give me a couple more years


> then out-compete them on volume by flooding the marketplace


The whole idea of outcompeting on volume doesn't add up for music. It's a power law game not a commodities game. Spotify is playing a dangerous game trying to pretend that it is but I have little faith it won't destroy their business long term and turn them into a future Blockbuster or Macy's.


IMHO, it would be solved by just making AI "art" un-copyrightable. Fine, make "AI art" as much as you wish. Sell and buy it as much as you please if you find it to your taste. BUT, you can NOT participate in organizations that take royalties from radio stations, TVs, movies, records, etc. for publishing, performance, etc.


Wait until you hear about sampling...


“great artists steal”?


Trickle-down economics with the "trickle" reduced to zero.

Why are people mad? Don't they understand that you can't stop progress? Fssss... /s


[flagged]


Spotify has a history of intentionally boosting internally produced, royalty-free and/or AI music over actual artists.

https://harpers.org/archive/2025/01/the-ghosts-in-the-machin...


That article is bandied around, and no one either reads or understands what's written there. Neither do article authors BTW.

1. Spotify doesn't have "internally produced music"

2. There are companies that provide white-label ambient/white noise/similar music.

3. Spotify may have preferential licensing deals with some of them (as any company would seek preferential contract terms)

4. Some of that music is generated (AI or otherwise)


Preferential contracts to AI-gen music makers is equivalent to "internally produced music" in my mind, even though they're not technically equivalent.

`==` vs. `===` essentially
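For anyone who doesn't write JavaScript, a minimal sketch of the analogy being invoked (the values here are just illustrative):

```javascript
// Loose equality (==) coerces types before comparing, so two
// technically different values can still come out "equal".
// Strict equality (===) also requires the types to match.
console.log(0 == "0");   // true  -- equal in effect after coercion
console.log(0 === "0");  // false -- not technically the same thing
```

I.e., a preferential deal with an AI-music supplier isn't technically "internally produced" (`===`), but it's equal in effect (`==`).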


> You're just mad that people actually like AI music.

Yes, I am! I'm also mad that people like shitty over-produced pop, though (including me sometimes), so what can you do. Life is shit.


Let people enjoy what they like. It makes it easier to just sit back and enjoy what you like.


That's fine until, for example and by analogy, you go to the store to buy beer, and you don't particularly care for IPA, but IPAs have crowded out half the beers that used to be there including the one you used to sit back and enjoy.


That is still fine. There should be no expectation that what you want will always be available in the market.


How does the analogy work with music though? Are you saying that because there is now over-produced pop there is now less rock, jazz or whatever you prefer? If so, is that actually true and verifiable by numbers?


More like among the things you could stumble on at random, a greater proportion of them are things you're not interested in. You incur more of a burden of intentionality/effort. Less like discovering, where something happens to you, and more like seeking/finding, an act of will. Which some will say they prefer, maybe even me included...


The problem with that approach is when what people like impacts other people negatively. If your habits don’t make things worse for others, have at it!


The problem is economies of scale. Surely me enjoying heroin on my open-air back porch wouldn't be a bother to others, right?


Oh I do! But I'm also a (failed) musician so a bit bitter (lol). Still do it for fun, though!


Curation is a real concern. 'Flooding the market' is bad for everyone, being seen is difficult as is. It's even harder in a slopstorm.


Is this not the constant state of the world? A technology floods a market, the market finds a) the price floor and b) ways to curate

If you’re a producer in that zone, you adapt or get minimized.


This is actually the definition of competition. You are just being drowned out by AI music so no one can discover your music. Steam had the same issue years ago, with asset flips drowning out the discoverability of actual titles, and it implemented many curation tools to help resolve the issue. Acting like AI music isn't having a similar effect on genuine musicians is just playing dumb.


as a musician, the internet has already created a shit ton of competition. AI will make it worse, sure, but it was already a 'problem' and was never going to be solved.

The thing is, you aren't entitled to distribution.

Most musicians who make it these days work really hard at doing live shows, or growing a following on tiktok.

once they have an audience - who cares about competition?


The hardest pill to swallow as a musician is that despite everyone who ever listened to you telling you you're great, despite being in a band and playing shows, despite maybe even selling some merch... if you are not in the top 1%, you probably will never even get a chance to play a show that might put you on the radar of someone who matters.


I hear you and feel you on this being a hard (hardest) pill to swallow, and I think I have a helpful phrase. It helped me quite a bit so I hope it helps you:

'For the love of the game.'

When you don't make any money and no one comes to your shows; when the booking emails go unanswered and the likes on soundcloud remain <10, just remember why you picked up the instrument in the first place. For the love of the game.


I like that strategy.

As a non-musician, but being into music to an unreasonable degree, I always thought that the best artists are those where I feel that, even if no one bought their records and there were just five people at their concert, they'd still be doing the exact same thing and with the same passion. Audiences notice.


Exactly! Glad you're into music. It's a fun strange journey


> The thing is, you aren't entitled to distribution.

That applies to people spamming AI slop too. People are right to complain about spammers. Platforms are right to try to stop spam, even though everyone knows that spam is a problem that is never going to be solved.

> Most musicians who make it these days work really hard at doing live shows, or growing a following on tiktok.

Live shows, by their nature, have almost zero reach. A performance for 40 people takes place once in a single location at a specific time and then it's over. You're either there when it happens or you missed it. A song on youtube or bandcamp can be heard by millions quickly over a few weeks or gradually over years. Social media was a massive boon for musicians.

Sadly, it will get substantially harder to grow a following on tiktok or any other social media platform if those platforms are flooded with AI generated garbage. Real artists will be harder to find. Anyone doing anything new will be drowned out by AI regurgitating everything old. When creative people can't succeed, the creativity they'd inspire in others is lost and everything stagnates.


What you call slop others may enjoy. Calling stuff AI slop doesn't mean it isn't someone's art.


I feel that human artists as a class are more entitled to distribution than generated slop.

And decisions like Bandcamp's above reflects essentially the same view.


Why? Are human made tools more entitled to distribution than machine made tools?


if no one wants the slop, then its not competition. the problem is that people do actually want the slop and artists are mad about it.


That's not how discoverability works. If it becomes too much of a chore to sort through the swamp people will often just opt for whatever is popular.


All of the "discoverability" algorithms are specifically and fundamentally about sifting through the millions to find the few that are preferred. That is their many-billion-dollar industry purpose. Spotify does a fantastic job with this, for me.

> will often just opt for whatever is popular.

Are you suggesting that people consume media they don't like? I'm not familiar with anyone that does this. I personally skip if I don't like a song even a little.


> All of the "discoverability" algorithms are specifically and fundamentally about sifting through the millions to find the few that are preferred.

They are fundamentally about finding the content that will generate the most revenue. That changes the dynamics quite a bit.


You're not wrong, but the need to please the user is still paramount, otherwise they'll just do something else. This is why TikTok is eating everyone's lunch.


I don't agree with this, and to answer the question you originally asked me, I do think users are consuming things they don't actually enjoy. The goal isn't to please the user, the goal is to not bore the user. If you talk to people, I'm sure you'll find a lot of the music they listen to isn't "enjoyed" so much as it is inoffensive background noise.


It's not surprising that some people are mindless consumers, but it's not useful to assume the majority is, especially of paying customers, and competition exists.


You're assuming it's not useful because it doesn't bode well for your argument. What makes you think assuming the majority aren't mindless consumers is useful?


Again, if people enjoyed watching things they didn't like TikTok would not be eating everyone's lunch.


Tiktok is not eating everyone's lunch. Instagram Reels and Youtube Shorts have caught up to and in some metrics even beat Tiktok.


> I'm not familiar with anyone that does this.

I see this a lot, actually. People put things on in the background, for instance, and don't really care if they like it or not (as long as they don't hate it). They just want noise. Or people just scrolling through their feeds without genuinely liking much in them.

In the old days, this was also how the majority of television was watched. People watched TV out of habit, and frequently watched things they didn't like because choices were limited and often there was nothing they actually liked on. Thus all the complaints back in the day about how "there's nothing on TV".

People are willing to sacrifice quite a lot of real enjoyment for convenience.


Many people don't care because it sounds like music.

It sounds like music, because it was generated by a model that was trained on actual music.

It is music that has been chewed up and regurgitated. It provides no benefit to the actual artists whose music fed that model.


should artists pay royalties in perpetuity to their teachers and musical inspirations?


No - human learning is still something special in this world.

It is a gift of time and effort, from both the student and teacher. The ability to be inspired by other works and draw from them, not merely imitate them.

You can ask any human musician to make music that is either inspired or outright copied from another artist. They have a moral compass to do so in a way that is not infringing on the works of others.

A music AI model will ingest what is thrown at it, and generate whatever you ask of it. It is a tool, and if it is ingesting human works to be formed into something else, proper attributions and royalties to the sources need to be made.


I have not met a single person offline who wants more AI music


AI music gets millions of listens, idk what to tell you dawg.


Surely it's almost entirely things like background music in shops and cafes where nobody is actually paying real attention to the music? I find it hard to believe anybody is actively listening to that kind of stuff (apart from perhaps checking out some of the more notorious cases for novelty value).


but people do want it. people who listen to top 40 want slop. most people want slop


At least top 40 has a room of engineers and at least they're getting some compensation. Yes, I understand splits are a bloodbath.


In order to find the stuff to listen to you have to... find it. If you had to wade through, say, 1 million AI generated books to find one that isn't, then ALL of your reading would be AI generated.


A sufficient proportion of junk can cause a market to fail, taking down "legitimate" or "quality" purveyors.


Yet your argument is deeply flawed too. Flooding the market with slop makes it much more difficult to discover genuine, quality art from smaller creators.

ad hominem has no place on HN.


The market was already flooded 20 years ago.

Your biggest competition as a musician is not AI or any new music; it's the music released in the last 50 years.

I predict that slop won’t significantly change the game - which was already rigged against new (and good) artists when I was a little baby


> It's about the fact that AI can be used to extract value from other artists' work without consent, and then out-compete them on volume by flooding the marketplace.

What do you think about The Prodigy?


I didn't even think about the analogy to sampling (and the prior controversy), but that is an even better analogy. Ultimately, the difference between what's creative re-use and what's a ripoff is a matter of how skillfully it's done, and there's a lot of controversy in the middle!



If you want to read the contemporary discussion of sampling, the early-90s opinion columns of Sound on Sound magazine are worth a look.


AI takes all of that old school idm and electronic music and repackages it without a human story to tell, ripping off actual musicians in the process. AI didn’t magically ‘make old IDM its own.’ It scraped decades of artists’ work, stripped out the context and intent, and reassembled the surface features. There’s no human arc, no lived constraint, no risk and no culture.

What’s being repackaged isn’t a new instrument, it’s other people’s careers. I’m not sure what part of that is supposed to be amusing.


I'm honestly not getting the human story thing when it comes to music and maybe art in general. I mean I get what it means, but I don't think it describes why people enjoy art.

To me, it seems more like people place their own meaning in art. A particular song might remind one individual of the good times they had in their teens, while the actual meaning of the song is completely different.

Bach's 5th symphony (or whatever) might be extremely annoying to someone because they had to listen to it every day at work.

And what exactly is the meaning of jazz fusion? I really like a good solo, but a lot of people hate it, they need to hear a voice. (though I don't particularly like the signature Suno or Udio solo..)

I found this AI track on Spotify that I unironically enjoyed. I listened to it every day while working on reviving an old passion project, which became its meaning to me. The tune, along with its album of random disparate Suno generations, was taken down.

I'm not sure if I have a point here, but something is off with the story thing in art to me from a consumers point of view. Maybe from other artists as consumers point of view?


Your point echoes the "death of the author" concept in literature, where the work is independent of the creator, full stop. It's a useful concept up to a point, but if you really have no idea what it means to have a deep connection to music that is wrapped up in some idea of the creator as a human being, you should trust others when they say they do and it's important to them. For those of us with that value, AI slop is offensive, and to be clear, it has precedents in history with Muzak, early schlager music etc -- what they all share is a desire to use the power of music for non-artistic ends, which sucks from any number of viewpoints. If music has non-artistic utility, that doesn't justify a concerted effort to take away artist-made music from those who may not be paying attention.


I appreciate the honesty. I'm not saying people don't have this relationship with art, I think everyone can have some degrees of it, including me.

But in my experience as an artist talking to non-artists about art, the sentiment that art without a struggling artist, a purpose, a story to tell, a human arc, etc., is not real art doesn't hold up. First of all, because it's not true: people apply their own meaning and form their own unique relationship with an artist. (The saying "don't meet your heroes" comes to mind.)

Note that I'm not talking about AI at all here. I'm 100% for banning purely generated AI on soundcloud, bandcamp, spotify, etc. What I really want is to filter out art created by people who has put profit as first priority and thrown away any shred of artistic integrity.

But this is an impossible feat, because who am I to judge that someone else's favorite artist is devoid of artistic integrity?


except that what you’re describing is the CONSUMER SIDE of meaning, not the SOURCE of it.

yes, listeners project their own memories onto music, no one’s disputing that. but that doesn’t make the creator, context, intent, or labor irrelevant. treating music as interchangeable stimulus is how you end up defending systems that strip human work of attribution, risk, and livelihood while still feeding on the cultural residue artists created in the first place.


I think maybe we're talking past each other then. I'm saying I don't agree with the argument that music necessarily needs to have a story to be widely consumed in a positive way.

While I personally like it when people put their heart and soul into something, even if the result is technically not very great, it's society who is the ultimate judge of whether that creation benefits them or not.

I know that the track I'm currently listening to is superior in every way to some modern pop song. The artists have practiced for decades, they have their own unique style I can recognize in other tracks. But I also know that 99.999% of people don't give a shit and think it's noisy music, and depending on your perspective, they're correct.


> I think maybe we're talking past each other then. I'm saying I don't agree with the argument that music necessarily needs to have a story to be widely consumed in a positive way.

I can imagine that this is true for a lot of people. There are certainly folks out there who see music as an interesting sensory stimulus. This song makes you dance, this one makes you cry, this other one makes you feel nostalgic. To these people, the only thing that matters is what the music makes them feel. It's a strange, solipsistic way of engaging with art, but who am I to judge?

I personally don't connect to music—or any other art—that way. The process that goes into making a piece of music is as important to me as the music itself. The people who make that music are even more important. I don't believe in separating art from the artist. In fact, I find the whole idea of separating art and artist to be fundamentally rotten.

Here's an admittedly extreme example, but it's demonstrative of how I personally relate to music. In the wake of the #MeToo movement (see https://en.wikipedia.org/wiki/MeToo_movement), some of the musicians I used to love as a teenager were outed as sexual predators. When I found out, I scoured my music library and deleted all their work. The music was still the exact same music I fell in love with all those years ago, but I could no longer listen to it without being reminded of the horrible actions of the musicians. Listening to it was triggering.

And so to me, music is not just a series of sounds that make me feel good. There are humans behind those sounds, and I care deeply about those humans. They don't need to be perfect—everyone fucks up from time to time—but they need to demonstrate some level of human decency. And they certainly can't be machines, because machines aren't people.

I love machines. I've spent my life building them, programming them, and caring for them. But machines aren't people, and therefore I don't care about the art they make. Maybe one day machines will be able to make art in the same way humans do: by going out into the world, having experiences, making mistakes, learning, connecting with others, loving and being loved, or being rejected soundly, and understanding deeply what it means to be a living thing in this universe. A generative AI model doesn't do that (yet!) and so I'm utterly uninterested in whatever a generative AI model has to say about anything.


I don't think appreciating art separated from the author is solipsistic, in fact I'd argue the opposite. Needing a human presence to engage with art is very human-centric. Or maybe that's due to your definition of art? I can be stunned by how beautiful a sunset is, the same way that I am by a painting, even if no human had a hand in that sunset. I can appreciate the cleverness of a gull stealing some bread from a duck the same way I can appreciate the cleverness of a specific music being used at a specific point in a movie. I can shiver at the brutality of humanity watching Night and Fog, just like I can shiver at the brutality of a praying mantis, eating alive a roach.

>Maybe one day machines will be able to make art in the same way humans do: by going out into the world, having experiences, making mistakes, learning, connecting with others, loving and being loved, or being rejected soundly, and understanding deeply what it means to be a living thing in this universe.

I think this is a good description of the process of how some art is created, but not all? Some art is a pursuit of "what is beautiful" rather than "what it means to be human" ie a sensory experience, some art is accidental, some art just is. For some art knowing the person behind is important, to me; for some not; for some it adds to the experience; for some it removes from it.

I would also highlight some small contradiction:

>I can imagine that this is true for a lot of people. There are certainly folks out there who see music as an interesting sensory stimulus. This song makes you dance, this one makes you cry, this other one makes you feel nostalgic. To these people, the only thing that matters is what the music makes them feel. It's a strange, solipsistic way of engaging with art, but who am I to judge?

>Here's an admittedly extreme example, but it's demonstrative of how I personally relate to music. In the wake of the #MeToo movement (see https://en.wikipedia.org/wiki/MeToo_movement), some of the musicians I used to love as a teenager were outed as sexual predators. When I found out, I scoured my music library and deleted all their work. The music was still the exact same music I fell in love with all those years ago, but I could no longer listen to it without being reminded of the horrible actions of the musicians. Listening to it was triggering.

That seems to me a case of "the only thing that matters is what the music makes them feel".


> I can be stunned by how beautiful a sunset is, the same way that I am by a painting, even if no human had a hand in that sunset.

As can I, but a gorgeous sunset is not art. It's beauty.


If the definition of art is that a human must be involved, then fine: AI-generated music is not art. But isn't it everything art is, minus the human component? I.e., it can be beautiful, ugly, etc., just as a sunset can be beautiful and a rotting corpse can be ugly.


> Bachs 5th symphony (or whatever) might be extremely annoying to someone because they had to listen to it every day at work.

Or Beethoven's 9th. For different reasons...


"little of the old ludwig van"?


I know a few EDM producers and the culture seems to consist of doing the most drugs of anyone you've ever met. Which is quite risky, true.


The issue is not so much artists who will use it as a tool, even though there is much to say about that; it's the hundreds of thousands of people with no interest in music whatsoever who will flood the platforms in order to make a quick buck.


> it's the hundred of thousands of people with no interest in music whatsoever, that will flood the platforms in order to make a quick buck.

Whenever I look at popular artists on streaming platforms, I see 'remixes' where people just slowed down the particular original song and added reverb or some other silly effect to it. I don't think AI existing or not will change the behaviour of people trying to make a quick buck. If they aren't using AI, they'll use a different tool as they did before.


People who won’t invest anything and just want to make a quick buck won’t be successful with AI generated content/music.

You still need to invest significant time and effort to make it work.


Musicians who are being threatened by AI impersonating them, flooding the market with music like theirs, and otherwise actually harmed by this would disagree with you. Benn Jordan speaks at length about it in this video: https://www.youtube.com/watch?v=QVXfcIb3OKo


They will be successful in drowning out the artists. Not individually so, but collectively.


LOL, you clearly haven't seen the flood of million-plus-view ambient-music bullshit videos on YouTube


I never said it’s impossible to be successful with slop, i said it still needs work and time investment to be successful


But that’s exactly my point. The time and effort invested in making ambient music for those channels is utterly negligible


They are, though, by sheer volume. Finding anything half good will be a needle in a planet-sized haystack of slop.


This has already been the case way before AI was a thing.

As a new artist you have to compete against 60+ years of music history - much of it really good music too.


> As a new artist you have to compete against 60+ years of music history

Kinda, sorta. Good music is reflective of the society and era it was produced in and that matters. I regularly listen to music, from all over the world, that was composed (and some, recorded) 100+ years ago, music that was recorded 50+ years ago, and music that was recorded last month. None of them are a substitute for the other because each has a unique voice expressing things that were unique about the time and place they were made in.

So, in a sense, they aren't in competition with each other. But also, there are only so many hours in a day and there isn't enough time in your life to listen to all the worthy music that humans have made. Hard choices are necessary. In that sense, they are in competition with each other.


As a listener this is a good thing. Too much good music.

Now the listener has to wade through 60x absolute slop to find something good.


I personally don’t have that problem. I can find new good music easily on soundcloud/bandcamp/youtube - much more than I have time to consume. Maybe this 60x absolute slop thing is a problem if you use services like Spotify - which arguably are a much bigger plight upon artists than AI generated slop


You're right, but for EDM this was pretty much already the case. The scene survives in large part thanks to DJs who wade through countless mediocre tracks looking for the few hidden gems to deploy at the right moment. I think AI means that DJs will become much more important in all genres.


How many engineers are using ai-generated software libraries at this point? This could be all over github, but the software mostly sucks (because the AI doesn’t do architecture and real engineering, that has to be input into it right now). Increasing the volume of production doesn’t necessarily lead to the abandonment of the “good stuff”. You still have to compose the music and write the lyrics, the AI is not sophisticated enough to competently do that right now


The main differentiator I've noticed is: how much work is the tool doing, and how much work is the artist doing? And that's not to say that strictly more effort on the part of the artist is a good thing, it just has to be a notable amount to, IMHO, be an interesting thing.

This is the primary failure of all of the AI creative tooling, not even necessarily that it does too much, but that the effort of the artist doesn't correlate to good output. Sometimes you can get something usable in 1 or 2 prompts, and it almost feels like magic/cheating. Other times you spend tons of time going over prompts repeatedly trying to get it to do something, and are never successful.

Any other toolset I can become familiar and better equipped to use. AI-based tools are uniquely unpredictable and so I haven't really found any places beyond base concepting work where I'm comfortable making them a permanent component.

And more generally, to your nod that some day artists will use AI: I mean, it's not impossible. That being said, as an artist, I'm not comfortable chaining my output to anything as liquid and ever-changing and unreliable as anything currently out there. I don't want to put myself in a situation where my ability to create hinges on paying a digital landlord for access to a product that can change at any time. I got out of Adobe for the same reason: I was sick of having my workflows frustrated by arbitrary changes to the tooling I didn't ask for, while actual issues went unsolved for years.

Edit: I would also add the caveat that the more work the tool does, the less room the artist has to actually be creative. That's my main beef with AI imagery: it literally all looks the same. I can clock AI stuff incredibly well because it shares a lot of the same characteristics: things being too shiny is weirdly the biggest giveaway; I'm not sure why AIs think everything is wet at all times, but it's very consistent. It also over-populates scenes; more shit in the frame isn't necessarily a good thing that contributes to a work, and AI has no concept at all of negative space. And if a human artist has no space to be creative in the tool... well, they're going to struggle pretty hard to have any kind of recognizable style.


There is an AI plugin for krita that lets you define regions, selection bounds, sub-prompts, control nodes, and lots more control over a given image generation model than standard Automattic or comfyUI workflows...down to 'put an arm wearing armor here' for example in my RPG NPC token writing.

It has full image generation mode, it has an animation mode, it has a live mode where you can draw a blob of images and it will refine it 2-50 steps only in that area.

So you are no longer working stroke by stroke with saved brush settings, but you are still painting and composing an image yourself, down to the pixel. It's just that the tool is WAY more compute intensive; the AI is essentially rendering a given part of the drawing, as you specify, as many times as you need.

How much of that workflow is just prompting a one-shot image, versus photoshopping (and then some) an image together until it meets your exact specifications?

No, the final image cannot be copyrighted under current (2026) US law, but for use in private settings like tabletop RPGs... my production values have gone way up, and I didn't need an MFA in Old Masters drawing or to open a drawing studio to get those images.


What's the plugin called?


> Sometimes you can get something usable in 1 or 2 prompts, and it almost feels like magic/cheating. Other times you spend tons of time going over prompts repeatedly trying to get it to do something, and are never successful.

That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.

I spent some of the 90s and 00s making digital art. There was a lot of hostility to Photoshop then, and a lot of "That's not really art."

But I found that if I allowed myself to experiment, the output still had a unique personality and flavour which wasn't defined by the tool.

AI is the same.

The requirement for interesting art is producing something that's unique. AI makes that harder, but there's a lot of hand-made art - especially on fan sites like Deviant Art - which has some basic craft skill but scores very low on original imagination, unusual mood, or unique personality.

The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique. AI-made art mechanises the mash-up, but it's still up to the creator to steer the process to somewhere interesting.

Some people are better at that than others, and more willing to dig deep into the medium and not take it at face value.


> That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.

For me, the artist, sure. I've never had a day where Affinity Photo just doesn't have the juice, and I don't see the appeal. Photoshop, for all its faults, doesn't have bad days.

That's the difference between the artist and the artist's tool. A difference so obvious I feel somewhat condescending pointing it out.

> I spent some of the 90s and 00s making digital art. There was a lot of hostility to Photoshop then, and a lot of "That's not really art." ... But I found that if I allowed myself to experiment, the output still had a unique personality and flavour which wasn't defined by the tool.

"People were wrong about a completely different thing" isn't the slam dunk counterpoint you think it is.

Also, as someone else in that space at that time, I genuinely haven't the slightest idea what you mean about Photoshop not being real art. I knew (and was one of the) artists at that time; we used Photoshop (of questionable legality, but still) and I never heard this at all.

> The requirement for interesting art is producing something that's unique. AI makes that harder,

Understatement of the year.

> The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique. AI-made art mechanises the mash-up, but it's still up to the creator to steer the process to somewhere interesting.

The difference is the lack of intent. A "person" mashes up what resonates with them (positively or negatively) and from those influences, and from the broader cultures they exist in, creates new and interesting things.

AI is fundamentally different. It is a mash-up of the mean of every influence in the entire world, which is why producing unique things is difficult. You're asking for exceptional things from an average machine (average in the mathematical sense, not the quality sense).


> That's normal for any kind of creative work. Some days it just happens quickly, other days you keep trying and trying and nothing works.

Usually this means I have forgotten to eat, or that I need to take a step back and consider whatever I'm doing at a deeper level. Once I recognized that, the "keep trying and trying and nothing works" days vanished for good.


> The reality is that most hand-made art is an unconscious mash-up of learned signifiers mediated by some kind of technique

Yeah, no. Competent artists are not generalizable as "unconscious", solely "mashing up" influences or input, or even working with "signifiers": many are exquisitely aware of their sources; many employ diverse and articulated methodologies for creation and elaboration; many enjoy working with the concrete elements of their medium with no concern for signification. Even "technique" does not have a uniform meaning across different fields and modes.


Cf. Holly Herndon's album Proto.

This is something people spent a lot of time on, is trained lovingly on only their own stuff, and makes for some great music.

It's "AI", but in a way almost unrecognizable to us now: it's not attached to some product, and it's not about doing special prompting. It is definitely pop/electronic music, but it follows a tradition of experimentation between what we can control and what we can't, which here is their bespoke stochastic program.

https://youtu.be/sc9OjL6Mjqo

It is not about how the computer or the model enables us, which is so silly. (As if art were simply about being able to do something or not!) It's about doing something with the pieces you have that only those pieces can do.


Holly Herndon's music is original and creative. Unlike most LLM-generated pastiche text, picture or music.

And since it's from 2019, it's not quite the same thing. I like it, unlike the current wave of unwanted LLM slop.

It's original. Of course if 1000 people were doing the same with minimal creative effort and passing it off as something else, that would ruin it.


Gotcha thanks!


For instance Frontier (1) or Eternal (2)

I get Kate Bush and Dead Can Dance vibes, filtered through a mechanical chorus with a bit of glitched breakbeats.

It's emphatically not the forgettable, bland, averaged pastiche that LLMs emit on lazy command. Even if it's not your favourite, I'm sure that it's something.

1) https://www.youtube.com/watch?v=rvNqNgHAEys

2) https://www.youtube.com/watch?v=r4sROgbaeOs


AI as a tool is included in almost every DAW (or can be, through VSTs), and there is no way Bandcamp could enforce a strict "no AI has been used in the process" policy. I think it is sane to separate cases where a record is entirely generated from a single prompt vs. AI used as an instrument or tool.


Precisely: every reverb is an impulse response, and lots of other effects are effectively some sort of convolution with the neural networks we otherwise call AI. Arpeggiators are AI, and the random jumps between patterns in Ableton are a Markov chain.

What does Bandcamp really mean? Perhaps sampling other people's voices and music is barred, but not these mini-AIs that are everywhere?
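The Markov-chain claim is easy to make concrete. Here is a minimal sketch of clip "follow actions" as a first-order Markov chain: the next pattern depends only on the current one, via a transition table. The pattern names and probabilities below are invented for illustration and have nothing to do with Ableton's actual implementation:

```python
import random

# Hypothetical transition probabilities between song sections.
# Each row maps the current pattern to weighted choices for the next one.
transitions = {
    "intro":  {"verse": 1.0},
    "verse":  {"verse": 0.5, "chorus": 0.4, "bridge": 0.1},
    "chorus": {"verse": 0.6, "chorus": 0.2, "outro": 0.2},
    "bridge": {"chorus": 1.0},
    "outro":  {"outro": 1.0},
}

def next_pattern(current, rng):
    """Pick the next pattern from the current one's weighted choices."""
    choices, weights = zip(*transitions[current].items())
    return rng.choices(choices, weights=weights, k=1)[0]

def generate_arrangement(start="intro", length=8, seed=42):
    """Walk the chain for `length` steps, seeded for reproducibility."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(next_pattern(out[-1], rng))
    return out

print(generate_arrangement())
```

Nothing here is "intelligent" in the transformer sense, which is the commenter's point: a weighted dice roll over a table is enough to produce arrangement-level variation.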


Please stop being intentionally obtuse. Convolution, arpeggiators, impulse responses are not at all comparable to output from generative AI / LLMs.


Yeah, and oscillators ringing together in an FFT choir based on notes from a diffused image is absolutely, totally not AI, just algorithms. Really, why be so rude, given that you understand the math behind it? Obtuse is not a nice word, not something I would say to people at random. Because, you see, back in the day generative grammars were called AI, as were so many other discrete structures that are employed in music generation (sorry, production) on an everyday basis.

Algorithmic progression generation HAS BEEN IN USE for years; sorry you didn't notice, or perhaps you don't listen much to everyday radio. Markov chains, constraint solvers, and rule-based harmony live in many VSTs... the fact that there are so many "experimenters" out there winding knobs to match a pleasurable pattern does not change the fact that they can be 100% ignorant of the deus ex machina.

I'm surrounded by producers who have absolutely no clue about the vast amount of actual AI and actual probabilistic algorithms that make their "unique" sounds possible. And all of them are 100% ignorant of what AI means when they say it, because they don't mean any specific thing.

How is this not AI? Or does one need a transformer-based model to call it AI? This whole story did not start a year or two ago; you may be late for history class, though. The fact that "AI" has been a moving marketing concept does not change the reality that most modern music (including acoustic), at some point in the production process, gets artificially enhanced by honestly super-complex systems that are intelligent enough to do what would otherwise take 20x more effort to get right.


> there is no way bandcamp could enforce a strict "non AI has been used in the process" policy

Good thing that's not what Bandcamp is doing, then. To spare you a click, here's the exact wording:

"Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp."


I don’t think this is an AI issue, but one of the amount of effort, the thought process, and the storytelling around the track they made.

Before generative AI, there was already a swarm of people who aimed at maximising the number of tracks they made in a short time, with abusive marketing. It is not wrong that they can pump out 100 tracks in a year with a template, a specialised workflow, and the right marketing techniques, but... what is the story behind this music? For many tracks, the only story I heard was:

> I am the most productive person and I can make most of the money because of that.

Quantity-wise, for sure, they win, but quality-wise, I fail to imagine a more complex story than the one above, even though the tracks are good for hyping a dance floor or a concert. These days, I mostly listen to music I have bought, or music made by specific music communities, because of the story behind their tracks, even if they're not as polished.

It's the same reason I haven't watched many movies since Iron Man 3: most blockbusters follow the same winning formula rather than trying something new, in-depth, or unexpected, with CGI and product placement all over the place instead of a good story.

AI just emphasises this problem even more, since commercial “art” has been testing the majority's newest lows.

There is a difference between using a tool to create art and using it to spam.


> Even if its just creative prompting, or perhaps custom trained models, someday someone will come along and make a genuine artistic viable piece of work using ai.

We've now had this technology for 2 years. Show me one, just ONE track that is purely(!) made by AI you find honestly exciting. Not "commercially successful", mind you, something you, a musician, personally think is actually great. You are referencing Aphex Twin there, and I'm old enough to remember when I first heard "Digeridoo", so, you know, something where you just go "Wow, that's a banger". If you're DJing: something you would actually put on in a club and the crowd would go wild.

Let's cut the crap: there is none. All GenAI is good for is generating stupid memes, shitposting, ads, and generic background music. There is ZERO creative value in purely generative AI. Yes, there are tools leveraging AI models which can help musicians create tracks - entirely different thing. This is also not what Bandcamp is banning here. Most people will freely admit that AI tooling can be used creatively, like what De Staat did with the "Running backwards into the future" music video - that's all fine, really nobody is disputing that fact, although that "look" is now well established and people are mostly bored and annoyed by it, but that's just how it goes.


There are plenty of places to publish AI-generated music. Why should a platform allow music its users clearly don't want?


I find it interesting that there's so much pushback against ai generated art and music while there seems to be very little for ai generated code.


Perhaps that's because there's an enormous difference between fine art and computer programs.

Also, there's quite a lot of pushback against AI-generated code, but also because unlike music, normal people have no interest in and aren't aware of the code.


They are obviously different things, but haven't the people who spent thousands of hours honing their coding and releasing their code put in just as much time and effort, if not more, as the people who made non-AI images and music?


There certainly are. But if you're implying equivalence based on time and effort spent, neither of those is what differentiates arts, crafts, and other activities.


I won't merge anything AI generated in any of my FOSS projects, unless I'm successfully deceived.

In the first place, I do not regard a copyright notice and license on AI generated code to be valid in my eyes, so on those grounds alone, I cannot use it any more than I could merge a piece of proprietary, leaked source code.


The copyright office agreed with you about the non-copyrightability of AI generated media so in that sense you can safely ignore copyright claims on anything AI-generated.


You can safely ignore claims made by the operator of the AI who had the material generated, true enough.

You cannot ignore a credible/plausible claim from a third party.

It's not simply the case that the output has a public-domain-like status.


Music is art, code is engineering. "Hackers and painters"[1] was always wishful fluff, unfortunately.

When it comes to code, I don't think anyone cares how the sausage is made, and only very rarely do people care by whom. The only question is "does it work well?"

Art is totally different. Provenance is much more important - sometimes essential. David is a beautiful work, but you could 3d print or cast a replica of "David". No one would pretend that the copy is the same as the original though - even if they're indistinguishable to the untrained eye - because one was painstakingly hand sculpted and the others were cheaply produced. This sense of provenance is the property that NFTs were (unsuccessfully) trying to capture.

[1] https://www.paulgraham.com/hp.html


If someone painstakingly hand sculpted an exact replica of "David", does it make it art, or a forgery? Is hand written code to produce generative art not art?

It's difficult to pin down the line. Ultimately it's up to the individual to define them. "The relationship to art, and this kind of painting, to their work, varied with the person entirely."[1]

[1]: https://news.berkeley.edu/2025/03/31/berkeley-voices-transfo...


> If someone painstakingly hand sculpted an exact replica of "David", does it make it art, or a forgery? Is hand written code to produce generative art not art?

No and no.

If you raise and teach a child and they generate a painting, are you the artist?


Musicians and artists are under pressure to make money, but they can't rush it,

while programmers have to rush these days or lose their jobs. Programmers don't have much of a say in their companies.


Devs are quite used to using other people's work for free via packages, frameworks, entire operating systems, and IDEs. It's just part of the culture.

Music has its history in IP, royalties, and most things need to be paid for in the creation of music or art itself.

It’s going to be much easier for devs to accept AI when remixing code is such a huge part of the culture already. The expectation in the arts is entirely different.


This doesn't make sense to me. I mean, the term "remix" literally comes from the music scene.

Artists are constantly getting inspiration from one another, referencing one another, performing together or having their works exhibited together...

While there are some big name artists who are famously protective of the concept of IP, those artists have made headlines exactly because when they litigate they seem so unreasonable compared to the bedroom musicians and pub bands and church choirs and school teachers and wedding DJs and millions of other artists and performers whose way of participating in "the culture" is much less tied to ownership.


Most code people interact with is shat out by soulless corporations; why would they care? Being honest here, the vast majority of people have their experience of code dictated by fewer than a handful of companies; at their jobs they are told to use these tools or go file for welfare. The animosity has been baked into the industry for quite a while; it's only very, very recently that the masses have been able to interact with open source code, and even that is getting torn down by big tech.

Compare this to music, where you are free to choose and listen to whatever you want, or stare at art that moves you.

At work most people are forced to deal with code like Salesforce or MSFT garbage; it's not the same experience at all.

Why would people care about code coming from an industry that has been bleeding them dry and making their society worse for 20+ years?


What???

Every thread on HN that touches on the topic has countless people talking about how LLM generated code is always bad, buggy and people that utilize them are inexperienced juniors that don't understand anything.

And they're not completely wrong. If you don't know what you're doing, you'll absolutely create dumpster fires instead of software.


Sure, I am one of the people who will say that. But where are the people calling for it to be banned? Where are the stores and websites that are banning AI generated software?


I feel like part of the difference is how art vs code is viewed. You could make the argument code is art, though most don't have that stance. Visual art and music tend to be made by a few people, there is ego involved, you care who the artist is. Code tends to be made by shops and consumers don't know who the coders are. Programmers are already faceless.

I think it's also about money. Places code and code samples are stored tend to be large companies that are in tech and on the AI hype wagon. Bandcamp is not one of those places.


There's one popular platform that requires disclosing whether and how AI was used (Steam), and if you search anything about it, all you can find is like a sea of articles opposing it.



Yes and it was probably only done because of people complaining about AI art, not AI code.


Really? You've not seen the numerous open source projects banning AI-generated PRs with extreme prejudice?


That's not really the same as stores outright banning AI code.

An apt analogy would be a shared drawing taking merge requests, where you have to spend 30 minutes zoomed in on every single merge request to check whether there's a microscopic phallus embedded somewhere.

It is completely fair for an open source project to have their own standards, and you are also free to fork it so you can accept as many AI PRs as you want.

None of these options are available to someone who wants to sell AI-generated music. There are really only two marketplaces for selling your own music, and if both of them ban AI, you are effectively locked out of the entire market.


Positing that AI generated code is always bad and buggy is delusional.

I have dozens of little programs and websites that are AI generated and do their job perfectly.


I think a key factor there is that programmers (in the actual sense, rather than so-called “vibe coders”) are more likely on average than (current) artists and musicians to have intimate knowledge of how AI works and what AI can and can't do well — and consequently, the quality of their output is high enough that it's harder to notice the use of AI.

Eventually that'll change, as artists and musicians continue to experiment with AI and come up with novel uses for it, just as digital artists did with tablets and digital painting software, and just as musicians did with keyboards and DAWs.


AI music from Suno sounds indistinguishable from non-AI-generated music to me.

In terms of how well it works, the quality of AI music is far better than that of AI art or code. In art there are noticeable glitches like extra fingers. Code can call non-existent functions, fail to do what it's supposed to do, or have security issues and memory leaks. From what I can tell, there is no such deal-breaker for AI music.


The tells in music are there. The most common being: vocals have a subtle constant hiss to them, voices and instruments sound different in the second half than they did in the first, the hiss filter gets more prominent and affects all instruments towards the end of the song, auditory artifacts like volume jumps or random notes/noises near transitions.

More subjective tells: drums are hissy and weak, lyrics are generic or weird like "Went to the grocery store to buy coffee beans for my sadness", weirdly uniform loudness and density from start to finish, drops/climaxes are underwhelming, and (if you've listened to enough of them) a general uncanny feel to them.

I've generated about 70 hours of AI music and have listened to all of the songs at least once, so it's become intuitive for me to pick them out.

Some examples for listening for the hiss filter:

https://suno.com/s/qvUKLxVV6HDifknq (Easiest to hear at 0:00 with the inhale)

https://suno.com/s/QZx1t0aii0HVZYGx (Really strong at 0:09)

Some examples for more hiss and other (subjective) tells like weak drums:

https://suno.com/s/tTYygsVFo88SX6OV

https://suno.com/s/CzFgC6dxSQLWyGSn
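For what it's worth, the "constant hiss" tell described above can be probed crudely in code by measuring how much of each frame's spectral energy sits above a high cutoff. This is only an illustrative sketch on synthetic audio; the 44.1 kHz rate, frame size, and 10 kHz cutoff are arbitrary choices of mine, and real detection is far subtler than this:

```python
import numpy as np

SR = 44_100  # assumed sample rate (Hz); all thresholds here are illustrative

def high_band_ratio(signal, sr=SR, frame=4096, cutoff_hz=10_000):
    """Per-frame ratio of spectral energy above cutoff_hz to total energy.
    A persistently elevated high-band floor is one crude proxy for the
    'constant hiss' tell."""
    n_frames = len(signal) // frame
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1 / sr)
    ratios = []
    for i in range(n_frames):
        chunk = signal[i * frame:(i + 1) * frame]
        spectrum = np.abs(np.fft.rfft(chunk * window)) ** 2
        total = spectrum.sum() + 1e-12  # avoid division by zero on silence
        ratios.append(spectrum[freqs >= cutoff_hz].sum() / total)
    return np.array(ratios)

# Demo on synthetic audio: a clean 440 Hz tone vs. the same tone with added
# broadband noise (standing in for hiss).
rng = np.random.default_rng(0)
t = np.arange(SR) / SR
clean = np.sin(2 * np.pi * 440 * t)
hissy = clean + 0.05 * rng.standard_normal(len(t))

print(high_band_ratio(clean).mean())
print(high_band_ratio(hissy).mean())
```

On the synthetic pair, the hissy signal's high-band ratio is clearly higher than the clean tone's, which is the kind of difference your ear picks up as a hiss filter.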


> For code, it can call non existent functions, not do what it is supposed to do, or have security issues or memory leaks.

I guess what I'm getting at is that, since programmers are typically more inclined than the average person to understand how AI works, programmers are therefore ahead of the curve when it comes to understanding those pitfalls and structuring their workflows to minimize them — to play to the strengths and weaknesses of LLMs. A “fancy” autocomplete v. a “fancy” linter v. something pretending to be a junior programmer are all going to have very different rates of success.

The issue hindering art and music is that most people using generative AI for art and music are doing so analogously to the “something pretending to be a junior programmer” role instead of the “fancy autocomplete” or “fancy linter” roles. That is: they're typically using AI to generate works end-to-end, whereas (non-vibe-coder) programmers are typically using AI in far narrower scopes, with more direct control over the final output. I think the quality of AI-based art and music will improve as more narrowly-scoped AI-driven workflows catch on among actually-skilled artists and musicians — and the result will be works that are very different from existing works, rather than works that only cheaply imitate some statistical average of existing works.


Companies sell products built on code, not the code itself. Code is a means to an end.


The pushback is motivated by the interests of the petty-bourgeois class, and those are a larger proportion of the former.


Imagine being a Marxist and not respecting the craft and labor required for art production. Couldn't be me.


> demonized until some group of artists came along and made it there own

I'm pretty sure the people at Bandcamp agree with you and that's why they mention future "updates to the policy as the rapidly changing generative AI space develops".


Well, that's the issue. We're not seeing "artists" coming along and applying it to their years or decades of creative knowledge. We're seeing the equivalent of some cushy heir to a fortune coming in with a drill and saying "I can outdo these teams of diggers! We don't need diggers anymore!"

And on the surface the drill is better. But this heir is assuming that all diggers do is displace dirt. They're not thinking about where to dig, how to dig safely, what to dig for, and where brute force is needed vs. a subtle touch (because even in 2025, miners keep shovels with them). That's all going out the window for "hey, I made a hole, mission accomplished!"

Instead of working with diggers to enhance their mining, they want to pretend they can dig themselves. That's why no one in the creative space is confident in this.


> its just creative prompting,

Sure, you just can't upload the resulting track directly on Bandcamp, but you're free to "creatively prompt" on SUNO all you want, they'll even host your "music".

It's also a matter of resources. People uploading gigabytes of AI-generated slop a day isn't really what Bandcamp is about.


>someday someone will come along and make a genuine artistic viable piece of work using ai

And in the mean time, AI will continue to clutter creative spaces and drown out actual hardworking artists, and people like you will co-opt what it means to be an artist by using tools that were trained on their work without consent.


[flagged]


> you sound like someone from the 1800's shouting about how photography should be banned and not allowed to crowd out hard working painters.

I'm saying that you shouldn't call photographs paintings because they aren't paintings. I don't particularly care if people make AI "music" or "art" and I don't particularly care if they consume it (people have been consuming awful media for the entire history of humanity, they aren't going to stop because I say so), but if you give me a ham sandwich and call it a hamburger I am going to be annoyed and tell you that it isn't a hamburger and to stop calling it that because you're misleading people who actually appreciate hamburgers.

AI "art" isn't art. I don't care whether you like it. It's like fractals or rock formations or birdsong - it may be aesthetically appealing to some people, but that isn't the definition of art.


Similarly, people keep posting articles to HN that get upvoted which are substantially AI edited. They're never labeled as such, and it's unpleasant to find myself reading unlabeled ChatGPTese again. There's a Show HN up now that has an entirely generated readme, which is just... fine, I guess. I just don't want to engage with it.

Ed: two Show HNs that are substantially AI generated readmes, now


I would say trying to dictate what is and isn't art really goes against the spirit of art in general. Plenty of art exists to push boundaries, including the boundary of what can be considered art.


> if you cant create something that competes with AI slop

Nothing can compete with AI slop when the ratio is 100000:1 AI slop vs real music.

Look at Google search results... they're not all AI slop yet, but they're 100000:1 content mills vs useful results.


Can artists compete with algorithms that push AI slop because it costs less to license?


Can craft breweries compete with light-beer slop because it costs less to produce? If the product is better, people will pay more for it. Yet lots of people love Bud Light.


"greetings, fellow musicians" - genAI and quant guy


I’m more hopeful that MIDI completion/in-filling models will be easier for musicians to control and use. But right now, the most popular tools are things like Suno, where you barely have any control and it spits out an entire, possibly mediocre song. It’s the same vein as ChatGPT image generation vs. Stable Diffusion, where you can do much more controllable inpaints with the latter.


It's like the reverse of the product that advertises itself as "AI driven." As if that's supposed to be a selling point. OK, it's AI driven, but is it good?

There may be short term emotional strings to pull. "AI driven!" or "AI free!"

But ultimately, no one will care if it's AI or not if it's good.


People don't listen to music because it sounds good (most music sounds downright awful); they listen to it for the human stories and connection. 99% of the music I listen

Nobody listens to techno - Eminem

AI needs to make music that sells, the same way Coppola and Brando sell The Godfather. Until it can do that, literally nobody will care.


You can already do this, but the platformised generative AI is sloppy by comparison and not that interesting.

https://github.com/acids-ircam/RAVE


There is a difference between Richard D. James hand-training an LLM on foley sounds he recorded himself to put in his latest IDM track, and the script kiddie spamming out 50 AI-generated mixes per day to get that sweet ad revenue on Youtube.

No one is complaining about the first case, because they are outnumbered by the second 100,000 to 1. RDJ isn't gonna use suno.ai no matter how pro-LLM he is.

Note: this is for sake of argument, I am not aware of RDJ using LLMs in any shape or form.


Electronic music history is basically a graveyard of "this isn't real music" takes that aged badly.


So you just want to be lazy and outsource to the parrot machine the very essence of what it means to be creative. I am utterly baffled by this recurring comparison between past electronic tools, which had a genuinely harsh learning curve to master, and a software contraption that overtakes your creative agency. I see it everywhere, like comparing Midjourney to the shift to digital photography. What are y’all blokes on? How is it possible that even fine minds just lazily accept such a flawed parallel between two completely different technological paradigms?


This is not a super well thought out position, but I've been leaning towards really disliking AI art in general (without having an opinion on any strong policy action yet).

First, art is, I think, one of the most enjoyable activities we have. One piece of evidence: a lot of people forgo higher salaries to choose an art job (although being a job carries additional responsibilities and some inconveniences compared to doing it as a hobby). It's a shame to see it diminished when I believe we should be diverting our efforts to automating other things.

Second, most AI art I've seen has been quite substandard compared to human art. We still don't know very well what human emotions are, the origin of sentience and qualia, etc., but I think humans still lead here in having, and probably understanding, emotions. While for other tasks most implementation detail is irrelevant (e.g. in code, that it works tends to matter most, vs. minute choices of style), in art every detail is particularly relevant. Knowing this, it usually bothers me when I see this art that it doesn't carry the same knowledge of context and nuance a human's would.

Third, there's also the effect of making me question whether each piece of artwork was made by a human or an AI, which didn't exist before. There is a bit of a magical feeling in knowing a real person made every piece of artwork prior to 2018 or so (I think algorithmic art[1] is fine in this regard, because it tends to be more clearly algorithmic, and the artist's involvement in the coding is significant), and that is now gone or at risk. Even imagining, say, their work day, or what they had for lunch, or what they talked about with coworkers or friends, is pleasant to me (at the risk of romanticizing it too much).

I suppose if AI art actually understood human nature, and especially the specific context of each art piece, better than us, some of my arguments might be diminished. But the negatives so far seem to outweigh the positives, and I would like to, e.g., give preference to content that doesn't use AI art.

(It is, admittedly, also the case that we lost a similar amount of craftsmanship when the industrial revolution happened, and in return we were able to support a larger population and greater material conditions for most people. Every object now isn't carefully handcrafted. I think it's different because, well, material conditions are now relatively abundant, and because there's no insatiable, significant, and irreplaceable demand for art as there was for common industrialized objects (take shoes, for example), at least not to the same extent or vital significance. That is, the ability to have a shoe at all far outweighs its being carefully handcrafted, I believe; while experiencing a poorly made AI movie or artwork might actually be worse than none at all (or simply an older, human-made movie), and it also gets more cumbersome to evaluate for ourselves whether AI was employed or not. Also, while shoes last only a limited time and need to be constantly produced, good artwork can last indefinitely (using digital storage), and even accounting for cultural change and relevance, can still last a really long time, motivating greater investment in it.)

I'm quite sure that if we're still around in 500 or so years, we'll still be enjoying, say, Starry Night by Vincent van Gogh (probably as a digital reproduction). Current AI art will probably be largely discarded, so it seems a largely unwise investment. This actually applies to code as well. It seems plausible Linux could still be used 500 years from now (see how we still value finding Unix v4 50 years later), or at least be of some interest. Those durable intellectual goods don't seem like wise places to invest anything but the best of us :) (at least in the cases where they're not disposable)

The arguments above also don't seem to apply, say, in concept stages, or for bland corporate diagrams that will be disposed of in a day and of which a huge quantity is needed. I think the main criteria I would evaluate are: (1) Was it enjoyable to produce (for the artist(s))? (2) Will it have a significant (artistic) impact on whoever experiences it? (3) Will it last a long time?

[1] W.r.t. algorithmic art (and digital art in general; take bytebeat[2] for example), which is a field I really love, I am not any kind of absolutist about it. I know there tend to be far more degrees of freedom for human expression in a manual piece than in an algorithmic piece, so I see it as a complement, not a substitute, for more conventional art. I'd never give up hearing music played by human musicians for bytebeat; bytebeat is just a lovely, experimental other dimension of expression. Writing a prompt offers too few degrees of freedom, and too uniform a context, one less rich than what humans can provide.

[2] https://dollchan.net/bytebeat/


Along the same lines, the anti-AI attitude among musicians today reminds me quite a bit of the anti-synthesizer attitude of the 60's and 70's, down to the same exact talking points: fears of “real” musicians being replaced by nerds pushing buttons on machines that can imitate those musicians.

I think the fears were understandable then, and are understandable now. I also think that, just as the fears around synthesizers didn't come to fruition, neither will the fears around AI come to fruition. Synthesizers didn't, and generative AI won't, replace musicians; rather, musicians did and will add these new technologies into their toolsets and use them to push music beyond what was previously understood to be possible. Synthesizers didn't catch on by just imitating other instruments, but by being understood and exploited as instruments in their own right; so will generative AI catch on not by just imitating other instruments, but by being understood and exploited as an instrument in its own right.

The core problem right now is that AI (even beyond just music) ain't being marketed as a means of augmenting one's creativity and skills, but as a means of replacing them. That'll always be misguided, both in the practical sense of producing worse outputs and in the philosophical sense of atrophying that same creativity and those same skills. AI doesn't have to produce slop, but it will inevitably produce slop when it's packaged, sold, and marketed in a way that actively encourages slop — much like taking one of those cheap electric keyboards with built-in beats and songs and advertising it as able to replace a whole band. Yeah, it's cool that keyboards can play songs on their own and AI can generate songs on its own, but that output will always be subpar compared to what someone with even the slightest bit of creativity and skill can pull out of those exact same tools.


> generative AI catch on not by just imitating other instruments,

but generative AI didn't catch on by "imitating instruments." It caught on by imitating artists, whose work streaming platforms and record labels then repackage and use to outsell you. False analogy.


This argument won't get you anywhere because "imitating artists" and "outselling artists" aren't actually the same thing.

i.e. complaining about training on copyrighted material and getting it banned is not sufficient to prevent creating a model that can create music that outsells you. Because training isn't about copying the training material, it's just a way to find the Platonic latent space of music, and you can get there other ways.

https://en.wikipedia.org/wiki/Law_of_large_numbers

https://phillipi.github.io/prh/


you're dodging the point by retreating into silly abstractions. I'm talking about the cultural and economic displacement of artists, not a pedantic debate about latent spaces. "Training isn't copying" is a cynical AI-shill statement that doesn't address the fact that systems trained on artists are then packaged and monetized to outsell them. Why is this part so complicated for you? Or are you just being obnoxious...

dropping wiki links and math jargon avoids the ethical / market reality here.


> "Training isn’t copying" is the cynical AI shill statement that doesn’t address the fact that systems trained on artists are then packaged and monetized to outsell them.

No, that's the whole problem. The systems are capable of outselling the artist whether or not they're trained on the artist. So you can't prevent it by complaining about the training data.


> but generative AI didn’t catch on by "imitating instruments."

My bad. As the first part of my comment suggested, what I meant to say here was "imitating instruments and the performers thereof".

> which streaming platforms and record labels then repackage and outsell you with

But that's the thing: it doesn't seem very likely that they'd ever succeed at actually outselling very many actual musicians, for the same reason those cheap keyboards that can play pop songs at the press of a button don't replace any actual musicians: not just because the quality sucks compared to even amateur performers, but because even if the quality didn't suck, the end result is about as interesting to the audience as a karaoke backing track or muzak playing in an elevator. If anyone can press a button to make some statistical average of popular music, then that's gonna get real boring real quick, while the actual musicians will be making actual, novel music. It's just like what happened to the “vaporwave” and “nightcore” genres: they got flooded with “new songs” that are just slowed-down / sped-up (respectively) versions of existing songs, and nobody bothered seeking out those songs unless they were really into vaporwave/nightcore for their own sake or were trying to put together one of the umpteen bajillion “anime girl studying while listening to lo-fi beats” playlists out there.

That is:

> false analogy.

Then here's another “false” analogy for you: just like with synthesizers, just like with vaporwave/nightcore, just like with all sorts of other musical phenomena where all of a sudden people with no skill could very easily and cheaply make musical slop, this new AI-driven wave of slop will, too, consume itself until it's yet another layer of background noise against which the actual musicians distinguish themselves and push the boundaries of music. It's a wildfire burning away yet another underbrush of mediocrity and creative stagnation, and while it's absolutely terrifying and dangerous in the present, it paves the way for a healthier regrowth in the aftermath.


> I'm a musician, but am also pretty amused by this anti ai wave.

Let me guess: you're an amateur musician. Not that there's anything wrong with that, but it makes it much easier to be amused about this topic.

> There was recently a post referencing aphex twin and old school idm and electronic music stuff and i can't help bein reminded how every new tech kit got always demonized until some group of artists came along and made it there own.

What are you talking about? Which "tech kit" got demonized by whom? Of course, there were always controversies around techniques like sampling or whatever, or conservatives in the UK demonizing rave culture, but otherwise, I have no idea what you're referring to.


He's talking about the demonization of synthesizers, sampling, and digital audio workstations when each was released.


There was no "demonization" of these things even remotely comparable to what we are witnessing now w.r.t. AI-generated music (and I'm old enough to remember most of them). Of course there was intense dislike from certain groups representing the "old school" toward new styles of music and new techniques. But at the same time, you also had the "new wave" which loved them and made them successful. For instance, a ton of people hated disco; at the same time, a ton of people genuinely loved it. Same with practically any kind of electronic music. This simply does not exist with GenAI music. People listen to GenAI music because they either don't care or don't know, not because they genuinely prefer it. There's absolutely nothing new about GenAI music that would make it exciting.


nobody demonized afx or idm bro. autotune, yes. but that's different. damn autotune to hell


The various lo-fi channels are also likely carrying heavily AI-generated music and it's actually kind of fine. The 'pieces' seem like undifferentiated background music of a certain mood, which is often what I'm looking for while I'm doing something else.

Previously, search was such a big problem. For instance, I'm not big on hip-hop and so on but I like songs like Worst Comes To Worst by Dilated Peoples. I've searched in all sorts of ways for other songs like that and come up with a handful of examples. Likewise, I want the vibe of Thick As A Brick by Jethro Tull during various parts. It's hard to find this kind of stuff.

But Suno.ai can generate that boom-bap vibe pretty easily and it's not the kind of thing where I'm going to put the same song on all the time like I do with the Dilated Peoples one, but it's good enough to listen to while I'm working.


>someday someone will come along and make a genuine artistic viable piece of work using ai

Always has been :)

https://www.youtube.com/watch?v=SpUj9zpOiP0

(And honorary mention)

https://www.youtube.com/watch?v=fYKAOPj_uts


Lobotomywave


I find Python's async to be lacking in fine-grained control. It may be fine for 95% of simple use cases, but it lacks advanced features such as sequential constraints, task-queue memory management, task preemption, etc. The async keyword also tends to bubble up through codebases in awful ways, making it almost impossible to write reasonably decoupled code.
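A minimal sketch of the "bubbling up" problem (`fetch`, `business_logic`, and `pipeline` are hypothetical names, just for illustration):

```python
import asyncio

async def fetch(x):
    # stands in for real async I/O
    await asyncio.sleep(0)
    return x * 2

def business_logic(x):
    # a plain (sync) function can't just call fetch(); it has to
    # start an event loop, which fails if one is already running
    return asyncio.run(fetch(x))

async def pipeline(xs):
    # ...so in practice every caller becomes async too, and the
    # keyword propagates up through the whole call stack
    return [await fetch(x) for x in xs]

print(asyncio.run(pipeline([1, 2, 3])))  # [2, 4, 6]
```

Once any layer near the bottom is async, every abstraction above it either turns async itself or has to juggle event loops, which is exactly what couples otherwise independent modules together.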


I've been out of the stats loop for a while, but is there a viable approach for estimating ex ante the number of clusters when creating a GMM? I can think of constructing ex post metrics, i.e. using a grid and goodness-of-fit measurements, but these feel more like brute-forcing it.
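For reference, the ex post grid approach is easy enough to sketch with scikit-learn (toy synthetic data; BIC as the goodness-of-fit criterion, lower is better):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic data with 3 well-separated clusters
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
               for c in ([0, 0], [4, 0], [2, 3])])

# grid over candidate K, scored by BIC (penalized likelihood)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print(best_k)  # 3 on this toy data
```

This is indeed brute force; the criterion at least penalizes extra components, so it doesn't just reward ever-larger K the way raw likelihood would.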


Is the question fundamentally: what's the relative likelihood of each number of clusters?

If so then estimating the marginal likelihood of each one and comparing them seems pretty reasonable?

(I mean in the sense of Jaynes chapter 20.)


Unsupervised learning is hard, and the pick K problem is probably the hardest part.

For PCA or factor analysis, there's lots of ways but without some way of determining ground truth it's difficult to know if you've done a good job.


There are Bayesian nonparametric methods that do this by putting a Dirichlet process prior on the parameters of the mixture components. Both the prior specification and the computation (MCMC) are tricky, though.
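scikit-learn ships a variational (truncated) approximation of this in `BayesianGaussianMixture`; a rough sketch on toy data, where components beyond what the data supports get their weights shrunk toward zero:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# toy data: 3 well-separated clusters
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
               for c in ([0, 0], [4, 0], [2, 3])])

# cap K generously; the (truncated) Dirichlet process prior lets
# the model leave unneeded components with near-zero weight
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
    random_state=0,
).fit(X)

# count components carrying non-trivial weight
effective_k = int(np.sum(bgm.weights_ > 0.01))
print(effective_k)  # typically close to 3 here
```

It sidesteps MCMC at the cost of a variational approximation and a truncation level (`n_components`) you still have to pick, though the result is much less sensitive to it than a plain GMM's K.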


Is there any didactic implementation of the Disruptor / multicast ring available somewhere? I've been curious to work through a practical example to understand the algorithm better.


LMAX has an open-source version of the Disruptor on GitHub: https://github.com/LMAX-Exchange/disruptor


Here's a high level description in TLA+: https://github.com/nicholassm/disruptor-rs/blob/main/verific...

(Disclaimer: I wrote it.)

There's also a spec for a Multi Producer Multi Consumer (MPMC) Disruptor.
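For a purely didactic version, the core claim/publish/wait cycle of a single-producer, single-consumer ring can be sketched in a few lines of Python (a toy: a real Disruptor relies on memory barriers, cache-line padding, and pluggable wait strategies, none of which are modeled here):

```python
import threading

class ToyRing:
    """Toy single-producer, single-consumer sequenced ring buffer."""

    def __init__(self, size):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.mask = size - 1
        self.slots = [None] * size
        self.cursor = -1    # last sequence published by the producer
        self.consumed = -1  # last sequence read by the consumer

    def publish(self, item):
        seq = self.cursor + 1
        # wait until the slot we're about to overwrite has been consumed
        while seq - self.size > self.consumed:
            pass  # busy-spin; real implementations use wait strategies
        self.slots[seq & self.mask] = item
        self.cursor = seq  # "publishing" is just advancing the cursor

    def consume(self):
        seq = self.consumed + 1
        while seq > self.cursor:
            pass  # spin until the producer publishes this sequence
        item = self.slots[seq & self.mask]
        self.consumed = seq
        return item

ring = ToyRing(8)
producer = threading.Thread(
    target=lambda: [ring.publish(i) for i in range(200)])
producer.start()
received = [ring.consume() for _ in range(200)]
producer.join()
print(received == list(range(200)))  # True
```

The key idea to take away is that there are no locks and no queue nodes: coordination happens entirely through two monotonically increasing sequence numbers, and the buffer index is just `sequence & mask`.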


Is there also a decent c++ implementation of the disruptor out there?


Here’s one I’ve actually used/played with (though never measured performance of): https://github.com/lewissbaker/disruptorplus

And here’s one I saw linked on HN recently: https://github.com/0burak/imperial_hft/tree/main/distuptor


- Location: Switzerland

- Remote: Yes

- Willing to relocate: Yes

- Technologies: C++, Python, Torch

- CV: https://www.linkedin.com/in/estlan-7217a8aa/

- Email: estebanlanter86 [at] gmail [dot] com

About me: Professional machine learning / quant development background (Python, Torch). Experience with low-latency software engineering in C++. Expertise in time-series-related engineering topics in both ML and software engineering.

Happy to relocate, no need to stay in my field (curious to see new stuff too!)


This is a known problem in generative workflows for AI video, but it's solvable. Midjourney recently introduced a feature that does this for stills, and ControlNets available in the ComfyUI ecosystem can also partially solve it, albeit with some hassle. I'm pretty sure that if not OpenAI themselves, others will follow with their foundation models.


Coming from finance, I always wonder how and whether these large pre-trained models are usable on any financial time series. I see the appeal of pre-trained models in areas where there is clearly a stationary pattern, even if it's very hidden (e.g. industrial or biological metrics). But given the inherently low signal-to-noise ratio and how extremely non-stationary and chaotic financial data processes tend to be, I struggle to see the use of pre-trained foundation models.


Stock prices change continuously based on the current price and future events that have not happened. I don't think they are at all predictable.



I played around with the TimeGPT beta, predicting the S&P 500 index's next-day performance (not as a multivariate time series, as I couldn't figure out how to set that up), and trying to use the confidence intervals it generated to buy options was useless at best.

I can see Chronos working a bit better, as it tries to convert trends and pieces of time series into tokens, like GPT does for phrases.

E.g. a stock goes down terribly, then dead-cat bounces. This is common.

Stock goes up, hits resistance due to existing sell orders, comes down

Stock is on stable upward trend, continues upward trend

If I can verbalize these usual patterns, it's likely Chronos can also pick up on them.

Once again, quality of data trumps all for LLMs, so performance might vary. If you read the paper, they point out a few situations where the LLM is unable to learn a trend, e.g. when the prompting time series isn't long enough.


Imitation learning of discretionary traders who rely on a mixture of rules and intuition.


I'm not a 3D artist, but why are we still, for lack of a better word, "stuck" with having / wanting to use simple meshes? I appreciate the simplicity, but isn't this an unnecessary limitation on mesh generation? It feels like an approach that imitates the constraints of having both limited hardware and limited artist resources. Shouldn't AI models help us break these boundaries?


We're not stuck on meshes. Check out neural radiance fields as an alternative.


My understanding is that it's quite hard to make convex objects with radiance fields, right? For example the furniture in OP would be quite problematic.

We can create radiance fields with photogrammetry, but IMO we need much better algorithms for transforming these into high quality triangle meshes that are usable in lower triangle budget media like games.


I wonder whether "lower triangle budget media" is still a valid problem. Modern game engines coupled with modern hardware can already render an insane number of triangles. It feels like the problem is rather that engines don't handle LOD correctly (see Cities: Skylines 2), although stuff like UE5's Nanite seems to have taken the right path here.

I suppose, though, there is a case for AI models doing what Nanite does entirely algorithmically, and research like this paper may come in handy there.


I was referring to being stuck with having to create simple / low-poly meshes, as opposed to using the complex meshes photogrammetry would provide. The paper specifically addresses clean low-poly meshes as opposed to what they call complex isosurfaces created by photogrammetry and other methods.


Lots of polys is bad for performance. For a flat object like a table you want that to be low poly. Parallax can also help to give a 3D look without increasing poly count.

