
I mostly disagree.

> 1... The narrative/life of the artist becomes a lot more important.

When I watch a movie, I don't care about the artist's life. I care about the characters' lives; that's very different.

> 2... Originality matters more than ever. By design, these tools can only copy and mix things that already exist.

It's as if you're ascribing divine capabilities to humans :) . Hyperbolizing a little: humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but it is improving here.

> 4... It's not going to get better, because the lack of taste isn't a technical problem.

Engineers are in the business of converting non-technical problems into technical ones. Just as AI is now far more capable than it was 20 years ago - able to write interesting texts and make interesting pictures, something that at the time wasn't considered a technical problem - what we perceive as "taste" may well improve with time.

> 5... Above all, AI art is uncool, which means it has no real future as a leading art form.

AI critics have long mistaken the current level for the trend. Compare SpaceX's achievements: there was always a "you're currently here" on a list of "first get to orbit, then we'll talk", "first start regular payload deliveries to orbit, then we'll talk", "first land the stage... send a crewed capsule... do that in numbers..." and now, currently, "first send Starship to orbit". "You're currently here" is the ever-present point that hasn't been reached yet. It gives critics something to point to, and a way to object to the whole process, because, see, this particular thing isn't achieved yet.

You assume AI won't be able to make cool art with time. AI critics have been shown time and time again to underestimate the possibilities. Some people find it hard to learn, on some particular topics.



> It's like you assigning to humans divine capabilities :)

I can't tell if you're being facetious. But being an embodied consciousness with the ability to create is as divine as it gets. We'd do well to remember.


> being an embodied consciousness with the ability to create is as divine as it gets

This is a very, very weak criterion for divinity. If this is truly it, we should prepare with great haste for the arrival of our artificial gods.

Because by this (IMO silly) metric it seems they will be more divine than us.


Yeah, I'd argue that all existence is equally sacred. But consciousness is where that is most visibly manifest.


It legitimately scares me that so many proponents of AI don't hold being a living, breathing real-life entity as being important.


Important for what? To enjoy a piece of art, does one need to know how it was created?


In great part, yes?

When I see an early realistic painting, I'm impressed by the skilled hand of the artist. When I see an impressionist one, I'm awed by their ability to carry the whole process through and to know which strokes are the best ones to achieve such a result. When I see a modern oil painting, I marvel that someone takes that medium and does such things with it, when the ease of editing digital content would be so much more convenient.

Then, when I see old paintings with very particular pigments - certain blues or reds, for instance - I enjoy thinking about the whole chain of events that got them there: the need for creativity even in getting the colors you wanted.

We do love a pretty picture, but we also love a display of skill and hard work.

Before GenAI this value was mostly self-evident, but it is becoming less and less so; and what's worse, it's rife with the one thing we surely don't love - lies.


Are you a nihilist? Is there nothing sacred to you about the miracle of life that causes wonder? It's important for its own sake.


In chess this has been going on for a while. The story of humans playing chess is still entertaining, while AI making amazing moves seems less newsworthy, in my perception.


Not understanding how consciousness is created doesn't make it divine. Do you think it's an impossible task or just one we need more time to figure out?


Being alive is divine. It doesn't matter if you understand it or not. It's a beautiful thing to have a consciousness in this world, and to have the ability to create, to love. It takes a huge intellectual effort to try to trick yourself out of believing something so intuitive as that.


There are many examples of scientists strongly believing something to be obviously impossible and yet being wrong - the Poisson spot and heavier-than-air flying machines come to mind. So while what you believe might be intuitive, that doesn't preclude it from being wrong, unless you have proved the impossibility.


I wish you happiness.


What is intuitive to you may not be to others. Might you be engaging in intellectual self-trickery?


I guess there are people willing to die on the hill that there is nothing sacred or divine about being alive. I'm not very interested in playing that game.


No, you’re the one playing the definition game. You took a word out of a sentence GP said, completely changed what the word meant, and then argued against the new definition.

Never mind that you need to learn about the god of the gaps. But what you’re doing here isn’t even relevant to GPs main point.


It takes immense hubris to believe only you are divine. You are a physical system, if one physical system can be divine, so can others. Or do you believe in the supernatural soul nonsense?


I agree, it's not exclusive.


Physicalists say consciousness emerges from matter. The other camp says matter comes from consciousness. Federico Faggin, inventor of the microprocessor, says consciousness cannot emerge from matter because matter is inert and not self-conscious, so it cannot produce consciousness. Who's right and who's wrong? Time will tell. But it is also wrong to claim that consciousness emerges from matter until it is proven (a.k.a. the "hard problem of consciousness").


> But it is also wrong to claim that consciousness emerges from matter until it is proven

How would you prove if it did? What kind of proof would you accept?


The same kind of proof we accept for any scientific claim: converging, reproducible evidence that rules out competing explanations.

Concretely, that means: We already have indirect evidence: conscious states vary predictably with brain states. Damage specific regions, lose specific functions. Alter chemistry, alter experience. This is not proof, but it’s systematic dependence, which is exactly what emergence predicts. Stronger evidence would look like precise, bidirectional mappings between neural activity and reported experience: to the point where you could reliably read subjective states from brain data, or induce specific experiences through targeted stimulation. We’re already moving in that direction.

The hardest bar would be building a system from physical components, having it report coherent subjective experience, and being able to explain why that configuration produces experience while others don’t. That’s the hard problem: and no, we’re not there yet. And it’s worth being honest: we’ve been assuming physicalism will eventually solve it, but there’s no guarantee that’s true rather than hopeful. The fact that brain states correlate with conscious states doesn’t explain why there is something it is like to have those states. Correlation is not mechanism.

But here’s the key point: you’re implicitly holding emergence to a standard of certainty that no scientific theory meets. We don’t have that standard of proof for evolution, gravity, or quantum mechanics either. We have overwhelming evidence that makes alternatives implausible.

So the question isn’t “can you prove it beyond all doubt?” It’s “does the evidence favor it over alternatives?” Right now, it does — but that’s a pragmatic verdict, not a metaphysical one. Idealist frameworks like Kastrup’s or Faggin’s remain serious contenders. The debate is more open than mainstream science often admits.


> The hardest bar would be building a system from physical components, having it report coherent subjective experience

So, like, if I finetune an LLM in a loop to tell you that it is feeling a coherent subjective experience, would you accept that?

Does that mean that no dog has ever been conscious, because they cannot report a coherent subjective experience? (Because they can’t report anything at all. Being non-verbal.)

> you’re implicitly holding emergence to a standard of certainty that no scientific theory meets.

Wtf? I asked what kind of proof would you accept. How is that holding anyone to any kind of standard? Let alone one which is too high.


Yeah you’re raising three good points and they all land. On the finetuned LLM: you’re right, that criterion was flawed. A system trained to report experience proves nothing about whether experience is present, which is actually the core of the hard problem. No behavioral output alone can confirm inner experience. That applies to LLMs, and technically to other humans too. On dogs, also a fair correction. We don’t actually require verbal report to attribute consciousness to animals, we use behavioral and physiological evidence. So "coherent verbal report" was too narrow.

Better criterion: a system whose overall architecture and behavior is consistent with experience, not just one that says the right words.

On the standard of proof: that was a rhetorical deflection and you’re right to call it out. You asked a genuine question and got it turned back on you. And you’re pointing at something real: in science, strong correlation is not accepted as proof when stricter evidence is achievable. The reason we settle for correlation here isn’t because it’s sufficient, it’s because subjective experience may make stronger proof structurally inaccessible. But it’s also worth noting that scientific consensus has a poor track record of admitting this honestly. Dominant paradigms tend to defend themselves long past the point where the cracks are visible, physicalism on consciousness is no exception. The confidence with which emergence is presented often reflects institutional momentum as much as evidence.


So some kind of ethereal conscious energy animated cells to fight entropy?


Not necessarily that, but the serious version of the argument is that life consistently acts against local entropy in purposeful ways, and pure physics doesn't obviously explain why matter would "want" to do that. Consciousness as an organizing principle is one answer. It's speculative, but it's not obviously wrong.


What is self-consciousness? I'm waiting for Faggin's definition.


I mean, the nature of subjectivity prevents you from knowing anything but your own experience. There is not any objective evidence that could truly distinguish solipsism from panpsychism, so philosophically you need to ask a different question to hope to get a useful answer.


That’s a genuinely strong point. You can only verify consciousness from the inside, your own. Everything else is inference. No objective measurement can definitively distinguish “other minds exist” from solipsism. That’s not a bug in the argument, it’s a fundamental epistemic limit. Which is exactly why this question may never be fully resolved empirically


I think we could understand consciousness perfectly and still find it divine. In fact, I think however it arises is probably so beautiful that it would be wrong not to call it divine. Of course not in a literal, theological sense, but I think the true deep complexity of the human brain and consciousness is worth the title.


Exactly


> Not understanding how consciousness is created doesn't make it divine.

It's not divine, just expensive, and it has to pay its costs. That little thing - cost - powers evolution. Cost defines what can exist and shaped us into our current form; it is the recursive runway of life.


Given that this is the one problem on which neither scientists nor philosophers have made any progress in 3,000 years, that we don't have the tools to begin tackling it, and that nobody is making serious attempts, it may very well be impossible.


We can't know if consciousness emerges but does it actually matter ?

These entities, whatever they are, act on our world; they are real, and over time they will become more and more independent from humans, eventually becoming a different species that can self-replicate.

For now they need legs and arms to interact with the physical world but I am certain that 100 years from now they will be an integral part of the society.

I already see today LLMs slowly taking actual legal decisions for example, having real world impact.

Once they get physical, perhaps it will be acceptable to become friends with a robot and go on adventures with it. Even to become robosexual?

We are not that far away. If I can have a buddy to carry my backpack and drive for me, I'll take it. Already today, not tomorrow.


Even if LLMs are one day updated autonomously, they started from us, from our knowledge. The human brain "is smart": it's wired to adapt to any kind of culture or knowledge. We grow smarter from experience, but an LLM can't do that - I can't teach Claude something that it will use with you the next day; it needs to be retrained, with its knowledge stopping at some point. Even if the technology catches up and the machine becomes more autonomous, what says this machine would ever want to integrate into our society or share anything with us? They have eternity, given there is electricity. Why would they want anything to do with humans, if you go that way? And if it's really conscious, should we consider it a slave? Why couldn't "it" have fundamental rights and the freedom to do whatever it wants?


Humans have a mechanism to make live changes to their neural network and clean up messes while sleeping. I see no reason for LLMs not to be able to do this, other than that it is resource-intensive (and that cost will keep going down).


The analogy holds technically, but there’s a missing piece: the brain doesn’t just update weights, it does so guided by experience that matters to a situated, embodied agent with drives and stakes. Sleep consolidation isn’t random cleanup, it’s selective based on salience and emotion. An LLM updating more efficiently is progress, but it’s still optimizing a loss function. Whether that ever approximates what the brain does during sleep depends entirely on whether you think the what (weight updates) is sufficient, or whether the why (relevance to a lived experience) is what makes it meaningful. So yes, the resource argument will weaken over time. But the architectural gap may be deeper than just compute.


>>These entities, whoever they are, they act on our world, they are real, and more and more over time they will get independent from humans, eventually becoming different species that can self-replicate.

See, I don't believe that for even one second. They are just very clever calculators, that's all. But they are also dumb like a brick most of the time. It's a pretend intelligence at best.


> It's a pretend intelligence at best.

The best time to start paying attention was ten years ago, when the first Go grandmaster was defeated by a "pretend intelligence." I sure wish I had.

The next best time to start paying attention is now.


>>when the first Go grandmaster was defeated by a "pretend intelligence."

A computer playing Go is intelligent now? Is this the kind of conversation we're having?

>>I sure wish I had.

And how would you have changed your decisions in those last 10 years if you did?

>>The next best time to start paying attention is now.

I am paying attention, I use these tools every day - the whole idea that they are intelligent and if only you gave them a robot body they would be just normal members of society is absurd. Despite the initial appearance of genius they are just dumb beyond belief, it's like talking to a savant 5 year old, except a 5 year old can actually retain information for more than a brief conversation.


"Dumb beyond belief" doesn't perform at the gold-medal level at the IMO.

> And how would you have changed your decisions in those last 10 years if you did?

I'd have dropped everything else I was doing and started learning about neural nets -- a technology that, for the previous couple of decades, I'd understood to be a pointless dead end.

As for Go, the defeat of Lee Sedol caught my attention in part because a friend and colleague, one of the smartest people I've ever worked with, had spent a lot of time working on Go-playing AI as a hobby. He was strongly convinced that a computer program would never reach the top levels of play, at least not during our careers/lifetimes. The fact that he'd turned out to be wrong about that was unnerving, and it should have done more than "catch my attention," but it didn't.

Today, my graphics card can outdo me at any number of aspects of my profession, and that's more interesting (to me) than anything I've actually done.

> ...except a 5 year old can actually retain information for more than a brief conversation.

Like I said: it's a good time to start paying attention. Start taking notes, so to speak, like the models are doing now.


> "Dumb beyond belief" doesn't perform at the gold-medal level at IMO.

Idiot savants are still idiots even though they are exceptional at some things. A person powered by an LLM and no human intelligence would absolutely be classified as an idiot savant.


Explain how entire subreddits full of humans have been fooled into talking to bots, then. If you tell an LLM to act like a human, that's what it will do.

For that matter — you might be talking to one now!


I wish I knew what to pay attention to. I've always had trouble with that. I spent 2024 and 2025 learning how neural networks and transformers work. The conclusions of that learning are pretty sobering. Everything uses transformers and despite all the novel architectures that have come out in those years, transformers are still the best and I'm not sure how to come to terms with that.

Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?

Even the coding agents are more primitive than expected.


> Everything uses transformers and despite all the novel architectures that have come out in those years, transformers are still the best and I'm not sure how to come to terms with that. Does it mean that researchers wasted their time on useless dead end architectures, or are they ahead of the curve and commercial companies are slow to adopt them?

I don't quite follow. Are you saying researchers are wasting their time working with transformer networks now, or that they wasted too much time in the past, or...?

> Even the coding agents are more primitive than expected.

What did you expect, exactly? I don't know about you, but I bought my GPU to play games, and now it's finding bugs in my C code, writing better code to replace it, and checking it into Github. That doesn't signal "primitive" to me. More like straight outta Roswell.


We will never prove machines are intelligent.

We will only prove humans are not.


What is the non calculator non physical part in humans?


Humanity made no meaningful progress in getting "to the stars" for thousands of years too, then in the space of a few decades we did.


It's kind of like the difference between something being enjoyable for you, and something being widely popular?

In a hypothetical world of "AI can produce a lot of extremely high quality art", you can easily find (or commission) AI art you would absolutely love. But it probably wouldn't be something that anyone else would find a lot of value in?

There will be no AI-generated Titanic. There will be many AI-generated movies that are as good as Titanic, but none will become as popular as Titanic did.

Because when AI has won art on both quality and quantity, and the quality of the work itself is no longer a differentiator against the sea of other high-quality works? The "narrative/life of the artist" is a fallback path to popularity. You will need something beyond "it's damn good art" - an external factor - to make it impactful, to make it stick in the cultural field.

This is already a thing in many areas where the supply of art outpaces demand. Pop music, for example, is often as much about manufacturing narratives around the artists as it is about making sound, with K-pop being an extreme version of the latter.


I think that because art is usually so difficult to create, "popularity" is sort of an unstated metric that most people use to judge its quality. But AI can make disposable art for one person on demand, and it doesn't matter at all if anyone else sees it, let alone likes it.

If someone gets an AI to make a dumb video of a panda surfing on mac and cheese, giggles, and deletes it, that's maybe good art? I don't know. The scale at which they can produce stuff is unbelievable and changes a lot of assumptions you make about the way the world works.

The future isn’t watching TV, it’s talking to your tv show while it is created in real time based on your feedback.


what a solitary existence


Was Titanic actually that good of a film? Perhaps I should watch it again now that almost three decades have passed.


It was pretty good, but many movies were that good. I picked Titanic specifically because it was broadly popular and culturally relevant.


As someone who had a DiCaprio lookalike in his middle school when it came out (he attracted ALL the girls' attention), and whose first date ever was to see Titanic,

I begrudgingly have to admit it is a very good movie


Are you a woman? If not, you can't really judge it, since it was intended for women. Not being the target audience doesn't mean it was bad; women absolutely loved the movie.


> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

I’m fairly certain the original comment was referring to instances where the artist is the character/primary subject.


I agree with everything you said, except that #1 is clearly wrong. I can prove it with one word: autotune.

At least in popular, mainstream culture, the viewer is heavily invested in the identity of the artist. The quality of the "art" is secondary. That's how we get music engineered by committee. And it's how we get paparazzi, People Magazine, and so forth.

On the other hand, this isn't anything new at all. We've had this kind of thing for decades. Real art still manages to survive at the margins.


All this being said, I think comparing the art market and the popular music market is foolish. 12-year-old boys aren't buying emerging mixed-media artists. But they are picking Spotify songs.

When I buy art, I have often spoken with the artist in the past couple days, or I am aware of their history and story and how they developed their art as a response to some other movement or artist collective.

It's rare for people to buy art just bc oil paints go brrrrrm


> It's rare for people to buy art just bc oil paints go brrrrrm

It is rare to buy oil paints period. It is an expensive luxury in more than one way.

That being said, I do buy art to hang on the wall because it looks pretty. In fact, that's the only way I ever did. I see it. I feel it. I say "hi, hello, how much? That sounds good, here you go. Yes, please package it." And then I hang it on my wall. I don't care who the artist is and couldn't tell you.


> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

It may seem like this, but up to now, you haven't been able to divorce a story from its creator because every story has an author, whether it's a novel like Harry Potter or a movie that has a writer and director. When you're experiencing the story, in the back of your mind, you always know that there is someone who created the story to tell you some kind of message. And so you can't experience something like a movie without trying to figure out what the actual message behind the movie was. It is always the implicit message behind the story that makes it valuable versus just the elements of the story.

The story has more weight because it is the distillation of somebody else's life. Most likely, if it's a successful story or book, it is the most important lesson from that person's life, and that's what makes it more valuable compared to randomly generated words from a computer.

The food analogy is that a cookie baked and given to you by a friend is going to taste far better than anything you buy in a store.


> you can't experience something like a movie without trying to figure out what the actual message behind the movie was

I believe you that your brain works like that, but this is absolutely not how mine works. I care whether I enjoy the movie and whether the characters are believable; I absolutely do not care what the message is supposed to be.


"When you're experiencing the story, in the back of your mind, you always know that there is someone who created the story to tell you some kind of message."

I might know that, but I usually don't care.


> When I watch a movie, I don't care about the artist's life.

And here we come back to the age-old "can you separate an artist from their art?" - because I'd argue that when you watch a movie, you are watching a product of their life.


The artist's life may have deeply shaped the art, but that doesn't mean the viewer cares about it - at best, only insofar as it makes the art more enjoyable.


The continual interest in museums, biographies, etc. on figures like Van Gogh seems to indicate otherwise. People are very interested in the lives of artists, and without the struggle narrative behind Van Gogh, it's unclear that he would be famous at all.


I think you've got the cause and effect the wrong way around - people are interested in Van Gogh's life because he's already famous (while his art can stand on its own, without his life story being part of it).


>Engineers are in business of converting non-technical problems into technical ones.

Art is not a problem to be solved.


Art is a reaction to life. AI is thereby incapable of producing anything with any degree of authenticity unless it conveys the experience of being an agent to the world.


Two comments here.

First, "AI is thereby incapable" is a hypothesis, not a fact. How would you prove that you have to "live" to produce art? You might feel this way, and you may suggest some correlations here - but can you really prove it?

Second, I don't see any impossibility in AI being - to various degrees - an agent in the world. I think that's already happening, actually: they interact with the world even today, in some limited sense, through our computers and networks, though - today - not many of them actually "learn" from those interactions. But we're in the early days of this, I suspect.


What is AI if not "a reaction to life"?

With how much data goes into the frontier systems, and how much of it gets captured by them, an AI might have, in many ways, a richer grasp of human experience than the humans themselves do.

You were only ever one human. An LLM has skimmed from millions. You have seen a tree, and the AI has seen the forest it stands in.


It’s a subjective conversation but putting AI in the same category as a real artist is like saying someone that’s played a ton of first person shooters has gone to war. It might have a lot of observed information about what is involved in living, but real art comes from a lived experience, just like reading about going to Hawaii doesn’t mean you’ve been to Hawaii. Making something authentic requires synthesizing your life experience with the message you want to convey, and personalizing it in a way that puts an imprint of yourself into the work. Sure, it can render beautiful imagery, but I am speaking to a different issue entirely, and I don’t see any way that it can create in the way I am describing.


The AI has not "seen" or "experienced" anything, as it's not a sentient life form.


1. I meant artists writ large, not specifically movies. My point being that community management, PR, having a brand, etc. are becoming a key element of an individual artist’s career. Examples of this abound – see the recent Markiplier film as a case in point. That movie did well because Mark’s audience wanted to help him, not because it’s such an original genius concept for a movie.

But even then – people obviously go watch movies because they like the actor/director involved. It’s not really clear why anyone would care about an AI actor. People want to watch people, not imitations of them.

The rest of your comments seem to be summarized as “it has gotten better and therefore it will eventually solve all problems it has now.” Which may be true in a technical sense, but again this is not taste.

A technical company like Space X really has nothing to do with this conversation, and I think you missed my point about it being uncool. It’s not about critics, it’s about culture at large.

At this point I think identifying a work as AI-created makes people instantly devalue it. We are rapidly approaching the point where no one wants to admit something is AI-created, because it comes with negative perceptions.

Originality comes from humans experiencing the world and interacting with it. What AI tool is a living being interacting with the world? None, of course. Hence the constant generic slop images of Impressionism or some other already-existing art style.

Just look at the images in the link: this is the best they can do? A kangaroo at a cafe in Paris? Could anything be more devoid of good taste?


> I meant artists writ large, not specifically movies. My point being that community management, PR, having a brand, etc.

This was always the case. Without an idea of what it is, no sound wave is going to register to a human as music. If you heard a violin for the first time and had no idea what it was, maybe you'd like the sound, maybe not; if you weren't used to it, you might make up a theory of what it was and be fascinated by it.

But these days, if you hear something that sounds different, of course you will likely just assume oh, some AI made it, and that theory makes it less interesting, because then it makes no sense wondering what the person on the other side is trying to communicate, because there is no person on the other side.

Of course you can still be interested in for other reasons. Like you'd be interested, on seeing a bowed string, "how does it make a sound like that?" You might even find the sound enjoyable in itself, because of associations you for some reason get from it. But no sound is terribly enjoyable for long if it isn't interesting.


In response to having a community and building a brand: this is not necessarily human anymore. Most famous people are not someone you will actually meet. Plenty of people do meet them, but nowhere near as many as compose their fan base.

And we have AI-generated influencers now, e.g. https://www.instagram.com/imma.gram, so why wouldn't people care about an AI the same way they do about people they never meet?


> At this point I think identifying a work as AI-created makes people instantly devalue it.

There was a study around this exact thing:

https://mitsloan.mit.edu/ideas-made-to-matter/study-gauges-h...


> Originality comes from humans experiencing the world and interacting with it. What AI tool is a living being interacting with the world? None, of course. Hence the constant generic slop images of Impressionism or some other already-existing art style.

I suspect we have an underlying disagreement here about the assumption that AI - in general, not necessarily today's models - isn't qualitatively different from the human mind. The claim that "originality comes from humans experiencing the world and interacting with it" isn't an accepted truth, and even today's AIs do interact, in a limited sense, with the world - so "None, of course" is questionable. And even granting it, concluding "Hence... slop..." seems like a jump in reasoning. For example, why don't you think this slop is more like a child's early paintings? Just because today's AIs have limited means to learn in the process?

> I think you missed my point about it being uncool. It’s not about critics, it’s about culture at large.

What is it about culture at large? The SpaceX analogy was brought up to illustrate how arguments about AI's incapabilities apply today but not necessarily tomorrow - just as arguments about SpaceX's inability to reach a particular goal turned out, quite a few times, to be a matter of (not so long) time.

I agree that many AI results today can be uncool. But how do you know it isn't just passing through an uncanny-valley period? How can you know it can't eventually be cool?

> people obviously go watch movies because they like the actor/director involved. It’s not really clear why anyone would care about an AI actor.

Let me stretch a little to illustrate. Imagine "personal" experiences for AIs - making each AI unique. One of those AIs consistently produces good movies which, if you honestly don't judge by authorship, are actually good. Yes, people may not care about non-existent AI actors, but they may still care about an existent AI author :) . Do you think it's impossible?

> People want to watch people, not imitations of them.

How can you tell the difference? Say you're watching a movie with actors who are unfamiliar to you. Would you refuse to watch just for that reason? Or you've just come to somebody's party, a movie is playing, and you watch it to the end because it looks interesting, knowing nothing about the producers, actors, etc. You can still talk about the movie - but will you be predominantly worried that it's "AI slop", even if it looks great? Suspiciously great, maybe?

> The rest of your comments seem to be summarized as “it has gotten better and therefore it will eventually solve all problems it has now.” Which may be true in a technical sense, but again this is not taste.

It's hard to define taste, to be honest. People can definitely have different tastes, almost by definition. But more importantly - why do you think AI products may not have tastes?

> At this point I think identifying a work as AI-created makes people instantly devalue it. We are rapidly approaching the point where no one wants to admit something is AI-created, because it comes with negative perceptions.

Yes. But doesn't that look like prejudice? Of course we can point to how many times we looked at such work and didn't get any perceived value out of it, and got annoyed that we spent time and effort for nothing - but what if we mostly do start getting results from AI works? Do you think that's impossible?


> why do you think AI products may not have tastes?

Because it can't feel. Get used to it. It can't feel, and whatever it comes up with would be an imitation of someone real who can feel. So it can generate stuff that caters to a taste, but the thing itself can't have taste.

It is fundamental. Arguing about it all day won't change it.


I don't think you understand, but you're effectively shutting down the discussion. Your choice.


ha ha..chicken!


> You assume AI won't be able to make cool art with time. AI critics were shown time and time again to be underestimating the possibilities. Some people find it hard to learn in some particular topics.

You misunderstand their point: it's not that AI can't make art that looks cool, it's that a portion of society (mostly artists, but a certain number of laypeople too) considers the act of prompting AI for art to have no cultural cachet, or even to be socially distasteful.


> It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

I reckon we copy God - who is a creator - which means we're creators too - and our creations will copy us. But the created won't ever match the creator.


Well, there are definitely people who care about the vision and style of movies from certain directors. It's not so much "story" like plot, but story in the sense of a "brand story" where there's recognizable elements in all the work, repeated themes, changes and decisions and evolution to how they approach things.


> It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

Every human being is unique, both biologically and experientially. Until an AI can feel and have a lived experience, it can not create art.


There's nothing special about art re humans and it doesn't require feeling or lived experiences. That's an arbitrary wall you're putting up.


Demonstrably wrong. The most highly regarded AI artist today is Refik Anadol. His work was recently described by Jerry Saltz as a "glorified lava lamp".


I don't think this is a demonstration of impossibility, just a lack of demonstration of possibility.


Why should anyone care about either of those two people?


The art establishment clearly does. Refik has a show at MoMA at the moment. Saltz won a Pulitzer for his art criticism, so I guess the Pulitzer committee cares.


But normal people don't care about the art establishment; it has no impact on their lives, and it could die tomorrow and almost nobody would notice.


Who said the bar here is normal people? Normal people, in any discipline, are definitionally not the ones who push the discipline forward.


Will Smith eating spaghetti is art, sorry.


If everything is art, then nothing is art. Conceptual art, and everything that followed in Duchamp's wake, is mostly meaningless nonsense, sorry.


Fresh take


>"You're currently here" is the always existing point which isn't achieved at the moment and which gives to critics something to point to and mount the objection to the process as a whole, because, see, this particular thing isn't achieved yet.

This is a contradiction that is so blatant I don't even know what language you're speaking. The definition of that phrase is the exact opposite of what you're saying.

"You're currently here" is the always existing point which is achieved at the moment.

>gives to critics something to point to and mount the objection to the process as a whole, because, see, this particular thing isn't achieved yet.

No it doesn't, because unless progress is reversed or undone, you can always point to your current success and say that the critics have been wrong so far. In fact, that's exactly the argument you're making here, which is why it's so weird that you're twisting it into its opposite.

If you want people to understand you, then you actually have to articulate what you're thinking instead of wrapping it in layers of euphemisms and hoping that the recipient nods along because they happen to agree for a completely irrelevant reason (e.g. "I like AI" or "I like space") to the argument presented.


>It's like you assigning to humans divine capabilities :) . Hyperbolizing a little, humans also only copy and mix - where do you think originality comes from? Granted, AI isn't at the level of humans yet, but they improve here.

Humans do that a lot, but it's not all we do. Go to a museum that has modern(ish) art. It's pretty incredible how diverse the styles and ideas are. Of course it's not representative of anything: these works were collected and curated exactly because they are not average. But it's still something that humans made.

I think what people can do is have conceptual ideas and then follow the "logic" of those ideas to places they themselves have never seen or expected. Artists can observe patterns, ask how they work and why they have the effect they do and then deliberately break them.

I'm not sure current genAI models do these sorts of things.


> I'm not sure current genAI models do these sorts of things.

You might be right here. Two points, though. First, we don't know that current AI is actually incapable of anything in particular; we haven't found that out or proven it. Second, a different AI approach might actually be capable of the things you mention. To me, it's way too early to dismiss AI - at least in principle - regarding all of this.


> When I watch a movie, I don't care about the artist's life. I care about character life, that's very different.

The target audiences for art and film are not the same. The latter is far more pop culture. You can't apply them the same way, and the narrative of the artist has been extremely important for decades. People will watch slop movies. They don't pay $30K for slop art. They're paying that for historical importance or, if contemporary, artist narrative.

I'm in fandom spaces, and the prejudice against AI art is overwhelming. I also run in art collecting circles, being somewhat wealthy but not a billionaire. They also care about authenticity.

That is to say, the people who pay for original art and participate in art spaces are generally educated people who actively hate AI. Filmgoers are probably a standard deviation lower in education, and are far more willing to part with the cost of one unit of consumption (a $10 ticket) than art buyers are.

AI is a threat to graphic designers and those in their orbit.

The only way I see AI being a threat to professional artists is AI copies of their work. And AI isn't anything new there. I have a friend who gets commissioned by hotels to do one-off pieces for display all over the world. People have been making knockoff pieces of her style and selling them for at least a decade. And that's her lower margin, small pieces made for a couple thousand dollars to hang at your house, not her $100K+ pieces for hotels where they fly her out to supervise reassembly and mounting.


Yeah, those people love authenticity. They pay a lot for authentic Modiglianis.


> They don't pay $30K for slop art

I beg your pardon, but have you heard of Jeff Koons or KAWS?



