People liked AI art – when they thought it was made by humans (sciencenorway.no)
19 points by dxs on April 1, 2024 | hide | past | favorite | 36 comments


For many people (most?), part of appreciating art is to engage with it beyond mere color and shape. I mean context about the artist themselves, and their life and work, and what they were trying to articulate, and why they made the choices that they did. All of this is embedded in a social context in which art expresses intention and meaning and some artifact of human experience.

It's because many/most people feel and understand this context that they react so differently to artwork that they know is forged. If a painting were just an issue of color and shape, forgery would be irrelevant to how we relate to art. But for most, a painting is embedded in the social context I described above, and forgery feels like a violation of that contract.

AI art seems to fall in a similar category, since AI systems are not social agents and not human beings.


There are different levels of art; some of it is just something nice to look at. There is no shame in that, and often that is exactly what the artist intended. It shouldn't be considered a lesser art, just a single form of it.

Other forms of art are what I think of as "an invitation to consider," which could range from a painting to a collection of chairs pointed at a small window (and beyond). There is a degree of snobbishness within the art community about the former, but really much less than you might expect. This, I think, correlates with the survey's findings on open-mindedness and acceptance.

The nature of provenance and forgery is a completely different beast. The thing that is valued (in monetary terms) is the same intangible thing that NFTs trade, and when you think about it, it makes just as little sense.

There are instances where forgers have been caught and art owners have resisted wider investigations because those might reveal that something they own is a forgery. In that respect they really don't care whether it is a work by the original artist, only that it is believed to be a work by the original artist.

In a similar vein is the peculiar case of Milli Vanilli, where the artwork was audio media but was considered fake because of the artificial supplementary material.

In some respects I might even consider AI art to have an advantage by being unencumbered by such shenanigans.


I cut a digression on forgery to keep my comment short, but my focus is less on provenance and the habits of art owners and more on what it feels like to look at a painting that you know is forged (or copied, etc.) The mental story around that painting, which is half of the fun and most of the meaning, is quite different.


I have been thinking about this recently. AI "Art" does have a certain style to it. Almost abstract.

I have found myself enjoying the look of it when I am unsure if it is AI or not. I almost bought a playmat I really liked on Etsy but I am 99% sure it is AI generated. Doesn't change that it looks appealing.

I have to wonder: if we could somehow ethically train an AI model to generate AI "Art" — maybe through some cost-sharing method, idk — would there be a valid place for it given its style?

But as it is right now I refuse to support it until we can say it was actually ethically trained.

At absolute best I could see using it as a mockup tool before paying an actual artist or designer to be able to more easily communicate what it is I am looking for. But not for the end result.


I suspect that there is already enough public domain art and photography to train an effective visual art AI model. But there probably won't be much incentive to train such tailored models until/unless the legal landscape tilts against the current "train with every image you can find on the Web" approach.


"AI" art fails in at least two ways:

1. It can't actually communicate anything, as it's a fake simulation that lacks essential elements ("AI" art is to art as playing Guitar Hero is to playing a real guitar).

2. It's a wholly derivative rehash of existing art that will ultimately lead to artistic stagnation (as most art creation becomes a deskilled, automated task, knocking out the first step in the pipeline of artistic innovation).


AI art can be aesthetically pleasing. Not all drawings and paintings need to communicate a deep message.

One way to see the human touch in AI art is that humans need to select the right AI art to use in products, so it's not like the human is completely out of the picture. Additionally, I don't think people are generally arguing for a world with 100% AI art, a lot of human art is fairly repetitive and uninspired, and AI can take care of this type of art and free humans for the truly special work.


Fails what? If it fails your personal preferences, that's fine, but that's hardly grounds for such broad statements.


It's always refreshing to see someone else who actually cares about art engaging in these fora but I don't entirely agree with either point.

> 1. It can't actually communicate anything, as it's a fake simulation that lacks essential elements ("AI" art is to art as playing Guitar Hero is to playing a real guitar).

I think people can use NN-generated images to communicate nearly as much as they could by commissioning a piece from an artist with the skills to make it — which I think is a much better analog than a Guitar Hero controller. Your take would mean art directors don't communicate anything visually because they didn't make all of the pieces they used. Indeed it is far more difficult — probably impossible in most cases — to communicate specific, intentional, and nuanced things with AI-generated images as we would with any medium involving an artist directly making marks or using physical media.

Using Procreate, even without a smoothed brush, removes the ability for a very skilled practitioner to make marks as expressive as they are accustomed to. Those Procreate marks still communicate — for dilettante painters like me, even more so than a physical brush — but they don't have the specificity or control that a physical brush does. Procreate makes decisions about each stroke that limit the possibility of both error and expression. As a result, a professional painter would need to spend a LOT longer in Procreate to get results that satisfy them. All of it is still obviously a hell of a lot more expressive than even the fussiest and most detailed NN image generation workflows. Just as people who've only used digital painting tools often have no idea how much more experience and technique real paint requires, people who pride themselves on their familiarity with the generative tool chain and prompt wrangling rarely understand how many decisions those models are still making for them.

But the ability to actually create images is not the big selling point for artists, despite what many in this space ostensibly think. I reckon the failure to communicate specific, interesting things which you're picking up on is the lack of visual communication skills among the vast majority of people using these tools. There's a whole lot of people climbing Mt. Stupid, just like with any other newly popular technology. However, understanding what an image is communicating to us through many layers of context and subcontext is a hard-won skill that is adjacent to actually making images. Since gen AI spits out flashy amalgamations of other people's visually compelling work, even complete novices can make something superficially appealing. Most people just don't understand how different that is from what professional artists do, and that's why I'm confident my career is safe.

I feel really bad for people whose market is high-volume sales of lower-effort work to people who have no way to judge if it's good work, like those who spit out derivative logos for hobby businesses or whatever on Fiverr. I can't say I find that stuff interesting from a professional or artistic standpoint, but they were doing honest and underpaid work and are suddenly screwed because someone used their own work against them. I've never seen anyone respond to that fact with anything other than edgelord survival-of-the-fittest bullshit, a cop-out conflating laws with ethics, some hamfisted pontification about the "true" nature of art, some glib nonsense about how human-like ML is, some bullshit about artists being "privileged," or a butchered version of "smash all copyrights" which deliberately ignores that when the rubber hits the road, big corporations are exactly who will benefit most in this case, which defeats the original purpose of the idea.

> 2. It's a wholly derivative rehash of existing art that will ultimately lead to artistic stagnation (as most art creation becomes a deskilled, automated task, knocking out the first step in the pipeline of artistic innovation).

I think your take is more hopeful than mine in some ways. There might be some amount of deskilling in the general populace, but fine art certainly isn't going anywhere, and higher-end commercial art will almost certainly still be done largely by humans — those two forces break most of the aesthetic ground in our society without most people even realizing it. (It's honestly kind of shocking to see how little most people perceive of the art and design that imbue our lives.)

I just think it will become yet another piece of humanity for corporations to vacuum up and commoditize. People crow about keeping it free and open-- well, as we can see from the rest of the end-user Linux ecosystem, usability is a lot more consequential to most people than licensing or nominal fees. Most people capable of jumping through the technical hoops to install SD seem not to be aware of how few others are willing to do the same, in the grand scheme of things. This being "open" will affect a tiny number of people-- maybe even smaller than Desktop Linux. Everyone else will pay. Rich get richer. Commercial artists-- already consistently shit on by most industries-- get shit on harder.

While many pros will (and already do) use generative AI for minor roles in their work, it will involve far less text prompting than many people currently realize. I do find SD useful for grabbing a pose reference or popping out a quick mood board, but the prospect of any of this stuff finding its way into deliverable content for the sort of work I do is laughable. The copycat tool in Nuke or the generative fill tool in Photoshop are what generative AI tools look like in professional pipelines — parameter-driven, specific, expressive, abstract, fast, deterministic by default, and predictable.

I think that many, many non-professional artists will always make art the way they always have because having the piece at the end isn't always even the point of making it. Some people who need props or decor might like a machine that inexpensively spat out Japanese rock gardens, but that sure as hell won't stop the practice. However, the cost of having an online community or advertising your skills online will essentially require feeding the machines that allowed large corporations to lop the bottom off of the commercial art market. Just like most of us pay for our desire for community in an online world with our communication metadata, we professional artists will pay more directly and dearly for this fundamentally human desire for connection around a shared interest.


Those specific images all seemed recognisably AI-generated to me.

Not that AI can't generate images which would be indistinguishable.


I also found them easily recognizable due to the random artifacts.

My favorites are the nonsense ones that kinda look like other elements in the art but have no connection to reality regardless of how abstract or impressionistic it's supposed to be.

A couple of these containing buildings had things that looked like roofs, awnings, doors, etc. in the middle of a lake or jutting out from a wall or obscuring another feature just like it.


But, do you like them?


AI art is made by humans. Everything an AI model produces is made by humans. Taking a great big dataset of art made by humans and producing a composite work using code written by humans, on hardware made and designed by humans, to a prompt decided by a human, it's humans all the way down.

At some point maybe we'll make an AI that can make its own creative decisions for reasons other than a random seed.


If it's AI then you know there's nothing underneath, nothing there.

There's some amazing scifi and fantasy imagery that looks like images from the most amazing story ever. But you know there's no story, nothing underneath - there's no story to read, no human adventure.

AI art is like a Hollywood set — just the facade, and if you look behind it, there are no buildings or houses.


I was having this debate with my sister recently. To be clear I agree with you, but I think the question is whether or not most people care.

She was arguing that something like ChatGPT would obviously not have the emotion while writing a poem. And while obviously true, the question is: can it fake it well enough that most people won't care or notice?

She was like, I simply can't accept that ChatGPT could do this and if I did it would be the end to me. And it's like, yeah. That's the problem.

What is scary is that it seems like a lot of people simply don't care.

We can scream all day about the lack of emotion, the stealing to train the models, etc. But it clearly isn't slowing them down. It isn't slowing down what seems like most people just happily using it to generate "Art" to sell.


What they don't seem to realise is that humans can fake emotions too, just to produce a piece of art. There was a time when most art was made in a workshop to order. The whole backstory stuff is quite a recent invention.


I'm not worried about the future of artists: if (hopefully) most people are like me, they consume a piece of art basically because its author wanted to say something about something. AI will only replace art which is consumed without this reason.


> Participants gave poorer ratings to images they thought were created by artificial intelligence.

> “It seems that where the image actually comes from doesn't matter, rather what people believe is what governs their feelings. The objective quality doesn’t seem to be that important,” says Grassini.

> “When participants thought that an image was made by AI, it was regarded as uglier and of lower emotional value, regardless of where it came from. When the image was assumed to be made by a human, it was considered more beautiful and as having a higher emotional value.”

Later:

> [...] “AI products will become more and more similar to things made by humans. What we're seeing now is that it doesn't really matter whether art is human or not, but whether it's believed to be made by a human or not," he says.

> “Customer service created by AI might be perceived as worse, not because it’s objectively worse, but just because it's degraded as something created by AI.”

Here are two examples where the researcher contrasts the "objective" merits of a thing with the subjective matter of an individual's beliefs about where that thing came from. But that is a rather loaded framing. Can we rule out the conclusion that artworks created by human artists are objectively better? I'd rate my enjoyment of holding a 5000 year old Ancient Egyptian relic much lower if I am told that it is a modern reproduction. Is it that I have a subjective bias towards an authentic original, regardless of the objective features of the relic? Or is it that the relic's provenance is itself an objective attribute of the relic, and if I am lied to about that attribute, it will affect my opinion?

This reminds me of a study on disgust. As I remember it, subjects in the study were shown a dead cockroach and told it had been totally disinfected and sanitized in some way. The roach was dipped in a glass of water, then the subjects were asked to take a sip. Obviously, many participants were grossed out and didn't drink the water. The researchers' conclusions went something like this: "the disgust response is an adaptation which serves to limit our exposure to dangerous pathogens and such. This cockroach has no pathogens on it. Therefore, our disgust reactions (and the subjects who did not drink the water) are irrational". An excellent reply was published, which went something like this: "Dead cockroaches are gross. Drinking dead cockroach water is gross. Therefore not drinking the water was rational".


> Dead cockroaches are gross. Drinking dead cockroach water is gross.

Says who? Surely this is a subjective belief. And whether that belief is rational is the question.

> Therefore not drinking the water was rational.

Only if believing "dead cockroach water is gross" is rational.


"Art more complicated than just aesthetics"

Thanks science. More news at 11.

Less facetiously, I think there's value in both. AI seems ideal for a hero image on a blog, but I expect something I hang on my wall to have a backstory.


> AI seems ideal for a hero image on a blog

No, no, no!

No "hero image" is better than an "AI" generated hero image. I see them all the time now, and all they've done is train me to ignore them. Whenever one generates a flicker of interest, I look and there's no there there. It's like seeing a delicious bowl of ice cream, taking a bite, and having it turn to ash in your mouth. Not a fun experience.


Oh, I totally get that! The "AI style" that's becoming so ubiquitous looks awful to me (and tends to correlate with low quality writing as well). However, I think that's a problem of poor stylistic choice, rather than a problem inherent to AI.

The images from the article are what I had in mind. They're quite pleasing. The JetBrains blog is another example of AI hero images done well:

https://blog.jetbrains.com/blog/2023/10/16/ai-graphics-at-je...


> Oh, I totally get that! The "AI style" that's becoming so ubiquitous looks awful to me (and tends to correlate with low quality writing as well).

IMHO, that's in large part because they're made by non-artists relying on the generative model to do all artistic lifting (which seems to me to be the whole point of the models).

> The images from the article are what I had in mind. They're quite pleasing. The JetBrains blog is another example of AI hero images done well:

Most of those only avoid failing in the way I described because they're bland "computer wallpaper"-style or "hotel art"-style. Basically art that's meant to be ignored.


Totally agree once again! AI is not a replacement for artists. It's amazing what even a bit of artistic direction can do for your final results.

But art that's meant to be ignored has a place in our society, and I think hero images for blogs are the perfect example! I will not be framing JetBrains artwork, but I do think it looks nice and adds value. I'd argue it's because of the clear and well-thought-out artistic direction.


> But art that's meant to be ignored has a place in our society, and I think hero images for blogs is the perfect example!

I should clarify: the Jetbrains graphics aren't just "art that's meant to be ignored," but art that cues you to ignore it. That's actually the most important characteristic that makes those images not-annoying.

But if the acceptable result is basically colorful whitespace, then I have to question what the point of a "hero image" is in the first place.

But my gripe about hero images is a consumerist objection, and there are more issues with these techniques than that.


I still agree, and I think we're getting to the meat of it!

Hero images are made to be ignored. They should draw the eye, but not hold it. This is where current "AI styles" completely fail; they add too much detail for no clear reason, and it raises questions AI isn't prepared to answer. Hero images are there to spice things up so the page doesn't read like a scientific paper, but shouldn't distract from the article itself. It's a blog, not a gallery. I think "colorful whitespace" is the perfect description.

Art for my wall should be interesting. I want it to evoke emotion and tell a story. It needs more than good composition and color. It should hold up to scrutiny.

I guess my point simply boils down to "not everything needs to be fine art."

> But my gripe about hero images is a consumerist objection, and there are more issues with these techniques than that.

I'm with you here as well, but I'm not really prepared to discuss the deeper issues. Big pile of shite, that.


People also like reheated mass-produced "food" from plastic vacuumed sealed bags at Applebees.

So what.


It suggests the strident protests against AI art are just carbon chauvinism.


"chauvinism" ?

What do you mean by that?

Wikipedia describes "chauvinism" as "the unreasonable belief in the superiority or dominance of one's own group or people".

Now, if there were silicon-based persons, and some carbon-based person (in particular some humans) had an irrational bias against them, I can see the term "carbon chauvinism" making sense to describe that.

But, (so far) there are no such silicon-based persons.


>> It suggests the strident protests against AI art is just carbon chauvinism.

> What do you mean by that?

"Carbon chauvinism" is a stupid idea used by "AI" enthusiasts to dismiss criticism and worry about the technology they're hyping. The basic (sloppy) idea is to evoke taboos around stuff like racism and sexism to shut down the conversation, with the implication that it is wrong for humans to object to being replaced by robots (and the unacknowledged bleakness of their perspective doesn't stop there).


There's no right or wrong about it, only the march of history. Those who oppose it are having to come up with more and more contrived excuses. TFA talks specifically about how the audience loves AI art if they don't know it's AI generated. It's mostly the carbon artists who seem outraged that it's now possible for somebody with a vision but not the skills to produce art that they can't compete with. And a few in the audience who really hate AI-generated art seem to just want there to be a meat bag with feelings involved, but no silicon beyond a 90s level of Photoshop. It's not even a question of quality here, but a question of degree.


> There's no right or wrong about it, only the march of history.

That's awful nonsense right there. The awful part is the amorality, and the nonsense is the assumption that history has a direction and it will inevitably go your way.

> TFA talks specifically how the audience loves AI art if they don't know it's AI generated.

No. TFA talks about an online survey with 200 participants, and even the researcher says he "wouldn’t take this result too seriously." Not once did it use the word "love."

> And a few in the audience that really hate AI generated art seem to just want for there to be a meat bag with feelings involved, but not silicon beyond 90s level of Photoshop.

Come on, you're cherry-picking to make the point you already had in mind. The article you just cited for support says "The most important result of the study is...[p]articipants gave poorer ratings to images they thought were created by artificial intelligence," which directly contradicts you here. The audience wants that "meat bag with feelings involved."

Personally, I expect the audience's default assumptions to change. Digital art will be presumed to be machine generated until proven otherwise (a near impossible task), and suffer for it.

BTW, you're also proving my point about the bleakness of the perspective of "AI" enthusiasts who use terms like "Carbon chauvinism."


No, it doesn't conflict with what I said. Think about it: they gave poorer scores to images they thought were AI generated. And they were bad at telling them apart, and thus shot down genuine human art. The actual AI art they gave high scores, because they thought it was human art.

There's no morality in profit. If there's a profit to be made, a bunch of rich people will make that profit.

The only way to save human art will be to have an audience watch you create it and everybody signs it with their private key. This means that human generated art will be even more valuable for collectors, not less.


We don't currently have silicon persons, but we do have silicon enhanced carbon persons - the carbon component writes the prompts and the silicon component produces the art. The objections appear to be that 90s Photoshop is okay, but generative AI is not. It's pure emotional dogma to oppose it.


People are allowed to have preferences.


Yes! Exactly!



