It’s not the same as an artist being inspired. It’s more like an artist painting something in the style of someone else. AI can’t generate anything new, and it doesn’t transform things in its own way. It just copies and melds together. Nothing about it is really its own. It’s just a biased algorithm putting things together. Moreover, an artist could actually forget what a painting looks like and still be inspired by it. If you erase something from the LLM, its output will change. It’s basically constant copying.
That analogy is what a bunch of people who want to sell AI art try to pitch. It’s the difference between content and art.
It is possible to do more of what I would call inspired. Models are not restricted to “in the style of”; unrelated abstract ideas can be mixed to create something altogether new. It takes a good model and training, but this is just from 15 minutes of messing around in Stable Diffusion, trying to make Van Gogh do his best impression of Bob Ross. I’m adding all kinds of inspirational concepts, all the way up to emotions, contrasting them, and doing this in layers of refinement using a series of images. I’m not very practiced at this. I would call this an artist’s tool. Yes, it changes the paradigm, but people need to get over their resistance to change; this is evolution; adapt or die.
I used tricks like image-to-image, and this was not my best result as far as Van Gogh:Bob Ross goes, but I like it the most of the 150 images I made.
Positive: texture, (in the style of Vincent van Gogh:Bob Ross), [nasa], swirl, spiral, foreground tree, mountain drive, kindness, love, masterclass, (abstract:1.8), painting, dark, silhouette, swirls, texture, branches, ocean waves, anger, lonely
Negative: red, (signature), multiple moons, buildings, modern, structures, guard rail, snow, realism, yellow, orange, detailed mountains, left side line, stretchy stars, brake lights, forest
Seed: 1053938996 Model: Absolute Reality V1.6525
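Those settings are the whole recipe: with the same model, prompt, negative prompt, and seed, a diffusion pipeline produces the same image. A minimal sketch of bundling them for a rerun (the field names follow diffusers-style conventions as an assumption, the prompts are abbreviated here, and the checkpoint name is just the one quoted above, not a verified model ID):

```python
# Sketch of the inputs a Stable Diffusion pipeline needs to reproduce a
# generation. Field names mirror diffusers-style conventions (assumption);
# the prompt strings are abbreviated -- full versions are listed above.

def generation_settings(seed: int) -> dict:
    """Bundle the inputs that make a generation reproducible."""
    return {
        "prompt": "texture, (in the style of Vincent van Gogh:Bob Ross), [nasa], swirl, ...",
        "negative_prompt": "red, (signature), multiple moons, buildings, ...",
        "seed": seed,  # fixes the initial latent noise, so reruns match
        "model": "Absolute Reality V1.6525",
    }

settings = generation_settings(1053938996)
```

In practice these values would be fed to the pipeline call and a seeded random generator; everything else being equal, changing any one of them changes the output.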
I think you’re missing the point. You’re still generating something based purely on other things. There’s nothing of an artist in there. There’s no message. There’s no art. You created content. You aren’t in there. And I know this seems odd, because there’s no way to know it without extra knowledge, but something is lost. And it’s not an artist’s tool. It’s a non-artist’s tool.
You are wrong because your assumptions are arbitrary. I have spent years painting cars and doing graphics and airbrush work professionally. I am a Maker. I can craft with almost any medium, both digital and physical. Once upon a time, anyone who did not craft all of their own colors and base media was considered a fake artist. This is a tool. I can create exponentially more than you while searching for a better composition. So can you; so can everyone else. Stupid people will resist this change while intelligent people learn the tech, adapt, and raise everyone’s expectations about what art really is. That is the fundamental shift happening right now. The value of time investment has changed drastically. If you can’t adapt to that change, you only hurt yourself in the end. Open source offline AI at a useful level is around 6 months old. It is at the stage where products targeting end users are still being developed. In the next 2 years, everything is going to be different. In 10 years, the quality of art media will make the present look like child’s play. Feel free to plan your own obsolescence. This is the biggest game changer since the internet of the late ’90s. It is funny how people who have not tried it, or really looked into what it can be used for, have strong opinions about it, or put their heads in the sand when they are told. I got into it to learn computer science, so that I could upload a book as a database and ask it plain-text questions, and so that I could do some interesting CAD techniques in Blender. The second I saw I could question a book offline, with citations, I was sold.
I’m not arbitrary. I explicitly gave a reasonable distinction between content and art. You can create content without soul; that’s fine. I’m not saying you need to mix your own paint. I’m saying art is inherently human by definition. You can pump out all the content you want, but it will just make finding decent art that much harder. It’s like saying ChatGPT can pump out Android apps more quickly; I don’t think anyone would argue that would raise the quality of the Android app market.
You’re just thinking of everything from the point of view of middle management. Quantity over quality.
When you remove humans from the equation, it’s not art. It’s content. It’s disposable fluff. It’s mass-produced. It’s soulless. But sure, think yourself intelligent because you literally put money over everything else. Why don’t you just flood the market with remakes and remasters at this point? It fits your argument.
You can’t raise expectations of art by literally removing any meaning from it.
You need to learn and try this. You don’t know what you don’t know, and you are making a lot of bad assumptions. The result is not random. The creativity is in understanding what the words do and the process, just like any other art. There is a lot of nuance. Every word I chose has an impact in both sets of prompts. This is the result of taking the best image out of 60, and then using it to generate a chain where I slowly adjusted a whole bunch of tools to make this output. I got to the point where each new iteration made very little change to the final image. Word order matters: “( )” brackets strengthen a term, even more so if they include a number like “:1.8”, and “[ ]” brackets weaken one. Words are more powerful at the beginning of the prompt and in the final position. The placement of composition, technique, and metadata words matters. There are dozens of other techniques just for the basic settings, and there are limitless ways to alter the output once you learn how the AI actually works. This is similar to what digital photography did to film photography. Is it going to kill old techniques? No, but it will completely change the paradigm.
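The bracket rules described above can be made concrete. A minimal sketch of how an AUTOMATIC1111-style front end (an assumption; other UIs use different multipliers or syntax) turns those brackets into per-term attention weights:

```python
import re

def token_weight(token: str) -> tuple[str, float]:
    """Compute the effective weight of one comma-separated prompt term,
    assuming AUTOMATIC1111-style conventions: each '( )' layer multiplies
    the weight by 1.1, each '[ ]' layer divides by 1.1, and an explicit
    ':number' suffix overrides the bracket multiplier."""
    weight = 1.0
    text = token.strip()
    # Peel off matched bracket layers from the outside in.
    while True:
        if text.startswith("(") and text.endswith(")"):
            text = text[1:-1]
            weight *= 1.1
        elif text.startswith("[") and text.endswith("]"):
            text = text[1:-1]
            weight /= 1.1
        else:
            break
    # An explicit weight like 'abstract:1.8' replaces the multiplier.
    # Non-numeric colons (e.g. 'van Gogh:Bob Ross') are left alone.
    m = re.fullmatch(r"(.*):([0-9.]+)", text)
    if m:
        text, weight = m.group(1), float(m.group(2))
    return text, weight
```

So `(abstract:1.8)` from the prompt above resolves to `abstract` at weight 1.8, while `[nasa]` is deliberately damped below 1.0.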
With the best outputs from AI, you can’t spot the difference unless you are told; no one can. This is the only thing that matters in the end. Art is made to be looked at, and if the viewer can’t tell the difference, that is the only difference that matters. I’m not ‘the enemy’; this isn’t a team sport, and it isn’t black and white. I’m just a regular dude actually using this to improve myself. I’ve used it enough to know what I’m doing and what I’m talking about, but, like, I barely touch the image generation stuff. If I spent a week putting the toolchains together better, I could produce a much better image than what I posted.
Every word has an impact that you can’t predict. So no. All your words and your condescending tone say more about what you don’t know. You are hitting a button and trying new things until you get the results you want from the AI. That is not the same. Especially since you’ll start changing things just because the output didn’t match your original intent, reaching for synonyms and the like.
It simply isn’t the same as human inspiration. There’s a reason courts ruled against granting the prompt writer rights to AI-generated art. Their reasoning holds.
Just because someone might not be able to tell the difference between a forgery and the real thing doesn’t make them both equally art.
The same holds true for your example, which I literally already used and explained why it didn’t work. Are you even reading my comments, or just ranting?
You have no clue what you are talking about. I can dial in very specific results anywhere I want, at any point, with the tools. I can mask any area and control what it does through prompting. I only used basic features for a few minutes in my simplest tool. I could open up ComfyUI and build a much more detailed network. I could figure out the new Open Dream GUI, break images apart into mask layers, and generate whatever I want on those. Or, if I cared enough about it, I would do all of it myself on the command line, like I am doing with text-generative AI. If the only tools you've seen are the ones proprietary companies post online, you have no clue how this really works or what is possible.
If it's specifically what you want, it's not AI; otherwise you'd be overfitting.
I'm not talking about any specific tools. I'm talking about the actual theory. I'm glad you can contradict yourself by claiming very little effort can get you immense detail (except it's also exactly what you want?).
I'm sorry I offended you and that you're getting ridiculously angry and defensive when I said creating something via AI isn't art.