If you knew the exact terms to get the AI to recreate something in its training data, it could, 1:1.
That's because you told it to. Don't make it recreate existing art then.
And if you ask it to create something new, no matter what parameters you use, it will look like a mess of garbage data.
This is not always true. You can train it on a certain style and a photo of a random object, then have it generate an image of the random object in that style. It will "understand" the concept of a style and an object.
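If you want to see that kind of composition in practice, here's a minimal sketch using the open source diffusers library. The model ID, embedding file, and the `<my-style>` token are placeholders for illustration, assuming a style embedding trained with textual inversion on a few example images:

```python
# Minimal sketch: applying a learned style to an object the model was
# never shown in that style, using Hugging Face diffusers.
# Model ID, embedding path, and token are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual-inversion embedding trained on a handful of images
# of the style; "<my-style>" is the token learned during training.
pipe.load_textual_inversion("./my_style_embedding.bin", token="<my-style>")

# The training set never contained this exact combination; the model
# composes the object concept with the learned style concept.
image = pipe("a photo of a bicycle in the style of <my-style>").images[0]
image.save("bicycle_in_style.png")
```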
Ultimately, all the capitalists running these tools see it as just another method to bring the public under their exclusive and totalitarian control.
Exactly why I'm not supporting the closed source paid services (Midjourney, ChatGPT, Bing Chat, DALL-E etc.) and instead advocate for open source projects like Stable Diffusion and LLaMA.
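To make "open source" concrete: open-weights models in the LLaMA family run entirely on your own hardware, with no API gatekeeper. A minimal sketch with the transformers library, where the model ID is a placeholder for whichever open checkpoint you actually have access to:

```python
# Minimal sketch: local inference with an open-weights LLaMA-family
# model via Hugging Face transformers. The model ID is a placeholder
# and may require accepting a license; device_map="auto" needs the
# accelerate package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain diffusion models in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```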
If you took a random concept and explained it to a person, they could, using their existing knowledge, draw it somewhat competently. That's because people are able to apply knowledge to make something new. But if you told someone to recreate something that already exists 1:1, even a professional would never manage it, no matter how much time and effort they put into it. AI can do the latter because it's basically copying, and it can't do the former because there's nothing to copy from.
Theoretically an AI can apply knowledge to make something new too, but it would involve meticulous and proper labeling of each piece of training data. Currently most training data is labeled automatically, and the labels aren't descriptive or verbose enough. I believe the improvements in the latest version of DALL-E are due to OpenAI's use of a more advanced image labeler.
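OpenAI hasn't published its captioner, but you can get a feel for automatic relabeling with an open captioning model. A minimal sketch using BLIP as a stand-in (the model choice and file name are assumptions, not what OpenAI actually uses):

```python
# Minimal sketch: generating a more descriptive caption for a training
# image with an open captioning model (BLIP as a stand-in; OpenAI's
# actual DALL-E captioner is not public).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("training_image.jpg").convert("RGB")  # placeholder file
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
```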
OK so throw more Kenyans at it. Got it!
Never said that.
Well, how do you think the tagging was done? Because that's what they did.