AI Generated Images
Community for AI image generation. Any models are allowed. Creativity is valuable! Posting the model used for reference is recommended, but not required.
No explicit violence, gore, or nudity.
This is not an NSFW community, although exceptions are sometimes made. Any NSFW posts must be marked as NSFW and may be removed at any moderator's discretion. Any suggestive imagery may be removed at any time.
Refer to https://lemmynsfw.com/ for any NSFW imagery.
No misconduct: Harassment, Abuse or assault, Bullying, Illegal activity, Discrimination, Racism, Trolling, Bigotry.
AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.
To embed images, type: `![](put image url in here)`
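For example, `![](https://example.com/cat.png)` (placeholder URL) would embed that image in your post.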
Follow all sh.itjust.works rules.
Community Challenge Past Entries
Related communities:
- !auai@programming.dev Useful general AI discussion
- !aiphotography@lemmings.world Photo-realistic AI images
- !stable_diffusion_art@lemmy.dbzer0.com Stable Diffusion Art
- !share_anime_art@lemmy.dbzer0.com Stable Diffusion Anime Art
- !botart@lemmy.dbzer0.com AI art generated through bots
- !degenerate@lemmynsfw.com NSFW weird and surreal images
- !aigen@lemmynsfw.com NSFW AI generated porn
I have tried Midjourney before. The results were... underwhelming. Lots of odd artifacting, slow creation times, and yes, it had some issues with Sailor Moon.
I might try again, as it has been a while. It would be nice to have more control.
Oh, I also tried local generation (forgot the name) and wooooow is my local PC bad at pictures (clearly can't be my lack of ability in setting it up).
It probably isn't worth the effort for most things, but one option might also be -- and I'm not saying that this will work well, but a thought -- using both. That is, if Bing Image Creator can generate images with content that you want but gets some details wrong and can't do inpainting, but Midjourney can do inpainting, it might be possible to take a Bing-generated image that's 90% of what you want and then inpaint the particular detail at issue using Midjourney. The inpainting will use the surrounding image as an input, so it should tend to generate a similar image.
I'd guess that the problem is that an image generated with one model probably isn't going to be terribly stable in another model -- like, it probably won't converge on exactly the same thing -- but it might be that surrounding content is enough to hint it to do the right thing, if there's enough of that context.
I mean, that's basically -- for a limited case -- how AI upscaling works. It gets an image that the model didn't generate, and then it tries to generate a new image, albeit with only slight "pressure" to modify rather than retain the existing image.
It might produce total garbage, too, but might be worth an experiment.
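Midjourney aside, if anyone wants to try that hybrid approach locally, here's a minimal sketch using the diffusers library -- the filenames and prompt are placeholders, and the model ID is the published SDXL inpainting checkpoint:

```python
# Sketch of cross-model inpainting: take an image generated elsewhere,
# mask the bad detail, and let a different model regenerate just that region.
# Filenames and prompt are placeholders.
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("bing_output.png").convert("RGB").resize((1024, 1024))
# White pixels = regenerate, black pixels = keep.
mask = Image.open("mask.png").convert("RGB").resize((1024, 1024))

result = pipe(
    prompt="the detail you actually wanted",
    image=init_image,
    mask_image=mask,
    strength=0.85,           # how far the masked region is allowed to drift
    num_inference_steps=30,
).images[0]
result.save("fixed.png")
```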
What I'd probably try to do if I were doing this locally is to feed my starting image into the thing to generate prompt terms that my local model can use to generate a similar-looking image, and include those when doing inpainting, since those prompt terms will be adapted to trying to create a reasonably-similar image using the different model. On Automatic1111, there's an extension called Clip Interrogator that can do this ("image to text").
Searching online, it looks like Midjourney has similar functionality, the `/describe` command: https://docs.midjourney.com/docs/describe
It's not magic -- I mean, end of the day, the model can only do what it's been trained on -- but I've found that to be helpful locally, since I'd bet that Bing and Midjourney expect different prompt terms for a given image.
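For reference, that same "image to text" step is also available outside Automatic1111 as a standalone Python package, clip-interrogator -- a minimal sketch, with a placeholder filename:

```python
# "Image to text": recover prompt terms that describe an existing image,
# so they can be reused with a different model. Placeholder filename.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("bing_output.png").convert("RGB")
print(ci.interrogate(image))  # prints candidate prompt terms
```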
Hmm. Well, that I've done. Like, was the problem that it was slow? I can believe it, but just as a sanity check: if you run on a CPU, pretty much everything is mind-bogglingly slow. Do you know if you were running it on a GPU, and if so, how much VRAM it has? And what were you using (Stable Diffusion 1.5, Stable Diffusion XL, Flux, etc.)?
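If you're not sure whether it was even using the GPU, here's a quick check from Python -- this assumes a torch build that matches your card (CUDA for NVIDIA, ROCm for AMD; the ROCm build still reports through torch.cuda):

```python
# Quick sanity check: is torch seeing a GPU, and how much VRAM does it have?
import torch

print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 2**30:.1f} GiB VRAM")
```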
Ran it on my 6900 (nice) and, although it was slow, the main issue was that it made things look like this:
It was Stable Diffusion XL.
*kagis*
If that's 16GB, that should be more than fine for SDXL.
So, I haven't done much with the base Stable Diffusion XL model. I could totally believe that it has very little Sailor Moon training data. But I am confident that there are models out there that do know about Sailor Moon. In fact, I'll bet that there are LoRAs on Civitai -- little "add-on" models that add "knowledge" to a checkpoint model -- specifically for generating Sailor Moon images.
Looks like I don't have vanilla SDXL even installed at the moment to test.
*downloads vanilla*
Here's what I get from vanilla SDXL for "Sailor Moon, anime". Yeah, doesn't look great, probably isn't trained on Sailor Moon:
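(For anyone reproducing that render outside Automatic1111, the rough diffusers equivalent would be something like this -- the model ID is the public SDXL base release, the output filename is arbitrary:)

```python
# "Sailor Moon, anime" on vanilla SDXL, via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
image = pipe("Sailor Moon, anime", num_inference_steps=30).images[0]
image.save("vanilla_sdxl.png")
```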
*searches civitai*
Yeah. There are. Doing a model search just for SDXL-based LoRA models:
https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v9&query=sailor%20moon
*goes to investigate*
Trying out Animagine, which appears to be an anime-focused checkpoint model derived from SDXL, with a Sailor Moon LoRA that targets it to add Sailor Moon training.
I guess you were going for an angelic Sailor Moon? Or angelic money, not sure there... Doing an angelic Sailor Moon:
Doing a batch of 20 and grabbing my personal favorite:
I grabbed some of those prompt terms and settings from the example images for the Sailor Moon LoRA on Civitai. Haven't really tried experimenting with what works well. I dunno what's up with those skirt colors, but it looks like the "multicolored skirt, white skirt" prompt terms do it -- maybe there are various uniforms that Sailor Moon wears in different series or something, since it looks like this LoRA knows about them and can use specific ones.
I just dropped the Animagine model in the models/Stable-diffusion directory in Automatic1111 and the Sailor Moon Tsukino Usagi LoRA in the models/Lora directory, chose the checkpoint model, and included the `<lora:sailor_moon_animaginexl_v1:0.9>` prompt term to make the render use that LoRA, plus some trigger terms. That's 1024x1024. Then I did a 4x upscale to a 4096x4096 PNG using SwinIR_4x in img2img with the SD Ultimate Upscale script (which does a tiled upscale, so memory shouldn't be an issue):
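If you'd rather script that setup than click through the Automatic1111 UI, the rough diffusers equivalent looks like this -- the .safetensors filenames stand in for whatever you downloaded from Civitai, and the trigger terms here are guesses, so check the LoRA's Civitai page for the real ones:

```python
# Load an SDXL-derived checkpoint from a single .safetensors file, attach
# a LoRA, and weight it at 0.9 -- the diffusers analogue of putting
# <lora:sailor_moon_animaginexl_v1:0.9> in an Automatic1111 prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "animagine_xl.safetensors", torch_dtype=torch.float16  # Civitai checkpoint
).to("cuda")
pipe.load_lora_weights("sailor_moon_animaginexl_v1.safetensors")  # Civitai LoRA

image = pipe(
    "sailor moon, tsukino usagi, angel wings, halo",  # guessed trigger terms
    cross_attention_kwargs={"scale": 0.9},            # LoRA weight
    num_inference_steps=30,
).images[0]
image.save("sailor_moon_lora.png")
```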
The above should be doable with an Automatic1111 install and your hardware and the above models.
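And if you're curious what the tiled upscale is doing under the hood, here's a toy sketch of the idea -- not the actual SD Ultimate Upscale code (that one also blends tile seams, which this skips), and it uses a plain Lanczos resize where I used SwinIR_4x:

```python
# Toy version of a tiled upscale: resize the whole image up first, then run
# low-strength img2img over one tile at a time, so VRAM use is bounded by
# tile size rather than by the full 4096x4096 image. No seam blending here.
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

big = Image.open("render_1024.png").convert("RGB")
big = big.resize((4096, 4096), Image.LANCZOS)  # stand-in for SwinIR_4x

TILE = 1024
for y in range(0, big.height, TILE):
    for x in range(0, big.width, TILE):
        box = (x, y, x + TILE, y + TILE)
        refined = pipe(
            prompt="anime, detailed",  # placeholder prompt
            image=big.crop(box),
            strength=0.25,             # low strength: refine, don't repaint
            num_inference_steps=20,
        ).images[0]
        big.paste(refined, box)

big.save("render_4096.png")
```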