Some of my recent attempts that did not work out.

[–] tal@lemmy.today 1 points 12 hours ago* (last edited 12 hours ago) (2 children)

@M0oP0o@mander.xyz, as far as I can tell, you always use Bing Image Creator.

And as far as I can tell, @Thelsim@sh.itjust.works always uses Midjourney.

I don't use either. But as far as I know, neither service currently charges for generation of images. I don't know if there's some sort of different rate-limit that favors one over the other, or another reason to use Bing (perhaps Midjourney's model is intentionally not trained on Sailor Moon?), but I do believe that Midjourney can do a few things that Bing doesn't.

One of those is inpainting. Inpainting, for those who haven't used it, lets you start with an existing image, create a mask that marks which part of the image should be regenerated, and then regenerate just that part using a specified prompt (which might differ from the prompt used to generate the image as a whole). I know that Thelsim's used this feature with Midjourney before, because she once used it to update some sort of poison-witch image with hands over a green glowing pot, so I'm pretty sure it's available to general Midjourney users.
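Neither Bing nor Midjourney exposes this as code, but to make the idea concrete, here's roughly what inpainting looks like in the open-source diffusers library -- a sketch of the technique, not of Midjourney's interface, with placeholder file names:

```python
# Sketch of inpainting with Hugging Face's diffusers library.
# The mask is a black-and-white image: white pixels get regenerated,
# black pixels are kept from the original.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # an inpainting-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("witch.png").convert("RGB").resize((512, 512))
mask = Image.open("hands_mask.png").convert("RGB").resize((512, 512))

# The prompt only needs to describe the masked region, not the whole image.
result = pipe(
    prompt="hands hovering over a glowing green cauldron",
    image=image,
    mask_image=mask,
).images[0]
result.save("witch_inpainted.png")
```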

I know that you recently expressed frustration with Bing Image Creator's current functionality and wanted more.

Inpainting's time-consuming, but it can let a lot of images be rescued, rather than having to just re-roll the whole image. Have you tried using Midjourney? Was there anything there that you found made it not acceptable?

[–] thelsim@sh.itjust.works 3 points 8 hours ago (1 children)

The inpainting has improved a lot since then. Recently they introduced an external editor that allows you to do more accurate inpainting and even retexturing.
For example, taking one of the images here:

With retexturing I can write: "A 1900s photograph, of sailor moon and politicians and a xenomorph, in congress"
And have it transformed while keeping the original characters:

There's also the option to repaint:

And to expand the image:

What it doesn't do well is accurate stuff, like flags, characters, that kind of thing. It likes to hallucinate a little, so, for example, you won't get a perfect flag, and even a Sailor Moon will often look a bit off-brand.

[–] tal@lemmy.today 3 points 7 hours ago (1 children)

Thanks for trying it out! Both the inpainting and outpainting -- the expansion -- worked better than I'd expected, though I dunno if that's exactly what M0oP0o's after.

[–] thelsim@sh.itjust.works 2 points 2 hours ago (1 children)

I don't know, but I felt like sharing :)

The inpainting has definitely improved; a while ago it was impossible to get it to properly match the style of the rest of the image. You could always see where the original was altered. Now it blends much better with the rest of the image.
And inpainting of non-generated images is a recent thing; before that, you could only alter images that Midjourney originally created.

[–] M0oP0o@mander.xyz 1 points 41 minutes ago

Yeap, that is functionality I could use. Will have to try later.

[–] M0oP0o@mander.xyz 3 points 10 hours ago (1 children)

I have tried Midjourney before. The results were... underwhelming. Lots of odd artifacting, slow creation time, and yes, it had some issues with Sailor Moon.

I might try again, as it has been a while. It would be nice to have more control.

Oh, I also tried local generation (forgot the name) and wooooow is my local PC bad at pictures (clearly can't be my lack of ability in setting it up).

[–] tal@lemmy.today 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

> Lots of odd artifacting, slow creation time, and yes, it had some issues with Sailor Moon.

It probably isn't worth the effort for most things, but one option might also be -- and I'm not saying that this will work well, just a thought -- using both. That is, if Bing Image Creator can generate images with the content that you want but gets some details wrong and can't do inpainting, while Midjourney can do inpainting, it might be possible to take a Bing-generated image that's 90% of what you want and then inpaint the particular detail at issue using Midjourney. The inpainting uses the surrounding image as an input, so it should tend to generate a similar image.

I'd guess that the problem is that an image generated with one model probably isn't going to be terribly stable in another model -- like, it probably won't converge on exactly the same thing -- but the surrounding content might be enough of a hint for it to do the right thing, if there's enough of that context.

I mean, that's basically -- for a limited case -- how AI upscaling works. It gets an image that the model didn't generate, and then it tries to generate a new image, albeit with only slight "pressure" to modify rather than retain the existing image.

It might produce total garbage, too, but might be worth an experiment.
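For what it's worth, in local tools that "pressure" is an explicit knob. With diffusers' img2img pipeline, for instance (again a sketch with placeholder model and file names, not what Bing or Midjourney actually run), the strength parameter is exactly that dial:

```python
# Sketch of img2img in diffusers: low strength mostly preserves the
# input image, high strength regenerates it almost from scratch.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = Image.open("bing_output.png").convert("RGB").resize((768, 768))

# strength=0.3 applies only slight "pressure" to change the image;
# strength=0.9 would be close to a full re-roll.
result = pipe(prompt="a 1900s photograph", image=image, strength=0.3).images[0]
result.save("nudged.png")
```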

What I'd probably try to do if I were doing this locally is to feed my starting image into something that generates prompt terms that my local model can use to produce a similar-looking image, and include those when inpainting, since those prompt terms will be adapted to creating a reasonably similar image with the different model. On Automatic1111, there's an extension called Clip Interrogator that can do this ("image to text").
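I believe the extension is built on the clip-interrogator Python package, so the same thing can be done standalone -- roughly like this, assuming the package's documented usage and a placeholder file name:

```python
# Sketch of image-to-text with the clip-interrogator package
# (pip install clip-interrogator). The file name is a placeholder.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("bing_output.png").convert("RGB")

# Prints prompt-style terms that tend to reproduce a similar image;
# paste these into the inpainting prompt on the other model.
print(ci.interrogate(image))
```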

Searching online, it looks like Midjourney has similar functionality, the /describe command.

https://docs.midjourney.com/docs/describe

It's not magic -- I mean, at the end of the day, the model can only do what it's been trained on -- but I've found that to be helpful locally, since I'd bet that Bing and Midjourney expect different prompt terms for a given image.

> Oh, I also tried local generation (forgot the name) and wooooow is my local PC bad at pictures (clearly can't be my lack of ability in setting it up).

Hmm. Well, that I've done. Like, was the problem that it was slow? I can believe it, but just as a sanity check: if you run on a CPU, pretty much everything is mind-bogglingly slow. Do you know if you were running it on a GPU, and if so, how much VRAM it has? And what were you using (Stable Diffusion 1.5, Stable Diffusion XL, Flux, etc.)?
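If you're not sure, something like this will tell you whether PyTorch even sees the GPU and how much VRAM it has (a quick check assuming a PyTorch-based setup; on AMD cards the ROCm build still reports the device as "cuda"):

```python
# Quick sanity check: is a GPU visible, and how much VRAM does it have?
# A ROCm build of PyTorch (for AMD cards) still reports itself as "cuda".
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 2**30:.1f} GiB")
else:
    print("No GPU visible to PyTorch -- generation would run on the CPU.")
```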

[–] M0oP0o@mander.xyz 1 points 42 minutes ago

Ran it on my 6900 (nice), and although it was slow, the main issue is that it made things look like this:

It was Stable Diffusion XL.

[–] lnxtx@feddit.nl 2 points 19 hours ago (1 children)

Resurrected Marx for the US president!

[–] M0oP0o@mander.xyz 2 points 19 hours ago

Eh, I would think most would settle for the cold corpse. No need to get necromancy involved.