this post was submitted on 15 Aug 2023
20 points (95.5% liked)
Stable Diffusion
4297 readers
Discuss matters related to our favourite AI Art generation technology
founded 1 year ago
you are viewing a single comment's thread
If you use ComfyUI you can queue as many images as you want, so you can let it run overnight. Not ideal, but probably the best solution without a GPU.
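For what it's worth, the queue can also be filled from a script: ComfyUI exposes an HTTP endpoint (`POST /prompt` on port 8188 by default) that accepts a workflow exported via "Save (API Format)". A rough sketch below, assuming a locally running ComfyUI and a workflow file `workflow_api.json`; the node id `"3"` for the KSampler is just an example and varies per workflow:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address

def make_payload(workflow: dict, seed: int, seed_node: str = "3") -> dict:
    """Return a /prompt payload with the sampler seed swapped in.
    `seed_node` is the id of the KSampler node in your exported workflow
    (check your own workflow_api.json -- the id differs per graph)."""
    wf = json.loads(json.dumps(workflow))  # deep copy, leave the template intact
    wf[seed_node]["inputs"]["seed"] = seed
    return {"prompt": wf}

def queue_batch(workflow: dict, n: int, start_seed: int = 0) -> None:
    """Queue n jobs with incrementing seeds; ComfyUI then works
    through them one at a time, e.g. overnight."""
    for i in range(n):
        data = json.dumps(make_payload(workflow, start_seed + i)).encode()
        req = urllib.request.Request(
            COMFY_URL, data=data,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

# usage (with the ComfyUI server running):
#   with open("workflow_api.json") as f:
#       queue_batch(json.load(f), n=100)
```

The seed bump per job is what keeps the queued images from all coming out identical.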
TBH, it took a long time to come up with a prompt that didn't produce trash even when generating quickly on Colab. Trying to do this on my CPU sounds like it could take weeks or even months just to get a few good images.
EDIT: Well, it took me a long while to get set up, but as it turns out, I'm one of the lucky ones who can run it on 2 GB of VRAM. I generated this in about a minute.
Good to hear you got it working.
If you want to speed it up even further and you're willing to boot Linux from a USB drive, ROCm is much faster than DirectML right now.
edit: Also, you can run it without the UI, saving even more VRAM.
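One way to go UI-less is a short script against the Hugging Face `diffusers` library (an assumption on my part, not necessarily what anyone in this thread is running). The memory-saving call `enable_attention_slicing()` is a real `diffusers` pipeline method that helps on small cards; the model id and prompt are just placeholders:

```python
# pip install diffusers transformers accelerate torch
# Hypothetical headless setup -- model id and prompt are placeholders.

def out_name(prompt: str, seed: int) -> str:
    """Build a filesystem-safe filename from the prompt and seed."""
    slug = "".join(c if c.isalnum() else "_" for c in prompt.lower())[:40]
    return f"{slug}_{seed}.png"

def generate(prompt: str, n: int = 4) -> None:
    """Generate n images headlessly, one seed per image."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe.enable_attention_slicing()  # chunked attention, eases low-VRAM cards
    pipe = pipe.to("cuda")
    for seed in range(n):
        g = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=g, num_inference_steps=25).images[0]
        image.save(out_name(prompt, seed))

# generate("a lighthouse at dusk, oil painting")  # needs GPU + model download
```

No browser, no web server, just the pipeline in RAM/VRAM.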
Interesting about ROCm, I'll have to look into that. As for running without the UI, I honestly don't think I know enough to do that right now, lol.