You have to remember that much has been erased and cancelled from history.
The block notice read: "This prompt has been blocked. Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve."
I've been warned because I wanted to reference something that is well known from recent history.
"A future London sees the return of Earth's President from his meetings at the far reaches of the Galaxy."
This is why local AI generation will take off in the next 12 months. You can already download the weights of the older Stable Diffusion models and start generating images using the resources of your own desktop. As you can imagine it's a heck of a lot slower, but you can prompt it without restriction. It's the old SD, so you'll get wonky fingers etc., but it's only a matter of time before someone packages up the weights and pipeline of the latest models in the same way.
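If anyone wants to try it, here's a minimal sketch of running it locally with Hugging Face's diffusers library. The SD 1.5 repo ID is the usual one, but the prompt and output filename are just examples; you'll want a few GB of VRAM for half precision:

```python
# Minimal local Stable Diffusion run with Hugging Face's diffusers library.
# pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionPipeline

# Load the old SD 1.5 weights in half precision (downloads once, then cached).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None,  # running locally, so no content filter
).to("cuda")

prompt = "A future London sees the return of Earth's President"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("output.png")
```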
Looks like @Pawnless Endgame is already doing a form of it
That's faster than I expected from a 3070. I would really like to know what the limiting factor is in graphics cards for this: memory bandwidth, VRAM or CUDA cores. Do you know any websites that test cards for this? I think Puget Systems does sometimes, but it's not very definitive.

Yeah, I got my Automatic1111 install working with Stable Diffusion 1.5 and 2.1 at the moment. My RTX 3070 can output an image in 15 seconds at 640x960, then upscaling them takes another 15 seconds or so per image.
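If you'd rather measure it yourself than wait for a review site, a rough probe like this (diffusers again; the model ID and image size are just examples) reports iterations per second and peak VRAM, which helps tell a compute bottleneck from a memory one:

```python
# Rough throughput / VRAM probe: generate one image, report it/s and peak memory.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

torch.cuda.reset_peak_memory_stats()
steps = 30
start = time.time()
pipe("a test prompt", num_inference_steps=steps, height=960, width=640)
torch.cuda.synchronize()
elapsed = time.time() - start

print(f"{steps / elapsed:.2f} it/s at 640x960")
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```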
I'm also trying SDXL with Automatic1111, which is a lot slower, more like 2m 15s per image, and the images are coming out blocky / corrupted, so there's something I'm not doing right there.
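In case it helps, blocky or corrupted SDXL output is often a resolution or VAE issue: SDXL was trained around 1024x1024, and its stock VAE misbehaves in half precision (in Automatic1111 the --no-half-vae launch flag is the usual workaround). Here's a sketch of the equivalent fix in diffusers, using the community fp16-fixed VAE as an example:

```python
# SDXL sketch with the community fp16-fixed VAE to avoid corrupted output.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE patched to run in half precision without producing NaN artefacts.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# SDXL was trained around 1024x1024; much smaller sizes tend to come out blocky.
image = pipe("a future London skyline", height=1024, width=1024).images[0]
image.save("sdxl_test.png")
```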
@aVdub - the Oppenheimer picture is missing Barbie
I see you have spotted what keeps Johnny alive