Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you'll need to create a mask image for them. Use the Space below to easily create a mask image: upload a base image to inpaint on, use the sketch tool to draw a mask, and once you're done, click Run to generate and download the mask image.

Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images.

Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images for inpainting. It is a good starting point because it is relatively fast and generates good-quality images. To use this model for inpainting, you'll need to pass a prompt, a base image, and a mask image to the pipeline:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load the base and mask images (replace the placeholder paths with your own)
init_image = load_image("path/to/base_image.png")
mask_image = load_image("path/to/mask_image.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

Stable Diffusion XL (SDXL) Inpainting
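The Space's sketch tool described above is one way to produce a mask; for scripted workflows, a mask can also be drawn programmatically. Below is a minimal sketch using Pillow (an assumed dependency; the 512x512 canvas size and rectangle coordinates are illustrative placeholders, not values from the guide). It produces the kind of black-and-white mask image the inpainting pipelines expect, with white marking the region to replace:

```python
from PIL import Image, ImageDraw

# start from an all-black canvas matching the base image size (512x512 here, illustrative)
mask = Image.new("L", (512, 512), 0)
draw = ImageDraw.Draw(mask)

# paint the area to inpaint in white; the coordinates are placeholders for your own region
draw.rectangle((128, 128, 384, 384), fill=255)

mask.save("mask.png")  # this file can then be passed to load_image() as the mask_image
```

White (255) pixels are the ones the model repaints, while black (0) pixels are preserved, matching the masks the Space generates.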