<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>GFPGAN on Producthunt daily</title>
        <link>https://producthunt.programnotes.cn/en/tags/gfpgan/</link>
        <description>Recent content in GFPGAN on Producthunt daily</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Tue, 23 Sep 2025 15:29:34 +0800</lastBuildDate><atom:link href="https://producthunt.programnotes.cn/en/tags/gfpgan/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>stable-diffusion-webui</title>
        <link>https://producthunt.programnotes.cn/en/p/stable-diffusion-webui/</link>
        <pubDate>Tue, 23 Sep 2025 15:29:34 +0800</pubDate>
        
        <guid>https://producthunt.programnotes.cn/en/p/stable-diffusion-webui/</guid>
        <description>&lt;img src="https://images.unsplash.com/photo-1590147074903-b9ad6ba9eb5a?ixid=M3w0NjAwMjJ8MHwxfHJhbmRvbXx8fHx8fHx8fDE3NTg2MTI0NzN8&amp;ixlib=rb-4.1.0" alt="Featured image of post stable-diffusion-webui" /&gt;&lt;h1 id=&#34;automatic1111stable-diffusion-webui&#34;&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;AUTOMATIC1111/stable-diffusion-webui&lt;/a&gt;
&lt;/h1&gt;&lt;h1 id=&#34;stable-diffusion-web-ui&#34;&gt;Stable Diffusion web UI
&lt;/h1&gt;&lt;p&gt;A web interface for Stable Diffusion, implemented using the Gradio library.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://producthunt.programnotes.cn/screenshot.png&#34; loading=&#34;lazy&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;features&#34;&gt;Features
&lt;/h2&gt;&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Detailed feature showcase with images&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Original txt2img and img2img modes&lt;/li&gt;
&lt;li&gt;One-click install-and-run script (but you must still install Python and Git)&lt;/li&gt;
&lt;li&gt;Outpainting&lt;/li&gt;
&lt;li&gt;Inpainting&lt;/li&gt;
&lt;li&gt;Color Sketch&lt;/li&gt;
&lt;li&gt;Prompt Matrix&lt;/li&gt;
&lt;li&gt;Stable Diffusion Upscale&lt;/li&gt;
&lt;li&gt;Attention, specify parts of the text that the model should pay more attention to
&lt;ul&gt;
&lt;li&gt;a man in a &lt;code&gt;((tuxedo))&lt;/code&gt; - will pay more attention to tuxedo&lt;/li&gt;
&lt;li&gt;a man in a &lt;code&gt;(tuxedo:1.21)&lt;/code&gt; - alternative syntax&lt;/li&gt;
&lt;li&gt;select text and press &lt;code&gt;Ctrl+Up&lt;/code&gt; or &lt;code&gt;Ctrl+Down&lt;/code&gt; (or &lt;code&gt;Command+Up&lt;/code&gt; or &lt;code&gt;Command+Down&lt;/code&gt; if you&amp;rsquo;re on macOS) to automatically adjust attention to the selected text (code contributed by an anonymous user)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Loopback, run img2img processing multiple times&lt;/li&gt;
&lt;li&gt;X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters&lt;/li&gt;
&lt;li&gt;Textual Inversion
&lt;ul&gt;
&lt;li&gt;have as many embeddings as you want and use any names you like for them&lt;/li&gt;
&lt;li&gt;use multiple embeddings with different numbers of vectors per token&lt;/li&gt;
&lt;li&gt;works with half-precision floating-point numbers&lt;/li&gt;
&lt;li&gt;train embeddings on 8 GB of VRAM (there are also reports of 6 GB working)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Extras tab with:
&lt;ul&gt;
&lt;li&gt;GFPGAN, a neural network that fixes faces&lt;/li&gt;
&lt;li&gt;CodeFormer, a face restoration tool and an alternative to GFPGAN&lt;/li&gt;
&lt;li&gt;RealESRGAN, a neural network upscaler&lt;/li&gt;
&lt;li&gt;ESRGAN, a neural network upscaler with many third-party models&lt;/li&gt;
&lt;li&gt;SwinIR and Swin2SR (&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;see here&lt;/a&gt;), neural network upscalers&lt;/li&gt;
&lt;li&gt;LDSR, Latent diffusion super resolution upscaling&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Resizing aspect ratio options&lt;/li&gt;
&lt;li&gt;Sampling method selection
&lt;ul&gt;
&lt;li&gt;Adjust sampler eta values (noise multiplier)&lt;/li&gt;
&lt;li&gt;More advanced noise setting options&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Interrupt processing at any time&lt;/li&gt;
&lt;li&gt;4 GB video card support (there are also reports of 2 GB working)&lt;/li&gt;
&lt;li&gt;Correct seeds for batches&lt;/li&gt;
&lt;li&gt;Live prompt token length validation&lt;/li&gt;
&lt;li&gt;Generation parameters
&lt;ul&gt;
&lt;li&gt;the parameters used to generate an image are saved with that image&lt;/li&gt;
&lt;li&gt;in PNG chunks for PNG, in EXIF for JPEG&lt;/li&gt;
&lt;li&gt;drag an image onto the PNG info tab to restore its generation parameters and automatically copy them into the UI&lt;/li&gt;
&lt;li&gt;can be disabled in settings&lt;/li&gt;
&lt;li&gt;drag and drop an image or text parameters into the prompt box&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Read Generation Parameters button, loads the parameters in the prompt box into the UI&lt;/li&gt;
&lt;li&gt;Settings page&lt;/li&gt;
&lt;li&gt;Running arbitrary Python code from the UI (must run with &lt;code&gt;--allow-code&lt;/code&gt; to enable)&lt;/li&gt;
&lt;li&gt;Mouseover hints for most UI elements&lt;/li&gt;
&lt;li&gt;Possible to change defaults/min/max/step values for UI elements via text config&lt;/li&gt;
&lt;li&gt;Tiling support, a checkbox to create images that can be tiled like textures&lt;/li&gt;
&lt;li&gt;Progress bar and live image generation preview
&lt;ul&gt;
&lt;li&gt;Can use a separate neural network to produce previews with almost no VRAM or compute requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Negative prompt, an extra text field that allows you to list what you don&amp;rsquo;t want to see in the generated image&lt;/li&gt;
&lt;li&gt;Styles, a way to save parts of a prompt and easily apply them via a dropdown later&lt;/li&gt;
&lt;li&gt;Variations, a way to generate the same image with tiny differences&lt;/li&gt;
&lt;li&gt;Seed resizing, a way to generate the same image at a slightly different resolution&lt;/li&gt;
&lt;li&gt;CLIP interrogator, a button that tries to guess the prompt from an image&lt;/li&gt;
&lt;li&gt;Prompt Editing, a way to change the prompt mid-generation, say, to start making a watermelon and switch to an anime girl midway&lt;/li&gt;
&lt;li&gt;Batch Processing, process a group of files using img2img&lt;/li&gt;
&lt;li&gt;Img2img Alternative, reverse Euler method of cross-attention control&lt;/li&gt;
&lt;li&gt;Highres Fix, a convenience option to produce high-resolution pictures in one click without the usual distortions&lt;/li&gt;
&lt;li&gt;Reloading checkpoints on the fly&lt;/li&gt;
&lt;li&gt;Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Custom scripts&lt;/a&gt; with many extensions from the community&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Composable-Diffusion&lt;/a&gt;, a way to use multiple prompts at once
&lt;ul&gt;
&lt;li&gt;separate prompts using uppercase &lt;code&gt;AND&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;also supports weights for prompts: &lt;code&gt;a cat :1.2 AND a dog AND a penguin :2.2&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;No token limit for prompts (original stable diffusion lets you use up to 75 tokens)&lt;/li&gt;
&lt;li&gt;DeepDanbooru integration, creates danbooru style tags for anime prompts&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;xformers&lt;/a&gt;, major speed increase for select cards (add &lt;code&gt;--xformers&lt;/code&gt; to command-line args)&lt;/li&gt;
&lt;li&gt;via extension: &lt;a class=&#34;link&#34; href=&#34;https://github.com/yfszzx/stable-diffusion-webui-images-browser&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;History tab&lt;/a&gt;: view and delete images conveniently within the UI&lt;/li&gt;
&lt;li&gt;Generate forever option&lt;/li&gt;
&lt;li&gt;Training tab
&lt;ul&gt;
&lt;li&gt;hypernetworks and embeddings options&lt;/li&gt;
&lt;li&gt;Preprocessing images: cropping, mirroring, autotagging using BLIP or DeepDanbooru (for anime)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Clip skip&lt;/li&gt;
&lt;li&gt;Hypernetworks&lt;/li&gt;
&lt;li&gt;LoRAs (similar to hypernetworks but prettier)&lt;/li&gt;
&lt;li&gt;A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt&lt;/li&gt;
&lt;li&gt;Option to load a different VAE from the settings screen&lt;/li&gt;
&lt;li&gt;Estimated completion time in progress bar&lt;/li&gt;
&lt;li&gt;API&lt;/li&gt;
&lt;li&gt;Support for dedicated &lt;a class=&#34;link&#34; href=&#34;https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;inpainting model&lt;/a&gt; by RunwayML&lt;/li&gt;
&lt;li&gt;via extension: &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Aesthetic Gradients&lt;/a&gt;, a way to generate images with a specific aesthetic by using CLIP image embeds (implementation of &lt;a class=&#34;link&#34; href=&#34;https://github.com/vicgalle/stable-diffusion-aesthetic-gradients&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/vicgalle/stable-diffusion-aesthetic-gradients&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/Stability-AI/stablediffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Stable Diffusion 2.0&lt;/a&gt; support - see &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;wiki&lt;/a&gt; for instructions&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://arxiv.org/abs/2211.06679&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Alt-Diffusion&lt;/a&gt; support - see &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;wiki&lt;/a&gt; for instructions&lt;/li&gt;
&lt;li&gt;Now without any bad letters!&lt;/li&gt;
&lt;li&gt;Load checkpoints in safetensors format&lt;/li&gt;
&lt;li&gt;Eased resolution restriction: generated image&amp;rsquo;s dimensions must be a multiple of 8 rather than 64&lt;/li&gt;
&lt;li&gt;Now with a license!&lt;/li&gt;
&lt;li&gt;Reorder elements in the UI from settings screen&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://huggingface.co/segmind/SSD-1B&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Segmind Stable Diffusion&lt;/a&gt; support&lt;/li&gt;
&lt;/ul&gt;
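&lt;p&gt;The built-in API listed above is served when the webui is launched with the &lt;code&gt;--api&lt;/code&gt; command-line flag. A minimal sketch of a txt2img request follows; the endpoint path and JSON fields are assumptions based on the project wiki and may differ between versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Hypothetical example: query a locally running webui started with --api
curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H &#34;Content-Type: application/json&#34; \
  -d &#39;{&#34;prompt&#34;: &#34;a man in a ((tuxedo))&#34;, &#34;steps&#34;: 20}&#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The response is JSON containing base64-encoded images; consult the project wiki for the full schema.&lt;/p&gt;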
&lt;h2 id=&#34;installation-and-running&#34;&gt;Installation and Running
&lt;/h2&gt;&lt;p&gt;Make sure the required &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;dependencies&lt;/a&gt; are met and follow the instructions available for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;NVidia&lt;/a&gt; (recommended)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;AMD&lt;/a&gt; GPUs&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Intel CPUs, Intel GPUs (both integrated and discrete)&lt;/a&gt; (external wiki page)&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/wangshuai09/stable-diffusion-webui/wiki/Install-and-run-on-Ascend-NPUs&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Ascend NPUs&lt;/a&gt; (external wiki page)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Alternatively, use online services (like Google Colab):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;List of Online Services&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;installation-on-windows-1011-with-nvidia-gpus-using-release-package&#34;&gt;Installation on Windows 10/11 with NVidia GPUs using release package
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;Download &lt;code&gt;sd.webui.zip&lt;/code&gt; from &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;v1.0.0-pre&lt;/a&gt; and extract its contents.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;update.bat&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;run.bat&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;For more details see &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Install-and-Run-on-NVidia-GPUs&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id=&#34;automatic-installation-on-windows&#34;&gt;Automatic Installation on Windows
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;Install &lt;a class=&#34;link&#34; href=&#34;https://www.python.org/downloads/release/python-3106/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Python 3.10.6&lt;/a&gt; (newer versions of Python do not support torch), checking &amp;ldquo;Add Python to PATH&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;Install &lt;a class=&#34;link&#34; href=&#34;https://git-scm.com/download/win&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;git&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Download the stable-diffusion-webui repository, for example by running &lt;code&gt;git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;webui-user.bat&lt;/code&gt; from Windows Explorer as a normal, non-administrator user.&lt;/li&gt;
&lt;/ol&gt;
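&lt;p&gt;Taken together, steps 3 and 4 above amount to the following (a sketch, assuming default paths and a standard command prompt):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
webui-user.bat
&lt;/code&gt;&lt;/pre&gt;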
&lt;h3 id=&#34;automatic-installation-on-linux&#34;&gt;Automatic Installation on Linux
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;Install the dependencies:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;8
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Debian-based:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Red Hat-based:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo dnf install wget git python3 gperftools-libs libglvnd-glx
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# openSUSE-based:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo zypper install wget git python3 libtcmalloc4 libglvnd
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Arch-based:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo pacman -S wget git python3
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;If your system is very new, you may need to install python3.11 or python3.10:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt; 1
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 2
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 3
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 4
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 5
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 6
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 7
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 8
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt; 9
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;10
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;11
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;12
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;13
&lt;/span&gt;&lt;span class=&#34;lnt&#34;&gt;14
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Ubuntu 24.04&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo add-apt-repository ppa:deadsnakes/ppa
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo apt update
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo apt install python3.11
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Manjaro/Arch&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;sudo pacman -S yay
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;yay -S python311 &lt;span class=&#34;c1&#34;&gt;# do not confuse with python3.11 package&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Only for 3.11&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# Then set up env variable in launch script&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;export&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;python_cmd&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;python3.11&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c1&#34;&gt;# or in webui-user.sh&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nv&#34;&gt;python_cmd&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;python3.11&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Navigate to the directory in which you would like the webui to be installed and execute the following command:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Or just clone the repo wherever you want:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;div class=&#34;chroma&#34;&gt;
&lt;table class=&#34;lntable&#34;&gt;&lt;tr&gt;&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code&gt;&lt;span class=&#34;lnt&#34;&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class=&#34;lntd&#34;&gt;
&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Run &lt;code&gt;webui.sh&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Check &lt;code&gt;webui-user.sh&lt;/code&gt; for options.&lt;/li&gt;
&lt;/ol&gt;
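&lt;p&gt;For step 4, &lt;code&gt;webui-user.sh&lt;/code&gt; is the usual place to set launch options. A minimal sketch; the &lt;code&gt;COMMANDLINE_ARGS&lt;/code&gt; variable name follows the script&amp;rsquo;s own template, and the flag shown is the &lt;code&gt;--xformers&lt;/code&gt; option mentioned in the feature list:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# in webui-user.sh: flags here are appended to the launch command
export COMMANDLINE_ARGS=&#34;--xformers&#34;
&lt;/code&gt;&lt;/pre&gt;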
&lt;h3 id=&#34;installation-on-apple-silicon&#34;&gt;Installation on Apple Silicon
&lt;/h3&gt;&lt;p&gt;Find the instructions &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;contributing&#34;&gt;Contributing
&lt;/h2&gt;&lt;p&gt;Here&amp;rsquo;s how to add code to this repo: &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Contributing&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;documentation&#34;&gt;Documentation
&lt;/h2&gt;&lt;p&gt;The documentation was moved from this README over to the project&amp;rsquo;s &lt;a class=&#34;link&#34; href=&#34;https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;wiki&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the purposes of getting Google and other search engines to crawl the wiki, here&amp;rsquo;s a link to the (not for humans) &lt;a class=&#34;link&#34; href=&#34;https://github-wiki-see.page/m/AUTOMATIC1111/stable-diffusion-webui/wiki&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;crawlable wiki&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;credits&#34;&gt;Credits
&lt;/h2&gt;&lt;p&gt;Licenses for borrowed code can be found on the &lt;code&gt;Settings -&amp;gt; Licenses&lt;/code&gt; screen, and also in the &lt;code&gt;html/licenses.html&lt;/code&gt; file.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Stable Diffusion - &lt;a class=&#34;link&#34; href=&#34;https://github.com/Stability-AI/stablediffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Stability-AI/stablediffusion&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://github.com/CompVis/taming-transformers&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/CompVis/taming-transformers&lt;/a&gt;, &lt;a class=&#34;link&#34; href=&#34;https://github.com/mcmonkey4eva/sd3-ref&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/mcmonkey4eva/sd3-ref&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;k-diffusion - &lt;a class=&#34;link&#34; href=&#34;https://github.com/crowsonkb/k-diffusion.git&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/crowsonkb/k-diffusion.git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Spandrel - &lt;a class=&#34;link&#34; href=&#34;https://github.com/chaiNNer-org/spandrel&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/chaiNNer-org/spandrel&lt;/a&gt; implementing
&lt;ul&gt;
&lt;li&gt;GFPGAN - &lt;a class=&#34;link&#34; href=&#34;https://github.com/TencentARC/GFPGAN.git&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/TencentARC/GFPGAN.git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CodeFormer - &lt;a class=&#34;link&#34; href=&#34;https://github.com/sczhou/CodeFormer&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/sczhou/CodeFormer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ESRGAN - &lt;a class=&#34;link&#34; href=&#34;https://github.com/xinntao/ESRGAN&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/xinntao/ESRGAN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SwinIR - &lt;a class=&#34;link&#34; href=&#34;https://github.com/JingyunLiang/SwinIR&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/JingyunLiang/SwinIR&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Swin2SR - &lt;a class=&#34;link&#34; href=&#34;https://github.com/mv-lab/swin2sr&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/mv-lab/swin2sr&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;LDSR - &lt;a class=&#34;link&#34; href=&#34;https://github.com/Hafiidz/latent-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Hafiidz/latent-diffusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MiDaS - &lt;a class=&#34;link&#34; href=&#34;https://github.com/isl-org/MiDaS&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/isl-org/MiDaS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ideas for optimizations - &lt;a class=&#34;link&#34; href=&#34;https://github.com/basujindal/stable-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/basujindal/stable-diffusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Cross Attention layer optimization - Doggettx - &lt;a class=&#34;link&#34; href=&#34;https://github.com/Doggettx/stable-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Doggettx/stable-diffusion&lt;/a&gt;, original idea for prompt editing.&lt;/li&gt;
&lt;li&gt;Cross Attention layer optimization - InvokeAI, lstein - &lt;a class=&#34;link&#34; href=&#34;https://github.com/invoke-ai/InvokeAI&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/invoke-ai/InvokeAI&lt;/a&gt; (originally &lt;a class=&#34;link&#34; href=&#34;http://github.com/lstein/stable-diffusion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://github.com/lstein/stable-diffusion&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Sub-quadratic Cross Attention layer optimization - Alex Birch (&lt;a class=&#34;link&#34; href=&#34;https://github.com/Birch-san/diffusers/pull/1%29&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Birch-san/diffusers/pull/1)&lt;/a&gt;, Amin Rezaei (&lt;a class=&#34;link&#34; href=&#34;https://github.com/AminRezaei0x443/memory-efficient-attention&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/AminRezaei0x443/memory-efficient-attention&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Textual Inversion - Rinon Gal - &lt;a class=&#34;link&#34; href=&#34;https://github.com/rinongal/textual_inversion&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/rinongal/textual_inversion&lt;/a&gt; (we&amp;rsquo;re not using his code, but we are using his ideas).&lt;/li&gt;
&lt;li&gt;Idea for SD upscale - &lt;a class=&#34;link&#34; href=&#34;https://github.com/jquesnelle/txt2imghd&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/jquesnelle/txt2imghd&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Noise generation for outpainting mk2 - &lt;a class=&#34;link&#34; href=&#34;https://github.com/parlance-zz/g-diffuser-bot&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/parlance-zz/g-diffuser-bot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CLIP interrogator idea and borrowing some code - &lt;a class=&#34;link&#34; href=&#34;https://github.com/pharmapsychotic/clip-interrogator&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/pharmapsychotic/clip-interrogator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Idea for Composable Diffusion - &lt;a class=&#34;link&#34; href=&#34;https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;xformers - &lt;a class=&#34;link&#34; href=&#34;https://github.com/facebookresearch/xformers&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/facebookresearch/xformers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;DeepDanbooru - interrogator for anime diffusers &lt;a class=&#34;link&#34; href=&#34;https://github.com/KichangKim/DeepDanbooru&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/KichangKim/DeepDanbooru&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (&lt;a class=&#34;link&#34; href=&#34;https://github.com/Birch-san/diffusers-play/tree/92feee6&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Birch-san/diffusers-play/tree/92feee6&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - &lt;a class=&#34;link&#34; href=&#34;https://github.com/timothybrooks/instruct-pix2pix&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/timothybrooks/instruct-pix2pix&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Security advice - RyotaK&lt;/li&gt;
&lt;li&gt;UniPC sampler - Wenliang Zhao - &lt;a class=&#34;link&#34; href=&#34;https://github.com/wl-zhao/UniPC&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/wl-zhao/UniPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;TAESD - Ollin Boer Bohan - &lt;a class=&#34;link&#34; href=&#34;https://github.com/madebyollin/taesd&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/madebyollin/taesd&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LyCORIS - KohakuBlueleaf&lt;/li&gt;
&lt;li&gt;Restart sampling - lambertae - &lt;a class=&#34;link&#34; href=&#34;https://github.com/Newbeeer/diffusion_restart_sampling&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/Newbeeer/diffusion_restart_sampling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hypertile - tfernd - &lt;a class=&#34;link&#34; href=&#34;https://github.com/tfernd/HyperTile&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/tfernd/HyperTile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.&lt;/li&gt;
&lt;li&gt;(You)&lt;/li&gt;
&lt;/ul&gt;
</description>
        </item>
        
    </channel>
</rss>
