[Interactive concept map: segments cover Getting started, Prompting, Composition, Editing, Finishing, Settings, Capturing concepts, Fine tuning, Training, Mixing, Image2text, and Models, each linking to a brief explanation and resources.]
sdtools.org v1.7

SDTools

There are plenty of pages explaining how Stable Diffusion works. This is essentially a mini wiki or cheat sheet: clicking on a segment provides a very brief explanation and relevant links. The purpose of this mini wiki is to address two simple questions:
Why am I unable to generate the exact image I want?
What tools could help me reach my goal?
This page introduces you to some tools you may find helpful in crafting your images. The focus is on how to obtain what you want rather than how it works. There are so many good resources out there that we mostly point to them, with a little filler text here and there.

If you are just starting out, look at Fooocus. If you want more control but don't want to bother tweaking your interface, use SD.Next. If you want to customise your interface with lots of extra add-ons and want to be on the bleeding edge, and don't mind a few things breaking, use Auto1111. If you love inpainting, outpainting, and a great UI and canvas, use InvokeAI. Finally, if you want advanced pipelines and workflows and love node-based interfaces, use ComfyUI.

The next step is to work out whether you can run models locally or need to use the cloud. Open your system information and check the name of your graphics card; for instance, the computer I am typing this on shows a GeForce RTX 3050 Ti. Googling your graphics card will tell you how much VRAM it has. On this device it is 4GB, which is just enough to run Stable Diffusion locally. If you have 4GB of VRAM (not RAM) or more, you can install and run Stable Diffusion locally. If not, you will have to use a cloud provider, for instance a template from Runpod or Vast.ai.
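If you already have Python and PyTorch installed, you can also check your VRAM programmatically. This is a minimal sketch, assuming an NVIDIA GPU and a CUDA-enabled build of torch; the 4GB threshold simply mirrors the rule of thumb above.

    import torch

    if torch.cuda.is_available():
        # Query the first GPU; total_memory is reported in bytes
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / 1024**3
        print(f"{props.name}: {vram_gb:.1f} GB VRAM")
        if vram_gb >= 4:
            print("Enough VRAM to run Stable Diffusion locally.")
        else:
            print("Consider a cloud provider such as Runpod or Vast.ai.")
    else:
        print("No CUDA GPU detected; consider a cloud provider.")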

NB: Bigger area does not mean more important! We have plenty of gaps in what we capture; things are moving fast! Thank you to the many people on Reddit who have made suggestions; keep them coming!