Installing ComfyUI
Introduction
This tutorial walks you through installing ComfyUI on your local machine. We’ll cover installation on Windows, Linux, and Mac, plus initial configuration and downloading your first model.
What You’ll Need
- A computer with a decent GPU (NVIDIA recommended, 8GB+ VRAM)
- Python 3.10+ installed
- Git installed
- At least 20GB free disk space
What You’ll Learn
- Cloning the ComfyUI repository
- Setting up a Python virtual environment
- Installing dependencies
- Downloading and placing models
- Running ComfyUI for the first time
- Basic troubleshooting
Prerequisites Check
Before we start, verify you have:
GPU Requirements:
- NVIDIA GPU with 8GB+ VRAM (recommended)
- AMD or Apple Silicon can work but with limitations
- Intel Arc GPUs are experimentally supported
Check GPU VRAM:
# Windows (Command Prompt)
nvidia-smi
# Or check in Task Manager > Performance > GPU
- Python 3.10+:
python --version
# Should show Python 3.10.x or higher
- Git installed:
git --version
Don’t have Python/Git?
- Python: Download from python.org (get 3.11.x)
- Git: Download from git-scm.com
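The checks above can be scripted so you run them in one go. A minimal sketch using only the standard library; the 20GB threshold mirrors the disk-space requirement listed earlier, and `nvidia-smi` will simply be absent on non-NVIDIA systems:

```python
import shutil
import sys

def check_prerequisites(min_disk_gb=20, path="."):
    """Return a dict of prerequisite-name -> bool for the checks above."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return {
        "python_3_10_plus": sys.version_info >= (3, 10),
        "git_installed": shutil.which("git") is not None,
        "nvidia_smi_found": shutil.which("nvidia-smi") is not None,  # NVIDIA driver present
        "disk_space_ok": free_gb >= min_disk_gb,
    }

if __name__ == "__main__":
    for name, ok in check_prerequisites().items():
        print(f"{'OK     ' if ok else 'MISSING'} {name}")
```

A `MISSING` line tells you which download link above to revisit before continuing.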
Installation Method 1: Automatic (Recommended)
Windows
- Download the portable version:
  - Visit ComfyUI GitHub releases
  - Download ComfyUI_windows_portable_nvidia_cu121_or_cpu.7z
- Extract and run:
  - Extract the 7z file to C:\ComfyUI\
  - Double-click run_nvidia_gpu.bat (or run_cpu.bat if no GPU)
- First launch:
  - A browser window will open to http://127.0.0.1:8188
  - You’ll see an empty node graph - that’s normal!
Mac (Apple Silicon)
- Clone the repository:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
- Install dependencies:
pip3 install torch torchvision torchaudio
pip3 install -r requirements.txt
- Run ComfyUI:
python3 main.py
Linux
- Clone and setup:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
# Create virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate
# Install PyTorch (adjust for your CUDA version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Install requirements
pip install -r requirements.txt
- Run ComfyUI:
python main.py
Installation Method 2: Manual Setup (Advanced)
If the automatic method fails or you want more control:
Windows Manual Setup
- Clone ComfyUI:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
- Create virtual environment:
python -m venv venv
venv\Scripts\activate
- Install PyTorch (check pytorch.org for latest):
# For NVIDIA GPUs with CUDA 12.1
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# For CPU only
pip install torch torchvision torchaudio
- Install dependencies:
pip install -r requirements.txt
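After `pip install -r requirements.txt` finishes, you can sanity-check that the core packages resolve without actually importing them (importing torch is slow). A sketch; the three package names checked here are just the ones installed explicitly above:

```python
import importlib.util

def missing_packages(packages):
    """Return the subset of packages that cannot be found on sys.path."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

if __name__ == "__main__":
    # torch/torchvision/torchaudio are installed explicitly in the steps above
    missing = missing_packages(["torch", "torchvision", "torchaudio"])
    if missing:
        print("Missing:", ", ".join(missing), "- re-run the pip commands above")
    else:
        print("All core packages found")
```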
Downloading Your First Model
ComfyUI needs AI models to generate images. Here’s how to get started:
- Create models directory (if not exists):
ComfyUI/
├── models/
│ ├── checkpoints/ ← Main models go here
│ ├── vae/ ← VAE files
│ ├── clip/ ← CLIP models
│ └── controlnet/ ← ControlNet models
- Download a base model:
Option A: Stable Diffusion 1.5 (4GB, good for beginners):
- Download: v1-5-pruned-emaonly.safetensors (note: sd-v1-5-inpainting.ckpt is the inpainting variant, not the general-purpose base model)
- Place in:
ComfyUI/models/checkpoints/
Option B: SDXL (7GB, better quality):
- Download: sd_xl_base_1.0.safetensors
- Place in:
ComfyUI/models/checkpoints/
- Download VAE (improves image quality):
- Download: vae-ft-mse-840000-ema-pruned.safetensors
- Place in:
ComfyUI/models/vae/
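The folder layout shown above can be created in one step, and you can verify that a downloaded checkpoint landed where ComfyUI will look for it. A minimal sketch; `root` is wherever you cloned or extracted ComfyUI:

```python
from pathlib import Path

SUBDIRS = ["checkpoints", "vae", "clip", "controlnet"]

def make_model_dirs(root):
    """Create ComfyUI/models/<subdir> for each model type shown above."""
    created = []
    for name in SUBDIRS:
        d = Path(root) / "models" / name
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created

def list_checkpoints(root):
    """List the model files ComfyUI would pick up from models/checkpoints."""
    ckpt_dir = Path(root) / "models" / "checkpoints"
    return sorted(p.name for p in ckpt_dir.glob("*")
                  if p.suffix in {".safetensors", ".ckpt"})
```

If `list_checkpoints` comes back empty after a download, the file is in the wrong folder — a common cause of the “Model not found” error covered below.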
First Run & Testing
- Start ComfyUI:
# Windows portable: run_nvidia_gpu.bat
# Manual setup: python main.py
- Open browser: Navigate to http://127.0.0.1:8188
- Load default workflow:
  - You should see nodes connected with lines
  - If empty, drag & drop an example workflow image onto the canvas (ComfyUI reads the workflow embedded in the PNGs it saves)
- Basic workflow setup:
  - Load Checkpoint node: Select your downloaded model
  - CLIP Text Encode nodes: Add your prompts
  - KSampler: Keep default settings for now
  - VAE Decode: Uses the checkpoint’s built-in VAE unless you wire in a Load VAE node with your downloaded VAE
- Generate your first image:
  - Click the “Queue Prompt” button
  - Watch the progress in the terminal
  - Image appears in the Save Image node
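Queue Prompt can also be driven programmatically: the ComfyUI server accepts an API-format workflow JSON posted to its /prompt endpoint (export one via Save (API Format), visible after enabling the dev-mode option in settings). A hedged sketch — endpoint behavior may vary across versions, and the workflow file itself is up to you:

```python
import json
import urllib.request

def build_payload(workflow, client_id=None):
    """Wrap an API-format workflow dict in the body /prompt expects."""
    body = {"prompt": workflow}
    if client_id is not None:
        body["client_id"] = client_id
    return json.dumps(body).encode("utf-8")

def queue_prompt(workflow_path, host="127.0.0.1", port=8188):
    """Load an exported workflow JSON and queue it on a running ComfyUI server."""
    with open(workflow_path, encoding="utf-8") as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the queued prompt id
```

This is handy for batch runs once you have a workflow you like; the image still lands in ComfyUI's output folder as if you had clicked Queue Prompt.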
Troubleshooting Common Issues
“No module named ‘torch’”
# Reinstall PyTorch
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
“CUDA out of memory”
- Lower resolution: Try 512x512 instead of 1024x1024
- Reduce batch size: Set to 1 in KSampler
- Fall back to CPU: Add the --cpu flag when starting (slow, but avoids VRAM limits entirely; try --lowvram first)
”Model not found”
- Check file path: Models must be in correct folders
- File permissions: Make sure ComfyUI can read the files
- Refresh browser: Sometimes needs a refresh to see new models
Slow generation
- Check GPU usage: nvidia-smi should show ComfyUI using the GPU
- VRAM issues: Close other applications using the GPU
- Model size: Larger models = slower generation
Can’t access web interface
- Check port: Default is 8188, look for “Starting server” message
- Firewall: Windows may block the connection
- Try different browser: Chrome/Firefox work best
Performance Optimization
GPU Acceleration
# Check if CUDA is available
python -c "import torch; print(torch.cuda.is_available())"
# Should print: True
Memory Management
- --lowvram: For 6-8GB VRAM cards
- --normalvram: For 8-12GB VRAM cards
- --highvram: For 12GB+ VRAM cards
- --cpu: Force CPU generation
Example:
python main.py --lowvram
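The flag table above can be encoded in a small helper if you script your launches. A sketch — the GB cutoffs simply mirror the list above, and you still pass the returned flag to main.py yourself:

```python
def vram_flag(vram_gb):
    """Map detected VRAM (in GB) to the ComfyUI memory flag suggested above."""
    if vram_gb is None or vram_gb <= 0:
        return "--cpu"        # no usable GPU: force CPU generation
    if vram_gb < 8:
        return "--lowvram"    # 6-8GB cards
    if vram_gb < 12:
        return "--normalvram" # 8-12GB cards
    return "--highvram"       # 12GB+ cards

# e.g. an 8GB card: python main.py --normalvram
```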
Storage Tips
- Use .safetensors: Safer and faster than .ckpt files
- SSD storage: Put models on SSD for faster loading
- Symlinks: Link to models stored elsewhere to save space
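The symlink tip can be done from Python as well as the shell (on Windows, creating symlinks may require Developer Mode or admin rights). A sketch assuming your models live on another drive; the path names are placeholders:

```python
from pathlib import Path

def link_external_models(comfy_root, external_dir, subdir="checkpoints"):
    """Symlink an external model folder into ComfyUI's models directory."""
    target = Path(external_dir).resolve()
    link = Path(comfy_root) / "models" / subdir
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.exists() or link.is_symlink():
        raise FileExistsError(f"{link} already exists; move or remove it first")
    link.symlink_to(target, target_is_directory=True)
    return link
```

ComfyUI follows the link transparently, so models stored on a big external drive show up in the checkpoint dropdown without copying.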
What’s Next?
Now that ComfyUI is installed:
- Learn the basics: Check out “Intro to ComfyUI” tutorial
- Download more models: Explore Civitai and HuggingFace
- Install custom nodes: Expand ComfyUI’s capabilities
- Join communities: ComfyUI Discord, Reddit r/comfyui
Popular Model Recommendations
Photorealistic:
- Realistic Vision XL
- RealESRGAN (for upscaling)
Anime/Artistic:
- Anything V3/V5
- Waifu Diffusion
ControlNet (for precise control):
- sd-controlnet-canny
- sd-controlnet-openpose
Congratulations! 🎉 You now have ComfyUI running locally. This is your gateway to unlimited AI image generation without monthly subscriptions or cloud dependencies.