Installing ComfyUI

Introduction

This tutorial walks you through installing ComfyUI on your local machine. We’ll cover installation on Windows, Linux, and Mac, plus initial configuration and downloading your first model.

What You’ll Need

  • A computer with a dedicated GPU (NVIDIA recommended, 8GB+ VRAM)
  • Python 3.10+ installed
  • Git installed
  • At least 20GB free disk space

What You’ll Learn

  • Cloning the ComfyUI repository
  • Setting up a Python virtual environment
  • Installing dependencies
  • Downloading and placing models
  • Running ComfyUI for the first time
  • Basic troubleshooting

Prerequisites Check

Before we start, verify you have:

  1. GPU Requirements:

    • NVIDIA GPU with 8GB+ VRAM (recommended)
    • AMD or Apple Silicon can work but with limitations
    • Intel Arc GPUs are experimentally supported
  2. Check GPU VRAM:

# Windows (Command Prompt)
nvidia-smi

# Or check in Task Manager > Performance > GPU
  3. Python 3.10+:
python --version
# Should show Python 3.10.x or higher
  4. Git installed:
git --version
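The individual checks above can be combined into one loop (assumes a POSIX shell; on Windows, run it in Git Bash or check each command individually):

```shell
# Report OK/MISSING for each prerequisite command.
for cmd in python3 git nvidia-smi; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: OK"
  else
    echo "$cmd: MISSING"
  fi
done
```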

Don’t have Python or Git? Install Python from python.org (tick “Add to PATH” during the Windows install) and Git from git-scm.com before continuing.

Installation Method 1: Quick Install

Windows (Portable)

  1. Download the portable version: get the latest ComfyUI_windows_portable .7z archive from the ComfyUI GitHub releases page.

  2. Extract and run:

    • Extract the 7z file to C:\ComfyUI\
    • Double-click run_nvidia_gpu.bat (or run_cpu.bat if no GPU)
  3. First launch:

    • A browser window will open to http://127.0.0.1:8188
    • You’ll see an empty node graph - that’s normal!
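If no browser window opens, you can confirm the server is listening from a second terminal (curl ships with Windows 10+ and macOS):

```shell
# Print the HTTP status code from the local ComfyUI server.
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8188
```

A response of 200 means the web interface is up and ready.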

Mac (Apple Silicon)

  1. Clone the repository:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
  2. Install dependencies:
pip3 install torch torchvision torchaudio
pip3 install -r requirements.txt
  3. Run ComfyUI:
python3 main.py
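On Apple Silicon, PyTorch runs on the GPU through the MPS backend. Before launching, you can verify it is available (a quick check, assuming the pip install above succeeded):

```shell
# Verify PyTorch can use the Apple GPU via the MPS backend.
python3 -c "import torch; print(torch.backends.mps.is_available())"
```

On an Apple Silicon Mac with a recent PyTorch, this should print True; if it prints False, generation will fall back to the (much slower) CPU.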

Linux

  1. Clone and setup:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Create virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate

# Install PyTorch (adjust for your CUDA version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Install requirements
pip install -r requirements.txt
  2. Run ComfyUI:
python main.py

Installation Method 2: Manual Setup (Advanced)

If the automatic method fails or you want more control:

Windows Manual Setup

  1. Clone ComfyUI:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
  2. Create virtual environment:
python -m venv venv
venv\Scripts\activate
  3. Install PyTorch (check pytorch.org for latest):
# For NVIDIA GPUs with CUDA 12.1
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# For CPU only
pip install torch torchvision torchaudio
  4. Install dependencies:
pip install -r requirements.txt

Downloading Your First Model

ComfyUI needs AI models to generate images. Here’s how to get started:

  1. Create the models directories (if they don’t already exist):
ComfyUI/
├── models/
│   ├── checkpoints/     ← Main models go here
│   ├── vae/            ← VAE files
│   ├── clip/           ← CLIP models
│   └── controlnet/     ← ControlNet models
  2. Download a base model:

Option A: Stable Diffusion 1.5 (~4GB, good for beginners) - download the .safetensors file from Hugging Face or Civitai and place it in models/checkpoints/.

Option B: SDXL (~7GB, better quality) - also on Hugging Face; place it in models/checkpoints/.

  3. Download a VAE (improves image quality): place the .safetensors file in models/vae/.
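If you prefer the command line, the folder setup and download can be scripted from inside the ComfyUI directory. The URL below is a placeholder, not a real link — substitute the actual .safetensors download link from the model’s Hugging Face or Civitai page:

```shell
# Create the model folders ComfyUI expects (safe to re-run; -p skips existing dirs).
mkdir -p models/checkpoints models/vae models/clip models/controlnet

# Download a checkpoint straight into place (placeholder URL - replace it).
wget -O models/checkpoints/model.safetensors \
  "https://example.com/path/to/model.safetensors"
```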

First Run & Testing

  1. Start ComfyUI:
# Windows portable: run_nvidia_gpu.bat
# Manual setup: python main.py
  2. Open browser: Navigate to http://127.0.0.1:8188

  3. Load default workflow:

    • You should see nodes connected with lines
    • If empty, click “Load Default” in the menu to load the starter workflow
  4. Basic workflow setup:

    • Load Checkpoint node: Select your downloaded model
    • CLIP Text Encode nodes: Add your prompts
    • KSampler: Keep default settings for now
    • VAE Decode: uses the VAE from your checkpoint by default (add a Load VAE node to use a separate VAE file)
  5. Generate your first image:

    • Click “Queue Prompt” button
    • Watch the progress in the terminal
    • Image appears in the Save Image node

Troubleshooting Common Issues

“No module named ‘torch’”

# Reinstall PyTorch
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

“CUDA out of memory”

  1. Lower resolution: Try 512x512 instead of 1024x1024
  2. Reduce batch size: Set to 1 in KSampler
  3. Use low-VRAM mode: Add the --lowvram flag when starting (--cpu forces CPU-only generation, a slow last resort)

“Model not found”

  1. Check file path: Models must be in correct folders
  2. File permissions: Make sure ComfyUI can read the files
  3. Refresh browser: Sometimes needs a refresh to see new models

Slow generation

  1. Check GPU usage: nvidia-smi should show ComfyUI using GPU
  2. VRAM issues: Close other applications using GPU
  3. Model size: Larger models = slower generation

Can’t access web interface

  1. Check port: Default is 8188, look for “Starting server” message
  2. Firewall: Windows may block the connection
  3. Try different browser: Chrome/Firefox work best
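If the default port is blocked or already in use, ComfyUI’s launcher accepts --port and --listen flags (run python main.py --help to confirm the options on your version):

```shell
# Serve on a different port
python main.py --port 8189

# Allow access from other machines on your network
python main.py --listen 0.0.0.0 --port 8188
```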

Performance Optimization

GPU Acceleration

# Check if CUDA is available
python -c "import torch; print(torch.cuda.is_available())"
# Should print: True

Memory Management

  • --lowvram: For 6-8GB VRAM cards
  • --normalvram: For 8-12GB VRAM cards
  • --highvram: For 12GB+ VRAM cards
  • --cpu: Force CPU generation

Example:

python main.py --lowvram

Storage Tips

  • Use .safetensors: Safer and faster than .ckpt files
  • SSD storage: Put models on SSD for faster loading
  • Symlinks: Link to models stored elsewhere to save space
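For example, if your models live on a larger drive, a symlink lets ComfyUI find them without copying (the paths below are illustrative; the link target must not already exist as a real folder):

```shell
# Linux/Mac: link an external checkpoints folder into ComfyUI
ln -s /mnt/bigdrive/models/checkpoints ComfyUI/models/checkpoints

# Windows equivalent (run Command Prompt as Administrator):
# mklink /D C:\ComfyUI\models\checkpoints D:\models\checkpoints
```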

What’s Next?

Now that ComfyUI is installed:

  1. Learn the basics: Check out “Intro to ComfyUI” tutorial
  2. Download more models: Explore Civitai and HuggingFace
  3. Install custom nodes: Expand ComfyUI’s capabilities
  4. Join communities: ComfyUI Discord, Reddit r/comfyui

Recommended Models

Photorealistic:

  • Realistic Vision XL
  • RealESRGAN (for upscaling)

Anime/Artistic:

  • Anything V3/V5
  • Waifu Diffusion

ControlNet (for precise control):

  • sd-controlnet-canny
  • sd-controlnet-openpose

Congratulations! 🎉 You now have ComfyUI running locally. This is your gateway to unlimited AI image generation without monthly subscriptions or cloud dependencies.