Hi everyone, I'm Nexmoe, the daydreamer. I recently open-sourced an elastic serverless ComfyUI application packaged with Docker, featuring a fully separated front-end and back-end architecture and a user-friendly interface.
Once a ComfyUI workflow is finished, deploying it to a production environment can be tricky, so I have open-sourced a set of examples for everyone to learn from.
Demo: https://hadoop.nexmoe.com/
Open source address: https://github.com/nexmoe/serverless-comfyui
Project Features#
- 🐳 Complete Docker deployment solution
- 🎨 Modern front-end interface
- 🔌 Modular back-end architecture
- 🛠 Simple configuration and usage
Architecture Diagram#
Project Structure#
comfy-docker/
├── frontend/ # Next.js front-end project
│ ├── src/ # Source code
│ └── .env # Environment configuration
├── backend/ # ComfyUI back-end
│ ├── checkpoints/ # Model checkpoints
│ ├── controlnet/ # ControlNet models
│ ├── custom_nodes/ # Custom nodes
│ └── loras/ # LoRA models
└── bruno/ # API test files
The directory structure of backend/ is as follows; models and custom nodes need to be downloaded and installed manually.
.
├── Dockerfile
├── checkpoints
│ └── dreamshaperXL_sfwV2TurboDPMSDE.safetensors
├── controlnet
│ ├── sai_xl_canny_256lora.safetensors
│ └── sai_xl_depth_256lora.safetensors
├── custom_nodes
│ ├── ComfyUI-Custom-Scripts
│ ├── ComfyUI-WD14-Tagger
│ ├── ComfyUI_Comfyroll_CustomNodes
│ ├── comfyui-art-venture
│ └── comfyui_controlnet_aux
├── docker-compose.yml
├── loras
│ └── StudioGhibli.Redmond-StdGBRRedmAF-StudioGhibli.safetensors
├── provisioning.sh # Custom script
└── sanhua.json # Workflow
Environment Requirements#
- Docker & Docker Compose
- NVIDIA GPU (the demo workflow currently requires more than 12 GB of VRAM)
- Sufficient disk space (100–200 GB) for storing models
Quick Start#
Local Testing of Backend#
- Navigate to the backend Dockerfile directory
cd backend
- Download model files
Please refer to: https://gongjiyun.com/docs/tutorials/comfyui.html#%E4%B8%8B%E8%BD%BD%E6%A8%A1%E5%9E%8B%E5%92%8C%E6%89%A9%E5%B1%95
- Build the Docker image
docker build -t gongji/comfyui:0.1 .
- Run the Docker container
docker run -it --rm --gpus all -p 3000:3000 -p 8188:8188 --name comfyui gongji/comfyui:0.1
After the container starts, you can access:
- ComfyUI interface: http://localhost:8188
- API interface: http://localhost:3000/docs
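If you want to confirm programmatically that both endpoints are reachable before wiring up the front end, a minimal TypeScript sketch like the one below works; it assumes the ports mapped in the docker run command above and uses ComfyUI's standard /system_stats endpoint.
// check-backend.ts — a minimal reachability sketch (assumes the ports mapped above)
async function checkBackend() {
  // ComfyUI exposes a /system_stats endpoint on its web port (8188 here)
  const comfy = await fetch("http://localhost:8188/system_stats");
  if (!comfy.ok) throw new Error(`ComfyUI not ready: ${comfy.status}`);
  console.log("ComfyUI system stats:", await comfy.json());

  // The API docs page from the step above (port 3000)
  const api = await fetch("http://localhost:3000/docs");
  console.log("API docs reachable:", api.ok);
}

checkBackend().catch((err) => {
  console.error(err);
  process.exit(1);
});
You can run this with a recent Node.js (18+ ships a built-in fetch), for example via npx tsx check-backend.ts, or simply open the two URLs in a browser.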
Local Testing of Frontend#
- Navigate to the frontend directory
cd frontend
- Configure environment variables
cp .env.example .env
# Edit the .env file to configure necessary environment variables
- Install dependencies and start
pnpm install
pnpm dev
Deploying the ComfyUI Docker Image to a Serverless Elastic Platform#
Please refer to Gongji Technology's ComfyUI deployment documentation
API Documentation#
The project uses Bruno for API testing and documentation management; the related files are located in the bruno/ directory.
Example of ComfyUI API Call#
Here is example code for calling the ComfyUI API (see frontend/src/app/api/route.ts):
async function generateImage(imageUrl: string) {
  // 1. Prepare prompt data (promptob is the base workflow prompt imported from a JSON file)
  const promptData = { ...promptob };
  promptData.prompt["30"].inputs.image = imageUrl; // Modify the input image node

  // 2. Set request options
  const url = `${process.env.GONGJI_ENDPOINT}/prompt`;
  const options = {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(promptData)
  };

  // 3. Send request
  const response = await fetch(url, options);

  // 4. Error handling: check the status before parsing the body
  if (!response.ok) {
    throw new Error(response.statusText);
  }

  // 5. Process returned image data
  const data = await response.json();
  if (data.images && data.images.length > 0) {
    return data.images[0]; // Return base64-format image data
  }
  throw new Error('No valid image data returned');
}
Main steps explained:
- Prepare the prompt:
  - Import the base prompt configuration from a JSON file
  - Modify parameters in the prompt as needed (e.g., the input image)
- Send the request:
  - Use the POST method
  - Set Content-Type to application/json
  - Send the serialized prompt data as the request body
- Handle the response:
  - Check the response status code
  - Parse the returned JSON data
  - Extract the generated image (base64 format)
- Error handling:
  - Log error messages
  - Throw appropriate errors
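To show how generateImage might be wired into the front end, here is a minimal usage sketch: a Next.js App Router POST handler that accepts an image URL and returns the generated image. The route path, request shape, and response shape are illustrative assumptions, not necessarily the project's exact API.
// A hypothetical app/api/generate/route.ts handler wrapping generateImage() from the example above
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { imageUrl } = await request.json(); // assumed request body: { "imageUrl": "https://..." }
  if (!imageUrl) {
    return NextResponse.json({ error: "imageUrl is required" }, { status: 400 });
  }
  try {
    const image = await generateImage(imageUrl); // base64 image data, per the example above
    return NextResponse.json({ image });
  } catch (err) {
    return NextResponse.json({ error: (err as Error).message }, { status: 500 });
  }
}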
Environment Variable Configuration#
Before using the API, ensure the following environment variables are configured:
GONGJI_ENDPOINT=your-comfyui-api-endpoint # ComfyUI API endpoint
S3 Configuration Instructions#
The project's image upload feature requires an S3-compatible storage service. You can use AWS S3 or any other object storage that implements the S3 protocol (such as MinIO).
In the frontend/.env file, configure the following environment variables:
S3_ENDPOINT=your-s3-endpoint
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_BUCKET=your-bucket-name
S3_REGION=your-region
Note:
- Ensure the created bucket has appropriate access permissions
- If using MinIO, the endpoint should be a complete URL (e.g., http://localhost:9000)
- When using AWS S3, you can omit the endpoint configuration
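For reference, here is a minimal upload sketch showing how these variables could be consumed with the AWS SDK v3 (@aws-sdk/client-s3). The key naming, content type, forcePathStyle setting, and returned URL shape are illustrative assumptions rather than the project's exact upload code.
// A minimal S3 upload sketch, assuming @aws-sdk/client-s3 and the environment variables above
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT, // omit for AWS S3
  region: process.env.S3_REGION,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
  forcePathStyle: true, // typically required for MinIO
});

export async function uploadImage(buffer: Buffer, key: string) {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: buffer,
      ContentType: "image/png", // assumed; adjust to the actual file type
    })
  );
  // Path-style URL; the actual public URL depends on your bucket and endpoint setup
  return `${process.env.S3_ENDPOINT}/${process.env.S3_BUCKET}/${key}`;
}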
Contribution Guidelines#
Feel free to submit Issues and Pull Requests!
License#
MIT License