From 4939cf18d85e1cc39183791d9e4e6888b9e70f06 Mon Sep 17 00:00:00 2001
From: lllyasviel
Date: Tue, 6 Feb 2024 07:43:33 -0800
Subject: [PATCH] update hints for 4090

---
 README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 17ce726c..715c983e 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,10 @@ Forge backend removes all WebUI's codes related to resource management and rewor
 
 Without any cmd flag, Forge can run SDXL with 4GB vram and SD1.5 with 2GB vram.
 
-**The only one flag that you may still need** is `--always-offload-from-vram` (This flag will make things **slower**). This option will let Forge always unload models from VRAM. This can be useful if you use multiple software together and want Forge to use less VRAM and give some vram to other software, or when you are using some old extensions that will compete vram with Forge, or (very rarely) when you get OOM.
+The only two flags that you may still need:
+
+1. `--always-offload-from-vram` (this flag will make things **slower**). This option makes Forge always unload models from VRAM. It can be useful if you run multiple programs together and want Forge to use less VRAM and leave some for other software, when old extensions compete with Forge for VRAM, or (very rarely) when you get OOM.
+2. `--always-gpu` (this flag completely avoids model offload). If your GPU has plenty of memory, such as a 4090 with 24GB, this flag will likely speed up inference further by avoiding repeatedly moving models.
 
 If you really want to play with cmd flags, you can additionally control the GPU with:
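The rule of thumb in the patch (offload when sharing scarce VRAM, stay resident on a 24GB card, otherwise let Forge manage VRAM itself) can be sketched as a small helper. This is only an illustration: the `choose_flag` function and the 4GB/24GB thresholds are assumptions for the example, not values taken from Forge itself.

```shell
# Hypothetical helper: pick a Forge launch flag from available VRAM (in GB).
# Thresholds are illustrative assumptions, not Forge's own logic.
choose_flag() {
  local vram_gb=$1
  if [ "$vram_gb" -ge 24 ]; then
    # Plenty of VRAM (e.g. a 4090): keep models on the GPU, skip offloading.
    echo "--always-gpu"
  elif [ "$vram_gb" -le 4 ]; then
    # Scarce or shared VRAM: always unload models after use (slower).
    echo "--always-offload-from-vram"
  else
    # Middle ground: no flag, let Forge's own resource management decide.
    echo ""
  fi
}

choose_flag 24   # prints --always-gpu
choose_flag 4    # prints --always-offload-from-vram
```

The returned string could then be appended to whatever launch command you normally use for Forge.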