From 65367aa24dba047df75d57912073ba78fb91045e Mon Sep 17 00:00:00 2001
From: lllyasviel
Date: Tue, 6 Feb 2024 07:51:37 -0800
Subject: [PATCH] accurate words

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 715c983e..06ee0536 100644
--- a/README.md
+++ b/README.md
@@ -79,7 +79,7 @@ Without any cmd flag, Forge can run SDXL with 4GB vram and SD1.5 with 2GB vram.
 The only two flags that you may still need:
 
 1. `--always-offload-from-vram` (This flag will make things slower). This option will let Forge always unload models from VRAM. This can be useful if you use multiple software together and want Forge to use less VRAM and give some vram to other software, or when you are using some old extensions that will compete vram with Forge, or (very rarely) when you get OOM.
 
-2. `--always-gpu` (This flag will completely avoid model offload). If you use GPU with lots of memory like 4090 24GB, you should use this flag to probably further speed up inference by avoiding moving models repeatedly.
+2. `--always-gpu` (This flag will avoid model moving/offload). If you use GPU with lots of memory like 4090 24GB, you should use this flag to probably further speed up inference by avoiding moving models repeatedly.
 
 If you really want to play with cmd flags, you can additionally control the GPU with: