Graph parallel: the next generation (#1080)

* WIP: absorb adding input into std_attn and std_ffn

* WIP: NCCL infra

* WIP: add reduce and fake_cpy ops

* WIP

* WIP: graph appears to work, layer is broken

* WIP: Qwen3-MoE works with graph, layer still broken

* WIP: GLM-4.5 graph works

* WIP: fix sm layer (dense)

* WIP: fix sm layer (MoE)

* WIP: fast PP with bespoke 4-GPU NCCL

I guess I'm not using NCCL the right way, as PP is very
slow with a single communicator group spanning 3 or more GPUs.
But if I instead create 4 communicator groups for pairs of GPUs
((0,1), (2,3), (0,2), (1,3)) and reduce over those, PP is fast:
I'm hitting 1500 t/s for L3-70B on the 4x3090 system, which is
~20% better than the previous sm graph without NCCL.
But that cannot be the final solution (I cannot be creating pairwise
communicators and the associated logic for every possible number of GPUs).

* WIP: Cohere2

* Explicitly set device

* Bespoke 3-GPU case

* WIP

* Do not repeat get_rows multiple times

* Fix 3 GPUs

* OK, let's leave it in

* Implement the reduce op without NCCL available

* Be able to build without NCCL

cmake -DGGML_NCCL=OFF disables it

* Make --max-gpu work again

* Slightly better for 4 GPUs without NCCL

* Cleanup

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Commit 0d7eb34185 by Kawrakow, committed via GitHub, 2025-12-24 08:31:48 +01:00 (parent 2a633c4357).
12 changed files with 870 additions and 256 deletions


@@ -689,6 +689,9 @@ extern "C" {
        GGML_OP_GLU,
        GGML_OP_REDUCE,
        GGML_OP_FAKE_CPY,

        GGML_OP_COUNT,
    };
@@ -3034,6 +3037,17 @@ extern "C" {
        struct ggml_tensor ** splits;
    } ggml_split_tensor_t;

    GGML_API struct ggml_tensor * ggml_reduce(
            struct ggml_context  * ctx,
            struct ggml_tensor  ** a,
            int                    n,
            enum ggml_op           op);

    GGML_API struct ggml_tensor * ggml_fake_cpy(
            struct ggml_context * ctx,
            struct ggml_tensor  * dst,
            struct ggml_tensor  * src);

#ifdef __cplusplus
}
#endif