Refactor llama.cpp (#823)

* llama_model and llama_hparams

* llama_build_context

Surprisingly small reduction in llama.cpp compile time given
the reduction in LOC (22k -> 14k)

* LLM_TN

llama.cpp compilation: 50 s -> 33 s

* llama_quantize

* arch names

* All graph building is now in llm-build-context.cpp

* hparams loading

llama.cpp is now just 9300 LOC, but still takes 32 seconds to compile.

* We are now at 6 seconds to build the src folder

* load -> create

We are not actually loading the tensors at this point, just creating them.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
commit 4daff01b39 (parent 23275ac066)
Author: Kawrakow
Date: 2025-10-11 11:35:20 +03:00 (committed via GitHub)
16 changed files with 16361 additions and 15826 deletions

src/llama-quantize.cpp: new file, 1513 lines (diff suppressed because it is too large)