Port mtmd from mainline + Qwen2/2.5-VL support (#798)

* Add mtmd: the beginning

* Add mtmd: mtmd.cpp compiles

* Add mtmd: clip initialization compiles

* Add mtmd: clip.cpp compiles

* Add mtmd: builds successfully

* Add CPU implementation for GGML_OP_GLU (a reference sketch of the GLU/SwiGLU computation follows this commit list)

* Add CUDA implementation for GGML_OP_GLU

* Add CPU implementation for GGML_OP_CONV_2D and GGML_OP_CONV_2D_DW

* Add CUDA implementation for GGML_OP_CONV_2D and GGML_OP_CONV_2D_DW

* Add mtmd: refresh CPU rope

* Add mtmd: refresh CUDA rope

* Add mtmd: add Qwen2-VL

* Add mtmd: Qwen2.5-VL text seems to work with this change

* Add mtmd: fix swiglu

* Add mtmd: use LOG_TEE so generated tokens show up in terminal

* Add mtmd: do not attempt to load a GPU backend if none are available

* GLU, not GPU

* Fix typo

* Fix new/free mismatch

* LOG stuff

* Add mtmd: this fixes gibberish on second image
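
For reference, a minimal sketch of what a GLU-family activation (SwiGLU variant, as touched by the GGML_OP_GLU and "fix swiglu" commits above) computes: one half of each row gates the other half through an activation. This is not the ggml kernel; the split convention, which half receives the activation, and all names below are assumptions for illustration only.

    // Standalone C++ sketch of SwiGLU: split a row into halves a and b,
    // return silu(a) * b elementwise. The actual ggml kernels may use a
    // different split/ordering convention; this is illustrative only.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static float silu(float x) { return x / (1.0f + std::exp(-x)); } // x * sigmoid(x)

    static std::vector<float> swiglu_row(const std::vector<float> & row) {
        const std::size_t n = row.size() / 2;        // row holds [a | b]
        std::vector<float> out(n);
        for (std::size_t i = 0; i < n; ++i) {
            out[i] = silu(row[i]) * row[i + n];      // gate the second half with silu(a)
        }
        return out;
    }

    int main() {
        const std::vector<float> row = {0.5f, -1.0f, 2.0f, 0.25f, 1.5f, -0.5f};
        for (float v : swiglu_row(row)) std::printf("%.4f ", v);
        std::printf("\n");
        return 0;
    }

The CUDA commits add the same elementwise math on the GPU backend.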

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Kawrakow authored 2025-09-27 08:45:29 +02:00, committed by GitHub
parent 367654f99e, commit 87e4762720
51 changed files with 115141 additions and 432 deletions


@@ -1235,7 +1235,7 @@ struct server_context {
         chat_templates = common_chat_templates_init(model, params.chat_template);
         try {
-            common_chat_format_example(chat_templates.get(), params.use_jinja);
+            common_chat_format_example(chat_templates.get(), params.use_jinja, {});
         }
         catch (const std::exception& e) {
             LOG_WARNING("%s: The chat template that comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses\n", __func__);
@@ -3778,7 +3778,7 @@ int main(int argc, char ** argv) {
         });
         LOG_INFO("chat template", {
-            {"chat_example", common_chat_format_example(ctx_server.chat_templates.get(), ctx_server.params.use_jinja).c_str()
+            {"chat_example", common_chat_format_example(ctx_server.chat_templates.get(), ctx_server.params.use_jinja, {}).c_str()
             },
             {"built_in", params.chat_template.empty()},
         });
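
Both hunks make the same change: the call to common_chat_format_example gains a third argument, passed as an empty braced-init-list {}. The sketch below only illustrates the C++ mechanism; the real type of the new parameter is not visible in this diff, and the map type and helper name used here are assumptions.

    // Hypothetical stand-in for a helper that grew an extra parameter.
    // Passing {} value-initializes the parameter (here: an empty map),
    // so call sites that need no extra options stay short.
    #include <cstdio>
    #include <map>
    #include <string>

    static std::string format_example(const std::string & tmpl, bool use_jinja,
                                      const std::map<std::string, std::string> & extra) {
        return tmpl + (use_jinja ? " [jinja]" : "") + " extra=" + std::to_string(extra.size());
    }

    int main() {
        std::printf("%s\n", format_example("chatml", false, {}).c_str());                   // no extra options
        std::printf("%s\n", format_example("chatml", true, {{"add_bos", "true"}}).c_str()); // hypothetical option
        return 0;
    }

Keeping the new argument as a simple empty-initializable value lets existing call sites, like the two in this diff, opt out by writing {}.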