mirror of
https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-01-30 19:19:57 +00:00
* model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206)
  Branch: gabe-l-hart/GraniteDocling
  Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
  * feat: Add granite-docling conversion using trillion pretokenizer
  * feat: Add granite-docling vocab pre enum
  * fix: Use granite-docling pre
  * feat: Add clip_is_idefics3
  * feat: Allow multi-token boundary sequences for image templating
  * feat: Add tiling support for idefics3 in clip.cpp
    This should likely be moved into llava_uhd::get_slice_instructions, but for now this avoids disrupting the logic there.
  * feat: Partial support for full templating for idefics3 in mtmd
    There are still errors encoding some of the image chunks, but the token sequence now matches transformers _almost_ perfectly, except for the double newline before the global image, which shows up as two consecutive newline tokens instead of a single double-newline token. I think this is happening because the blocks are tokenized separately and then concatenated.
  * feat: Fully working image preprocessing for idefics3 w/ resize and slicing
  * feat: Parse the preprocessor config's longest side and add it to the mmproj hparams
  * fix: Use the longest side instead of size * scale_factor
    For Granite Docling these come out to the same value, but that was just a coincidence.
  * fix: Allow batch encoding and remove clip_is_idefics3
  * refactor: Remove unnecessary conditionals for empty token vectors
  * refactor: Use image_manipulation util
  * add test model
  Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
  # Conflicts: convert_hf_to_gguf.py, convert_hf_to_gguf_update.py, gguf-py/gguf/constants.py, gguf-py/gguf/gguf_writer.py, src/llama-vocab.cpp, src/llama-vocab.h
* mtmd : support home-cooked Mistral Small Omni (#14928)
* model : add LightOnOCR-1B model (#16764)
  * add test
  # Conflicts: convert_hf_to_gguf.py, gguf-py/gguf/constants.py
* mtmd : fix idefics3 preprocessing (#16806)
  * disable granite test
  * fix test for granite
* model: Add support for CogVLM model (#15002)
  * Added GGUF mappings for CogVLM model
  * Add tensor mapping for CogVLM visual encoder
  * Add CogVLM to conversion script, no vision part yet
  * Added CogVLM vision model to conversion script
  * Add graph for CogVLM CLIP model
  * Add graph for CogVLM
  * Fixes for CogVLM; now compiles
  * Model now runs
  * Fixes for CogVLM graph
  * Account for graph context change after rebase
  * Whitespace changes
  * Changes in convert script according to comments
  * Switch CogVLM LLM graph to merged QKV tensor
  * Use rope_type variable instead of direct definition
  * Change CogVLM CLIP encoder to use SWIGLU
  * Switch CogVLM CLIP to use merged QKV
  * Apply rebase edits and remove ggml_cont call that is now unnecessary
  * clean up
  Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
  # Conflicts: convert_hf_to_gguf.py, examples/mtmd/clip.cpp, gguf-py/gguf/constants.py, gguf-py/gguf/tensor_mapping.py, src/llama-arch.cpp, src/llama-arch.h, src/llama-model.cpp, src/llama-model.h
* mtmd: refactor preprocessing + support max/min pixels (#16878)
  * fix mlp type
  * implement min/max pixels
  * improve hparams
  * better image preprocessing for qwen
  * fix out-of-bound composite
  * fix token calculation
  * get_merge_kernel_size()
  * fix llama4 and lfm2
  * use simple resize for qwen
  * qwen: increase min tokens
  * no resize if dst size == src size
  * restore initial min/max tokens value for qwen
  # Conflicts: examples/mtmd/clip.cpp
* clip : use FA (#16837)
  * add warning about unsupported ops
  * implement "auto" mode for clip flash attn
  * print more detailed op support info during warmup
  * improve debugging message
  * metal : remove stray return
  Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
* model: add Janus Pro for image understanding (#16906)
  * Add JANUS_PRO constant
  * Update clip model handling
  * Refactor JANUS_PRO handling in clip.cpp
  * Address reviewer suggestions
  Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
  Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
  Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
  # Conflicts: convert_hf_to_gguf.py, gguf-py/gguf/constants.py, gguf-py/gguf/tensor_mapping.py
* mtmd: pad mask for qwen2.5vl (#16954)
* mtmd: add --image-min/max-tokens (#16921)
* mtmd: improve struct initialization (#16981)
* mtmd: allow QwenVL to process larger images by default (#17020)
* Disable flash attention
* mtmd : fix embedding size for image input (#17123)
* mtmd: fix patch_size initialized to random value in audio models (#17128)
  * add default hparams
* add llama_model_n_embd_inp
* Fix loading qwen3 vl: change batch size
* Add description
* Fix cli build error

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Tianyue-Zhao <zhaotianyue@outlook.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Zhiyong Wang <85110830+ravenouse@users.noreply.github.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Co-authored-by: firecoperana <firecoperana>
118 lines
4.6 KiB
C
#pragma once

#include "ggml.h"

#include <stddef.h>
#include <stdint.h>

// !!! Internal header, to be used by mtmd only !!!

struct clip_ctx;

struct clip_image_size {
    int width;
    int height;
};

struct clip_image_f32;
struct clip_image_u8_batch;
struct clip_image_f32_batch;

enum clip_modality {
    CLIP_MODALITY_VISION,
    CLIP_MODALITY_AUDIO,
};

enum clip_flash_attn_type {
    CLIP_FLASH_ATTN_TYPE_AUTO     = -1,
    CLIP_FLASH_ATTN_TYPE_DISABLED = 0,
    CLIP_FLASH_ATTN_TYPE_ENABLED  = 1,
};

struct clip_context_params {
    bool use_gpu;
    enum ggml_log_level verbosity;
    enum clip_flash_attn_type flash_attn_type;
    int image_min_tokens;
    int image_max_tokens;
};

struct clip_init_result {
    struct clip_ctx * ctx_v; // vision context
    struct clip_ctx * ctx_a; // audio context
};

struct clip_init_result clip_init(const char * fname, struct clip_context_params ctx_params);

void clip_free(struct clip_ctx * ctx);
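A minimal lifecycle sketch of the two calls above. This is an illustration, not code from the repo: the `"mmproj.gguf"` path is a placeholder, and the zero values for `image_min_tokens`/`image_max_tokens` are simply "no override" guesses; the actual semantics live in clip.cpp.

```c
#include "clip.h"  // this header
#include <stdio.h>

// Hedged sketch: load an mmproj file, then free whichever contexts came back.
// clip_init may return a vision context, an audio context, or both.
static int load_example(void) {
    struct clip_context_params params = {
        /*.use_gpu          =*/ true,
        /*.verbosity        =*/ GGML_LOG_LEVEL_INFO,
        /*.flash_attn_type  =*/ CLIP_FLASH_ATTN_TYPE_AUTO,
        /*.image_min_tokens =*/ 0,  // placeholder: no explicit override
        /*.image_max_tokens =*/ 0,  // placeholder: no explicit override
    };
    struct clip_init_result res = clip_init("mmproj.gguf", params);
    if (!res.ctx_v && !res.ctx_a) {
        fprintf(stderr, "failed to load mmproj\n");
        return 1;
    }
    // the vision and audio contexts are freed independently
    if (res.ctx_v) clip_free(res.ctx_v);
    if (res.ctx_a) clip_free(res.ctx_a);
    return 0;
}
```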

size_t clip_embd_nbytes(const struct clip_ctx * ctx);
size_t clip_embd_nbytes_by_img(const struct clip_ctx * ctx, int img_w, int img_h);

int32_t clip_get_image_size (const struct clip_ctx * ctx);
int32_t clip_get_patch_size (const struct clip_ctx * ctx);
int32_t clip_get_hidden_size(const struct clip_ctx * ctx);

// TODO: should be enum, not string
const char * clip_patch_merge_type(const struct clip_ctx * ctx);

int clip_n_output_tokens(const struct clip_ctx * ctx, struct clip_image_f32 * img);

// for M-RoPE, this will be the number of token positions in X and Y directions
// for other models, X will be the total number of tokens and Y will be 1
int clip_n_output_tokens_x(const struct clip_ctx * ctx, struct clip_image_f32 * img);
int clip_n_output_tokens_y(const struct clip_ctx * ctx, struct clip_image_f32 * img);

// this should be equal to the embedding dimension of the text model
int clip_n_mmproj_embd(const struct clip_ctx * ctx);

struct clip_image_size      * clip_image_size_init(void);
struct clip_image_u8        * clip_image_u8_init  (void);
struct clip_image_f32       * clip_image_f32_init (void);
struct clip_image_f32_batch * clip_image_f32_batch_init(void); // only used by libllava

// nx, ny are the output image dimensions
unsigned char * clip_image_u8_get_data(struct clip_image_u8 * img, uint32_t * nx, uint32_t * ny);

void clip_image_size_free     (struct clip_image_size * img_size);
void clip_image_u8_free       (struct clip_image_u8 * img);
void clip_image_f32_free      (struct clip_image_f32 * img);
void clip_image_u8_batch_free (struct clip_image_u8_batch * batch);
void clip_image_f32_batch_free(struct clip_image_f32_batch * batch);

// used for accessing the underlying data of a clip_image_f32_batch
size_t clip_image_f32_batch_n_images(const struct clip_image_f32_batch * batch); // equivalent to batch->size()
size_t clip_image_f32_batch_nx(const struct clip_image_f32_batch * batch, int idx); // equivalent to batch[idx]->nx
size_t clip_image_f32_batch_ny(const struct clip_image_f32_batch * batch, int idx); // equivalent to batch[idx]->ny
struct clip_image_f32 * clip_image_f32_get_img(const struct clip_image_f32_batch * batch, int idx); // equivalent to batch[idx]->data
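Together these accessors let a caller walk a batch without knowing its internal layout. A small sketch (illustrative only; `batch` is assumed to have been filled by a preprocessing call):

```c
#include "clip.h"
#include <stdio.h>

// Hedged sketch: print the dimensions of every image slice in a batch.
static void dump_batch(const struct clip_image_f32_batch * batch) {
    const size_t n = clip_image_f32_batch_n_images(batch);
    for (size_t i = 0; i < n; i++) {
        printf("slice %zu: %zu x %zu\n", i,
               clip_image_f32_batch_nx(batch, (int) i),
               clip_image_f32_batch_ny(batch, (int) i));
    }
}
```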

/**
 * Build image from pixels decoded by other libraries instead of stb_image.h for better performance.
 * The memory layout is RGBRGBRGB..., and the input buffer length must be 3*nx*ny bytes.
 */
void clip_build_img_from_pixels(const unsigned char * rgb_pixels, int nx, int ny, struct clip_image_u8 * img);

/** preprocess img and store the result in res_imgs; pad_to_square may be overridden to false depending on model configuration */
bool clip_image_preprocess(struct clip_ctx * ctx, const struct clip_image_u8 * img, struct clip_image_f32_batch * res_imgs);

struct ggml_tensor * clip_get_newline_tensor(const struct clip_ctx * ctx);

bool clip_image_encode      (struct clip_ctx * ctx, int n_threads, struct clip_image_f32 * img, float * vec);
bool clip_image_batch_encode(struct clip_ctx * ctx, int n_threads, const struct clip_image_f32_batch * imgs, float * vec);
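Putting the pieces together, an end-to-end sketch of preprocess-then-encode. This is illustrative: error handling is minimal, and sizing the output with `clip_embd_nbytes(ctx)` is an assumption; for models that slice images into multiple tiles, `clip_embd_nbytes_by_img` may be the appropriate call.

```c
#include "clip.h"
#include <stdlib.h>

// Hedged sketch: preprocess one raw u8 image and encode the resulting f32
// batch. Assumes `ctx` is a vision context from clip_init and `raw` was
// loaded elsewhere. Returns the embedding buffer (caller frees) or NULL.
static float * encode_image(struct clip_ctx * ctx, const struct clip_image_u8 * raw, int n_threads) {
    struct clip_image_f32_batch * batch = clip_image_f32_batch_init();
    if (!clip_image_preprocess(ctx, raw, batch)) {
        clip_image_f32_batch_free(batch);
        return NULL;
    }
    float * embd = malloc(clip_embd_nbytes(ctx)); // sizing assumption, see note above
    if (embd && !clip_image_batch_encode(ctx, n_threads, batch, embd)) {
        free(embd);
        embd = NULL;
    }
    clip_image_f32_batch_free(batch);
    return embd;
}
```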

int  clip_is_minicpmv(const struct clip_ctx * ctx);
bool clip_is_glm     (const struct clip_ctx * ctx);
bool clip_is_qwen2vl (const struct clip_ctx * ctx);
bool clip_is_qwen3vl (const struct clip_ctx * ctx);
bool clip_is_llava   (const struct clip_ctx * ctx);
bool clip_is_gemma3  (const struct clip_ctx * ctx);

bool clip_encode_float_image(struct clip_ctx * ctx, int n_threads, float * img, int h, int w, float * vec);

// used by audio input
void clip_image_f32_batch_add_mel(struct clip_image_f32_batch * batch, int n_mel, int n_frames, float * mel);

bool clip_has_vision_encoder (const struct clip_ctx * ctx);
bool clip_has_audio_encoder  (const struct clip_ctx * ctx);
bool clip_has_whisper_encoder(const struct clip_ctx * ctx);