Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-03-03 18:40:14 +00:00)
Commit Graph
fcp/checkpoint_tolerance
fcp/context_shift_fix
fcp/fix_rpc_device
ik/FlashMLA-3
ik/adapt_iq1_iq2_bn
ik/adaptive_p
ik/adaptive_p_2
ik/add_forgotten_multi_add
ik/add_granite
ik/add_iq3ks_to_gguf
ik/add_jinja_file_help
ik/add_missing_enum_values_qwen3
ik/add_missing_gguf_constants
ik/add_missing_mmq_iq5ks
ik/add_mmq_id
ik/add_mtmd
ik/add_q60
ik/add_vq_help
ik/allow_empty_splits
ik/andrew_trellis
ik/another_mmq_id_fix
ik/apply_cuda_faster_iq3k
ik/arch_flags
ik/arm_better_r4
ik/attn_gemm
ik/avoid_cuda_mla_1
ik/avx2_bf16
ik/avx2_flash_attn
ik/avx2_flash_attn_2
ik/avx2_q4_0_q8_0
ik/avx2_q5_0
ik/avx2_r4_tweaks
ik/backend_reduce_syncs
ik/bailingmoe2
ik/bailingmoe2_graph
ik/bench_gp
ik/better_batched_processing
ik/better_cpu_fa_thread_strategy
ik/better_fa_glm45
ik/better_fa_masking
ik/better_flash_mla
ik/better_graph_pp
ik/better_graph_tg
ik/better_iq4_nl
ik/better_iqk_strategy
ik/better_model_info
ik/better_tg_fattn
ik/bf16_kv_cache
ik/bf16_r4
ik/biased_mmvq
ik/biased_qkv
ik/bitnet_adjustments
ik/bitnet_cuda
ik/bitnet_fused_unary
ik/bitnet_improve_metal
ik/bitnet_optional_scales
ik/bitnet_token_embedding_gpu
ik/bitnet_token_embedding_gpu_2
ik/buffer_type_overrides
ik/bug_missing_parentheses
ik/cached_graph
ik/change_default_fa_offset
ik/change_fmoe_fa_defaults
ik/change_q_pure
ik/chat_templates
ik/check_up_gate_fmoe
ik/clang_warnings
ik/cleanup_fudge_factors
ik/cohere2
ik/cohere2_sm_graph
ik/convert_i2s
ik/copyright
ik/correct_glm47_flash_gating_func
ik/correct_missing_gating_func_comments
ik/cpp_17
ik/cpu_argsort
ik/cpu_deepseek_fa
ik/cpu_fa_dont_repack_tg
ik/cpu_fa_tg_glm4.5
ik/cpu_moe_tg
ik/cpu_repeat
ik/cpu_swa_v0
ik/cpu_swa_v1
ik/cpu_swa_v2
ik/cpu_topk_moe
ik/cuda_better_moe
ik/cuda_bf16
ik/cuda_faster_iq2k
ik/cuda_faster_iq4nl_kvcache
ik/cuda_faster_moe_tg
ik/cuda_fattn_Dk_Dv
ik/cuda_fix_quantized_flash_mla3
ik/cuda_flash_mla3
ik/cuda_flash_mla3_v2
ik/cuda_flash_mla_q8_0
ik/cuda_graphs_with_overrides
ik/cuda_grouped_topk
ik/cuda_iq1_m_r4
ik/cuda_iq1_s_r4
ik/cuda_iq2k_use_bperm1
ik/cuda_iq3k_use_bperm1
ik/cuda_iq4_k_r4
ik/cuda_iqk_ks_r4
ik/cuda_iqk_r4
ik/cuda_large_cpy
ik/cuda_lto
ik/cuda_mailine_fixes
ik/cuda_mla
ik/cuda_mla2
ik/cuda_mmq_iq2_k
ik/cuda_mmq_iq4_k
ik/cuda_mmq_iq4_ks
ik/cuda_native
ik/cuda_params
ik/cuda_q4_0_r4
ik/cuda_quantized_fmoe
ik/cuda_refactor_fattn
ik/cuda_rms_non_contiguous
ik/cuda_rope_back
ik/cuda_set_device
ik/cuda_swa2
ik/cuda_swa3
ik/cuda_topk_moe
ik/cuda_tracer
ik/cuda_use_bperm
ik/custom_q_rules
ik/debug_849
ik/debug_issue_721
ik/debug_issue_733
ik/dedup_stb_image
ik/deepseek_fa_opt
ik/deepseek_guarantee_rope_fusion
ik/deepseek_is_this_better
ik/deepseek_merge_qk
ik/deepseek_mla0
ik/deepseek_opt
ik/deepseek_rope_cache
ik/delta_net
ik/dequant_gemm
ik/dequant_moe_gemm
ik/desperate_bug_fix_attempt
ik/disable_add_fused_rms
ik/disable_experimental_code1
ik/disable_fusion_by_default
ik/disable_multi_add
ik/disable_or_enable_p2p
ik/disable_rope_cache
ik/disable_sm_row
ik/disable_some_fusion
ik/disable_vocab_debug
ik/dont_abort_on_nccl_init_failure
ik/dont_split_output
ik/dup_experts_bias
ik/enable_fusion_by_default
ik/enable_mla3_in_crippled_ggufs
ik/ernie_graph
ik/extra_reduce_types
ik/fa_mainline_compat
ik/fa_offset_2
ik/falcon3
ik/falcon3a
ik/falcon_edge
ik/faster_avx2_q40
ik/faster_iq3_iq5_quantize
ik/faster_iq4k
ik/faster_iq4k_quantize
ik/faster_iq4nl_quantize
ik/faster_moe_quantize
ik/faster_q60_avx2
ik/fattn_Dk_Dv
ik/fattn_bf16
ik/fattn_enable_iq4_nl
ik/fattn_enable_q6_0
ik/fattn_gqa_10
ik/fattn_is_supported
ik/fattn_kq_max_offset
ik/fattn_kqv
ik/fattn_mma
ik/fattn_q35dense
ik/fattn_work_buffer
ik/fix_1015
ik/fix_1055
ik/fix_1205
ik/fix_1237
ik/fix_300
ik/fix_358
ik/fix_412
ik/fix_447
ik/fix_499
ik/fix_538
ik/fix_596
ik/fix_827
ik/fix_Makefile
ik/fix_add_bf16_turing
ik/fix_after_883
ik/fix_again_cmake
ik/fix_annoying_warnings
ik/fix_arm_fa
ik/fix_avx2_gemm_mess
ik/fix_avx2_iq4_nl_r4
ik/fix_avx512_vs_fancy_simd
ik/fix_batched_cublas
ik/fix_bench_compile
ik/fix_bug_481
ik/fix_comma_pauses
ik/fix_compiler_warnings
ik/fix_contiguously_allocated
ik/fix_cpu_fa_work_buffer_size
ik/fix_cuda_fa_race
ik/fix_cuda_memcpy_async
ik/fix_cuda_scale_bug
ik/fix_debug_build
ik/fix_deepseek_fattn
ik/fix_deepseek_q80_cache
ik/fix_dequantize_when_requantizing
ik/fix_div_zero
ik/fix_dup_q
ik/fix_exp_shexp_split
ik/fix_experts_node_name
ik/fix_fa_192_128
ik/fix_fa_avx2_bug
ik/fix_fattn_odd_even
ik/fix_fattn_supported
ik/fix_flash_attn
ik/fix_fused_grouped_topk
ik/fix_gcc_arm
ik/fix_gemma3_vision
ik/fix_ggml_common
ik/fix_glm4_attn
ik/fix_gpt_oss_partial_offload
ik/fix_graph_parallel_partial_offload
ik/fix_hybrid_detection
ik/fix_imatrix_check
ik/fix_imatrix_nonsense
ik/fix_iq4k_avx2
ik/fix_iqk_for_strange_numrows
ik/fix_kimi2_parse
ik/fix_kld
ik/fix_kq
ik/fix_llama4_attention
ik/fix_metal_fa
ik/fix_missing_bf16_avx512
ik/fix_missing_dry
ik/fix_missing_end
ik/fix_mla_imatrix
ik/fix_mmq_id
ik/fix_mmq_overflow
ik/fix_mmvq_bug
ik/fix_mul_mat_16
ik/fix_multiple_choice
ik/fix_neon_build
ik/fix_neon_legacy_quants
ik/fix_neon_q82
ik/fix_no_iqk_build
ik/fix_no_p2p_case
ik/fix_perf_regression
ik/fix_pr_261
ik/fix_pr_842
ik/fix_q41_q51_arm
ik/fix_q5_0_fa
ik/fix_q6_0_dequantize
ik/fix_q80_avx2_2
ik/fix_q80_avx2_mess
ik/fix_q80_moe_avx2
ik/fix_quantize_kt
ik/fix_quantized_k_cache
ik/fix_quantized_kv_nofa
ik/fix_reduce_race
ik/fix_reduce_windows
ik/fix_repacked_legacy_quants
ik/fix_replace_all
ik/fix_requantize_interleaved
ik/fix_ring_reduction
ik/fix_rope_norm_fast_cuda
ik/fix_rpc_off
ik/fix_rpc_off2
ik/fix_rtr_mqkv
ik/fix_ser
ik/fix_ser_cuda
ik/fix_standard_attention_cpu
ik/fix_sync_logic
ik/fix_the_fix
ik/fix_typo
ik/fix_up_gate_mmq_not_supported
ik/fix_vulkan_required
ik/fix_windows
ik/fix_windows_avx512
ik/fix_windows_no_omp
ik/fix_xeon_6226R
ik/flash_mla
ik/flash_mla2_cuda_no_f32
ik/flash_mla2_no_f32
ik/flash_mla_2
ik/flash_mla_4
ik/flash_precision
ik/flax-vector-conversions
ik/format_name
ik/fuse_add_add_fused_rms
ik/fuse_add_fused_rms
ik/fuse_bias_only_tg
ik/fuse_biased_qkv
ik/fuse_kvcache_copy
ik/fuse_merge_up_gate_exps
ik/fuse_moe_up_gate
ik/fuse_mul_mat_scale
ik/fuse_qkv
ik/fused_bailingmoev2
ik/fused_delta_net
ik/fused_delta_net_2
ik/fused_delta_net_3
ik/fused_delta_net_3a
ik/fused_ffn_up_gate
ik/fused_mul_multiadd
ik/fused_mul_unary
ik/fused_mul_unary_1
ik/fused_norm
ik/fused_rms_norm
ik/fused_rms_rms
ik/fused_rope_rope
ik/fused_softcap_softmax
ik/fused_up_gate_unary
ik/gemm_4d
ik/gemm_iq1s
ik/gemm_neon_1bit
ik/gemm_neon_iqk
ik/gemm_neon_iquants
ik/gemm_neon_kquants
ik/gemm_neon_legacy
ik/gemma3
ik/gemma3_mqkv_rcache
ik/gemma_output_tensor
ik/gemma_q80_kvcache
ik/gemv_bf16_r16
ik/gguf_bool_arrays
ik/gguf_py_add_maxfp4
ik/gguf_py_changes_for_np2.0
ik/glm45_tg_fa_hack
ik/glm45_tg_very_fast
ik/glm47_fa_2
ik/glm47_tg_fa_hack
ik/glm5
ik/glm_flash
ik/gpt-oss
ik/gpt_oss_graph
ik/graph_alloc
ik/graph_better_splits
ik/graph_parallel_tweak
ik/graph_reuse
ik/graph_reuse_on
ik/handle_incompatible_deepseek_ggufs
ik/handle_split_cache
ik/hide_imatrix
ik/hsums
ik/huihui_57B
ik/hunyuan_graph
ik/ignore_nextn_layers
ik/imatrix_lsim
ik/improve_iq1m
ik/improve_iq2_xs
ik/improve_iq2ks
ik/improve_mmq
ik/interleaved_guards
ik/iq1_kt
ik/iq1_m_neon
ik/iq1_m_r4
ik/iq1_s_checks
ik/iq1_s_gemm
ik/iq1_s_r4
ik/iq1_s_r4_k128
ik/iq1_s_r4_neon
ik/iq1_tn
ik/iq1_tn_cuda
ik/iq1_tn_metal
ik/iq1bn_metal
ik/iq1m_gemm
ik/iq2_bn_r4
ik/iq2_k
ik/iq2_k_r4
ik/iq2_k_tweak
ik/iq2_kl
ik/iq2_s_r4
ik/iq2_tn
ik/iq2_tn_as_iq2_bn
ik/iq2_tn_avx2
ik/iq2_tn_faster_pp
ik/iq2_xs_r4
ik/iq2_xxs_gemm
ik/iq2_xxs_r4
ik/iq2k_experiments
ik/iq2ks_experiments
ik/iq3_k_r4_v2
ik/iq3_ks
ik/iq3_ks_v2
ik/iq3_s_gemm
ik/iq3_s_r4
ik/iq3_s_r4_v2
ik/iq3_xxs_gemm
ik/iq3_xxs_r4
ik/iq3_xxs_r4_v2
ik/iq4_k
ik/iq4_k_r4
ik/iq4_k_r4_avx2
ik/iq4_k_tweaks
ik/iq4_k_xxs
ik/iq4_knn
ik/iq4_ks_r4
ik/iq4_kss
ik/iq4_kss_improvements
ik/iq4_nl_cache
ik/iq4_nl_x4
ik/iq4_xs_r4
ik/iq4_xs_r4_avx2
ik/iq4_xs_r8
ik/iq4_xs_r8_v2
ik/iq4kss_experiments
ik/iq4nl_kv_cache
ik/iq5_k_r4
ik/iq5_ks
ik/iq5_ks_r4
ik/iq6_k
ik/iq_gemv_tweaks
ik/iqk_fattn_all_quants
ik/iqk_gemm
ik/iqk_mmvq_opt
ik/iqk_q_improvements
ik/is_this_better_for_multi_gpu
ik/issue_214
ik/issue_217
ik/issue_224
ik/issue_230
ik/k_cache_hadamard
ik/k_cache_hadamard_cuda
ik/kq_fused_softmax
ik/kq_mask
ik/kq_mask_padding_64
ik/l4_rms_norm
ik/legacy_gemm
ik/llama4
ik/llama_bench_mla3
ik/llama_bench_n_cpu_moe
ik/llama_bench_overrides
ik/llama_bench_rcache
ik/llama_bench_sas
ik/llama_bench_tgb
ik/llama_hparams_add_mla
ik/llama_warnings
ik/make_biased_gemv_optional
ik/make_qx_quants
ik/mask_mt
ik/max_nodes
ik/max_nodes_again
ik/measure_barriers
ik/merge_Aug_12_2024
ik/merge_July_26_2024
ik/merge_only_qk
ik/merge_qkv
ik/merge_up_gate_exps_2
ik/merge_up_gate_exps_3
ik/metal_bf16
ik/metal_faster_iq4ks
ik/metal_fattn_update
ik/metal_fix_iq2k
ik/metal_fix_iq3k
ik/metal_moe
ik/metal_new_trellis
ik/mimo2
ik/mimo2_4_gpus
ik/mimo2_graph
ik/minimax2_very_fast
ik/minimax_graph_minor
ik/ministral3
ik/minmax2_sm_graph
ik/minor_delta_tweak
ik/minor_iq2ks_tweak
ik/mistral3_large
ik/mistral3_std_attn
ik/mla
ik/mla2_q80_cache
ik/mla2_q80_cache_cpu
ik/mla=3_by_default
ik/mla_fixes
ik/mla_guard
ik/mla_imatrix
ik/mla_no_transposed_cache
ik/mla_q80
ik/mmq_id_thresh
ik/mmq_iq_ks_r4
ik/mmq_to_cublas
ik/mmvq_args
ik/mmvq_fuse_bias
ik/mmvq_type_supported
ik/moe_fused_unary
ik/moe_offload_strategy
ik/more_set_device
ik/mtmd_reduce_memory_use
ik/mul_mat_bf16
ik/mul_mat_ext
ik/multi_add
ik/mv_q4_0_r4
ik/mxfp4
ik/n_cpu_moe
ik/nccl1
ik/nccl2
ik/nccl3
ik/nccl3_async
ik/neon_bf16
ik/neon_flash_attention_2
ik/neon_flash_attention_3
ik/neon_improve_legacy_quants
ik/neon_iq3_kt
ik/new_iq1bn
ik/new_iq2kt
ik/new_iq2kt_v2
ik/new_iq4kt
ik/new_trellis_2
ik/no_KV_for_unused_layers
ik/non_contiguous_rope
ik/offline_repack
ik/offline_repack_patterns
ik/offload_policy
ik/ooae2
ik/ooae_on_by_default
ik/opt_kt_quants
ik/option_cpu_fa
ik/option_to_disable_cuda_fusion
ik/optional_yarn_log_multiplier
ik/p2p_cpy_set_device
ik/per_row_scale
ik/phi3.5_tweaks
ik/pickup_13095
ik/play_with_barrier
ik/poc_tp
ik/poc_tp_glm4.5
ik/prepare_wk_b
ik/q2_k_r4
ik/q3_k_r4
ik/q3next_concat
ik/q3next_concat_cpu
ik/q3next_cuda_graphs
ik/q3next_opt2
ik/q3next_opt3
ik/q4_0_r4
ik/q4_0_r8
ik/q4_k_gemm
ik/q4_k_r4
ik/q4_k_r4_v2
ik/q4_k_r4_v3
ik/q5_0_r4
ik/q5_k_r4
ik/q60_mmq
ik/q6_0_r4
ik/q6_k_gemm
ik/q6_k_r4
ik/q8_0_r4
ik/q8_KV
ik/q8_k_r16
ik/q8_k_r8
ik/q8_k_r8_avx512
ik/qkvz_tweak
ik/qkvz_tweak1
ik/qmix_tweaks
ik/qmix_tweaks_2
ik/qstats
ik/quantization_tweaks
ik/quantize_dry_run
ik/quantize_ffn_gate_inp
ik/quantize_q8k_avx2
ik/quantize_stats
ik/qwen3.5_vision
ik/qwen35_std_attn
ik/qwen35dense
ik/qwen35moe
ik/qwen3_graph
ik/qwen3next
ik/qwen3vl_graph
ik/qx_0_r4_avx2
ik/qx_k_b32_avx2
ik/r4_faster_zen4
ik/r4_neon
ik/r4_nrcy_16
ik/really_fix_rope_cache
ik/reduce_compute_buffers
ik/reduce_make_copies
ik/reduce_mla3_compute_buffer_size
ik/reduce_no_nccl
ik/reduce_race_quick_fix
ik/refactor_iqk
ik/refactor_llama.cpp
ik/remove_iqk_option
ik/remove_kv_l
ik/remove_llamafile
ik/remove_scary_warning
ik/remove_unnecessary_calls
ik/remove_unnessessary_ids_copy
ik/rename_4_8
ik/rename_iq4_nl_x4
ik/reorg_mmvq_and_fuse_bias
ik/repack_also_experts
ik/repack_f16
ik/revert_0bf4d997
ik/revert_739
ik/revert_delta_net_3
ik/reverts
ik/ring_reduce
ik/rms_block_size
ik/rng_sampling
ik/rope_cache
ik/run_time_repack
ik/sampling-top-n-sigma
ik/sampling-xtc
ik/sampling_refactor_sorting
ik/sanitize_importance_iqk
ik/sanitize_importance_kt_quants
ik/sched_copy_experts
ik/sched_max_copies=1
ik/server_send_done
ik/shexps_better_hybrid
ik/simplify_delta_net
ik/simplify_delta_net_2
ik/skip_get_rows
ik/skip_noop_barriers
ik/skip_rowids_computation
ik/skip_unnecessary_quantize
ik/slightly_better_fdn
ik/slightly_better_graph_split_strategy
ik/sm_graph_cuda_graphs
ik/sm_graph_delta_net
ik/sm_graph_disable_cuda_graphs
ik/sm_graph_max_gpu
ik/sm_graph_q35
ik/sm_graph_q3next
ik/sm_graph_qwen35moe
ik/sm_graph_rearrange
ik/sm_graph_seedoss
ik/sm_graph_step35
ik/sm_graph_sync
ik/smart_expert_selection
ik/smollm3
ik/softcap
ik/softcap_minor
ik/split_graph_2
ik/split_mode_f32
ik/step35
ik/step35_compat
ik/support_gigachat
ik/sweep_bench_n_predict
ik/sweep_bench_nrep
ik/sweep_bench_warmup
ik/swiglu
ik/sync_fa
ik/tensor_override_honor_mmap
ik/test_q80_NaNs
ik/test_thp
ik/tg_tweaks
ik/topk_moe_fuse_bias
ik/topk_moe_with_norm
ik/trellis_bf16
ik/trellis_metal
ik/trellis_neon
ik/trellis_opt
ik/trinet
ik/try_authors
ik/try_cuda_graphs
ik/try_fa_no_q80_repack
ik/try_fix_1014
ik/try_fix_1201
ik/try_fix_1222
ik/try_fix_367
ik/try_fix_367_v2
ik/try_fix_690
ik/try_fix_772
ik/try_fix_854
ik/try_fix_974
ik/try_fix_avx2_fa
ik/try_fix_many_gpus
ik/try_fix_many_gpus_2
ik/try_grouped_topk_playing1
ik/try_remove_cpy_indirection
ik/try_split_mla
ik/try_split_offloaded_moe_up_gate
ik/try_svd
ik/try_trellis
ik/undo_1049_if_tensor_overrides
ik/undo_sync_reduction
ik/update_authors
ik/update_license
ik/use_bf16_when_no_mmq
ik/use_mmq_id_for_moe
ik/use_q8_2
ik/validate_quants_on_load
ik/vendor
ik/vulkan1
ik/vulkan_again
ik/vulkan_disable_fused_ops
ik/vulkan_disable_multi_add
ik/vulkan_fattn
ik/vulkan_fused_mul_unary
ik/vulkan_fused_rms
ik/vulkan_multi_add
ik/wip_sync_llama
ik/zen4_faster_iq4ks_iq5ks
ik/zen4_flash_attn
ik/zen4_flash_attn_2
ik/zen4_flash_attn_bf16
ik/zen4_iq4_xs_r4
ik/zen4_repack_f16
ikawrakow-patch-1
ikawrakow-patch-1-1
main
s6/MLA_prompt_save_restore_fix
s6/bitnet2b_2501
s6/bitnet_name_update
s6/cache_default
s6/deci_support
s6/docs_update
s6/dots
s6/fix_kshift_crash
s6/fix_prompt_tokenization
s6/fix_python
s6/fp8_native
s6/list_prompt_cache
s6/mikupad
s6/mla
s6/numa_KV
s6/qwen3_dynamic_yarn
s6/readme-minor1
s6/readme-minor2
s6/readme_update
s6/remove_kv_l
s6/rope_freq_fix
s6/rpc
s6/seed_support2
s6/sweep_bench
s6/sweep_bench_update
s6/termux_fix
s6/warmup
#1
#10
#1000
#1001
#1003
#1004
#1005
#1006
#1007
#1008
#101
#1011
#1012
#1016
#1017
#1018
#102
#1022
#1023
#1024
#1025
#1026
#1027
#1029
#1030
#1031
#1032
#1033
#1034
#1035
#1036
#1037
#1038
#1039
#1040
#1042
#1047
#1048
#1049
#105
#1050
#1051
#1052
#1053
#1054
#1056
#1057
#1058
#1059
#106
#1060
#1061
#1062
#1063
#1064
#1065
#1067
#1068
#1069
#107
#1070
#1071
#1073
#1079
#108
#1080
#1082
#1086
#1087
#1088
#1089
#109
#1091
#1092
#1093
#1094
#1096
#1097
#11
#110
#1100
#1101
#1103
#1104
#1105
#1106
#1107
#111
#1110
#1112
#1114
#1115
#1116
#1118
#1119
#112
#1120
#1121
#1124
#1126
#1128
#1129
#113
#1130
#1131
#1134
#1135
#1136
#1137
#1138
#1139
#114
#1140
#1141
#1143
#1144
#1147
#115
#1151
#1152
#1153
#1154
#1155
#1156
#116
#1160
#1161
#1164
#1165
#1166
#1168
#117
#1170
#1171
#1172
#1174
#1175
#1176
#1177
#1178
#1179
#118
#1182
#1183
#1184
#1185
#1187
#119
#1190
#1191
#1192
#1193
#1194
#1195
#1196
#1198
#1199
#12
#120
#1202
#1206
#1207
#1208
#121
#1211
#1212
#1213
#1214
#1215
#1216
#1217
#1218
#122
#1220
#1221
#1222
#1223
#1224
#1226
#123
#1231
#1235
#1236
#1238
#1239
#124
#1240
#1241
#1243
#1244
#1249
#125
#1250
#1251
#1252
#1257
#126
#1260
#1261
#1262
#1263
#1266
#1268
#1269
#127
#1270
#1272
#1274
#1275
#1276
#1277
#1278
#1279
#128
#1280
#1283
#1284
#1285
#1286
#1287
#1288
#129
#1292
#1295
#1296
#13
#130
#1300
#1301
#1303
#1304
#1305
#1306
#1307
#1308
#1309
#131
#1310
#1311
#1313
#1314
#1315
#1318
#132
#1320
#1321
#1322
#1326
#1328
#1329
#1330
#1331
#1332
#1333
#1335
#1336
#1337
#1339
#134
#1340
#1345
#1346
#1347
#1349
#135
#1350
#1352
#1354
#1355
#136
#137
#138
#139
#14
#141
#142
#143
#144
#145
#146
#147
#148
#149
#150
#151
#152
#153
#154
#155
#156
#157
#158
#16
#161
#162
#163
#168
#169
#17
#170
#171
#172
#173
#174
#175
#176
#177
#178
#179
#180
#181
#182
#184
#185
#186
#187
#188
#189
#19
#190
#191
#192
#193
#194
#195
#197
#198
#2
#20
#200
#202
#204
#205
#206
#207
#208
#21
#210
#212
#213
#215
#216
#218
#219
#22
#220
#225
#226
#229
#23
#231
#232
#233
#234
#235
#236
#237
#238
#239
#24
#240
#241
#243
#244
#246
#247
#248
#250
#251
#252
#253
#259
#260
#261
#262
#264
#265
#268
#269
#27
#270
#272
#273
#274
#275
#276
#277
#278
#279
#28
#280
#282
#283
#284
#287
#289
#290
#291
#292
#294
#295
#298
#299
#3
#301
#302
#303
#304
#307
#309
#31
#310
#311
#312
#313
#315
#317
#318
#32
#320
#321
#324
#325
#326
#327
#328
#329
#33
#330
#331
#332
#333
#336
#337
#338
#341
#342
#343
#344
#346
#347
#348
#349
#35
#351
#352
#355
#356
#36
#360
#364
#366
#368
#369
#37
#370
#371
#374
#375
#377
#38
#382
#386
#39
#390
#391
#392
#394
#4
#40
#400
#402
#404
#405
#406
#408
#409
#41
#410
#411
#413
#414
#415
#416
#417
#418
#42
#421
#422
#424
#426
#427
#428
#429
#43
#430
#431
#435
#438
#439
#44
#441
#442
#443
#444
#445
#446
#448
#449
#45
#453
#454
#457
#458
#46
#460
#461
#462
#465
#468
#469
#47
#470
#471
#473
#475
#478
#48
#480
#481
#482
#483
#484
#486
#487
#488
#489
#49
#492
#493
#494
#495
#496
#497
#5
#50
#501
#502
#504
#505
#506
#508
#509
#51
#510
#511
#512
#513
#515
#516
#517
#518
#52
#520
#524
#525
#528
#529
#53
#531
#533
#534
#535
#536
#537
#54
#540
#541
#542
#544
#546
#547
#549
#55
#550
#552
#553
#554
#555
#557
#558
#559
#56
#560
#563
#565
#566
#567
#569
#57
#570
#571
#573
#574
#577
#578
#579
#58
#580
#581
#582
#583
#584
#585
#587
#588
#589
#592
#593
#595
#598
#6
#602
#603
#604
#606
#607
#608
#609
#61
#610
#611
#612
#616
#617
#618
#62
#620
#622
#624
#628
#630
#631
#637
#639
#64
#640
#642
#643
#645
#648
#65
#652
#653
#654
#66
#661
#662
#668
#670
#672
#674
#676
#677
#68
#680
#682
#683
#684
#688
#689
#69
#692
#695
#696
#698
#699
#7
#70
#700
#701
#702
#705
#707
#708
#709
#71
#710
#711
#712
#713
#714
#716
#717
#719
#72
#720
#722
#723
#724
#726
#727
#728
#73
#734
#735
#738
#739
#74
#740
#741
#742
#745
#748
#75
#751
#752
#754
#757
#759
#76
#760
#762
#764
#768
#77
#771
#774
#78
#782
#786
#787
#788
#789
#79
#790
#791
#794
#795
#796
#797
#798
#799
#80
#801
#802
#803
#807
#81
#810
#814
#817
#820
#823
#824
#825
#826
#828
#829
#83
#833
#835
#836
#837
#838
#84
#840
#841
#842
#843
#844
#845
#85
#850
#851
#852
#853
#855
#857
#858
#86
#860
#861
#863
#864
#866
#868
#87
#870
#871
#872
#874
#875
#876
#878
#879
#880
#881
#882
#883
#887
#889
#89
#891
#892
#894
#896
#897
#899
#9
#90
#900
#901
#902
#903
#906
#907
#91
#910
#911
#913
#914
#916
#920
#921
#922
#923
#924
#926
#928
#929
#93
#931
#932
#933
#934
#935
#936
#937
#938
#939
#94
#941
#943
#944
#945
#947
#948
#949
#951
#952
#954
#957
#958
#959
#96
#963
#965
#966
#968
#969
#97
#970
#971
#972
#973
#976
#977
#98
#980
#983
#984
#985
#987
#988
#989
#99
#991
#992
#993
#995
#996
#998
#999
t0002
Select branches
Hide Pull Requests
fcp/checkpoint_tolerance
fcp/context_shift_fix
fcp/fix_rpc_device
ik/FlashMLA-3
ik/adapt_iq1_iq2_bn
ik/adaptive_p
ik/adaptive_p_2
ik/add_forgotten_multi_add
ik/add_granite
ik/add_iq3ks_to_gguf
ik/add_jinja_file_help
ik/add_missing_enum_values_qwen3
ik/add_missing_gguf_constants
ik/add_missing_mmq_iq5ks
ik/add_mmq_id
ik/add_mtmd
ik/add_q60
ik/add_vq_help
ik/allow_empty_splits
ik/andrew_trellis
ik/another_mmq_id_fix
ik/apply_cuda_faster_iq3k
ik/arch_flags
ik/arm_better_r4
ik/attn_gemm
ik/avoid_cuda_mla_1
ik/avx2_bf16
ik/avx2_flash_attn
ik/avx2_flash_attn_2
ik/avx2_q4_0_q8_0
ik/avx2_q5_0
ik/avx2_r4_tweaks
ik/backend_reduce_syncs
ik/bailingmoe2
ik/bailingmoe2_graph
ik/bench_gp
ik/better_batched_processing
ik/better_cpu_fa_thread_strategy
ik/better_fa_glm45
ik/better_fa_masking
ik/better_flash_mla
ik/better_graph_pp
ik/better_graph_tg
ik/better_iq4_nl
ik/better_iqk_strategy
ik/better_model_info
ik/better_tg_fattn
ik/bf16_kv_cache
ik/bf16_r4
ik/biased_mmvq
ik/biased_qkv
ik/bitnet_adjustments
ik/bitnet_cuda
ik/bitnet_fused_unary
ik/bitnet_improve_metal
ik/bitnet_optional_scales
ik/bitnet_token_embedding_gpu
ik/bitnet_token_embedding_gpu_2
ik/buffer_type_overrides
ik/bug_missing_parentheses
ik/cached_graph
ik/change_default_fa_offset
ik/change_fmoe_fa_defaults
ik/change_q_pure
ik/chat_templates
ik/check_up_gate_fmoe
ik/clang_warnings
ik/cleanup_fudge_factors
ik/cohere2
ik/cohere2_sm_graph
ik/convert_i2s
ik/copyright
ik/correct_glm47_flash_gating_func
ik/correct_missing_gating_func_comments
ik/cpp_17
ik/cpu_argsort
ik/cpu_deepseek_fa
ik/cpu_fa_dont_repack_tg
ik/cpu_fa_tg_glm4.5
ik/cpu_moe_tg
ik/cpu_repeat
ik/cpu_swa_v0
ik/cpu_swa_v1
ik/cpu_swa_v2
ik/cpu_topk_moe
ik/cuda_better_moe
ik/cuda_bf16
ik/cuda_faster_iq2k
ik/cuda_faster_iq4nl_kvcache
ik/cuda_faster_moe_tg
ik/cuda_fattn_Dk_Dv
ik/cuda_fix_quantized_flash_mla3
ik/cuda_flash_mla3
ik/cuda_flash_mla3_v2
ik/cuda_flash_mla_q8_0
ik/cuda_graphs_with_overrides
ik/cuda_grouped_topk
ik/cuda_iq1_m_r4
ik/cuda_iq1_s_r4
ik/cuda_iq2k_use_bperm1
ik/cuda_iq3k_use_bperm1
ik/cuda_iq4_k_r4
ik/cuda_iqk_ks_r4
ik/cuda_iqk_r4
ik/cuda_large_cpy
ik/cuda_lto
ik/cuda_mailine_fixes
ik/cuda_mla
ik/cuda_mla2
ik/cuda_mmq_iq2_k
ik/cuda_mmq_iq4_k
ik/cuda_mmq_iq4_ks
ik/cuda_native
ik/cuda_params
ik/cuda_q4_0_r4
ik/cuda_quantized_fmoe
ik/cuda_refactor_fattn
ik/cuda_rms_non_contiguous
ik/cuda_rope_back
ik/cuda_set_device
ik/cuda_swa2
ik/cuda_swa3
ik/cuda_topk_moe
ik/cuda_tracer
ik/cuda_use_bperm
ik/custom_q_rules
ik/debug_849
ik/debug_issue_721
ik/debug_issue_733
ik/dedup_stb_image
ik/deepseek_fa_opt
ik/deepseek_guarantee_rope_fusion
ik/deepseek_is_this_better
ik/deepseek_merge_qk
ik/deepseek_mla0
ik/deepseek_opt
ik/deepseek_rope_cache
ik/delta_net
ik/dequant_gemm
ik/dequant_moe_gemm
ik/desperate_bug_fix_attempt
ik/disable_add_fused_rms
ik/disable_experimental_code1
ik/disable_fusion_by_default
ik/disable_multi_add
ik/disable_or_enable_p2p
ik/disable_rope_cache
ik/disable_sm_row
ik/disable_some_fusion
ik/disable_vocab_debug
ik/dont_abort_on_nccl_init_failure
ik/dont_split_output
ik/dup_experts_bias
ik/enable_fusion_by_default
ik/enable_mla3_in_crippled_ggufs
ik/ernie_graph
ik/extra_reduce_types
ik/fa_mainline_compat
ik/fa_offset_2
ik/falcon3
ik/falcon3a
ik/falcon_edge
ik/faster_avx2_q40
ik/faster_iq3_iq5_quantize
ik/faster_iq4k
ik/faster_iq4k_quantize
ik/faster_iq4nl_quantize
ik/faster_moe_quantize
ik/faster_q60_avx2
ik/fattn_Dk_Dv
ik/fattn_bf16
ik/fattn_enable_iq4_nl
ik/fattn_enable_q6_0
ik/fattn_gqa_10
ik/fattn_is_supported
ik/fattn_kq_max_offset
ik/fattn_kqv
ik/fattn_mma
ik/fattn_q35dense
ik/fattn_work_buffer
ik/fix_1015
ik/fix_1055
ik/fix_1205
ik/fix_1237
ik/fix_300
ik/fix_358
ik/fix_412
ik/fix_447
ik/fix_499
ik/fix_538
ik/fix_596
ik/fix_827
ik/fix_Makefile
ik/fix_add_bf16_turing
ik/fix_after_883
ik/fix_again_cmake
ik/fix_annoying_warnings
ik/fix_arm_fa
ik/fix_avx2_gemm_mess
ik/fix_avx2_iq4_nl_r4
ik/fix_avx512_vs_fancy_simd
ik/fix_batched_cublas
ik/fix_bench_compile
ik/fix_bug_481
ik/fix_comma_pauses
ik/fix_compiler_warnings
ik/fix_contiguously_allocated
ik/fix_cpu_fa_work_buffer_size
ik/fix_cuda_fa_race
ik/fix_cuda_memcpy_async
ik/fix_cuda_scale_bug
ik/fix_debug_build
ik/fix_deepseek_fattn
ik/fix_deepseek_q80_cache
ik/fix_dequantize_when_requantizing
ik/fix_div_zero
ik/fix_dup_q
ik/fix_exp_shexp_split
ik/fix_experts_node_name
ik/fix_fa_192_128
ik/fix_fa_avx2_bug
ik/fix_fattn_odd_even
ik/fix_fattn_supported
ik/fix_flash_attn
ik/fix_fused_grouped_topk
ik/fix_gcc_arm
ik/fix_gemma3_vision
ik/fix_ggml_common
ik/fix_glm4_attn
ik/fix_gpt_oss_partial_offload
ik/fix_graph_parallel_partial_offload
ik/fix_hybrid_detection
ik/fix_imatrix_check
ik/fix_imatrix_nonsense
ik/fix_iq4k_avx2
ik/fix_iqk_for_strange_numrows
ik/fix_kimi2_parse
ik/fix_kld
ik/fix_kq
ik/fix_llama4_attention
ik/fix_metal_fa
ik/fix_missing_bf16_avx512
ik/fix_missing_dry
ik/fix_missing_end
ik/fix_mla_imatrix
ik/fix_mmq_id
ik/fix_mmq_overflow
ik/fix_mmvq_bug
ik/fix_mul_mat_16
ik/fix_multiple_choice
ik/fix_neon_build
ik/fix_neon_legacy_quants
ik/fix_neon_q82
ik/fix_no_iqk_build
ik/fix_no_p2p_case
ik/fix_perf_regression
ik/fix_pr_261
ik/fix_pr_842
ik/fix_q41_q51_arm
ik/fix_q5_0_fa
ik/fix_q6_0_dequantize
ik/fix_q80_avx2_2
ik/fix_q80_avx2_mess
ik/fix_q80_moe_avx2
ik/fix_quantize_kt
ik/fix_quantized_k_cache
ik/fix_quantized_kv_nofa
ik/fix_reduce_race
ik/fix_reduce_windows
ik/fix_repacked_legacy_quants
ik/fix_replace_all
ik/fix_requantize_interleaved
ik/fix_ring_reduction
ik/fix_rope_norm_fast_cuda
ik/fix_rpc_off
ik/fix_rpc_off2
ik/fix_rtr_mqkv
ik/fix_ser
ik/fix_ser_cuda
ik/fix_standard_attention_cpu
ik/fix_sync_logic
ik/fix_the_fix
ik/fix_typo
ik/fix_up_gate_mmq_not_supported
ik/fix_vulkan_required
ik/fix_windows
ik/fix_windows_avx512
ik/fix_windows_no_omp
ik/fix_xeon_6226R
ik/flash_mla
ik/flash_mla2_cuda_no_f32
ik/flash_mla2_no_f32
ik/flash_mla_2
ik/flash_mla_4
ik/flash_precision
ik/flax-vector-conversions
ik/format_name
ik/fuse_add_add_fused_rms
ik/fuse_add_fused_rms
ik/fuse_bias_only_tg
ik/fuse_biased_qkv
ik/fuse_kvcache_copy
ik/fuse_merge_up_gate_exps
ik/fuse_moe_up_gate
ik/fuse_mul_mat_scale
ik/fuse_qkv
ik/fused_bailingmoev2
ik/fused_delta_net
ik/fused_delta_net_2
ik/fused_delta_net_3
ik/fused_delta_net_3a
ik/fused_ffn_up_gate
ik/fused_mul_multiadd
ik/fused_mul_unary
ik/fused_mul_unary_1
ik/fused_norm
ik/fused_rms_norm
ik/fused_rms_rms
ik/fused_rope_rope
ik/fused_softcap_softmax
ik/fused_up_gate_unary
ik/gemm_4d
ik/gemm_iq1s
ik/gemm_neon_1bit
ik/gemm_neon_iqk
ik/gemm_neon_iquants
ik/gemm_neon_kquants
ik/gemm_neon_legacy
ik/gemma3
ik/gemma3_mqkv_rcache
ik/gemma_output_tensor
ik/gemma_q80_kvcache
ik/gemv_bf16_r16
ik/gguf_bool_arrays
ik/gguf_py_add_maxfp4
ik/gguf_py_changes_for_np2.0
ik/glm45_tg_fa_hack
ik/glm45_tg_very_fast
ik/glm47_fa_2
ik/glm47_tg_fa_hack
ik/glm5
ik/glm_flash
ik/gpt-oss
ik/gpt_oss_graph
ik/graph_alloc
ik/graph_better_splits
ik/graph_parallel_tweak
ik/graph_reuse
ik/graph_reuse_on
ik/handle_incompatible_deepseek_ggufs
ik/handle_split_cache
ik/hide_imatrix
ik/hsums
ik/huihui_57B
ik/hunyuan_graph
ik/ignore_nextn_layers
ik/imatrix_lsim
ik/improve_iq1m
ik/improve_iq2_xs
ik/improve_iq2ks
ik/improve_mmq
ik/interleaved_guards
ik/iq1_kt
ik/iq1_m_neon
ik/iq1_m_r4
ik/iq1_s_checks
ik/iq1_s_gemm
ik/iq1_s_r4
ik/iq1_s_r4_k128
ik/iq1_s_r4_neon
ik/iq1_tn
ik/iq1_tn_cuda
ik/iq1_tn_metal
ik/iq1bn_metal
ik/iq1m_gemm
ik/iq2_bn_r4
ik/iq2_k
ik/iq2_k_r4
ik/iq2_k_tweak
ik/iq2_kl
ik/iq2_s_r4
ik/iq2_tn
ik/iq2_tn_as_iq2_bn
ik/iq2_tn_avx2
ik/iq2_tn_faster_pp
ik/iq2_xs_r4
ik/iq2_xxs_gemm
ik/iq2_xxs_r4
ik/iq2k_experiments
ik/iq2ks_experiments
ik/iq3_k_r4_v2
ik/iq3_ks
ik/iq3_ks_v2
ik/iq3_s_gemm
ik/iq3_s_r4
ik/iq3_s_r4_v2
ik/iq3_xxs_gemm
ik/iq3_xxs_r4
ik/iq3_xxs_r4_v2
ik/iq4_k
ik/iq4_k_r4
ik/iq4_k_r4_avx2
ik/iq4_k_tweaks
ik/iq4_k_xxs
ik/iq4_knn
ik/iq4_ks_r4
ik/iq4_kss
ik/iq4_kss_improvements
ik/iq4_nl_cache
ik/iq4_nl_x4
ik/iq4_xs_r4
ik/iq4_xs_r4_avx2
ik/iq4_xs_r8
ik/iq4_xs_r8_v2
ik/iq4kss_experiments
ik/iq4nl_kv_cache
ik/iq5_k_r4
ik/iq5_ks
ik/iq5_ks_r4
ik/iq6_k
ik/iq_gemv_tweaks
ik/iqk_fattn_all_quants
ik/iqk_gemm
ik/iqk_mmvq_opt
ik/iqk_q_improvements
ik/is_this_better_for_multi_gpu
ik/issue_214
ik/issue_217
ik/issue_224
ik/issue_230
ik/k_cache_hadamard
ik/k_cache_hadamard_cuda
ik/kq_fused_softmax
ik/kq_mask
ik/kq_mask_padding_64
ik/l4_rms_norm
ik/legacy_gemm
ik/llama4
ik/llama_bench_mla3
ik/llama_bench_n_cpu_moe
ik/llama_bench_overrides
ik/llama_bench_rcache
ik/llama_bench_sas
ik/llama_bench_tgb
ik/llama_hparams_add_mla
ik/llama_warnings
ik/make_biased_gemv_optional
ik/make_qx_quants
ik/mask_mt
ik/max_nodes
ik/max_nodes_again
ik/measure_barriers
ik/merge_Aug_12_2024
ik/merge_July_26_2024
ik/merge_only_qk
ik/merge_qkv
ik/merge_up_gate_exps_2
ik/merge_up_gate_exps_3
ik/metal_bf16
ik/metal_faster_iq4ks
ik/metal_fattn_update
ik/metal_fix_iq2k
ik/metal_fix_iq3k
ik/metal_moe
ik/metal_new_trellis
ik/mimo2
ik/mimo2_4_gpus
ik/mimo2_graph
ik/minimax2_very_fast
ik/minimax_graph_minor
ik/ministral3
ik/minmax2_sm_graph
ik/minor_delta_tweak
ik/minor_iq2ks_tweak
ik/mistral3_large
ik/mistral3_std_attn
ik/mla
ik/mla2_q80_cache
ik/mla2_q80_cache_cpu
ik/mla=3_by_default
ik/mla_fixes
ik/mla_guard
ik/mla_imatrix
ik/mla_no_transposed_cache
ik/mla_q80
ik/mmq_id_thresh
ik/mmq_iq_ks_r4
ik/mmq_to_cublas
ik/mmvq_args
ik/mmvq_fuse_bias
ik/mmvq_type_supported
ik/moe_fused_unary
ik/moe_offload_strategy
ik/more_set_device
ik/mtmd_reduce_memory_use
ik/mul_mat_bf16
ik/mul_mat_ext
ik/multi_add
ik/mv_q4_0_r4
ik/mxfp4
ik/n_cpu_moe
ik/nccl1
ik/nccl2
ik/nccl3
ik/nccl3_async
ik/neon_bf16
ik/neon_flash_attention_2
ik/neon_flash_attention_3
ik/neon_improve_legacy_quants
ik/neon_iq3_kt
ik/new_iq1bn
ik/new_iq2kt
ik/new_iq2kt_v2
ik/new_iq4kt
ik/new_trellis_2
ik/no_KV_for_unused_layers
ik/non_contiguous_rope
ik/offline_repack
ik/offline_repack_patterns
ik/offload_policy
ik/ooae2
ik/ooae_on_by_default
ik/opt_kt_quants
ik/option_cpu_fa
ik/option_to_disable_cuda_fusion
ik/optional_yarn_log_multiplier
ik/p2p_cpy_set_device
ik/per_row_scale
ik/phi3.5_tweaks
ik/pickup_13095
ik/play_with_barrier
ik/poc_tp
ik/poc_tp_glm4.5
ik/prepare_wk_b
ik/q2_k_r4
ik/q3_k_r4
ik/q3next_concat
ik/q3next_concat_cpu
ik/q3next_cuda_graphs
ik/q3next_opt2
ik/q3next_opt3
ik/q4_0_r4
ik/q4_0_r8
ik/q4_k_gemm
ik/q4_k_r4
ik/q4_k_r4_v2
ik/q4_k_r4_v3
ik/q5_0_r4
ik/q5_k_r4
ik/q60_mmq
ik/q6_0_r4
ik/q6_k_gemm
ik/q6_k_r4
ik/q8_0_r4
ik/q8_KV
ik/q8_k_r16
ik/q8_k_r8
ik/q8_k_r8_avx512
ik/qkvz_tweak
ik/qkvz_tweak1
ik/qmix_tweaks
ik/qmix_tweaks_2
ik/qstats
ik/quantization_tweaks
ik/quantize_dry_run
ik/quantize_ffn_gate_inp
ik/quantize_q8k_avx2
ik/quantize_stats
ik/qwen3.5_vision
ik/qwen35_std_attn
ik/qwen35dense
ik/qwen35moe
ik/qwen3_graph
ik/qwen3next
ik/qwen3vl_graph
ik/qx_0_r4_avx2
ik/qx_k_b32_avx2
ik/r4_faster_zen4
ik/r4_neon
ik/r4_nrcy_16
ik/really_fix_rope_cache
ik/reduce_compute_buffers
ik/reduce_make_copies
ik/reduce_mla3_compute_buffer_size
ik/reduce_no_nccl
ik/reduce_race_quick_fix
ik/refactor_iqk
ik/refactor_llama.cpp
ik/remove_iqk_option
ik/remove_kv_l
ik/remove_llamafile
ik/remove_scary_warning
ik/remove_unnecessary_calls
ik/remove_unnessessary_ids_copy
ik/rename_4_8
ik/rename_iq4_nl_x4
ik/reorg_mmvq_and_fuse_bias
ik/repack_also_experts
ik/repack_f16
ik/revert_0bf4d997
ik/revert_739
ik/revert_delta_net_3
ik/reverts
ik/ring_reduce
ik/rms_block_size
ik/rng_sampling
ik/rope_cache
ik/run_time_repack
ik/sampling-top-n-sigma
ik/sampling-xtc
ik/sampling_refactor_sorting
ik/sanitize_importance_iqk
ik/sanitize_importance_kt_quants
ik/sched_copy_experts
ik/sched_max_copies=1
ik/server_send_done
ik/shexps_better_hybrid
ik/simplify_delta_net
ik/simplify_delta_net_2
ik/skip_get_rows
ik/skip_noop_barriers
ik/skip_rowids_computation
ik/skip_unnecessary_quantize
ik/slightly_better_fdn
ik/slightly_better_graph_split_strategy
ik/sm_graph_cuda_graphs
ik/sm_graph_delta_net
ik/sm_graph_disable_cuda_graphs
ik/sm_graph_max_gpu
ik/sm_graph_q35
ik/sm_graph_q3next
ik/sm_graph_qwen35moe
ik/sm_graph_rearrange
ik/sm_graph_seedoss
ik/sm_graph_step35
ik/sm_graph_sync
ik/smart_expert_selection
ik/smollm3
ik/softcap
ik/softcap_minor
ik/split_graph_2
ik/split_mode_f32
ik/step35
ik/step35_compat
ik/support_gigachat
ik/sweep_bench_n_predict
ik/sweep_bench_nrep
ik/sweep_bench_warmup
ik/swiglu
ik/sync_fa
ik/tensor_override_honor_mmap
ik/test_q80_NaNs
ik/test_thp
ik/tg_tweaks
ik/topk_moe_fuse_bias
ik/topk_moe_with_norm
ik/trellis_bf16
ik/trellis_metal
ik/trellis_neon
ik/trellis_opt
ik/trinet
ik/try_authors
ik/try_cuda_graphs
ik/try_fa_no_q80_repack
ik/try_fix_1014
ik/try_fix_1201
ik/try_fix_1222
ik/try_fix_367
ik/try_fix_367_v2
ik/try_fix_690
ik/try_fix_772
ik/try_fix_854
ik/try_fix_974
ik/try_fix_avx2_fa
ik/try_fix_many_gpus
ik/try_fix_many_gpus_2
ik/try_grouped_topk_playing1
ik/try_remove_cpy_indirection
ik/try_split_mla
ik/try_split_offloaded_moe_up_gate
ik/try_svd
ik/try_trellis
ik/undo_1049_if_tensor_overrides
ik/undo_sync_reduction
ik/update_authors
ik/update_license
ik/use_bf16_when_no_mmq
ik/use_mmq_id_for_moe
ik/use_q8_2
ik/validate_quants_on_load
ik/vendor
ik/vulkan1
ik/vulkan_again
ik/vulkan_disable_fused_ops
ik/vulkan_disable_multi_add
ik/vulkan_fattn
ik/vulkan_fused_mul_unary
ik/vulkan_fused_rms
ik/vulkan_multi_add
ik/wip_sync_llama
ik/zen4_faster_iq4ks_iq5ks
ik/zen4_flash_attn
ik/zen4_flash_attn_2
ik/zen4_flash_attn_bf16
ik/zen4_iq4_xs_r4
ik/zen4_repack_f16
ikawrakow-patch-1
ikawrakow-patch-1-1
main
s6/MLA_prompt_save_restore_fix
s6/bitnet2b_2501
s6/bitnet_name_update
s6/cache_default
s6/deci_support
s6/docs_update
s6/dots
s6/fix_kshift_crash
s6/fix_prompt_tokenization
s6/fix_python
s6/fp8_native
s6/list_prompt_cache
s6/mikupad
s6/mla
s6/numa_KV
s6/qwen3_dynamic_yarn
s6/readme-minor1
s6/readme-minor2
s6/readme_update
s6/remove_kv_l
s6/rope_freq_fix
s6/rpc
s6/seed_support2
s6/sweep_bench
s6/sweep_bench_update
s6/termux_fix
s6/warmup
#1
#10
#1000
#1001
#1003
#1004
#1005
#1006
#1007
#1008
#101
#1011
#1012
#1016
#1017
#1018
#102
#1022
#1023
#1024
#1025
#1026
#1027
#1029
#1030
#1031
#1032
#1033
#1034
#1035
#1036
#1037
#1038
#1039
#1040
#1042
#1047
#1048
#1049
#105
#1050
#1051
#1052
#1053
#1054
#1056
#1057
#1058
#1059
#106
#1060
#1061
#1062
#1063
#1064
#1065
#1067
#1068
#1069
#107
#1070
#1071
#1073
#1079
#108
#1080
#1082
#1086
#1087
#1088
#1089
#109
#1091
#1092
#1093
#1094
#1096
#1097
#11
#110
#1100
#1101
#1103
#1104
#1105
#1106
#1107
#111
#1110
#1112
#1114
#1115
#1116
#1118
#1119
#112
#1120
#1121
#1124
#1126
#1128
#1129
#113
#1130
#1131
#1134
#1135
#1136
#1137
#1138
#1139
#114
#1140
#1141
#1143
#1144
#1147
#115
#1151
#1152
#1153
#1154
#1155
#1156
#116
#1160
#1161
#1164
#1165
#1166
#1168
#117
#1170
#1171
#1172
#1174
#1175
#1176
#1177
#1178
#1179
#118
#1182
#1183
#1184
#1185
#1187
#119
#1190
#1191
#1192
#1193
#1194
#1195
#1196
#1198
#1199
#12
#120
#1202
#1206
#1207
#1208
#121
#1211
#1212
#1213
#1214
#1215
#1216
#1217
#1218
#122
#1220
#1221
#1222
#1223
#1224
#1226
#123
#1231
#1235
#1236
#1238
#1239
#124
#1240
#1241
#1243
#1244
#1249
#125
#1250
#1251
#1252
#1257
#126
#1260
#1261
#1262
#1263
#1266
#1268
#1269
#127
#1270
#1272
#1274
#1275
#1276
#1277
#1278
#1279
#128
#1280
#1283
#1284
#1285
#1286
#1287
#1288
#129
#1292
#1295
#1296
#13
#130
#1300
#1301
#1303
#1304
#1305
#1306
#1307
#1308
#1309
#131
#1310
#1311
#1313
#1314
#1315
#1318
#132
#1320
#1321
#1322
#1326
#1328
#1329
#1330
#1331
#1332
#1333
#1335
#1336
#1337
#1339
#134
#1340
#1345
#1346
#1347
#1349
#135
#1350
#1352
#1354
#1355
#136
#137
#138
#139
#14
#141
#142
#143
#144
#145
#146
#147
#148
#149
#150
#151
#152
#153
#154
#155
#156
#157
#158
#16
#161
#162
#163
#168
#169
#17
#170
#171
#172
#173
#174
#175
#176
#177
#178
#179
#180
#181
#182
#184
#185
#186
#187
#188
#189
#19
#190
#191
#192
#193
#194
#195
#197
#198
#2
#20
#200
#202
#204
#205
#206
#207
#208
#21
#210
#212
#213
#215
#216
#218
#219
#22
#220
#225
#226
#229
#23
#231
#232
#233
#234
#235
#236
#237
#238
#239
#24
#240
#241
#243
#244
#246
#247
#248
#250
#251
#252
#253
#259
#260
#261
#262
#264
#265
#268
#269
#27
#270
#272
#273
#274
#275
#276
#277
#278
#279
#28
#280
#282
#283
#284
#287
#289
#290
#291
#292
#294
#295
#298
#299
#3
#301
#302
#303
#304
#307
#309
#31
#310
#311
#312
#313
#315
#317
#318
#32
#320
#321
#324
#325
#326
#327
#328
#329
#33
#330
#331
#332
#333
#336
#337
#338
#341
#342
#343
#344
#346
#347
#348
#349
#35
#351
#352
#355
#356
#36
#360
#364
#366
#368
#369
#37
#370
#371
#374
#375
#377
#38
#382
#386
#39
#390
#391
#392
#394
#4
#40
#400
#402
#404
#405
#406
#408
#409
#41
#410
#411
#413
#414
#415
#416
#417
#418
#42
#421
#422
#424
#426
#427
#428
#429
#43
#430
#431
#435
#438
#439
#44
#441
#442
#443
#444
#445
#446
#448
#449
#45
#453
#454
#457
#458
#46
#460
#461
#462
#465
#468
#469
#47
#470
#471
#473
#475
#478
#48
#480
#481
#482
#483
#484
#486
#487
#488
#489
#49
#492
#493
#494
#495
#496
#497
#5
#50
#501
#502
#504
#505
#506
#508
#509
#51
#510
#511
#512
#513
#515
#516
#517
#518
#52
#520
#524
#525
#528
#529
#53
#531
#533
#534
#535
#536
#537
#54
#540
#541
#542
#544
#546
#547
#549
#55
#550
#552
#553
#554
#555
#557
#558
#559
#56
#560
#563
#565
#566
#567
#569
#57
#570
#571
#573
#574
#577
#578
#579
#58
#580
#581
#582
#583
#584
#585
#587
#588
#589
#592
#593
#595
#598
#6
#602
#603
#604
#606
#607
#608
#609
#61
#610
#611
#612
#616
#617
#618
#62
#620
#622
#624
#628
#630
#631
#637
#639
#64
#640
#642
#643
#645
#648
#65
#652
#653
#654
#66
#661
#662
#668
#670
#672
#674
#676
#677
#68
#680
#682
#683
#684
#688
#689
#69
#692
#695
#696
#698
#699
#7
#70
#700
#701
#702
#705
#707
#708
#709
#71
#710
#711
#712
#713
#714
#716
#717
#719
#72
#720
#722
#723
#724
#726
#727
#728
#73
#734
#735
#738
#739
#74
#740
#741
#742
#745
#748
#75
#751
#752
#754
#757
#759
#76
#760
#762
#764
#768
#77
#771
#774
#78
#782
#786
#787
#788
#789
#79
#790
#791
#794
#795
#796
#797
#798
#799
#80
#801
#802
#803
#807
#81
#810
#814
#817
#820
#823
#824
#825
#826
#828
#829
#83
#833
#835
#836
#837
#838
#84
#840
#841
#842
#843
#844
#845
#85
#850
#851
#852
#853
#855
#857
#858
#86
#860
#861
#863
#864
#866
#868
#87
#870
#871
#872
#874
#875
#876
#878
#879
#880
#881
#882
#883
#887
#889
#89
#891
#892
#894
#896
#897
#899
#9
#90
#900
#901
#902
#903
#906
#907
#91
#910
#911
#913
#914
#916
#920
#921
#922
#923
#924
#926
#928
#929
#93
#931
#932
#933
#934
#935
#936
#937
#938
#939
#94
#941
#943
#944
#945
#947
#948
#949
#951
#952
#954
#957
#958
#959
#96
#963
#965
#966
#968
#969
#97
#970
#971
#972
#973
#976
#977
#98
#980
#983
#984
#985
#987
#988
#989
#99
#991
#992
#993
#995
#996
#998
#999
t0002
-
cae6483d88
ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)
Kawrakow
2023-04-21 17:18:26 +02:00 -
1bfc153e2f
ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)
Kawrakow
2023-04-21 17:18:26 +02:00 -
b9387e79c4
Show perplexity ETA in hours and minutes (#1096)
slaren
2023-04-21 14:57:57 +02:00 -
3d59769c3b
Show perplexity ETA in hours and minutes (#1096)
slaren
2023-04-21 14:57:57 +02:00 -
465c3659d2
llama : fix comment for "output.weight" tensor
Georgi Gerganov
2023-04-21 10:23:36 +03:00 -
d40fded93e
llama : fix comment for "output.weight" tensor
Georgi Gerganov
2023-04-21 10:23:36 +03:00 -
fe9564f09e
Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)
Stephan Walter
2023-04-20 21:56:44 +00:00 -
2510c1831f
Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)
Stephan Walter
2023-04-20 21:56:44 +00:00 -
9e824bf15c
ggml : sync ggml (add GPT-NeoX RoPE implementation)
Georgi Gerganov
2023-04-20 23:32:59 +03:00 -
12b5900dbc
ggml : sync ggml (add GPT-NeoX RoPE implementation)
Georgi Gerganov
2023-04-20 23:32:59 +03:00 -
f04613a668
ggml : fix bug in ggml_compute_forward_dup_f32()
Georgi Gerganov
2023-04-20 21:58:05 +03:00 -
9ff334f3c9
ggml : fix bug in ggml_compute_forward_dup_f32()
Georgi Gerganov
2023-04-20 21:58:05 +03:00 -
dc2fb22941
Add Q4_3 support to cuBLAS (#1086)
slaren
2023-04-20 20:49:53 +02:00 -
2005469ea1
Add Q4_3 support to cuBLAS (#1086)
slaren
2023-04-20 20:49:53 +02:00 -
7a693926b8
ggml : do not break cuBLAS build (Q4_3 is not yet implemented)
Georgi Gerganov
2023-04-20 21:43:50 +03:00 -
8a1756abdf
ggml : do not break cuBLAS build (Q4_3 is not yet implemented)
Georgi Gerganov
2023-04-20 21:43:50 +03:00 -
c3aa2316ac
ggml : fix Q4_3 quantization
Georgi Gerganov
2023-04-20 20:44:05 +03:00 -
66aab46079
ggml : fix Q4_3 quantization
Georgi Gerganov
2023-04-20 20:44:05 +03:00 -
6e34a4c7c8
llama : multi-threaded quantization (#1075)
Kawrakow
2023-04-20 19:42:27 +02:00 -
38de86a711
llama : multi-threaded quantization (#1075)
Kawrakow
2023-04-20 19:42:27 +02:00 -
0a8cdb2ea1
ggml : add Q4_3 quantization (#1082)
Georgi Gerganov
2023-04-20 20:35:53 +03:00 -
e0305ead3a
ggml : add Q4_3 quantization (#1082)
Georgi Gerganov
2023-04-20 20:35:53 +03:00 -
59f4d32a01
ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074)
Ivan Komarov
2023-04-20 17:15:18 +02:00 -
6a9661ea5a
ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074)
Ivan Komarov
2023-04-20 17:15:18 +02:00 -
b6c1bfc960
fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)
源文雨
2023-04-20 21:28:43 +08:00 -
5addcb120c
fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)
源文雨
2023-04-20 21:28:43 +08:00 -
091a53228c
AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)
Stephan Walter
2023-04-20 06:45:41 +00:00 -
c8c2c52482
AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)
Stephan Walter
2023-04-20 06:45:41 +00:00 -
881ecfb4ef
Improve cuBLAS performance by dequantizing on the GPU (#1065)
slaren
2023-04-20 03:14:14 +02:00 -
02d6988121
Improve cuBLAS performance by dequantizing on the GPU (#1065)
slaren
2023-04-20 03:14:14 +02:00 -
7ecc2d9e42
Minor: Readme fixed grammar, spelling, and misc updates (#1071)
CRD716
2023-04-19 14:52:14 -05:00 -
834695fe3a
Minor: Readme fixed grammar, spelling, and misc updates (#1071)
CRD716
2023-04-19 14:52:14 -05:00 -
e0e10251a3
Q4_2 quantization with rmse-optimized scale and quants (#1062)
Kawrakow
2023-04-19 20:20:14 +02:00 -
f7d05095b4
Q4_2 quantization with rmse-optimized scale and quants (#1062)
Kawrakow
2023-04-19 20:20:14 +02:00 -
73a59affb2
ggml : use 8-bit precision for Q4_1 intermediate results (#1047)
Georgi Gerganov
2023-04-19 20:10:08 +03:00 -
884e7d7a2b
ggml : use 8-bit precision for Q4_1 intermediate results (#1047)
Georgi Gerganov
2023-04-19 20:10:08 +03:00 -
068083ca76
readme : add warning about Q4_2 and Q4_3
Georgi Gerganov
2023-04-19 19:07:54 +03:00 -
7cd5c4a3e9
readme : add warning about Q4_2 and Q4_3
Georgi Gerganov
2023-04-19 19:07:54 +03:00 -
ec0e355be1
ggml : Q4 cleanup - remove 4-bit dot product code (#1061)
Stephan Walter
2023-04-19 16:06:37 +00:00 -
f3d4edf504
ggml : Q4 cleanup - remove 4-bit dot product code (#1061)
Stephan Walter
2023-04-19 16:06:37 +00:00 -
bc5977cc90
Add NVIDIA cuBLAS support (#1044)
slaren
2023-04-19 11:22:45 +02:00 -
8944a13296
Add NVIDIA cuBLAS support (#1044)
slaren
2023-04-19 11:22:45 +02:00 -
dee44e099f
Multi-threaded ggml_cpy (#1035)
slaren
2023-04-19 00:53:24 +02:00 -
6667401238
Multi-threaded ggml_cpy (#1035)
slaren
2023-04-19 00:53:24 +02:00 -
4207b4b129
ggml : add new Q4_2 quantization (ARM only) (#1046)
Georgi Gerganov
2023-04-18 23:54:57 +03:00 -
77a73403ca
ggml : add new Q4_2 quantization (ARM only) (#1046)
Georgi Gerganov
2023-04-18 23:54:57 +03:00 -
5eaa6d25cf
ggml : scratch that - vmlaq_n_f32 is always better
Georgi Gerganov
2023-04-18 23:11:23 +03:00 -
50a8a2af97
ggml : scratch that - vmlaq_n_f32 is always better
Georgi Gerganov
2023-04-18 23:11:23 +03:00 -
998b1d59c8
gitignore : vdot
Georgi Gerganov
2023-04-18 23:00:08 +03:00 -
4caebf6d40
gitignore : vdot
Georgi Gerganov
2023-04-18 23:00:08 +03:00 -
47aacf4239
ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators
Georgi Gerganov
2023-04-18 22:59:17 +03:00 -
dcdd65e296
ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators
Georgi Gerganov
2023-04-18 22:59:17 +03:00 -
684aeeb3e0
Adding a simple program to measure speed of dot products (#1041)
Kawrakow
2023-04-18 21:00:14 +02:00 -
5ecff35151
Adding a simple program to measure speed of dot products (#1041)
Kawrakow
2023-04-18 21:00:14 +02:00 -
426d0c45f4
readme : update hot topics about new LoRA functionality
Georgi Gerganov
2023-04-18 20:10:26 +03:00 -
7faa7460f0
readme : update hot topics about new LoRA functionality
Georgi Gerganov
2023-04-18 20:10:26 +03:00 -
b2ef9f4eae
ci : do not run on drafts
Georgi Gerganov
2023-04-17 18:00:10 +03:00 -
5af8e32238
ci : do not run on drafts
Georgi Gerganov
2023-04-17 18:00:10 +03:00 -
b1f527be59
Do not close file after mmap (Windows version) (#1034)
Ivan Komarov
2023-04-18 03:15:50 +02:00 -
42747220b4
Do not close file after mmap (Windows version) (#1034)
Ivan Komarov
2023-04-18 03:15:50 +02:00 -
3c9a24cc72
readme : add Ruby bindings (#1029)
Atsushi Tatsuma
2023-04-18 04:34:35 +09:00 -
e9298af389
readme : add Ruby bindings (#1029)
Atsushi Tatsuma
2023-04-18 04:34:35 +09:00 -
5e5d6ffdaa
add 4_0 to default outfile namestr dict (#1031)
Cameron
2023-04-17 11:26:23 -07:00 -
4ad73137a1
add 4_0 to default outfile namestr dict (#1031)
Cameron
2023-04-17 11:26:23 -07:00 -
dc0fa95077
Add LoRA support (#820)
slaren
2023-04-17 17:28:55 +02:00 -
315a95a4d3
Add LoRA support (#820)
slaren
2023-04-17 17:28:55 +02:00 -
368d63e55f
llama : well-defined static initialization of complex objects (#927)
Arik Poznanski
2023-04-17 17:41:53 +03:00 -
efd05648c8
llama : well-defined static initialization of complex objects (#927)
Arik Poznanski
2023-04-17 17:41:53 +03:00 -
9bed0fc823
quantize-stats : fix bug in --type argument
Georgi Gerganov
2023-04-17 17:31:06 +03:00 -
eb17a026fd
quantize-stats : fix bug in --type argument
Georgi Gerganov
2023-04-17 17:31:06 +03:00 -
42ea22af13
ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c
Georgi Gerganov
2023-04-17 16:16:23 +03:00 -
69b740289f
ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c
Georgi Gerganov
2023-04-17 16:16:23 +03:00 -
fb550a0f64
Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)
Ivan Komarov
2023-04-17 15:10:57 +02:00 -
f266259ad9
Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)
Ivan Komarov
2023-04-17 15:10:57 +02:00 -
b5fefdd2a8
Fix: do not close file on mmap (#1017)
slaren
2023-04-16 21:27:38 +02:00 -
47f61aaa5f
Fix: do not close file on mmap (#1017)
slaren
2023-04-16 21:27:38 +02:00 -
aa84f3b5d5
stdout : vertical align outputs for better readibility
Georgi Gerganov
2023-04-16 13:58:48 +03:00 -
3173a62eb9
stdout : vertical align outputs for better readibility
Georgi Gerganov
2023-04-16 13:58:48 +03:00 -
c5cb4f71c6
examples: add missing <ctime> include for time() (#1011)
Pavol Rusnak
2023-04-16 12:13:00 +02:00 -
489537e6cf
examples: add missing <ctime> include for time() (#1011)
Pavol Rusnak
2023-04-16 12:13:00 +02:00 -
598810e9c4
Fix msys2 build error and warnings (#1009)
nanahi
2023-04-16 17:13:42 +08:00 -
2d3481c721
Fix msys2 build error and warnings (#1009)
nanahi
2023-04-16 17:13:42 +08:00 -
a0909d9b15
convert.py: Fix loading safetensors and ggml format on Windows (#991)
comex
2023-04-15 14:53:21 -07:00 -
74f5899df4
convert.py: Fix loading safetensors and ggml format on Windows (#991)
comex
2023-04-15 14:53:21 -07:00 -
0363b17de2
Fix potential int8 overflow in non-SIMD vec_dot (#986)
Stephan Walter
2023-04-15 18:28:56 +00:00 -
2f7c8e014e
Fix potential int8 overflow in non-SIMD vec_dot (#986)
Stephan Walter
2023-04-15 18:28:56 +00:00 -
378ffbab0e
Refactor ggml.c for future tensor types (#1001)
Stephan Walter
2023-04-15 16:25:38 +00:00 -
0ad964631f
Refactor ggml.c for future tensor types (#1001)
Stephan Walter
2023-04-15 16:25:38 +00:00 -
053915a751
ggml : add Q8_0 quantization for intermediate results (#951)
Georgi Gerganov
2023-04-15 17:53:22 +03:00 -
e95b6554b4
ggml : add Q8_0 quantization for intermediate results (#951)
Georgi Gerganov
2023-04-15 17:53:22 +03:00 -
a15576393c
ggml : use posix_memalign on non-Windows env
Georgi Gerganov
2023-04-15 14:25:45 +03:00 -
aa485cee33
ggml : use posix_memalign on non-Windows env
Georgi Gerganov
2023-04-15 14:25:45 +03:00 -
ae921afa4a
benchmark : fix result validation in benchmark-q4_0-matmult (#987)
Ivan Komarov
2023-04-15 07:51:54 +02:00 -
c12b14b77f
benchmark : fix result validation in benchmark-q4_0-matmult (#987)
Ivan Komarov
2023-04-15 07:51:54 +02:00 -
05d97008d3
cmake : add finding the OpenBLAS header file (#992)
katsu560
2023-04-15 14:51:11 +09:00 -
106faaf297
cmake : add finding the OpenBLAS header file (#992)
katsu560
2023-04-15 14:51:11 +09:00 -
87bb0f74bd
Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)
Pavol Rusnak
2023-04-14 21:58:43 +02:00 -
c85e03d12e
Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)
Pavol Rusnak
2023-04-14 21:58:43 +02:00 -
66cf09af08
py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976)
Pavol Rusnak
2023-04-14 21:46:49 +02:00 -
489093548c
py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976)
Pavol Rusnak
2023-04-14 21:46:49 +02:00