llama-cpp: 6442 -> 6479
Includes a fix for v_dot2_f32_f16 being emitted on ISAs that lack that instruction. https://github.com/ggml-org/llama.cpp/pull/15927