ramalama: add more optional dependencies
1. llama-cpp, to run inference directly on the host system, without containers.
2. mlx-lm, as an alternative inference engine on aarch64-darwin.
3. huggingface-cli, needed for some commands such as `upload`.