Find the local LLM that actually runs, and performs best, on your hardware. Ranked by real, recency-aware benchmarks, not parameter count. Run it instantly with one command.
Language: Python

Topics: ai, apple-silicon, benchmarks, cli, command-line-tool, gguf, gpu, huggingface, inference, llm, local-llm, ollama, python, vram
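
The description implies a hardware-fit check: a model is only worth ranking if it fits in local memory. Below is a minimal, illustrative Python sketch of that idea, estimating a quantized model's VRAM footprint from its parameter count and effective bits per weight. The `ModelCandidate` class, the model names, the quantization figures, and the 20% overhead factor are all assumptions for illustration, not the repository's actual code.

```python
# Illustrative sketch only: not the repository's implementation.
# Shows the general idea of filtering quantized models by available VRAM.

from dataclasses import dataclass


@dataclass
class ModelCandidate:
    name: str          # hypothetical model identifier
    params_b: float    # parameter count in billions
    quant_bits: float  # effective bits per weight (e.g. ~4.5 for Q4_K_M)


def estimated_vram_gb(model: ModelCandidate, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes plus ~20% for KV cache and buffers."""
    weight_bytes = model.params_b * 1e9 * model.quant_bits / 8
    return weight_bytes * overhead / 1e9


def models_that_fit(candidates: list[ModelCandidate],
                    vram_gb: float) -> list[ModelCandidate]:
    """Keep only models whose estimated footprint fits in the given VRAM."""
    return [m for m in candidates if estimated_vram_gb(m) <= vram_gb]


if __name__ == "__main__":
    # Hypothetical candidates; a real tool would rank the survivors
    # by benchmark scores rather than parameter count.
    catalog = [
        ModelCandidate("llama-3.1-8b-Q4_K_M", 8.0, 4.5),
        ModelCandidate("llama-3.1-70b-Q4_K_M", 70.0, 4.5),
    ]
    for m in models_that_fit(catalog, vram_gb=16.0):
        print(f"{m.name}: ~{estimated_vram_gb(m):.1f} GB")
```

A tool like the one described would combine a fit check of this kind with recency-aware benchmark scores to pick the best candidate that the hardware can actually run.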