Find the local LLM that actually runs and performs best on your hardware. Ranked by real, recency-aware benchmarks, not parameter count. One command, run it instantly.
Topics: ai, apple-silicon, benchmarks, cli, command-line-tool, gguf, gpu, huggingface, inference, llm, local-llm, ollama, python, vram