
tiny-llm - LLM Serving in a Week


A course on LLM serving using MLX for system engineers. The codebase is solely (almost!) based on MLX array/matrix APIs without any high-level neural network APIs, so that we can build the model serving infrastructure from scratch and dig into the optimizations.
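As a concrete example of that "raw array API" style, a layer such as RMSNorm (chapter 1.4 below) can be written directly with mx operations instead of an mlx.nn module. This is a minimal sketch with illustrative shapes and epsilon, not the course's reference solution:

```python
import mlx.core as mx

def rms_norm(x, weight, eps=1e-6):
    # Normalize by the root-mean-square over the last dimension,
    # then apply a learned per-channel scale -- no mlx.nn involved.
    variance = mx.mean(x * x, axis=-1, keepdims=True)
    return x * mx.rsqrt(variance + eps) * weight

hidden = mx.random.normal((2, 8, 16))   # (batch, seq_len, hidden_dim), toy values
weight = mx.ones((16,))                 # learned scale, initialized to ones
print(rms_norm(hidden, weight).shape)   # (2, 8, 16)
```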

The goal is to learn the techniques behind efficiently serving a large language model (e.g., Qwen2 models).

In week 1, you will implement the necessary components in Python (only Python!) to use the Qwen2 model to generate responses (e.g., attention, RoPE, etc.). In week 2, you will implement an inference system similar to, but much simpler than, vLLM (e.g., KV cache, continuous batching, flash attention, etc.). In week 3, we will cover more advanced topics and how the model interacts with the outside world.
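To give a feel for the week-1 exercises, here is a minimal sketch of scaled dot-product attention written with mlx.core; the single-head layout and the toy shapes are illustrative assumptions, not the course's reference solution:

```python
import math
import mlx.core as mx

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, head_dim) -- a single head for simplicity.
    scale = 1.0 / math.sqrt(q.shape[-1])
    scores = mx.matmul(q, mx.transpose(k, (0, 2, 1))) * scale  # (batch, seq, seq)
    if mask is not None:
        scores = scores + mask  # additive mask, e.g. -inf above the diagonal for causal decoding
    weights = mx.softmax(scores, axis=-1)
    return mx.matmul(weights, v)

# Toy usage with random tensors
q = k = v = mx.random.normal((1, 4, 64))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (1, 4, 64)
```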

Why MLX: nowadays it's easier to get a macOS-based local development environment than to set up an NVIDIA GPU.

Why Qwen2: this was the first LLM I interacted with -- it's the go-to example in the vLLM documentation. I spent some time looking at the vLLM source code and built some knowledge around it.

Book

The tiny-llm book is available at https://skyzh.github.io/tiny-llm/. You can follow the guide and start building.

Community

You may join skyzh's Discord server and study with the tiny-llm community.


Roadmap

Weeks 1 and 2 are complete. Week 3 is in progress.

| Week + Chapter | Topic | Status (Code / Test / Doc) |
|---|---|---|
| 1.1 | Attention | ✅ |
| 1.2 | RoPE | ✅ |
| 1.3 | Grouped Query Attention | ✅ |
| 1.4 | RMSNorm and MLP | ✅ |
| 1.5 | Load the Model | ✅ |
| 1.6 | Generate Responses (aka Decoding) | ✅ |
| 1.7 | Sampling | ✅ |
| 2.1 | Key-Value Cache | ✅ |
| 2.2 | Quantized Matmul and Linear - CPU | ✅ |
| 2.3 | Quantized Matmul and Linear - GPU | ✅ |
| 2.4 | Flash Attention 2 - CPU | ✅ |
| 2.5 | Flash Attention 2 - GPU | ✅ |
| 2.6 | Continuous Batching | ✅ |
| 2.7 | Chunked Prefill | ✅ |
| 3.1 | Paged Attention - Part 1 | 🚧 |
| 3.2 | Paged Attention - Part 2 | 🚧🚧🚧 |
| 3.3 | MoE (Mixture of Experts) | 🚧🚧🚧 |
| 3.4 | Speculative Decoding | 🚧🚧 |
| 3.5 | RAG Pipeline | 🚧🚧🚧 |
| 3.6 | AI Agent / Tool Calling | 🚧🚧🚧 |
| 3.7 | Long Context | 🚧🚧🚧 |

Other topics not covered: quantized/compressed KV cache, prefix/prompt cache; sampling, fine-tuning; smaller kernels (softmax, silu, etc.).

Star History

Star History Chart
