caveman
why use many token when few do trick
Install • Benchmarks • Before/After • Intensity Levels • Compress • Why
A Claude Code skill/plugin and Codex plugin that makes agent talk like caveman, cutting ~65% of output tokens while keeping full technical accuracy. Plus a companion tool that compresses your memory files to cut ~45% of input tokens every session.
Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.
Before / After
| 🗣️ Normal Claude (69 tokens) | 🪨 Caveman Claude (19 tokens) |
|---|---|

| 🗣️ Normal Claude | 🪨 Caveman Claude |
|---|---|
Same fix. 75% less word. Brain still big.
Sometimes too much caveman. Sometimes not enough:
| 🪶 Lite | 🪨 Full | 🔥 Ultra |
|---|---|---|
Same answer. You pick how many word.
Benchmarks
Real token counts from the Claude API (reproduce it yourself):
| Task | Normal (tokens) | Caveman (tokens) | Saved |
|---|---|---|---|
| Explain React re-render bug | 1180 | 159 | 87% |
| Fix auth middleware token expiry | 704 | 121 | 83% |
| Set up PostgreSQL connection pool | 2347 | 380 | 84% |
| Explain git rebase vs merge | 702 | 292 | 58% |
| Refactor callback to async/await | 387 | 301 | 22% |
| Architecture: microservices vs monolith | 446 | 310 | 30% |
| Review PR for security issues | 678 | 398 | 41% |
| Docker multi-stage build | 1042 | 290 | 72% |
| Debug PostgreSQL race condition | 1200 | 232 | 81% |
| Implement React error boundary | 3454 | 456 | 87% |
| Average | 1214 | 294 | 65% |
Range: 22%–87% savings across prompts.
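Want check numbers yourself? Here is a minimal sketch using the Anthropic Python SDK: same prompt, with and without a caveman-style system prompt, comparing output token counts. The model id and the `CAVEMAN` instruction below are illustrative placeholders, not the skill's actual prompt:

```python
# Minimal reproduction sketch: run one prompt normally, then with a
# caveman-style system prompt, and compare output token counts.
# Model id and CAVEMAN text are placeholders, not the skill's real prompt.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CAVEMAN = "Talk like caveman. Drop articles and filler. Keep code and technical terms exact."
PROMPT = "Explain why my React component re-renders on every keystroke."

def output_tokens(system: str | None) -> int:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=2048,
        system=system if system else anthropic.NOT_GIVEN,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return msg.usage.output_tokens

normal = output_tokens(None)
caveman = output_tokens(CAVEMAN)
print(f"normal={normal}  caveman={caveman}  saved={1 - caveman / normal:.0%}")
```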
> [!IMPORTANT]
> Caveman only affects output tokens; thinking/reasoning tokens are untouched. Caveman no make brain smaller. Caveman make mouth smaller. Biggest win is readability and speed; cost savings are a bonus.
Science back caveman up
A March 2026 paper "Brevity Constraints Reverse Performance Hierarchies in Language Models" found that constraining large models to brief responses improved accuracy by 26 percentage points on certain benchmarks and completely reversed performance hierarchies. Verbose not always better. Sometimes less word = more correct.
Install
```bash
npx skills add JuliusBrussee/caveman
```
`npx skills` supports 40+ agents: Claude Code, GitHub Copilot, Cursor, Windsurf, Cline, and more. To install for a specific agent:
```bash
npx skills add JuliusBrussee/caveman -a cursor
npx skills add JuliusBrussee/caveman -a github-copilot
npx skills add JuliusBrussee/caveman -a cline
npx skills add JuliusBrussee/caveman -a windsurf
npx skills add JuliusBrussee/caveman -a codex
```
Or with Claude Code plugin system:
```bash
claude plugin marketplace add JuliusBrussee/caveman
claude plugin install caveman@caveman
```
Usage
Trigger with:
- `/caveman` (or `$caveman` in Codex)
- "caveman mode"
- "less tokens please"
Stop with: "stop caveman" or "normal mode"
Intensity Levels
Sometimes full caveman too much. Sometimes not enough. Now you pick:
| Level | Trigger | What it do |
|---|---|---|
| Lite | `/caveman lite` or `$caveman lite` | Drop filler, keep grammar. Professional but no fluff |
| Full | `/caveman full` or `$caveman full` | Default caveman. Drop articles, fragments, full grunt |
| Ultra | `/caveman ultra` or `$caveman ultra` | Maximum compression. Telegraphic. Abbreviate everything |
Level stick until you change it or session end.
What Caveman Do
| Thing | Caveman Do? |
|---|---|
| English explanation | 🪨 Caveman smash filler words |
| Code blocks | ✍️ Write normal (caveman not stupid) |
| Technical terms | 🧠 Keep exact (polymorphism stay polymorphism) |
| Error messages | 📋 Quote exact |
| Git commits & PRs | ✍️ Write normal |
| Articles (a, an, the) | 🚫 Gone |
| Pleasantries | 💀 "Sure I'd be happy to" is dead |
| Hedging | 💀 "It might be worth considering" extinct |
Why
```
┌────────────────────────────────────┐
│ TOKENS SAVED        ████████   65% │
│ TECHNICAL ACCURACY  ████████  100% │
│ SPEED INCREASE      ████████   ~3x │
│ VIBES               ████████   OOG │
└────────────────────────────────────┘
```
- Faster response: less token to generate = speed go brrr
- Easier to read: no wall of text, just the answer
- Same accuracy: all technical info kept, only fluff removed (science say so)
- Save money: ~65% less output token = less cost
- Fun: every code review become comedy
How It Work
Caveman not dumb. Caveman efficient.
Normal LLM waste token on:
- "I'd be happy to help you with that" (8 wasted tokens)
- "The reason this is happening is because" (7 wasted tokens)
- "I would recommend that you consider" (7 wasted tokens)
- "Sure, let me take a look at that for you" (10 wasted tokens)
Caveman say what need saying. Then stop.
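Caveman work at prompt level: skill tell model how to talk, nothing get post-processed. But for feel of what get smashed, here is a toy filter over those same filler phrases. The list is made up for this example, not the skill's actual rules:

```python
# Toy illustration only: caveman is a prompt-level skill, not a text filter.
# The phrase list is invented for this example, not the skill's actual rules.
import re

FILLER = [
    r"I'd be happy to help you with that\.?\s*",
    r"The reason this is happening is because\s*",
    r"I would recommend that you consider\s*",
    r"Sure, let me take a look at that for you\.?\s*",
]

def smash(text: str) -> str:
    for phrase in FILLER:
        text = re.sub(phrase, "", text, flags=re.IGNORECASE)
    return text.strip()

print(smash("I'd be happy to help you with that. The bug is a missing await."))
# -> "The bug is a missing await."
```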
Caveman Compress
Caveman makes Claude speak with fewer tokens. Caveman Compress makes Claude read fewer tokens.
Your CLAUDE.md loads on every session start. A 1000-token project memory file costs you tokens every single time you open a project. Caveman Compress rewrites those files into caveman-speak so Claude reads less, without you losing the human-readable original.
```
/caveman-compress CLAUDE.md
```

- `CLAUDE.md` → compressed (Claude reads this every session: fewer tokens)
- `CLAUDE.original.md` → human-readable backup (you read and edit this)
How it works
A Python pipeline that shells out to `claude --print` for the actual compression, then validates the result locally: no tokens wasted on checking.
```
detect file type (local) → compress with Claude (1 call) → validate (local)
                                     ↓
                if errors: targeted fix (1 call, cherry-pick only)
                                     ↓
                 retry up to 2×, restore original on failure
```
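Shape of that loop as a rough sketch. Function names, the prompt, and the validation check are simplified guesses at the design, not the real implementation, and the targeted-fix call is folded into a plain retry here:

```python
# Rough sketch of the compress loop; the real tool's internals differ.
# Assumes the `claude` CLI is on PATH; `--print` gives a non-interactive reply.
import re
import shutil
import subprocess
from pathlib import Path

def validate(original: str, compressed: str) -> bool:
    # Minimal local check: every fenced code block must survive verbatim.
    blocks = re.findall(r"```.*?```", original, flags=re.DOTALL)
    return all(block in compressed for block in blocks)

def compress(path: Path, retries: int = 2) -> bool:
    backup = path.with_name(path.stem + ".original" + path.suffix)
    shutil.copy(path, backup)  # human-readable original survives no matter what
    original = path.read_text()
    prompt = f"Rewrite prose in caveman-speak. Keep code, URLs, paths exact:\n\n{original}"
    for _ in range(retries + 1):
        result = subprocess.run(
            ["claude", "--print", prompt],
            capture_output=True, text=True,
        )
        compressed = result.stdout
        if validate(original, compressed):  # local check: zero extra tokens
            path.write_text(compressed)
            return True
    shutil.copy(backup, path)  # all retries failed: restore original
    return False
```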
What's preserved exactly
Code blocks, inline code, URLs, file paths, commands, headings, table structure, dates, version numbers: anything technical passes through untouched. Only natural language prose gets compressed.
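A fuller local validator might extract every technical span from the original and confirm each survives verbatim. The patterns below are rough guesses at what counts as "technical", not the tool's actual list:

```python
# Sketch of a stricter local check: pull technical spans from the original
# and confirm each one survives verbatim. Patterns are illustrative guesses.
import re

PATTERNS = [
    r"```.*?```",                  # fenced code blocks
    r"`[^`]+`",                    # inline code
    r"https?://\S+",               # URLs
    r"\bv?\d+\.\d+(?:\.\d+)?\b",   # version numbers
]

def missing_spans(original: str, compressed: str) -> list[str]:
    """Technical spans present in the original but lost in compression."""
    lost = []
    for pattern in PATTERNS:
        for span in re.findall(pattern, original, flags=re.DOTALL):
            if span not in compressed:
                lost.append(span)
    return lost
```

Anything in the returned list means compression dropped something technical, which is what triggers the targeted-fix call.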
Compress benchmarks
| File | Original (tokens) | Compressed (tokens) | Saved |
|---|---|---|---|
| `claude-md-preferences.md` | 706 | 285 | 59.6% |
| `project-notes.md` | 1145 | 535 | 53.3% |
| `claude-md-project.md` | 1122 | 687 | 38.8% |
| `todo-list.md` | 627 | 388 | 38.1% |
| `mixed-with-code.md` | 888 | 574 | 35.4% |
| Average | 898 | 494 | 45% |
Full-circle token savings
| Tool | What it cuts | Savings |
|---|---|---|
| caveman | Output tokens (Claude's responses) | ~65% |
| caveman-compress | Input tokens (memory files loaded per session) | ~45% |
| Both together | The whole conversation | Output + input both shrunk |
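Worked example with the averages above: one session loads a memory file (898 → 494 tokens read) and generates one typical answer (1214 → 294 tokens written). Whole conversation: 2112 tokens → 788 tokens, roughly 63% smaller.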
See the full caveman-compress README for install, usage, and validation details.
Star This Repo
If caveman save you mass token, mass money: leave mass star. ⭐
Also by Julius Brussee
- Blueprint: specification-driven development for Claude Code. Natural language → blueprints → parallel builds → working software.
- Revu: local-first macOS study app with FSRS spaced repetition, decks, exams, and study guides. revu.cards
License
MIT. Free like mass mammoth on open plain.