```swift
let result = try await Workflow()
    .step(researchAgent)
    .step(writerAgent)
    .run("Summarize the latest WWDC session on Swift concurrency.")
```
Two agents, one pipeline, compiled to a DAG with crash recovery and Swift concurrency safety.
## Install

```swift
.package(url: "https://github.com/christopherkarani/Swarm.git", from: "0.5.0")
```
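In a full `Package.swift`, that dependency plus a product reference looks roughly like this. This is a sketch: the product name `Swarm` and the `.v26` platform settings are assumptions, not copied from the package manifest.

```swift
// swift-tools-version: 6.2
// Minimal manifest sketch — product name "Swarm" and platform versions are assumed.
import PackageDescription

let package = Package(
    name: "MyAgentApp",
    platforms: [.iOS(.v26), .macOS(.v26)],
    dependencies: [
        .package(url: "https://github.com/christopherkarani/Swarm.git", from: "0.5.0")
    ],
    targets: [
        .executableTarget(
            name: "MyAgentApp",
            dependencies: [.product(name: "Swarm", package: "Swarm")]
        )
    ]
)
```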
## Quick Start

```swift
import Swarm

// The @Tool macro generates the JSON schema at compile time
@Tool("Looks up the current stock price")
struct PriceTool {
    @Parameter("Ticker symbol")
    var ticker: String

    func execute() async throws -> String { "182.50" }
}

// Create an agent: unlabeled instructions first, tools in the trailing @ToolBuilder closure
let agent = try Agent(
    "Answer finance questions using real data.",
    configuration: .init(name: "Analyst"),
    inferenceProvider: .anthropic(key: "sk-...")
) {
    PriceTool()
    CalculatorTool()
}

let result = try await agent.run("What is AAPL trading at?")
print(result.output) // "Apple (AAPL) is currently trading at $182.50."
```
That is a working agent with type-safe tool calling. The rest of this README covers workflows, memory, guardrails, and the surrounding runtime pieces.
## On-Device Workspace
Swarm now supports a file-backed on-device workspace with:
- `AGENTS.md` for workspace-wide instructions
- `.swarm/agents/<id>.md` for per-agent specs
- standard `.swarm/skills/<name>/SKILL.md` folders for reusable skills
- `.swarm/memory/` for durable writable notes
Code-first setup:
```swift
let workspace = try AgentWorkspace.appDefault()
let agent = try Agent.onDevice(
    "You are a concise local assistant.",
    workspace: workspace,
    inferenceProvider: .foundationModels
)
```
Markdown-first setup:
```swift
let workspace = try AgentWorkspace.appDefault()
let agent = try Agent.spec(
    "support",
    in: workspace,
    inferenceProvider: .foundationModels
)
```
Workspace layout:
```
AgentWorkspace/
  AGENTS.md
  .swarm/
    agents/
      support.md
    skills/
      refund-policy/
        SKILL.md
    memory/
      facts/
      decisions/
      tasks/
      lessons/
      handoffs/
```
Use `try await workspace.validate()` in development or CI to catch malformed specs and skills before runtime.
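A minimal sketch of wiring that check into a test target, using the `AgentWorkspace` API shown above. The Swift Testing framework and the test name are my choices here, not part of Swarm:

```swift
import Testing
import Swarm

@Test func workspaceSpecsAreWellFormed() async throws {
    // Fails the suite if any agent spec or SKILL.md in the workspace is malformed
    let workspace = try AgentWorkspace.appDefault()
    try await workspace.validate()
}
```

Running this under `swift test` in CI surfaces broken workspace files before they reach a device.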
## Why Swarm

- Swift concurrency is part of the surface. Swift 6.2 `StrictConcurrency` is enabled across the package.
- Tools stay type-safe. The `@Tool` macro generates JSON schemas from Swift structs.
- Workflows can survive crashes. Durable workflow checkpointing lets you resume from an explicit checkpoint ID.
- Cloud and on-device models use the same abstractions. Foundation Models, Anthropic, OpenAI, Ollama, Gemini, OpenRouter, and MLX all fit the same shape.
- It is written in Swift all the way down. `AsyncThrowingStream`, actors, result builders, and macros are first-class here.
## Examples

### Capability matrix showcase
Swarm now ships with an in-repo capability showcase that exercises the stable surface area in one deterministic matrix:
- agents and tools
- streaming
- conversation plus session persistence
- sequential, parallel, routed, and repeat-until workflows
- handoffs
- memory
- on-device workspace loading
- guardrails
- resilience helpers
- durable checkpoint and resume
- observability
- MCP discovery and tool bridging
- provider selection
Run it locally:
```bash
swift run SwarmCapabilityShowcase list
swift run SwarmCapabilityShowcase matrix
swift run SwarmCapabilityShowcase run handoff
swift run SwarmCapabilityShowcase smoke
```
The deterministic matrix is CI-safe. Live-provider smoke coverage is opt-in through environment variables. See `docs/guide/capability-showcase.md` for the scenario catalog and smoke-mode details.
### Multi-agent pipeline

```swift
let researcher = try Agent(
    "Research the topic and extract key facts.",
    inferenceProvider: .anthropic(key: "sk-...")
) {
    WebSearchTool()
}

let writer = try Agent(
    "Write a concise summary from the research.",
    inferenceProvider: .anthropic(key: "sk-...")
)

let result = try await Workflow()
    .step(researcher)
    .step(writer)
    .run("Latest advances in on-device ML")
```
### Parallel fan-out

```swift
let result = try await Workflow()
    .parallel([bullAgent, bearAgent, analystAgent], merge: .structured)
    .run("Evaluate Apple's Q4 earnings.")
// Three perspectives, merged into one output.
```
### Dynamic routing

```swift
let result = try await Workflow()
    .route { input in
        if input.contains("$") { return mathAgent }
        if input.contains("weather") { return weatherAgent }
        return generalAgent
    }
    .run("What is 15% of $240?")
```
### Streaming

```swift
for try await event in agent.stream("Summarize the changelog.") {
    switch event {
    case .output(.token(let t)):
        print(t, terminator: "")
    case .tool(.completed(let call, _)):
        print("\n[tool: \(call.toolName)]")
    case .lifecycle(.completed(let r)):
        print("\nDone in \(r.duration)")
    default:
        break
    }
}
```
### More examples
#### Semantic memory

```swift
let agent = try Agent(
    "You remember past conversations.",
    inferenceProvider: .anthropic(key: "sk-..."),
    memory: .vector(embeddingProvider: myEmbedder, threshold: 0.75)
) {
    // tools
}
```
#### Guardrails

```swift
let agent = try Agent(
    "You are a helpful assistant.",
    inputGuardrails: [GuardrailSpec.maxInput(5000), GuardrailSpec.inputNotEmpty],
    outputGuardrails: [GuardrailSpec.maxOutput(2000)]
)
```
#### Closure tools

```swift
let reverse = FunctionTool(
    name: "reverse",
    description: "Reverses a string",
    parameters: [ToolParameter(name: "text", description: "Text to reverse", type: .string, isRequired: true)]
) { args in
    let text = try args.require("text", as: String.self)
    return .string(String(text.reversed()))
}

let agent = try Agent("Text utilities.", tools: [reverse])
```
#### Crash-resumable workflows

```swift
let workflow = Workflow()
    .step(monitor)
    .durable.checkpoint(id: "monitor-v1", policy: .everyStep)
    .durable.checkpointing(.fileSystem(directory: checkpointsURL))

let resumed = try await workflow.durable.execute("watch", resumeFrom: "monitor-v1")
```
#### Provider switching

```swift
// On-device, private, no API key needed
let local = try Agent("Be helpful.", inferenceProvider: .foundationModels)

// Cloud
let cloud = try Agent("Be helpful.", inferenceProvider: .anthropic(key: k))

// Or swap at runtime via the environment
let modified = agent.environment(\.inferenceProvider, .ollama(model: "mistral"))
```
#### Conversation

```swift
let conversation = Conversation(with: agent)
let response1 = try await conversation.send("What's the weather?")
let response2 = try await conversation.send("And tomorrow?") // Context preserved

for message in await conversation.messages {
    print("\(message.role): \(message.text)")
}
```
## How Swarm Compares

| | Swarm | LangChain | AutoGen |
|---|---|---|---|
| Language | Swift 6.2 | Python | Python |
| Data race safety | Compile-time | Runtime | Runtime |
| On-device LLM | Foundation Models | n/a | n/a |
| Execution engine | Compiled DAG | Loop-based | Loop-based |
| Crash recovery | Checkpoints | n/a | Partial |
| Type-safe tools | `@Tool` macro (compile-time) | Decorators (runtime) | Runtime |
| Streaming | `AsyncThrowingStream` | Callbacks | Callbacks |
| iOS / macOS native | First-class | n/a | n/a |
## What's Included

| Area | Highlights |
|---|---|
| Agents | `Agent` struct with `@ToolBuilder` trailing closure, `AgentRuntime` protocol |
| Workflows | `Workflow`: `.step()`, `.parallel()`, `.route()`, `.repeatUntil()`, `.timeout()` |
| Tools | `@Tool` macro, `FunctionTool`, `@ToolBuilder`, parallel execution |
| Memory | `MemoryOption.conversation(limit:)`, `MemoryOption.vector(embeddingProvider:)`, `MemoryOption.slidingWindow(count:)`, `MemoryOption.summary(summarizer:)` |
| Guardrails | `GuardrailSpec.maxInput()`, `GuardrailSpec.maxOutput()`, `GuardrailSpec.inputNotEmpty`, `GuardrailSpec.outputNotEmpty`, `GuardrailSpec.customInput()`, `GuardrailSpec.customOutput()` |
| Conversation | `Conversation` actor for stateful multi-turn dialogue |
| Resilience | 7 backoff strategies, circuit breaker, fallback chains, rate limiting |
| Observability | `AgentObserver`, `Tracer`, `SwiftLogTracer`, per-agent token metrics |
| MCP | Model Context Protocol client and server support |
| Providers | Foundation Models, Anthropic, OpenAI, Ollama, Gemini, OpenRouter, MLX via Conduit |
| Macros | `@Tool`, `@Parameter`, `@Traceable`, `#Prompt` |
## Architecture

```
┌──────────────────────────────────────────────────────────────┐
│                       Your Application                       │
│          iOS 26+ · macOS 26+ · Linux (Ubuntu 22.04+)         │
├──────────────────────────────────────────────────────────────┤
│         Workflow · Conversation · .run() · .stream()         │
├──────────────────────────────────────────────────────────────┤
│   Agents              Memory              Tools              │
│   Agent (struct)      MemoryOption        @Tool macro        │
│   AgentRuntime        Conversation        FunctionTool       │
│                       (dot-syntax)        @ToolBuilder       │
├──────────────────────────────────────────────────────────────┤
│       GuardrailSpec · Resilience · Observability · MCP       │
├──────────────────────────────────────────────────────────────┤
│               Durable Graph Runtime (internal)               │
│      Compiled DAG · Checkpointing · Deterministic retry      │
├──────────────────────────────────────────────────────────────┤
│                InferenceProvider (pluggable)                 │
│     Foundation Models · Anthropic · OpenAI · Ollama · MLX    │
└──────────────────────────────────────────────────────────────┘
```
## Requirements
| Platform | Minimum |
|---|---|
| Swift | 6.2+ |
| iOS | 26.0+ |
| macOS | 26.0+ |
| tvOS | 26.0+ |
| Linux | Ubuntu 22.04+ with Swift 6.2 |
Foundation Models requires iOS 26 / macOS 26. Cloud providers work on any Swift 6.2 platform, including Linux.
## Documentation

| Guide | Covers |
|---|---|
| Getting Started | Installation, first agent, workflows |
| API Reference | Every type, protocol, and API |
| Front-Facing API | Public API surface |
| Why Swarm? | Design philosophy and architecture |
## Contributing

- Fork → branch → `swift test` → PR
- All public types must be `Sendable`; the compiler enforces it
- Format with `swift package plugin --allow-writing-to-package-directory swiftformat`
Bug reports and feature requests: GitHub Issues
## Community

GitHub Issues · Discussions · @ckarani7
If Swarm saves you time, a star helps others find it.
## License
Released under the MIT License.