
Highly Performant, Modular and Memory Safe
Ingestion, Inference and Indexing in Rust 🦀
Python docs »
Rust docs »
Benchmarks · FAQ · Adapters · Collaborations · Notebooks

EmbedAnything is a minimalist yet highly performant, modular, lightning-fast, lightweight, multisource, multimodal, local embedding pipeline built in Rust. Whether you're working with text, images, audio, PDFs, websites, or other media, EmbedAnything streamlines the process of generating embeddings from various sources and streaming them to a vector database with memory-efficient indexing. It supports dense, sparse, ONNX, model2vec, and late-interaction embeddings, offering flexibility for a wide range of use cases.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. How to add custom model and chunk size

🚀 Key Features

  • No Dependency on PyTorch: easy to deploy on the cloud, with a low memory footprint.
  • Highly Modular: choose any vector DB adapter for RAG with a single line of code.
  • Backend: supports Candle, ONNX, and cloud models.
  • MultiModality: works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV).
  • GPU Support: hardware acceleration on GPU as well.
  • Chunking: in-built chunking methods such as semantic and late-chunking.
  • Vector Streaming: file processing, inference, and indexing run on separate threads, reducing latency.
  • AWS S3 Bucket: directly import files from an AWS S3 bucket.
  • Prebuilt Docker Image: just pull it: starlightsearch/embedanything-server
  • SearchAgent: an example of how to use the index for Searchr1 reasoning.

💡 What is Vector Streaming?

Embedding models are computationally expensive and time-consuming. By separating document preprocessing from model inference, you can significantly reduce pipeline latency and improve throughput.

Vector streaming transforms a sequential bottleneck into an efficient, concurrent workflow.

The embedding process runs separately from the main process, using Rust MPSC channels to keep throughput high, and embeddings are saved directly to the vector database so they never accumulate in memory. Read more in our blog.
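
Conceptually, this is a bounded producer/consumer pipeline. The sketch below illustrates the pattern in plain Python with toy stand-ins for parsing, embedding, and indexing; it is not the EmbedAnything API (the library implements this internally with Rust MPSC channels):

import queue
import threading

# Toy stand-ins for the real parsing, embedding, and indexing steps.
def parse_and_chunk(path):
    return [f"{path}:chunk-{i}" for i in range(3)]

def embed(chunk):
    return [float(len(chunk))]  # pretend embedding

class ToyVectorDB:
    def upsert(self, vector):
        print("indexed", vector)

def preprocess(paths, chan):
    """Producer: parse and chunk documents, pushing chunks into a bounded channel."""
    for path in paths:
        for chunk in parse_and_chunk(path):
            chan.put(chunk)  # blocks when the buffer is full
    chan.put(None)           # end-of-stream marker

def embed_and_index(chan, db):
    """Consumer: embed chunks as they arrive and stream them straight to the DB."""
    while (chunk := chan.get()) is not None:
        db.upsert(embed(chunk))

chan = queue.Queue(maxsize=64)  # bounded buffer keeps memory usage flat
threading.Thread(target=preprocess, args=(["a.pdf", "b.pdf"], chan), daemon=True).start()
embed_and_index(chan, ToyVectorDB())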

EmbedAnything X Weaviate

🦀 Why Embed Anything

➡️ Faster execution.
➡️ No PyTorch dependency, hence a low memory footprint and easy cloud deployment.
➡️ True multithreading.
➡️ Runs embedding models locally and efficiently.
➡️ In-built chunking methods such as semantic and late-chunking.
➡️ Supports a range of models: dense, sparse, late-interaction, reranker, ModernBERT.
➡️ Memory management: Rust enforces memory safety, preventing the memory leaks and crashes that can plague other languages.

⚠️ WhichModel has been deprecated in from_pretrained_hf

🍓 Our Past Collaborations:

We have collaborated with reputed enterprises such as Elastic, Weaviate, SingleStore, and Milvus, and with Analytics Vidhya DataHour.

You can get in touch with us for further collaborations.

Benchmarks

Inference speed benchmarks.

These benchmarks measure only embedding-model inference speed, on the ONNX runtime. Code

Benchmarks against other frameworks are coming soon! 🚀

⭐ Supported Models

We support any Hugging Face model on Candle, and we also support the ONNX runtime for BERT and ColPali.

How to add custom model on candle: from_pretrained_hf

⚠️ WhichModel has been deprecated in from_pretrained_hf

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig

# Load a custom BERT model from Hugging Face
model = EmbeddingModel.from_pretrained_hf(
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,               # Maximum characters per chunk
    batch_size=32,                 # Number of chunks to process in parallel
    splitting_strategy="sentence"  # How to split text: "sentence", "word", or "semantic"
)

# Embed a file (supports PDF, TXT, MD, etc.)
data = embed_anything.embed_file("path/to/your/file.pdf", embedder=model, config=config)

# Access the embeddings and text
for item in data:
    print(f"Text: {item.text[:100]}...")  # First 100 characters
    print(f"Embedding shape: {len(item.embedding)}")
    print(f"Metadata: {item.metadata}")
    print("---" * 20)

Model           | HF link
Jina            | Jina Models
Bert            | All BERT-based models
CLIP            | openai/clip-*
Whisper         | OpenAI Whisper models
ColPali         | starlight-ai/colpali-v1.2-merged-onnx
Colbert         | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2 and more
Splade          | Splade models and other Splade-like models
Model2Vec       | model2vec, minishlab/potion-base-8M
Qwen3-Embedding | Qwen/Qwen3-Embedding-0.6B
Reranker        | Jina reranker models, Xenova/bge-reranker, Qwen/Qwen3-Reranker-4B

Splade Models (Sparse Embeddings)

Sparse embeddings are useful for keyword-based retrieval and hybrid search scenarios.

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig

# Load a SPLADE model for sparse embeddings
model = EmbeddingModel.from_pretrained_hf(
    model_id="prithivida/Splade_PP_en_v1"
)

# Configure the embedding process
config = TextEmbedConfig(chunk_size=1000, batch_size=32)

# Embed text files
data = embed_anything.embed_file("test_files/document.txt", embedder=model, config=config)

# Sparse embeddings are useful for hybrid search (combining dense and sparse)
for item in data:
    print(f"Text: {item.text}")
    print(f"Sparse embedding (non-zero values): {sum(1 for x in item.embedding if x != 0)}")

ONNX-Runtime: from_pretrained_onnx

ONNX models provide faster inference and lower memory usage. Use the ONNXModel enum for pre-configured models or provide a custom model path.

BERT Models

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, ONNXModel, Dtype, TextEmbedConfig

# Use a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert,
    model_id="onnx_model_link",
    dtype=Dtype.F16  # Use half precision for faster inference
)
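
For the pre-configured route mentioned above, the ONNXModel enum is passed instead of a raw model id. A brief sketch, assuming the model_name parameter and the enum member shown here; check the ONNX Models Guide for the exact member names:

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, ONNXModel

# Use a pre-configured, tested ONNX model via the ONNXModel enum
# (the member name below is an example; see the ONNX Models Guide for the full list)
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Bert,
    model_name=ONNXModel.AllMiniLML6V2Q
)

data = embed_anything.embed_file("test_files/document.pdf", embedder=model)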

Cloud Embedding Models (Cohere Embed v4)

Use cloud models for high-quality embeddings without local model deployment.

import embed_anything
from embed_anything import EmbeddingModel, WhichModel
import os

# Set your API key
os.environ["COHERE_API_KEY"] = "your-api-key-here"

# Initialize the cloud model
model = EmbeddingModel.from_pretrained_cloud(
    WhichModel.CohereVision,
    model_id="embed-v4.0"
)

# Use it like any other model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)

For Semantic Chunking

Semantic chunking preserves meaning by splitting text at semantically meaningful boundaries rather than fixed sizes.

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig

# Main embedding model for generating final embeddings
model = EmbeddingModel.from_pretrained_hf(
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Semantic encoder for determining chunk boundaries
# This model analyzes text to find natural semantic breaks
semantic_encoder = EmbeddingModel.from_pretrained_hf(
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure semantic chunking
config = TextEmbedConfig(
    chunk_size=1000,                   # Target chunk size
    batch_size=32,                     # Batch processing size
    splitting_strategy="semantic",     # Use semantic splitting
    semantic_encoder=semantic_encoder  # Model for semantic analysis
)

# Embed with semantic chunking
data = embed_anything.embed_file("test_files/document.pdf", embedder=model, config=config)

# Chunks will be split at semantically meaningful boundaries
for item in data:
    print(f"Chunk: {item.text[:200]}...")
    print("---" * 20)

For Late-Chunking

Late-chunking first splits the text into smaller units (for example, sentences), embeds neighbouring units together, and only then pools each unit's embedding, so every chunk retains context from the surrounding text.

import embed_anything
from embed_anything import EmbeddingModel, TextEmbedConfig, EmbedData

# Load your embedding model
model = EmbeddingModel.from_pretrained_hf(
    model_id="sentence-transformers/all-MiniLM-L12-v2"
)

# Configure late-chunking
config = TextEmbedConfig(
    chunk_size=1000,                # Maximum chunk size
    batch_size=8,                   # Batch size for processing
    splitting_strategy="sentence",  # Split by sentences first
    late_chunking=True,             # Enable late-chunking
)

# Embed a file with late-chunking
data: list[EmbedData] = model.embed_file("test_files/attention.pdf", config=config)

# Late-chunking helps preserve context across sentence boundaries
for item in data:
    print(f"Text: {item.text}")
    print(f"Embedding dimension: {len(item.embedding)}")
    print("---" * 20)

🧑‍🚀 Getting Started

💚 Installation

pip install embed-anything

For GPUs and special models such as ColPali:

pip install embed-anything-gpu

🚧❌ If you get a CUDA error while running on Windows, run the following command:

import os
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin")

📒 Notebooks

End-to-End Retrieval and Reranking using VectorDB Adapters
ColPali-Onnx
Adapters
Qwen3 Embeddings
Benchmarks

Advanced Usage with Configuration

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, TextEmbedConfig

# Load model
model = EmbeddingModel.from_pretrained_hf(
    model_id="jinaai/jina-embeddings-v2-small-en"
)

# Configure embedding parameters
config = TextEmbedConfig(
    chunk_size=1000,               # Characters per chunk
    batch_size=32,                 # Process 32 chunks at once
    buffer_size=64,                # Buffer size for streaming
    splitting_strategy="sentence"  # Split by sentences
)

# Embed with custom configuration
data = embed_anything.embed_file(
    "test_files/document.pdf",
    embedder=model,
    config=config
)

# Process embeddings
for item in data:
    print(f"Chunk: {item.text}")
    print(f"Metadata: {item.metadata}")

Embedding Queries

# Embed queries
queries = ["What is machine learning?", "How do neural networks work?"]
query_embeddings = embed_anything.embed_query(queries, embedder=model)

# Use embeddings for similarity search
for i, query_emb in enumerate(query_embeddings):
    print(f"Query: {queries[i]}")
    print(f"Embedding shape: {len(query_emb.embedding)}")
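
To rank chunks against a query, you can compute cosine similarity between the query embedding and the chunk embeddings returned by embed_file. A minimal brute-force sketch with NumPy, assuming the data and query_embeddings variables from the snippets above and dense embeddings:

import numpy as np

# Stack the chunk embeddings returned by embed_file (see the examples above)
doc_vectors = np.array([item.embedding for item in data])
query_vector = np.array(query_embeddings[0].embedding)

# Cosine similarity between the query and every chunk
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)

# Print the top 3 matching chunks
for idx in np.argsort(scores)[::-1][:3]:
    print(f"{scores[idx]:.3f}  {data[idx].text[:80]}")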

Embedding Directories

# Embed all files in a directory
data = embed_anything.embed_directory(
    "test_files/",
    embedder=model,
    config=config
)

print(f"Total chunks: {len(data)}")

Using Custom ONNX Models

For custom or fine-tuned models, specify the Hugging Face model ID and path to the ONNX file:

import embed_anything
from embed_anything import EmbeddingModel, WhichModel, Dtype

# Load a custom ONNX model from Hugging Face
model = EmbeddingModel.from_pretrained_onnx(
    WhichModel.Jina,
    hf_model_id="jinaai/jina-embeddings-v2-small-en",
    path_in_repo="model.onnx",  # Path to ONNX file in the repo
    dtype=Dtype.F16             # Use half precision
)

# Use the model
data = embed_anything.embed_file("test_files/document.pdf", embedder=model)

Note: Using pre-configured models (via ONNXModel enum) is recommended as these models are tested and optimized. For a complete list of supported ONNX models, see ONNX Models Guide.

⁉️FAQ

Do I need to know Rust to use or contribute to EmbedAnything?

No. EmbedAnything ships with PyO3 bindings, so you can call every function from Python without any issues. To contribute, check out our guidelines and the adapter examples in the python folder.

How is it different from fastembed?

We provide both backends, Candle and ONNX. On top of that, we offer an end-to-end pipeline: you can ingest different data types, run inference with any supported model, and index the results to any vector database. Fastembed is just an ONNX wrapper.

We've received quite a few questions about why we're using Candle.

One of the main reasons is that Candle doesn't require any specific ONNX format models, which means it can work seamlessly with any Hugging Face model. This flexibility has been a key factor for us. However, we also recognize that we’ve been compromising a bit on speed in favor of that flexibility.

🚧 Contributing to EmbedAnything

First of all, thank you for taking the time to contribute to this project. We truly appreciate your contributions, whether it's bug reports, feature suggestions, or pull requests. Your time and effort are highly valued in this project. 🚀

This document provides guidelines and best practices to help you contribute effectively. These are meant to serve as guidelines, not strict rules. We encourage you to use your best judgment and feel comfortable proposing changes to this document through a pull request.

  • Roadmap
  • Quick Start
  • Guidelines
  • 🏎️ RoadMap

    Accomplishments

    One of the aims of EmbedAnything is to allow AI engineers to easily use state-of-the-art embedding models on typical files and documents. A lot has already been accomplished here: the formats below are supported today, and a few more are on the way.

    🖼️ Modalities and Source

    We’re excited to share that we've expanded our platform to support multiple modalities, including:

    • Audio files

    • Markdowns

    • Websites

    • Images

    • Videos (frame sampling; enable the video feature)

    • Graph

    This gives you the flexibility to work with various data types all in one place! 🌐

    ⚙️ Performance

    We now support both the Candle and ONNX backends.
    ➡️ Support for GGUF models

    🫐 Embeddings:

    We have had multimodality in our infrastructure from day one. We have already included it for websites, images, and audio, but we want to expand it further:

    ➡️ Graph embedding -- build DeepWalk embeddings (depth-first walks plus word2vec)
    ➡️ Video embedding improvements (temporal + audio)
    ➡️ YOLO + CLIP

    🌊 Expansion to other Vector Adapters

    We currently support a wide range of vector databases for streaming embeddings, including:

    • Elastic: thanks to the amazing and active Elastic team for the contribution
    • Weaviate
    • Pinecone
    • Qdrant
    • Milvus
    • Chroma

    How to add an adapter: https://starlight-search.com/blog/2024/02/25/adapter-development-guide.md
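
    At a high level, an adapter is a small class that receives embedded chunks and writes them to your database. The skeleton below is purely illustrative and duck-typed; the class name, method names, and payload shape are assumptions, and the real interface is defined in the guide linked above and in the adapter examples:

    from typing import Any, List

    class MyVectorDBAdapter:
        """Illustrative skeleton only: shows what an adapter is responsible for,
        not the actual EmbedAnything interface (see the guide linked above)."""

        def __init__(self, client: Any, index_name: str):
            self.client = client          # your vector DB client
            self.index_name = index_name  # target collection/index

        def create_index(self) -> None:
            # Create the collection/index if it does not already exist.
            ...

        def upsert(self, data: List[Any]) -> None:
            # Convert embedded chunks to the DB's point format and write them.
            points = [
                {"vector": item.embedding, "text": item.text, "metadata": item.metadata}
                for item in data
            ]
            self.client.upsert(self.index_name, points)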

    💥 Create WASM demos to integrate EmbedAnything directly into the browser.

    💜 Add support for ingestion from remote sources

    ➡️ Support for S3 bucket
    ➡️ Support for Azure storage
    ➡️ Support for Google Drive / Dropbox

    But we're not stopping there! We're actively working to expand this list.

    Want to contribute? If you'd like to add support for your favorite vector database, we'd love to have your help! Check out our contribution.md for guidelines, or feel free to reach out directly at sonam@starlight-search.com. Let's build something amazing together! 💡

    AWESOME Projects built on EmbedAnything.

    1. A Rust-based cursor like chat with your codebase tool: https://github.com/timpratim/cargo-chat
    2. A simple vector-based search engine, also supports ordinary text search : https://github.com/szuwgh/vectorbase2
    3. Semantic file tracker in CLI operated through daemon built with rust.: https://github.com/sam-salehi/sophist
    4. FogX-Store is a dataset store service that collects and serves large robotics datasets : https://github.com/J-HowHuang/FogX-Store
    5. A Dart Wrapper for EmbedAnything Crate: https://github.com/cotw-fabier/embedanythingindart
    6. Generate embeddings in Rust with tauri on MacOS : https://github.com/do-me/tauri-embedanything-ios
    7. RAG with EmbedAnything and Milvus: https://milvus.io/docs/v2.5.x/build_RAG_with_milvus_and_embedAnything.md

    A big thank you to all our stargazers!

    Star History

    Star History Chart
