
AlphaGenome PyTorch


A PyTorch port of AlphaGenome, the DNA sequence model from Google DeepMind that predicts hundreds of genomic tracks at single base-pair resolution from sequences up to 1M bp.

We strive to make this an accessible, readable, and hackable implementation — one you can integrate into existing PyTorch pipelines, fine-tune on custom datasets, and build on top of.

Installation

Installation from PyPI:

```shell
pip install alphagenome-pytorch
```

Installation from repo:

```shell
pip install git+https://github.com/genomicsxai/alphagenome-pytorch
```

For fine-tuning (incl. BigWig data loading):

```shell
pip install alphagenome-pytorch[finetuning]  # adds pyBigWig, pyfaidx
```

Quick Start

```python
import torch
import numpy as np
from alphagenome_pytorch import AlphaGenome

# Load pretrained model
model = AlphaGenome.from_pretrained('alphagenome.pt', device='cuda')

# Create one-hot encoded DNA sequence in NLC format (batch=1, length=131072, channels=4)
# Channels: A=0, C=1, G=2, T=3
sequence = np.random.randint(0, 4, size=(1, 131072))
dna_onehot = torch.tensor(np.eye(4)[sequence], dtype=torch.float32).cuda()

# Inference (handles dtype casting, returns float32 outputs)
outputs = model.predict(dna_onehot, organism_index=0)  # organism: 0=human, 1=mouse
```
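The quick start above uses a random sequence. To run the model on real DNA, the same one-hot layout can be built from a nucleotide string; here is a minimal sketch in NumPy, where `BASE_TO_INDEX` and `one_hot_encode` are hypothetical helpers, not part of the package API:

```python
import numpy as np

# Map bases to the channel order the model expects: A=0, C=1, G=2, T=3
BASE_TO_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot_encode(seq: str) -> np.ndarray:
    """Return a (1, len(seq), 4) float32 one-hot array in NLC format."""
    indices = np.array([BASE_TO_INDEX[base] for base in seq.upper()])
    return np.eye(4, dtype=np.float32)[indices][None, ...]

onehot = one_hot_encode('ACGTAC')
print(onehot.shape)  # (1, 6, 4)
```

The resulting array can be converted with `torch.from_numpy(...)` and passed to `model.predict` as shown above.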

The weights for this port are available on Hugging Face.

Output structure

Each genomic-track head returns a dict mapping resolution → tensor:

```python
outputs['atac'][1]            # (1, 131072, 256)  ATAC-seq at 1 bp
outputs['atac'][128]          # (1, 1024, 256)    ATAC-seq at 128 bp
outputs['dnase'][1]           # (1, 131072, 384)  DNase at 1 bp
outputs['cage'][128]          # (1, 1024, 640)    CAGE at 128 bp
outputs['chip_histone'][128]  # (1, 1024, 1152)   ChIP-histone at 128 bp only
```

Contact maps are returned as a single tensor (no resolution dict):

```python
outputs['contact_maps']  # (1, 64, 64, 28) 3D chromatin contacts
```

Splice heads return dicts of tensors:

```python
outputs['splice_sites']['probs']  # (1, 131072, 5) splice site classes
```
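A per-position splice-site call can be recovered by taking the argmax over the five classes (D+, A+, D−, A−, none, per the Model Outputs table below). A minimal sketch in NumPy, where the dummy array stands in for `outputs['splice_sites']['probs']` and the class order is an assumption taken from that table:

```python
import numpy as np

CLASSES = ['D+', 'A+', 'D-', 'A-', 'none']

# Dummy (B, L, 5) probabilities standing in for the model output
probs = np.zeros((1, 3, 5), dtype=np.float32)
probs[0, 0, 4] = 0.9  # position 0: no splice site
probs[0, 1, 0] = 0.8  # position 1: donor site on + strand
probs[0, 2, 1] = 0.7  # position 2: acceptor site on + strand

# Argmax over the class axis gives one call per base pair
calls = [CLASSES[i] for i in probs.argmax(axis=-1)[0]]
print(calls)  # ['none', 'D+', 'A+']
```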

Padding

Track dimensions are padded (e.g. ATAC has 167 real human tracks but the tensor has 256 channels). Real tracks come first; the rest are zeros. Use named_outputs=True to auto-strip padding:

```python
from alphagenome_pytorch.named_outputs import NamedOutputs, TrackMetadataCatalog

catalog = TrackMetadataCatalog.load_builtin(organism=0)
model.set_track_metadata_catalog(catalog)

named = model.predict(dna_onehot, organism_index=0, named_outputs=True)
named.atac[1].shape                  # (1, 131072, 167) — padding removed
named.atac[1].tracks[-1].track_name  # 'UBERON:0015143 ATAC-seq'

# Filter by metadata
named.rna_seq[128].select(strand='+')
named.chip_tf[128].select(transcription_factor='CTCF')
named.atac[1].select(biosample_type='tissue', ontology_curie='UBERON:0015143')
```

Extracting Embeddings

Use model.encode() to get embeddings without running prediction heads — useful for building custom heads or analyzing representations:

```python
# Get embeddings (128 bp only for efficiency)
emb = model.encode(dna_onehot, organism_index=0, resolutions=(128,))
emb['embeddings_128bp']  # (B, 1024, 3072) at 128 bp
```
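A custom head on top of these frozen embeddings can be as simple as one linear map from the 3072-dim embedding to the desired number of output tracks. A minimal NumPy sketch of that linear-probe idea, with a random array standing in for `emb['embeddings_128bp']`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for emb['embeddings_128bp']: (B, 1024 bins, 3072 dims)
emb = rng.standard_normal((1, 1024, 3072)).astype(np.float32)

# Linear probe: one weight matrix and bias, applied per 128 bp bin
num_tracks = 1
W = rng.standard_normal((3072, num_tracks)).astype(np.float32) * 0.01
b = np.zeros(num_tracks, dtype=np.float32)

pred = emb @ W + b  # (1, 1024, 1) per-bin track prediction
print(pred.shape)
```

In practice this would be an `nn.Linear(3072, num_tracks)` trained with the trunk frozen, which is what the fine-tuning section below automates.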

Fine-tuning

Train a new head on your data with frozen trunk (linear probing) or with LoRA adapters:

```python
from alphagenome_pytorch import AlphaGenome, TransferConfig, load_trunk, prepare_for_transfer

# Load trunk, freeze, add custom heads
model = AlphaGenome()
model = load_trunk(model, 'alphagenome.pt')
model = prepare_for_transfer(model, TransferConfig(
    mode='lora',
    new_heads={'atac': {'modality': 'atac', 'num_tracks': 1}},
    lora_rank=8,
))
```

The easiest way to start fine-tuning is scripts/finetune.py, which provides a flexible CLI:

```shell
# LoRA fine-tuning
python scripts/finetune.py --mode lora --lora-rank 8 \
    --genome hg38.fa --modality atac --bigwig *.bw \
    --train-bed train.bed --val-bed val.bed \
    --pretrained-weights alphagenome.pt

# Multi-GPU
torchrun --nproc_per_node=4 scripts/finetune.py --mode lora ...
```

See examples/notebooks/finetune_linear_probe.ipynb for an example of linear probing on ATAC-seq data.

Numerical Parity with JAX

This port is validated against the original JAX model, including per-head and full forward pass output comparisons as well as loss values and gradients.
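The kind of tolerance check used for such output comparisons can be sketched with `np.allclose`; the arrays below are placeholders for a head's float32 outputs from the JAX reference and from this port, and the specific tolerances are illustrative, not the ones used in validation:

```python
import numpy as np

# Placeholder outputs: JAX reference vs. ported model
reference = np.array([0.1234567, 2.5, -3.75], dtype=np.float32)
ported = reference + np.float32(1e-6)  # tiny float32 discrepancy

# Elementwise comparison with relative and absolute tolerances
close = np.allclose(ported, reference, rtol=1e-4, atol=1e-5)
max_abs_err = np.max(np.abs(ported - reference))
print(close, max_abs_err)
```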

See ARCHITECTURE_COMPARISON.md for technical details.

Model Outputs

| Head | Tracks (human) | Dimension (padded) | Resolutions | Description |
|---|---|---|---|---|
| atac | 167 | 256 | 1bp, 128bp | Chromatin accessibility |
| dnase | 305 | 384 | 1bp, 128bp | DNase-seq |
| procap | 12 | 128 | 1bp, 128bp | Transcription initiation |
| cage | 546 | 640 | 1bp, 128bp | 5' cap RNA |
| rna_seq | 667 | 768 | 1bp, 128bp | RNA expression |
| chip_tf | 1617 | 1664 | 128bp | TF binding |
| chip_histone | 1116 | 1152 | 128bp | Histone modifications |
| contact_maps | 28 | 28 | 64×64 | 3D chromatin contacts |
| splice_sites | 5 | 5 | 1bp | Splice site classification (D+, A+, D−, A−, none) |
| splice_junctions | 734 | 734 | pairwise | Junction read counts (367 tissues × 2 strands) |
| splice_site_usage | 734 | 734 | 1bp | Fraction of transcripts using splice site |

The Tracks column shows the number of real human tracks (without padding); Dimension is the raw output tensor size, with zero padding filling the gap. When using named_outputs=True, padding is stripped by default. See the named outputs guide for details.
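When working with the raw (non-named) outputs, the padding can also be stripped manually by slicing to the real-track count from the table; a sketch for the 128 bp ATAC head, with a dummy array standing in for `outputs['atac'][128]`:

```python
import numpy as np

NUM_REAL_ATAC_TRACKS = 167  # real human ATAC tracks, from the table above

# Stand-in for outputs['atac'][128]: real tracks first, zero padding after
padded = np.zeros((1, 1024, 256), dtype=np.float32)
padded[..., :NUM_REAL_ATAC_TRACKS] = 1.0  # pretend real tracks are non-zero

# Keep only the real tracks; the trailing channels are always zeros
unpadded = padded[..., :NUM_REAL_ATAC_TRACKS]
print(unpadded.shape)  # (1, 1024, 167)
```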

See more information about model outputs in the official AlphaGenome documentation.

Example Notebooks

Citation

```bibtex
@article{avsec2026alphagenome,
  title     = {Advancing regulatory variant effect prediction with AlphaGenome},
  author    = {Avsec, {\v{Z}}iga and Latysheva, Natasha and Cheng, Jun and Novati, Guido and Taylor, Kyle R and Ward, Tom and Bycroft, Clare and Nicolaisen, Lauren and Arvaniti, Eirini and Pan, Joshua and others},
  journal   = {Nature},
  volume    = {649},
  number    = {8099},
  pages     = {1206--1218},
  year      = {2026},
  publisher = {Nature Publishing Group UK London}
}
```
bioRxiv preprint:

```bibtex
@article{avsec2025alphagenome,
  title   = {AlphaGenome: advancing regulatory variant effect prediction with a unified DNA sequence model},
  author  = {Avsec, {\v Z}iga and Latysheva, Natasha and Cheng, Jun and ...},
  year    = {2025},
  journal = {bioRxiv},
  doi     = {10.1101/2025.06.25.661532}
}
```

Acknowledgements

We acknowledge Phil Wang, Miquel Anglada-Girotto, and Xinming Tu as developers of an older AlphaGenome PyTorch port unrelated to this repo. Note that the PyPI namespace is now linked to this repo.

License

This project is a port of the google-deepmind/alphagenome_research repository licensed under the Apache License, Version 2.0:

Copyright 2026 Google LLC

The model parameters, output, and any derivatives thereof remain subject to Google DeepMind’s AlphaGenome Model Terms.

This port is licensed under the Apache License, Version 2.0 (Apache 2.0):

Copyright 2026 Danila Bredikhin, Martin Kjellberg, Christopher Zou, Alejandro Buendia, Xinming Tu, Anshul Kundaje

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
