AgentCPM-GUI Logo

English | 中文

Overview | Quick Start | Model | Evaluation Data | Technical Report

News

  • [2025-06-03] 📄📄📄 We have released the technical report of AgentCPM-GUI! Check it out here.
  • [2025-05-13] 🚀🚀🚀 We have open-sourced AgentCPM-GUI, an on-device GUI agent capable of operating Chinese & English apps and equipped with RFT-enhanced reasoning abilities.

Overview

AgentCPM-GUI is an open-source on-device LLM agent model jointly developed by THUNLP, Renmin University of China, and ModelBest. Built on MiniCPM-V with 8 billion parameters, it accepts smartphone screenshots as input and autonomously executes user-specified tasks.

Key features include:

  • High-quality GUI grounding — Pre-training on a large-scale bilingual Android dataset significantly boosts localization and comprehension of common GUI widgets (buttons, input boxes, labels, icons, etc.).
  • Chinese-app operation — The first open-source GUI agent finely tuned for Chinese apps, covering 30+ popular apps such as Amap, Dianping, bilibili, and Xiaohongshu.
  • Enhanced planning & reasoning — Reinforcement fine-tuning (RFT) lets the model “think” before outputting an action, greatly improving success on complex tasks.
  • Compact action-space design — An optimized action space and concise JSON format reduce the average action length to 9.7 tokens, boosting on-device inference efficiency.
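For example, a complete click action in this compact format is just `{"POINT":[480,320]}` (see the Action Space section below for the full schema).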

Demo Case (1x speed):

https://github.com/user-attachments/assets/694d3c2c-12ce-4084-8feb-4937ca9ad247

Quick Start

Install dependencies

```bash
git clone https://github.com/OpenBMB/AgentCPM-GUI
cd AgentCPM-GUI
conda create -n gui_agent python=3.11
conda activate gui_agent
pip install -r requirements.txt
```

Download the model

Download AgentCPM-GUI from Hugging Face and place it in `model/AgentCPM-GUI`.
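Alternatively, the weights can be fetched programmatically. Here is a minimal sketch using `huggingface_hub`, assuming the repo id `openbmb/AgentCPM-GUI` (verify the exact id on the model card):

```python
# Minimal download sketch. The repo id below is an assumption; check the
# Hugging Face model card for the exact id before running.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="openbmb/AgentCPM-GUI", local_dir="model/AgentCPM-GUI")
```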

Hugging Face Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from PIL import Image
import json

# 1. Load the model and tokenizer
model_path = "model/AgentCPM-GUI"  # model path
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to("cuda:0")

# 2. Build the input
instruction = "请点击屏幕上的‘会员’按钮"
image_path = "assets/test.jpeg"
image = Image.open(image_path).convert("RGB")

# 3. Resize the longer side to 1120 px to save compute & memory
def __resize__(origin_img):
    resolution = origin_img.size
    w, h = resolution
    max_line_res = 1120
    if max_line_res is not None:
        max_line = max_line_res
        if h > max_line:
            w = int(w * max_line / h)
            h = max_line
        if w > max_line:
            h = int(h * max_line / w)
            w = max_line
    img = origin_img.resize((w, h), resample=Image.Resampling.LANCZOS)
    return img

image = __resize__(image)

# 4. Build the message format
messages = [{
    "role": "user",
    "content": [
        f"<Question>{instruction}</Question>\n当前屏幕截图:",
        image
    ]
}]

# 5. Inference
ACTION_SCHEMA = json.load(open('eval/utils/schema/schema.json', encoding="utf-8"))
items = list(ACTION_SCHEMA.items())
insert_index = 3
items.insert(insert_index, ("required", ["thought"]))  # enable/disable thought by setting it to "required"/"optional"
ACTION_SCHEMA = dict(items)
SYSTEM_PROMPT = f'''# Role
你是一名熟悉安卓系统触屏GUI操作的智能体,将根据用户的问题,分析当前界面的GUI元素和布局,生成相应的操作。

# Task
针对用户问题,根据输入的当前屏幕截图,输出下一步的操作。

# Rule
- 以紧凑JSON格式输出
- 输出操作必须遵循Schema约束

# Schema
{json.dumps(ACTION_SCHEMA, indent=None, ensure_ascii=False, separators=(',', ':'))}'''

outputs = model.chat(
    image=None,
    msgs=messages,
    system_prompt=SYSTEM_PROMPT,
    tokenizer=tokenizer,
    temperature=0.1,
    top_p=0.3,
    n=1,
)

# 6. Output
print(outputs)
```

Expected output:

{"thought":"任务目标是点击屏幕上的‘会员’按钮。当前界面显示了应用的推荐页面,顶部有一个导航栏。点击‘会员’按钮可以访问应用的会员相关内容。","POINT":[729,69]}

Note: AgentCPM-GUI outputs relative coordinates in the range 0–1000. The conversions between absolute pixel coordinates and relative coordinates are:

```python
rel_x, rel_y = [int(abs_x / width * 1000), int(abs_y / height * 1000)]
abs_x, abs_y = [int(rel_x / 1000 * width), int(rel_y / 1000 * height)]
```

where width and height refer to the original width and height of the image, respectively.
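For instance, mapping the `POINT` from the expected output above back to pixels on a hypothetical 1080×2400 screenshot:

```python
# Worked example: de-normalize the model's POINT. The 1080x2400 resolution is
# a hypothetical device size, not something the model prescribes.
width, height = 1080, 2400
rel_x, rel_y = 729, 69              # "POINT" from the expected output above
abs_x = int(rel_x / 1000 * width)   # -> 787
abs_y = int(rel_y / 1000 * height)  # -> 165
print(abs_x, abs_y)
```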

vLLM Inference

```bash
# Launch the vLLM server.
# If you run out of VRAM, try adding --max_model_len 2048.
vllm serve model/AgentCPM-GUI --served-model-name AgentCPM-GUI --tensor_parallel_size 1 --trust-remote-code --limit-mm-per-prompt image=10
```
```python
import base64
import io
import json
import requests
from PIL import Image

END_POINT = "http://localhost:8000/v1/chat/completions"  # Replace with actual endpoint

# System prompt
ACTION_SCHEMA = json.load(open('eval/utils/schema/schema.json', encoding="utf-8"))
items = list(ACTION_SCHEMA.items())
insert_index = 3
items.insert(insert_index, ("required", ["thought"]))  # enable/disable thought by setting it to "required"/"optional"
ACTION_SCHEMA = dict(items)
SYSTEM_PROMPT = f'''# Role
你是一名熟悉安卓系统触屏GUI操作的智能体,将根据用户的问题,分析当前界面的GUI元素和布局,生成相应的操作。

# Task
针对用户问题,根据输入的当前屏幕截图,输出下一步的操作。

# Rule
- 以紧凑JSON格式输出
- 输出操作必须遵循Schema约束

# Schema
{json.dumps(ACTION_SCHEMA, indent=None, ensure_ascii=False, separators=(',', ':'))}'''

def encode_image(image: Image.Image) -> str:
    """Convert PIL Image to base64-encoded string."""
    with io.BytesIO() as in_mem_file:
        image.save(in_mem_file, format="JPEG")
        in_mem_file.seek(0)
        return base64.b64encode(in_mem_file.read()).decode("utf-8")

def __resize__(origin_img):
    resolution = origin_img.size
    w, h = resolution
    max_line_res = 1120
    if max_line_res is not None:
        max_line = max_line_res
        if h > max_line:
            w = int(w * max_line / h)
            h = max_line
        if w > max_line:
            h = int(h * max_line / w)
            w = max_line
    img = origin_img.resize((w, h), resample=Image.Resampling.LANCZOS)
    return img

def predict(text_prompt: str, image: Image.Image):
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": [
            {"type": "text", "text": f"<Question>{text_prompt}</Question>\n当前屏幕截图:(<image>./</image>)"},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{encode_image(image)}"}}
        ]}
    ]
    payload = {
        "model": "AgentCPM-GUI",  # Your model name
        "temperature": 0.1,
        "messages": messages,
        "max_tokens": 2048,
    }
    headers = {
        "Content-Type": "application/json",
    }
    response = requests.post(END_POINT, headers=headers, json=payload)
    assistant_msg = response.json()["choices"][0]["message"]["content"]
    return assistant_msg

image = __resize__(Image.open("assets/test.jpeg"))
instruction = "请点击屏幕上的‘会员’按钮"
response = predict(instruction, image)
print(response)
```

Action Space

At each step, the agent outputs a single JSON object that contains:

  • One (and only one) primitive action, chosen from the list below;
  • Optional modifiers (duration, thought) and/or a task-level flag (STATUS).

Note that all keywords are case-sensitive and that the model uses compact JSON (i.e., no extra whitespace), which affects the tokenizer's behavior.

| Action | Required field(s) | Optional field(s) | Purpose | Example |
| --- | --- | --- | --- | --- |
| Click | `POINT:[x,y]` | `duration`, `thought`, `STATUS` | Single tap at the normalized screen coordinate (0–1000, origin = top-left). | `{"POINT":[480,320]}` |
| Long Press | `POINT:[x,y]`<br>`duration:1000` | `duration`, `thought`, `STATUS` | Touch-and-hold at the coordinate (set a longer duration, e.g. >200 ms). | `{"POINT":[480,320],"duration":1000}` |
| Swipe | `POINT:[x,y]`<br>`to:"up" \| "down" \| "left" \| "right"` or `to:[x,y]` | `duration`, `thought`, `STATUS` | Swipe from the start point toward a direction or another coordinate. | `{"POINT":[500,200],"to":"down"}` |
| Press key | `PRESS:"HOME" \| "BACK" \| "ENTER"` | `duration`, `thought`, `STATUS` | Trigger a hardware / navigation button. | `{"PRESS":"HOME"}` |
| Type text | `TYPE:"<text>"` | `duration`, `thought`, `STATUS` | Insert the given text at the current input focus. | `{"TYPE":"Hello, world!"}` |
| Wait | `duration` | `thought`, `STATUS` | Idle for the specified time without any other action. | `{"duration":500}` |
| Task-level status | `STATUS:"start" \| "continue" \| "finish" \| "satisfied" \| "impossible" \| "interrupt" \| "need_feedback"` | `duration`, `thought` | Report task progress; may appear alone or with a primitive action. | `{"STATUS":"finish"}` |
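As a concrete illustration, a minimal executor can parse this JSON and replay it on a device. The sketch below handles the tap, long-press, key, and text actions; the `adb` backend and helper names are illustrative assumptions, not part of this repo's API:

```python
# Minimal action-executor sketch. The adb backend and helper names are
# illustrative assumptions, not part of AgentCPM-GUI's documented API.
import json
import subprocess

KEYCODES = {"HOME": "KEYCODE_HOME", "BACK": "KEYCODE_BACK", "ENTER": "KEYCODE_ENTER"}

def to_abs(point, width, height):
    """De-normalize a 0-1000 relative coordinate pair to device pixels."""
    return int(point[0] / 1000 * width), int(point[1] / 1000 * height)

def execute(action_str: str, width: int, height: int) -> None:
    action = json.loads(action_str)
    if "PRESS" in action:
        subprocess.run(["adb", "shell", "input", "keyevent", KEYCODES[action["PRESS"]]])
    elif "TYPE" in action:
        subprocess.run(["adb", "shell", "input", "text", action["TYPE"]])
    elif "POINT" in action and "to" not in action:
        x, y = to_abs(action["POINT"], width, height)
        duration = action.get("duration", 0)
        if duration > 200:
            # long press = tap held for `duration` ms (adb models it as a zero-length swipe)
            subprocess.run(["adb", "shell", "input", "swipe", str(x), str(y), str(x), str(y), str(duration)])
        else:
            subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)])
    # swipes ("to"), waits, and STATUS handling are omitted for brevity
```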

Fine-tuning

Source code for SFT and RFT training is provided — see SFT and RFT.

Performance Evaluation

Grounding Benchmark

| Model | Fun2Point | Text2Point | Bbox2text | Average |
| --- | --- | --- | --- | --- |
| AgentCPM-GUI-8B | 79.1 | 76.5 | 58.2 | 71.3 |
| Qwen2.5-VL-7B | 59.8 | 59.3 | 50.0 | 56.4 |
| Intern2.5-VL-8B | 17.2 | 24.2 | 45.9 | 29.1 |
| Intern2.5-VL-26B | 14.8 | 16.6 | 36.3 | 22.6 |
| OS-Genesis-7B | 8.3 | 5.8 | 4.0 | 6.0 |
| UI-TARS-7B | 56.8 | 66.7 | 1.4 | 41.6 |
| OS-Atlas-7B | 53.6 | 60.7 | 0.4 | 38.2 |
| Aguvis-7B | 60.8 | 76.5 | 0.2 | 45.8 |
| GPT-4o | 22.1 | 19.9 | 14.3 | 18.8 |
| GPT-4o with Grounding | 44.3 | 44.0 | 14.3 | 44.2 |

Agent Benchmark

| Model | Android Control-Low TM | Android Control-Low EM | Android Control-High TM | Android Control-High EM | GUI-Odyssey TM | GUI-Odyssey EM | AITZ TM | AITZ EM | Chinese APP (CAGUI) TM | Chinese APP (CAGUI) EM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AgentCPM-GUI-8B | 94.39 | 90.20 | 77.70 | 69.17 | 90.85 | 74.96 | 85.71 | 76.38 | 96.86 | 91.28 |
| Qwen2.5-VL-7B | 94.14 | 84.96 | 75.10 | 62.90 | 59.54 | 46.28 | 78.41 | 54.61 | 74.18 | 55.16 |
| UI-TARS-7B | 95.24 | 91.79 | 81.63 | 74.43 | 86.06 | 67.90 | 80.42 | 65.77 | 88.62 | 70.26 |
| OS-Genesis-7B | 90.74 | 74.22 | 65.92 | 44.43 | 11.67 | 3.63 | 19.98 | 8.45 | 38.10 | 14.50 |
| OS-Atlas-7B | 73.03 | 67.25 | 70.36 | 56.53 | 91.83\* | 76.76\* | 74.13 | 58.45 | 81.53 | 55.89 |
| Aguvis-7B | 93.85 | 89.40 | 65.56 | 54.18 | 26.71 | 13.54 | 35.71 | 18.99 | 67.43 | 38.20 |
| OdysseyAgent-7B | 65.10 | 39.16 | 58.80 | 32.74 | 90.83 | 73.67 | 59.17 | 31.60 | 67.56 | 25.44 |
| GPT-4o | - | 19.49 | - | 20.80 | - | 20.39 | 70.00 | 35.30 | 3.67 | 3.67 |
| Gemini 2.0 | - | 28.50 | - | 60.20 | - | 3.27 | - | - | - | - |
| Claude | - | 19.40 | - | 12.50 | 60.90 | - | - | - | - | - |

\* Different train/test splits.

TM and EM stand for Type Match and Exact Match, respectively: a predicted action counts toward TM when its action type matches the ground truth, and toward EM only when the full action (type and arguments) matches. All evaluation data and code are open-sourced — see here for details.

Evaluation Data

We provide CAGUI, an evaluation benchmark for Chinese apps covering grounding and agent tasks. See the dataset on Hugging Face.
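A minimal loading sketch with the `datasets` library; the dataset id below is a hypothetical placeholder, so use the id from the Hugging Face link above:

```python
# Minimal loading sketch. "openbmb/CAGUI" is a hypothetical placeholder id;
# substitute the actual dataset id from the Hugging Face page.
from datasets import load_dataset

cagui = load_dataset("openbmb/CAGUI")
print(cagui)
```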

FAQs

Click here to view the FAQs.

Trends

Star History Chart

License

  • Code in this repository is released under the Apache-2.0 license.


Citation

If AgentCPM-GUI is useful for your research, please cite:

```bibtex
@inproceedings{zhang-etal-2025-agentcpm,
    title     = {Agent{CPM}-{GUI}: Building Mobile-Use Agents with Reinforcement Fine-Tuning},
    author    = {Zhong Zhang and Yaxi Lu and Yikun Fu and Yupeng Huo and Shenzhi Yang and Yesai Wu and Han Si and Xin Cong and Haotian Chen and Yankai Lin and Jie Xie and Wei Zhou and Wang Xu and Yuanheng Zhang and Zhou Su and Zhongwu Zhai and Xiaoming Liu and Yudong Mei and Jianming Xu and Hongyan Tian and Chongyi Wang and Chi Chen and Yuan Yao and Zhiyuan Liu and Maosong Sun},
    year      = {2025},
    booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
}
```
