{ "cells": [ { "cell_type": "markdown", "id": "ff73715d-bf8e-45ff-a36d-60a51ebf3a63", "metadata": {}, "source": [ "# 02.Token、Embedding 与向量空间" ] }, { "cell_type": "markdown", "id": "fe92a04c-468a-42e4-9ac6-6e50f689413f", "metadata": {}, "source": [ "在上一节中讲到,LLM的本质是**一个在给定上下文条件下,预测下一个 token 概率分布的函数**。那么Token到底是什么,为什么LLM要以token为单位。以及由此而来的Embedding和向量空间的概念。" ] }, { "cell_type": "markdown", "id": "3245a5af-97cb-4b36-9b1c-1fd689dbb7a6", "metadata": {}, "source": [ "## 一、什么是 Token,为什么 LLM 的基本单位是 Token" ] }, { "cell_type": "markdown", "id": "a40b8fe2-4081-4616-a800-d0770020ca34", "metadata": {}, "source": [ "要理解 Token,不能只停留在当下的 AI 浪潮中,而需要**回到一个更根本的问题**:\n", "**计算机究竟是如何“看到”文字的?**\n", "\n", "\n", "### 1. 追根溯源:从二进制到字符\n", "\n", "计算机的世界里,**唯一真实存在的只有二进制**。\n", "无论是文字、图片还是代码,最终都必须被表示为 0 和 1。\n", "\n", "因此,计算机并不能直接理解:\n", "\n", "```text\n", "A\n", "你\n", "+\n", "```\n", "\n", "它真正能处理的只有类似这样的形式:\n", "\n", "```text\n", "01000001\n", "```\n" ] }, { "cell_type": "markdown", "id": "183f2a54-de41-4f39-8742-ab91e021b509", "metadata": {}, "source": [ "### 2. 编码(Encoding):字符进入计算机的第一步\n", "\n", "为了让计算机“识别”人类的文字,出现了 **编码(Encoding)** 的概念——\n", "**用数字去代表字符。**\n", "\n", "最早被广泛使用的是 **ASCII 编码**:\n", "\n", "* 每个字符用 **8 位二进制数**表示\n", "* 覆盖英文字符、数字和少量符号\n", "\n", "**例子:**\n", "\n", "* `A` → 十进制 `65`\n", "* `B` → 十进制 `66`\n", "\n", "ASCII 解决了**英文世界**的问题,但它无法表示中文、日文等非拉丁字符。\n", "\n", "\n", "### 3. Unicode:让“所有字符”都能被编码\n", "\n", "为了解决多语言问题,**Unicode** 体系诞生了(常见实现如 UTF-8)。\n", "\n", "它的目标很明确:\n", "\n", "> **为世界上每一个字符分配一个唯一编号。**\n", "\n", "在这一阶段,计算机已经可以:\n", "\n", "* 显示中文\n", "* 存储多语言文本\n", "* 正确传输字符序列\n" ] }, { "cell_type": "markdown", "id": "7260ec26-f601-42fb-8ce1-169eff03447b", "metadata": {}, "source": [ "### 4. Token 的出现:从“编码”走向“统计压缩”\n", "\n", "Token 的意义,正是在这里发生了质变。\n", "\n", "与 ASCII / Unicode 不同,**Token 不再只是编码方案**,\n", "而是一种基于语料统计的 **压缩与建模方式**:\n", "\n", "> **把经常一起出现的字符序列,打包成一个更高层级的单位。**\n", "\n", "换句话说:编码关心的是“这个字符用哪个数字表示?”Token 关心的是:*“哪些字符经常一起出现,值得被当成一个整体?”*\n", "\n", "\n", "### 5. 
An intuitive comparison: the same phrase at different levels\n", "\n", "Take “人工智能” (“artificial intelligence”) and compare how encoding and tokens represent it.\n", "\n", "| Level | Representation |\n", "| ----------------- | --------------- |\n", "| ASCII / Unicode | `人` `工` `智` `能` |\n", "| Token | `人工` `智能` |\n", "\n", "The lower the level, the closer to the machine and the longer the sequence; the higher the level, the closer to human semantics and the clearer the structure." ] }, { "cell_type": "markdown", "id": "784b89ca-40cf-416c-b65f-0a79d52f4820", "metadata": {}, "source": [ "### 6. Why the LLM's basic unit is the token\n", "\n", "The core architecture of today's mainstream LLMs is the Transformer, and at bottom a Transformer supports only **addition, multiplication, and matrix operations**.\n", "This means that **anything entering the model must first be represented as numbers**; the model cannot directly understand or compute over the strings, characters, or words of human language.\n", "\n", "Text must therefore be split into **tokens**, mapped to **token IDs**, and converted into vector representations (embeddings) before it can take part in attention computation, matrix operations, and gradient backpropagation. Under this computational paradigm, a token is not merely a way of representing text; it is the **only linguistic unit a Transformer can perceive and operate on**, and the understanding, reasoning, and generation the model exhibits all emerge from numerical computation at the token level." ] }, { "cell_type": "markdown", "id": "ae92366a-82d4-4f1d-9247-f0df8f9330d1", "metadata": {}, "source": [ "## II. What Is an Embedding, and What Problem Does It Solve" ] }, { "cell_type": "markdown", "id": "efb81661-1a39-4733-ba88-7d92215cdb54", "metadata": {}, "source": [ "### 1. What is an embedding\n", "In an LLM, a token by itself is just a **discrete ID (an integer)** such as `12345`. The number carries no semantic meaning, and the magnitudes of, and distances between, IDs are meaningless. The embedding's job is to map these discrete, unordered token IDs into a **continuous high-dimensional vector space**, where the model can characterize their **similarity, direction, and compositional relationships** mathematically. In that space, semantically related tokens receive nearby vectors, and different meanings show up as different directions or distances, which lets the attention mechanism and matrix operations “see” linguistic structure and semantic relations. In short, an embedding converts a discrete numeric ID into a **high-dimensional continuous vector** (an array of hundreds or thousands of real numbers)." ] }, { "cell_type": "markdown", "id": "ca52720d-8841-44c4-9091-d7a2097bbb4d", "metadata": {}, "source": [ "Taking “人工智能” as an example, the process is:\n", "1. **Raw text**: “人工智能”\n", "2. **Tokenization**: split into `[人工, 智能]`\n", "3. **Token IDs**: converted to the discrete numbers `[3421, 5678]`\n", "4. **Embedding layer**:\n", "* `3421` → `[0.12, -0.45, 0.88, ...]` (1,536 numbers)\n", "* `5678` → `[0.34, 0.11, -0.92, ...]` (1,536 numbers)" ] }, { "cell_type": "markdown", "id": "642bbe1a-c1e5-470d-b3c1-fa7a7363478a", "metadata": {}, "source": [ "### 2. 
What core problem does it solve?\n", "\n", "Before embeddings, computers handled text with **one-hot encoding**.\n", "\n", "> For example:\n", "> “cat” = `[1, 0, 0, 0]`\n", "> “dog” = `[0, 1, 0, 0]`\n", "> “phone” = `[0, 0, 1, 0]`\n", "\n", "This traditional approach has two fatal flaws, and embeddings neatly solve both:\n", "\n", "#### A. Solving the “semantic island” problem (semantic relatedness)\n", "\n", "* **Old problem**: with one-hot encoding, the dot product of the vectors for any two distinct words is 0. To the computer, “cat” is exactly as far from “dog” as it is from “phone”; it cannot capture any relationship in meaning between words.\n", "* **Embedding solution**: words are mapped into a **semantic space** in which words with similar meanings (such as “cat” and “kitten”) sit geometrically close together, while unrelated words sit far apart.\n", "\n", "#### B. Solving the “curse of dimensionality” problem (storage efficiency)\n", "\n", "* **Old problem**: with a vocabulary of 100,000 words, every word needs a 100,000-dimensional vector that is almost entirely zeros, which is hugely wasteful.\n", "* **Embedding solution**: each token is represented by a fixed-length **dense vector** (e.g., 12,288 dimensions in GPT-3). The vector not only saves space but carries remarkably rich semantic detail.\n", "\n", "### 3. A property of embeddings: semantic arithmetic\n", "\n", "The most striking property of embeddings is that they make **mathematical operations** on language possible. In a well-trained embedding space, you can find “analogy”-like logical relationships, for example:\n", "\n", "```text\n", "King - Man + Woman ≈ Queen\n", "```\n", "\n", "This suggests the model has “understood” that **queen is to female what king is to male.** Relations of gender, of tense, and even of logic are all encoded into these numbers." ] }, { "cell_type": "code", "execution_count": 17, "id": "102f3435-9e84-4cc3-a384-79076f07f564", "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import HTML, display\n", "\n", "display(HTML(\"\"\"\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n", "\"\"\"))" ] }, { "cell_type": "markdown", "id": "88d63f7b-144e-4b09-8686-6eed39dd3445", "metadata": {}, "source": [ "## 实战: 生成 Embedding" ] }, { "cell_type": "code", "execution_count": 4, "id": "83d6fbd6-7901-44d5-bef5-d847a1e94cc0", "metadata": {}, "outputs": [], "source": [ "from sentence_transformers import SentenceTransformer\n", "import pandas as pd\n", "\n", "# 选择模型\n", "model = SentenceTransformer(\"all-MiniLM-L6-v2\")" ] }, { "cell_type": "code", "execution_count": 7, "id": "7c0d4bcd-a5ef-4e54-a2d5-481f52e3b01f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([-5.31470478e-02, 1.41944205e-02, 7.14570703e-03, 6.86086714e-02,\n", " -7.84803182e-02, 1.01674581e-02, 1.02283150e-01, -1.20648630e-02,\n", " 9.52134356e-02, -3.03501599e-02, 2.16468587e-03, -6.48644567e-02,\n", " -2.59441393e-03, 6.21889625e-03, -3.92866367e-03, -3.06245871e-02,\n", " -4.79115173e-02, -1.93005297e-02, -5.98853417e-02, -1.04167372e-01,\n", " -8.61489400e-02, 3.63595113e-02, -2.55260803e-02, 1.63894286e-03,\n", " -7.14420825e-02, 6.16800524e-02, 1.71946045e-02, -5.66111133e-02,\n", " 2.48122215e-02, -7.78224692e-02, -3.24991569e-02, -8.69906414e-03,\n", " -1.15325311e-02, 3.81673612e-02, -5.69308065e-02, -5.32710142e-02,\n", " 4.92575485e-03, 3.25005986e-02, 7.25324824e-02, 3.29848900e-02,\n", " 2.47229282e-02, -8.33452195e-02, -1.56857930e-02, -4.81197946e-02,\n", " -3.47721879e-03, 4.35052160e-03, -3.58956084e-02, -5.18540107e-02,\n", " 1.56732909e-02, 3.52238840e-03, -1.03232525e-02, 4.76416945e-02,\n", " -4.01585810e-02, -9.09934007e-03, -3.46348435e-02, -3.69818099e-02,\n", " -4.08402309e-02, 1.77191067e-02, -9.39303543e-03, -5.36363572e-02,\n", " 1.11330664e-02, 1.61804762e-02, 1.37846181e-02, 2.83132568e-02,\n", " 4.02589217e-02, 2.09016986e-02, -1.44656394e-02, -1.60001859e-03,\n", " -4.97509679e-03, 1.20726479e-02, 4.55365926e-02, 1.31221246e-02,\n", " 7.05804005e-02, -3.08292732e-02, 3.04354727e-02, -1.08476602e-01,\n", " 5.54767624e-02, 
-1.75317489e-02, 1.64315417e-01, 5.14558889e-02,\n", " -2.76835170e-02, -2.99590547e-02, -5.69522269e-02, 5.68234473e-02,\n", " 5.09467646e-02, 1.51321786e-02, -1.28050987e-03, 2.39950046e-02,\n", " -6.32866621e-02, 2.88717858e-02, -5.53410612e-02, -3.49392518e-02,\n", " 3.02832369e-02, 2.68858746e-02, -8.35041329e-02, 1.83623694e-02,\n", " -3.51577885e-02, -8.28191116e-02, -7.18911737e-02, 1.97990969e-01,\n", " 1.64156351e-02, 4.45847474e-02, -3.75540741e-03, -3.85080092e-02,\n", " 5.34835942e-02, -3.46948649e-03, -4.34817411e-02, 6.33896515e-02,\n", " -1.31755564e-02, -1.97628196e-02, -4.52737324e-02, 2.07129531e-02,\n", " -5.65177351e-02, 5.74406050e-02, 5.55342957e-02, 2.11564805e-02,\n", " -1.00935355e-01, -3.43029164e-02, 2.93644387e-02, -3.32466699e-02,\n", " 2.89120711e-02, 3.00968327e-02, -5.18645532e-02, 8.20421241e-03,\n", " -1.66977104e-02, -8.43599364e-02, 1.11420378e-02, -5.92391480e-33,\n", " 3.06508038e-02, -8.50427821e-02, 2.71754828e-03, -4.10685278e-02,\n", " -4.27255332e-02, 4.10275124e-02, 2.94409040e-02, 3.64597365e-02,\n", " -1.21237539e-01, 1.34886522e-02, -1.38791539e-02, 3.12682539e-02,\n", " -2.16193087e-02, 1.61773115e-02, 1.12230092e-01, -6.70827320e-03,\n", " -1.89293770e-03, 5.32201156e-02, 3.26109342e-02, -3.77661884e-02,\n", " -4.69162986e-02, 6.19483553e-02, 6.36137053e-02, 5.01194410e-02,\n", " -7.60396710e-03, -2.14810148e-02, -3.77874821e-02, -8.29146579e-02,\n", " -2.62903869e-02, 3.61402445e-02, 4.12769802e-02, 1.45185124e-02,\n", " 7.34504461e-02, 6.52731396e-04, -8.14354494e-02, -5.57826385e-02,\n", " -4.21326421e-02, -9.66771394e-02, -4.01260480e-02, 2.85595078e-02,\n", " 1.29050031e-01, 1.04480283e-02, 2.51060501e-02, 1.73397325e-02,\n", " -2.72082537e-02, -4.94833011e-03, 1.57340150e-02, 3.43773626e-02,\n", " -4.44876663e-02, 2.08375175e-02, 2.75189616e-02, -1.43145444e-02,\n", " 2.88183019e-02, -2.12674066e-02, 8.81789345e-03, 9.85575933e-03,\n", " 2.99947709e-03, -2.38090865e-02, 1.30473869e-02, 6.63441122e-02,\n", " 
6.89176843e-02, 8.25469047e-02, 8.78614653e-03, -1.40591711e-02,\n", " 9.10706893e-02, -1.22176625e-01, -4.53094766e-02, -1.80813260e-02,\n", " -2.21721251e-02, 2.15227753e-02, -3.88434231e-02, -1.95572469e-02,\n", " 7.97145590e-02, -1.57627091e-02, 6.88804835e-02, -1.55662987e-02,\n", " 2.27821078e-02, 2.52953786e-02, -3.11915670e-02, -3.35003026e-02,\n", " -2.15878300e-02, -1.00693638e-02, 5.50083211e-03, 4.89774533e-02,\n", " -2.15045698e-02, 6.38372153e-02, -1.97170395e-02, -3.02719940e-02,\n", " 6.23276411e-03, 4.51806299e-02, -4.58096117e-02, -4.91579063e-02,\n", " 8.70888755e-02, 2.73449812e-02, 9.05928835e-02, 3.43216396e-33,\n", " 6.25441819e-02, 2.89678350e-02, 5.44469949e-05, 9.14406180e-02,\n", " -3.03731542e-02, 4.91050910e-03, -2.54671145e-02, 6.67012036e-02,\n", " -3.41491178e-02, 4.78098281e-02, -3.42067257e-02, 7.90238101e-03,\n", " 1.07918680e-01, 9.03509837e-03, 7.59194931e-03, 8.85861665e-02,\n", " 3.73837794e-03, -3.04710288e-02, 2.17071641e-02, -4.32722270e-03,\n", " -1.44716799e-01, 1.15704779e-02, 1.83675960e-02, -2.58501470e-02,\n", " -5.19390665e-02, 3.94174643e-02, 3.75358388e-02, -1.47289466e-02,\n", " -2.22415328e-02, -4.87109609e-02, -6.51957840e-03, -3.95688005e-02,\n", " -4.12730090e-02, -2.84374133e-02, 1.06937578e-02, 1.58605665e-01,\n", " 4.76557165e-02, -4.72694859e-02, -6.29038140e-02, 8.55189282e-03,\n", " 5.99067435e-02, 1.93113182e-02, -3.22296433e-02, 1.11583948e-01,\n", " 1.61229149e-02, 5.26993088e-02, -1.79329794e-02, -5.92171960e-03,\n", " 5.29188178e-02, 1.84235200e-02, -4.74426262e-02, -1.43298488e-02,\n", " 3.00339926e-02, -7.33370781e-02, -1.25934128e-02, 4.52122325e-03,\n", " -9.49897692e-02, 1.88207123e-02, -2.90870517e-02, -5.30484645e-03,\n", " -2.84171407e-03, 6.96969330e-02, 1.24590090e-02, 1.21921822e-01,\n", " -1.04835384e-01, -5.37252650e-02, -1.27632422e-02, -2.79169213e-02,\n", " 5.00147007e-02, -7.64585733e-02, 2.42873449e-02, 4.53653820e-02,\n", " -2.89868917e-02, 1.01877814e-02, -1.06165595e-02, 
3.10457610e-02,\n", " -4.64856885e-02, 4.57400270e-03, 7.66269350e-03, -6.38109772e-03,\n", " -7.78828710e-02, -6.52912185e-02, -4.76768091e-02, 1.03237452e-02,\n", " -5.66283204e-02, -1.12508647e-02, 2.10432080e-03, 6.38601333e-02,\n", " -1.33299334e-02, -3.01730894e-02, -9.82381031e-03, 5.49602620e-02,\n", " -2.16868073e-02, -5.33165708e-02, -2.85976138e-02, -1.33195126e-08,\n", " -2.86922399e-02, -2.91737635e-02, -4.29840200e-02, -1.95621699e-02,\n", " 9.97484177e-02, 6.95172623e-02, -3.01070735e-02, -4.01307791e-02,\n", " -6.63048029e-03, 2.61629764e-02, 4.42572646e-02, -1.63678415e-02,\n", " -7.00091869e-02, 1.34455282e-02, 4.65524308e-02, -1.51504036e-02,\n", " -5.34376279e-02, 3.98613103e-02, 6.28618672e-02, 7.71477371e-02,\n", " -5.10236397e-02, 3.02715208e-02, 5.55066504e-02, 2.20580120e-03,\n", " -5.12280725e-02, -3.59411649e-02, 4.56036888e-02, 1.06061161e-01,\n", " -8.21795836e-02, 3.80977243e-02, -2.26301346e-02, 1.40611708e-01,\n", " -7.61837214e-02, -3.00867762e-02, -4.03564796e-03, -6.96879402e-02,\n", " 7.60908946e-02, -7.92493373e-02, 2.50342023e-02, 3.40404622e-02,\n", " 5.04252091e-02, 1.52100503e-01, -2.00870167e-02, -7.89096504e-02,\n", " -5.83763991e-04, 6.22937195e-02, 2.64480468e-02, -1.21596888e-01,\n", " -2.82968692e-02, -5.64231686e-02, -9.82324257e-02, -7.41288718e-03,\n", " 2.79071331e-02, 6.90643117e-02, 1.50048779e-02, 5.07094571e-03,\n", " -1.31183881e-02, -4.80346903e-02, -1.67353749e-02, 3.66703980e-02,\n", " 1.11444503e-01, 2.98568588e-02, 2.39054970e-02, 1.10093102e-01],\n", " dtype=float32)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# 生成embedding\n", "model.encode('dog')" ] }, { "cell_type": "code", "execution_count": 10, "id": "b178e6f2-96d3-4025-8e8a-6ee68b9271b9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "北京 vs 上海: 0.378\n", "北京 vs 东京: 0.836\n", "北京 vs 苹果: 0.173\n", "Python vs Java: 0.450\n", "Python vs 香蕉: 0.072\n" ] } ], "source": 
[ "# 比较相似度\n", "import numpy as np\n", "\n", "def cosine_similarity(a, b):\n", " a = np.array(a)\n", " b = np.array(b)\n", " return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n", " \n", "pairs = [\n", " (\"北京\", \"上海\"),\n", " (\"北京\", \"东京\"),\n", " (\"北京\", \"苹果\"),\n", " (\"Python\", \"Java\"),\n", " (\"Python\", \"香蕉\")\n", "]\n", "\n", "for a, b in pairs:\n", " sim = cosine_similarity(model.encode(a), model.encode(b))\n", " print(f\"{a} vs {b}: {sim:.3f}\")" ] }, { "cell_type": "markdown", "id": "2c470295-6f49-4933-ab6b-034df202565d", "metadata": {}, "source": [ "## 三、相似性在向量空间中如何体现\n", "\n", "LLM里的“相似”是指**向量方向相近**\n", "\n", "常用指标:\n", "\n", "- Cosine Similarity(余弦相似度)\n", "\n", "```text\n", "cos(θ) → 1 非常相似\n", "cos(θ) → 0 无关\n", "cos(θ) → -1 相反\n", "```\n" ] }, { "cell_type": "code", "execution_count": 18, "id": "daaea61b-99d7-428a-bfc6-7c8f8da127f6", "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import HTML, display\n", "\n", "display(HTML(\"\"\"\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n", "\"\"\"))" ] }, { "cell_type": "markdown", "id": "18b6ee12-2e3a-477e-ae22-d887ca78f682", "metadata": {}, "source": [ "## 实战: 计算相似度" ] }, { "cell_type": "code", "execution_count": 13, "id": "a4b22a93-33f6-4a3a-b111-863c6109debc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "北京 vs 上海: 0.378\n", "北京 vs 东京: 0.836\n", "北京 vs 苹果: 0.173\n", "Python vs Java: 0.450\n", "Python vs 香蕉: 0.072\n" ] } ], "source": [ "def cosine_similarity(a, b):\n", " a = np.array(a)\n", " b = np.array(b)\n", " return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n", "\n", "pairs = [\n", " (\"北京\", \"上海\"),\n", " (\"北京\", \"东京\"),\n", " (\"北京\", \"苹果\"),\n", " (\"Python\", \"Java\"),\n", " (\"Python\", \"香蕉\")\n", "]\n", "\n", "for a, b in pairs:\n", " sim = cosine_similarity(model.encode(a), model.encode(b))\n", " print(f\"{a} vs {b}: {sim:.3f}\")" ] }, { "cell_type": "markdown", "id": "f49c86e4-84c4-4585-bb82-a2259d13e275", "metadata": {}, "source": [ "## 四、类比在向量空间中如何体现\n", "\n", "Embedding 不仅能表示“相似”,还能隐式表达**关系本身**。\n", "在高维向量空间中,**两个词向量的差值,并不是随机噪声,而往往对应一种稳定的语义方向**。\n", "经典例子是:\n", "\n", "```text\n", "国王 - 男人 + 女人 ≈ 女王\n", "```\n", "\n", "这并不是模型“会算术”,而是因为`国王 - 男人` 抽取出了“**王权但不含性别**”的语义方向, 再加上 `女人`,就把性别维度切换为女性得到的向量自然靠近 `女王`。\n", "因此可以说明,关系本身可以被表示为向量方向,某些语义关系在不同词之间是**可迁移、可复用的**。" ] }, { "cell_type": "code", "execution_count": 20, "id": "ec082c8b-c5ce-4840-89f6-0f334eeb272f", "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import HTML, display\n", "\n", "display(HTML(\"\"\"\n", "
\n", " \n", "
\n", " (来源:3Blue1Brown)\n", "
\n", "
\n", "\"\"\"))" ] }, { "cell_type": "markdown", "id": "25b7f2c5-9cca-4499-ae69-91333abc5a75", "metadata": {}, "source": [ "## 实战: 类比" ] }, { "cell_type": "code", "execution_count": 15, "id": "22c7b0f1-f911-459f-ba8c-8b2cb7911fb8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "运算结果向量指向的词是: queen\n", "相似度得分: 0.9947\n" ] } ], "source": [ "import numpy as np\n", "\n", "# 假设这是从预训练模型中提取的 4 维 Embedding 向量(实际模型通常是 768 或 1536 维)\n", "# 每一维可能隐含代表:[权力, 性别(男为正,女为负), 生物性, ... ]\n", "embeddings = {\n", " \"king\": np.array([0.9, 0.8, 1.0, 0.1]),\n", " \"man\": np.array([0.1, 0.9, 1.0, 0.0]),\n", " \"woman\": np.array([0.1, -0.9, 1.0, 0.0]),\n", " \"queen\": np.array([0.9, -0.8, 1.0, 0.1]),\n", " \"apple\": np.array([0.0, 0.0, 0.0, 0.9]) # 干扰项\n", "}\n", "\n", "def find_closest(target_vec, word_dict):\n", " # 计算余弦相似度,找到与目标向量方向最接近的词\n", " best_word = None\n", " max_sim = -1\n", " for word, vec in word_dict.items():\n", " # 余弦相似度公式\n", " sim = np.dot(target_vec, vec) / (np.linalg.norm(target_vec) * np.linalg.norm(vec))\n", " if sim > max_sim:\n", " max_sim = sim\n", " best_word = word\n", " return best_word, max_sim\n", "\n", "# 1. 执行向量运算:国王 - 男人 + 女人\n", "result_vec = embeddings[\"king\"] - embeddings[\"man\"] + embeddings[\"woman\"]\n", "\n", "# 2. 
Find the word in the space closest to the result vector\n", "closest_word, similarity = find_closest(result_vec, embeddings)\n", "\n", "print(f\"The word the result vector points to: {closest_word}\")\n", "print(f\"Similarity score: {similarity:.4f}\")" ] }, { "cell_type": "markdown", "id": "a59fe91c-51f1-4e77-a3cb-701bb89d1a30", "metadata": {}, "source": [ "\n", "## V. How Association Happens\n", "\n", "When you feed an LLM a word or a sentence, the model does not “think of related things” the way a human does. Instead, the input pushes the model's attention and output probability distribution toward a high-density semantic region.\n", "\n", "For example, given the input:\n", "\n", "```text\n", "医院 (hospital)\n", "```\n", "\n", "the tokens in the vector space that are semantically closest to, and most frequently co-occurring with, “医院” are activated first, so the model is more likely to continue with:\n", "\n", "* 医生 (doctor)\n", "* 病人 (patient)\n", "* 手术 (surgery)\n", "* 治疗 (treatment)\n", "\n", "This looks like “association”, but in essence it is not active thinking; it is sampling the next token within a high-density region that massive amounts of training text have repeatedly reinforced.\n", "\n", "In other words, the model is not “thinking of doctors”;\n", "statistically, **“doctor” is simply the highest-probability continuation at that moment**." ] }, { "cell_type": "markdown", "id": "4da53c06-14d6-41c4-94cf-fd423ebe52d9", "metadata": {}, "source": [ "## Hands-On: Association" ] }, { "cell_type": "code", "execution_count": 16, "id": "e8150343-7842-425f-92f3-7c078c032ebb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Input word: 医院 ---\n", "Activation probability distribution over subsequent tokens:\n", "Token: [医生] -> probability: 0.3717\n", "Token: [手术] -> probability: 0.3290\n", "Token: [病人] -> probability: 0.2972\n", "Token: [咖啡] -> probability: 0.0018\n", "Token: [代码] -> probability: 0.0002\n" ] } ], "source": [ "import numpy as np\n", "\n", "# 1. Define a vocabulary with simplified embedding vectors (coordinates in a high-dimensional space).\n", "# Each dimension might represent [health/medicine, technology, daily life]\n", "vocab = {\n", "    \"医院\": np.array([0.9, 0.1, 0.2]),\n", "    \"医生\": np.array([0.85, 0.2, 0.3]),\n", "    \"病人\": np.array([0.8, 0.0, 0.4]),\n", "    \"手术\": np.array([0.95, 0.3, 0.1]),\n", "    \"代码\": np.array([0.1, 0.9, 0.1]),\n", "    \"咖啡\": np.array([0.2, 0.1, 0.8])\n", "}\n", "\n", "def simulate_association(input_word):\n", "    if input_word not in vocab:\n", "        return \"word not in vocabulary\"\n", "\n", "    input_vec = vocab[input_word]\n", "    probabilities = {}\n", "\n", "    # 2. 
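(added note) The scoring below behaves like a temperature-scaled softmax.\n", "    # Added explanatory aside, not part of the original demo: exp(similarity * 10)\n", "    # equals exp(similarity / tau) with temperature tau = 0.1; the smaller tau is,\n", "    # the more sharply the probability mass concentrates on the nearest tokens.\n", "\n", "    # 2. 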
Compute the input word's similarity to every word in the vocabulary (its activation strength)\n", "    for word, vec in vocab.items():\n", "        if word == input_word: continue\n", "        # Use cosine similarity as a stand-in for “distance”\n", "        similarity = np.dot(input_vec, vec) / (np.linalg.norm(input_vec) * np.linalg.norm(vec))\n", "        # Turn similarity into an activation probability (a simplified softmax)\n", "        probabilities[word] = np.exp(similarity * 10)\n", "\n", "    # 3. Normalize the probabilities\n", "    total_prob = sum(probabilities.values())\n", "    for word in probabilities:\n", "        probabilities[word] /= total_prob\n", "\n", "    # Sort by probability before returning\n", "    sorted_probs = sorted(probabilities.items(), key=lambda x: x[1], reverse=True)\n", "    return sorted_probs\n", "\n", "# Simulate the input “医院”\n", "associations = simulate_association(\"医院\")\n", "\n", "print(\"--- Input word: 医院 ---\")\n", "print(\"Activation probability distribution over subsequent tokens:\")\n", "for word, prob in associations:\n", "    print(f\"Token: [{word}] -> probability: {prob:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "feb162a2-398d-440a-8cf3-6c81989984d7", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.10" } }, "nbformat": 4, "nbformat_minor": 5 }