{ "cells": [ { "cell_type": "markdown", "id": "d95f841a-63c9-41d4-aea1-496b3d2024dd", "metadata": {}, "source": [ "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "Supplementary code for the Build a Large Language Model From Scratch book by Sebastian Raschka
\n", "
Code repository: https://github.com/rasbt/LLMs-from-scratch\n", "
汉化的库: https://github.com/GoatCsu/CN-LLMs-from-scratch.git\n", "
\n", "
\n", "\n", "
\n" ] }, { "cell_type": "markdown", "id": "25aa40e3-5109-433f-9153-f5770531fe94", "metadata": {}, "source": [ "# 第二章:处理文本数据 " ] }, { "cell_type": "markdown", "id": "76d5d2c0-cba8-404e-9bf3-71a218cae3cf", "metadata": {}, "source": [ "本章节需要安装的包" ] }, { "cell_type": "markdown", "id": "778a1d01-7ca9-44a5-b8ce-00624232ac0b", "metadata": {}, "source": [ "pip3 install importlib.metadata" ] }, { "cell_type": "code", "execution_count": 1, "id": "4d1305cf-12d5-46fe-a2c9-36fb71c5b3d3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch version: 2.6.0+cu126\n", "tiktoken version: 0.9.0\n" ] } ], "source": [ "from importlib.metadata import version\n", "\n", "print(\"torch version:\", version(\"torch\"))\n", "print(\"tiktoken version:\", version(\"tiktoken\"))\n", "# 确认库已安装并显示当前安装的版本" ] }, { "cell_type": "markdown", "id": "5a42fbfd-e3c2-43c2-bc12-f5f870a0b10a", "metadata": {}, "source": [ "- 本章节已经为LLM的实现构建了数据库" ] }, { "cell_type": "markdown", "id": "628b2922-594d-4ff9-bd82-04f1ebdf41f5", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "2417139b-2357-44d2-bd67-23f5d7f52ae7", "metadata": {}, "source": [ "## 2.1 理解文字embedding" ] }, { "cell_type": "markdown", "id": "0b6816ae-e927-43a9-b4dd-e47a9b0e1cf6", "metadata": {}, "source": [ "- 无代码" ] }, { "cell_type": "markdown", "id": "4f69dab7-a433-427a-9e5b-b981062d6296", "metadata": {}, "source": [ "- 在众多形式的embedding中,我们只讨论text embedding\n", "- embedding含义丰富,而且是常用词汇,所以以下皆不做翻译,多加体会!" 
] }, { "cell_type": "markdown", "id": "ba08d16f-f237-4166-bf89-0e9fe703e7b4", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "288c4faf-b93a-4616-9276-7a4aa4b5e9ba", "metadata": {}, "source": [ "- LLM从高维空间视角理解文字(i.e., 上千个dimension)\n", "- 虽然人类只能想象低维的视角,我们无法描绘计算机所理解的embedding\n", "- 但是下图我们粗浅地从二维上模拟计算机的视角" ] }, { "cell_type": "markdown", "id": "d6b80160-1f10-4aad-a85e-9c79444de9e6", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "eddbb984-8d23-40c5-bbfa-c3c379e7eec3", "metadata": {}, "source": [ "## 2.2 文本标签化(tokenize)" ] }, { "cell_type": "markdown", "id": "e4876035", "metadata": {}, "source": [ "- 关于embedding token,实在是不好翻译,于是有时候我会选择不去翻译这两个专有名词\n", "- 本节中,我们将tokenize文字信息,即把文本拆解为更小的单元,例如单词和标点符号" ] }, { "cell_type": "markdown", "id": "747d73d0", "metadata": {}, "source": [ "(这也有点抽象,事实上可以粗略理解为将单词拆分为词根、词缀等更小的单元)" ] }, { "cell_type": "markdown", "id": "09872fdb-9d4e-40c4-949d-52a01a43ec4b", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "8cceaa18-833d-46b6-b211-b20c53902805", "metadata": {}, "source": [ "- 载入源文件\n", "- [The Verdict by Edith Wharton](https://en.wikisource.org/wiki/The_Verdict) 一本无版权的短篇小说" ] }, { "cell_type": "code", "execution_count": 2, "id": "27e9b441-cf4e-4a4e-8e3e-44be25354259", "metadata": {}, "outputs": [], "source": [ "import os ##导入os库\n", "import urllib.request ##导入request库\n", "\n", "if not os.path.exists(\"the-verdict.txt\"):##文件不存在时才下载,避免重复下载\n", " url = (\"https://raw.githubusercontent.com/rasbt/\"\n", " \"LLMs-from-scratch/main/ch02/01_main-chapter-code/\"\n", " \"the-verdict.txt\")\n", " file_path = \"the-verdict.txt\"\n", " urllib.request.urlretrieve(url, file_path)##从指定的 URL 下载文件" ] }, { "cell_type": "markdown", "id": "56488f2c-a2b8-49f1-aaeb-461faad08dce", "metadata": {}, "source": [ "- 如果在执行前面的代码单元时遇到 `ssl.SSLCertVerificationError` 错误,可能是由于使用了过时的 Python 版本;\n", "- 你可以在 [GitHub 上查阅更多信息](https://github.com/rasbt/LLMs-from-scratch/pull/403)。" ] }, { "cell_type":
"code", "execution_count": 3, "id": "8a769e87-470a-48b9-8bdb-12841b416198", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total number of character: 20479\n", "I HAD always thought Jack Gisburn rather a cheap genius--though a good fellow enough--so it was no \n" ] } ], "source": [ "with open(\"the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", " raw_text = f.read() ##读入文件按照utf-8\n", " \n", "print(\"Total number of character:\", len(raw_text))##先输出总长度\n", "print(raw_text[:99])##输出前一百个内容" ] }, { "cell_type": "markdown", "id": "9b971a46-ac03-4368-88ae-3f20279e8f4e", "metadata": {}, "source": [ "- 目标是对这段文本进行分词和嵌入处理,以便用于大语言模型(LLM)。\n", "- 我们将基于一些简单的示例文本开发一个简单的分词器,之后可以将其应用于上述文本。\n", "- 以下正则表达式将基于空格进行分割。" ] }, { "cell_type": "code", "execution_count": 4, "id": "737dd5b0-9dbb-4a97-9ae4-3482c8c04be7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Hello,', ' ', 'world.', ' ', 'This,', ' ', 'is', ' ', 'a', ' ', 'test.']\n" ] } ], "source": [ "import re\n", "\n", "text = \"Hello, world. 
This, is a test.\"\n", "result = re.split(r'(\\s)', text)##正则表达式按照空白字符进行分割\n", "\n", "print(result)" ] }, { "cell_type": "markdown", "id": "a8c40c18-a9d5-4703-bf71-8261dbcc5ee3", "metadata": {}, "source": [ "- 优化正则表达式,可以分割更多的符号" ] }, { "cell_type": "code", "execution_count": 5, "id": "ea02489d-01f9-4247-b7dd-a0d63f62ef07", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Hello', ',', '', ' ', 'world', '.', '', ' ', 'This', ',', '', ' ', 'is', ' ', 'a', ' ', 'test', '.', '']\n" ] } ], "source": [ "result = re.split(r'([,.]|\\s)', text)##按照逗号、句号和空白字符分割\n", "print(result)" ] }, { "cell_type": "markdown", "id": "461d0c86-e3af-4f87-8fae-594a9ca9b6ad", "metadata": {}, "source": [ "- 移除空格" ] }, { "cell_type": "code", "execution_count": 6, "id": "4d8a6fb7-2e62-4a12-ad06-ccb04f25fed7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Hello', ',', 'world', '.', 'This', ',', 'is', 'a', 'test', '.']\n" ] } ], "source": [ "##把上述结果去掉空格\n", "result = [item for item in result if item.strip()]\n", "print(result)" ] }, { "cell_type": "markdown", "id": "250e8694-181e-496f-895d-7cb7d92b5562", "metadata": {}, "source": [ "- 我们还需要处理其他标点符号" ] }, { "cell_type": "code", "execution_count": 7, "id": "ed3a9467-04b4-49d9-96c5-b8042bcf8374", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Hello', ',', 'world', '.', 'Is', 'this', '--', 'a', 'test', '?']\n" ] } ], "source": [ "text = \"Hello, world. 
Is this-- a test?\"\n", "\n", "result = re.split(r'([,.:;?_!\"()\\']|--|\\s)', text) ##按照常用的符号分割\n", "result = [item.strip() for item in result if item.strip()]##去掉两端的空白字符,同时去掉空字符串与仅包含空白字符的项\n", "print(result)" ] }, { "cell_type": "markdown", "id": "5bbea70b-c030-45d9-b09d-4318164c0bb4", "metadata": {}, "source": [ "- 万事俱备,我们现在来看一下文字处理后的效果" ] }, { "cell_type": "markdown", "id": "6cbe9330-b587-4262-be9f-497a84ec0e8a", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": 8, "id": "8c567caa-8ff5-49a8-a5cc-d365b0a78a99", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['I', 'HAD', 'always', 'thought', 'Jack', 'Gisburn', 'rather', 'a', 'cheap', 'genius', '--', 'though', 'a', 'good', 'fellow', 'enough', '--', 'so', 'it', 'was', 'no', 'great', 'surprise', 'to', 'me', 'to', 'hear', 'that', ',', 'in']\n" ] } ], "source": [ "preprocessed = re.split(r'([,.:;?_!\"()\\']|--|\\s)', raw_text) ##按照符号继续把原文件给分割了\n", "preprocessed = [item.strip() for item in preprocessed if item.strip()]##去掉两端的空白字符,同时去掉空字符串和仅包含空白字符的项\n", "print(preprocessed[:30])" ] }, { "cell_type": "markdown", "id": "e2a19e1a-5105-4ddb-812a-b7d3117eab95", "metadata": {}, "source": [ "- 查看 token 的总数" ] }, { "cell_type": "code", "execution_count": 9, "id": "35db7b5e-510b-4c45-995f-f5ad64a8e19c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4690\n" ] } ], "source": [ "print(len(preprocessed))" ] }, { "cell_type": "markdown", "id": "0b5ce8fe-3a07-4f2a-90f1-a0321ce3a231", "metadata": {}, "source": [ "## 2.3 给token编号" ] }, { "cell_type": "markdown", "id": "a5204973-f414-4c0d-87b0-cfec1f06e6ff", "metadata": {}, "source": [ "- 接下来我们把 token 转换为 token ID,之后才能通过 embedding 层进行处理" ] }, { "cell_type": "markdown", "id": "177b041d-f739-43b8-bd81-0443ae3a7f8d", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "b5973794-7002-4202-8b12-0900cd779720", "metadata": {}, "source": [ "- 我们要创建一个表格,把所有的 token 映射到不同的编号上" ] }, { "cell_type":
"code", "execution_count": 10, "id": "7fdf0533-5ab6-42a5-83fa-a3b045de6396", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1130\n" ] } ], "source": [ "all_words = sorted(set(preprocessed))#从去掉重复的字符\n", "vocab_size = len(all_words)#计总的单词书\n", "\n", "print(vocab_size)" ] }, { "cell_type": "code", "execution_count": 11, "id": "77d00d96-881f-4691-bb03-84fec2a75a26", "metadata": {}, "outputs": [], "source": [ "vocab = {token:integer for integer,token in enumerate(all_words)}##先把word进行编号,再按照单词或者标点为索引(有HashList那味道了)" ] }, { "cell_type": "markdown", "id": "75bd1f81-3a8f-4dd9-9dd6-e75f32dacbe3", "metadata": {}, "source": [ "- 看一下前50个是怎样的" ] }, { "cell_type": "code", "execution_count": 12, "id": "e1c5de4a-aa4e-4aec-b532-10bb364039d6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('!', 0)\n", "('\"', 1)\n", "(\"'\", 2)\n", "('(', 3)\n", "(')', 4)\n", "(',', 5)\n", "('--', 6)\n", "('.', 7)\n", "(':', 8)\n", "(';', 9)\n", "('?', 10)\n", "('A', 11)\n", "('Ah', 12)\n", "('Among', 13)\n", "('And', 14)\n", "('Are', 15)\n", "('Arrt', 16)\n", "('As', 17)\n", "('At', 18)\n", "('Be', 19)\n", "('Begin', 20)\n", "('Burlington', 21)\n", "('But', 22)\n", "('By', 23)\n", "('Carlo', 24)\n", "('Chicago', 25)\n", "('Claude', 26)\n", "('Come', 27)\n", "('Croft', 28)\n", "('Destroyed', 29)\n", "('Devonshire', 30)\n", "('Don', 31)\n", "('Dubarry', 32)\n", "('Emperors', 33)\n", "('Florence', 34)\n", "('For', 35)\n", "('Gallery', 36)\n", "('Gideon', 37)\n", "('Gisburn', 38)\n", "('Gisburns', 39)\n", "('Grafton', 40)\n", "('Greek', 41)\n", "('Grindle', 42)\n", "('Grindles', 43)\n", "('HAD', 44)\n", "('Had', 45)\n", "('Hang', 46)\n", "('Has', 47)\n", "('He', 48)\n", "('Her', 49)\n", "('Hermia', 50)\n" ] } ], "source": [ "for i, item in enumerate(vocab.items()):\n", " print(item)\n", " if i >= 50:\n", " break ##遍历到前五十个" ] }, { "cell_type": "markdown", "id": "3b1dc314-351b-476a-9459-0ec9ddc29b19", "metadata": {}, "source": [ 
"- 接下来,我们将通过一个短文本来感受下,处理后的效果是怎样的" ] }, { "cell_type": "markdown", "id": "67407a9f-0202-4e7c-9ed7-1b3154191ebc", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "4e569647-2589-4c9d-9a5c-aef1c88a0a9a", "metadata": {}, "source": [ "- 现在将所有内容整合到一个分词器类中。" ] }, { "cell_type": "code", "execution_count": 13, "id": "f531bf46-7c25-4ef8-bff8-0d27518676d5", "metadata": {}, "outputs": [], "source": [ "class SimpleTokenizerV1:#一个实例的名字创立\n", " def __init__(self, vocab): ## 初始化一个字符串\n", " self.str_to_int = vocab #单词到整数的映射\n", " self.int_to_str = {i:s for s,i in vocab.items()} \n", " #方便解码,进行整数到词汇的反向映射\n", " \n", " def encode(self, text):\n", " preprocessed = re.split(r'([,.:;?_!\"()\\']|--|\\s)', text)##正则化分词标点符号\n", " \n", " preprocessed = [\n", " item.strip() for item in preprocessed if item.strip()## 去掉两端空格与全部的空句\n", " ]\n", " ids = [self.str_to_int[s] for s in preprocessed]##整理完的额字符串列表对应到id,从字典出来\n", " return ids\n", " \n", " def decode(self, ids):\n", " text = \" \".join([self.int_to_str[i] for i in ids]) #映射整数id到字符串。join是用前面那个(“ ”)联结成一个完整的字符串\n", " # Replace spaces before the specified punctuations\n", " text = re.sub(r'\\s+([,.?!\"()\\'])', r'\\1', text) #使用正则表达式,去除标点符号前的多余空格\n", " # \\s+匹配一个或者多个空白 \\1 替换到匹配\n", " return text" ] }, { "cell_type": "markdown", "id": "dee7a1e5-b54f-4ca1-87ef-3d663c4ee1e7", "metadata": {}, "source": [ "- `encode` 函数将文本转换为标记 ID。\n", "- `decode` 函数将标记 ID 转换回文本。" ] }, { "cell_type": "markdown", "id": "cc21d347-ec03-4823-b3d4-9d686e495617", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "c2950a94-6b0d-474e-8ed0-66d0c3c1a95c", "metadata": {}, "source": [ "- 我们可以使用分词器将文本编码(即分词)为数字。\n", "- 然后,这些整数可以作为大语言模型(LLM)的输入,进行嵌入。" ] }, { "cell_type": "code", "execution_count": 14, "id": "647364ec-7995-4654-9b4a-7607ccf5f1e4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1, 56, 2, 850, 988, 602, 533, 746, 5, 1126, 596, 5, 1, 67, 7, 38, 851, 1108, 754, 793, 7]\n" ] } ], "source": 
[ "tokenizer = SimpleTokenizerV1(vocab) #用vocab创造一个实例\n", "\n", "text = \"\"\"\"It's the last he painted, you know,\" \n", " Mrs. Gisburn said with pardonable pride.\"\"\"\n", "ids = tokenizer.encode(text) #按照这个例子里的encode函数处理text\n", "print(ids)" ] }, { "cell_type": "markdown", "id": "3201706e-a487-4b60-b99d-5765865f29a0", "metadata": {}, "source": [ "- 把数字重新映射回文字" ] }, { "cell_type": "code", "execution_count": 15, "id": "01d8c8fb-432d-4a49-b332-99f23b233746", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'\" It\\' s the last he painted, you know,\" Mrs. Gisburn said with pardonable pride.'" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(ids)#按照这个例子里的decode函数处理text" ] }, { "cell_type": "code", "execution_count": 16, "id": "54f6aa8b-9827-412e-9035-e827296ab0fe", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'\" It\\' s the last he painted, you know,\" Mrs. Gisburn said with pardonable pride.'" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(tokenizer.encode(text))#按照这个例子里的decode函数处理(#按照这个例子里的encode函数处理text)" ] }, { "cell_type": "markdown", "id": "4b821ef8-4d53-43b6-a2b2-aef808c343c7", "metadata": {}, "source": [ "## 2.4 添加特殊标记" ] }, { "cell_type": "markdown", "id": "863d6d15-a3e2-44e0-b384-bb37f17cf443", "metadata": {}, "source": [ "- 文本的结尾需要特别的符号来表明" ] }, { "cell_type": "markdown", "id": "aa7fc96c-e1fd-44fb-b7f5-229d7c7922a4", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "9d709d57-2486-4152-b7f9-d3e4bd8634cd", "metadata": {}, "source": [ "- 一些分词器使用特殊标记来帮助大语言模型(LLM)获取额外的上下文信息。\n", "- 其中一些特殊标记包括:\n", " - `[BOS]`(序列开始)表示文本的开始。\n", " - `[EOS]`(序列结束)表示文本的结束(通常用于连接多个不相关的文本,例如两个不同的维基百科文章或两本不同的书籍等)。\n", " - `[PAD]`(填充)如果我们使用大于1的批次大小训练LLM(我们可能会包含不同长度的多篇文本),使用填充标记将较短的文本填充至最长的长度,以确保所有文本具有相同的长度。\n", "- `[UNK]` 用于表示词汇表中没有的词。\n", "\n", "- 请注意,GPT-2不需要上述提到的这些标记,它只使用 `<|endoftext|>` 标记。\n", "- `<|endoftext|>` 
类似于上述提到的 `[EOS]` 标记。\n", "- GPT 还使用 `<|endoftext|>` 进行填充(因为我们在批量输入训练时通常使用掩码,所以无论这些填充标记是什么,都不会影响模型的训练,因为填充标记不会被关注)。\n", "- GPT-2 不使用 `<UNK>` 标记来表示词汇表外的词;相反,GPT-2 使用字节对编码(BPE)分词器,将单词分解为子词单元,后续将进一步讨论这一点。\n", "- 我们在两个独立文本之间使用 `<|endoftext|>` 标记:" ] }, { "cell_type": "markdown", "id": "52442951-752c-4855-9752-b121a17fef55", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "c661a397-da06-4a86-ac27-072dbe7cb172", "metadata": {}, "source": [ "- 看一下接下来会发生什么" ] }, { "cell_type": "code", "execution_count": 17, "id": "d5767eff-440c-4de1-9289-f789349d6b85", "metadata": {}, "outputs": [ { "ename": "KeyError", "evalue": "'Hello'", "output_type": "error", "traceback": [ "\u001b[31m---------------------------------------------------------------------------\u001b[39m", "\u001b[31mKeyError\u001b[39m Traceback (most recent call last)", "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[17]\u001b[39m\u001b[32m, line 5\u001b[39m\n\u001b[32m 1\u001b[39m tokenizer = SimpleTokenizerV1(vocab) \u001b[38;5;66;03m##用vocab创造一个实例\u001b[39;00m\n\u001b[32m 3\u001b[39m text = \u001b[33m\"\u001b[39m\u001b[33mHello, do you like tea. 
Is this-- a test?\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m----> \u001b[39m\u001b[32m5\u001b[39m \u001b[43mtokenizer\u001b[49m\u001b[43m.\u001b[49m\u001b[43mencode\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtext\u001b[49m\u001b[43m)\u001b[49m\n", "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[13]\u001b[39m\u001b[32m, line 13\u001b[39m, in \u001b[36mSimpleTokenizerV1.encode\u001b[39m\u001b[34m(self, text)\u001b[39m\n\u001b[32m 8\u001b[39m preprocessed = re.split(\u001b[33mr\u001b[39m\u001b[33m'\u001b[39m\u001b[33m([,.:;?_!\u001b[39m\u001b[33m\"\u001b[39m\u001b[33m()\u001b[39m\u001b[38;5;130;01m\\'\u001b[39;00m\u001b[33m]|--|\u001b[39m\u001b[33m\\\u001b[39m\u001b[33ms)\u001b[39m\u001b[33m'\u001b[39m, text)\u001b[38;5;66;03m##正则化分词标点符号\u001b[39;00m\n\u001b[32m 10\u001b[39m preprocessed = [\n\u001b[32m 11\u001b[39m item.strip() \u001b[38;5;28;01mfor\u001b[39;00m item \u001b[38;5;129;01min\u001b[39;00m preprocessed \u001b[38;5;28;01mif\u001b[39;00m item.strip()\u001b[38;5;66;03m## 去掉两端空格与全部的空句\u001b[39;00m\n\u001b[32m 12\u001b[39m ]\n\u001b[32m---> \u001b[39m\u001b[32m13\u001b[39m ids = [\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m.\u001b[49m\u001b[43mstr_to_int\u001b[49m\u001b[43m[\u001b[49m\u001b[43ms\u001b[49m\u001b[43m]\u001b[49m \u001b[38;5;28;01mfor\u001b[39;00m s \u001b[38;5;129;01min\u001b[39;00m preprocessed]\u001b[38;5;66;03m##整理完的额字符串列表对应到id,从字典出来\u001b[39;00m\n\u001b[32m 14\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m ids\n", "\u001b[31mKeyError\u001b[39m: 'Hello'" ] } ], "source": [ "tokenizer = SimpleTokenizerV1(vocab) ##用vocab创造一个实例\n", "\n", "text = \"Hello, do you like tea. 
Is this-- a test?\"\n", "\n", "tokenizer.encode(text)" ] }, { "cell_type": "markdown", "id": "dc53ee0c-fe2b-4cd8-a946-5471f7651acf", "metadata": {}, "source": [ "- 上述操作会产生一个错误,因为单词“Hello”不在词汇表中。\n", "- 为了处理这种情况,我们可以向词汇表中添加类似 `\"<|unk|>\"` 的特殊标记,用于表示未知词汇。\n", "- 因为我们已经在扩展词汇表,那么我们可以再添加一个标记 `\"<|endoftext|>\"`,该标记在GPT-2训练中用于表示文本的结束(它也用于连接的文本之间,例如当我们的训练数据集包含多篇文章、书籍等时)。" ] }, { "cell_type": "code", "execution_count": 18, "id": "ce9df29c-6c5b-43f1-8c1a-c7f7b79db78f", "metadata": {}, "outputs": [], "source": [ "all_tokens = sorted(list(set(preprocessed)))#set 去重,list 重新变为列表,然后排序\n", "all_tokens.extend([\"<|endoftext|>\", \"<|unk|>\"])#加上两个特殊标记\n", "\n", "vocab = {token:integer for integer,token in enumerate(all_tokens)}\n", "#遍历 enumerate(all_tokens) 中的每个元组 (integer, token),以 token 作为键,integer 作为值创建字典条目。" ] }, { "cell_type": "code", "execution_count": 19, "id": "57c3143b-e860-4d3b-a22a-de22b547a6a9", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1132" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(vocab.items())" ] }, { "cell_type": "code", "execution_count": 20, "id": "50e51bb1-ae05-4aa8-a9ff-455b65ed1959", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('younger', 1127)\n", "('your', 1128)\n", "('yourself', 1129)\n", "('<|endoftext|>', 1130)\n", "('<|unk|>', 1131)\n" ] } ], "source": [ "for i, item in enumerate(list(vocab.items())[-5:]):#输出后五个内容与其标号\n", " print(item)" ] }, { "cell_type": "markdown", "id": "a1daa2b0-6e75-412b-ab53-1f6fb7b4d453", "metadata": {}, "source": [ "- 因此增加 `<|unk|>` 这样的特殊标记不失为一种好的选择" ] }, { "cell_type": "code", "execution_count": 21, "id": "948861c5-3f30-4712-a234-725f20d26f68", "metadata": {}, "outputs": [], "source": [ "class SimpleTokenizerV2:##版本2.0,启动!\n", " def __init__(self, vocab):\n", " self.str_to_int = vocab\n", " self.int_to_str = { i:s for s,i in vocab.items()}#反向映射:s 为单词,i 为其编号\n", " \n", " def encode(self, text):\n", " preprocessed = 
re.split(r'([,.:;?_!\"()\\']|--|\\s)', text)#正则化按照标点分类\n", " preprocessed = [item.strip() for item in preprocessed if item.strip()]#去掉两头与所有空余句\n", " preprocessed = [\n", " item if item in self.str_to_int \n", " else \"<|unk|>\" for item in preprocessed\n", " #遍历 preprocessed 中的每个 item,如果 item 存在于 self.str_to_int(即词汇表)中,就保留 item\n", " #如果不存在(即该单词或符号未定义在词汇表中),就替换为特殊标记 <|unk|>。\n", " #拓展:推导式(如列表推导式)是一种紧凑的语法,专门用于生成新列表(或其他容器)\n", " #与普通 for 循环相比,它更加简洁和高效,但逻辑复杂时可能会降低可读性。\n", " ]\n", "\n", " ids = [self.str_to_int[s] for s in preprocessed]#单词或标点映射为整数列表\n", " return ids\n", " \n", " def decode(self, ids):\n", " text = \" \".join([self.int_to_str[i] for i in ids])\n", " # Replace spaces before the specified punctuations\n", " text = re.sub(r'\\s+([,.:;?!\"()\\'])', r'\\1', text)\n", " return text" ] }, { "cell_type": "markdown", "id": "e9b2e942", "metadata": {}, "source": [ "- 用优化后的分词器对文本进行操作" ] }, { "cell_type": "code", "execution_count": 22, "id": "4133c502-18ac-4412-9f43-01caf4efa3dc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello, do you like tea? <|endoftext|> In the sunlit terraces of the palace.\n" ] } ], "source": [ "tokenizer = SimpleTokenizerV2(vocab)\n", "\n", "text1 = \"Hello, do you like tea?\"\n", "text2 = \"In the sunlit terraces of the palace.\"\n", "\n", "text = \" <|endoftext|> \".join((text1, text2))#用句子分隔符链接两个句子\n", "\n", "print(text) #跟第一个一样,但不会报错了" ] }, { "cell_type": "code", "execution_count": 23, "id": "7ed395fe-dc1b-4ed2-b85b-457cc35aab60", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1131, 5, 355, 1126, 628, 975, 10, 1130, 55, 988, 956, 984, 722, 988, 1131, 7]" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.encode(text)" ] }, { "cell_type": "code", "execution_count": 24, "id": "059367f9-7a60-4c0d-8a00-7c4c766d0ebc", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'<|unk|>, do you like tea? 
<|endoftext|> In the sunlit terraces of the <|unk|>.'" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(tokenizer.encode(text))" ] }, { "cell_type": "markdown", "id": "5c4ba34b-170f-4e71-939b-77aabb776f14", "metadata": {}, "source": [ "## 2.5 字节对编码" ] }, { "cell_type": "markdown", "id": "2309494c-79cf-4a2d-bc28-a94d602f050e", "metadata": {}, "source": [ "- GPT-2 使用字节对编码(BPE)作为其分词器。\n", "- 这种方式允许模型将不在预定义词汇表中的单词分解为更小的子词单元,甚至是单个字符,从而使其能够处理词汇表外的词汇。\n", "- 例如,如果 GPT-2 的词汇表中没有“unfamiliarword”这个单词,它可能会将其分词为 [\"unfam\", \"iliar\", \"word\"],或者根据训练好的 BPE 合并规则进行其他子词分解。\n", "- 原始的 BPE 分词器可以在这里找到:[https://github.com/openai/gpt-2/blob/master/src/encoder.py](https://github.com/openai/gpt-2/blob/master/src/encoder.py)\n", "- 在本章中,我们使用了 OpenAI 开源的 [tiktoken](https://github.com/openai/tiktoken) 库中的 BPE 分词器,该库在 Rust 中实现了核心算法,以提高计算性能。\n", "- 在 [./bytepair_encoder](../02_bonus_bytepair-encoder) 中创建了一个笔记本,对比了这两种实现的效果(在样本文本上,tiktoken 的速度大约快了 5 倍)。" ] }, { "cell_type": "code", "execution_count": 25, "id": "ede1d41f-934b-4bf4-8184-54394a257a94", "metadata": {}, "outputs": [], "source": [ "# pip install tiktoken" ] }, { "cell_type": "code", "execution_count": 26, "id": "48967a77-7d17-42bf-9e92-fc619d63a59e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tiktoken version: 0.9.0\n" ] } ], "source": [ "import importlib\n", "import tiktoken\n", "\n", "print(\"tiktoken version:\", importlib.metadata.version(\"tiktoken\"))#验证下载并输出版本信息" ] }, { "cell_type": "code", "execution_count": 27, "id": "6ad3312f-a5f7-4efc-9d7d-8ea09d7b5128", "metadata": {}, "outputs": [], "source": [ "tokenizer = tiktoken.get_encoding(\"gpt2\")#初始化GPT2!" 
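, "\n", "\n", "# 简单自检(示例,假设上面的 tokenizer 已初始化):\n", "# BPE 的 encode/decode 往返应能还原原始文本,例如:\n", "# assert tokenizer.decode(tokenizer.encode(\"hello world\")) == \"hello world\""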
] }, { "cell_type": "code", "execution_count": 28, "id": "5ff2cd85-7cfb-4325-b390-219938589428", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[15496, 11, 466, 345, 588, 8887, 30, 220, 50256, 554, 262, 4252, 18250, 8812, 2114, 1659, 617, 34680, 27271, 13]\n" ] } ], "source": [ "text = (\n", " \"Hello, do you like tea? <|endoftext|> In the sunlit terraces\"\n", " \"of someunknownPlace.\"\n", ")\n", "\n", "integers = tokenizer.encode(text, allowed_special={\"<|endoftext|>\"})#输出分词的id,可以允许endoftext\n", "\n", "print(integers)" ] }, { "cell_type": "code", "execution_count": 29, "id": "d26a48bb-f82e-41a8-a955-a1c9cf9d50ab", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello, do you like tea? <|endoftext|> In the sunlit terracesof someunknownPlace.\n" ] } ], "source": [ "strings = tokenizer.decode(integers)\n", "#按照数字解码回去\n", "\n", "print(strings)" ] }, { "cell_type": "markdown", "id": "e8c2e7b4-6a22-42aa-8e4d-901f06378d4a", "metadata": {}, "source": [ "- BPE tokenizers将未知词汇分解为子词和单个字符。" ] }, { "cell_type": "markdown", "id": "c082d41f-33d7-4827-97d8-993d5a84bb3c", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "abbd7c0d-70f8-4386-a114-907e96c950b0", "metadata": {}, "source": [ "## 2.6 利用滑动窗口进行数据采样" ] }, { "cell_type": "markdown", "id": "509d9826-6384-462e-aa8a-a7c73cd6aad0", "metadata": {}, "source": [ "- 现在我们训练的大语言模型(LLMs)时是一次生成一个单词,因此希望根据训练数据的要求进行准备,使得序列中的下一个单词作为预测目标。" ] }, { "cell_type": "markdown", "id": "39fb44f4-0c43-4a6a-9c2f-9cf31452354c", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": 30, "id": "848d5ade-fd1f-46c3-9e31-1426e315c71b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5145\n" ] } ], "source": [ "with open(\"the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", " raw_text = f.read()\n", "\n", "enc_text = tokenizer.encode(raw_text)#读入了一个text并编码到enc_text里面\n", "print(len(enc_text))" ] 
}, { "cell_type": "markdown", "id": "cebd0657-5543-43ca-8011-2ae6bd0a5810", "metadata": {}, "source": [ "- 对于每个文本块,我们需要输入和目标。\n", "- 由于我们希望模型预测下一个单词,因此我们要生成目标是将输入右移一个位置后的单词。" ] }, { "cell_type": "code", "execution_count": 31, "id": "e84424a7-646d-45b6-99e3-80d15fb761f2", "metadata": {}, "outputs": [], "source": [ "enc_sample = enc_text[50:]#从第五十一个开始向后" ] }, { "cell_type": "code", "execution_count": 32, "id": "dfbff852-a92f-48c8-a46d-143a0f109f40", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x: [290, 4920, 2241, 287]\n", "y: [4920, 2241, 287, 257]\n" ] } ], "source": [ "context_size = 4 #sliding windows4\n", "\n", "x = enc_sample[:context_size]#开头四个\n", "y = enc_sample[1:context_size+1]#第二个开始的四个\n", "\n", "print(f\"x: {x}\")\n", "print(f\"y: {y}\")" ] }, { "cell_type": "markdown", "id": "815014ef-62f7-4476-a6ad-66e20e42b7c3", "metadata": {}, "source": [ "- 就像预言家一个晚上只能预言一个玩家,我们的模型一次也只能预测一个单词" ] }, { "cell_type": "code", "execution_count": 33, "id": "d97b031e-ed55-409d-95f2-aeb38c6fe366", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[290] ----> 4920\n", "[290, 4920] ----> 2241\n", "[290, 4920, 2241] ----> 287\n", "[290, 4920, 2241, 287] ----> 257\n" ] } ], "source": [ "for i in range(1, context_size+1):\n", " #文本成输入 context,先输出有什么,然后输出下一个是什么编号\n", " context = enc_sample[:i]\n", " desired = enc_sample[i]\n", "\n", " print(context, \"---->\", desired)" ] }, { "cell_type": "code", "execution_count": 34, "id": "f57bd746-dcbf-4433-8e24-ee213a8c34a1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " and ----> established\n", " and established ----> himself\n", " and established himself ----> in\n", " and established himself in ----> a\n" ] } ], "source": [ "for i in range(1, context_size+1):\n", " #文本成输入 context,先输出有什么,然后输出下一个是什么单词\n", " context = enc_sample[:i]\n", " desired = enc_sample[i]\n", "\n", " print(tokenizer.decode(context), \"---->\", 
tokenizer.decode([desired]))" ] }, { "cell_type": "markdown", "id": "210d2dd9-fc20-4927-8d3d-1466cf41aae1", "metadata": {}, "source": [ "- 我们将在后续章节中处理下一个单词预测,届时会介绍注意力机制。\n", "- 目前,我们实现一个简单的数据加载器,它遍历输入数据集并返回右移一个位置后的输入和目标。" ] }, { "cell_type": "markdown", "id": "a1a1b47a-f646-49d1-bc70-fddf2c840796", "metadata": {}, "source": [ "- 安装并导入 PyTorch(安装提示请参见附录A)。" ] }, { "cell_type": "code", "execution_count": 35, "id": "e1770134-e7f3-4725-a679-e04c3be48cac", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "PyTorch version: 2.6.0+cu126\n" ] } ], "source": [ "import torch\n", "print(\"PyTorch version:\", torch.__version__)" ] }, { "cell_type": "markdown", "id": "0c9a3d50-885b-49bc-b791-9f5cc8bc7b7c", "metadata": {}, "source": [ "- 用滑动窗口法运行,窗口位置每次加一\n", "\n", "" ] }, { "cell_type": "markdown", "id": "92ac652d-7b38-4843-9fbd-494cdc8ec12c", "metadata": {}, "source": [ "- 创建数据集和数据加载器,从输入文本数据集中提取文本块。" ] }, { "cell_type": "code", "execution_count": 36, "id": "74b41073-4c9f-46e2-a1bd-d38e4122b375", "metadata": {}, "outputs": [], "source": [ "from torch.utils.data import Dataset, DataLoader\n", "\n", "\n", "class GPTDatasetV1(Dataset):\n", " #为 GPT 定义数据集类\n", " def __init__(self, txt, tokenizer, max_length, stride):\n", " self.input_ids = []\n", " self.target_ids = []\n", "\n", " # Tokenize the entire text\n", " token_ids = tokenizer.encode(txt, allowed_special={\"<|endoftext|>\"})#把整段文本编码为 token id\n", "\n", " # Use a sliding window to chunk the book into overlapping sequences of max_length\n", " for i in range(0, len(token_ids) - max_length, stride):\n", " input_chunk = token_ids[i:i + max_length]\n", " target_chunk = token_ids[i + 1: i + max_length + 1]\n", " self.input_ids.append(torch.tensor(input_chunk))\n", " self.target_ids.append(torch.tensor(target_chunk))\n", "\n", " def __len__(self):\n", " return len(self.input_ids)\n", "\n", " def __getitem__(self, idx):\n", " return self.input_ids[idx], self.target_ids[idx]" ] }, { "cell_type": "code", 
"execution_count": 37, "id": "5eb30ebe-97b3-43c5-9ff1-a97d621b3c4e", "metadata": {}, "outputs": [], "source": [ "def create_dataloader_v1(txt, batch_size=4, max_length=256, \n", " stride=128, shuffle=True, drop_last=True,\n", " num_workers=0):\n", "\n", " # Initialize the tokenizer\n", " tokenizer = tiktoken.get_encoding(\"gpt2\")\n", "\n", " # Create dataset\n", " dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)\n", "\n", " # Create dataloader\n", " dataloader = DataLoader(\n", " dataset,\n", " batch_size=batch_size,\n", " shuffle=shuffle,\n", " drop_last=drop_last,\n", " num_workers=num_workers\n", " )\n", "\n", " return dataloader" ] }, { "cell_type": "markdown", "id": "42dd68ef-59f7-45ff-ba44-e311c899ddcd", "metadata": {}, "source": [ "- 让我们使用批次大小为1、上下文大小为4的设置,测试数据加载器在大语言模型(LLM)中的表现。" ] }, { "cell_type": "code", "execution_count": 38, "id": "df31d96c-6bfd-4564-a956-6192242d7579", "metadata": {}, "outputs": [], "source": [ "with open(\"the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", " raw_text = f.read()" ] }, { "cell_type": "code", "execution_count": 39, "id": "9226d00c-ad9a-4949-a6e4-9afccfc7214f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[tensor([[ 40, 367, 2885, 1464]]), tensor([[ 367, 2885, 1464, 1807]])]\n" ] } ], "source": [ "dataloader = create_dataloader_v1(#raw_text 中创建一个数据加载器 但是所批次\n", " raw_text, batch_size=1, max_length=4, stride=1, shuffle=False\n", ")\n", "\n", "data_iter = iter(dataloader)#数据加载器 dataloader 转换为一个迭代器\n", "first_batch = next(data_iter)\n", "print(first_batch)" ] }, { "cell_type": "code", "execution_count": 40, "id": "10deb4bc-4de1-4d20-921e-4b1c7a0e1a6d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[tensor([[ 367, 2885, 1464, 1807]]), tensor([[2885, 1464, 1807, 3619]])]\n" ] } ], "source": [ "second_batch = next(data_iter)\n", "print(second_batch)" ] }, { "cell_type": "markdown", "id": "b006212f-de45-468d-bdee-5806216d1679", 
"metadata": {}, "source": [ "- 下面是一个示例,步幅等于上下文长度(此处为4):" ] }, { "cell_type": "markdown", "id": "9cb467e0-bdcd-4dda-b9b0-a738c5d33ac3", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "b1ae6d45-f26e-4b83-9c7b-cff55ffa7d16", "metadata": {}, "source": [ "- 我们还可以批量输出。\n", "- 因为过多的重叠可能导致过拟合,这里增加了步幅。" ] }, { "cell_type": "code", "execution_count": 41, "id": "1916e7a6-f03d-4f09-91a6-d0bdbac5a58c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Inputs:\n", " tensor([[ 40, 367, 2885, 1464],\n", " [ 1807, 3619, 402, 271],\n", " [10899, 2138, 257, 7026],\n", " [15632, 438, 2016, 257],\n", " [ 922, 5891, 1576, 438],\n", " [ 568, 340, 373, 645],\n", " [ 1049, 5975, 284, 502],\n", " [ 284, 3285, 326, 11]])\n", "\n", "Targets:\n", " tensor([[ 367, 2885, 1464, 1807],\n", " [ 3619, 402, 271, 10899],\n", " [ 2138, 257, 7026, 15632],\n", " [ 438, 2016, 257, 922],\n", " [ 5891, 1576, 438, 568],\n", " [ 340, 373, 645, 1049],\n", " [ 5975, 284, 502, 284],\n", " [ 3285, 326, 11, 287]])\n" ] } ], "source": [ "dataloader = create_dataloader_v1(raw_text, batch_size=8, max_length=4, stride=4, shuffle=False)\n", "\n", "data_iter = iter(dataloader)\n", "inputs, targets = next(data_iter)\n", "print(\"Inputs:\\n\", inputs)\n", "print(\"\\nTargets:\\n\", targets)" ] }, { "cell_type": "markdown", "id": "2cd2fcda-2fda-4aa8-8bc8-de1e496f9db1", "metadata": {}, "source": [ "## 2.7 创建标记嵌入(Creating token embeddings)" ] }, { "cell_type": "markdown", "id": "1a301068-6ab2-44ff-a915-1ba11688274f", "metadata": {}, "source": [ "- 数据库快被准备好用于大语言模型(LLM)。\n", "- 最后,让我们使用嵌入层将标记转换为连续的向量表示。\n", "- 通常,这些嵌入层是大语言模型的一部分,并在模型训练过程中进行更新(训练)。" ] }, { "cell_type": "markdown", "id": "e85089aa-8671-4e5f-a2b3-ef252004ee4c", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "44e014ca-1fc5-4b90-b6fa-c2097bb92c0b", "metadata": {}, "source": [ "- 假设我们有以下四个输入示例,经过分词后对应的输入 ID 为 2、3、5 和 1:" ] }, { "cell_type": "code", "execution_count": 42, "id": 
"15a6304c-9474-4470-b85d-3991a49fa653", "metadata": {}, "outputs": [], "source": [ "input_ids = torch.tensor([2, 3, 5, 1])# 四个输入样本对应的标记 ID:2、3、5、1" ] }, { "cell_type": "markdown", "id": "14da6344-2c71-4837-858d-dd120005ba05", "metadata": {}, "source": [ "- 为了简化问题,假设我们只有一个包含 6 个词的小型词汇表,并且我们希望创建大小为 3 的嵌入。" ] }, { "cell_type": "code", "execution_count": 43, "id": "93cb2cee-9aa6-4bb8-8977-c65661d16eda", "metadata": {}, "outputs": [], "source": [ "vocab_size = 6# 嵌入层需要支持的唯一标记的总数\n", "output_dim = 3# 嵌入向量的维度\n", "\n", "torch.manual_seed(123)# 设置随机数生成器的种子,确保结果可复现\n", "embedding_layer = torch.nn.Embedding(vocab_size, output_dim)# 权重矩阵的每一行表示一个标记的嵌入向量" ] }, { "cell_type": "markdown", "id": "4ff241f6-78eb-4e4a-a55f-5b2b6196d5b0", "metadata": {}, "source": [ "- 得到的权重是一个 6×3 的矩阵:" ] }, { "cell_type": "code", "execution_count": 44, "id": "a686eb61-e737-4351-8f1c-222913d47468", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parameter containing:\n", "tensor([[ 0.3374, -0.1778, -0.1690],\n", " [ 0.9178, 1.5810, 1.3010],\n", " [ 1.2753, -0.2010, -0.1606],\n", " [-0.4015, 0.9666, -1.1481],\n", " [-1.1589, 0.3255, -0.6315],\n", " [-2.8400, -0.7849, -1.4096]], requires_grad=True)\n" ] } ], "source": [ "print(embedding_layer.weight)" ] }, { "cell_type": "markdown", "id": "26fcf4f5-0801-4eb4-bb90-acce87935ac7", "metadata": {}, "source": [ "- 对于熟悉独热编码(one-hot encoding)的人来说,上述嵌入层方法本质上只是实现 one-hot 编码后接矩阵乘法的更高效方式,具体实现可以参考[./embedding_vs_matmul](../03_bonus_embedding-vs-matmul)中的补充代码。\n", "- 正因为嵌入层只是 one-hot 编码加矩阵乘法的高效实现,它可以被视为一个神经网络层,并通过反向传播进行优化。" ] }, { "cell_type": "markdown", "id": "4b0d58c3-83c0-4205-aca2-9c48b19fd4a7", "metadata": {}, "source": [ "- 通过下列操作,我们可以把 ID 为 3 的标记映射为一个三维向量:" ] }, { "cell_type": "code", "execution_count": 45, "id": "e43600ba-f287-4746-8ddf-d0f71a9023ca", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[-0.4015, 0.9666, -1.1481]], grad_fn=<EmbeddingBackward0>)\n" ] } ], "source": [ "print(embedding_layer(torch.tensor([3])))" ] 
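}, { "cell_type": "markdown", "id": "b7e1f2a0-1111-4c2d-9e3f-onehotdemo01", "metadata": {}, "source": [ "- 作为一个简短的验证(示意代码,沿用前面定义的 `embedding_layer` 与 `vocab_size`),我们可以按上面的说法,对 ID 3 做 one-hot 编码再与权重矩阵相乘,得到同样的结果:" ] }, { "cell_type": "code", "execution_count": null, "id": "b7e1f2a0-2222-4c2d-9e3f-onehotdemo02", "metadata": {}, "outputs": [], "source": [ "# 示意:one-hot 编码后做矩阵乘法,等价于嵌入层的查找\n", "onehot = torch.nn.functional.one_hot(torch.tensor([3]), num_classes=vocab_size).float()\n", "print(onehot @ embedding_layer.weight)# 与 embedding_layer(torch.tensor([3])) 的结果一致" ]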
}, { "cell_type": "markdown", "id": "a7bbf625-4f36-491d-87b4-3969efb784b0", "metadata": {}, "source": [ "- 请注意,上述内容是 `embedding_layer` 权重矩阵中的第四行。\n", "- 要嵌入上述所有四个 `input_ids` 值,我们执行以下操作:" ] }, { "cell_type": "code", "execution_count": 46, "id": "50280ead-0363-44c8-8c35-bb885d92c8b7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 1.2753, -0.2010, -0.1606],\n", " [-0.4015, 0.9666, -1.1481],\n", " [-2.8400, -0.7849, -1.4096],\n", " [ 0.9178, 1.5810, 1.3010]], grad_fn=<EmbeddingBackward0>)\n" ] } ], "source": [ "print(embedding_layer(input_ids))" ] }, { "cell_type": "markdown", "id": "be97ced4-bd13-42b7-866a-4d699a17e155", "metadata": {}, "source": [ "- 嵌入层本质上是一个查找操作:" ] }, { "cell_type": "markdown", "id": "f33c2741-bf1b-4c60-b7fd-61409d556646", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "08218d9f-aa1a-4afb-a105-72ff96a54e73", "metadata": {}, "source": [ "- **如果您想进一步比较嵌入层与常规线性层,可参阅附加内容:[../03_bonus_embedding-vs-matmul](../03_bonus_embedding-vs-matmul)**" ] }, { "cell_type": "markdown", "id": "c393d270-b950-4bc8-99ea-97d74f2ea0f6", "metadata": {}, "source": [ "## 2.8 位置信息编码" ] }, { "cell_type": "markdown", "id": "24940068-1099-4698-bdc0-e798515e2902", "metadata": {}, "source": [ "- 无论标记出现在输入序列的哪个位置,嵌入层都会把相同的标记 ID 转换为相同的向量表示" ] }, { "cell_type": "markdown", "id": "9e0b14a2-f3f3-490e-b513-f262dbcf94fa", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "92a7d7fe-38a5-46e6-8db6-b688887b0430", "metadata": {}, "source": [ "- 位置信息与标记向量结合,形成大语言模型的最终输入:" ] }, { "cell_type": "markdown", "id": "48de37db-d54d-45c4-ab3e-88c0783ad2e4", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "7f187f87-c1f8-4c2e-8050-350bbb972f55", "metadata": {}, "source": [ "- 字节对编码(BPE)分词器的词汇表大小为 50,257。\n", "- 假设我们想将输入标记编码为一个 256 维的向量表示:" ] }, { "cell_type": "code", "execution_count": 47, "id": "0b9e344d-03a6-4f2c-b723-67b6a20c5041", "metadata": {}, "outputs": [], "source": [ "vocab_size = 50257\n", "output_dim = 256\n", "token_embedding_layer = 
torch.nn.Embedding(vocab_size, output_dim)# 将每个标记 ID 映射为一个 256 维的嵌入向量" ] }, { "cell_type": "markdown", "id": "a2654722-24e4-4b0d-a43c-436a461eb70b", "metadata": {}, "source": [ "- 如果我们从数据加载器中采样数据,就会把每个批次中的每个标记嵌入为一个 256 维的向量。\n", "- 批次大小为 8、每个批次包含 4 个标记时,结果是一个 8 x 4 x 256 的张量:" ] }, { "cell_type": "code", "execution_count": null, "id": "ad56a263-3d2e-4d91-98bf-d0b68d3c7fc3", "metadata": {}, "outputs": [], "source": [ "max_length = 4\n", "dataloader = create_dataloader_v1(\n", " raw_text, batch_size=8, max_length=max_length,\n", " stride=max_length, shuffle=False\n", ")\n", "data_iter = iter(dataloader)\n", "inputs, targets = next(data_iter)" ] }, { "cell_type": "code", "execution_count": 49, "id": "84416b60-3707-4370-bcbc-da0b62f2b64d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Token IDs:\n", " tensor([[ 40, 367, 2885, 1464],\n", " [ 1807, 3619, 402, 271],\n", " [10899, 2138, 257, 7026],\n", " [15632, 438, 2016, 257],\n", " [ 922, 5891, 1576, 438],\n", " [ 568, 340, 373, 645],\n", " [ 1049, 5975, 284, 502],\n", " [ 284, 3285, 326, 11]])\n", "\n", "Inputs shape:\n", " torch.Size([8, 4])\n" ] } ], "source": [ "print(\"Token IDs:\\n\", inputs)\n", "print(\"\\nInputs shape:\\n\", inputs.shape)" ] }, { "cell_type": "code", "execution_count": 50, "id": "7766ec38-30d0-4128-8c31-f49f063c43d1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([8, 4, 256])\n" ] } ], "source": [ "token_embeddings = token_embedding_layer(inputs)# 调用 token_embedding_layer 将输入 inputs 映射为对应的嵌入向量\n", "print(token_embeddings.shape)" ] }, { "cell_type": "markdown", "id": "fe2ae164-6f19-4e32-b9e5-76950fcf1c9f", "metadata": {}, "source": [ "- GPT-2 使用绝对位置嵌入,因此我们只需要创建另一个位置嵌入层:" ] }, { "cell_type": "code", "execution_count": 51, "id": "cc048e20-7ac8-417e-81f5-8fe6f9a4fe07", "metadata": {}, "outputs": [], "source": [ "context_length = max_length\n", "pos_embedding_layer = torch.nn.Embedding(context_length, output_dim)\n", "# 为输入序列中的每个位置生成一个向量,用来表示位置信息" ] }, { "cell_type": "markdown", "id": "9056e371", "metadata": {}, "source": [ "- 嵌入层本质上是一个查找表,大小为 (context_length, output_dim)" ] }, { "cell_type": "code", 
"execution_count": 52, "id": "c369a1e7-d566-4b53-b398-d6adafb44105", "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([4, 256])\n" ] } ], "source": [ "pos_embeddings = pos_embedding_layer(torch.arange(max_length))# torch.arange(max_length) 生成 0 到 max_length-1 的位置索引\n", "\n", "print(pos_embeddings.shape)" ] }, { "cell_type": "markdown", "id": "870e9d9f-2935-461a-9518-6d1386b976d6", "metadata": {}, "source": [ "- 为了创建大语言模型(LLM)中使用的输入嵌入,我们只需将标记嵌入和位置嵌入相加:" ] }, { "cell_type": "code", "execution_count": 53, "id": "b22fab89-526e-43c8-9035-5b7018e34288", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([8, 4, 256])\n" ] } ], "source": [ "input_embeddings = token_embeddings + pos_embeddings# 输入嵌入是词语信息与位置信息的结合\n", "print(input_embeddings.shape)" ] }, { "cell_type": "markdown", "id": "1fbda581-6f9b-476f-8ea7-d244e6a4eaec", "metadata": {}, "source": [ "- 在输入处理流程的初始阶段,输入文本被分割为独立的标记。\n", "- 分割完成后,这些标记根据预定义的词汇表转换为标记 ID:" ] }, { "cell_type": "markdown", "id": "d1bb0f7e-460d-44db-b366-096adcd84fff", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "id": "63230f2e-258f-4497-9e2e-8deee4530364", "metadata": {}, "source": [ "# 总结与收获" ] }, { "cell_type": "markdown", "id": "8b3293a6-45a5-47cd-aa00-b23e3ca0a73f", "metadata": {}, "source": [ "请参见 [./dataloader.ipynb](./dataloader.ipynb) 代码笔记本,这是我们在本章中实现的数据加载器的简洁版本,后续章节训练 GPT 模型时会用到它。\n", "\n", "请参见 [./exercise-solutions.ipynb](./exercise-solutions.ipynb) 获取习题解答。" ] } ], "metadata": { "kernelspec": { "display_name": "pytorch_env", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.9" } }, "nbformat": 4, "nbformat_minor": 5 }