{ "cells": [ { "cell_type": "markdown", "id": "12e91914-5f51-43fa-b65b-625e73b4d17b", "metadata": { "id": "12e91914-5f51-43fa-b65b-625e73b4d17b" }, "source": [ "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "Supplementary code for the Build a Large Language Model From Scratch book by Sebastian Raschka
\n", "
Code repository: https://github.com/rasbt/LLMs-from-scratch\n", "
Chinese translation repository: https://github.com/GoatCsu/CN-LLMs-from-scratch.git\n", "
\n", "
\n", "\n", "
\n" ] }, { "cell_type": "markdown", "id": "c2520ec3-722f-4f44-bdd1-885b13e7afbf", "metadata": { "id": "c2520ec3-722f-4f44-bdd1-885b13e7afbf" }, "source": [ "# Chapter 7: Instruction Fine-Tuning" ] }, { "cell_type": "code", "execution_count": 1, "id": "4e19327b-6c02-4881-ad02-9b6d3ec0b1b4", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "4e19327b-6c02-4881-ad02-9b6d3ec0b1b4", "outputId": "9d937b84-d8f8-4ce9-cc3c-211188f49a10" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "matplotlib version: 3.7.1\n", "tiktoken version: 0.7.0\n", "torch version: 2.4.0\n", "tqdm version: 4.66.4\n", "tensorflow version: 2.15.0\n" ] } ], "source": [ "from importlib.metadata import version\n", "\n", "pkgs = [\n", " \"matplotlib\", # plotting library\n", " \"tiktoken\", # tokenizer\n", " \"torch\", # deep learning library\n", " \"tqdm\", # progress bar\n", " \"tensorflow\", # for loading OpenAI's pretrained weights\n", "]\n", "for p in pkgs:\n", " print(f\"{p} version: {version(p)}\")\n", "# Print the installed version of each package" ] }, { "cell_type": "markdown", "id": "264fca98-2f9a-4193-b435-2abfa3b4142f", "metadata": { "id": "264fca98-2f9a-4193-b435-2abfa3b4142f" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "8bbc68e9-75b3-41f1-ac2c-e071c3cd0813", "metadata": { "id": "8bbc68e9-75b3-41f1-ac2c-e071c3cd0813" }, "source": [ "## 7.1 Introduction to instruction fine-tuning" ] }, { "cell_type": "markdown", "id": "53dba24a-6805-496c-9a7f-c75e2d3527ab", "metadata": { "id": "53dba24a-6805-496c-9a7f-c75e2d3527ab" }, "source": [ "- In Chapter 5, we saw that pretraining an LLM means training it to generate text one word at a time.\n", "- Hence, a pretrained LLM is good at text completion but not at following instructions.\n", "- In this chapter, we fine-tune the LLM to follow instructions better." ] }, { "cell_type": "markdown", "id": "18dc0535-0904-44ed-beaf-9b678292ef35", "metadata": { "id": "18dc0535-0904-44ed-beaf-9b678292ef35" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "b4698b23-12e0-4bd7-a140-ccb3dd71d4e8", "metadata": { "id": "b4698b23-12e0-4bd7-a140-ccb3dd71d4e8" }, "source": [ "- The figure below shows the topics covered in this chapter.\n", "\n", "" ] }, { "cell_type": "markdown", "id": "5384f0cf-ef3c-4436-a5fa-59bd25649f86", "metadata": { "id": 
"5384f0cf-ef3c-4436-a5fa-59bd25649f86" }, "source": [ "## 7.2 Preparing a dataset for supervised instruction fine-tuning" ] }, { "cell_type": "markdown", "id": "f8b34ff8-619f-4e89-bd03-ce513269760d", "metadata": { "id": "f8b34ff8-619f-4e89-bd03-ce513269760d" }, "source": [ "- We use an instruction dataset I prepared for this chapter" ] }, { "cell_type": "code", "execution_count": 2, "id": "0G3axLw6kY1N", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0G3axLw6kY1N", "outputId": "a5f70eb8-6248-4834-e7ae-6105e94e5afa" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of entries: 1100\n" ] } ], "source": [ "import json\n", "import os\n", "import urllib.request\n", "\n", "\n", "def download_and_load_file(file_path, url):\n", "\n", " if not os.path.exists(file_path):\n", " with urllib.request.urlopen(url) as response:\n", " text_data = response.read().decode(\"utf-8\")\n", " with open(file_path, \"w\", encoding=\"utf-8\") as file:\n", " file.write(text_data)\n", " else:\n", " with open(file_path, \"r\", encoding=\"utf-8\") as file:\n", " text_data = file.read()\n", "\n", " with open(file_path, \"r\", encoding=\"utf-8\") as file:\n", " data = json.load(file)\n", "\n", " return data\n", "# Download the dataset if it is not present locally, then load the JSON file\n", "\n", "file_path = \"instruction-data.json\"\n", "url = (\n", " \"https://raw.githubusercontent.com/rasbt/LLMs-from-scratch\"\n", " \"/main/ch07/01_main-chapter-code/instruction-data.json\"\n", ")\n", "\n", "data = download_and_load_file(file_path, url)\n", "print(\"Number of entries:\", len(data))\n", "# Check how many entries the dataset contains" ] }, { "cell_type": "markdown", "id": "d7af8176-4255-4e92-8c7d-998771733eb8", "metadata": { "id": "d7af8176-4255-4e92-8c7d-998771733eb8" }, "source": [ "- Each item in the `data` list we loaded from the JSON file above is a dictionary of the following form:" ] }, { "cell_type": "code", "execution_count": 3, "id": "-LiuBMsHkzQV", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "-LiuBMsHkzQV", "outputId": "cc742019-b8d7-40f9-b21a-6a5ddf821377" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Example entry:\n", 
" {'instruction': 'Identify the correct spelling of the following word.', 'input': 'Ocassion', 'output': \"The correct spelling is 'Occasion.'\"}\n" ] } ], "source": [ "print(\"Example entry:\\n\", data[50])\n", "# Print the entry at index 50 (the 51st entry)" ] }, { "cell_type": "markdown", "id": "c5a32b34-485a-4816-a77a-da14f9fe6e46", "metadata": { "id": "c5a32b34-485a-4816-a77a-da14f9fe6e46" }, "source": [ "- Note that the 'input' field can sometimes be empty, as shown below" ] }, { "cell_type": "code", "execution_count": 4, "id": "uFInFxDDk2Je", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "uFInFxDDk2Je", "outputId": "70241295-a9ec-4b7d-caf5-ab6f267e3271" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Another example entry:\n", " {'instruction': \"What is an antonym of 'complicated'?\", 'input': '', 'output': \"An antonym of 'complicated' is 'simple'.\"}\n" ] } ], "source": [ "print(\"Another example entry:\\n\", data[999])" ] }, { "cell_type": "markdown", "id": "f034799a-6575-45fd-98c9-9d1012d0fd58", "metadata": { "id": "f034799a-6575-45fd-98c9-9d1012d0fd58" }, "source": [ "- Instruction fine-tuning is often referred to as \"supervised instruction fine-tuning\" because it involves training a model on a dataset where the input-output pairs are explicitly provided.\n", "- There are different ways to format the entries as inputs to the LLM; the figure below illustrates two example formats that were used to train the Alpaca ([https://crfm.stanford.edu/2023/03/13/alpaca.html](https://crfm.stanford.edu/2023/03/13/alpaca.html)) and Phi-3 ([https://arxiv.org/abs/2404.14219](https://arxiv.org/abs/2404.14219)) LLMs, respectively." ] }, { "cell_type": "markdown", "id": "dffa4f70-44d4-4be4-89a9-2159f4885b10", "metadata": { "id": "dffa4f70-44d4-4be4-89a9-2159f4885b10" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "dd79a74e-befb-491c-be49-f777a6a5b6a6", "metadata": { "id": "dd79a74e-befb-491c-be49-f777a6a5b6a6" }, "source": [ "- In this chapter, we use Alpaca-style prompt formatting, one of the earliest and most widely used prompt templates for instruction fine-tuning.\n", "- Below, we format the input that we will later pass to the LLM." ] }, { "cell_type": "code", "execution_count": 5, "id": "Jhk37nnJnkBh", "metadata": { "id": "Jhk37nnJnkBh" }, "outputs": [], "source": [ "def format_input(entry):\n", " # Build the Alpaca-style prompt from the dataset entry\n", " instruction_text = (\n", " f\"Below is an 
instruction that describes a task. \"\n", " f\"Write a response that appropriately completes the request.\"\n", " f\"\\n\\n### Instruction:\\n{entry['instruction']}\"\n", " )\n", " \n", " # Append the input section only if the entry contains a non-empty input\n", " input_text = f\"\\n\\n### Input:\\n{entry['input']}\" if entry[\"input\"] else \"\"\n", " \n", " return instruction_text + input_text" ] }, { "cell_type": "markdown", "id": "011e78b4-e89a-4653-a2ee-7b2739ca04d6", "metadata": { "id": "011e78b4-e89a-4653-a2ee-7b2739ca04d6" }, "source": [ "- A formatted input with an instruction and an input field looks as follows" ] }, { "cell_type": "code", "execution_count": 6, "id": "F9UQRfjzo4Js", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "F9UQRfjzo4Js", "outputId": "13ec7abf-ad94-4e26-860d-6a39a344f31f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "Identify the correct spelling of the following word.\n", "\n", "### Input:\n", "Ocassion\n", "\n", "### Response:\n", "The correct spelling is 'Occasion.'\n" ] } ], "source": [ "model_input = format_input(data[50])\n", "desired_response = f\"\\n\\n### Response:\\n{data[50]['output']}\"\n", "# Try the formatting on the entry at index 50\n", "print(model_input + desired_response)" ] }, { "cell_type": "markdown", "id": "4dc93ddf-431c-49c0-96f2-fb3a79c4d94c", "metadata": { "id": "4dc93ddf-431c-49c0-96f2-fb3a79c4d94c" }, "source": [ "- Below is a formatted input for an entry without an input field:" ] }, { "cell_type": "code", "execution_count": 7, "id": "a3891fa9-f738-41cd-946c-80ef9a99c346", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "a3891fa9-f738-41cd-946c-80ef9a99c346", "outputId": "d6be5713-1293-4a70-c8c8-a86ea8e95817" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "What is an antonym of 'complicated'?\n", "\n", "### Response:\n", "An antonym of 'complicated' is 'simple'.\n" ] } ], "source": [ "model_input = format_input(data[999])\n", "desired_response = f\"\\n\\n### Response:\\n{data[999]['output']}\"\n", "\n", "print(model_input + desired_response)" ] }, { "cell_type": "markdown", "id": "4aa8afd5-2a21-49a5-90c3-6a03865a4771", "metadata": { "id": "4aa8afd5-2a21-49a5-90c3-6a03865a4771" }, "source": [ "- Lastly, before preparing the PyTorch data loaders in the next section, we divide the dataset into training, validation, and test sets." ] }, { "cell_type": "code", "execution_count": 8, "id": "aFZVopbIlNfx", "metadata": { "id": "aFZVopbIlNfx" }, "outputs": [], "source": [ "# Define the sizes of the training, test, and validation splits\n", "train_portion = int(len(data) * 0.85) # 85% for training\n", "test_portion = int(len(data) * 0.1) # 10% for testing\n", "val_portion = len(data) - train_portion - test_portion # the remaining 5% for validation\n", "\n", "# Partition the dataset\n", "train_data = data[:train_portion]\n", "test_data = data[train_portion:train_portion + test_portion]\n", "val_data = data[train_portion + test_portion:]\n" ] }, { "cell_type": "code", "execution_count": 9, "id": "-zf6oht6bIUQ", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "-zf6oht6bIUQ", "outputId": "bb5fe8e5-1ce5-4fca-a430-76ecf42e99ef" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training set length: 935\n", "Validation set length: 55\n", "Test set length: 110\n" ] } ], "source": [ "print(\"Training set length:\", len(train_data))\n", "print(\"Validation set length:\", len(val_data))\n", "print(\"Test set length:\", len(test_data))" ] }, { "cell_type": "markdown", "id": "fcaaf606-f913-4445-8301-632ae10d387d", "metadata": { "id": "fcaaf606-f913-4445-8301-632ae10d387d" }, "source": [ "## 7.3 Organizing data into training batches" ] }, { "cell_type": "markdown", "id": "233f63bd-9755-4d07-8884-5e2e5345cf27", "metadata": { "id": "233f63bd-9755-4d07-8884-5e2e5345cf27" }, "source": [ "" ] }, { "cell_type": 
"markdown", "id": "c149fc1a-7757-4ec8-80cb-e2a3fb007a2c", "metadata": { "id": "c149fc1a-7757-4ec8-80cb-e2a3fb007a2c" }, "source": [ "- The figure below summarizes the steps we apply to batch the data\n", "\n", "" ] }, { "cell_type": "markdown", "id": "b9af423f-aad9-4b3c-bea5-153021c04862", "metadata": { "id": "b9af423f-aad9-4b3c-bea5-153021c04862" }, "source": [ "- First, we implement an `InstructionDataset` class that pre-tokenizes all inputs in the dataset, similar to the `SpamDataset` in Chapter 6.\n", "\n", "" ] }, { "cell_type": "code", "execution_count": 10, "id": "adc29dc4-f1c7-4c71-937b-95119d6239bb", "metadata": { "id": "adc29dc4-f1c7-4c71-937b-95119d6239bb" }, "outputs": [], "source": [ "import torch\n", "from torch.utils.data import Dataset\n", "\n", "\n", "class InstructionDataset(Dataset):\n", " # Dataset class for the instruction data\n", " def __init__(self, data, tokenizer):\n", " self.data = data\n", " # Store the raw data\n", " # Pre-tokenize all texts\n", " self.encoded_texts = []\n", " for entry in data:\n", " instruction_plus_input = format_input(entry)\n", " response_text = f\"\\n\\n### Response:\\n{entry['output']}\"\n", " full_text = instruction_plus_input + response_text\n", " self.encoded_texts.append(\n", " tokenizer.encode(full_text)\n", " )\n", " # Return a pre-tokenized text by index\n", " def __getitem__(self, index):\n", " return self.encoded_texts[index]\n", " \n", " def __len__(self):\n", " return len(self.data)" ] }, { "cell_type": "markdown", "id": "384f0e69-4b22-41c0-a25d-f077527eddd1", "metadata": { "id": "384f0e69-4b22-41c0-a25d-f077527eddd1" }, "source": [ "- Similar to Chapter 6, to accelerate training, we collect multiple training examples into a batch, which requires padding all inputs to the same length.\n", "- Also as in the previous chapter, we use the `<|endoftext|>` token as the padding token." ] }, { "cell_type": "code", "execution_count": 11, "id": "ff24fe1a-5746-461c-ad3d-b6d84a1a7c96", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ff24fe1a-5746-461c-ad3d-b6d84a1a7c96", "outputId": "4d63f8b8-b4ad-45d9-9e93-c9dd8c2b7706" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[50256]\n" ] } ], "source": [ "import tiktoken\n", "# Use the GPT-2 tokenizer\n", "tokenizer = tiktoken.get_encoding(\"gpt2\")\n", 
"print(tokenizer.encode(\"<|endoftext|>\", allowed_special={\"<|endoftext|>\"}))" ] }, { "cell_type": "markdown", "id": "9e5bd7bc-f347-4cf8-a0c2-94cb8799e427", "metadata": { "id": "9e5bd7bc-f347-4cf8-a0c2-94cb8799e427" }, "source": [ "- In Chapter 6, we padded all examples in the dataset to the same length.\n", " - Here, we take a more sophisticated approach: we develop a custom \"collate\" function that we pass to the data loader.\n", " - This custom collate function pads the training examples in each batch to the same length (while different batches can have different lengths)." ] }, { "cell_type": "markdown", "id": "65c4d943-4aa8-4a44-874e-05bc6831fbd3", "metadata": { "id": "65c4d943-4aa8-4a44-874e-05bc6831fbd3" }, "source": [ "" ] }, { "cell_type": "code", "execution_count": 12, "id": "eb4c77dd-c956-4a1b-897b-b466909f18ca", "metadata": { "id": "eb4c77dd-c956-4a1b-897b-b466909f18ca" }, "outputs": [], "source": [ "def custom_collate_draft_1(\n", " batch,\n", " pad_token_id=50256,\n", " device=\"cpu\"\n", "):\n", " # Find the longest sequence in the batch\n", " # and increase the maximum length by 1, which adds one extra padding token below\n", " batch_max_length = max(len(item)+1 for item in batch)\n", "\n", " inputs_lst = []\n", " # Prepare the inputs\n", " for item in batch:\n", " new_item = item.copy()\n", " new_item += [pad_token_id]\n", " # Pad the copied sequence to batch_max_length\n", " padded = (\n", " new_item + [pad_token_id] *\n", " (batch_max_length - len(new_item))\n", " )\n", " # Remove the extra padded token added above and store the result\n", " inputs = torch.tensor(padded[:-1])\n", " \n", " inputs_lst.append(inputs)\n", " # Stack the inputs into a tensor and move it to the target device\n", " inputs_tensor = torch.stack(inputs_lst).to(device)\n", " \n", " return inputs_tensor" ] }, { "cell_type": "code", "execution_count": 13, "id": "8fb02373-59b3-4f3a-b1d1-8181a2432645", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8fb02373-59b3-4f3a-b1d1-8181a2432645", "outputId": "8705ca9a-e999-4f70-9db8-1ad084eba7bb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 0, 1, 2, 3, 4],\n", " [ 5, 6, 50256, 50256, 50256],\n", " [ 7, 8, 9, 50256, 50256]])\n" ] } ], "source": [ "inputs_1 = [0, 1, 2, 3, 4]\n", "inputs_2 = [5, 6]\n", "inputs_3 = [7, 8, 9]\n", "\n", "batch = (\n", " inputs_1,\n", " inputs_2,\n", " 
inputs_3\n", ")\n", "\n", "print(custom_collate_draft_1(batch))" ] }, { "cell_type": "markdown", "id": "c46832ab-39b7-45f8-b330-ac9adfa10d1b", "metadata": { "id": "c46832ab-39b7-45f8-b330-ac9adfa10d1b" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "17769a19-b961-4213-92ef-34f441b2d1d6", "metadata": { "id": "17769a19-b961-4213-92ef-34f441b2d1d6" }, "source": [ "- Above, we only returned the inputs to the LLM.\n", "- However, for LLM training, we also need the target values.\n", "- Similar to pretraining an LLM, the targets are the input token IDs shifted one position to the right (see the figure below); this design lets the LLM learn to predict the next token in a sequence." ] }, { "cell_type": "markdown", "id": "0386b6fe-3455-4e70-becd-a5a4681ba2ef", "metadata": { "id": "0386b6fe-3455-4e70-becd-a5a4681ba2ef" }, "source": [ "" ] }, { "cell_type": "code", "execution_count": 14, "id": "74af192e-757c-4c0a-bdf9-b7eb25bf6ebc", "metadata": { "id": "74af192e-757c-4c0a-bdf9-b7eb25bf6ebc" }, "outputs": [], "source": [ "def custom_collate_draft_2(\n", " batch,\n", " pad_token_id=50256,\n", " device=\"cpu\"\n", "):\n", " # Find the longest sequence in the batch\n", " batch_max_length = max(len(item)+1 for item in batch)\n", " # Prepare empty lists for the inputs and targets\n", " inputs_lst, targets_lst = [], []\n", " \n", " for item in batch:\n", " new_item = item.copy()\n", " new_item += [pad_token_id]\n", " padded = (\n", " new_item + [pad_token_id] *\n", " (batch_max_length - len(new_item))\n", " )\n", " # Inputs: everything except the last token\n", " inputs = torch.tensor(padded[:-1]) \n", " # Targets: shifted one position to the right, so inputs and targets have the same length\n", " targets = torch.tensor(padded[1:]) \n", " \n", " inputs_lst.append(inputs)\n", " targets_lst.append(targets)\n", "\n", " inputs_tensor = torch.stack(inputs_lst).to(device)\n", " targets_tensor = torch.stack(targets_lst).to(device)\n", " return inputs_tensor, targets_tensor" ] }, { "cell_type": "code", "execution_count": 15, "id": "6eb2bce3-28a7-4f39-9d4b-5e972d69066c", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6eb2bce3-28a7-4f39-9d4b-5e972d69066c", "outputId": "b9ceae14-13c2-49f7-f4a4-b503f3db3009" }, "outputs": [ { "name": "stdout", "output_type": "stream", 
"text": [ "tensor([[ 0, 1, 2, 3, 4],\n", " [ 5, 6, 50256, 50256, 50256],\n", " [ 7, 8, 9, 50256, 50256]])\n", "tensor([[ 1, 2, 3, 4, 50256],\n", " [ 6, 50256, 50256, 50256, 50256],\n", " [ 8, 9, 50256, 50256, 50256]])\n" ] } ], "source": [ "inputs, targets = custom_collate_draft_2(batch)\n", "print(inputs)\n", "print(targets)" ] }, { "cell_type": "markdown", "id": "3bf85703-a0e0-42aa-8f29-cbc28dbf4e15", "metadata": { "id": "3bf85703-a0e0-42aa-8f29-cbc28dbf4e15" }, "source": [ "- Next, we introduce an `ignore_index` value to replace all padding token IDs with a new value; the purpose of this `ignore_index` is to allow the loss function to ignore the padding values (more on that later).\n", "\n", "\n", "\n", "- Concretely, this means we replace the token IDs corresponding to `50256` with `-100`, as illustrated in the figure." ] }, { "cell_type": "markdown", "id": "bd4bed33-956e-4b3f-a09c-586d8203109a", "metadata": { "id": "bd4bed33-956e-4b3f-a09c-586d8203109a" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "5346513e-c3f4-44fe-af22-4ebd36497728", "metadata": { "id": "5346513e-c3f4-44fe-af22-4ebd36497728" }, "source": [ "- (In addition, we introduce an `allowed_max_length` parameter to optionally limit the length of the samples; this is useful if you plan to work with datasets longer than the 1024-token context length supported by the GPT-2 model.)" ] }, { "cell_type": "code", "execution_count": 16, "id": "41ec6e2d-9eb2-4124-913e-d2af39be4cf2", "metadata": { "id": "41ec6e2d-9eb2-4124-913e-d2af39be4cf2" }, "outputs": [], "source": [ "def custom_collate_fn(\n", " batch,\n", " pad_token_id=50256,\n", " ignore_index=-100,\n", " allowed_max_length=None,\n", " device=\"cpu\"\n", "):\n", " # Find the longest sequence in the batch\n", " batch_max_length = max(len(item)+1 for item in batch)\n", "\n", " # Pad and prepare the inputs and targets\n", " inputs_lst, targets_lst = [], []\n", "\n", " for item in batch:\n", " new_item = item.copy()\n", " # Add an <|endoftext|> token \n", " new_item += [pad_token_id]\n", " # Pad the sequence to the maximum length\n", " padded = (\n", " new_item + [pad_token_id] *\n", " (batch_max_length - len(new_item))\n", " )\n", " inputs = torch.tensor(padded[:-1]) # Truncate the last token for the inputs\n", " targets = torch.tensor(padded[1:]) # Shift one position to the right for the targets\n", "\n", " # New: replace all but the first padding token in the targets with ignore_index\n", " mask = targets == 
pad_token_id\n", " indices = torch.nonzero(mask).squeeze()\n", " if indices.numel() > 1:\n", " targets[indices[1:]] = ignore_index\n", "\n", " # New: optionally truncate to the maximum sequence length\n", " if allowed_max_length is not None:\n", " inputs = inputs[:allowed_max_length]\n", " targets = targets[:allowed_max_length]\n", "\n", " inputs_lst.append(inputs)\n", " targets_lst.append(targets)\n", "\n", " # Convert the lists of inputs and targets to tensors and transfer them to the target device\n", " inputs_tensor = torch.stack(inputs_lst).to(device)\n", " targets_tensor = torch.stack(targets_lst).to(device)\n", "\n", " return inputs_tensor, targets_tensor" ] }, { "cell_type": "code", "execution_count": 17, "id": "cdf5eec4-9ebe-4be0-9fca-9a47bee88fdc", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "cdf5eec4-9ebe-4be0-9fca-9a47bee88fdc", "outputId": "a5501547-239d-431d-fb04-da7fa2ffad79" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[ 0, 1, 2, 3, 4],\n", " [ 5, 6, 50256, 50256, 50256],\n", " [ 7, 8, 9, 50256, 50256]])\n", "tensor([[ 1, 2, 3, 4, 50256],\n", " [ 6, 50256, -100, -100, -100],\n", " [ 8, 9, 50256, -100, -100]])\n" ] } ], "source": [ "inputs, targets = custom_collate_fn(batch)\n", "print(inputs)\n", "print(targets)" ] }, { "cell_type": "markdown", "id": "26727c90-0d42-43b3-af21-0a66ad4fbbc7", "metadata": { "id": "26727c90-0d42-43b3-af21-0a66ad4fbbc7" }, "source": [ "- Let's see what this replacement of padding tokens with -100 accomplishes.\n", "- For illustration purposes, suppose we have a small classification task with two class labels, 0 and 1, similar to Chapter 6.\n", "- Given the following logits (the outputs of the model's last layer), we can compute the loss as follows." ] }, { "cell_type": "code", "execution_count": 18, "id": "W2jvh-OP9MFV", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "W2jvh-OP9MFV", "outputId": "b5cd858e-7c58-4a21-c5a7-e72768bd301c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor(1.1269)\n" ] } ], "source": [ "logits_1 = torch.tensor(\n", " [[-1.0, 1.0], # 1st training example\n", " [-0.5, 1.5]] # 2nd training example\n", ")\n", "# two training examples\n", "targets_1 = torch.tensor([0, 1])\n", "\n", "loss_1 = 
torch.nn.functional.cross_entropy(logits_1, targets_1)\n", "# Compute the cross-entropy loss\n", "print(loss_1)" ] }, { "cell_type": "markdown", "id": "5edd3244-8886-4505-92e9-367d28529e1e", "metadata": { "id": "5edd3244-8886-4505-92e9-367d28529e1e" }, "source": [ "- As expected, adding an additional training example affects the loss" ] }, { "cell_type": "code", "execution_count": 19, "id": "nvVMuil89v9N", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "nvVMuil89v9N", "outputId": "e4a07b99-a23c-4404-ccdb-5f93c39f3b09" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor(0.7936)\n" ] } ], "source": [ "logits_2 = torch.tensor(\n", " [[-1.0, 1.0],\n", " [-0.5, 1.5],\n", " [-0.5, 1.5]] # New 3rd training example\n", ")\n", "targets_2 = torch.tensor([0, 1, 1])\n", "\n", "loss_2 = torch.nn.functional.cross_entropy(logits_2, targets_2)\n", "print(loss_2)" ] }, { "cell_type": "markdown", "id": "54dca331-40e0-468b-b690-189fe156ba8f", "metadata": { "id": "54dca331-40e0-468b-b690-189fe156ba8f" }, "source": [ "- Now let's see what happens if we change the label of this third example to -100" ] }, { "cell_type": "code", "execution_count": 20, "id": "RTyB1vah9p56", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "RTyB1vah9p56", "outputId": "28c16387-1d9c-48a7-eda7-aa270864683d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor(1.1269)\n", "loss_1 == loss_3: tensor(True)\n" ] } ], "source": [ "targets_3 = torch.tensor([0, 1, -100])\n", "\n", "loss_3 = torch.nn.functional.cross_entropy(logits_2, targets_3)\n", "print(loss_3)\n", "print(\"loss_1 == loss_3:\", loss_1 == loss_3)\n", "# As shown above, the cross-entropy loss ignores the -100 label" ] }, { "cell_type": "markdown", "id": "cef09d21-b652-4760-abea-4f76920e6a25", "metadata": { "id": "cef09d21-b652-4760-abea-4f76920e6a25" }, "source": [ "- As we can see, the loss computed over these 3 training examples is identical to the loss computed over only 2 of them, which shows that the cross-entropy loss function ignored the training example with the -100 label.\n", "- By default, PyTorch's `cross_entropy(..., ignore_index=-100)` setting ignores examples whose label is -100.\n", "- Using this -100 `ignore_index`, we can ignore the extra end-of-text (padding) tokens that were used to pad the training examples in a batch to the same length.\n", "- However, we do not want to ignore the first end-of-text token (50256), because it helps signal to the LLM that the **response is complete**." ] }, { "cell_type": "markdown", "id": "6a4e9c5f-7c49-4321-9f1b-a50468a84524", "metadata": { "id": "6a4e9c5f-7c49-4321-9f1b-a50468a84524" }, "source": [ "- In addition to masking out padding tokens, it is also common in practice to mask out the target token IDs that correspond to the instruction (this is left as an exercise in this chapter)." ] }, { "cell_type": "markdown", "id": "fab8f0ed-80e8-4fd9-bf84-e5d0e0bc0a39", "metadata": { "id": "fab8f0ed-80e8-4fd9-bf84-e5d0e0bc0a39" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "bccaf048-ec95-498c-9155-d5b3ccba6c96", "metadata": { "id": "bccaf048-ec95-498c-9155-d5b3ccba6c96" }, "source": [ "## 7.4 Creating data loaders for an instruction dataset" ] }, { "cell_type": "markdown", "id": "e6b8e656-3af3-4db6-8dde-d8c216a12f50", "metadata": { "id": "e6b8e656-3af3-4db6-8dde-d8c216a12f50" }, "source": [ "- In this section, we use the `InstructionDataset` class and the `custom_collate_fn` function to instantiate the training, validation, and test data loaders." ] }, { "cell_type": "markdown", "id": "9fffe390-b226-4d5c-983f-9f4da773cb82", "metadata": { "id": "9fffe390-b226-4d5c-983f-9f4da773cb82" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "932677e9-9317-42e8-b461-7b0269518f97", "metadata": { "id": "932677e9-9317-42e8-b461-7b0269518f97" }, "source": [ "- Another improvement in the `custom_collate_fn` function above is that we now move the data directly to the target device (e.g., a GPU) instead of doing so in the main training loop. This improves efficiency because the device transfer can happen in the background when `custom_collate_fn` is used as part of the data loader.\n", "- Using the `partial` function from Python's standard `functools` module, we create a new function with the `device` argument of the original function pre-filled." ] }, { "cell_type": "code", "execution_count": 21, "id": "etpqqWh8phKc", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "etpqqWh8phKc", "outputId": "925faf3a-6df4-4ad0-f276-f328493619c3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Device: cuda\n" ] } ], "source": [ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "\n", "# Note:\n", "# Uncommenting the following lines, if applicable, allows the code to run on Apple Silicon chips,\n", "# which is much faster than running on an Apple CPU (as measured on an M3 MacBook Air).\n", "# However, the resulting loss values may be slightly different.\n", "\n", "# if torch.cuda.is_available():\n", "#     device = torch.device(\"cuda\")\n", "# elif torch.backends.mps.is_available():\n", "#     device = torch.device(\"mps\")\n", "# else:\n", "#     device = torch.device(\"cpu\")\n", "\n", "print(\"Device:\", device)" ] }, { "cell_type": "code", "execution_count": 22, "id": "4e47fb30-c2c6-4e6d-a64c-76cc65be4a2c", "metadata": { "id": "4e47fb30-c2c6-4e6d-a64c-76cc65be4a2c" }, "outputs": [], "source": [ "from functools import partial\n", "# Pre-fill the device and allowed_max_length arguments\n", "customized_collate_fn = partial(\n", " custom_collate_fn,\n", " device=device,\n", " allowed_max_length=1024\n", ")" ] }, { "cell_type": "markdown", "id": "8ff42c29-8b81-45e5-ae8d-b97cd1cf447a", "metadata": { "id": "8ff42c29-8b81-45e5-ae8d-b97cd1cf447a" }, "source": [ "- Next, we instantiate the data loaders similarly to previous chapters, except that we now provide our own collate function for the batching process." ] }, { "cell_type": "code", "execution_count": 23, "id": "BtWkgir6Hlpe", "metadata": { "id": "BtWkgir6Hlpe" }, "outputs": [], "source": [ "from torch.utils.data import DataLoader\n", "\n", "\n", "num_workers = 0\n", "batch_size = 8\n", "\n", "torch.manual_seed(123)\n", "# Set up the training data loader\n", "train_dataset = InstructionDataset(train_data, tokenizer)\n", "train_loader = DataLoader(\n", " train_dataset,\n", " batch_size=batch_size,\n", " collate_fn=customized_collate_fn,\n", " shuffle=True,\n", " drop_last=True,\n", " num_workers=num_workers\n", ")\n" ] }, { "cell_type": "code", "execution_count": 24, "id": "1d097dc8-ad34-4f05-b435-e4147965f532", "metadata": { "id": "1d097dc8-ad34-4f05-b435-e4147965f532" }, "outputs": [], "source": [ "# Set up the validation and test data loaders\n", "val_dataset = InstructionDataset(val_data, tokenizer)\n", "val_loader = DataLoader(\n", " val_dataset,\n", " batch_size=batch_size,\n", " collate_fn=customized_collate_fn,\n", " shuffle=False,\n", " drop_last=False,\n", " num_workers=num_workers\n", ")\n", 
"\n", "test_dataset = InstructionDataset(test_data, tokenizer)\n", "test_loader = DataLoader(\n", " test_dataset,\n", " batch_size=batch_size,\n", " collate_fn=customized_collate_fn,\n", " shuffle=False,\n", " drop_last=False,\n", " num_workers=num_workers\n", ")\n" ] }, { "cell_type": "markdown", "id": "3f67c147-b1a2-4a95-9807-e2d0de0324c0", "metadata": { "id": "3f67c147-b1a2-4a95-9807-e2d0de0324c0" }, "source": [ "- Let's look at the dimensions of the input and target batches" ] }, { "cell_type": "code", "execution_count": 25, "id": "GGs1AI3vHpnX", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "GGs1AI3vHpnX", "outputId": "53a9695d-87cb-4d7c-8b43-1561dfa68ba0" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train loader:\n", "torch.Size([8, 61]) torch.Size([8, 61])\n", "torch.Size([8, 76]) torch.Size([8, 76])\n", "torch.Size([8, 73]) torch.Size([8, 73])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 72]) torch.Size([8, 72])\n", "torch.Size([8, 80]) torch.Size([8, 80])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 62]) torch.Size([8, 62])\n", "torch.Size([8, 75]) torch.Size([8, 75])\n", "torch.Size([8, 62]) torch.Size([8, 62])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 77]) torch.Size([8, 77])\n", "torch.Size([8, 69]) torch.Size([8, 69])\n", "torch.Size([8, 79]) torch.Size([8, 79])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 80]) torch.Size([8, 80])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 69]) torch.Size([8, 69])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 60]) torch.Size([8, 60])\n", "torch.Size([8, 59]) torch.Size([8, 59])\n", "torch.Size([8, 69]) torch.Size([8, 
69])\n", "torch.Size([8, 63]) torch.Size([8, 63])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 76]) torch.Size([8, 76])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 91]) torch.Size([8, 91])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 75]) torch.Size([8, 75])\n", "torch.Size([8, 89]) torch.Size([8, 89])\n", "torch.Size([8, 59]) torch.Size([8, 59])\n", "torch.Size([8, 88]) torch.Size([8, 88])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 70]) torch.Size([8, 70])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 74]) torch.Size([8, 74])\n", "torch.Size([8, 76]) torch.Size([8, 76])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 75]) torch.Size([8, 75])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 69]) torch.Size([8, 69])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 60]) torch.Size([8, 60])\n", "torch.Size([8, 60]) torch.Size([8, 60])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 80]) torch.Size([8, 80])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 61]) torch.Size([8, 61])\n", "torch.Size([8, 58]) torch.Size([8, 58])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 63]) torch.Size([8, 63])\n", "torch.Size([8, 87]) torch.Size([8, 87])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", 
"torch.Size([8, 71]) torch.Size([8, 71])\n", "torch.Size([8, 61]) torch.Size([8, 61])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 65]) torch.Size([8, 65])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 60]) torch.Size([8, 60])\n", "torch.Size([8, 72]) torch.Size([8, 72])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 70]) torch.Size([8, 70])\n", "torch.Size([8, 57]) torch.Size([8, 57])\n", "torch.Size([8, 72]) torch.Size([8, 72])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 62]) torch.Size([8, 62])\n", "torch.Size([8, 74]) torch.Size([8, 74])\n", "torch.Size([8, 80]) torch.Size([8, 80])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 70]) torch.Size([8, 70])\n", "torch.Size([8, 91]) torch.Size([8, 91])\n", "torch.Size([8, 61]) torch.Size([8, 61])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 80]) torch.Size([8, 80])\n", "torch.Size([8, 81]) torch.Size([8, 81])\n", "torch.Size([8, 74]) torch.Size([8, 74])\n", "torch.Size([8, 82]) torch.Size([8, 82])\n", "torch.Size([8, 63]) torch.Size([8, 63])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 68]) torch.Size([8, 68])\n", "torch.Size([8, 67]) torch.Size([8, 67])\n", "torch.Size([8, 77]) torch.Size([8, 77])\n", "torch.Size([8, 91]) torch.Size([8, 91])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 61]) torch.Size([8, 61])\n", "torch.Size([8, 75]) torch.Size([8, 75])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 78]) torch.Size([8, 78])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 64]) torch.Size([8, 64])\n", "torch.Size([8, 83]) torch.Size([8, 83])\n", "torch.Size([8, 66]) torch.Size([8, 66])\n", "torch.Size([8, 74]) torch.Size([8, 74])\n", "torch.Size([8, 69]) torch.Size([8, 69])\n" ] } ], "source": [ 
"print(\"Train loader:\")\n", "for inputs, targets in train_loader:\n", " print(inputs.shape, targets.shape)" ] }, { "cell_type": "markdown", "id": "0c8e8dd7-d46a-4cc3-8a7e-c1d31e1b4657", "metadata": { "id": "0c8e8dd7-d46a-4cc3-8a7e-c1d31e1b4657" }, "source": [ "- 如上面的输出所示,所有批次的批次大小为8,但长度各不相同,正如预期的那样。\n", "- 我们还可以通过输出 `inputs` 批次中第一个训练样本的内容,再次确认输入中包含了与 token ID 50256 对应的 `<|endoftext|>` 填充 token。" ] }, { "cell_type": "code", "execution_count": 26, "id": "21b8fd02-014f-4481-9b71-5bfee8f9dfcd", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "21b8fd02-014f-4481-9b71-5bfee8f9dfcd", "outputId": "ce919ecd-5ded-453c-a312-10cf55c13da7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([21106, 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430,\n", " 257, 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198,\n", " 21017, 46486, 25, 198, 30003, 6525, 262, 6827, 1262, 257,\n", " 985, 576, 13, 198, 198, 21017, 23412, 25, 198, 464,\n", " 5156, 318, 845, 13779, 13, 198, 198, 21017, 18261, 25,\n", " 198, 464, 5156, 318, 355, 13779, 355, 257, 4936, 13,\n", " 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256],\n", " device='cuda:0')\n" ] } ], "source": [ "print(inputs[0])" ] }, { "cell_type": "markdown", "id": "5f1f3647-8971-4006-89e0-6a2a1ec1d360", "metadata": { "id": "5f1f3647-8971-4006-89e0-6a2a1ec1d360" }, "source": [ "- 类似地,我们通过输出,直观地检查目标中是否包含 -100 占位符 token。" ] }, { "cell_type": "code", "execution_count": 27, "id": "51649ab4-1a7e-4a9e-92c5-950a24fde211", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "51649ab4-1a7e-4a9e-92c5-950a24fde211", "outputId": "fdf486f3-e99d-4891-9814-afc9e4991020" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([ 318, 281, 12064, 326, 8477, 257, 4876, 13, 19430, 257,\n", " 2882, 326, 20431, 32543, 262, 2581, 13, 198, 198, 21017,\n", " 46486, 25, 198, 30003, 6525, 262, 6827, 1262, 257, 985,\n", " 576, 13, 198, 198, 21017, 23412, 25, 198, 
464, 5156,\n", " 318, 845, 13779, 13, 198, 198, 21017, 18261, 25, 198,\n", " 464, 5156, 318, 355, 13779, 355, 257, 4936, 13, 50256,\n", " -100, -100, -100, -100, -100, -100, -100, -100, -100],\n", " device='cuda:0')\n" ] } ], "source": [ "print(targets[0])" ] }, { "cell_type": "markdown", "id": "d6aad445-8f19-4238-b9bf-db80767fb91a", "metadata": { "id": "d6aad445-8f19-4238-b9bf-db80767fb91a" }, "source": [ "## 7.5 Loading a pretrained LLM" ] }, { "cell_type": "markdown", "id": "5a5c07d1-4fc9-4846-94cf-b11a085a667b", "metadata": { "id": "5a5c07d1-4fc9-4846-94cf-b11a085a667b" }, "source": [ "- We use the same GPT model that was demonstrated in chapters ch05 and ch06 of this book" ] }, { "cell_type": "markdown", "id": "8d1b438f-88af-413f-96a9-f059c6c55fc4", "metadata": { "id": "8d1b438f-88af-413f-96a9-f059c6c55fc4" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "8c68eda7-e02e-4caa-846b-ca6dbd396ca2", "metadata": { "id": "8c68eda7-e02e-4caa-846b-ca6dbd396ca2" }, "source": [ "- However, instead of loading the smallest 124-million-parameter model, we load the medium-sized 355-million-parameter version, because the 124-million-parameter model is too limited in capacity to achieve reasonable results via instruction finetuning." ] }, { "cell_type": "code", "execution_count": 28, "id": "0d249d67-5eba-414e-9bd2-972ebf01329d", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0d249d67-5eba-414e-9bd2-972ebf01329d", "outputId": "3f08f5e1-ca7c-406d-e2ae-1b5fcafad3f2" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2024-07-25 02:22:49.969483: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2024-07-25 02:22:50.023103: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n", "2024-07-25 02:22:50.023136: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n", "2024-07-25 02:22:50.024611: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n", "2024-07-25 02:22:50.033304: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2024-07-25 02:22:51.282247: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n", "checkpoint: 100%|██████████| 77.0/77.0 [00:00<00:00, 169kiB/s]\n", "encoder.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 2.43MiB/s]\n", "hparams.json: 100%|██████████| 91.0/91.0 [00:00<00:00, 168kiB/s]\n", "model.ckpt.data-00000-of-00001: 100%|██████████| 1.42G/1.42G [00:56<00:00, 25.0MiB/s]\n", "model.ckpt.index: 100%|██████████| 10.4k/10.4k [00:00<00:00, 16.5MiB/s]\n", "model.ckpt.meta: 100%|██████████| 927k/927k [00:00<00:00, 1.96MiB/s]\n", "vocab.bpe: 100%|██████████| 456k/456k [00:00<00:00, 1.53MiB/s]\n" ] } ], "source": [ "from gpt_download import download_and_load_gpt2\n", "from previous_chapters import GPTModel, load_weights_into_gpt\n", "\n", "\n", "BASE_CONFIG = {\n", " \"vocab_size\": 50257, # Vocabulary size\n", " \"context_length\": 1024, # Context length\n", " \"drop_rate\": 0.0, # Dropout rate\n", " \"qkv_bias\": True # Query-key-value bias\n", "}\n", "\n", "model_configs = {\n", " \"gpt2-small (124M)\": {\"emb_dim\": 768, \"n_layers\": 12, \"n_heads\": 12},\n", " \"gpt2-medium (355M)\": {\"emb_dim\": 1024, \"n_layers\": 24, \"n_heads\": 16},\n", " \"gpt2-large (774M)\": {\"emb_dim\": 1280, \"n_layers\": 36, \"n_heads\": 20},\n", " \"gpt2-xl (1558M)\": {\"emb_dim\": 1600, \"n_layers\": 48, \"n_heads\": 25},\n", "}\n", "\n", "CHOOSE_MODEL = \"gpt2-medium (355M)\"\n", "\n", "BASE_CONFIG.update(model_configs[CHOOSE_MODEL])\n", "\n", "model_size = CHOOSE_MODEL.split(\" \")[-1].lstrip(\"(\").rstrip(\")\")\n", "settings, params = download_and_load_gpt2(\n", " model_size=model_size,\n", " models_dir=\"gpt2\"\n", ")\n", "\n", "model = GPTModel(BASE_CONFIG)\n", "load_weights_into_gpt(model, params)\n", "model.eval()" ] }, { "cell_type": "markdown", "id": "dbf3afed-bc8e-4d3a-ad9d-eb6f57bb7af5", "metadata": { "id": "dbf3afed-bc8e-4d3a-ad9d-eb6f57bb7af5" }, "source": [ "- Before we start finetuning the model in the next section, let's take a look at how it performs on one of the validation examples." ] }, { "cell_type": "code", "execution_count": 29, "id": "7bd32b7c-5b44-4d25-a09f-46836802ca74", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "7bd32b7c-5b44-4d25-a09f-46836802ca74", "outputId": "30d4fbd9-7d22-4545-cfc5-c5749cc0bd93" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "Convert the active sentence to passive: 'The chef cooks the meal every day.'\n" ] } ], "source": [ "torch.manual_seed(123)\n", "\n", "input_text = format_input(val_data[0])\n", "print(input_text)" ] }, { "cell_type": "code", "execution_count": 30, "id": "2e3e68e0-2627-4c65-b4e7-1e0667e4f6fa", "metadata": { "id": "2e3e68e0-2627-4c65-b4e7-1e0667e4f6fa" }, "outputs": [], "source": [ "from previous_chapters import (\n", " generate,\n", " text_to_token_ids,\n", " token_ids_to_text\n", ")\n", "\n", "token_ids = generate(\n", " model=model,\n", " idx=text_to_token_ids(input_text, tokenizer),\n", " max_new_tokens=35,\n", " context_size=BASE_CONFIG[\"context_length\"],\n", " eos_id=50256,\n", ")\n", "generated_text = token_ids_to_text(token_ids, tokenizer)" ] }, { "cell_type": "markdown", "id": "36e2fda5-f796-4954-8f72-1dd1123e3344", "metadata": { "id": "36e2fda5-f796-4954-8f72-1dd1123e3344" }, "source": [ "- Note that the `generate` function used in the previous chapters returns the combined input and output text, which was convenient in the previous section for creating legible text.\n", "- To extract the response, we can strip the instruction text from the beginning of `generated_text`." ] }, { "cell_type": "code", "execution_count": 31, "id": "ba4a55bf-a245-48d8-beda-2838a58fb5ba", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ba4a55bf-a245-48d8-beda-2838a58fb5ba", "outputId": "b46de9b3-98f0-45e4-a9ae-86870c3244a1" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The chef cooks the meal every day.\n", "\n", "### Instruction:\n", "\n", "Convert the active sentence to passive: 'The chef cooks the\n" ] } ], "source": [ "response_text = (\n", " # Slice off the input text from the start of the generated text\n", " generated_text[len(input_text):]\n", " # If the generated text contains `### Response:`, remove it\n", " .replace(\"### Response:\", \"\")\n", " # Strip surrounding whitespace\n", " .strip()\n", ")\n", "print(response_text)" ] }, { "cell_type": "markdown", "id": "d44080b2-a4c5-4520-a797-549519f66a3e", "metadata": { "id": "d44080b2-a4c5-4520-a797-549519f66a3e" }, "source": [ "- 
As we can see, the model is not yet able to follow the instruction correctly; it does create a “response” section, but it merely repeats the original input sentence and the instruction." ] }, { "cell_type": "markdown", "id": "70d27b9d-a942-4cf5-b797-848c5f01e723", "metadata": { "id": "70d27b9d-a942-4cf5-b797-848c5f01e723" }, "source": [ "## 7.6 Finetuning the LLM on instruction data" ] }, { "cell_type": "markdown", "id": "314b2a39-88b4-44d8-8c85-1c5b0cd6cc4a", "metadata": { "id": "314b2a39-88b4-44d8-8c85-1c5b0cd6cc4a" }, "source": [ "- In this section, we finetune the model\n", "\n", "\n", "\n", "- We can reuse the loss function and training function from the previous chapters" ] }, { "cell_type": "code", "execution_count": 32, "id": "65444865-df87-4d98-9faf-875e1c4be860", "metadata": { "id": "65444865-df87-4d98-9faf-875e1c4be860" }, "outputs": [], "source": [ "from previous_chapters import (\n", " calc_loss_loader,\n", " train_model_simple\n", ")" ] }, { "cell_type": "markdown", "id": "00083059-aa41-4d37-8a17-1c72d1b1ca00", "metadata": { "id": "00083059-aa41-4d37-8a17-1c72d1b1ca00" }, "source": [ "- Before we begin training, let's calculate the initial training and validation set losses (as in the previous chapters, the goal is to minimize the loss)." ] }, { "cell_type": "code", "execution_count": 33, "id": "d99fc6f8-63b2-43da-adbb-a7b6b92c8dd5", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "d99fc6f8-63b2-43da-adbb-a7b6b92c8dd5", "outputId": "36fdf03b-6fa6-46c3-c77d-ecc99e886265" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training loss: 3.82590970993042\n", "Validation loss: 3.761933755874634\n" ] } ], "source": [ "model.to(device)\n", "\n", "torch.manual_seed(123)\n", "\n", "with torch.no_grad():\n", " train_loss = calc_loss_loader(train_loader, model, device, num_batches=5)\n", " val_loss = calc_loss_loader(val_loader, model, device, num_batches=5)\n", "\n", "# First, check the losses once without any finetuning\n", "print(\"Training loss:\", train_loss)\n", "print(\"Validation loss:\", val_loss)" ] }, { "cell_type": "markdown", "id": "12a6da8f-15b3-42b0-a136-619b7a35c3e9", "metadata": { "id": "12a6da8f-15b3-42b0-a136-619b7a35c3e9" }, "source": [ "- Since the model is larger than before, the training is computationally more expensive\n", "- The table below lists the runtime of this model on different devices" ] }, { "cell_type": "markdown", "id": "db4b57fb-e689-4550-931c-6d34a932487c", 
"metadata": { "id": "db4b57fb-e689-4550-931c-6d34a932487c" }, "source": [ "
\n", " \n", "| Model | Device | Runtime for 2 Epochs |\n", "|--------------------|-----------------------|----------------------|\n", "| gpt2-medium (355M) | CPU (M3 MacBook Air) | 15.78 minutes |\n", "| gpt2-medium (355M) | GPU (M3 MacBook Air) | 10.77 minutes |\n", "| gpt2-medium (355M) | GPU (L4) | 1.83 minutes |\n", "| gpt2-medium (355M) | GPU (A100) | 0.86 minutes |\n", "| gpt2-small (124M) | CPU (M3 MacBook Air) | 5.74 minutes |\n", "| gpt2-small (124M) | GPU (M3 MacBook Air) | 3.73 minutes |\n", "| gpt2-small (124M) | GPU (L4) | 0.69 minutes |\n", "| gpt2-small (124M) | GPU (A100) | 0.39 minutes |\n", "\n", "
\n" ] }, { "cell_type": "code", "execution_count": 34, "id": "78bcf83a-1fff-4540-97c1-765c4016d5e3", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "78bcf83a-1fff-4540-97c1-765c4016d5e3", "outputId": "cea0618c-56ca-418a-c972-bcc060362727" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ep 1 (Step 000000): Train loss 2.637, Val loss 2.626\n", "Ep 1 (Step 000005): Train loss 1.174, Val loss 1.102\n", "Ep 1 (Step 000010): Train loss 0.872, Val loss 0.944\n", "Ep 1 (Step 000015): Train loss 0.857, Val loss 0.906\n", "Ep 1 (Step 000020): Train loss 0.776, Val loss 0.881\n", "Ep 1 (Step 000025): Train loss 0.754, Val loss 0.859\n", "Ep 1 (Step 000030): Train loss 0.799, Val loss 0.836\n", "Ep 1 (Step 000035): Train loss 0.714, Val loss 0.808\n", "Ep 1 (Step 000040): Train loss 0.672, Val loss 0.806\n", "Ep 1 (Step 000045): Train loss 0.633, Val loss 0.789\n", "Ep 1 (Step 000050): Train loss 0.663, Val loss 0.783\n", "Ep 1 (Step 000055): Train loss 0.760, Val loss 0.763\n", "Ep 1 (Step 000060): Train loss 0.719, Val loss 0.743\n", "Ep 1 (Step 000065): Train loss 0.653, Val loss 0.735\n", "Ep 1 (Step 000070): Train loss 0.532, Val loss 0.729\n", "Ep 1 (Step 000075): Train loss 0.569, Val loss 0.728\n", "Ep 1 (Step 000080): Train loss 0.605, Val loss 0.725\n", "Ep 1 (Step 000085): Train loss 0.509, Val loss 0.709\n", "Ep 1 (Step 000090): Train loss 0.562, Val loss 0.691\n", "Ep 1 (Step 000095): Train loss 0.500, Val loss 0.681\n", "Ep 1 (Step 000100): Train loss 0.503, Val loss 0.677\n", "Ep 1 (Step 000105): Train loss 0.564, Val loss 0.670\n", "Ep 1 (Step 000110): Train loss 0.555, Val loss 0.666\n", "Ep 1 (Step 000115): Train loss 0.508, Val loss 0.664\n", "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Convert the active sentence to passive: 'The chef cooks the meal every day.' 
### Response: The meal is prepared every day by the chef.<|endoftext|>The following is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Convert the active sentence to passive:\n", "Ep 2 (Step 000120): Train loss 0.435, Val loss 0.672\n", "Ep 2 (Step 000125): Train loss 0.451, Val loss 0.687\n", "Ep 2 (Step 000130): Train loss 0.447, Val loss 0.683\n", "Ep 2 (Step 000135): Train loss 0.405, Val loss 0.682\n", "Ep 2 (Step 000140): Train loss 0.409, Val loss 0.681\n", "Ep 2 (Step 000145): Train loss 0.369, Val loss 0.680\n", "Ep 2 (Step 000150): Train loss 0.382, Val loss 0.675\n", "Ep 2 (Step 000155): Train loss 0.413, Val loss 0.675\n", "Ep 2 (Step 000160): Train loss 0.415, Val loss 0.683\n", "Ep 2 (Step 000165): Train loss 0.379, Val loss 0.686\n", "Ep 2 (Step 000170): Train loss 0.323, Val loss 0.681\n", "Ep 2 (Step 000175): Train loss 0.337, Val loss 0.669\n", "Ep 2 (Step 000180): Train loss 0.392, Val loss 0.656\n", "Ep 2 (Step 000185): Train loss 0.415, Val loss 0.657\n", "Ep 2 (Step 000190): Train loss 0.340, Val loss 0.648\n", "Ep 2 (Step 000195): Train loss 0.330, Val loss 0.634\n", "Ep 2 (Step 000200): Train loss 0.310, Val loss 0.634\n", "Ep 2 (Step 000205): Train loss 0.352, Val loss 0.630\n", "Ep 2 (Step 000210): Train loss 0.367, Val loss 0.630\n", "Ep 2 (Step 000215): Train loss 0.394, Val loss 0.635\n", "Ep 2 (Step 000220): Train loss 0.299, Val loss 0.648\n", "Ep 2 (Step 000225): Train loss 0.346, Val loss 0.661\n", "Ep 2 (Step 000230): Train loss 0.292, Val loss 0.659\n", "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Convert the active sentence to passive: 'The chef cooks the meal every day.' ### Response: The meal is cooked every day by the chef.<|endoftext|>The following is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: What is the capital of the United Kingdom\n", "Training completed in 1.84 minutes.\n" ] } ], "source": [ "import time\n", "\n", "start_time = time.time()\n", "\n", "torch.manual_seed(123)\n", "\n", "# Train with the AdamW optimizer, specifying the learning rate and weight decay\n", "optimizer = torch.optim.AdamW(model.parameters(), lr=0.00005, weight_decay=0.1)\n", "\n", "num_epochs = 2\n", "\n", "train_losses, val_losses, tokens_seen = train_model_simple(\n", " model, train_loader, val_loader, optimizer, device,\n", " num_epochs=num_epochs, eval_freq=5, eval_iter=5,\n", " start_context=format_input(val_data[0]), tokenizer=tokenizer\n", ")\n", "\n", "end_time = time.time()\n", "execution_time_minutes = (end_time - start_time) / 60\n", "print(f\"Training completed in {execution_time_minutes:.2f} minutes.\")" ] }, { "cell_type": "markdown", "id": "Ise3wGjlB-iq", "metadata": { "id": "Ise3wGjlB-iq" }, "source": [ "- As we can see from the output above, the model trains well, with the training and validation loss values decreasing steadily.\n", "- Furthermore, based on the response text printed after each epoch, we can see that the model correctly follows the instruction, converting the input sentence `'The chef cooks the meal every day.'` into the passive voice `'The meal is cooked every day by the chef.'` (we will format and evaluate the responses properly in later sections).\n", "- Finally, let's take a look at the training and validation loss curves." ] }, { "cell_type": "code", "execution_count": 35, "id": "4acd368b-1403-4807-a218-9102e35bfdbb", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 308 }, "id": "4acd368b-1403-4807-a218-9102e35bfdbb", "outputId": "680da58a-9bd7-402d-ac95-470a4a29a6c4" }, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAeoAAAEiCAYAAAA21pHjAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABY5UlEQVR4nO3dd3gU1frA8e9u+qYnpDcCRAIhQKhSrCBFRIMFRRSwF4qIgvJTEfEqKqiocLFdyb0qgqggIgKhS5EeOqEnAVKA9J7snt8fCwtLCSkbNgnv53nmye7MmZn3LCHvnpkz52iUUgohhBBC1ElaawcghBBCiKuTRC2EEELUYZKohRBCiDpMErUQQghRh0miFkIIIeowSdRCCCFEHSaJWgghhKjDJFELIYQQdZgkaiGEEKIOk0QtRANy/PhxNBoNCQkJ1g5FCGEhkqiFqGM0Gk2Fy8SJE60dohDiOrK1dgBCCHOpqamm13PnzmXChAkkJiaa1rm4uFgjLCGElUiLWog6xt/f37S4u7uj0WhM7319ffnkk08IDg7GwcGBtm3bsmTJkqseS6/X8+STTxIZGUlycjIAv//+O+3atcPR0ZEmTZrwzjvvUF5ebtpHo9Hw7bffMmDAAHQ6HRERESxcuNC0PSsri8GDB+Pj44OTkxMRERHMmjXrqjH88ssvREdH4+TkhLe3Nz179qSgoMC0/dtvv6VFixY4OjoSGRnJv//9b7P9U1JSGDhwIB4eHnh5eXHfffdx/Phx0/Zhw4YRGxvL1KlTCQgIwNvbm+HDh1NWVlbpz1yIOk0JIeqsWbNmKXd3d9P7Tz75RLm5uamffvpJHThwQI0bN07Z2dmpgwcPKqWUOnbsmALUjh07VHFxsRowYICKiYlRGRkZSiml1q5dq9zc3FRcXJw6cuSIWrZsmWrcuLGaOHGi6RyACg4OVrNnz1aHDh1So0aNUi4uLurs2bNKKaWGDx+u2rZtq7Zs2aKOHTum4uPj1cKFC68Y/6lTp5Stra365JNP1LFjx9SuXbvUjBkzVF5enlJKqR9++EEFBASoX3/9VR09elT9+uuvysvLS8XFxSmllCotLVUtWrRQTz75pNq1a5fat2+fevTRR1Xz5s1VSUmJUkqpoUOHKjc3N/X888+r/fv3qz/++EPpdDr19ddfW/YfQwgrkUQtRB12aaIODAxU7733nlmZjh07qhdffFEpdSFR//3336pHjx6qe/fuKjs721S2R48e6v333zfb//vvv1cBAQGm94B68803Te/z8/MVoP766y+llFL9+/dXTzzxRKXi37ZtmwLU8ePHr7i9adOmavbs2Wbr3n33XdWlSxdTbM2bN1cGg8G0vaSkRDk5OamlS5cqpYyJOiwsTJWXl5vKPPTQQ+rhhx+uVIxC1HVyj1qIeiI3N5dTp07RrVs3s/XdunVj586dZusGDRpEcHAwK1euxMnJybR+586drF+/nvfee8+0Tq/XU1xcTGFhITqdDoDWrVubtjs7O+Pm5kZGRgYAL7zwAg888ADbt2+nV69exMbG0rVr1yvG3KZNG3r06EF0dDS9e/emV69ePPjgg3h6elJQUMCRI0d46qmneOaZZ0z7lJeX4+7ubor38OHDuLq6mh23uLiYI0eOmN5HRUVhY2Njeh8QEMDu3bsr+DSFqD8kUQvRAN1999388MMPbNy4kTvvvNO0Pj8/n3feeYf777//sn0cHR1Nr+3s7My2aTQaDAYDAH379iUpKYnFixcTHx9Pjx49GD58OFOnTr3smDY2NsTHx7NhwwaWLVvGF198wRtvvMGmTZtMXwq++eYbOnfufNl+5+Nt3749P/7442XH9vHxqVS8QtR3kqiFqCfc3NwIDAxk/fr13Hbbbab169evp1OnTmZlX3jhBVq1asW9997Ln3/+aSrfrl07EhMTadasWY1i8fHxYejQoQwdOpRbbrmFsWPHXjFRgzFpduvWjW7dujFhwgTCwsKYP38+Y8aMITA
wkKNHjzJ48OAr7tuuXTvmzp2Lr68vbm5uNYpZiPpKErUQ9cjYsWN5++23adq0KW3btmXWrFkkJCRcscU5cuRI9Ho999xzD3/99Rfdu3dnwoQJ3HPPPYSGhvLggw+i1WrZuXMne/bs4V//+lelYpgwYQLt27cnKiqKkpISFi1aRIsWLa5YdtOmTaxYsYJevXrh6+vLpk2bOH36tKn8O++8w6hRo3B3d6dPnz6UlJSwdetWsrKyGDNmDIMHD2bKlCncd999TJo0ieDgYJKSkvjtt98YN24cwcHB1f8whagnJFELUY+MGjWKnJwcXnnlFTIyMmjZsiULFy4kIiLiiuVHjx6NwWDg7rvvZsmSJfTu3ZtFixYxadIkPvzwQ+zs7IiMjOTpp5+udAz29vaMHz+e48eP4+TkxC233MKcOXOuWNbNzY21a9cybdo0cnNzCQsL4+OPP6Zv374APP300+h0OqZMmcLYsWNxdnYmOjqa0aNHA6DT6Vi7di2vvfYa999/P3l5eQQFBdGjRw9pYYsbhkYppawdhBBCCCGuTAY8EUIIIeowSdRCCCFEHSaJWgghhKjDJFELIYQQdZgkaiGEEKIOk0QthBBC1GGSqKthxowZNG7cGEdHRzp37szmzZutHZKZyZMn07FjR1xdXfH19SU2NtZsPmMwjpU8fPhwvL29cXFx4YEHHiA9Pd2sTHJyMv369UOn0+Hr68vYsWPNpkMEWL16Ne3atcPBwYFmzZoRFxd3WTzX8/P64IMP0Gg0pudwoeHV9eTJkzz22GN4e3vj5OREdHQ0W7duNW1XSjFhwgQCAgJwcnKiZ8+eHDp0yOwYmZmZDB48GDc3Nzw8PHjqqafIz883K7Nr1y5uueUWHB0dCQkJ4aOPProslnnz5hEZGYmjoyPR0dEsXrzYYvXU6/W89dZbhIeH4+TkRNOmTXn33Xe5+InS+lzXtWvX0r9/fwIDA9FoNCxYsMBse12qW2ViqW5dy8rKeO2114iOjsbZ2ZnAwECGDBnCqVOn6mVda4X15gOpn+bMmaPs7e3Vd999p/bu3aueeeYZ5eHhodLT060dmknv3r3VrFmz1J49e1RCQoK6++67VWhoqMrPzzeVef7551VISIhasWKF2rp1q7r55ptV165dTdvLy8tVq1atVM+ePdWOHTvU4sWLVaNGjdT48eNNZY4ePap0Op0aM2aM2rdvn/riiy+UjY2NWrJkianM9fy8Nm/erBo3bqxat26tXnrppQZZ18zMTBUWFqaGDRumNm3apI4ePaqWLl2qDh8+bCrzwQcfKHd3d7VgwQK1c+dOde+996rw8HBVVFRkKtOnTx/Vpk0b9c8//6i///5bNWvWTA0aNMi0PScnR/n5+anBgwerPXv2qJ9++kk5OTmpr776ylRm/fr1ysbGRn300Udq37596s0331R2dnZq9+7dFqnre++9p7y9vdWiRYvUsWPH1Lx585SLi4v67LPPGkRdFy9erN544w3122+/KUDNnz/fbHtdqltlYqluXbOzs1XPnj3V3Llz1YEDB9TGjRtVp06dVPv27c2OUV/qWhskUVdRp06d1PDhw03v9Xq9CgwMVJMnT7ZiVBXLyMhQgFqzZo1Syvgfw87OTs2bN89UZv/+/QpQGzduVEoZ/2NptVqVlpZmKjNz5kzl5uZmmgd43LhxKioqyuxcDz/8sOrdu7fp/fX6vPLy8lRERISKj49Xt912mylRN7S6vvbaa6p79+5X3W4wGJS/v7+aMmWKaV12drZycHBQP/30k1JKqX379ilAbdmyxVTmr7/+UhqNRp08eVIppdS///1v5enpaar/+XM3b97c9H7gwIGqX79+Zufv3Lmzeu6552pWyXP69eunnnzySbN1999/vxo8eHCDq+ulyasu1a0ysdSkrleyefNmBaikpKR6XVdLkUvfVVBaWsq2bdvo2bOnaZ1Wq6Vnz55s3LjRipFVLCcnBwAvLy8Atm3bRllZmVk9IiM
jCQ0NNdVj48aNREdH4+fnZyrTu3dvcnNz2bt3r6nMxcc4X+b8Ma7n5zV8+HD69et3WTwNra4LFy6kQ4cOPPTQQ/j6+hITE8M333xj2n7s2DHS0tLM4nB3d6dz585m9fXw8KBDhw6mMj179kSr1bJp0yZTmVtvvRV7e3uz+iYmJpKVlWUqU9FnUlNdu3ZlxYoVHDx4EDBOeblu3TrT8KMNqa6Xqkt1q0wslpaTk4NGo8HDw6PB17UyJFFXwZkzZ9Dr9WZ/0AH8/PxIS0uzUlQVMxgMjB49mm7dutGqVSsA0tLSsLe3N/0nOO/ieqSlpV2xnue3VVQmNzeXoqKi6/Z5zZkzh+3btzN58uTLtjW0uh49epSZM2cSERHB0qVLeeGFFxg1ahT//e9/zeKtKI60tDR8fX3Nttva2uLl5WWRz8RS9X399dd55JFHiIyMxM7OjpiYGEaPHm2aaash1fVSdalulYnFkoqLi3nttdcYNGiQaTz3hlrXypJJORq44cOHs2fPHtatW2ftUGpFSkoKL730EvHx8WbzKTdUBoOBDh068P777wMQExPDnj17+PLLLxk6dKiVo7Osn3/+mR9//JHZs2cTFRVFQkICo0ePJjAwsMHVVRiVlZUxcOBAlFLMnDnT2uHUGdKiroJGjRphY2NzWY/h9PR0/P39rRTV1Y0YMYJFixaxatUqs+kA/f39KS0tJTs726z8xfXw9/e/Yj3Pb6uojJubG05OTtfl89q2bRsZGRm0a9cOW1tbbG1tWbNmDZ9//jm2trb4+fk1mLoCBAQE0LJlS7N1LVq0IDk52SzeiuLw9/cnIyPDbHt5eTmZmZkW+UwsVd+xY8eaWtXR0dE8/vjjvPzyy6YrJw2prpeqS3WrTCyWcD5JJyUlER8fbzY7WkOra1VJoq4Ce3t72rdvz4oVK0zrDAYDK1asoEuXLlaMzJxSihEjRjB//nxWrlxJeHi42fb27dtjZ2dnVo/ExESSk5NN9ejSpQu7d+82+89x/j/P+UTRpUsXs2OcL3P+GNfj8+rRowe7d+8mISHBtHTo0IHBgwebXjeUugJ069btskftDh48SFhYGADh4eH4+/ubxZGbm8umTZvM6pudnc22bdtMZVauXInBYKBz586mMmvXrqWsrMysvs2bN8fT09NUpqLPpKYKCwvRas3/RNnY2GAwGBpcXS9Vl+pWmVhq6nySPnToEMuXL8fb29tse0Oqa7VYrRtbPTVnzhzl4OCg4uLi1L59+9Szzz6rPDw8zHoMW9sLL7yg3N3d1erVq1VqaqppKSwsNJV5/vnnVWhoqFq5cqXaunWr6tKli+rSpYtp+/lHlnr16qUSEhLUkiVLlI+PzxUfWRo7dqzav3+/mjFjxhUfWbren9fFvb4bWl03b96sbG1t1XvvvacOHTqkfvzxR6XT6dQPP/xgKvPBBx8oDw8P9fvvv6tdu3ap++6774qP9cTExKhNmzapdevWqYiICLNHXbKzs5Wfn596/PHH1Z49e9ScOXOUTqe77FEXW1tbNXXqVLV//3719ttvW/TxrKFDh6qgoCDT41m//fabatSokRo3blyDqGteXp7asWOH2rFjhwLUJ598onbs2GHq6VyX6laZWKpb19LSUnXvvfeq4OBglZCQYPY36+Ie3PWlrrVBEnU1fPHFFyo0NFTZ29urTp06qX/++cfaIZkBrrjMmjXLVKaoqEi9+OKLytPTU+l0OjVgwACVmppqdpzjx4+rvn37KicnJ9WoUSP1yiuvqLKyMrMyq1atUm3btlX29vaqSZMmZuc473p/Xpcm6oZW1z/++EO1atVKOTg4qMjISPX111+bbTcYDOqtt95Sfn5+ysHBQfXo0UMlJiaalTl79qwaNGiQcnFxUW5ubuqJJ55QeXl5ZmV27typunfvrhwcHFRQUJD64IMPLovl559/VjfddJOyt7dXUVFR6s8//7RYPXNzc9VLL72kQkNDlaOjo2rSpIl64403zP5
41+e6rlq16or/T4cOHVrn6laZWKpb12PHjl31b9aqVavqXV1rg0api4b5EUIIIUSdIveohRBCiDpMErUQQghRh0miFkIIIeowSdRCCCFEHSaJWgghhKjDJFELIYQQdZgk6moqKSlh4sSJlJSUWDuUWncj1RVurPpKXRuuG6m+Db2u8hx1NeXm5uLu7k5OTo7ZmLQN0Y1UV7ix6it1bbhupPo29LpKi1oIIYSowyRRCyGEEHXYDTcfdXl5OTt27MDPz++ymXmqIi8vD4CTJ0+Sm5trqfDqpBuprnBj1Vfq2nDdSPWtj3U1GAykp6cTExODrW3FqfiGu0e9ZcsWOnXqZO0whBBCCDZv3kzHjh0rLHPDtaj9/PwA44cTEBBg5WiEEELciFJTU+nUqZMpJ1XkhkvU5y93BwQEEBwcbOVohBBC3MgqcwtWOpMJIYQQdZgkaiGEEKIOk0QthBBC1GE33D1qIYSoiF6vp6yszNphiHrOzs4OGxsbixxLEnUN7DmZw6nsItqEeODn5mjtcIQQNaCUIi0tjezsbGuHIhoIDw8P/P390Wg0NTqOJOoamLRoH5uPZTL90RjuaR1o7XCEEDVwPkn7+vqi0+lq/MdV3LiUUhQWFpKRkQFQ40eBJVHXwG1qK51tEtCc0oAkaiHqLb1eb0rS3t7e1g5HNABOTk4AZGRk4OvrW6PL4NKZrAZuKVrBK3a/4JyxzdqhCCFq4Pw9aZ1OZ+VIRENy/veppn0eJFHXgMHR0/iiMNO6gQghLEIudwtLstTvkyTqGlBOXgBoiyVRCyGEqB2SqGtA62y8l2VXmm3dQIQQwoIaN27MtGnTKl1+9erVaDSaWu8xHxcXh4eHR62eoy6yaqKePHkyHTt2xNXVFV9fX2JjY0lMTKxwn7i4ODQajdni6GidR6PsXBsB4FCaY5XzCyFubJf+Lbx0mThxYrWOu2XLFp599tlKl+/atSupqam4u7tX63yiYlbt9b1mzRqGDx9Ox44dKS8v5//+7//o1asX+/btw9nZ+ar7ubm5mSV0a91XcnQzJmqdXhK1EOL6S01NNb2eO3cuEyZMMPvb6OLiYnqtlEKv119z7mMAHx+fKsVhb2+Pv79/lfYRlWfVFvWSJUsYNmwYUVFRtGnThri4OJKTk9m2reJe1BqNBn9/f9NSmWnCaoOzhy8Arob6MVG5EKJhufjvoLu7u9nfxgMHDuDq6spff/1F+/btcXBwYN26dRw5coT77rsPPz8/XFxc6NixI8uXLzc77qWXvjUaDd9++y0DBgxAp9MRERHBwoULTdsvvfR9/hL10qVLadGiBS4uLvTp08fsi0V5eTmjRo3Cw8MDb29vXnvtNYYOHUpsbGyVPoOZM2fStGlT7O3tad68Od9//71pm1KKiRMnEhoaioODA4GBgYwaNcq0/d///jcRERE4Ojri5+fHgw8+WKVzXy916h51To6xZerl5VVhufz8fMLCwggJCeG+++5j79691yO8y7h4GhO1B3kUleqtEoMQonYopSgsLbfKopSyWD1ef/11PvjgA/bv30/r1q3Jz8/n7rvvZsWKFezYsYM+ffrQv39/kpOTKzzOO++8w8CBA9m1axd33303gwcPJjPz6h1pCwsLmTp1Kt9//z1r164lOTmZV1991bT9ww8/5Mcff2TWrFmsX7+e3NxcFixYUKW6zZ8/n5deeolXXnmFPXv28Nxzz/HEE0+watUqAH799Vc+/fRTvvrqKw4dOsSCBQuIjo4GYOvWrYwaNYpJkyaRmJjIkiVLuPXWW6t0/uulzgx4YjAYGD16NN26daNVq1ZXLde8eXO+++47WrduTU5ODlOnTqVr167s3bv3ivNLl5SUUFJSYnqfl5dnsZh1HsbLQ86aEk7m5hHUyMNixxZCWFdRmZ6WE5Za5dz7JvVGZ2+ZP8+TJk3irrvuMr338vKiTZs2pvfvvvsu8+fPZ+HChYwYMeKqxxk2bBiDBg0C4P333+fzzz9
n8+bN9OnT54rly8rK+PLLL2natCkAI0aMYNKkSabtX3zxBePHj2fAgAEATJ8+ncWLF1epblOnTmXYsGG8+OKLAIwZM4Z//vmHqVOncscdd5CcnIy/vz89e/bEzs6O0NBQOnXqBEBycjLOzs7cc889uLq6EhYWRkxMTJXOf73UmRb18OHD2bNnD3PmzKmwXJcuXRgyZAht27bltttu47fffsPHx4evvvrqiuUnT56Mu7u7aWnZsqXFYtY4elB+7iPMy0y32HGFEMJSOnToYPY+Pz+fV199lRYtWuDh4YGLiwv79++/Zou6devWptfOzs64ubmZhsi8Ep1OZ0rSYBxG83z5nJwc0tPTTUkTwMbGhvbt21epbvv376dbt25m67p168b+/fsBeOihhygqKqJJkyY888wzzJ8/n/LycgDuuusuwsLCaNKkCY8//jg//vgjhYWFVTr/9VInWtQjRoxg0aJFrF279oqt4orY2dkRExPD4cOHr7h9/PjxjBkzxvT+5MmTlkvWGg35Glc8VA4FWRlAc8scVwhhdU52Nuyb1Ntq57aUSzvmvvrqq8THxzN16lSaNWuGk5MTDz74IKWlpRUex87Ozuy9RqPBYDBUqbwlL+lXRkhICImJiSxfvpz4+HhefPFFpkyZwpo1a3B1dWX79u2sXr2aZcuWMWHCBCZOnMiWLVvq3CNgVm1RK6UYMWIE8+fPZ+XKlYSHh1f5GHq9nt27d1910HMHBwfc3NxMi6ura03DNlNg4wZAce5pix5XCGFdGo0Gnb2tVZbafJJl/fr1DBs2jAEDBhAdHY2/vz/Hjx+vtfNdibu7O35+fmzZssW0Tq/Xs3379iodp0WLFqxfv95s3fr1680aY05OTvTv35/PP/+c1atXs3HjRnbv3g2Ara0tPXv25KOPPmLXrl0cP36clStX1qBmtcOqLerhw4cze/Zsfv/9d1xdXUlLSwOM/4jnBzQfMmQIQUFBTJ48GTDeb7n55ptp1qwZ2dnZTJkyhaSkJJ5++mmr1CHDsTE5uVpyiqUzmRCi7ouIiOC3336jf//+aDQa3nrrrQpbxrVl5MiRTJ48mWbNmhEZGckXX3xBVlZWlb6kjB07loEDBxITE0PPnj35448/+O2330y92OPi4tDr9XTu3BmdTscPP/yAk5MTYWFhLFq0iKNHj3Lrrbfi6enJ4sWLMRgMNG9e966MWjVRz5w5E4Dbb7/dbP2sWbMYNmwYYLzhr9VeaPhnZWXxzDPPkJaWhqenJ+3bt2fDhg0WvfdcFb81+4Dv/0lilEMz7rZKBEIIUXmffPIJTz75JF27dqVRo0a89tpr5OZe/0dMX3vtNdLS0hgyZAg2NjY8++yz9O7du0qzTMXGxvLZZ58xdepUXnrpJcLDw5k1a5Ypp3h4ePDBBx8wZswY9Ho90dHR/PHHH3h7e+Ph4cFvv/3GxIkTKS4uJiIigp9++omoqKhaqnH1adT1vmlgZSdOnCAkJISUlJQq3w+/kk/iD/L5ikM8dnMo/4qNtkCEQojrrbi4mGPHjhEeHm61kQ5vdAaDgRYtWjBw4EDeffdda4djERX9XlUlF9WJzmT1mZfO2GEiq6Bm05gJIcSNJCkpiWXLlnHbbbdRUlLC9OnTOXbsGI8++qi1Q6tz6szjWfVVdOYSVti/Qv9Tn1o7FCGEqDe0Wi1xcXF07NiRbt26sXv3bpYvX06LFi2sHVqdIy3qGnK1NdBUm8qZklPWDkUIIeqNkJCQy3psiyuTRF1DhqY9eXhtESW2/iywdjBCCCEaHEnUNeTmG8om1QLbQuPD/NaayUsIIUTDJPeoa8hTZw9AuUGRV1Ju5WiEEEI0NNKiriEnrZ4n7ZfjrM8lK687bo4ycboQQgjLkURdUxotE7TfgRZ2Z44HH0nUQgghLEcufdeUjS35GuOg9wU5V59JRgghhKgOSdQWUKA1TsxRlHPGypEIIUTV3X777YwePdr0vnHjxkybNq3CfTQaDQs
WLKjxuS11nIpMnDiRtm3b1uo5apMkagsotvMAoDRXErUQ4vrp378/ffr0ueK2v//+G41Gw65du6p83C1btvDss8/WNDwzV0uWqamp9O3b16LnamgkUVtAqb0HAPoCSdRCiOvnqaeeIj4+nhMnTly2bdasWXTo0IHWrVtX+bg+Pj7odDpLhHhN/v7+ODg4XJdz1VeSqC1A7+hhfFGYadU4hBA3lnvuuQcfHx/i4uLM1ufn5zNv3jyeeuopzp49y6BBgwgKCkKn0xEdHc1PP/1U4XEvvfR96NAhbr31VhwdHWnZsiXx8fGX7fPaa69x0003odPpaNKkCW+99RZlZcY5EOLi4njnnXfYuXMnGo0GjUZjivnSS9+7d+/mzjvvxMnJCW9vb5599lny8/NN24cNG0ZsbCxTp04lICAAb29vhg8fbjpXZRgMBiZNmkRwcDAODg60bduWJUuWmLaXlpYyYsQIAgICcHR0JCwszDTVslKKiRMnEhoaioODA4GBgYwaNarS564O6fVtAcrJCwBNUZaVIxFCWFxpQdX3sXEAm3N/XvXloC8BjRbsnK59XHvnSp/G1taWIUOGEBcXxxtvvGEacGnevHno9XoGDRpEfn4+7du357XXXsPNzY0///yTxx9/nKZNm9KpU6drnsNgMHD//ffj5+fHpk2byMnJMbuffZ6rqytxcXEEBgaye/dunnnmGVxdXRk3bhwPP/wwe/bsYcmSJaa5ot3dL39CpqCggN69e9OlSxe2bNlCRkYGTz/9NCNGjDD7MrJq1SoCAgJYtWoVhw8f5uGHH6Zt27Y888wzlfrcPvvsMz7++GO++uorYmJi+O6777j33nvZu3cvERERfP755yxcuJCff/6Z0NBQUlJSSElJAeDXX3/l008/Zc6cOURFRZGWlsbOnTsrdd7qkkRtAVrnRgDYlWZbNxAhhOW9H1j1fR6Kg6gBxtcH/oB5wyCsOzzx54Uy06Kh8Ozl+07MqdKpnnzySaZMmcKaNWtM8zDPmjWLBx54AHd3d9zd3Xn11VdN5UeOHMnSpUv5+eefK5Woly9fzoEDB1i6dCmBgcbP4v3337/svvKbb75pet24cWNeffVV5syZw7hx43BycsLFxQVbW1v8/f2veq7Zs2dTXFzM//73P5ydjV9Ypk+fTv/+/fnwww/x8/MDwNPTk+nTp2NjY0NkZCT9+vVjxYoVlU7UU6dO5bXXXuORRx4B4MMPP2TVqlVMmzaNGTNmkJycTEREBN27d0ej0RAWFmbaNzk5GX9/f3r27ImdnR2hoaGV+hxrQi59W4CdizcAjmXSohZCXF+RkZF07dqV7777DoDDhw/z999/89RTTwGg1+t59913iY6OxsvLCxcXF5YuXUpycnKljr9//35CQkJMSRqgS5cul5WbO3cu3bp1w9/fHxcXF958881Kn+Pic7Vp08aUpAG6deuGwWAgMTHRtC4qKgobGxvT+4CAADIyKvd4bG5uLqdOnaJbt25m67t168b+/fsB4+X1hIQEmjdvzqhRo1i2bJmp3EMPPURRURFNmjThmWeeYf78+ZSX1+6olNKitgBHNx8AdOW5Vo5ECGFx/1eNmfFsLuocFdnfeAzNJe2i0btrFtdFnnrqKUaOHMmMGTOYNWsWTZs25bbbbgNgypQpfPbZZ0ybNo3o6GicnZ0ZPXo0paWlFjv/xo0bGTx4MO+88w69e/fG3d2dOXPm8PHHH1vsHBezs7Mze6/RaDAYDBY7frt27Th27Bh//fUXy5cvZ+DAgfTs2ZNffvmFkJAQEhMTWb58OfHx8bz44oumKxqXxmUp0qK2ACcP46VvF0MueoOycjRCCIuyd676YnNRG8jG1rju4vvTFR23GgYOHIhWq2X27Nn873//48knnzTdr16/fj333Xcfjz32GG3atKFJkyYcPHiw0sdu0aIFKSkppKammtb9888/ZmU2bNhAWFgYb7zxBh06dCAiIoKkpCTz6trbo9frr3m
unTt3UlBw4f79+vXr0Wq1NG/evNIxV8TNzY3AwMDLpthcv349LVu2NCv38MMP88033zB37lx+/fVXMjONHYadnJzo378/n3/+OatXr2bjxo3s3m25L16Xkha1Bbh4nrtvosknt6gMT2d7K0ckhLiRuLi48PDDDzN+/Hhyc3MZNmyYaVtERAS//PILGzZswNPTk08++YT09HSzpFSRnj17ctNNNzF06FCmTJlCbm4ub7zxhlmZiIgIkpOTmTNnDh07duTPP/9k/vz5ZmUaN27MsWPHSEhIIDg4GFdX18seyxo8eDBvv/02Q4cOZeLEiZw+fZqRI0fy+OOPm+5PW8LYsWN5++23adq0KW3btmXWrFkkJCTw448/AvDJJ58QEBBATEwMWq2WefPm4e/vj4eHB3Fxcej1ejp37oxOp+OHH37AycnJ7D62pUmL2gLsXH1Jw5tTypvMghJrhyOEuAE99dRTZGVl0bt3b7P7yW+++Sbt2rWjd+/e3H777fj7+xMbG1vp42q1WubPn09RURGdOnXi6aef5r333jMrc++99/Lyyy8zYsQI2rZty4YNG3jrrbfMyjzwwAP06dOHO+64Ax8fnys+IqbT6Vi6dCmZmZl07NiRBx98kB49ejB9+vSqfRjXMGrUKMaMGcMrr7xCdHQ0S5YsYeHChURERADGHuwfffQRHTp0oGPHjhw/fpzFixej1Wrx8PDgm2++oVu3brRu3Zrly5fzxx9/4O3tbdEYL6ZRSt1Q12pPnDhBSEgIKSkpBAcHW+y4t01ZRdLZQn55vgsdGntZ7LhCiNpXXFzMsWPHCA8Px9HR0drhiAaiot+rquQiaVFbyPl5qTMLLNdBQwghhJBEbSFe5+5LZxVKohZCCGE5kqgt5IWsqay0H4PTifXXLiyEEEJUkiRqC2mkztBEm4bKS712YSGEEKKSrJqoJ0+eTMeOHXF1dcXX15fY2Fiz0WeuZt68eURGRuLo6Eh0dDSLFy++DtFWbFuzUTxUMoFtdu2tHYoQQogGxKqJes2aNQwfPpx//vmH+Ph4ysrK6NWrl9nD7pfasGEDgwYN4qmnnmLHjh3ExsYSGxvLnj17rmPklyv3j2GLiuRkyfWZGk4IYXmWHN1KCEv9Pll1wJOLpxUD41Rovr6+bNu2jVtvvfWK+3z22Wf06dOHsWPHAvDuu+8SHx/P9OnT+fLLL2s95qs5P8hJpnQmE6Lesbe3R6vVcurUKXx8fLC3tzeN7CVEVSmlKC0t5fTp02i1WuztazYIVp0amSwnxzhrjJfX1Z9D3rhxI2PGjDFb17t3b7P5TK0hsDyFITZL0eT4Ad2uWV4IUXdotVrCw8NJTU3l1KlqjO0txBXodDpCQ0PRamt28brOJGqDwcDo0aPp1q0brVq1umq5tLS0y4aS8/PzIy0t7YrlS0pKKCm5MFpYXl6eZQK+hE/uPibZ/Zd/SqKBN65ZXghRt9jb2xMaGkp5efk1x6QW4lpsbGywtbW1yJWZOpOohw8fzp49e1i3bp1Fjzt58mTeeecdix7zSpw9fQFwNeRSpjdgZyMd6oWobzQaDXZ2drU2C5IQ1VEnssmIESNYtGgRq1atuuZQav7+/qSnp5utS09Pv+pk5OPHjycnJ8e07Nu3z2JxX0znbkzUnpo8GfRECCGExVg1USulGDFiBPPnz2flypWEh4dfc58uXbqwYsUKs3Xx8fFXnMgcwMHBATc3N9Pi6upqkdgvZeNsvK/uST5ZBWW1cg4hhBA3Hqte+h4+fDizZ8/m999/x9XV1XSf2d3dHScn49ytQ4YMISgoiMmTJwPw0ksvcdttt/Hxxx/Tr18/5syZw9atW/n666+tVg8AdMZE7aQpJTsnF/xr5wuBEEKIG4tVW9QzZ84kJyeH22+/nYCAANMyd+5cU5nk5GSzCcu7du3K7Nmz+frrr2nTpg2//PILCxYsqLAD2nXh4EY5NgAUZGdYNxYhhBANhlVb1JWZYXP16tWXrXv
ooYd46KGHaiGiGtBoKLRxxU2fTWHOaWtHI4QQooGoE53JGooiWw8AyvLOWDcQIYQQDYYkagsqtfcAoDxfErUQQgjLkERtQeUOnsYXhZnWDUQIIUSDIYnaks71/NYUZ1k5ECGEEA2FJGoL0p5L1HYlkqiFEEJYhiRqC7JxD+SEakRWWc1mShFCCCHOqzNjfTcE+k7Pcdua5uiUDcOsHYwQQogGQVrUFnR+TurCUj3FZTL7jhBCiJqTRG1Brg622GqNU5rJxBxCCCEsQS59W5Am9xTzHSaAvozMgrUEuDtZOyQhhBD1nCRqS7J1IFodAi2syysC3K0dkRBCiHpOErUlOXky1XMCm9Lg8SKZ6lIIIUTNyT1qS9LacLTR7WxRkWQVSmcyIYQQNSeJ2sI8dcae35kF0plMCCFEzcmlbwuLKduBjc1WbDIBbrJ2OEIIIeo5aVFbWOf0uUyy+y/emdutHYoQQogGQBK1hSkn43jfWpmYQwghhAVIorYwrbM3ALbF2dYNRAghRIMgidrCbF0aAeBQlm3dQIQQQjQIkqgtzMHN2KLWleeglLJyNEIIIeo7SdQWpvPwBcCNPApL5VlqIYQQNVOtRJ2SksKJEydM7zdv3szo0aP5+uuvLRZYfWXvarz07Um+PEsthBCixqqVqB999FFWrVoFQFpaGnfddRebN2/mjTfeYNKkSRYNsL7R6IyXvj01eTKDlhBCiBqrVqLes2cPnTp1AuDnn3+mVatWbNiwgR9//JG4uDhLxlf/6IyPZ3mQT2Z+iZWDEUIIUd9VK1GXlZXh4OAAwPLly7n33nsBiIyMJDU11XLR1UfnnqO21RjIyzlr5WCEEELUd9VK1FFRUXz55Zf8/fffxMfH06dPHwBOnTqFt7d3pY+zdu1a+vfvT2BgIBqNhgULFlRYfvXq1Wg0msuWtLS06lSjdtg5UqJxBKAw+7SVgxFCCFHfVStRf/jhh3z11VfcfvvtDBo0iDZt2gCwcOFC0yXxyigoKKBNmzbMmDGjSudPTEwkNTXVtPj6+lZp/9pWZGuch7osTxK1EEKImqnWpBy33347Z86cITc3F09PT9P6Z599Fp1OV+nj9O3bl759+1b5/L6+vnh4eFR5v+sl3ymQ/FI9+UXF1g5FCCFEPVetFnVRURElJSWmJJ2UlMS0adNITEy8Lq3btm3bEhAQwF133cX69etr/XxVtbJLHN1LPieBFtYORQghRD1XrUR933338b///Q+A7OxsOnfuzMcff0xsbCwzZ860aIAXCwgI4Msvv+TXX3/l119/JSQkhNtvv53t268+U1VJSQm5ubmmJS8vr9biO880J7U8niWEEKKGqpWot2/fzi233ALAL7/8gp+fH0lJSfzvf//j888/t2iAF2vevDnPPfcc7du3p2vXrnz33Xd07dqVTz/99Kr7TJ48GXd3d9PSsmXLWovvPC9nY6LOkgFPhBBC1FC1EnVhYSGurq4ALFu2jPvvvx+tVsvNN99MUlKSRQO8lk6dOnH48OGrbh8/fjw5OTmmZd++fbUeU+OTi1hg/xYP5X1f6+cSQgjRsFUrUTdr1owFCxaQkpLC0qVL6dWrFwAZGRm4ublZNMBrSUhIICAg4KrbHRwccHNzMy3nv2DUJleVR1vtEYLKkzEYZGIOIYQQ1VetXt8TJkzg0Ucf5eWXX+bOO++kS5cugLF1HRMTU+nj5Ofnm7WGjx07RkJCAl5eXoSGhjJ+/HhOnjxpuh8+bdo0wsPDiYqKori4mG+//ZaVK1eybNmy6lSj1ji07MvTy7JIVr50Ly7HXWdn7ZCEEELUU9VK1A8++CDdu3cnNTXV9Aw1QI8ePRgwYEClj7N161buuOMO0/sxY8YAMHToUOLi4khNTSU5Odm0vbS0lFdeeYWTJ0+i0+lo3bo1y5cvNztGXeDg24x/7DqTX1JOZmGpJGohhBDVplE1nDT5/CxawcHBFgmotp04cYKQkBBSUlJqNeZbPlpJSmY
Rv77QlfZhntfeQQghxA2jKrmoWveoDQYDkyZNwt3dnbCwMMLCwvDw8ODdd9/FYDBUK+gGpayYAdr1DLFZKj2/hRBC1Ei1Ln2/8cYb/Oc//+GDDz6gW7duAKxbt46JEydSXFzMe++9Z9Eg6x1DGWPyp4Id/Jo3AvCzdkRCCCHqqWol6v/+9798++23plmzAFq3bk1QUBAvvviiJGp7F8qxxZZyirMzgJusHZEQQoh6qlqXvjMzM4mMjLxsfWRkJJmZmTUOqt7TaCiyM07MUZJ3xsrBCCGEqM+qlajbtGnD9OnTL1s/ffp0WrduXeOgGoJSOw8AyvNlTmohhBDVV61L3x999BH9+vVj+fLlpmeoN27cSEpKCosXL7ZogPVVuaMnFIIqkCsMQgghqq9aLerbbruNgwcPMmDAALKzs8nOzub+++9n7969fP+9DJsJoJyMj2RpirOsHIkQQoj6rFotaoDAwMDLOo3t3LmT//znP3z99dc1Dqy+0+q8AbAtkUQthBCi+qrVohbXZutiTNSOpZKohRBCVJ8k6lri4OYDgE6fS7leBoERQghRPZKoa4mjuzFRe5BPTlGZlaMRQghRX1XpHvX9999f4fbs7OyaxNKg2DgbL317aPLIKizF28XByhEJIYSoj6qUqN3d3a+5fciQITUKqMFw8gLAk3zSC6RFLYQQonqqlKhnzZpVW3E0PDpvCjVOFOJIpkzMIYQQoprkHnVtadSMUY3/oG/pB2QVSqIWQghRPZKoa5Gnzh5AWtRCCCGqTRJ1LfJyNiZqmZNaCCFEdUmirkUDTnzEAvs3KUvZZu1QhBBC1FOSqGtRY0MSbbVHST9xhPTcYmuHI4QQoh6SRF2LHO96iw/c32abPoLftp+0djhCCCHqIUnUtanpHYR3f5DTeDBvWwpKKWtHJIQQop6RRF3L+rUOxMnOhqOnC9ienG3tcIQQQtQzkqhrU1YSLom/MTlwLVoMzNuaYu2IhBBC1DOSqGuT0sMfo4lNn8HzNn+waFcqhaXl1o5KCCFEPSKJujZ5NYG7PwJgjN08Ikr3s2RPmpWDEkIIUZ9Ioq5tbQdDqwewxcBndtNZtDnR2hEJIYSoR6yaqNeuXUv//v0JDAxEo9GwYMGCa+6zevVq2rVrh4ODA82aNSMuLq7W46wRjQbu+ZRytxBCtae57+RUUs4WWDsqIYQQ9YRVE3VBQQFt2rRhxowZlSp/7Ngx+vXrxx133EFCQgKjR4/m6aefZunSpbUcaQ05umP70Hfo0XKfzQb2LfnK2hEJIYSoJ6o0zaWl9e3bl759+1a6/Jdffkl4eDgff/wxAC1atGDdunV8+umn9O7du7bCtIyQTiRGjqDlgc+55dCHGE73R+sTYe2ohBBC1HH16h71xo0b6dmzp9m63r17s3HjxqvuU1JSQm5urmnJy8ur7TCvKjz2LTarKHQUU/jTUCiXyTqEEEJUrF4l6rS0NPz8/MzW+fn5kZubS1FR0RX3mTx5Mu7u7qalZcuW1yPUK3JytGdFy3+RpVxwydwLK96xWixCCCHqh3qVqKtj/Pjx5OTkmJZ9+/ZZNZ7eXWIYV/as8c3G6XB4uVXjEUIIUbfVq0Tt7+9Penq62br09HTc3NxwcnK64j4ODg64ubmZFldX1+sR6lXFhHhw1Ps2/lt+l3HF/OehOMeqMQkhhKi76lWi7tKlCytWrDBbFx8fT5cuXawUUdVpNBoe6hDC++WDOWDXAnq9B47uxo36MusGJ4QQos6xaqLOz88nISGBhIQEwPj4VUJCAsnJyYDxsvWQIUNM5Z9//nmOHj3KuHHjOHDgAP/+97/5+eefefnll60RfrXdHxNEudaBPnlvcjig34UNf46Br26Vy+FCCCFMrJqot27dSkxMDDExMQCMGTOGmJgYJkyYAEBqaqopaQOEh4fz559/Eh8fT5s2bfj444/59ttv6/6jWZfwdXPk9pt8AA2/bDthXGkwQOJfkLoTtHYXCmenwJl
DIFNkCiHEDUmjbrBJkk+cOEFISAgpKSkEBwdbLY4le1J5/oft+Lo6sOH1O7G10ULBGUhcDG0eBZtzj7j/9Rps+hJ03hDc0biEdILAduDgYrX4hRBCVF9VcpFVBzy5kd0Z6YeXsz0ZeSWsPXSaOyP9wLkRtBtiXrAoG2zsofAsHFxiXAA0WvCLguBOxsQd0Aa8Iy4keCGEEA2CtKit6J0/9jJr/XEifF24q6UfYd46wrydCfPW4efqiFarMRYsL4HUXXBiM5zYAilbIPfE5Qe0dQK/ltDpOWjz8PWtjBBCiEqTFnU98XDHEOI2HOdQRj6HMvLNtjnYagnx0tHYW0fbEA+evqUdjiEdLxTIPQUp5xL3ia2QvgdK8+HkNijJvVAudRfMfw4a32KaclMIIUT9IYnaiiL93fjl+S7sSM7m+NkCks4WkpxZyImsIkrKDRzOyOdwRj7L92ewIOEU0x5uS6ugc49yuQVCVKxxAWNntMyjkLYTgtpfOElqAmTsM15Wv9gPD4KzDwTGQGBb8I8Guys/iy6EEMJ6JFFbWfswL9qHeZmtK9cbOJVdTFJmAUcy8pmx+giHM/IZ8O/1jLmrOc/e2gSb85fFz9NqoVEz43Kx5v1gkA/YOlxYV5gJh+ONr3fOPre/HQS1g7CuENoVQjtfeL5bCCGE1cg96nogs6CU//ttN0v2pgHQKdyLTwa2IdhTV70DlhXBkVXG1vapBDi1AwoyzMtotODXypi4w7oaO6u5hxq/EADoy41ltPVqzBwhhKgTqpKLJFHXE0op5m07wTsL91JQqsfVwZZJsVHEtg1Co9Fc+wBXUVpuYNWBdPz1abQx7IOkDZC0HrKOXV74jbQLl8d/ew52zYFe/4KuI43rck7C0vHg1RS8m1746ewDNYhRCCEaGulM1gBpNBoGdgjh5nBvXv45gW1JWbw8dycr9mfwXmw07jq7ax/kIkdP5zNnSwq/bDtBZoFxus2nu8cw7p5B2NtqITcVkjecS9wb4Oxh84FYDOeGO7143ekDsO/3y09m7wreTcDZFxzdwMHtop/uxkfSzl+aV0qSuhBCXERa1PVQud7AzNVHmLbiEHqDws/NgR4t/IgOcqdVoDs3+bvgYGtz2X7FZXqW7Enjp83JbDqWaVrv7WzP2XPJuk2wO18Makeo9zUuq5cWQFkx2DmCvbNxXVYSHFgEZ49A5hE4exRyUoBr/Iq9eRps7Y2vF70Mh1fA7eOh7aBzFS4FpZfObkKIBkNa1A2crY2WkT0iuOUmH16em8CxMwXM3nRhqFU7Gw03+bnSKtCdVsHuhHs7s+JAOvN3nCS70NgS1mrgjua+DOoUyu3NfViVeJpX5+1k54kc+n3+Nx880Jp+rQOuHoS984UEfZ5nGHQZbr6uvASyjht7pBeeheJc4+NjxblQkmO8X34+SQOk74PsJLC5qKWetA71wwOctQ/C4BOJd5N22PhHgW8UeIWD9vIvJUII0VBIi7qeKyrVsyoxg10ncth7KofdJ3NMyfhKAt0debhjKAM7BhPgbt5CPZldxKifdrAtKQuAx24O5c1+LXG0u46JsOAsnN4PPpGmR8o2z/2ATvsnX7G4snVC49PcOEqbbwtwDQCdl/Eyu3+r6xe3EEJUgXQmq0BDS9SXUkpxIqvIlLT3nMzlcEY+UYFuDOocyq0RPpc/2nWRMr2BT+MPMnPNEZSCSH9Xpj/ajma+139ccb1B8f7i/fxn3VF8yObewBx0WYmElB2nuTaFmzQncNKUXnlnjzAYvevC+9kPQ84J6P85BJ97zjz5H+OlentXcHA1jp1u63T5PXLTew3YOkLk3Re2nU40XjXwDJPH2YQQlSaXvm9gGo2GEC8dIV46+rSq4NL1VdjZaBnXJ5Kbm3gz5ucEDqTlce/0dUy8N4oH2gVXmOQtqbC0nFE/JbB8fzqgYchdnRlxZzPKDYp1h87wnx0nWb7vFL7lqTTXpBCpSaGz62kiXUvw1OShcb/kFz9
jv/GSuqH8wrqT22HDF1ULzMXfPFEvHAUp/8DA76HlvcZ1h5fDn6+AVxPwDDdenjf9bHz5LQMhhKiAJGpxRbfe5MPiUbcwem4CG46cZdwvu5i6NJHYmCDubxdEpL9brZ07PbeYp/67hT0nc7G31TLlwdbc1zYIMN5/vyPSlzsifckrbsWSPWksSDjJ50fOorKBbLglohFv39USs6FfHv7B+Ky4T/ML6wLaQJcRxqFXS/KhJA/Ki89tPHeh6dILTpeO8KbzMibvi2cyO3PYeF8+6/iVK+gaYIzDJ9L4s9G5187eVfiUhBA3Crn0LSqkNyi+XHOEb/8+StZF975bBrjxQPtg7m0TiI+rQwVHqJp9p3J56r9bSM0pxsvZnm+GtL9s5LYrSc0pIm7DcWatO06p3oCNVsOQLmGM7nFTlR9dq7HCTOOwrZnHjM+jX/yzOPvK+zj7wNjDF95v/gb0ZcYhYt0CjevKigCNsad9faUvM96CyDpu/CxsnYy9+e10xp/2zsYrD+cZ9ICmfg+sU14KRVmgLzU+1qgvM77WlxoHDjKUG/9NzT4LR+Mtmfpcb1EhuUddAUnU1VNabmB1Yga/bj/BygMZlOmNvzY2Wg233eTD/e2C6BHph5N99TuerTyQzsjZOygo1dPUx5lZwzpd+zGxSxw/U8C//tx/7pI5eDnb80qvm3ikY+h1u2xfocJM4+Nrpw8YlzMHjT89G8PQPy6U+yTKOEPa0ysv3FPf8AUsexPsnI0t+IsTnOnnudf2LsZ75m6B0PGpC8fNOGAcUc4j5MLjbpZ6dt2gN/483ws/bTccir9wdSHruDFJK/3Vj+HiD68mXnj/TQ84uRUe+enCLYddP8PCkaCxMdbl/Ah5519fvF6jwdi3wB5Gbrtw3N+Hw7G1cOdb0HrghXjXfGj87M4/1WCnMx7vYhd/VBot3PLKhfcr3oVDS6HbaIh+0Lju+HqIu5sqG73H+O8EsOkr2P8HtHkEYh4zrisvgfS94BZk/KInSb1ekXvUwuLsbbX0ivKnV5Q/WQWlLNp1il+3nyQhJZuVBzJYeSADnb0NPVv4cU/rAG5r7nPFZ7kvVa43sPdULsv2pTFz9REMCro29WbmY+1xd6p6S7hxI2e+HdqBtQdPM2nRPg5n5PPG/D388E8yE/u3pHMTK19e1nkZl4tnQoMLSe68VgOMI72db02D8fE2gLIC41IZjZqbJ+pfnjC29of+AeG3Gtdt+RaWjDd2lLNzNP60dTDOg25jBzYXvz730yMU+lzUE/8/vSBlEzy+AJreYVyXshlWvHN5TLaOxs5+zo2MtxpKC6Gs0HjF4NJbC8pg/HnxI3iG8otuUVSS7SVXIfJPQ3byheOD8fPe/wdVdvOLF7705J4yJvyclAvbzz9qaPocbc/9PPdZamyMSff8Z1BWCCjjl4Tz0nbB8b8h/LYL6zKPwTfnPmutLbgGGn9fzi/uwedeBxl/55QyLhc/0phzEgrPgM7bWP5GdfaIsR+LZ+MLT4uU5MOeX899ETz35U9re+EL2HUkLWpRI0dO5zN/+0nm7zjJyewi03pXR1t6tfTnnjYBdG/WCDsb47d9g0GRmJ7HhiNn2XjkLJuOnSWv+EIHr4c7hPCvAa1M5WuiTG/gh3+S+DT+ILnnzvFIxxDeGxBdN1rXVaUUFOdAUea55FZ00R/3gnM/i4yD0ZTmG59V13nB7a9fOMa3d8GZRHjsNwjuYFy3YTose6NqsVx6qX5WP0haBw9+B60eMK47sQ02f23843fx4uJX+dZfca7xUrGDy4XR60oLjF9aDHpAGWeOU+cX/UWvDRclYo1x0pnzzhw2fpZe4cbPCIwD9hyOP/f5FVz4HM3+RF7y51Ip6P2+caQ9MI6dX3DG2PfgfGv4/P6VvWqhlPGyuI39hX3S9hgTiV9L46OIYJyX/ufHIT/d/AvHtYw7dqHOi16Grd9B9zHQ8+1zn8NxiLvn3JfKRsYk7tzI2LfCI9RYL/d
Q47q6PIqgvgzyUo1fRnJPGq/m5J6CvFPGn4/+fOGL4V+vw6aZxishd537cpmdDNOizY9p4wBvXTIvQjXJpe8KSKKuHUopElKy+WNnKot3p5KWe6HF46Gzo1dLPwpK9Gw8etY0ZOl5bo62dG7izd3R/jUeu/xKzuaX8En8QX7anIxBwUPtg/nwgdZo62Oyrg2lBVCUbWylmpaSc/dQL76fWnZhPQo6Pn3hGLmnjMPJOnkaW4zi+tGXQ36a8d8g9+S5nxe9zjlp7A9w/lbAyB0XOi4ufwd2zoHbxkGHJ4zrTmyFb3tc+7y2Tsak7REK7iHQ+70LTzQk/mWc7Cf8Vmjc3biuMBN2fG9MdrYOF67aXHwF50r/94M6gP25qwsZ+42LVxPj9Lznj/v3x+d+jzMvJOZrfYF5bq2xQynA1lnG2Fo9CF1eNK7Lz4A/XjJ+ITz/JVBrB4N/vvZnUwmSqCsgibr2GQyKrUlZLNp1isW7UzmTb56YdfY2dGzsRdem3nRt2oiWgW7XpYW7ZE8qw2fvQG9QDO0SxsR7oyz+pUCIeq+0wNiXofCs8bJ4wRkoOG1M+tnJxsv6eamX73dxS/2Pl2BbHNzxhvFLABjvp8/sWvV4RmyFRhHG18snwrpPofML0PcD47qck/Bpyyvvq7W76DZAELgHnbtFEGD8AuHkWfV4LETuUQur0mo1dAr3olO4F2/3j2LT0bOsOJCBu5MdXZt60zrYwzjxx3XWp1UAUx/SM+bnnfx3YxJO9ra81qe5xZL1mfwS/jl6lvZhnpeN+iZEvWHvfKED49WUlxgvJZ9P3NnJ5nPeN77FeD83MOai47pAm0HGfctLQF9yyeurDF508XDCXk0hrJtxgKHznDyMM/jZORs7ULoHgVuwMTk3kE520qIWN5wfNyXxxvw9ALza6yZG3BlR7WOlZBaydG8ay/amszUpE4MCT50dXz7W3vod14QQdZa0qIWowODOYRSV6vnXn/uZuuwgTva2PNU9/No7YrwXvy81l2V701m6N40DaXlm2z11dmQVlvHYfzbx3oBoBnYIqY0qCCFuIJKoxQ3p6VuaUFCi59PlB3l30T509jYM6hR6xbJ6g2JbUpax5bwvjZTMC73bbbQaOjX2oneUH3dF+ePtbM8r83by565Uxv2yi8MZ+bzWJ7LS9+BP55VwOq+EFgGutXb/fNPRsxzKyMdTZ4+X84XFU2eHrQV62wshLEsStbhhjerRjMLScr5ae5T/m78bJzsbYmOMQ5WWlOvZcOQsS/eksXx/ulmHOAdbLbfe5EPvKH96RPri6Wxvdtzpg2Jo5uPCZysO8fXaoxw9nc+0R2Jwcbj6f7djZwr4eu0Rft12klK9gTYhHrzeJ5IuTS13+XzXiWw+XHKA9YfPXrWMu5MdXs72+Ls50r9NIPe1DcS5griFELWvTtyjnjFjBlOmTCEtLY02bdrwxRdf0KlTpyuWjYuL44knnjBb5+DgQHFx5QZAkHvU4mJKKd763Tggio1Ww+geERzMyGfVgQzySy483+3maEvPFn70ivLn1psaobO/dvJauPMUY+ftpKTcQKS/K98O7UCwp/lIaztTsvlyzRGW7E0zPW5rq9VQbjC+ub25D+N6R9IysPpjqx87U8DUpYn8udvYU9feRkvXZt4Ulug5W1BCVmEZWYWllw1rDuDqYMv97YJ47OYwIvxcqx2DEMJcvbpHPXfuXMaMGcOXX35J586dmTZtGr179yYxMRFfX98r7uPm5kZi4oVhBuURG1FdGo2GSfe2oqjUwK/bT/Bx/EHTNl9XB3pF+dE7yp+bm3hXeRCWe9sEEuLpxDP/28aBtDxiZ6zn6yEdiAnx4O9DZ/hyzRE2HLnQuu0R6cvztzelsbczX6w8xOxNyaxOPM2ag6cZ0DaIl++6iRCvyg+pmpFbzGcrDjFnSwp6g0Kj4arH0RsUOUVlZBaUkFlQxs6UbH7clMTxs4X8d2MS/92YROdwLx67OYzeUf5
W6bUvxI3K6i3qzp0707FjR6ZPnw6AwWAgJCSEkSNH8vrrr19WPi4ujtGjR5OdnV2t80mLWlxJud7AW7/vYXtSNrc396F3K3/aBntYZFCUU9lFPPXfrexPNc4G1qSRs6kTmq1Ww71tA3nu1qY09zdvsR4/U8DUZYks2nWhJfzYzWGMuLMZXpdcbr9YbnEZX605wnfrjlNUZhya9M5IX8b2bk6LgMq3zA0GxfojZ/h+YxLL96dzrpFPIxcHHukYwtCujS06IYsQN5J6M+BJaWkpOp2OX375hdjYWNP6oUOHkp2dze+//37ZPnFxcTz99NMEBQVhMBho164d77//PlFRUVc8R0lJCSUlJab3J0+epGXLlpKoxXVVUFLOy3MTWLbPOFmIk50Nj3QK4elbmhDkUfEz17tOZPPBXwdMrW+dvQ2eOnvKDQbK9YoyvQG9QVFmUJTrDaaECtAu1IPX+7agU/i1ZyCrSGpOET9tTuGnzcmczjP+f3JztOX/7m7BwA4hMsqbEFVUbxL1qVOnCAoKYsOGDXTp0sW0fty4caxZs4ZNmzZdts/GjRs5dOgQrVu3Jicnh6lTp7J27Vr27t17xcpOnDiRd965fGIASdTiejMYFHEbjlNSbuCRjiGXdUKriFKKvw+d4YO/DrAvNfea5SN8XRjbuzl3tfSz6K2hMr2B+H3pTF952BRHp3Av3h8QTTNfl2vsLYQ4r0En6kuVlZXRokULBg0axLvvvnvZdmlRi4bEYDA+x11uUNhqNdjaaLDVarGz0WBro8VOa/zpqbOr1b4b5XoDs9Yf55P4gxSV6bG30fLC7U158Y6mlZo1TYgbXb3pTNaoUSNsbGxIT083W5+eno6/v3+ljmFnZ0dMTAyHDx++4nYHBwccHC7cR8vNvXZrRIi6SqvV0CrI3dphYGuj5Zlbm9CnlT8Tft/DqsTTfLbiEIt2neL9AdEVjsqWU1RG8tlCsotKcbC1wdFOi6OdDY7nXjvYGX/a22ilo6gQWDlR29vb0759e1asWGG6R20wGFixYgUjRoyo1DH0ej27d+/m7rurMTG7EKJGQrx0fDesI3/uTmXiwn0cOV3Aw1//wyMdQ7i3bSAnMotIyiwg6WwhKZmFJGUWkl1YVqlj29tquaO5D/e3C+aO5r7S01zcsKz+eNaYMWMYOnQoHTp0oFOnTkybNo2CggLTs9JDhgwhKCiIyZONk9RPmjSJm2++mWbNmpGdnc2UKVNISkri6aefrug0QohaotFouKd1ILc08+GDJQf4aXMyc7akMGdLylX3aeRij7ezA6V6A8Vl+nOLgeJyvel57tJyA0v3prN0bzqeOjv6twnk/nbBtAl2l5a2uKFYPVE//PDDnD59mgkTJpCWlkbbtm1ZsmQJfn5+ACQnJ6O9aPaTrKwsnnnmGdLS0vD09KR9+/Zs2LCBli2vMs2ZEOK6cNfZMfn+aO5vF8Tkxfs5W1BKqJeOMG8dYV7OhJx7Heqlu+poZ0qpc8nbQEpmIQt3nmL+jpOczivhfxuT+N/GJJr4OPNAu2BiY4IIdHc0li81UFSmNy6leorKyikqNeDv7kAzX8sP1FKuN7DzRDYbDp/lbEEpeoOi/Fyve9Prc73yvZztuTPSl1sifHCyrzv370/nlbDrRDa7TuSQkVfMA+2C6dC4Zk8HiNph9eeorzd5jlqI+qVcb2D9kbP8tv0ES/emUVxmMG3TajB7HO1KIv1diY0J4t42gQRe41G4iqRkFrL20GnWHjzNhiNnySsuv/ZOF3G003JrhA+9rjL0bG3KKihl18kcdp9LzLtP5pCaYz6ao0YDQ7s0Zmzv5vVy2NjswlLsbbWVGjWwLqg3vb6tQRK1EPVXXnEZS/ak8dv2k2w8aj5muZ2NBkc7G5zsbHCyN3ZOO3amgFK9MbFrNNA53IvYtkH0jQ7A3cnuSqcwnedUdjHHzuSz/vBZ/j50muNnC83KeOjs6Na0EeGNnLHRarDVarCx0WCn1Rr
f22iw0Wo4nJHPsr3pnMy+fDKXXlHGYWmv9Sx9dSilWH3wNDNXHWHz8czLtms00MzHhehgd8r0ij92ngIgyMOJyfdHc+tNPhaPyZIycovZdCyTzeeWxPQ87G20/GtAq3oxa50k6gpIohaiYcgqKKVMb8DR3picrzTEa05hGYv3pLJgx0k2HbuQrOxttMbL0Tc1IruwjJPZRaRmF3Equ5hT2UXklVzeWrbVamgX6smtNzXilggfWgW5V3pWtGtNj9onyp+RPZoRFVjzHv16g+KvPan8e9URs2fuwxs5Ex3kTutgd1oHe9Ay0M1sopi/D53m9V93m75QPNg+mLf6tcRdd/UvNNdLSbmeE1lFJCRnGxPz8UyOnSm4avlnbgnn9b4tKv3vYw2SqCsgiVqIG9PJ7CIWJpxiwY6TJKbnXbO8h86OQHcnOjT25JYIH25u4oWro2WSVvLZQpbtS2PZvnQ2X/QFomcLP0b1aEbrYI8qH7O03MD8HSf4cs1RUxLT2dvw2M1hPNU9HD83x2seo6CknClLE/nvxuMoBT6uDrx7Xyv6tKrc47I1cSq7iONnCjiRVURKVqHxZ6bxZ3pe8WWTxmg00DLAjU7hXnQO96J9mBffbzzO5yuNj+re0dyHzwfFWOzfzNIkUVdAErUQYn9qLgt2nGRfai5+bo4EejgR6H7up4cTgR6O1+1e58H0PKavPMwfu06ZktHtzX0YeWcE7cM8r7l/YWk5czan8M3fR033nd2d7HiiW2OGdW2Mh67q98K3Hs9k3K+7OHramPD7RQfwRLfG+Lo60sjV3mKfTUZeMQsTjB0G956qeIwLJzsbIgNc6RzuTedwL9qFeV7x9sUfO0/x6rlZ6yJ8XfjP0I6Eelc8mY3BoFh76DQr9mdwk58LD3UIwdGudjv+SaKugCRqIURddOR0PjNWHeb3hFPoz/WQ696sESPubIavqwMnsoo4mV3EiaxCTppeF5GeW2zqUOfr6sAztzRhUOfQCuc/r4ziMj1frDzEl2uOmuI5z8XBlkYu9vi4OhgXFwcCPZy4yc+VCD8XgjycrvoIXVGpnmX7jP0M1h0+Yzq2jVZDmJeOIE8nQrx0hHjqCD73OtjTCW9n+0o/lrfrRDbP/G8r6bkleOrsmPlYe26+wiA8p/NKmLcthdmbkjmRdaEPgbezPU92D+exm8Mq7MtQE5KoKyCJWghRlyWdLeDfq47w6/YTpnnJryXUS8dztzXhgXbBFm8J7j2Vw5SliRw9XUBGXrFZr/urcba3oZmfKzf5upiSt1aj4feEUyzZk0pBqd5Utm2IB/e3C+Ke1oEVzgpXVWk5xTz7/VZ2ncjBVqvh3dhWDOoUilKKf45m8uOmJJbuTaNMb/yM3Rxt6dPKn/WHz5ru07s42DL45lCe6haObyVuHVSFJOoKSKIWQtQHKZmFfLnmCPO2ncBWqyHY04kgDyeCPY2tzvPvgzyd8HFxuC6DwCilKCjVczqv5KKlmIy8EpIyCzmcns/RM/mm5Hc1IV5ODIgJJrZtIE18am8yl6JSPWN/2WmaKrZf6wD2p+aaLukDxIR6MLhzGPe0DsDRzoYyvYFFu04xc/URDqbnA8bOhw+0D+a5W5vQuJGzRWKTRF0BSdRCiPpEb1BoNdSb0djK9AaSzhZwMD2fg+l5HDr3M7e4jB4t/Lg/Joj2YZ7XrT5KKb5YeZhP4g+a1jnb2xAbE8SjnUOv2tPeYFCsPJDBv1cfZntyNmB8bv/u6ADe7NcSf/eatbAlUVdAErUQQtx4lu5NY+6WFO6M9CU2JqjS9/CVUmw5nsXM1YdZlXgaVwdb1o+/E7ca9iavN7NnCSGEENdD7yh/ekdV/TEzjUZDp3AvOoV3Yt+pXI6czq9xkq4qSdRCCCFEJbQMdKNloNt1P6/MGyeEEELUYZKohRBCiDpMErUQQghRh0miFkIIIeowSdRCCCFEHXbD9fo2GIzD36Wmplo5EiGEEDeq8znofE6qyA2XqNP
T0wHo1KmTlSMRQghxo0tPTyc0NLTCMjfcyGTl5eXs2LEDPz8/tNqaXfnPy8ujZcuW7Nu3D1dXVwtFKETdJ7/74kZkyd97g8FAeno6MTEx2NpW3Ga+4RK1JeXm5uLu7k5OTg5ubtf/IXghrEV+98WNyFq/99KZTAghhKjDJFELIYQQdZgk6hpwcHDg7bffxsHBwdqhCHFdye++uBFZ6/de7lELIYQQdZi0qIUQQog6TBK1EEIIUYdJohZCCCHqMEnUNTBjxgwaN26Mo6MjnTt3ZvPmzdYOSYhatXbtWvr3709gYCAajYYFCxZYOyQhat3kyZPp2LEjrq6u+Pr6EhsbS2Ji4nU7vyTqapo7dy5jxozh7bffZvv27bRp04bevXuTkZFh7dCEqDUFBQW0adOGGTNmWDsUIa6bNWvWMHz4cP755x/i4+MpKyujV69eFBQUXJfzS6/vaurcuTMdO3Zk+vTpgHE4uJCQEEaOHMnrr79u5eiEqH0ajYb58+cTGxtr7VCEuK5Onz6Nr68va9as4dZbb63180mLuhpKS0vZtm0bPXv2NK3TarX07NmTjRs3WjEyIYQQtS0nJwcALy+v63I+SdTVcObMGfR6PX5+fmbr/fz8SEtLs1JUQgghapvBYGD06NF069aNVq1aXZdz3nDTXAohhBDVNXz4cPbs2cO6deuu2zklUVdDo0aNsLGxMc1tfV56ejr+/v5WikoIIURtGjFiBIsWLWLt2rUEBwdft/PKpe9qsLe3p3379qxYscK0zmAwsGLFCrp06WLFyIQQQliaUooRI0Ywf/58Vq5cSXh4+HU9v7Soq2nMmDEMHTqUDh060KlTJ6ZNm0ZBQQFPPPGEtUMTotbk5+dz+PBh0/tjx46RkJCAl5cXoaGhVoxMiNozfPhwZs+eze+//46rq6upL5K7uztOTk61fn55PKsGpk+fzpQpU0hLS6Nt27Z8/vnndO7c2dphCVFrVq9ezR133HHZ+qFDhxIXF3f9AxLiOtBoNFdcP2vWLIYNG1b755dELYQQQtRdco9aCCGEqMMkUQshhBB1mCRqIYQQog6TRC2EEELUYZKohRBCiDpMErUQQghRh0miFkIIIeowSdRCCCFEHSaJWghRazQaDQsWLLB2GELUa5KohWighg0bhkajuWzp06ePtUMTQlSBTMohRAPWp08fZs2aZbbOwcHBStEIIapDWtRCNGAODg74+/ubLZ6enoDxsvTMmTPp27cvTk5ONGnShF9++cVs/927d3PnnXfi5OSEt7c3zz77LPn5+WZlvvvuO6KionBwcCAgIIARI0aYbT9z5gwDBgxAp9MRERHBwoULTduysrIYPHgwPj4+ODk5ERERcdkXCyFudJKohbiBvfXWWzzwwAPs3LmTwYMH88gjj7B//34ACgoK6N27N56enmzZsoV58+axfPlys0Q8c+ZMhg8fzrPPPsvu3btZuHAhzZo1MzvHO++8w8CBA9m1axd33303gwcPJjMz03T+ffv28ddff7F//35mzpxJo0aNrt8HIER9oIQQDdLQoUOVjY2NcnZ2Nlvee+89pZRSgHr++efN9uncubN64YUXlFJKff3118rT01Pl5+ebtv/5559Kq9WqtLQ0pZRSgYGB6o033rhqDIB68803Te/z8/MVoP766y+llFL9+/dXTzzxhGUqLEQDJfeohWjA7rjjDmbOnGm2zsvLy/S6S5cuZtu6dOlCQkICAPv376dNmzY4Ozubtnfr1g2DwUBiYiIajYZTp07Ro0ePCmNo3bq16bWzszNubm5kZGQA8MILL/DAAw+wfft2evXqRWxsLF27dq1WXYVoqCRRC9GAOTs7X3Yp2lKcnJwqVc7Ozs7svUajwWAwANC3b1+SkpJYvHgx8fHx9OjRg+HDhzN16lSLxytEfSX3qIW4gf3zzz+XvW/RogUALVq0YOfOnRQUFJi2r1+/Hq1WS/PmzXF1daVx48asWLGiRjH4+PgwdOhQfvj
hB6ZNm8bXX39do+MJ0dBIi1qIBqykpIS0tDSzdba2tqYOW/PmzaNDhw50796dH3/8kc2bN/Of//wHgMGDB/P2228zdOhQJk6cyOnTpxk5ciSPP/44fn5+AEycOJHnn38eX19f+vbtS15eHuvXr2fkyJGVim/ChAm0b9+eqKgoSkpKWLRokemLghDCSBK1EA3YkiVLCAgIMFvXvHlzDhw4ABh7ZM+ZM4cXX3yRgIAAfvrpJ1q2bAmATqdj6dKlvPTSS3Ts2BGdTscDDzzAJ598YjrW0KFDKS4u5tNPP+XVV1+lUaNGPPjgg5WOz97envHjx3P8+HGcnJy45ZZbmDNnjgVqLkTDoVFKKWsHIYS4/jQaDfPnzyc2NtbaoQghKiD3qIUQQog6TBK1EEIIUYfJPWohblBy10uI+kFa1EIIIUQdJolaCCGEqMMkUQshhBB1mCRqIYQQog6TRC2EEELUYZKohRBCiDpMErUQQghRh0miFkIIIeowSdRCCCFEHfb/bp5XEFN8oAIAAAAASUVORK5CYII=", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from previous_chapters import plot_losses\n", "\n", "epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))\n", "plot_losses(epochs_tensor, tokens_seen, train_losses, val_losses)" ] }, { "cell_type": "markdown", "id": "6777e0c4-d82c-46d8-84fb-1376c4f8bae0", "metadata": { "id": "6777e0c4-d82c-46d8-84fb-1376c4f8bae0" }, "source": [ "- As we can see, the loss decreases sharply at the beginning of the first epoch, which means the model starts learning quickly.\n", "- After about 1 training epoch, the model begins to overfit slightly." ] }, { "cell_type": "markdown", "id": "87b79a47-13f9-4d1f-87b1-3339bafaf2a3", "metadata": { "id": "87b79a47-13f9-4d1f-87b1-3339bafaf2a3" }, "source": [ "## 7.7 Extracting and saving model responses" ] }, { "cell_type": "markdown", "id": "5a25cc88-1758-4dd0-b8bf-c044cbf2dd49", "metadata": { "id": "5a25cc88-1758-4dd0-b8bf-c044cbf2dd49" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "17510e9d-7727-4d58-ba9a-d82ec23c1427", "metadata": { "id": "17510e9d-7727-4d58-ba9a-d82ec23c1427" }, "source": [ "- In this section, we save the test set responses so that we can score them in the next section.\n", "- We also save a copy of the model for future use.\n", "- But first, let's take a brief look at the responses generated by the fine-tuned model." ] }, { "cell_type": "code", "execution_count": 36, "id": "VQ2NZMbfucAc", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VQ2NZMbfucAc", "outputId": "8416b4ac-1993-4628-dea6-7789cdc8926c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "Rewrite the sentence using a simile.\n", "\n", "### Input:\n", "The car is very fast.\n", "\n", "Correct response:\n", ">> The car is as fast as lightning.\n", "\n", "Model response:\n", ">> The car is as fast as a bullet.\n", "-------------------------------------\n", "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "What type of cloud is typically associated with thunderstorms?\n", "\n", "Correct response:\n", ">> The type of cloud typically associated with thunderstorms is cumulonimbus.\n", "\n", "Model response:\n", ">> The type of cloud associated with thunderstorms is a cumulus cloud.\n", "-------------------------------------\n", "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "Name the author of 'Pride and Prejudice'.\n", "\n", "Correct response:\n", ">> Jane Austen.\n", "\n", "Model response:\n", ">> The author of 'Pride and Prejudice' is Jane Austen.\n", "-------------------------------------\n" ] } ], "source": [ "torch.manual_seed(123)\n", "\n", "\n", "for entry in test_data[:3]:\n", "\n", "    input_text = format_input(entry)\n", "\n", "    token_ids = generate(\n", "        model=model,\n", "        idx=text_to_token_ids(input_text, tokenizer).to(device),\n", "        max_new_tokens=256,\n", "        context_size=BASE_CONFIG[\"context_length\"],\n", "        eos_id=50256\n", "    )\n", "    generated_text = token_ids_to_text(token_ids, tokenizer)\n", "    response_text = (\n", "        generated_text[len(input_text):]\n", "        .replace(\"### Response:\", \"\")\n", "        .strip()\n", "    )\n", "\n", "    print(input_text)\n", "    print(f\"\\nCorrect response:\\n>> {entry['output']}\")\n", "    print(f\"\\nModel response:\\n>> {response_text}\")\n", "    print(\"-------------------------------------\")" ] }, { "cell_type": "markdown", "id": "49ab64c1-586f-4939-8def-23feeb1b3599", "metadata": { "id": "49ab64c1-586f-4939-8def-23feeb1b3599" }, "source": [ "- Based on the test set instructions, the given responses, and the model's responses, we can see that the model performs relatively well.\n", "- The answers to the first and last instructions are clearly correct.\n", "- The second answer is close; the model answers with \"cumulus cloud\" instead of \"cumulonimbus\" (note, however, that cumulus clouds can develop into cumulonimbus clouds, which are capable of producing thunderstorms).\n", "- Most importantly, we can see that model evaluation is not as straightforward as in chapter 6, where we just had to calculate the percentage of correct spam/non-spam class labels to obtain the classification accuracy.\n", "- In practice, instruction-fine-tuned LLMs such as chatbots are evaluated via multiple approaches:\n", " - 
Short-answer and multiple-choice benchmarks such as MMLU (\"Measuring Massive Multitask Language Understanding\", [https://arxiv.org/abs/2009.03300](https://arxiv.org/abs/2009.03300)), which test the knowledge of a model.\n", " - Human preference comparisons to other LLMs, such as the LMSYS chatbot arena ([https://arena.lmsys.org](https://arena.lmsys.org)).\n", " - Automated conversational benchmarks, where another LLM such as GPT-4 is used to evaluate the responses, for example, AlpacaEval ([https://tatsu-lab.github.io/alpaca_eval/](https://tatsu-lab.github.io/alpaca_eval/)).\n", "- In the next section, we will use an approach similar to AlpacaEval and use another LLM to evaluate the responses of our model; however, we will use our own test set instead of a publicly available benchmark dataset.\n", "- For this, we add the model responses to the `test_data` dictionary and save it as an `\"instruction-data-with-response.json\"` file for record-keeping, so that we can load and analyze it in a separate Python session if needed." ] }, { "cell_type": "code", "execution_count": 37, "id": "-PNGKzY4snKP", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "-PNGKzY4snKP", "outputId": "0453dfb3-51cd-49e2-9e63-f65b606c3478" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████| 110/110 [01:11<00:00,  1.54it/s]\n" ] } ], "source": [ "from tqdm import tqdm\n", "\n", "for i, entry in tqdm(enumerate(test_data), total=len(test_data)):\n", "\n", "    input_text = format_input(entry)\n", "\n", "    token_ids = generate(\n", "        model=model,\n", "        idx=text_to_token_ids(input_text, tokenizer).to(device),\n", "        max_new_tokens=256,\n", "        context_size=BASE_CONFIG[\"context_length\"],\n", "        eos_id=50256\n", "    )\n", "    generated_text = token_ids_to_text(token_ids, tokenizer)\n", "    response_text = generated_text[len(input_text):].replace(\"### Response:\", \"\").strip()\n", "\n", "    test_data[i][\"model_response\"] = response_text\n", "\n", "\n", "with open(\"instruction-data-with-response.json\", \"w\") as file:\n", "    json.dump(test_data, file, indent=4)  # \"indent\" for pretty-printing" ] }, { "cell_type": "markdown", "id": "228d6fa7-d162-44c3-bef1-4013c027b155", "metadata": { "id": "228d6fa7-d162-44c3-bef1-4013c027b155" }, "source": [ "- Let's double-check one of the entries to confirm that the responses have been added to the `test_data` dictionary correctly." ] }, { "cell_type": "code", "execution_count": 38, "id": "u-AvCCMTnPSE", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "u-AvCCMTnPSE", 
"outputId": "ce3b2545-8990-4446-e44c-a945e0049c06" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'instruction': 'Rewrite the sentence using a simile.', 'input': 'The car is very fast.', 'output': 'The car is as fast as lightning.', 'model_response': 'The car is as fast as a bullet.'}\n" ] } ], "source": [ "print(test_data[0])" ] }, { "cell_type": "markdown", "id": "c1b2f3f6-8569-405a-9db6-d47cba65608a", "metadata": { "id": "c1b2f3f6-8569-405a-9db6-d47cba65608a" }, "source": [ "- Finally, let's save the model so that we can reuse it later" ] }, { "cell_type": "code", "execution_count": 39, "id": "8cBU0iHmVfOI", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8cBU0iHmVfOI", "outputId": "d6e7f226-9310-43f5-f31f-adc3a893a8e9", "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model saved as gpt2-medium355M-sft.pth\n" ] } ], "source": [ "import re\n", "\n", "\n", "file_name = f\"{re.sub(r'[ ()]', '', CHOOSE_MODEL)}-sft.pth\"\n", "torch.save(model.state_dict(), file_name)\n", "print(f\"Model saved as {file_name}\")\n", "\n", "# Load model via\n", "# model.load_state_dict(torch.load(\"gpt2-medium355M-sft.pth\"))" ] }, { "cell_type": "markdown", "id": "obgoGI89dgPm", "metadata": { "id": "obgoGI89dgPm" }, "source": [ "## 7.8 Evaluating the fine-tuned LLM" ] }, { "cell_type": "markdown", "id": "805b9d30-7336-499f-abb5-4a21be3129f5", "metadata": { "id": "805b9d30-7336-499f-abb5-4a21be3129f5" }, "source": [ "" ] }, { "cell_type": "markdown", "id": "68d2b9d3-b6ff-4533-a89d-7b66079b4fd1", "metadata": { "id": "68d2b9d3-b6ff-4533-a89d-7b66079b4fd1" }, "source": [ "- In this section, we automate the response evaluation of the fine-tuned LLM using another, more capable LLM.\n", "- In particular, we use the instruction-fine-tuned 8-billion-parameter Llama 3 model released by Meta AI, which can be run locally via ollama ([https://ollama.com](https://ollama.com)).\n", "- (Alternatively, if you prefer using a more capable LLM such as GPT-4 via the OpenAI API, please see [llm-instruction-eval-openai.ipynb](../03_model-evaluation/llm-instruction-eval-openai.ipynb))" ] }, { "cell_type": "markdown", "id": "ea427a30-36ba-44e3-bb1f-eb0d7008d6e9", "metadata": { "id": 
"ea427a30-36ba-44e3-bb1f-eb0d7008d6e9" }, "source": [ "- Ollama is an application for running LLMs efficiently.\n", "- It is a wrapper around llama.cpp ([https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)), which implements LLMs in pure C/C++ to maximize efficiency.\n", "- Note that Ollama is a tool for generating text with LLMs (model inference), not for training or fine-tuning LLMs.\n", "- Before running the code below, install Ollama by visiting [https://ollama.com](https://ollama.com) and following the instructions (for instance, clicking the \"Download\" button and downloading the Ollama application for your operating system)." ] }, { "cell_type": "markdown", "id": "747a2fc7-282d-47ec-a987-ed0a23ed6822", "metadata": { "id": "747a2fc7-282d-47ec-a987-ed0a23ed6822" }, "source": [ "- For macOS and Windows users, click the downloaded ollama application; if it prompts you to install the command line tools, select \"yes\".\n", "- Linux users can use the installation command provided on the ollama website.\n", "\n", "- In general, before we can use ollama from the command line, we have to either start the ollama application or run `ollama serve` in a separate terminal.\n", "\n", "\n", "\n", "- With the ollama application or `ollama serve` running in a different terminal, execute the following command on the command line to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be downloaded automatically the first time you execute this command).\n", "```bash\n", "# 8B model\n", "ollama run llama3\n", "```\n", "\n", "\n", "The output may look as follows:\n", "\n", "```\n", "$ ollama run llama3\n", "pulling manifest\n", "pulling 6a0746a1ec1a... 100% ▕████████████████▏ 4.7 GB\n", "pulling 4fa551d4f938... 100% ▕████████████████▏  12 KB\n", "pulling 8ab4849b038c... 100% ▕████████████████▏  254 B\n", "pulling 577073ffcc6c... 100% ▕████████████████▏  110 B\n", "pulling 3f8eb4da87fa... 100% ▕████████████████▏  485 B\n", "verifying sha256 digest\n", "writing manifest\n", "removing any unused layers\n", "success\n", "```\n", "\n", "- Note that `llama3` refers to the instruction-fine-tuned 8-billion-parameter Llama 3 model.\n", "\n", "- Using ollama with the `\"llama3\"` model (the 8B-parameter model) requires 16 GB of RAM; if your machine doesn't support this, you can try a smaller model, such as the 3.8B-parameter phi-3 model, by setting `model = \"phi-3\"`, which only requires 8 GB of RAM.\n", "\n", "- Alternatively, if your machine supports it, you can also use the larger 70-billion-parameter Llama 3 model by replacing `llama3` with `llama3:70b`.\n", "\n", "- After the download is complete, you will see a command line prompt that allows you to chat with the model.\n", "\n", "- Try a prompt like \"What do llamas eat?\", and the model should return an output similar to the following:\n", "\n", "```\n", ">>> What do llamas eat?\n", "Llamas are ruminant animals, which means they have a four-chambered\n", "stomach and eat plants that are high in fiber. In the wild, llamas\n", "typically feed on:\n", "1. 
Grasses: They love to graze on various types of grasses, including tall\n", "grasses, wheat, oats, and barley.\n", "```" ] }, { "cell_type": "markdown", "id": "7b7b341c-ba0e-40bb-a52c-cb328bbd1fe4", "metadata": { "id": "7b7b341c-ba0e-40bb-a52c-cb328bbd1fe4" }, "source": [ "- To end this session, simply type `/bye`" ] }, { "cell_type": "markdown", "id": "faaf3e02-8ca0-4edf-be23-60625a5b14e3", "metadata": { "id": "faaf3e02-8ca0-4edf-be23-60625a5b14e3" }, "source": [ "- The following code checks whether the ollama session is running properly" ] }, { "cell_type": "code", "execution_count": 1, "id": "026e8570-071e-48a2-aa38-64d7be35f288", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 193 }, "id": "026e8570-071e-48a2-aa38-64d7be35f288", "outputId": "e30d3533-e1f5-4aa9-b24f-33273fc7b30e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ollama running: True\n" ] } ], "source": [ "import psutil\n", "\n", "def check_if_running(process_name):\n", "    running = False\n", "    for proc in psutil.process_iter([\"name\"]):\n", "        if process_name in proc.info[\"name\"]:\n", "            running = True\n", "            break\n", "    return running\n", "\n", "ollama_running = check_if_running(\"ollama\")\n", "# Check whether an ollama process is running\n", "if not ollama_running:\n", "    raise RuntimeError(\"Ollama not running. Launch ollama before proceeding.\")\n", "print(\"Ollama running:\", check_if_running(\"ollama\"))" ] }, { "cell_type": "code", "execution_count": 2, "id": "723c9b00-e3cd-4092-83c3-6e48b5cf65b0", "metadata": { "id": "723c9b00-e3cd-4092-83c3-6e48b5cf65b0" }, "outputs": [], "source": [ "# This cell is optional; it allows you to restart the notebook\n", "# and only run section 7.7 without rerunning the previous code.\n", "import json\n", "from tqdm import tqdm\n", "# Our dataset from the previous section\n", "file_path = \"instruction-data-with-response.json\"\n", "\n", "with open(file_path, \"r\") as file:\n", "    test_data = json.load(file)\n", "\n", "\n", "def format_input(entry):\n", "    instruction_text = (\n", "        f\"Below is an instruction that describes a task. 
\"\n", "        f\"Write a response that appropriately completes the request.\"\n", "        f\"\\n\\n### Instruction:\\n{entry['instruction']}\"\n", "    )\n", "\n", "    input_text = f\"\\n\\n### Input:\\n{entry['input']}\" if entry[\"input\"] else \"\"\n", "\n", "    return instruction_text + input_text" ] }, { "cell_type": "markdown", "id": "b3464705-d026-4594-977f-fb357e51c3a9", "metadata": { "id": "b3464705-d026-4594-977f-fb357e51c3a9" }, "source": [ "- Now, an alternative way to interact with the model, instead of the `ollama run` command we used earlier, is via its REST API in Python, using the following function.\n", "- Before you run the next cell in this notebook, make sure that ollama is still running (the previous code cell should print `\"Ollama running: True\"`).\n", "- Next, run the following code cell to query the model." ] }, { "cell_type": "code", "execution_count": 3, "id": "e3ae0e10-2b28-42ce-8ea2-d9366a58088f", "metadata": { "id": "e3ae0e10-2b28-42ce-8ea2-d9366a58088f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Llamas are herbivores, which means they primarily feed on plant-based foods. Their diet typically consists of:\n", "\n", "1. Grasses: Llamas love to graze on various types of grasses, including tall grasses, short grasses, and even weeds.\n", "2. Hay: High-quality hay, such as alfalfa or timothy hay, is a staple in a llama's diet. They enjoy the sweet taste and texture of fresh hay.\n", "3. Grains: Llamas may receive grains like oats, barley, or corn as part of their daily ration. However, it's essential to provide these grains in moderation, as they can be high in calories.\n", "4. Fruits and vegetables: Llamas enjoy a variety of fruits and veggies, such as apples, carrots, sweet potatoes, and leafy greens like kale or spinach.\n", "5. Minerals: Llamas require access to mineral supplements, which help maintain their overall health and well-being.\n", "\n", "In the wild, llamas might also eat:\n", "\n", "1. Leaves: They'll munch on leaves from trees and shrubs, including plants like willow, alder, and birch.\n", "2. Bark: In some cases, llamas may eat the bark of certain trees, like aspen or cottonwood.\n", "3. 
Mosses and lichens: These non-vascular plants can be a tasty snack for llamas.\n", "\n", "In captivity, llama owners typically provide a balanced diet that includes a mix of hay, grains, and fruits/vegetables. It's essential to consult with a veterinarian or experienced llama breeder to determine the best feeding plan for your llama.\n" ] } ], "source": [ "import urllib.request\n", "\n", "def query_model(\n", "    prompt,\n", "    model=\"llama3\",\n", "    url=\"http://localhost:11434/api/chat\"\n", "):\n", "    # Create the data payload as a dictionary\n", "    data = {\n", "        \"model\": model,\n", "        \"messages\": [\n", "            {\"role\": \"user\", \"content\": prompt}\n", "        ],\n", "        \"options\": {  # Settings below are required for deterministic responses\n", "            \"seed\": 123,\n", "            \"temperature\": 0,\n", "            \"num_ctx\": 2048\n", "        }\n", "    }\n", "\n", "    # Convert the dictionary to a JSON-formatted string and encode it to bytes\n", "    payload = json.dumps(data).encode(\"utf-8\")\n", "\n", "    # Create a request object, setting the method to POST and adding necessary headers\n", "    request = urllib.request.Request(\n", "        url,\n", "        data=payload,\n", "        method=\"POST\"\n", "    )\n", "    # Indicate that the request body is JSON\n", "    request.add_header(\"Content-Type\", \"application/json\")\n", "\n", "    # Send the request and capture the streaming response\n", "    response_data = \"\"\n", "    with urllib.request.urlopen(request) as response:\n", "        # Read and decode the response line by line\n", "        while True:\n", "            line = response.readline().decode(\"utf-8\")\n", "            if not line:\n", "                break\n", "            response_json = json.loads(line)\n", "            response_data += response_json[\"message\"][\"content\"]\n", "\n", "    return response_data\n", "\n", "\n", "model = \"llama3\"\n", "result = query_model(\"What do Llamas eat?\", model)\n", "print(result)" ] }, { "cell_type": "markdown", "id": "207ae28f-0f8c-4fda-aeef-e7e3046249cc", "metadata": { "id": "207ae28f-0f8c-4fda-aeef-e7e3046249cc" }, "source": [ "- Now, using the `query_model` function we defined above, we can evaluate the responses of our fine-tuned model; let's try it out on the first three test set responses we looked at in the previous section." ] }, { "cell_type": "code", "execution_count": 4, "id": "86b839d4-064d-4178-b2d7-01691b452e5e", "metadata": { "id": "86b839d4-064d-4178-b2d7-01691b452e5e" }, "outputs": [ { "name": 
"stdout", "output_type": "stream", "text": [ "\n", "Dataset response:\n", ">> The car is as fast as lightning.\n", "\n", "Model response:\n", ">> The car is as fast as a bullet.\n", "\n", "Score:\n", ">> I'd rate the model response \"The car is as fast as a bullet.\" an 85 out of 100.\n", "\n", "Here's why:\n", "\n", "* The response uses a simile correctly, comparing the speed of the car to something else (in this case, a bullet).\n", "* The comparison is relevant and makes sense, as bullets are known for their high velocity.\n", "* The phrase \"as fast as\" is used correctly to introduce the simile.\n", "\n", "The only reason I wouldn't give it a perfect score is that some people might find the comparison slightly less vivid or evocative than others. For example, comparing something to lightning (as in the original response) can be more dramatic and attention-grabbing. However, \"as fast as a bullet\" is still a strong and effective simile that effectively conveys the idea of the car's speed.\n", "\n", "Overall, I think the model did a great job!\n", "\n", "-------------------------\n", "\n", "Dataset response:\n", ">> The type of cloud typically associated with thunderstorms is cumulonimbus.\n", "\n", "Model response:\n", ">> The type of cloud associated with thunderstorms is a cumulus cloud.\n", "\n", "Score:\n", ">> I'd score this model response as 40 out of 100.\n", "\n", "Here's why:\n", "\n", "* The model correctly identifies that thunderstorms are related to clouds (correctly identifying the type of phenomenon).\n", "* However, it incorrectly specifies the type of cloud associated with thunderstorms. 
Cumulus clouds are not typically associated with thunderstorms; cumulonimbus clouds are.\n", "* The response lacks precision and accuracy in its description.\n", "\n", "Overall, while the model attempts to address the instruction, it provides an incorrect answer, which is a significant error.\n", "\n", "-------------------------\n", "\n", "Dataset response:\n", ">> Jane Austen.\n", "\n", "Model response:\n", ">> The author of 'Pride and Prejudice' is Jane Austen.\n", "\n", "Score:\n", ">> I'd rate my own response as 95 out of 100. Here's why:\n", "\n", "* The response accurately answers the question by naming the author of 'Pride and Prejudice' as Jane Austen.\n", "* The response is concise and clear, making it easy to understand.\n", "* There are no grammatical errors or ambiguities that could lead to confusion.\n", "\n", "The only reason I wouldn't give myself a perfect score is that the response is slightly redundant - it's not necessary to rephrase the question in the answer. A more concise response would be simply \"Jane Austen.\"\n", "\n", "-------------------------\n" ] } ], "source": [ "for entry in test_data[:3]:\n", "    # Build the evaluation prompt\n", "    prompt = (\n", "        f\"Given the input `{format_input(entry)}` \"\n", "        f\"and correct output `{entry['output']}`, \"\n", "        f\"score the model response `{entry['model_response']}`\"\n", "        f\" on a scale from 0 to 100, where 100 is the best score. 
\"\n", " )\n", "\n", " print(\"\\nDataset response:\")\n", " print(\">>\", entry['output'])\n", " print(\"\\nModel response:\")\n", " print(\">>\", entry[\"model_response\"])\n", " print(\"\\nScore:\")\n", " print(\">>\", query_model(prompt))\n", " print(\"\\n-------------------------\")" ] }, { "cell_type": "markdown", "id": "b114fd65-9cfb-45f6-ab74-8331da136bf3", "metadata": { "id": "b114fd65-9cfb-45f6-ab74-8331da136bf3" }, "source": [ "- 如我们所见,Llama 3 模型给出了合理的评估\n", "- 如果模型的回答不完全正确,它会根据部分正确的内容给予相应的分数,例如“积云”这个回答。\n", "- 请注意,之前的提示会返回详细的评估结果;我们可以调整提示,使其生成介于0到100之间的整数分数(其中100为最佳),以便计算模型的平均得分。\n", "- 对测试集中的110个条目进行评估大约需要1分钟(在M3 MacBook Air上运行)。" ] }, { "cell_type": "code", "execution_count": 5, "id": "9d7bca69-97c4-47a5-9aa0-32f116fa37eb", "metadata": { "id": "9d7bca69-97c4-47a5-9aa0-32f116fa37eb" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Scoring entries: 100%|████████████████████████| 110/110 [01:10<00:00, 1.57it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Number of scores: 110 of 110\n", "Average score: 50.32\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n" ] } ], "source": [ "# 生成模型得分的函数\n", "def generate_model_scores(json_data, json_key, model=\"llama3\"):\n", " scores = []\n", " \n", " for entry in tqdm(json_data, desc=\"Scoring entries\"):\n", " prompt = (\n", " f\"Given the input `{format_input(entry)}` \"\n", " f\"and correct output `{entry['output']}`, \"\n", " f\"score the model response `{entry[json_key]}`\"\n", " f\" on a scale from 0 to 100, where 100 is the best score. 
\"\n", " f\"Respond with the integer number only.\"\n", " )\n", " score = query_model(prompt, model)\n", " try:\n", " scores.append(int(score))\n", " except ValueError:\n", " print(f\"Could not convert score: {score}\")\n", " continue\n", "\n", " return scores\n", "\n", "\n", "scores = generate_model_scores(test_data, \"model_response\")\n", "print(f\"Number of scores: {len(scores)} of {len(test_data)}\")\n", "print(f\"Average score: {sum(scores)/len(scores):.2f}\\n\")" ] }, { "cell_type": "markdown", "id": "407f08d5-9ada-4301-9ebc-f0533c76d3f2", "metadata": { "id": "407f08d5-9ada-4301-9ebc-f0533c76d3f2" }, "source": [ "- 我们的模型平均得分超过50分,可以将其作为参考,与其他模型进行对比;或尝试其他训练设置来改进模型表现。\n", "- 截至汉化的时候,ollama 在不同操作系统上的结果可能会有所不同,因此您得到的数值可能与上面显示的略有差异。" ] }, { "cell_type": "markdown", "id": "6408768b-2784-44f1-b48e-aed0c1eb9b94", "metadata": { "id": "6408768b-2784-44f1-b48e-aed0c1eb9b94" }, "source": [ "- 以下看作参考\n", " - Llama 3 8B 基础模型得分为 58.51\n", " - Llama 3 8B 指令微调模型得分为 82.65" ] }, { "cell_type": "markdown", "id": "412d7325-284a-446c-92a1-5aa8acc52dee", "metadata": { "id": "412d7325-284a-446c-92a1-5aa8acc52dee" }, "source": [ "## 7.9 总结" ] }, { "cell_type": "markdown", "id": "tIbNMluCDjVM", "metadata": { "id": "tIbNMluCDjVM" }, "source": [ "### 7.9.1 展望\n", "\n", "- 本章标志着本书的结束。\n", "- 我们覆盖了LLM开发的主要步骤:实现LLM架构、预训练LLM以及微调模型。\n", "\n", "\n", "\n", "- 正如本章所述,指令微调后,有时会进行一个可选步骤——偏好微调.\n", "- 偏好微调有助于定制模型,更好地符合用户偏好。如果您感兴趣,可以查看 [../04_preference-tuning-with-dpo](../04_preference-tuning-with-dpo) 文件夹。\n", "\n", "- 本GitHub仓库还包含大量额外的附加资料,您可能会喜欢;有关更多信息,请查看本仓库README页面中的 [Bonus Material](https://github.com/rasbt/LLMs-from-scratch?tab=readme-ov-file#bonus-material) 部分。\n", "\n", "### 7.9.2 紧跟时代潮流\n", "\n", "- 本节没有代码内容。\n", "\n", "### 7.9.3 寄语\n", "\n", "- 我希望您喜欢这个从零实现LLM的过程,以及编写预训练和微调代码的过程。\n", "- 在我看来,从零构建大模型是理解LLM如何工作的最佳方式;我希望您通过这种方式获得了更深入的理解。\n", "- 虽然本书是为了教学目的,但您可能有兴趣将不同的、更强大的LLM应用于实际应用中。\n", " - 
为此,您可以考虑使用一些流行工具,如axolotl([https://github.com/OpenAccess-AI-Collective/axolotl](https://github.com/OpenAccess-AI-Collective/axolotl))或LitGPT([https://github.com/Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt)),后者是我参与开发的工具。" ] }, { "cell_type": "markdown", "id": "f9853e7f-a81a-4806-9728-be1690807185", "metadata": { "id": "f9853e7f-a81a-4806-9728-be1690807185" }, "source": [ "## 总结与收获\n", "\n", "- 请参阅 [./gpt_instruction_finetuning.py](./gpt_instruction_finetuning.py) 脚本,它是一个自包含的指令微调脚本。\n", "- [./ollama_evaluate.py](./ollama_evaluate.py) 是基于第7.8节的独立脚本,通过Ollama和Llama 3评估包含“output”和“response”键的JSON文件。\n", "- [./load-finetuned-model.ipynb](./load-finetuned-model.ipynb) 笔记本演示了如何在新会话中加载微调后的模型。\n", "- 您可以在 [./exercise-solutions.ipynb](./exercise-solutions.ipynb) 找到习题解答。" ] }, { "cell_type": "markdown", "id": "b9cc51ec-e06c-4470-b626-48401a037851", "metadata": {}, "source": [ "## 接下来做什么?\n", "\n", "- 恭喜您完成了本书!如果您在寻找更多资源,我在这个GitHub仓库中添加了几个附加部分,您可能会感兴趣。\n", "- 附加资料的完整列表可以在主README的 [Bonus Material](https://github.com/rasbt/LLMs-from-scratch?tab=readme-ov-file#bonus-material) 部分查看。\n", "- 在这里,我想特别强调几个我喜欢的部分:\n", " 1. [从零开始的直接偏好优化(DPO)用于LLM对齐](../04_preference-tuning-with-dpo/dpo-from-scratch.ipynb) 实现了一种流行的偏好调优机制,使本章的模型与人类偏好更加契合。\n", " 2. [从零开始实现Llama 3.2](../../ch05/07_gpt_to_llama/standalone-llama32.ipynb),这是Meta AI流行的Llama 3.2的从零实现,包括加载官方的预训练权重;如果您想做一些额外的实验,可以将每章中的 `GPTModel` 模型替换为 `Llama3Model` 类(它可以作为1:1替代)。\n", " 3. [将GPT转换为Llama](../../ch05/07_gpt_to_llama) 包含逐步指南的代码,解释了GPT-2和各种Llama模型之间的区别。\n", " 4. [理解嵌入层和线性层的区别](../../ch02/03_bonus_embedding-vs-matmul/embeddings-and-linear-layers.ipynb) 是一个概念性解释,说明了我们在LLM输入阶段使用的PyTorch中的 `Embedding` 层在数学上等价于对独热编码数据应用的线性层。\n", "- 祝您学习愉快!"
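] }, { "cell_type": "markdown", "id": "score-parse-hypothetical", "metadata": {}, "source": [
 "**补充(非书中内容,仅供参考):** 前文的 `generate_model_scores` 直接用 `int(score)` 转换模型回复,一旦 Llama 3 在分数旁附带解释文字,转换就会抛出 `ValueError` 并跳过该条目。下面是一段假设性的示意代码(其中 `extract_score` 是本补充自拟的辅助函数,并非书中或仓库提供的 API),用正则表达式提取回复中出现的第一个整数并做范围检查:\n",
 "\n",
 "```python\n",
 "import re\n",
 "\n",
 "def extract_score(reply):\n",
 "    # 提取回复中出现的第一个整数;缺失或超出 0-100 范围时返回 None\n",
 "    match = re.search(r\"\\d+\", reply)\n",
 "    if match is None:\n",
 "        return None\n",
 "    score = int(match.group())\n",
 "    return score if 0 <= score <= 100 else None\n",
 "```\n",
 "\n",
 "例如,`extract_score(\"I'd rate it 85 out of 100.\")` 会返回 85;完全无法解析的回复返回 `None`,可在计算平均分之前将其过滤掉。"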
] } ], "metadata": { "accelerator": "GPU", "colab": { "gpuType": "A100", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.6" } }, "nbformat": 4, "nbformat_minor": 5 }