{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "0fgOxpmGrOvn"
},
"source": [
"##### Copyright 2026 Google LLC."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "zxdx4xJxrTfP"
},
"outputs": [],
"source": [
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Qw6ttkOtrQ_D"
},
"source": [
"# Get started with Music generation using Lyria RealTime"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uX1mTOHNO2gz"
},
"source": [
"[Lyria RealTime](https://deepmind.google/technologies/lyria/),\n",
"provides access to a state-of-the-art, real-time, streaming music\n",
"generation model. It allows developers to build applications where users\n",
"can interactively create, continuously steer, and perform instrumental\n",
"music using text prompts.\n",
"\n",
"Lyria RealTime main characteristics are:\n",
"* **Highest quality text-to-audio model**: Lyria RealTime generates high-quality instrumental music (no voice) using the latest models produced by DeepMind.\n",
"* **Non-stopping music**: Using websockets, Lyria RealTime continuously generates music in real time.\n",
"* **Mix and match influences**: Prompt the model to describe musical idea, genre, instrument, mood, or characteristic. The prompts can be mixed to blend\n",
"influences and create unique compositions.\n",
"* **Creative control**: Set the `guidance`, the `bpm`, the `density` of musical notes/sounds, the `brightness` and the `scale` in real time. The model will smoothly transition based on the new input.\n",
"\n",
"Check Lyria RealTime's [documentation](https://ai.google.dev/gemini-api/docs/realtime-music-generation) for more details."
]
},
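{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the creative controls above concrete, here is a hedged sketch of how prompts and a config could be built with the `google-genai` SDK. The type names (`types.WeightedPrompt`, `types.LiveMusicGenerationConfig`) and the field values below are assumptions based on the SDK's live-music API and may differ in your installed version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: assumes the google-genai SDK exposes these live-music types.\n",
"from google.genai import types\n",
"\n",
"# Weighted prompts blend several musical influences at once.\n",
"prompts = [\n",
"    types.WeightedPrompt(text=\"minimal techno\", weight=1.0),\n",
"    types.WeightedPrompt(text=\"warm analog pads\", weight=0.5),\n",
"]\n",
"\n",
"# Generation config: the model transitions smoothly when these change.\n",
"config = types.LiveMusicGenerationConfig(\n",
"    bpm=120,        # beats per minute\n",
"    guidance=4.0,   # how strictly the model follows the prompts\n",
"    density=0.7,    # density of musical notes/sounds\n",
"    brightness=0.6, # tonal brightness\n",
")"
]
},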
{
"cell_type": "markdown",
"metadata": {
"id": "e87HPlxJ7Gih"
},
"source": [
"\n",
"
| \n", " 🪧\n", " | \n", " \n", "\n",
" Lyria RealTime is a preview feature. It is free to use for now with quota limitations, but is subject to change.\n", " | \n",
"
generate_music - The main functionreceive - Collects audio from the API and plays itsend method to send the new prompts/config to the model. Check the [python code sample](./Get_started_LyriaRealTime.ipynb) for such an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7p_5nXFo3ecA"
},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"file_index = 0\n",
"\n",
"async def generate_music(prompts=None, max_chunks=10, config=None):\n",
" async with client.aio.live.music.connect(model=MODEL_ID) as session:\n",
" async def receive():\n",
" global file_index\n",
" # Start a new `.wav` file.\n",
" file_name = f\"audio_{file_index}.wav\"\n",
" with wave_file(file_name) as wav:\n",
" file_index += 1\n",
"\n",
" logger.debug('receive')\n",
"\n",
" # Read chunks from the socket.\n",
" n = 0\n",
" async for message in session.receive():\n",
" n+=1\n",
" if n > max_chunks:\n",
" break\n",
"\n",
" # Write audio the chunk to the `.wav` file.\n",
" audio_chunk = message.server_content.audio_chunks[0].data\n",
" if audio_chunk is not None:\n",
" logger.debug('Got audio_chunk')\n",
" wav.writeframes(audio_chunk)\n",
"\n",
" await asyncio.sleep(10**-12)\n",
"\n",
" # This code example doesn't have a way to receive requests because of colab\n",
" # limitations, check the python code sample for a more complete example\n",
"\n",
" while prompts is None:\n",
" input_prompt = await asyncio.to_thread(input, \"prompt > \")\n",
" prompts = parse_input(input_prompt)\n",
"\n",
" # Sending the provided prompts\n",
" await session.set_weighted_prompts(\n",
" prompts=prompts\n",
" )\n",
"\n",
" # Set initial configuration\n",
" if config is not None:\n",
" await session.set_music_generation_config(config=config)\n",
"\n",
" # Start music generation\n",
" await session.play()\n",
"\n",
" receive_task = asyncio.create_task(receive())\n",
"\n",
" # Don't quit the loop until tasks are done\n",
" await asyncio.gather(receive_task)"
]
},
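{
"cell_type": "markdown",
"metadata": {},
"source": [
"Outside of Colab, a real-time client would keep steering the session while audio streams. Below is a hedged sketch of that idea: `steer` is a hypothetical helper, and `types.WeightedPrompt` is assumed from the `google-genai` SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.genai import types\n",
"\n",
"# Hypothetical helper: replace the active prompts on a live session.\n",
"# The model transitions smoothly to the new influences.\n",
"async def steer(session, text, weight=1.0):\n",
"    await session.set_weighted_prompts(\n",
"        prompts=[types.WeightedPrompt(text=text, weight=weight)]\n",
"    )\n",
"\n",
"# Usage, while `receive()` is still running in the background:\n",
"#   await steer(session, \"uplifting piano house\")"
]
},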
{
"cell_type": "markdown",
"metadata": {
"id": "gGYtiV2N8b2o"
},
"source": [
"# Try Lyria RealTime\n",
"\n",
"Because of Colab limitation you won't be able to experience the \"real time\" part of Lyria RealTime, so all those examples are going to be one-offs prompt to get an audio file.\n",
"\n",
"One thing to note is that the audio will only be played at the end of the session when all would have been written in the wav file. When using the API for real you'll be able to start plyaing as soon as the first chunk arrives. So the longer the duration (using the dedicated parameter) you set, the longer you'll have to wait until you hear something."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kpifUNrOhgNe"
},
"source": [
"## Simple Lyria RealTime example\n",
"Here's first a simple example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e9byyNVthoZv"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"DEBUG:Bidi:receive\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n",
"DEBUG:Bidi:Got audio_chunk\n"
]
},
{
"data": {
"text/html": [
"\n",
" \n",
" "
],
"text/plain": [
"