diff --git a/Realtime_Voice_Changer_on_Colab.ipynb b/Realtime_Voice_Changer_on_Colab.ipynb
index d5f95f1e..d998459c 100644
--- a/Realtime_Voice_Changer_on_Colab.ipynb
+++ b/Realtime_Voice_Changer_on_Colab.ipynb
@@ -1,6 +1,6 @@
 {
  "cells": [
-  {
+  {
    "cell_type": "markdown",
    "metadata": {
     "id": "view-in-github",
@@ -16,35 +16,40 @@
     "id": "Lbbmx_Vjl0zo"
    },
    "source": [
-    "Realtime Voice Changer by w-okada\n",
+    "### w-okada's Voice Changer | **Google Colab**\n",
+    "\n",
     "---\n",
     "\n",
-    "This is a attempt to run [Realtime Voice Changer](https://github.com/w-okada/voice-changer) on Google Colab.\\\n",
-    "Colab File updated by [rafacasari](https://github.com/Rafacasari)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "cellView": "form",
-    "id": "vV1t7PBRm-o6"
-   },
-   "outputs": [],
-   "source": [
-    "# @title **Always use Colab GPU!**\n",
-    "# @markdown **A GPU can be used for faster processing.**\\\n",
-    "# @markdown You can check the Colab GPU running this cell.\\\n",
-    "# @markdown or use the menu **Runtime** -> **Change runtime** -> **Hardware acceleration** to select a GPU, if needed.\n",
-    "!nvidia-smi"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "aLypf-RLIK-w"
-   },
-   "source": [
+    "## **READ ME - VERY IMPORTANT**\n",
+    "\n",
+    "This is an attempt to run [Realtime Voice Changer](https://github.com/w-okada/voice-changer) on Google Colab. It is still not perfect, but it is totally usable, and you can use the following settings for better results:\n",
+    "\n",
+    "If you're using an index: `f0: RMVPE_ONNX | Chunk: 112 or higher | Extra: 8192`\\\n",
+    "If you're not using an index: `f0: RMVPE_ONNX | Chunk: 96 or higher | Extra: 16384`\\\n",
+    "**Don't forget to select your Colab GPU in the GPU field (Tesla T4, for free users)**\n",
+    "> PTH models seem to perform better than ONNX for now; you can still try ONNX models and see if they satisfy you\n",
+    "\n",
+    "\n",
+    "*You can always [click here](https://github.com/YunaOneeChan/Voice-Changer-Settings) to check if these settings are up-to-date*\n",
+    "<br><br>\n",
+    "\n",
+    "---\n",
+    "\n",
+    "### Always use Colab GPU (**VERY VERY VERY IMPORTANT!**)\n",
+    "You need to use a Colab GPU so the Voice Changer can work faster and better.\\\n",
+    "Use the menu above and click on **Runtime** » **Change runtime type** » **Hardware acceleration** to select a GPU (**T4 is the free one**)\n",
+    "\n",
+    "---\n",
+    "\n",
+    "<br>\n",
+    "\n",
+    "# **Credits and Support**\n",
+    "Realtime Voice Changer by [w-okada](https://github.com/w-okada)\\\n",
+    "Colab files updated by [rafacasari](https://github.com/Rafacasari)\\\n",
+    "Recommended settings by [YunaOneeChan](https://github.com/YunaOneeChan)\n",
+    "\n",
+    "Need help? [AI Hub Discord](https://discord.gg/aihub) » ***#help-realtime-vc***\n",
+    "\n",
     "---"
    ]
   },
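This hunk folds the old GPU-check cell into the README text above. If you still want to confirm which accelerator Colab attached before going further, the removed `!nvidia-smi` works in any code cell, as does this minimal sketch using `torch` (which Colab preinstalls); it is not part of the updated notebook:

```python
# Minimal GPU check - stands in for the removed !nvidia-smi cell.
import torch

if torch.cuda.is_available():
    print("GPU attached:", torch.cuda.get_device_name(0))  # e.g. "Tesla T4"
else:
    print("No GPU - use Runtime » Change runtime type » Hardware acceleration")
```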
\n", + "\n", + "---\n", + "\n", + "###Always use Colab GPU (**VERY VERY VERY IMPORTANT!**)\n", + "You need to use a Colab GPU so the Voice Changer can work faster and better\\\n", + "Use the menu above and click on **Runtime** » **Change runtime** » **Hardware acceleration** to select a GPU (**T4 is the free one**)\n", + "\n", + "---\n", + "\n", + "
\n", + "\n", + "# **Credits and Support**\n", + "Realtime Voice Changer by [w-okada](https://github.com/w-okada)\\\n", + "Colab files updated by [rafacasari](https://github.com/Rafacasari)\\\n", + "Recommended settings by [YunaOneeChan](https://github.com/YunaOneeChan)\n", + "\n", + "Need help? [AI Hub Discord](https://discord.gg/aihub) » ***#help-realtime-vc***\n", + "\n", "---" ] }, @@ -57,9 +62,8 @@ }, "outputs": [], "source": [ - "# @title **[Optional]** Setup/Start Google Drive\n", - "# @markdown Using Google Drive can improve load times, since you will not need to re-download every time you use this.\n", - "\n", + "# @title **[Optional]** Connect to Google Drive\n", + "# @markdown Using Google Drive can improve load times a bit and your models will be stored, so you don't need to re-upload every time that you use.\n", "import os\n", "from google.colab import drive\n", "\n", @@ -73,31 +77,18 @@ "cell_type": "code", "execution_count": null, "metadata": { - "cellView": "form", - "id": "86wTFmqsNMnD" + "id": "86wTFmqsNMnD", + "cellView": "form" }, "outputs": [], "source": [ - "# @title **[1]** Clone the repository\n", - "# @markdown Clone the repository using this cell, this process should be really fast.\n", + "# @title **[1]** Clone repository and install dependencies\n", + "# @markdown This first step will download the latest version of Voice Changer and install the dependencies. **It will take around 2 minutes to complete.**\n", "\n", "!git clone --depth 1 https://github.com/w-okada/voice-changer.git &> /dev/null\n", "\n", "%cd voice-changer/server/\n", - "print(\"\\033[92mSuccessfully cloned the repository, proceed to the next cell!\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "LwZAAuqxX7yY" - }, - "outputs": [], - "source": [ - "# @title **[2]** Install Modules\n", - "# @markdown This cell can take a few minutes to run.\n", + "print(\"\\033[92mSuccessfully cloned the repository\")\n", "\n", "!apt-get install libportaudio2 &> /dev/null\n", "!pip install onnxruntime-gpu uvicorn faiss-gpu fairseq jedi google-colab moviepy decorator==4.4.2 sounddevice numpy==1.23.5 pyngrok --quiet\n", @@ -108,23 +99,6 @@ "print(\"\\033[92mSuccessfully installed all packages!\")" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "lQxvqDH2D1MK" - }, - "outputs": [], - "source": [ - "# @title **[3]** Setup ngrok account\n", - "# @markdown Setup a free account at [Ngrok](https://dashboard.ngrok.com/signup), then click [this link](https://dashboard.ngrok.com/get-started/your-authtoken) to get your auth token, copy it and place it here:\n", - "\n", - "NgrokToken = '' # @param {type:\"string\"}\n", - "\n", - "!ngrok config add-authtoken {NgrokToken}" - ] - }, { "cell_type": "code", "execution_count": null, @@ -134,22 +108,114 @@ }, "outputs": [], "source": [ - "# @title **[4]** Start Server\n", - "# @markdown **Run this cell AFTER ngrok setup!**\\\n", - "# @markdown This cell will start the server, the first time that you run it will download the example models, so it can take a while and console may be spammed a little bit. 
@@ -134,22 +108,114 @@ "metadata": {
    },
    "outputs": [],
    "source": [
-    "# @title **[4]** Start Server\n",
-    "# @markdown **Run this cell AFTER ngrok setup!**\\\n",
-    "# @markdown This cell will start the server, the first time that you run it will download the example models, so it can take a while and console may be spammed a little bit. **So be fast to open the server link.**\n",
-    "import portpicker\n",
+    "# @title **[2]** Start Server **using ngrok** (Recommended | **needs an ngrok account**)\n",
+    "# @markdown This cell will start the server. The first time you run it, it will download the models, so it can take a while (~1-2 minutes)\n",
+    "\n",
+    "# @markdown ---\n",
+    "# @markdown You'll need an ngrok account, but **it's free**!\n",
+    "# @markdown ---\n",
+    "# @markdown **1** - Create a **free** account at [ngrok](https://dashboard.ngrok.com/signup)\\\n",
+    "# @markdown **2** - If you didn't log in with Google or GitHub, you will need to **verify your e-mail**!\\\n",
+    "# @markdown **3** - Click [this link](https://dashboard.ngrok.com/get-started/your-authtoken) to get your auth token, copy it and place it here:\n",
+    "from pyngrok import conf, ngrok\n",
+    "\n",
+    "Token = '' # @param {type:\"string\"}\n",
+    "# @markdown **4** - This still needs further testing, but picking the region closest to you may help a bit with latency\\\n",
+    "# @markdown `Default Region: us - United States (Ohio)`\n",
+    "Region = \"us - United States (Ohio)\" # @param [\"ap - Asia/Pacific (Singapore)\", \"au - Australia (Sydney)\",\"eu - Europe (Frankfurt)\", \"in - India (Mumbai)\",\"jp - Japan (Tokyo)\",\"sa - South America (Sao Paulo)\", \"us - United States (Ohio)\"]\n",
+    "\n",
+    "MyConfig = conf.PyngrokConfig()\n",
+    "\n",
+    "MyConfig.auth_token = Token\n",
+    "MyConfig.region = Region[0:2]\n",
+    "\n",
+    "conf.get_default().auth_token = Token\n",
+    "conf.get_default().region = Region[0:2]\n",
+    "\n",
+    "conf.set_default(MyConfig)\n",
+    "\n",
+    "# @markdown ---\n",
+    "# @markdown If you want to automatically clear the output when the server loads, check this option.\n",
+    "Clear_Output = True # @param {type:\"boolean\"}\n",
+    "\n",
+    "import portpicker, subprocess, threading, time, socket, urllib.request\n",
     "PORT = portpicker.pick_unused_port()\n",
+    "\n",
+    "from IPython.display import clear_output, Javascript\n",
+    "\n",
     "from pyngrok import ngrok\n",
-    "public_url = ngrok.connect(PORT).public_url\n",
+    "ngrokConnection = ngrok.connect(PORT)\n",
+    "public_url = ngrokConnection.public_url\n",
+    "\n",
+    "def iframe_thread(port):\n",
+    "  while True:\n",
+    "    time.sleep(0.5)\n",
+    "    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
+    "    result = sock.connect_ex(('127.0.0.1', port))\n",
+    "    sock.close()\n",
+    "    if result == 0:\n",
+    "      break\n",
+    "  if Clear_Output: clear_output()\n",
+    "  print(\"------- SERVER READY! -------\")\n",
+    "  print(\"Your server is available at:\")\n",
+    "  print(public_url)\n",
+    "  print(\"-----------------------------\")\n",
+    "  display(Javascript('window.open(\"{url}\", \\'_blank\\');'.format(url=public_url)))\n",
+    "\n",
+    "threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n",
+    "\n",
+    "!python3 MMVCServerSIO.py \\\n",
+    "  -p {PORT} \\\n",
+    "  --https False \\\n",
+    "  --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \\\n",
+    "  --content_vec_500_onnx pretrain/content_vec_500.onnx \\\n",
+    "  --content_vec_500_onnx_on true \\\n",
+    "  --hubert_base pretrain/hubert_base.pt \\\n",
+    "  --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \\\n",
+    "  --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \\\n",
+    "  --nsf_hifigan pretrain/nsf_hifigan/model \\\n",
+    "  --crepe_onnx_full pretrain/crepe_onnx_full.onnx \\\n",
+    "  --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \\\n",
+    "  --rmvpe pretrain/rmvpe.pt \\\n",
+    "  --model_dir model_dir \\\n",
+    "  --samples samples.json"
+   ]
+  },
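Stripped of the Colab form annotations, the pyngrok setup in this cell reduces to a few calls. A standalone sketch with placeholder token, region, and port — useful for testing credentials outside the notebook; note how the two-letter region code mirrors the `Region[0:2]` slice above:

```python
# Standalone pyngrok sketch - token, region, and port are placeholders.
from pyngrok import conf, ngrok

conf.get_default().auth_token = "YOUR_NGROK_TOKEN"
conf.get_default().region = "eu"  # two-letter code, like Region[0:2] above

tunnel = ngrok.connect(18888)        # returns an NgrokTunnel
print(tunnel.public_url)             # e.g. https://xxxx.eu.ngrok.io
ngrok.disconnect(tunnel.public_url)  # tear the tunnel down when done
```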
-------\")\n", + " print(\"Your server is available at:\")\n", + " print(public_url)\n", + " print(\"-----------------------------\")\n", + " display(Javascript('window.open(\"{url}\", \\'_blank\\');'.format(url=public_url)))\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n", + "\n", + "!python3 MMVCServerSIO.py \\\n", + " -p {PORT} \\\n", + " --https False \\\n", + " --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \\\n", + " --content_vec_500_onnx pretrain/content_vec_500.onnx \\\n", + " --content_vec_500_onnx_on true \\\n", + " --hubert_base pretrain/hubert_base.pt \\\n", + " --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \\\n", + " --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \\\n", + " --nsf_hifigan pretrain/nsf_hifigan/model \\\n", + " --crepe_onnx_full pretrain/crepe_onnx_full.onnx \\\n", + " --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \\\n", + " --rmvpe pretrain/rmvpe.pt \\\n", + " --model_dir model_dir \\\n", + " --samples samples.json" + ] + }, + { + "cell_type": "code", + "source": [ + "# @title **[Optional]** Start Server **using localtunnel** (ngrok alternative | no account needed)\n", + "# @markdown This cell will start the server, the first time that you run it will download the models, so it can take a while (~1-2 minutes)\n", + "\n", + "# @markdown ---\n", + "!npm config set update-notifier false\n", + "!npm install -g localtunnel\n", + "print(\"\\033[92mLocalTunnel installed!\")\n", + "# @markdown If you want to automatically clear the output when the server loads, check this option.\n", + "Clear_Output = True # @param {type:\"boolean\"}\n", + "\n", + "import portpicker, subprocess, threading, time, socket, urllib.request\n", + "PORT = portpicker.pick_unused_port()\n", + "\n", + "from IPython.display import clear_output, Javascript\n", + "\n", + "def iframe_thread(port):\n", + " while True:\n", + " time.sleep(0.5)\n", + " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", + " result = sock.connect_ex(('127.0.0.1', port))\n", + " if result == 0:\n", + " break\n", + " sock.close()\n", + " clear_output()\n", + " print(\"Use the following endpoint to connect to localtunnel:\", urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"))\n", + " p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n", + " for line in p.stdout:\n", + " print(line.decode(), end='')\n", + "\n", + "threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n", "\n", - "print(\"Your server will be available at:\")\n", - "print(public_url)\n", - "print(public_url)\n", - "print(public_url)\n", - "print(\"Starting the server in 3 seconds\")\n", "\n", - "!sleep 3\n", "!python3 MMVCServerSIO.py \\\n", " -p {PORT} \\\n", " --https False \\\n", @@ -166,24 +232,88 @@ " --model_dir model_dir \\\n", " --samples samples.json \\\n", " --colab True" - ] + ], + "metadata": { + "cellView": "form", + "id": "ZwZaCf4BeZi2" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "# In Development | **Need contributors**" + ], + "metadata": { + "id": "iuf9pBHYpTn-" + } + }, + { + "cell_type": "code", + "source": [ + "# @title **[BROKEN]** Start Server using Colab Tunnels (trying to fix this TwT)\n", + "# @markdown **Issue:** Everything starts correctly, but when you try to use the client, you'll see in your browser console a bunch of errors **(Error 500 - Not Allowed.)**\n", + "\n", + "import portpicker, subprocess, 
+  {
+   "cell_type": "markdown",
+   "source": [
+    "# In Development | **Need contributors**"
+   ],
+   "metadata": {
+    "id": "iuf9pBHYpTn-"
+   }
+  },
+  {
+   "cell_type": "code",
+   "source": [
+    "# @title **[BROKEN]** Start Server using Colab Tunnels (trying to fix this TwT)\n",
+    "# @markdown **Issue:** Everything starts correctly, but when you try to use the client, you'll see a bunch of errors in your browser console **(Error 500 - Not Allowed)**\n",
+    "\n",
+    "import portpicker, subprocess, threading, time, socket, urllib.request\n",
+    "PORT = portpicker.pick_unused_port()\n",
+    "\n",
+    "def iframe_thread(port):\n",
+    "  while True:\n",
+    "    time.sleep(0.5)\n",
+    "    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
+    "    result = sock.connect_ex(('127.0.0.1', port))\n",
+    "    sock.close()\n",
+    "    if result == 0:\n",
+    "      break\n",
+    "  from google.colab.output import serve_kernel_port_as_window\n",
+    "  serve_kernel_port_as_window(port)\n",
+    "\n",
+    "threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n",
+    "\n",
+    "!python3 MMVCServerSIO.py \\\n",
+    "  -p {PORT} \\\n",
+    "  --https False \\\n",
+    "  --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \\\n",
+    "  --content_vec_500_onnx pretrain/content_vec_500.onnx \\\n",
+    "  --content_vec_500_onnx_on true \\\n",
+    "  --hubert_base pretrain/hubert_base.pt \\\n",
+    "  --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \\\n",
+    "  --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \\\n",
+    "  --nsf_hifigan pretrain/nsf_hifigan/model \\\n",
+    "  --crepe_onnx_full pretrain/crepe_onnx_full.onnx \\\n",
+    "  --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \\\n",
+    "  --rmvpe pretrain/rmvpe.pt \\\n",
+    "  --model_dir model_dir \\\n",
+    "  --samples samples.json"
+   ],
+   "metadata": {
+    "id": "P2BN-iWvDrMM",
+    "cellView": "form"
+   },
+   "execution_count": null,
+   "outputs": []
   }
  ],
  "metadata": {
-  "accelerator": "GPU",
   "colab": {
    "provenance": [],
-   "include_colab_link": true
+   "private_outputs": true,
+   "include_colab_link": true,
+   "gpuType": "T4",
+   "collapsed_sections": [
+    "iuf9pBHYpTn-"
+   ]
   },
-  "gpuClass": "standard",
   "kernelspec": {
    "display_name": "Python 3",
    "name": "python3"
   },
   "language_info": {
    "name": "python"
-  }
+  },
+  "accelerator": "GPU"
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
\ No newline at end of file
+}
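A possible avenue for the broken cell: `google.colab.output` also provides `serve_kernel_port_as_iframe`, which proxies the port into the notebook output instead of opening a separate window. It is untested here — it may hit the same Error 500 — so treat it as an experiment, not a fix:

```python
# Experiment, not a verified fix, for the [BROKEN] Colab Tunnels cell.
from google.colab.output import serve_kernel_port_as_iframe

# Renders localhost:<port> proxied through Colab inline in the cell
# output; the client's websocket traffic may still be blocked the same way.
serve_kernel_port_as_iframe(PORT, height=600)  # PORT from the cell above
```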