{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "view-in-github", "colab_type": "text" }, "source": [ "\"Open" ] }, { "cell_type": "markdown", "metadata": { "id": "Lbbmx_Vjl0zo" }, "source": [ "### w-okada's Voice Changer | **Google Colab**\n", "\n", "---\n", "\n", "##**READ ME - VERY IMPORTANT**\n", "\n", "This is an attempt to run [Realtime Voice Changer](https://github.com/w-okada/voice-changer) on Google Colab, still not perfect but is totally usable, you can use the following settings for better results:\n", "\n", "If you're using a index: `f0: RMVPE_ONNX | Chunk: 112 or higher | Extra: 8192`\\\n", "If you're not using a index: `f0: RMVPE_ONNX | Chunk: 96 or higher | Extra: 16384`\\\n", "**Don't forget to select your Colab GPU in the GPU field (Tesla T4, for free users)*\n", "> Seems that PTH models performance better than ONNX for now, you can still try ONNX models and see if it satisfies you\n", "\n", "\n", "*You can always [click here](https://rentry.co/VoiceChangerGuide#gpu-chart-for-known-working-chunkextra\n", ") to check if these settings are up-to-date*\n", "

\n", "\n", "---\n", "\n", "###Always use Colab GPU (**VERY VERY VERY IMPORTANT!**)\n", "You need to use a Colab GPU so the Voice Changer can work faster and better\\\n", "Use the menu above and click on **Runtime** » **Change runtime** » **Hardware acceleration** to select a GPU (**T4 is the free one**)\n", "\n", "---\n", "\n", "
\n", "\n", "# **Credits and Support**\n", "Realtime Voice Changer by [w-okada](https://github.com/w-okada)\\\n", "Colab files updated by [rafacasari](https://github.com/Rafacasari)\\\n", "Recommended settings by [Raven](https://github.com/ravencutie21)\\\n", "Modified again by [Hina](https://huggingface.co/HinaBl)\n", "\n", "Need help? [AI Hub Discord](https://discord.gg/aihub) » ***#help-realtime-vc***\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "RhdqDSt-LfGr" }, "outputs": [], "source": [ "# @title **[Optional]** Connect to Google Drive\n", "# @markdown Using Google Drive can improve load times a bit and your models will be stored, so you don't need to re-upload every time that you use.\n", "import os\n", "from google.colab import drive\n", "\n", "if not os.path.exists('/content/drive'):\n", " drive.mount('/content/drive')\n", "\n", "%cd /content/drive/MyDrive" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "86wTFmqsNMnD", "cellView": "form" }, "outputs": [], "source": [ "# @title **[1]** Clone repository and install dependencies\n", "# @markdown This first step will download the latest version of Voice Changer and install the dependencies. **It will take around 2 minutes to complete.**\n", "import os\n", "import time\n", "import subprocess\n", "import threading\n", "import shutil\n", "import base64\n", "import codecs\n", "\n", "from IPython.display import clear_output, Javascript\n", "\n", "externalgit=codecs.decode('uggcf://tvguho.pbz/j-bxnqn/ibvpr-punatre.tvg','rot_13')\n", "rvctimer=codecs.decode('uggcf://tvguho.pbz/uvanoy/eipgvzre.tvg','rot_13')\n", "pathloc=codecs.decode('ibvpr-punatre','rot_13')\n", "!git clone --depth 1 $externalgit &> /dev/null\n", "\n", "def update_timer_and_print():\n", " global timer\n", " while True:\n", " hours, remainder = divmod(timer, 3600)\n", " minutes, seconds = divmod(remainder, 60)\n", " timer_str = f'{hours:02}:{minutes:02}:{seconds:02}'\n", " print(f'\\rTimer: {timer_str}', end='', flush=True) # Print without a newline\n", " time.sleep(1)\n", " timer += 1\n", "timer = 0\n", "threading.Thread(target=update_timer_and_print, daemon=True).start()\n", "\n", "# os.system('cls')\n", "clear_output()\n", "!rm -rf rvctimer\n", "!git clone --depth 1 $rvctimer\n", "\n", "\n", "%cd $pathloc/server/\n", "\n", "print(\"\\033[92mSuccessfully cloned the repository\")\n", "\n", "\n", "\n", "!apt-get install libportaudio2 &> /dev/null --quiet\n", "!pip install pyworld onnxruntime-gpu uvicorn faiss-gpu fairseq jedi google-colab moviepy decorator==4.4.2 sounddevice numpy==1.23.5 pyngrok --quiet\n", "print(\"\\033[92mInstalling Requirements!\")\n", "clear_output()\n", "!pip install -r requirements.txt --no-build-isolation --quiet\n", "# Maybe install Tensor packages?\n", "#!pip install torch-tensorrt\n", "#!pip install TensorRT\n", "print(\"\\033[92mSuccessfully installed all packages!\")\n", "# os.system('cls')\n", "clear_output()\n", "print(\"\\033[92mFinished, please continue to the next cell\")" ] }, { "cell_type": "code", "source": [ "\n", "#@title #**[Optional]** Upload a voice model (Run this before running the Voice Changer)**[Currently Under Construction]**\n", "#@markdown ---\n", "import os\n", "import json\n", "from IPython.display import Image\n", "\n", "\n", "#@markdown #Model Number `(Default is 0)` you can add multiple models as long as you change the number!\n", "model_number = \"0\" #@param ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199']\n", "\n", "!rm -rf model_dir/$model_number\n", "#@markdown ---\n", "#@markdown #**[Optional]** Add an icon to the model `(can be any image/leave empty for no image)`\n", "icon_link = \"https://cdn.discordapp.com/attachments/1144453160912572506/1144453161210351697/mika.png?ex=65163190&is=6514e010&hm=6cfc987d42e448b2912f5225e2c865df92d688c8dc46a135c2cca32682a3f3ea&\" #@param {type:\"string\"}\n", "#@markdown ---\n", "icon_link = '\"'+icon_link+'\"'\n", "!mkdir model_dir\n", "!mkdir model_dir/$model_number\n", "#@markdown #Put your model's download link here `(must be a zip file)`\n", "model_link = \"https://huggingface.co/Kit-Lemonfoot/kitlemonfoot_rvc_models/resolve/main/Mika%20Melatika%20(Speaking)(KitLemonfoot).zip\" #@param {type:\"string\"}\n", "model_link = '\"'+model_link+'\"'\n", "!curl -L $model_link > model.zip\n", "\n", "\n", "# Conditionally set the iconFile based on whether icon_link is empty\n", "if icon_link:\n", " iconFile = \"icon.png\"\n", " !curl -L $icon_link > model_dir/$model_number/icon.png\n", "else:\n", " iconFile = \"\"\n", " print(\"icon_link is empty, so no icon file will be downloaded.\")\n", "#@markdown ---\n", "\n", "\n", "!unzip model.zip -d model_dir/$model_number\n", "\n", "# Checks all the files in model_number and puts it outside of it\n", "\n", "!mv model_dir/$model_number/*/* model_dir/$model_number/\n", "!rm -rf model_dir/$model_number/*/\n", "\n", "# if theres a folder in the number,\n", "# take all the files in the folder and put it outside of that folder\n", "\n", "\n", "#@markdown #**Model Voice Convertion Setting**\n", "#@markdown Tune `-12=F-M`**||**`0=M-M/F-F`**||**`12=M-F`\n", "Tune = 12 #@param {type:\"slider\",min:-50,max:50,step:1}\n", "#@markdown Index `0=Default`**||**`1=Replicate Accent`\n", "Index = 0 #@param {type:\"slider\",min:0,max:1,step:0.1}\n", "#@markdown ---\n", "\n", "# @markdown #**[Optional]** Parameter file for your voice model\n", "#@markdown _(must be named params.json)_ (Leave Empty for Default)\n", "param_link = \"\" #@param {type:\"string\"}\n", "if param_link == \"\":\n", " from voice_changer.RVC.RVCModelSlotGenerator import RVCModelSlotGenerator\n", " from voice_changer.VoiceChangerParamsManager import VoiceChangerParamsManager\n", " from voice_changer.utils.LoadModelParams import LoadModelParamFile, LoadModelParams\n", " from voice_changer.utils.VoiceChangerParams import VoiceChangerParams\n", "\n", " model_dir1 = \"model_dir/\"+model_number+\"/\"\n", "\n", " is_pth = True # Set this to True if you want to search for .pth files, or False for .onnx files\n", " file_extension = \".pth\" if is_pth else \".onnx\"\n", "\n", " # pth_files = [f for f in os.listdir(model_dir1) if f.endswith(file_extension)]\n", "\n", " pth_files = [f for f in os.listdir(model_dir1) if f.endswith(\".pth\") or f.endswith(\".onnx\")]\n", " print(pth_files)\n", " index_files = [f for f in os.listdir(model_dir1) if f.endswith(\".index\")]\n", "\n", "\n", "\n", "\n", " if pth_files:\n", " model_name = pth_files[0].replace(\".pth\", \"\")\n", "\n", " else:\n", " model_name = \"Null\"\n", " if index_files:\n", " index_name = index_files[0].replace(\".index\", \"\")\n", " else:\n", " index_name = \"\"\n", "\n", " original_string = str(pth_files)\n", " string_pth_files = original_string[2:-2]\n", " print(\"IM A STRING\"+original_string)\n", "\n", " print(model_name)\n", " voiceChangerParams = VoiceChangerParams(\n", " model_dir=\"./model_dir/\"+model_number,\n", " content_vec_500=\"\",\n", " content_vec_500_onnx=\"\",\n", " content_vec_500_onnx_on=\"\",\n", " hubert_base=\"\",\n", " hubert_base_jp=\"\",\n", " hubert_soft=\"\",\n", " nsf_hifigan=\"\",\n", " crepe_onnx_full=\"\",\n", " crepe_onnx_tiny=\"\",\n", " rmvpe=\"\",\n", " rmvpe_onnx=\"\",\n", " sample_mode=\"\"\n", " )\n", " vcparams = VoiceChangerParamsManager.get_instance()\n", " vcparams.setParams(voiceChangerParams)\n", "\n", " file = LoadModelParamFile(\n", " name=string_pth_files,\n", " kind=\"rvcModel\",\n", " dir=\"\",\n", " )\n", "\n", " loadParam = LoadModelParams(\n", " voiceChangerType=\"RVC\",\n", " files=[file],\n", " slot=\"\",\n", " isSampleMode=False,\n", " sampleId=\"\",\n", " params={},\n", " )\n", " slotInfo = RVCModelSlotGenerator.loadModel(loadParam)\n", " print(slotInfo.samplingRate)\n", "\n", "#----------------Make the Json File-----------\n", " params_content = {\n", " \"slotIndex\": -1,\n", " \"voiceChangerType\": \"RVC\",\n", " \"name\": model_name,\n", " \"description\": \"\",\n", " \"credit\": \"\",\n", " \"termsOfUseUrl\": \"\",\n", " \"iconFile\": iconFile,\n", " \"speakers\": {\n", " \"0\": \"target\"\n", " },\n", " \"modelFile\": string_pth_files,\n", " \"indexFile\": f\"{index_name}.index\",\n", " \"defaultTune\": Tune,\n", " \"defaultIndexRatio\": Index,\n", " \"defaultProtect\": 0.5,\n", " \"isONNX\": False,\n", " \"modelType\": \"pyTorchRVCv2\",\n", " \"samplingRate\": slotInfo.samplingRate,\n", " \"f0\": True,\n", " \"embChannels\": 768,\n", " \"embOutputLayer\": 12,\n", " \"useFinalProj\": False,\n", " \"deprecated\": False,\n", " \"embedder\": \"hubert_base\",\n", " \"sampleId\": \"\"\n", " }\n", "\n", " # Write the content to params.json\n", " with open(f\"{model_dir1}/params.json\", \"w\") as param_file:\n", " json.dump(params_content, param_file)\n", "\n", "\n", "# !unzip model.zip -d model_dir/0/\n", "clear_output()\n", "print(\"\\033[92mModel with the name of \"+model_name+\" has been Imported to slot \"+model_number)\n", "Image(url=icon_link)" ], "metadata": { "id": "_ZtbKUVUgN3G", "cellView": "form" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "#@title Delete a model `[Only Use When Needed]`\n", "#@markdown ---\n", "#@markdown Select which slot you want to delete\n", "Delete_Slot = \"0\" #@param ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199']\n", "# {type:\"slider\",min:0,max:1,step:0.1}\n", "\n", "!rm -rf model_dir/$Model_Number\n", "print(\"\\033[92mSuccessfully removed Model is slot \"+Delete_Slot)\n" ], "metadata": { "id": "P9g6rG1-KUwt", "cellView": "form" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lLWQuUd7WW9U", "cellView": "form" }, "outputs": [], "source": [ "# @title **[2]** Start Server **using ngrok** (Recommended | **need a ngrok account**)\n", "# @markdown This cell will start the server, the first time that you run it will download the models, so it can take a while (~1-2 minutes)\n", "\n", "# @markdown ---\n", "# @markdown You'll need a ngrok account, but **it's free**!\n", "# @markdown ---\n", "# @markdown **1** - Create a **free** account at [ngrok](https://dashboard.ngrok.com/signup)\\\n", "# @markdown **2** - If you didn't logged in with Google or Github, you will need to **verify your e-mail**!\\\n", "# @markdown **3** - Click [this link](https://dashboard.ngrok.com/get-started/your-authtoken) to get your auth token, copy it and place it here:\n", "from pyngrok import conf, ngrok\n", "\n", "f0_det= \"rmvpe_onnx\" #@param [\"rmvpe_onnx\",\"rvc\"]\n", "Token = 'YOUR_TOKEN_HERE' # @param {type:\"string\"}\n", "# @markdown **4** - Still need further tests, but maybe region can help a bit on latency?\\\n", "# @markdown `Default Region: us - United States (Ohio)`\n", "Region = \"us - United States (Ohio)\" # @param [\"ap - Asia/Pacific (Singapore)\", \"au - Australia (Sydney)\",\"eu - Europe (Frankfurt)\", \"in - India (Mumbai)\",\"jp - Japan (Tokyo)\",\"sa - South America (Sao Paulo)\", \"us - United States (Ohio)\"]\n", "MyConfig = conf.PyngrokConfig()\n", "\n", "MyConfig.auth_token = Token\n", "MyConfig.region = Region[0:2]\n", "\n", "conf.get_default().authtoken = Token\n", "conf.get_default().region = Region[0:2]\n", "\n", "conf.set_default(MyConfig);\n", "\n", "# @markdown ---\n", "# @markdown If you want to automatically clear the output when the server loads, check this option.\n", "Clear_Output = True # @param {type:\"boolean\"}\n", "\n", "#@markdown ---\n", "#@markdown If you want to use a custom background for the voice changer\n", "Use_Custom_BG=False #@param{type:\"boolean\"}\n", "BG_URL=\"https://w.wallha.com/ws/14/cMmpo5vn.jpg\" #@param{type:\"string\"}\n", "#@markdown Text colors can be hex ``#101010`` or name of color ``black`` (css)\n", "Text_Color=\"green\" #@param{type:\"string\"}\n", "if Use_Custom_BG==True:\n", " if BG_URL==\"\":\n", " !cp -f rvctimer/index.html $pathloc/client/demo/dist/\n", " else:\n", " html_template = f'''\n", " \n", " \n", " \n", " \n", " Voice Changer Client Demo\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " '''\n", " with open('index.html', 'w') as file:\n", " file.write(html_template)\n", " !mkdir ../client/demo/dist/temp/\n", " !mv ../client/demo/dist/index.html ../client/demo/dist/temp/index.html\n", " !mv index.html ../client/demo/dist/\n", "else:\n", " !cp -f ../client/demo/dist/temp/index.html ../client/demo/dist/index.html\n", "\n", "mainpy=codecs.decode('ZZIPFreireFVB.cl','rot_13')\n", "\n", "import portpicker, socket, urllib.request\n", "PORT = portpicker.pick_unused_port()\n", "\n", "from pyngrok import ngrok\n", "# Edited ⏬⏬\n", "ngrokConnection = ngrok.connect(PORT)\n", "public_url = ngrokConnection.public_url\n", "\n", "def iframe_thread(port):\n", " while True:\n", " time.sleep(0.5)\n", " sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n", " result = sock.connect_ex(('127.0.0.1', port))\n", " if result == 0:\n", " break\n", " sock.close()\n", " clear_output()\n", " print(\"------- SERVER READY! -------\")\n", " print(\"Your server is available at:\")\n", " print(public_url)\n", " print(\"-----------------------------\")\n", " # display(Javascript('window.open(\"{url}\", \\'_blank\\');'.format(url=public_url)))\n", "\n", "print(PORT)\n", "\n", "\n", "\n", "threading.Thread(target=iframe_thread, daemon=True, args=(PORT,)).start()\n", "\n", "\n", "!python3 $mainpy \\\n", " -p {PORT} \\\n", " --https False \\\n", " --content_vec_500 pretrain/checkpoint_best_legacy_500.pt \\\n", " --content_vec_500_onnx pretrain/content_vec_500.onnx \\\n", " --content_vec_500_onnx_on true \\\n", " --hubert_base pretrain/hubert_base.pt \\\n", " --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \\\n", " --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \\\n", " --nsf_hifigan pretrain/nsf_hifigan/model \\\n", " --crepe_onnx_full pretrain/crepe_onnx_full.onnx \\\n", " --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \\\n", " --rmvpe pretrain/rmvpe.pt \\\n", " --model_dir model_dir \\\n", " --samples samples.json\n", "\n" ] }, { "cell_type": "markdown", "source": [ "![](https://i.pinimg.com/474x/de/72/9e/de729ecfa41b69901c42c82fff752414.jpg)\n", "![](https://i.pinimg.com/474x/de/72/9e/de729ecfa41b69901c42c82fff752414.jpg)" ], "metadata": { "id": "2Uu1sTSwTc7q" } } ], "metadata": { "colab": { "provenance": [], "private_outputs": true, "gpuType": "T4", "include_colab_link": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" }, "accelerator": "GPU" }, "nbformat": 4, "nbformat_minor": 0 }