Introduce the Whisper large-v3-turbo model into faster-whisper and whisper_streaming

Introduce Whisper large-v3-turbo into faster-whisper

All you have to do is follow the hints discussed in the GitHub issue below:

github.com

  1. Fetch the model into a local directory
from huggingface_hub import snapshot_download

# Download the CTranslate2 conversion of Whisper large-v3-turbo from Hugging Face
repo_id = "deepdml/faster-whisper-large-v3-turbo-ct2"
local_dir = "faster-whisper-large-v3-turbo-ct2"
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="model")
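Alternatively, recent faster-whisper releases can resolve a Hugging Face repo id on their own, so the explicit download above can be skipped. A minimal sketch, assuming your faster-whisper version supports downloading from the Hub:

from faster_whisper import WhisperModel

# Passing the repo id directly lets faster-whisper download and cache the model itself
model = WhisperModel("deepdml/faster-whisper-large-v3-turbo-ct2")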
  2. Specify faster-whisper-large-v3-turbo-ct2 when creating WhisperModel
from faster_whisper import WhisperModel

# Point WhisperModel at the directory downloaded in step 1
model = WhisperModel("faster-whisper-large-v3-turbo-ct2")
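To confirm the model loads and runs end to end, here is a minimal transcription sketch using faster-whisper's transcribe API (audio.wav is a placeholder for any audio file on hand):

from faster_whisper import WhisperModel

model = WhisperModel("faster-whisper-large-v3-turbo-ct2")

# transcribe() returns a lazy generator of segments plus metadata about the audio
segments, info = model.transcribe("audio.wav", beam_size=5)
print("Detected language:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))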


Introduce Whisper large-v3-turbo into whisper_streaming

  1. Add faster-whisper-large-v3-turbo-ct2 to the choices of the --model argument
    parser.add_argument('--model', type=str, default='large-v2',
                        choices="tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,faster-whisper-large-v3-turbo-ct2".split(","),
                        help="Name size of the Whisper model to use (default: large-v2). The model is automatically downloaded from the model hub if not present in model cache dir.")
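To sanity-check the patched option without starting the whole server, the add_argument call can be reproduced standalone (a minimal sketch; the help text is abbreviated here):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default='large-v2',
                    choices="tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,faster-whisper-large-v3-turbo-ct2".split(","),
                    help="Name size of the Whisper model to use (default: large-v2).")

# The new value now passes argparse's choices validation
args = parser.parse_args(["--model", "faster-whisper-large-v3-turbo-ct2"])
assert args.model == "faster-whisper-large-v3-turbo-ct2"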
  2. Specify faster-whisper-large-v3-turbo-ct2 as the model when calling whisper_online_server.py
python whisper_online_server.py --model faster-whisper-large-v3-turbo-ct2 --backend faster-whisper  # and so on
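Once the server is up, you can stream microphone audio to it, for example with the arecord-and-netcat pipeline suggested in the whisper_streaming README (43007 is the server's default port; adjust it if you pass a different --port):

arecord -f S16_LE -c1 -r 16000 -t raw -D default | nc localhost 43007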