speechClient

Interface with pretrained model or third-party speech service

Since R2022b

    Description

    Use a speechClient object to interface with a wav2vec 2.0 pretrained speech-to-text model or third-party cloud-based speech services. Use the object with speech2text or text2speech.

    Note

    To use speechClient to interface with third-party speech services, you must download the extended Audio Toolbox™ functionality from File Exchange. The File Exchange submission includes a tutorial to get started with the third-party services.

    Using wav2vec 2.0 requires Deep Learning Toolbox™ and installing the pretrained model.

    Creation

    Description

    clientObj = speechClient(name) returns a speechClient object that interfaces with the specified pretrained model or speech service.

    clientObj = speechClient(___,Name=Value) sets Properties using one or more name-value arguments.
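For example, assuming the wav2vec 2.0 pretrained model is installed, you can set properties at creation using name-value arguments:

```matlab
% Create a speech-to-text client that segments the transcript by word
% and includes timestamps. Requires the installed wav2vec 2.0 model
% and Deep Learning Toolbox.
transcriber = speechClient("wav2vec2.0",Segmentation="word",TimeStamps=true);
```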

    Input Arguments

name –– Name of the pretrained model or speech service, specified as "wav2vec2.0", "Google", "IBM", "Microsoft", or "Amazon".

• "wav2vec2.0" –– Use a pretrained wav2vec 2.0 model. wav2vec 2.0 performs only speech-to-text transcription, so you cannot use it with text2speech.

    • "Google" –– Interface with the Google® Cloud Speech-to-Text and Text-to-Speech service.

    • "IBM" –– Interface with the IBM® Watson Speech to Text and Text to Speech service.

    • "Microsoft" –– Interface with the Microsoft® Azure® Speech service.

    • "Amazon" –– Interface with the Amazon® Transcribe and Amazon Polly services.

    Using the wav2vec 2.0 pretrained model requires Deep Learning Toolbox and installing the pretrained wav2vec 2.0 model. If the model is not installed, calling speechClient with "wav2vec2.0" provides a link to download and install the model.

    To use any of the third-party speech services (Google, IBM, Microsoft, or Amazon), you must download the extended Audio Toolbox functionality from File Exchange. The File Exchange submission includes a tutorial to get started with the third-party services.

    Data Types: string | char
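As a sketch, creating clients for the third-party services looks like the following. These calls assume the extended Audio Toolbox functionality from File Exchange is installed and that credentials for each service are already configured per its tutorial:

```matlab
% Interface with cloud speech services (requires the File Exchange
% extension and configured service credentials).
googleClient = speechClient("Google");   % Google Cloud Speech-to-Text and Text-to-Speech
amazonClient = speechClient("Amazon");   % Amazon Transcribe and Amazon Polly
```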

    Output Arguments

clientObj –– Client object to use with speech2text to transcribe speech in audio signals to text, or with text2speech to synthesize speech signals from text.

    Properties

Segmentation –– Segmentation of the output transcript, specified as "word", "sentence", or "none".

    This property applies only to the wav2vec 2.0 pretrained model and the Amazon speech service.

    • "word" –– speech2text returns the transcript as a table where each word is in its own row. This is the default for the wav2vec 2.0 pretrained model.

    • "sentence" –– speech2text returns the transcript as a table where each sentence is in its own row. The wav2vec 2.0 pretrained model does not support this option.

    • "none" –– speech2text returns a string containing the entire transcript. This is the default for the Amazon speech service.

    Data Types: string | char
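A minimal sketch of the effect of the Segmentation property, assuming the wav2vec 2.0 model is installed and using the speech_dft.wav file shipped with the toolbox:

```matlab
[y,fs] = audioread("speech_dft.wav");

% Segmentation="word": transcript is a table, one word per row.
wordClient = speechClient("wav2vec2.0",Segmentation="word");
wordTable  = speech2text(wordClient,y,fs);

% Segmentation="none": transcript is a single string.
fullClient = speechClient("wav2vec2.0",Segmentation="none");
fullString = speech2text(fullClient,y,fs);
```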

TimeStamps –– Include timestamps of transcribed speech in the transcript, specified as true or false. If you specify TimeStamps as true, speech2text includes an additional column in the transcript table that contains the timestamps. When using the wav2vec 2.0 pretrained model, the speech2text function determines the timestamps using the algorithm described in [2].

    This property applies only if you set the Segmentation property to "word" or "sentence".

    Data Types: logical
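For example, the following sketch requests per-word timestamps from the wav2vec 2.0 model (which must be installed):

```matlab
% With TimeStamps=true and word-level segmentation, the transcript
% table gains an additional column containing timestamps.
transcriber = speechClient("wav2vec2.0",Segmentation="word",TimeStamps=true);
[y,fs] = audioread("speech_dft.wav");
transcript = speech2text(transcriber,y,fs);
```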

    Connection timeout, specified as a nonnegative scalar in seconds. The timeout specifies the time to wait for the initial server connection to the third-party speech service.

    This property applies only to the third-party speech services.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
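A sketch of setting a longer connection timeout for a cloud service. The property name ConnectTimeout used below is an assumption; check the displayed properties of your client object for the exact name:

```matlab
% Allow up to 30 seconds for the initial server connection.
% ConnectTimeout is an assumed property name; it applies only to
% the third-party speech services.
client = speechClient("Google",ConnectTimeout=30);
```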

    Object Functions

    Note

    For the third-party speech services, you can configure server-specific options using the following functions. See the documentation for the specific service for option names and values.

    setOptions –– Set server options
    getOptions –– Get server options
    clearOptions –– Remove all server options
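The following sketch shows how the option functions fit together. The option name languageCode is a hypothetical Google service option, and the name-value calling form is assumed; consult the service documentation and the File Exchange tutorial for the exact names and syntax:

```matlab
client = speechClient("Google");
setOptions(client,languageCode="en-GB");  % set a server-specific option (hypothetical name)
opts = getOptions(client);                % inspect the currently set options
clearOptions(client)                      % remove all server options
```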

    Examples

    Download and install the pretrained wav2vec 2.0 model for speech-to-text transcription.

    Type speechClient("wav2vec2.0") at the command line. If the pretrained wav2vec 2.0 model is not installed, the function provides a download link. To install the model, click the link to download the file and unzip it to a location on the MATLAB path.

    Alternatively, execute the following commands to download the wav2vec 2.0 model, unzip it to your temporary directory, and then add it to your MATLAB path.

    downloadFile = matlab.internal.examples.downloadSupportFile("audio","wav2vec2/wav2vec2-base-960.zip");
    wav2vecLocation = fullfile(tempdir,"wav2vec");
    unzip(downloadFile,wav2vecLocation)
    addpath(wav2vecLocation)

    Check that the installation is successful by typing speechClient("wav2vec2.0") at the command line. If the model is installed, then the function returns a Wav2VecSpeechClient object.

    speechClient("wav2vec2.0")
    ans = 
      Wav2VecSpeechClient with properties:
    
        Segmentation: 'word'
          TimeStamps: 0
    
    

    Read in an audio file containing speech and listen to it.

    [y,fs] = audioread("speech_dft.wav");
    soundsc(y,fs)

    Create a speechClient object that uses the wav2vec 2.0 pretrained model. This requires installing the pretrained model. If the model is not installed, the function provides a link with instructions to download and install it.

    transcriber = speechClient("wav2vec2.0");

    Use speech2text to obtain a transcription of the audio signal.

    transcript = speech2text(transcriber,y,fs)
    transcript=12×2 table
        Transcript     Confidence
        ___________    __________
    
        "the"           0.97605  
        "discreet"      0.91927  
        "fourier"       0.84546  
        "transform"     0.89922  
        "of"            0.66676  
        "a"             0.50026  
        "real"          0.88796  
        "valued"        0.89913  
        "signal"         0.8041  
        "is"            0.53891  
        "conjugate"     0.98438  
        "symmetric"     0.89333  
    
    

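Because the default word-level segmentation returns one word per row, you can recover a single transcript string by joining the Transcript column, for example:

```matlab
% Join the per-word transcript into one space-separated string.
sentence = strjoin(transcript.Transcript);
```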
    References

    [1] Baevski, Alexei, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. “Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations,” 2020. https://doi.org/10.48550/ARXIV.2006.11477.

    [2] Kürzinger, Ludwig, Dominik Winkelbauer, Lujun Li, Tobias Watzel, and Gerhard Rigoll. “CTC-Segmentation of Large Corpora for German End-to-End Speech Recognition.” In Speech and Computer, edited by Alexey Karpov and Rodmonga Potapova, 12335:267–78. Cham: Springer International Publishing, 2020. https://doi.org/10.1007/978-3-030-60276-5_27.

    Version History

    Introduced in R2022b