
Speech Command Recognition Using Deep Learning

This example shows how to train a deep learning model that detects the presence of speech commands in audio. The example uses the Speech Commands Dataset [1] to train a convolutional neural network to recognize a given set of commands.

To train the network from scratch, you must first download the data set. If you do not want to download the data set or train the network, then you can load a pretrained network provided with this example and work through the next two sections: Recognize Commands with a Pretrained Network and Detect Commands Using Streaming Audio from a Microphone.

Recognize Commands with a Pretrained Network

Before going into the training process in detail, you will use a pretrained speech recognition network to identify speech commands.

Load the pretrained network.

load('commandNet.mat')

The network is trained to recognize the following speech commands:

  • "yes"

  • "no"

  • "up"

  • "down"

  • "left"

  • "right"

  • "on"

  • "off"

  • "stop"

  • "go"

Load a short speech signal in which a person says "stop".

 [x,fs] = audioread('stop_command.flac');

Listen to the command.

 sound(x,fs)

The pretrained network takes auditory-based spectrograms as input, so you first convert the speech waveform to an auditory-based spectrogram.

Use the helperExtractAuditoryFeatures function to compute the auditory spectrogram. You will learn about the feature extraction in detail later in the example.

auditorySpect = helperExtractAuditoryFeatures(x,fs);
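
helperExtractAuditoryFeatures is a helper function shipped with the example and is not listed here. Based on the feature-extraction steps described later in this example, it essentially pads the signal to one second, extracts a 50-band Bark spectrum, and takes the log with a small offset. The following is a minimal sketch under those assumptions; the hypothetical name helperExtractAuditoryFeaturesSketch and the inlined parameters come from the later sections, not from the shipped helper itself.

function auditorySpect = helperExtractAuditoryFeaturesSketch(x,fs)
% Sketch of the auditory feature extraction used in this example (assumed behavior).
segmentSamples = round(1*fs);                                % one-second segments
numToPadFront  = floor((segmentSamples - size(x,1))/2);
numToPadBack   = ceil((segmentSamples - size(x,1))/2);
xPadded = [zeros(numToPadFront,1,'like',x); x; zeros(numToPadBack,1,'like',x)];

afe = audioFeatureExtractor( ...
    'SampleRate',fs, ...
    'FFTLength',512, ...
    'Window',hann(round(0.025*fs),'periodic'), ...
    'OverlapLength',round(0.025*fs) - round(0.010*fs), ...
    'barkSpectrum',true);
setExtractorParams(afe,'barkSpectrum','NumBands',50,'WindowNormalization',false);

% Bark spectrum with time across rows; take the log with a small offset.
auditorySpect = log10(extract(afe,xPadded) + 1e-6);
end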

Classify the command based on its auditory spectrogram.

command = classify(trainedNet,auditorySpect)
command = 

  categorical

     stop 

The network is trained to classify words that do not belong to this set as "unknown".

Now classify a word ("play") that is not included in the list of commands.

Load the speech signal and listen to it.

x = audioread('play_command.flac');
sound(x,fs)

Compute the auditory spectrogram.

auditorySpect = helperExtractAuditoryFeatures(x,fs);

Classify the signal.

command = classify(trainedNet,auditorySpect)
command = 

  categorical

     unknown 

The network is trained to classify background noise as "background".

Create a one-second signal consisting of random noise.

x = pinknoise(16e3);

Compute the auditory spectrogram.

auditorySpect = helperExtractAuditoryFeatures(x,fs);

Classify the background noise.

command = classify(trainedNet,auditorySpect)
command = 

  categorical

     background 

Detect Commands Using Streaming Audio from a Microphone

Test the pretrained command detection network on streaming audio from your microphone. Try saying one of the commands, for example, yes, no, or stop. Then try saying one of the unknown words, such as Marvin, Sheila, bed, house, cat, bird, or any number from zero to nine.

Specify the classification rate in Hz and create an audio device reader that can read audio from your microphone.

classificationRate = 20;
adr = audioDeviceReader('SampleRate',fs,'SamplesPerFrame',floor(fs/classificationRate));

Initialize a buffer for the audio. Extract the classification labels of the network. Initialize buffers of half a second for the labels and classification probabilities of the streaming audio. Use these buffers to compare the classification results over a longer period of time and to build "agreement" over when a command is detected. Specify thresholds for the decision logic.

audioBuffer = dsp.AsyncBuffer(fs);

labels = trainedNet.Layers(end).Classes;
YBuffer(1:classificationRate/2) = categorical("background");

probBuffer = zeros([numel(labels),classificationRate/2]);

countThreshold = ceil(classificationRate*0.2);
probThreshold = 0.7;

Create a figure and detect commands as long as it exists. To run the loop indefinitely, set timeLimit to Inf. To stop the live detection, simply close the figure.

h = figure('Units','normalized','Position',[0.2 0.1 0.6 0.8]);

timeLimit = 20;

tic
while ishandle(h) && toc < timeLimit

    % Extract audio samples from the audio device and add the samples to
    % the buffer.
    x = adr();
    write(audioBuffer,x);
    y = read(audioBuffer,fs,fs-adr.SamplesPerFrame);

    spec = helperExtractAuditoryFeatures(y,fs);

    % Classify the current spectrogram, save the label to the label buffer,
    % and save the predicted probabilities to the probability buffer.
    [YPredicted,probs] = classify(trainedNet,spec,'ExecutionEnvironment','cpu');
    YBuffer = [YBuffer(2:end),YPredicted];
    probBuffer = [probBuffer(:,2:end),probs(:)];

    % Plot the current waveform and spectrogram.
    subplot(2,1,1)
    plot(y)
    axis tight
    ylim([-1,1])

    subplot(2,1,2)
    pcolor(spec')
    caxis([-4 2.6445])
    shading flat

    % Now do the actual command detection by performing a very simple
    % thresholding operation. Declare a detection and display it in the
    % figure title if all of the following hold: 1) The most common label
    % is not background. 2) At least countThreshold of the latest frame
    % labels agree. 3) The maximum probability of the predicted label is at
    % least probThreshold. Otherwise, do not declare a detection.
    [YMode,count] = mode(YBuffer);

    maxProb = max(probBuffer(labels == YMode,:));
    subplot(2,1,1)
    if YMode == "background" || count < countThreshold || maxProb < probThreshold
        title(" ")
    else
        title(string(YMode),'FontSize',20)
    end

    drawnow
end

Load Speech Commands Data Set

This example uses the Google Speech Commands Dataset [1]. Download the data set and unzip the downloaded file. Set dataFolder to the location of the data.

url = 'https://ssd.mathworks.com/supportfiles/audio/google_speech.zip';
downloadFolder = tempdir;
dataFolder = fullfile(downloadFolder,'google_speech');

if ~exist(dataFolder,'dir')
    disp('Downloading data set (1.4 GB) ...')
    unzip(url,downloadFolder)
end

Create Training Datastore

Create an audioDatastore (Audio Toolbox) that points to the training data set.

ads = audioDatastore(fullfile(dataFolder, 'train'), ...
    'IncludeSubfolders',true, ...
    'FileExtensions','.wav', ...
    'LabelSource','foldernames')
ads = 

  audioDatastore with properties:

                       Files: {
                              ' ...\AppData\Local\Temp\google_speech\train\bed\00176480_nohash_0.wav';
                              ' ...\AppData\Local\Temp\google_speech\train\bed\004ae714_nohash_0.wav';
                              ' ...\AppData\Local\Temp\google_speech\train\bed\004ae714_nohash_1.wav'
                               ... and 51085 more
                              }
                     Folders: {
                              'C:\Users\jibrahim\AppData\Local\Temp\google_speech\train'
                              }
                      Labels: [bed; bed; bed ... and 51085 more categorical]
    AlternateFileSystemRoots: {}
              OutputDataType: 'double'
      SupportedOutputFormats: ["wav"    "flac"    "ogg"    "mp4"    "m4a"]
         DefaultOutputFormat: "wav"

Choose Words to Recognize

Specify the words that you want your model to recognize as commands. Label all words that are not commands as unknown. Labeling words that are not commands as unknown creates a group of words that approximates the distribution of all words other than the commands. The network uses this group to learn the difference between commands and all other words.

To reduce the class imbalance between the known and unknown words and speed up processing, include only a fraction of the unknown words in the training set.

Use subset (Audio Toolbox) to create a datastore that contains only the commands and the subset of unknown words. Count the number of examples belonging to each category.

commands = categorical(["yes","no","up","down","left","right","on","off","stop","go"]);

isCommand = ismember(ads.Labels,commands);
isUnknown = ~isCommand;

includeFraction = 0.2;
mask = rand(numel(ads.Labels),1) < includeFraction;
isUnknown = isUnknown & mask;
ads.Labels(isUnknown) = categorical("unknown");

adsTrain = subset(ads,isCommand|isUnknown);
countEachLabel(adsTrain)
ans =

  11×2 table

     Label     Count
    _______    _____

    down       1842 
    go         1861 
    left       1839 
    no         1853 
    off        1839 
    on         1864 
    right      1852 
    stop       1885 
    unknown    6483 
    up         1843 
    yes        1860 

Create Validation Datastore

Create an audioDatastore (Audio Toolbox) that points to the validation data set. Follow the same steps used to create the training datastore.

ads = audioDatastore(fullfile(dataFolder, 'validation'), ...
    'IncludeSubfolders',true, ...
    'FileExtensions','.wav', ...
    'LabelSource','foldernames')

isCommand = ismember(ads.Labels,commands);
isUnknown = ~isCommand;

includeFraction = 0.2;
mask = rand(numel(ads.Labels),1) < includeFraction;
isUnknown = isUnknown & mask;
ads.Labels(isUnknown) = categorical("unknown");

adsValidation = subset(ads,isCommand|isUnknown);
countEachLabel(adsValidation)
ads = 

  audioDatastore with properties:

                       Files: {
                              ' ...\AppData\Local\Temp\google_speech\validation\bed\026290a7_nohash_0.wav';
                              ' ...\AppData\Local\Temp\google_speech\validation\bed\060cd039_nohash_0.wav';
                              ' ...\AppData\Local\Temp\google_speech\validation\bed\060cd039_nohash_1.wav'
                               ... and 6795 more
                              }
                     Folders: {
                              'C:\Users\jibrahim\AppData\Local\Temp\google_speech\validation'
                              }
                      Labels: [bed; bed; bed ... and 6795 more categorical]
    AlternateFileSystemRoots: {}
              OutputDataType: 'double'
      SupportedOutputFormats: ["wav"    "flac"    "ogg"    "mp4"    "m4a"]
         DefaultOutputFormat: "wav"


ans =

  11×2 table

     Label     Count
    _______    _____

    down        264 
    go          260 
    left        247 
    no          270 
    off         256 
    on          257 
    right       256 
    stop        246 
    unknown     850 
    up          260 
    yes         261 

To train the network with the entire data set and achieve the highest possible accuracy, set reduceDataset to false. To run this example quickly, set reduceDataset to true.

reduceDataset = false;
if reduceDataset
    numUniqueLabels = numel(unique(adsTrain.Labels));
    % Reduce the dataset by a factor of 20
    adsTrain = splitEachLabel(adsTrain,round(numel(adsTrain.Files) / numUniqueLabels / 20));
    adsValidation = splitEachLabel(adsValidation,round(numel(adsValidation.Files) / numUniqueLabels / 20));
end

Compute Auditory Spectrograms

To prepare the data for efficient training of a convolutional neural network, convert the speech waveforms to auditory-based spectrograms.

Define the parameters of the feature extraction. segmentDuration is the duration of each speech clip (in seconds). frameDuration is the duration of each frame for spectrum calculation. hopDuration is the time step between each spectrum. numBands is the number of filters in the auditory spectrogram.

Create an audioFeatureExtractor (Audio Toolbox) object to perform the feature extraction.

fs = 16e3; % Known sample rate of the data set.

segmentDuration = 1;
frameDuration = 0.025;
hopDuration = 0.010;

segmentSamples = round(segmentDuration*fs);
frameSamples = round(frameDuration*fs);
hopSamples = round(hopDuration*fs);
overlapSamples = frameSamples - hopSamples;

FFTLength = 512;
numBands = 50;

afe = audioFeatureExtractor( ...
    'SampleRate',fs, ...
    'FFTLength',FFTLength, ...
    'Window',hann(frameSamples,'periodic'), ...
    'OverlapLength',overlapSamples, ...
    'barkSpectrum',true);
setExtractorParams(afe,'barkSpectrum','NumBands',numBands,'WindowNormalization',false);

Read a file from the data set. Training a convolutional neural network requires input of a consistent size. Some files in the data set are less than 1 second long. Apply zero-padding to the front and back of the audio signal so that it is of length segmentSamples.

x = read(adsTrain);

numSamples = size(x,1);

numToPadFront = floor( (segmentSamples - numSamples)/2 );
numToPadBack = ceil( (segmentSamples - numSamples)/2 );

xPadded = [zeros(numToPadFront,1,'like',x);x;zeros(numToPadBack,1,'like',x)];

To extract the audio features, call extract. The output is a Bark spectrum with time across rows.

features = extract(afe,xPadded);
[numHops,numFeatures] = size(features)
numHops =

    98


numFeatures =

    50

In this example, you post-process the auditory spectrogram by applying a logarithm. Taking the log of small numbers can lead to roundoff error.

To speed up processing, you can distribute the feature extraction across multiple workers using parfor.

First, determine the number of partitions for the data set. If you do not have Parallel Computing Toolbox™, use a single partition.

if ~isempty(ver('parallel')) && ~reduceDataset
    pool = gcp;
    numPar = numpartitions(adsTrain,pool);
else
    numPar = 1;
end

For each partition, read from the datastore, zero-pad the signal, and then extract the features.

parfor ii = 1:numPar
    subds = partition(adsTrain,numPar,ii);
    XTrain = zeros(numHops,numBands,1,numel(subds.Files));
    for idx = 1:numel(subds.Files)
        x = read(subds);
        xPadded = [zeros(floor((segmentSamples-size(x,1))/2),1);x;zeros(ceil((segmentSamples-size(x,1))/2),1)];
        XTrain(:,:,:,idx) = extract(afe,xPadded);
    end
    XTrainC{ii} = XTrain;
end

Convert the output to a 4-dimensional array with the auditory spectrograms along the fourth dimension.

XTrain = cat(4,XTrainC{:});

[numHops,numBands,numChannels,numSpec] = size(XTrain)
numHops =

    98


numBands =

    50


numChannels =

     1


numSpec =

       25021

To obtain data with a smoother distribution, take the log of the spectrograms with a small offset.

epsil = 1e-6;
XTrain = log10(XTrain + epsil);

Perform the feature extraction steps described above on the validation set.

if ~isempty(ver('parallel'))
    pool = gcp;
    numPar = numpartitions(adsValidation,pool);
else
    numPar = 1;
end
parfor ii = 1:numPar
    subds = partition(adsValidation,numPar,ii);
    XValidation = zeros(numHops,numBands,1,numel(subds.Files));
    for idx = 1:numel(subds.Files)
        x = read(subds);
        xPadded = [zeros(floor((segmentSamples-size(x,1))/2),1);x;zeros(ceil((segmentSamples-size(x,1))/2),1)];
        XValidation(:,:,:,idx) = extract(afe,xPadded);
    end
    XValidationC{ii} = XValidation;
end
XValidation = cat(4,XValidationC{:});
XValidation = log10(XValidation + epsil);

Isolate the training and validation labels. Remove empty categories.

YTrain = removecats(adsTrain.Labels);
YValidation = removecats(adsValidation.Labels);

Visualize Data

Plot the waveforms and auditory spectrograms of a few training samples. Play the corresponding audio clips.

specMin = min(XTrain,[],'all');
specMax = max(XTrain,[],'all');
idx = randperm(numel(adsTrain.Files),3);
figure('Units','normalized','Position',[0.2 0.2 0.6 0.6]);
for i = 1:3
    [x,fs] = audioread(adsTrain.Files{idx(i)});
    subplot(2,3,i)
    plot(x)
    axis tight
    title(string(adsTrain.Labels(idx(i))))

    subplot(2,3,i+3)
    spect = (XTrain(:,:,1,idx(i))');
    pcolor(spect)
    caxis([specMin specMax])
    shading flat

    sound(x,fs)
    pause(2)
end

Add Background Noise Data

The network must be able not only to recognize different spoken words but also to detect whether the input contains silence or background noise.

Use the audio files in the background folder to create samples of one-second clips of background noise. Create an equal number of background clips from each background noise file. You can also create your own recordings of background noise and add them to the background folder. Before computing the spectrograms, the code rescales each audio clip with a factor sampled from a log-uniform distribution over the range given by volumeRange.

adsBkg = audioDatastore(fullfile(dataFolder, 'background'))
numBkgClips = 4000;
if reduceDataset
    numBkgClips = numBkgClips/20;
end
volumeRange = log10([1e-4,1]);

numBkgFiles = numel(adsBkg.Files);
numClipsPerFile = histcounts(1:numBkgClips,linspace(1,numBkgClips,numBkgFiles+1));
Xbkg = zeros(size(XTrain,1),size(XTrain,2),1,numBkgClips,'single');
bkgAll = readall(adsBkg);
ind = 1;

for count = 1:numBkgFiles
    bkg = bkgAll{count};
    idxStart = randi(numel(bkg)-fs,numClipsPerFile(count),1);
    idxEnd = idxStart+fs-1;
    gain = 10.^((volumeRange(2)-volumeRange(1))*rand(numClipsPerFile(count),1) + volumeRange(1));
    for j = 1:numClipsPerFile(count)

        x = bkg(idxStart(j):idxEnd(j))*gain(j);

        x = max(min(x,1),-1);

        Xbkg(:,:,:,ind) = extract(afe,x);

        if mod(ind,1000)==0
            disp("Processed " + string(ind) + " background clips out of " + string(numBkgClips))
        end
        ind = ind + 1;
    end
end
Xbkg = log10(Xbkg + epsil);
adsBkg = 

  audioDatastore with properties:

                       Files: {
                              ' ...\AppData\Local\Temp\google_speech\background\doing_the_dishes.wav';
                              ' ...\AppData\Local\Temp\google_speech\background\dude_miaowing.wav';
                              ' ...\AppData\Local\Temp\google_speech\background\exercise_bike.wav'
                               ... and 3 more
                              }
                     Folders: {
                              'C:\Users\jibrahim\AppData\Local\Temp\google_speech\background'
                              }
    AlternateFileSystemRoots: {}
              OutputDataType: 'double'
                      Labels: {}
      SupportedOutputFormats: ["wav"    "flac"    "ogg"    "mp4"    "m4a"]
         DefaultOutputFormat: "wav"

Processed 1000 background clips out of 4000
Processed 2000 background clips out of 4000
Processed 3000 background clips out of 4000
Processed 4000 background clips out of 4000

Split the spectrograms of background noise among the training, validation, and test sets. Because the _background_noise_ folder contains only about five and a half minutes of background noise, the background samples in the different data sets are highly correlated. To increase the variation in the background noise, you can create your own background files and add them to the folder. To increase the robustness of the network to noise, you can also try mixing the background noise into the speech files; a rough sketch of this follows the code below.

numTrainBkg = floor(0.85*numBkgClips);
numValidationBkg = floor(0.15*numBkgClips);

XTrain(:,:,:,end+1:end+numTrainBkg) = Xbkg(:,:,:,1:numTrainBkg);
YTrain(end+1:end+numTrainBkg) = "background";

XValidation(:,:,:,end+1:end+numValidationBkg) = Xbkg(:,:,:,numTrainBkg+1:end);
YValidation(end+1:end+numValidationBkg) = "background";
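
As a rough illustration of the noise-mixing idea mentioned above, the following sketch adds a segment of background noise to a single speech clip at a chosen signal-to-noise ratio. The SNR value and the variable names are assumptions for illustration and are not part of the original example.

% Mix background noise into one speech clip at a target SNR (in dB).
snr = 10;                                    % assumed target SNR
[speech,fsClip] = audioread(adsTrain.Files{1});
noise = bkgAll{1}(1:numel(speech));          % noise segment of matching length
noiseGain = sqrt(sum(speech.^2)/(sum(noise.^2)*10^(snr/10)));
noisySpeech = speech + noiseGain*noise;
noisySpeech = max(min(noisySpeech,1),-1);    % clip to the valid range, as above
% sound(noisySpeech,fsClip)                  % listen to the noisy clip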

Plot the distribution of the different class labels in the training and validation sets.

figure('Units','normalized','Position',[0.2 0.2 0.5 0.5])

subplot(2,1,1)
histogram(YTrain)
title("Training Label Distribution")

subplot(2,1,2)
histogram(YValidation)
title("Validation Label Distribution")

Define Neural Network Architecture

Create a simple network architecture as an array of layers. Use convolutional and batch normalization layers, and downsample the feature maps "spatially" (that is, in time and frequency) using max pooling layers. Add a final max pooling layer that pools the input feature map globally over time. This enforces (approximate) time-translation invariance in the input spectrograms, allowing the network to perform the same classification independent of the exact position of the speech in time. Global pooling also significantly reduces the number of parameters in the final fully connected layer. To reduce the possibility of the network memorizing specific features of the training data, add a small amount of dropout to the input of the last fully connected layer.

The network is small, as it has only five convolutional layers and few filters. numF controls the number of filters in the convolutional layers. To increase the accuracy of the network, try increasing the network depth by adding identical blocks of convolutional, batch normalization, and ReLU layers. You can also try increasing the number of convolutional filters by increasing numF.

Use a weighted cross-entropy classification loss. weightedClassificationLayer(classWeights) creates a custom classification layer that calculates the cross-entropy loss with observations weighted by classWeights. Specify the class weights in the same order as the classes appear in categories(YTrain). To give each class equal total weight in the loss, use class weights that are inversely proportional to the number of training examples in each class. When using the Adam optimizer to train the network, the training algorithm is independent of the overall normalization of the class weights. A sketch of such a custom layer appears after the layer array below.

classWeights = 1./countcats(YTrain);
classWeights = classWeights'/mean(classWeights);
numClasses = numel(categories(YTrain));

timePoolSize = ceil(numHops/8);

dropoutProb = 0.2;
numF = 12;
layers = [
    imageInputLayer([numHops numBands])

    convolution2dLayer(3,numF,'Padding','same')
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    convolution2dLayer(3,2*numF,'Padding','same')
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    convolution2dLayer(3,4*numF,'Padding','same')
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer(3,'Stride',2,'Padding','same')

    convolution2dLayer(3,4*numF,'Padding','same')
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,4*numF,'Padding','same')
    batchNormalizationLayer
    reluLayer

    maxPooling2dLayer([timePoolSize,1])

    dropoutLayer(dropoutProb)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    weightedClassificationLayer(classWeights)];
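
The weightedClassificationLayer helper used above is provided with the example and is not included in this listing. A minimal sketch of such a custom layer, following the standard pattern for user-defined classification layers, could look like the code below; the shipped helper may differ in its details.

classdef weightedClassificationLayer < nnet.layer.ClassificationLayer
    % Custom classification layer with weighted cross-entropy loss (sketch).
    properties
        ClassWeights % Row vector of class weights, ordered as in categories(YTrain)
    end
    methods
        function layer = weightedClassificationLayer(classWeights,name)
            % Store the class weights and, optionally, set the layer name.
            layer.ClassWeights = classWeights;
            if nargin == 2
                layer.Name = name;
            end
        end
        function loss = forwardLoss(layer,Y,T)
            % Y and T are 1-by-1-by-K-by-N arrays of predictions and targets.
            N = size(Y,4);
            Y = squeeze(Y);
            T = squeeze(T);
            W = layer.ClassWeights;
            % Weighted cross entropy, averaged over the observations in the mini-batch.
            loss = -sum(W*(T.*log(Y)))/N;
        end
    end
end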

Train Network

Specify the training options. Use the Adam optimizer with a mini-batch size of 128. Train for 25 epochs and reduce the learning rate by a factor of 10 after 20 epochs.

miniBatchSize = 128;
validationFrequency = floor(numel(YTrain)/miniBatchSize);
options = trainingOptions('adam', ...
    'InitialLearnRate',3e-4, ...
    'MaxEpochs',25, ...
    'MiniBatchSize',miniBatchSize, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'ValidationData',{XValidation,YValidation}, ...
    'ValidationFrequency',validationFrequency, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.1, ...
    'LearnRateDropPeriod',20);

Train the network. If you do not have a GPU, then training the network can take a long time.

trainedNet = trainNetwork(XTrain,YTrain,layers,options);

Evaluate Trained Network

Calculate the final accuracy of the network on the training set (without data augmentation) and on the validation set. The network is very accurate on this data set. However, the training, validation, and test data all have similar distributions that do not necessarily reflect real-world environments. This limitation particularly applies to the unknown category, which contains utterances of only a small number of words.

if reduceDataset
    load('commandNet.mat','trainedNet');
end
YValPred = classify(trainedNet,XValidation);
validationError = mean(YValPred ~= YValidation);
YTrainPred = classify(trainedNet,XTrain);
trainError = mean(YTrainPred ~= YTrain);
disp("Training error: " + trainError*100 + "%")
disp("Validation error: " + validationError*100 + "%")
Training error: 1.907%
Validation error: 5.5376%

Plot the confusion matrix. Display the precision and recall for each class by using column and row summaries. Sort the classes of the confusion matrix. The largest confusion is between unknown words and commands, and between the command pairs up and off, down and no, and go and no.

figure('Units','normalized','Position',[0.2 0.2 0.5 0.5]);
cm = confusionchart(YValidation,YValPred);
cm.Title = 'Confusion Matrix for Validation Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';
sortClasses(cm, [commands,"unknown","background"])

When working on applications with constrained hardware resources, such as mobile applications, consider the limitations on available memory and computational resources. Compute the total size of the network in kilobytes and test its prediction speed when using a CPU. The prediction time is the time for classifying a single input image. If you input multiple images to the network, these can be classified simultaneously, leading to shorter prediction times per image. When classifying streaming audio, however, the single-image prediction time is the most relevant.

info = whos('trainedNet');
disp("Network size: " + info.bytes/1024 + " kB")

for i = 1:100
    x = randn([numHops,numBands]);
    tic
    [YPredicted,probs] = classify(trainedNet,x,"ExecutionEnvironment",'cpu');
    time(i) = toc;
end
disp("Single-image prediction time on CPU: " + mean(time(11:end))*1000 + " ms")
Network size: 286.7402 kB
Single-image prediction time on CPU: 2.5119 ms


References

[1] Warden P. "Speech Commands: A public dataset for single-word speech recognition", 2017. Available from http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz. Copyright Google 2017. The Speech Commands Dataset is licensed under the Creative Commons Attribution 4.0 license, available here: https://creativecommons.org/licenses/by/4.0/legalcode.
