File created: 2019/12/22
Last revised: None
Software environment:
Win 10 | Python 3.7.2 | Numpy 1.17.3 | sklearn 0.21.3 |
matplotlib 3.1.1 | pandas 0.25.1 | nltk 3.4.5 | hmmlearn 0.2.2 |
pandas_datareader 0.8.1 | yfinance 0.1.52 | scipy 1.3.1 | python_speech_features 0.6 |
Reference: AI with Python Tutorial
Note: You are welcome to cite any of this content, as long as the source and author are credited. Corrections for any errors or inappropriate wording are appreciated.
Title: Artificial Intelligence (08) Speech Recognition
Speech is the most fundamental means of human communication, and the basic goal of speech processing is to enable interaction between humans and machines.
A speech processing system has three main tasks:
- Speech recognition lets the machine capture the words, phrases, and sentences we speak.
- Natural language processing lets the machine understand what we say.
- Speech synthesis lets the machine speak.
Building a Speech Recognizer
- Difficulties in developing a speech recognition system
  - Vocabulary size - small / medium / large vocabulary; the larger the vocabulary, the harder the recognition.
  - Channel characteristics - human speech has a high bandwidth covering the full frequency range.
  - Speaking mode - isolated words / connected words / continuous speech.
  - Speaking style - formal / spontaneous / casual.
  - Speaker dependency - speaker-independent / speaker-adaptive / speaker-dependent.
  - Type of noise - the signal-to-noise ratio (SNR) depends on the acoustic environment and the background noise (see the SNR sketch after this list):
    - High range: SNR greater than 30 dB
    - Mid range: SNR between 10 dB and 30 dB
    - Low range: SNR less than 10 dB
  - Microphone characteristics - microphone quality / distance between the mouth and the microphone
- Visualizing the audio signal
  - Recording - record the audio with a microphone.
  - Sampling - sample at a given rate and store the signal in discrete numeric form.
- Characterizing the audio signal - use mathematical tools such as the Fourier transform to convert the time-domain signal into the frequency domain.
- Generating a monotone audio signal
- Speech feature extraction - various feature extraction techniques such as MFCC, PLP, PLP-RASTA, and so on.
- Recognition of spoken words
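As a quick illustration of the SNR ranges listed above, here is a minimal sketch (not part of the referenced tutorial) that estimates the SNR in dB, assuming the clean signal and the noise are available as separate NumPy arrays:
# -----------------------------------------------------------------------------
# Sketch: estimating the signal-to-noise ratio in dB (illustrative only)
# -----------------------------------------------------------------------------
import numpy as np

def snr_db(signal, noise):
    # SNR (dB) = 10 * log10(signal power / noise power)
    p_signal = np.mean(np.square(signal.astype(np.float64)))
    p_noise = np.mean(np.square(noise.astype(np.float64)))
    return 10 * np.log10(p_signal / p_noise)

t = np.linspace(0, 1, 44100)                    # 1 second at 44.1 kHz (hypothetical data)
clean = np.sin(2 * np.pi * 784 * t)             # a 784 Hz tone
noise = 0.01 * np.random.randn(len(t))          # additive white noise
print('SNR: %.1f dB' % snr_db(clean, noise))    # about 37 dB, i.e. the high range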
Example 1. Reading and displaying an audio file
# -----------------------------------------------------------------------------
# Sampling Voice
# -----------------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
music_file = "C:/Windows/media/Windows Logon.wav"
sampling, audio = wavfile.read(music_file)                          # read the audio file
print('Signal shape :', audio.shape)
print('Signal Datatype:', audio.dtype)
print('Signal duration:', round(audio.shape[0] / float(sampling), 2), 'seconds')
audio = audio / np.power(2, 15)                                     # scale to the range -1 ~ +1 (16-bit PCM)
time_axis = 1000 * np.arange(0, len(audio), 1) / float(sampling)    # x-axis in milliseconds
plt.plot(time_axis, audio, color='blue')                            # plot the waveform
plt.xlabel('Time (milliseconds)')
plt.ylabel('Amplitude')
plt.title('Input audio signal')
plt.show()
Example 2. Transforming from the time domain to the frequency domain
# -----------------------------------------------------------------------------
# Transforming voice data to Frequency Domain
# -----------------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
sampling, audio = wavfile.read("C:/Windows/media/Windows Logon.wav")   # load the data
if audio.ndim > 1:                                      # keep only the first channel if the file is stereo
    audio = audio[:, 0]
# Fourier transform - from the time domain to the frequency domain
spectrum = np.abs(np.fft.fft(audio)) / len(audio)       # normalized magnitude spectrum
freqs = np.fft.fftfreq(len(audio), d=1.0 / sampling)    # frequency of each FFT bin (Hz)
half = len(audio) // 2                                  # keep only the positive frequencies
spectrum_half = 10 * np.log10(spectrum[:half])          # signal power in dB
plt.figure()                                            # plot
plt.plot(freqs[:half] / 1000.0, spectrum_half, color='b')
plt.xlabel('Frequency (kHz)')
plt.ylabel('Signal power (dB)')
plt.tight_layout()
plt.show()
Example 3. Generating a monotone audio signal
# -----------------------------------------------------------------------------
# Generating Monotone Audio Signal
# -----------------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
duration = 4                                            # duration: 4 seconds
sampling = 44100                                        # sampling rate: 44.1 kHz (44100 points per second)
tone = 784                                              # tone frequency: 784 Hz
t = np.linspace(0, duration, duration * sampling)       # time axis in seconds
audio = np.sin(2 * np.pi * tone * t)                    # single-frequency waveform
audio = audio[:100]                                     # plot only the first 100 points
time = 1000 * np.arange(0, len(audio), 1) / float(sampling)   # x-axis in milliseconds
plt.plot(time, audio, color='blue')                     # plot
plt.xlabel('Time in milliseconds')
plt.ylabel('Amplitude')
plt.title('Generated audio signal')
plt.show()
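The generated tone can also be written out as a playable file. Below is a minimal sketch using scipy.io.wavfile.write; the output path output_tone.wav is only an example:
# -----------------------------------------------------------------------------
# Sketch: saving the generated tone as a WAV file (illustrative only)
# -----------------------------------------------------------------------------
import numpy as np
from scipy.io import wavfile

duration = 4                                        # 4 seconds
sampling = 44100                                    # 44.1 kHz sampling rate
tone = 784                                          # 784 Hz
t = np.linspace(0, duration, duration * sampling)
audio = np.sin(2 * np.pi * tone * t)
scaled = np.int16(audio * np.power(2, 15) * 0.9)    # scale back to 16-bit integers
wavfile.write("output_tone.wav", sampling, scaled)  # hypothetical output path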
Example 4. Extracting speech features
# -----------------------------------------------------------------------------
# Feature Extraction from Speech
# -----------------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from python_speech_features import mfcc, logfbank
file = "C:/Windows/media/Windows Logon.wav"
sampling, audio = wavfile.read(file)                 # load the data
if audio.ndim > 1:                                   # keep only the first channel if the file is stereo
    audio = audio[:, 0]
audio = audio[:15000]                                # use only the first 15000 samples
features_mfcc = mfcc(audio, sampling, nfft=1024)     # compute the MFCC features
print('MFCC:\nNumber of windows =', features_mfcc.shape[0])
print('Length of each feature =', features_mfcc.shape[1])
features_mfcc = features_mfcc.T                      # plot the MFCC features
plt.subplot(211)
plt.imshow(features_mfcc)
plt.title('MFCC')
filterbank = logfbank(audio, sampling, nfft=1024)    # compute the filter bank features
print('Filter bank:\nNumber of windows =', filterbank.shape[0])
print('Length of each feature =', filterbank.shape[1])
filterbank = filterbank.T                            # plot the filter bank features
plt.subplot(212)
plt.imshow(filterbank)
plt.title('Filter bank')
plt.tight_layout()
plt.show()
Example 5. Speech recognition
# -----------------------------------------------------------------------------
# Google Speech API - Recognition of Spoken Words
# -----------------------------------------------------------------------------
import speech_recognition as sr
recording = sr.Recognizer()
# Sometimes this may not work:
# Open Settings from the Start menu.
# Click on Privacy to access all your privacy settings.
# Select Microphone from the left pane and then click the Change button.
# Now, turn on the microphone for this device.
# Turn on "Allow apps to access your microphone".
#
# If it still does not work, check the device index of the microphone on your system
# (for example #0, #1, #4, #5, #9 for device_index).
with sr.Microphone(device_index=0) as source:
    recording.adjust_for_ambient_noise(source)
    print("Please Say something:")
    audio = recording.listen(source)
try:
    print("You said: \n" + recording.recognize_google(audio))
except Exception as e:
    print(e)
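If you are unsure which device_index to use, the speech_recognition library provides Microphone.list_microphone_names(); a short sketch along these lines helps find the right index:
# -----------------------------------------------------------------------------
# Sketch: listing available microphones to find device_index (illustrative only)
# -----------------------------------------------------------------------------
import speech_recognition as sr

for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print('device_index = %d : %s' % (index, name))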
This work is licensed under a CC license; reposts must credit the author and link back to this article.