    Azure Cognitive Services - Speech To Text

    The Speech service is one of the Cognitive Services. It provides speech-to-text, text-to-speech, speech translation, and more. In this walkthrough we focus on speech-to-text (Speech To Text).

    STT can be accessed in two ways: 1. the SDK, and 2. the REST API.

    Specifically:
    The SDK supports recognizing both microphone audio streams and audio files;
    The REST API supports audio files only.

    Preparation: create a Speech resource under Cognitive Services:

    [Figure 2: creating the Speech resource in the Azure portal]
    After creation, two important parameters can be viewed on the resource page:
    [Figure 3: key and region of the Speech resource]

    I. Converting an audio file to text via the REST API:

    For the Azure global Speech API endpoints, see:
    https://docs.microsoft.com/zh-cn/azure/cognitive-services/speech-service/rest-speech-to-text#regions-and-endpoints

    The Azure China Speech API endpoint:
    As of February 2020, only the China East 2 region offers the Speech service; its endpoint is:
    https://chinaeast2.stt.speech.azure.cn/speech/recognition/conversation/cognitiveservices/v1

    Speech To Text supports two authentication methods: Ocp-Apim-Subscription-Key and Authorization Token.
    An Authorization Token is valid for 10 minutes.
    [Figure 4: authentication methods]
    For simplicity, this article uses the Ocp-Apim-Subscription-Key method.
    Note: according to the table above, text-to-speech requires authentication with an Authorization Token.

    Other points to note when building the request:

    1. Audio file format:

    [Figure 5: supported audio file formats]

    2. Request headers:

    [Figure 6: request headers]
    Note that the Ocp-Apim-Subscription-Key and Authorization headers are mutually exclusive: provide one or the other.

    3. Query parameters:

    [Figure 7: query parameters]

    An example in Postman:

    [Figures 8-10: Postman request and response for speech-to-text]
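
    If you prefer code to Postman, below is a minimal sketch of the same REST call using the Python requests library. The subscription key, audio file path, and the language/format query parameters are placeholders and assumptions; adjust them to your own resource and audio.

    import requests

    # Assumptions: the China East 2 endpoint shown above, a 16 kHz mono PCM WAV file,
    # and a Speech resource key; replace these with your own values.
    endpoint = "https://chinaeast2.stt.speech.azure.cn/speech/recognition/conversation/cognitiveservices/v1"
    subscription_key = "YourSubscriptionKey"
    audio_file = "speechtotext.wav"

    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # or "Authorization": "Bearer <token>", not both
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        "Accept": "application/json",
    }
    params = {"language": "zh-CN", "format": "detailed"}

    # Stream the audio file as the request body and print the recognition result.
    with open(audio_file, "rb") as f:
        response = requests.post(endpoint, headers=headers, params=params, data=f)

    print(response.status_code)
    print(response.json())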

    If you want to use an Authorization Token with the REST API, you must first obtain a token:
    Token endpoints for Azure global:
    https://docs.microsoft.com/zh-cn/azure/cognitive-services/speech-service/rest-speech-to-text#authentication
    Token endpoint for Azure China:
    As of February 2020, only China East 2 offers the Speech service; its token endpoint is:
    https://chinaeast2.api.cognitive.azure.cn/sts/v1.0/issuetoken

    Obtaining a token in Postman looks like this:
    [Figure 11: obtaining a token in Postman]
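
    The same token request can also be issued from Python. Below is a minimal sketch, assuming the China East 2 issuetoken endpoint above and a placeholder subscription key; the token returned in the response body is then sent as a Bearer token in the Authorization header (remember it expires after 10 minutes).

    import requests

    # Assumption: China East 2 token endpoint from above; replace the key with your own.
    token_endpoint = "https://chinaeast2.api.cognitive.azure.cn/sts/v1.0/issuetoken"
    subscription_key = "YourSubscriptionKey"

    # POST with the subscription key and an empty body; the response body is the token itself.
    resp = requests.post(token_endpoint, headers={"Ocp-Apim-Subscription-Key": subscription_key})
    resp.raise_for_status()
    access_token = resp.text

    # Use the token (valid for about 10 minutes) instead of the key on the recognition request.
    auth_headers = {
        "Authorization": "Bearer " + access_token,
        "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    }
    print("Token obtained, starts with:", access_token[:20], "...")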

    II. Converting an audio file to text via the SDK (Python example):

    You can find similar code in the official documentation, but note that it only works against the Azure global Speech service; for Azure China, specific modifications are required (see below).

    import azure.cognitiveservices.speech as speechsdk

    # Creates an instance of a speech config with specified subscription key and service region.
    # Replace with your own subscription key and service region (e.g., "chinaeast2").
    speech_key, service_region = "YourSubscriptionKey", "YourServiceRegion"
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

    # Creates an audio configuration that points to an audio file.
    # Replace with your own audio filename.
    audio_filename = "whatstheweatherlike.wav"
    audio_input = speechsdk.AudioConfig(filename=audio_filename)

    # Creates a recognizer with the given settings
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)

    print("Recognizing first result...")

    # Starts speech recognition, and returns after a single utterance is recognized. The end of a
    # single utterance is determined by listening for silence at the end or until a maximum of 15
    # seconds of audio is processed. The task returns the recognition text as result.
    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single
    # shot recognition like command or query.
    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.
    result = speech_recognizer.recognize_once()

    # Checks result.
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Recognized: {}".format(result.text))
    elif result.reason == speechsdk.ResultReason.NoMatch:
        print("No speech could be recognized: {}".format(result.no_match_details))
    elif result.reason == speechsdk.ResultReason.Canceled:
        cancellation_details = result.cancellation_details
        print("Speech Recognition canceled: {}".format(cancellation_details.reason))
        if cancellation_details.reason == speechsdk.CancellationReason.Error:
            print("Error details: {}".format(cancellation_details.error_details))

    The page where this code is provided:
    https://docs.azure.cn/zh-cn/cognitive-services/speech-service/quickstarts/speech-to-text-from-file?tabs=linux&pivots=programming-language-python#create-a-python-application-that-uses-the-speech-sdk
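
    Note: the SDK itself is typically installed with pip install azure-cognitiveservices-speech (assuming a supported Python version and platform); see the installation instructions linked in the code above if the import fails.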

    For Azure China, you must configure a custom endpoint for the SDK to work properly:

    speech_key, service_region = "Your Key", "chinaeast2"
    template = "wss://{}.stt.speech.azure.cn/speech/recognition" \
               "/conversation/cognitiveservices/v1?initialSilenceTimeoutMs={:d}&language=zh-CN"
    speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                           endpoint=template.format(service_region, int(initial_silence_timeout_ms)))

    The complete code for Azure China is:

    #!/usr/bin/env python
    # coding: utf-8
    # Copyright (c) Microsoft. All rights reserved.
    # Licensed under the MIT license. See LICENSE.md file in the project root for full license information.
    """
    Speech recognition samples for the Microsoft Cognitive Services Speech SDK
    """
    import time
    import wave

    try:
        import azure.cognitiveservices.speech as speechsdk
    except ImportError:
        print("""
        Importing the Speech SDK for Python failed.
        Refer to
        https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstart-python for
        installation instructions.
        """)
        import sys
        sys.exit(1)

    # Set up the subscription info for the Speech Service:
    # Replace with your own subscription key and service region (e.g., "chinaeast2").
    speech_key, service_region = "your key", "chinaeast2"

    # Specify the path to an audio file containing speech (mono WAV / PCM with a sampling rate of 16 kHz).
    filename = r"D:\FFOutput\speechtotext.wav"


    def speech_recognize_once_from_file_with_custom_endpoint_parameters():
        """performs one-shot speech recognition with input from an audio file, specifying an
        endpoint with custom parameters"""
        initial_silence_timeout_ms = 15 * 1e3
        template = "wss://{}.stt.speech.azure.cn/speech/recognition/conversation/cognitiveservices/v1?initialSilenceTimeoutMs={:d}&language=zh-CN"
        speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                               endpoint=template.format(service_region, int(initial_silence_timeout_ms)))
        print("Using endpoint", speech_config.get_property(speechsdk.PropertyId.SpeechServiceConnection_Endpoint))
        audio_config = speechsdk.audio.AudioConfig(filename=filename)

        # Creates a speech recognizer using a file as audio input.
        # The recognition language is set to zh-CN via the endpoint query string above.
        speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
        result = speech_recognizer.recognize_once()

        # Check the result
        if result.reason == speechsdk.ResultReason.RecognizedSpeech:
            print("Recognized: {}".format(result.text))
        elif result.reason == speechsdk.ResultReason.NoMatch:
            print("No speech could be recognized: {}".format(result.no_match_details))
        elif result.reason == speechsdk.ResultReason.Canceled:
            cancellation_details = result.cancellation_details
            print("Speech Recognition canceled: {}".format(cancellation_details.reason))
            if cancellation_details.reason == speechsdk.CancellationReason.Error:
                print("Error details: {}".format(cancellation_details.error_details))


    speech_recognize_once_from_file_with_custom_endpoint_parameters()

    Note that if we want the SDK to recognize speech from the microphone instead, we change

    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    to the following (simply drop the audio_config parameter):

    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
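
    Putting the pieces together, a minimal sketch of microphone recognition against the China East 2 custom endpoint might look like the following; the key and the 15-second initial-silence timeout are placeholder assumptions.

    import azure.cognitiveservices.speech as speechsdk

    # Assumptions: a China East 2 Speech resource key and the custom wss endpoint described above.
    speech_key, service_region = "YourSubscriptionKey", "chinaeast2"
    endpoint = ("wss://{}.stt.speech.azure.cn/speech/recognition/conversation/cognitiveservices/v1"
                "?initialSilenceTimeoutMs=15000&language=zh-CN").format(service_region)
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint)

    # No audio_config: the SDK uses the default microphone as input.
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

    print("Say something...")
    result = speech_recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Recognized: {}".format(result.text))
    else:
        print("Recognition ended with reason: {}".format(result.reason))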

    WeChat Official Account link: https://mp.weixin.qq.com/s/NA9kQsVDfzTXEqHMTdDExA