# OpenClaw Voice Control: Mobile Microphone Integration
## 17.1 Architecture

### 17.1.1 Overall Architecture

Connecting a mobile microphone to OpenClaw involves four layers:

```
┌──────────────────────────────────────────────────────┐
│                   Client (mobile)                    │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │
│ │ Mic      │→│ Audio    │→│ Network  │→│ Security  │ │
│ │ capture  │ │ encoding │ │ transport│ │ auth      │ │
│ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │
└────────────────────────┬─────────────────────────────┘
                         │ WebSocket / gRPC stream
                         ▼
┌──────────────────────────────────────────────────────┐
│                        Server                        │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │
│ │ Conn.    │→│ Audio    │→│ Speech   │→│ OpenClaw  │ │
│ │ mgmt     │ │ decoding │ │ recog.   │ │ command   │ │
│ │          │ │          │ │ (ASR)    │ │ parsing   │ │
│ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │
└──────────────────────────────────────────────────────┘
```

Data flow:

- **Capture layer**: the client reads raw PCM audio from the microphone through the system API
- **Encoding layer**: the PCM is compressed (OPUS recommended) to reduce transmission bandwidth
- **Transport layer**: the encoded audio is streamed to the server over WebSocket or gRPC
- **Server side**: the server decodes the audio, runs speech recognition (ASR), and feeds the transcript into the OpenClaw command engine

### 17.1.2 Technology Selection Matrix

| Component | Recommended | Alternative | Rationale |
| --- | --- | --- | --- |
| Audio codec | OPUS | AAC-LC | OPUS latency can be as low as 5 ms, well suited to real-time use |
| Transport protocol | WebSocket | gRPC stream | WebSocket has broad browser support and is simple to implement |
| Speech recognition | Alibaba Cloud ASR / iFlytek ASR | Self-hosted Whisper | stable managed services in China with high Mandarin accuracy |
| Authentication | JWT token + TLS | mTLS | JWT is simple to implement and fits mobile clients |

### 17.1.3 Audio Parameter Baseline

| Parameter | Recommended value | Notes |
| --- | --- | --- |
| Sample rate | 16000 Hz | standard sample rate for speech recognition |
| Bit depth | 16-bit | sufficient dynamic range for voice |
| Channels | mono | speech does not need stereo |
| Codec | OPUS | low latency, high compression ratio |
| Frame size | 20 ms | standard OPUS frame duration |
| Target bitrate | 16–24 kbps | balances voice quality against bandwidth |

## 17.2 Mobile Implementation

### 17.2.1 Recording on Android

(1) Permission configuration

Declare the microphone permission in `AndroidManifest.xml`:

```xml
<!-- AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<!-- RECORD_AUDIO is a dangerous permission: it must also be
     requested at runtime on Android 6.0+ -->
```

Request the runtime permission from an Activity/Fragment (Kotlin):

```kotlin
// PermissionHelper.kt
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

object PermissionHelper {
    private const val PERMISSION_REQUEST_CODE = 1001

    fun requestMicrophonePermission(activity: AppCompatActivity): Boolean {
        return if (ContextCompat.checkSelfPermission(
                activity, Manifest.permission.RECORD_AUDIO
            ) == PackageManager.PERMISSION_GRANTED
        ) {
            true // already granted
        } else {
            ActivityCompat.requestPermissions(
                activity,
                arrayOf(Manifest.permission.RECORD_AUDIO),
                PERMISSION_REQUEST_CODE
            )
            false // waiting for the user's decision
        }
    }
}
```

(2) Real-time capture with AudioRecord

`AudioRecord` is Android's low-level audio capture API; unlike `MediaRecorder`, which writes encoded files, it hands you raw PCM buffers, making it the better fit for real-time streaming:

```kotlin
// AudioRecorder.kt
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import java.net.URI
import org.java_websocket.client.WebSocketClient
import org.java_websocket.handshake.ServerHandshake

class AudioRecorder(
    private val sampleRate: Int = 16000,
    private val channelConfig: Int = AudioFormat.CHANNEL_IN_MONO,
    private val audioEncoding: Int = AudioFormat.ENCODING_PCM_16BIT
) {
    private var audioRecord: AudioRecord? = null
    @Volatile private var isRecording = false
    private var webSocketClient: WebSocketClient? = null

    // Compute the minimum capture buffer size
    private val bufferSize: Int = AudioRecord.getMinBufferSize(
        sampleRate, channelConfig, audioEncoding
    ).also {
        if (it <= 0) throw IllegalStateException(
            "AudioRecord buffer size query failed: $it. " +
                "Check that the sample rate, channel config and encoding are valid."
        )
    }

    fun startRecording(serverUrl: String) {
        // Open the WebSocket connection first
        webSocketClient = object : WebSocketClient(URI(serverUrl)) {
            override fun onOpen(handshakedata: ServerHandshake?) {
                isRecording = true
                startCapture()
            }

            override fun onMessage(message: String?) {
                // Handle recognition results pushed back by the server
                message?.let { handleRecognitionResult(it) }
            }

            override fun onClose(code: Int, reason: String?, remote: Boolean) {
                isRecording = false
                stopRecording()
            }

            override fun onError(ex: Exception?) {
                isRecording = false
                stopRecording()
            }
        }
        webSocketClient?.connect()
    }

    private fun startCapture() {
        audioRecord = AudioRecord(
            MediaRecorder.AudioSource.MIC,
            sampleRate, channelConfig, audioEncoding, bufferSize
        )
        if (audioRecord?.state != AudioRecord.STATE_INITIALIZED) {
            throw IllegalStateException("AudioRecord initialization failed")
        }
        audioRecord?.startRecording()
        isRecording = true

        // Capture loop on a dedicated thread
        Thread {
            val buffer = ByteArray(bufferSize)
            while (isRecording) {
                val read = audioRecord?.read(buffer, 0, buffer.size) ?: 0
                if (read > 0 && webSocketClient?.isOpen == true) {
                    // Send only the bytes actually read; in a real project,
                    // OPUS-encode the PCM before sending
                    webSocketClient?.send(buffer.copyOf(read))
                }
            }
        }.start()
    }

    fun stopRecording() {
        isRecording = false
        audioRecord?.stop()
        audioRecord?.release()
        audioRecord = null
        webSocketClient?.close()
    }

    private fun handleRecognitionResult(result: String) {
        // Forward the result to the UI layer; in a real project, prefer a
        // listener callback or LiveData/Flow into a ViewModel
        println("Recognition result: $result")
    }
}
```

(3) OPUS encoding

Concentus is a pure-Java OPUS implementation, so it runs on Android without the NDK:

```kotlin
// OpusAudioEncoder.kt — example built on the Concentus library.
// Renamed from OpusEncoder to avoid clashing with org.concentus.OpusEncoder.
import org.concentus.OpusApplication
import org.concentus.OpusEncoder

class OpusAudioEncoder(
    sampleRate: Int = 16000,
    channels: Int = 1,
    application: OpusApplication = OpusApplication.OPUS_APPLICATION_VOIP
) {
    // 20 ms frame: 320 samples at 16 kHz
    private val frameSize = sampleRate / 50

    private val encoder = OpusEncoder(sampleRate, channels, application).apply {
        bitrate = 20000 // 20 kbps
    }

    fun encode(pcmData: ByteArray): ByteArray {
        // PCM 16-bit little-endian bytes -> ShortArray
        val shorts = ShortArray(pcmData.size / 2)
        for (i in shorts.indices) {
            shorts[i] = ((pcmData[2 * i].toInt() and 0xFF) or
                    (pcmData[2 * i + 1].toInt() shl 8)).toShort()
        }
        val encoded = ByteArray(1280) // upper bound for one OPUS frame
        val encodedLength = encoder.encode(shorts, 0, frameSize, encoded, 0, encoded.size)
        return encoded.copyOf(encodedLength)
    }
}
```
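The 17.1.3 baseline implies concrete per-frame numbers that are worth sanity-checking before sizing buffers. A quick arithmetic sketch (plain Java; the class and helper names are illustrative, not from any library):

```java
public class AudioFrameMath {
    // Samples in one frame: sampleRate * frameMs / 1000
    static int samplesPerFrame(int sampleRate, int frameMs) {
        return sampleRate * frameMs / 1000;
    }

    // Raw PCM bytes in one frame (16-bit mono => 2 bytes per sample)
    static int pcmBytesPerFrame(int sampleRate, int frameMs, int bytesPerSample) {
        return samplesPerFrame(sampleRate, frameMs) * bytesPerSample;
    }

    // Average encoded bytes per frame at a given bitrate
    static int opusBytesPerFrame(int bitrateBps, int frameMs) {
        return bitrateBps / 8 * frameMs / 1000;
    }

    public static void main(String[] args) {
        // Values from the 17.1.3 baseline: 16 kHz, 20 ms frames, 16-bit mono, 20 kbps
        System.out.println(samplesPerFrame(16000, 20));     // 320 samples
        System.out.println(pcmBytesPerFrame(16000, 20, 2)); // 640 bytes of raw PCM
        System.out.println(opusBytesPerFrame(20000, 20));   // 50 bytes after OPUS
        // Raw PCM is 16000 * 16 = 256 kbps, so OPUS at 20 kbps is ~12.8x smaller
    }
}
```

These numbers explain the `frameSize` and output-buffer choices in the encoder above: every 20 ms frame is 640 PCM bytes in and roughly 50 bytes out.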
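One practical gap between the capture loop in (2) and the encoder in (3): `AudioRecord.read` returns whatever the driver has buffered, which rarely aligns with OPUS's 20 ms frame boundary (640 bytes at 16 kHz, 16-bit mono), so the PCM stream must be re-chunked before encoding. A minimal sketch of such a re-chunker (`FrameChunker` is a hypothetical helper, not part of any library):

```java
import java.util.ArrayList;
import java.util.List;

public class FrameChunker {
    private final int frameBytes;
    private byte[] pending = new byte[0]; // leftover bytes from the last push

    FrameChunker(int frameBytes) {
        this.frameBytes = frameBytes;
    }

    // Feed an arbitrary-sized capture buffer; get back only complete frames.
    List<byte[]> push(byte[] data, int length) {
        byte[] merged = new byte[pending.length + length];
        System.arraycopy(pending, 0, merged, 0, pending.length);
        System.arraycopy(data, 0, merged, pending.length, length);

        List<byte[]> frames = new ArrayList<>();
        int offset = 0;
        while (merged.length - offset >= frameBytes) {
            byte[] frame = new byte[frameBytes];
            System.arraycopy(merged, offset, frame, 0, frameBytes);
            frames.add(frame);
            offset += frameBytes;
        }
        // Keep the incomplete tail for the next push
        pending = new byte[merged.length - offset];
        System.arraycopy(merged, offset, pending, 0, pending.length);
        return frames;
    }

    public static void main(String[] args) {
        FrameChunker chunker = new FrameChunker(640); // 20 ms @ 16 kHz, 16-bit mono
        int total = 0;
        // Simulate three AudioRecord reads of awkward sizes: 1000 + 500 + 420 = 1920 bytes
        for (int size : new int[] {1000, 500, 420}) {
            total += chunker.push(new byte[size], size).size();
        }
        System.out.println(total); // 1920 / 640 = 3 full frames
    }
}
```

In the capture thread, each `read` would go through `push` and only the returned full frames would be handed to the OPUS encoder.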
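Section 17.1.2 recommends JWT over TLS for authentication. With the Java-WebSocket library used by the recorder, extra HTTP headers can be passed to the `WebSocketClient(URI, Map<String, String>)` constructor, so the token can ride along with the handshake. A sketch, assuming a placeholder token and endpoint (neither is real):

```java
import java.util.HashMap;
import java.util.Map;

public class AuthHeader {
    // Build the extra HTTP headers for the WebSocket upgrade request.
    static Map<String, String> authHeaders(String jwt) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Authorization", "Bearer " + jwt);
        return headers;
    }

    public static void main(String[] args) {
        // Placeholder token, not a real JWT
        Map<String, String> h = authHeaders("eyJhbGciOiJIUzI1NiJ9.demo.sig");
        System.out.println(h.get("Authorization").startsWith("Bearer "));
        // On the client:
        //   new WebSocketClient(URI.create("wss://example.com/audio"), h) { ... }
        // The server validates the token before accepting the upgrade, and TLS (wss://)
        // protects both the token and the audio stream in transit.
    }
}
```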