Audio APIs on Mac OS X and iOS

An introduction to AVFoundation and Core Audio

楊維中 a.k.a. zonble
zonble@gmail.com

Friday, August 16, 13

Audio-related APIs

• AVFoundation

• Audio Services

• OpenAL

• Audio Queue

• Audio Unit

• Audio Session

• MPNowPlayingInfoCenter

• Background Task

• Remote Control Events

• …


Task: play an audio file hosted on a website, and play however much data we have as soon as it arrives


These are not what we want:

• Audio Services: for short sound effects, such as the click of the virtual keyboard

• OpenAL: for 3D sound effects in games; you can set which direction a sound comes from


AVAudioPlayer

• High-level audio player

• Available since iOS 2.2

• Supports many formats

• A player can be created from a file URL or from NSData, but it can only play local files

• Suitable for game background music


AVPlayer

• High-level audio player

• Available since iOS 4.0

• Supports both local and remote files

• Suitable for Internet radio

• Cannot tell the total length of the file


What our service needs…

• Play however much data has been read, right away

• Cache while downloading, for offline playback later

• Know the loading progress

• The files may be encrypted, so decryption is required…

• Yes, some companies really do that


We need to understand the low-level API: Core Audio


Since it is hard, let's start with the easy parts…


Audio Session

• Declares what kind of audio the current app is playing

• Set the Audio Session category

• Ambient, Media Playback…etc.

• Then call Audio Session start

• Both C and Objective-C APIs are available


Audio Session

• You also have to handle interruption and resume

• Implement the delegate and update the UI

• Interruptions happen when:

• Another app plays audio

• A phone call comes in

• An alarm fires…


Back to Core Audio


Honestly, once you have written this kind of code, you never want to write it a second time


Forget OO. Forget Objective-C.

An audio stream is just a lot of continuous binary data that has to be processed quickly.


Basic concepts

• Audio data is continuous binary data

• It consists of consecutive samples/frames; one second holds 44,100 samples (at a 44.1 kHz sample rate)

• Common compressed formats such as MP3/AAC pack a certain number of frames into one packet

• In a VBR (variable bit rate) file, each packet holds a different amount of data


Some terminology

• Sample rate: how many samples per second

• Packet size: how many samples per packet

• Bit rate: how many bits per second
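These quantities are related. A rough Python sketch (the constants describe a typical 44.1 kHz, 128 kbps CBR MP3 stream; the helper names are mine):

```python
SAMPLE_RATE = 44100       # samples per second
FRAMES_PER_PACKET = 1152  # samples in one MP3 packet
BIT_RATE = 128000         # bits per second (CBR example)

def packet_duration_seconds():
    """Seconds of audio in one packet: packet size / sample rate."""
    return FRAMES_PER_PACKET / SAMPLE_RATE

def average_packet_bytes():
    """Average bytes per packet in a CBR stream: bit rate * duration / 8."""
    return BIT_RATE * packet_duration_seconds() / 8
```

One packet is about 26 ms of audio, or roughly 418 bytes at 128 kbps.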


The complete flow

• Create a network connection and read data in its callback

• Decrypt, and put the decrypted data into memory

• Create an Audio Queue or an Audio Unit graph, and receive the callback in which the system asks for the next chunk of data

• Provide the data


We focus on:

• Reading data into memory

• Parsing out packets and keeping them

• Feeding the data to the audio APIs

• Registering callbacks

• Returning the next chunk of data in each callback

• Converting the data into Linear PCM format


What is the difference between the Audio Queue and Audio Unit APIs?

• In the flow above, the biggest difference is that the Audio Queue API does not require you to convert the data into Linear PCM yourself

• Audio Unit is harder to set up, but lets you manipulate the Linear PCM data directly…

• In Audio Unit, you can add mixer and EQ effect nodes to the audio graph


Figure: MP3 file layout. ID3 data at the front, followed by repeating MP3 header + MP3 data units; each header/data pair is one packet.


Identifying the MP3 header

• An MP3 header is 4 bytes long

• The first 11 bits are the sync word (all 1s); when you see the sync word, you know it is the start of a packet

• The bits after the sync word describe the format of the packet

• http://www.mp3-tech.org/programmer/frame_header.html
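The sync-word check and the frame-length formula can be sketched in Python (a simplified sketch assuming MPEG-1 Layer III at 44.1 kHz, as in the parser on the next slide; the function names are mine):

```python
# MPEG-1 Layer III bitrate table, indexed by the 4-bit bitrate index
BITRATES = [0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
            112000, 128000, 160000, 192000, 224000, 256000, 320000, 0]

def is_frame_sync(b0, b1):
    """True when the first 11 bits of the 4-byte header are all 1s (the sync word)."""
    return b0 == 0xFF and (b1 & 0xE0) == 0xE0

def frame_length(header):
    """Byte length of an MPEG-1 Layer III frame at 44.1 kHz, from its 4-byte header."""
    bitrate = BITRATES[header[2] >> 4]   # upper nibble of byte 2: bitrate index
    padding = (header[2] >> 1) & 0x01    # padding flag adds one byte
    return 144 * bitrate // 44100 + padding
```

For example, a 128 kbps header such as FF FB 90 00 yields a 417-byte frame.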


Sample packet parser

def parse(content):
    foundFirstFrame = False
    i = 0
    while i + 2 < len(content):
        frameSync = (content[i] << 8) | (content[i + 1] & (0x80 | 0x40 | 0x20))
        if frameSync != 0xFFE0:
            i += 1
            continue
        foundFirstFrame = True
        audioVersion = (content[i + 1] >> 3) & 0x03
        layer = (content[i + 1] >> 1) & 0x03
        hasCRC = not (content[i + 1] & 0x01)
        bitrateIndex = content[i + 2] >> 4
        sampleRateIndex = (content[i + 2] >> 2) & 0x03
        bitrate = [0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
                   112000, 128000, 160000, 192000, 224000, 256000, 320000, 0][bitrateIndex]
        hasPadding = bool((content[i + 2] >> 1) & 0x01)
        frameLength = 144 * bitrate // 44100 + \
            (1 if hasPadding else 0) + \
            (2 if hasCRC else 0)
        if frameLength <= 0:  # invalid header: skip one byte and keep scanning
            i += 1
            continue
        i += frameLength


MPEG-4 Audio/Video

Figure: an MPEG-4 file contains a MOOV atom (metadata) and an MDAT atom (media data).


Core Audio provides an audio parser

• It finds packets and figures out the file format

• C API

• Available since iOS 2 / Mac OS X 10.5

• For local files, call AudioFileOpenURL

• For streaming data, call AudioFileStreamOpen


AudioFileStreamOpen

• AudioFileStreamOpen(self, ZBAudioFileStreamPropertyListener, ZBAudioFileStreamPacketsCallback, kAudioFileMP3Type, &audioFileStreamID);

• self: an object the callbacks can use (passed through as the client data)

• ZBAudioFileStreamPropertyListener: the file-format callback

• ZBAudioFileStreamPacketsCallback: the callback that receives parsed packets

• kAudioFileMP3Type: a hint for the parser

• audioFileStreamID: receives the newly created audio file stream ID


Keeping packets in memory?

• A simple structure is all we need:

typedef struct {
    size_t length; // length of the packet
    void *data;    // pointer to the packet data
} PacketData;


Saving the file format

void ZBAudioFileStreamPropertyListener(void *inClientData,
    AudioFileStreamID inAudioFileStream,
    AudioFileStreamPropertyID inPropertyID,
    UInt32 *ioFlags)
{
    ZBSimplePlayer *self = (ZBSimplePlayer *)inClientData;
    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        UInt32 dataSize = 0;
        OSStatus status = 0;
        AudioStreamBasicDescription audioStreamDescription;
        Boolean writable = false;
        status = AudioFileStreamGetPropertyInfo(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &dataSize, &writable);
        status = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &dataSize, &audioStreamDescription);
        // then save audioStreamDescription somewhere
    }
}


Saving packets

static void ZBAudioFileStreamPacketsCallback(void *inClientData,
    UInt32 inNumberBytes,
    UInt32 inNumberPackets,
    const void *inInputData,
    AudioStreamPacketDescription *inPacketDescriptions)
{
    ZBSimplePlayer *self = (ZBSimplePlayer *)inClientData;
    for (int i = 0; i < inNumberPackets; ++i) {
        SInt64 packetStart = inPacketDescriptions[i].mStartOffset;
        UInt32 packetSize = inPacketDescriptions[i].mDataByteSize;
        assert(packetSize > 0);
        self->packetData[self->packetCount].length = (size_t)packetSize;
        self->packetData[self->packetCount].data = malloc(packetSize);
        memcpy(self->packetData[self->packetCount].data, (const char *)inInputData + packetStart, packetSize);
        self->packetCount++;
    }
}


Next step

• For smooth playback, we wait until a certain number of packets have arrived before calling the audio APIs to start playing. Insufficient data causes audible glitches

• Waiting for packets like this is called buffering

• Converting data into time: packet count * frames per packet / sample rate

• 100 * 1152 / 44100 = 2.61… seconds
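The buffering math above, as a small Python sketch (the 2-second threshold is an illustrative choice of mine, not from the slides):

```python
FRAMES_PER_PACKET = 1152   # frames in one MP3 packet
SAMPLE_RATE = 44100.0      # samples per second

def buffered_seconds(packet_count):
    """Packet count * frames per packet / sample rate."""
    return packet_count * FRAMES_PER_PACKET / SAMPLE_RATE

def ready_to_play(packet_count, threshold_seconds=2.0):
    """Start playback only once enough audio is buffered to avoid glitches."""
    return buffered_seconds(packet_count) >= threshold_seconds
```

With 100 packets buffered (about 2.61 seconds), playback can start; with 50, we keep waiting.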


Playback

Figure: the system fires callbacks repeatedly during playback, and we hand over buffered packets in each callback.


Playback with Audio Queue

• On every callback, provide a new buffer struct

• The buffer object contains:

• The packet count

• A pointer to the packet data

• The packet format

• Enqueue the buffer into the queue


Playback with Audio Unit

• The Audio Unit API hands you a pointer to a struct called ioData, along with a requested number of frames

• Fill ioData with the requested amount of Linear PCM data


Audio Unit on iOS

• The requested frame count differs between normal mode and screen-locked mode

• Normally 1024 frames are requested, but 4096 when the screen is locked

• The main motivation is saving power

• If there is not enough data, playback stops
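Larger slices mean fewer CPU wakeups per second, which is why the locked-screen mode saves power. A quick Python check of the callback intervals (44.1 kHz assumed):

```python
SAMPLE_RATE = 44100.0  # samples per second

def callback_interval_ms(frames):
    """How often the render callback fires for a given requested frame count."""
    return frames / SAMPLE_RATE * 1000.0

# 1024 frames: a callback roughly every 23 ms
# 4096 frames: a callback roughly every 93 ms
```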


Audio Queue


Creating an Audio Queue

OSStatus status = AudioQueueNewOutput(audioStreamBasicDescription,
    ZBAudioQueueOutputCallback, self,
    CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
    0, &outputQueue);
assert(status == noErr);


Enqueue Data

AudioQueueBufferRef buffer;
status = AudioQueueAllocateBuffer(outputQueue, totalSize, &buffer);
assert(status == noErr);
buffer->mAudioDataByteSize = totalSize;
buffer->mUserData = self;

AudioStreamPacketDescription *packetDescs = calloc(inPacketCount, sizeof(AudioStreamPacketDescription));

totalSize = 0;
for (index = 0; index < inPacketCount; index++) {
    size_t readIndex = index + readHead;
    memcpy(buffer->mAudioData + totalSize, packetData[readIndex].data, packetData[readIndex].length);
    AudioStreamPacketDescription description;
    description.mStartOffset = totalSize;
    description.mDataByteSize = (UInt32)packetData[readIndex].length;
    description.mVariableFramesInPacket = 0;
    totalSize += packetData[readIndex].length;
    memcpy(&(packetDescs[index]), &description, sizeof(AudioStreamPacketDescription));
}
status = AudioQueueEnqueueBuffer(outputQueue, buffer, (UInt32)inPacketCount, packetDescs);
free(packetDescs);


Audio Unit


Audio Unit

Figure: an Audio Unit graph made of a mixer node, an effect node, and an output node, driven by a render callback; each node's underlying audio unit (mixer unit, effect unit, output unit) is obtained with "Get Info" (AUGraphNodeInfo).


Creating an audio converter

AudioConverterRef converter;
AudioStreamBasicDescription fromFormat = …;
AudioStreamBasicDescription destFormat = …;
AudioConverterNew(&fromFormat, &destFormat, &converter);


Using the audio converter

AudioBufferList *list;
UInt32 packetSize = 1024;
AudioConverterFillComplexBuffer(converter, ZBPlayerConverterFiller, self, &packetSize, list, NULL);


Audio converter callback

OSStatus ZBPlayerConverterFiller(AudioConverterRef inAudioConverter,
    UInt32 *ioNumberDataPackets,
    AudioBufferList *ioData,
    AudioStreamPacketDescription **outDataPacketDescription,
    void *inUserData)
{
    ZBSimpleAUPlayer *self = (ZBSimpleAUPlayer *)inUserData;
    *ioNumberDataPackets = 1;
    static AudioStreamPacketDescription aspdesc;

    ioData->mNumberBuffers = 1;
    void *data = self->packetData[readHead].data;
    UInt32 length = self->packetData[readHead].length;
    ioData->mBuffers[0].mData = data;
    ioData->mBuffers[0].mDataByteSize = length;

    *outDataPacketDescription = &aspdesc;
    aspdesc.mDataByteSize = length;
    aspdesc.mStartOffset = 0;
    aspdesc.mVariableFramesInPacket = 1;
    readHead++;
    return noErr;
}


NewAUGraph(&audioGraph); // create the audio graph

AudioComponentDescription cdesc;
bzero(&cdesc, sizeof(AudioComponentDescription));
cdesc.componentType = kAudioUnitType_Output;
cdesc.componentSubType = kAudioUnitSubType_DefaultOutput;
cdesc.componentManufacturer = kAudioUnitManufacturer_Apple;
cdesc.componentFlags = 0;
cdesc.componentFlagsMask = 0;
AUGraphAddNode(audioGraph, &cdesc, &outputNode); // create the output node

AUGraphOpen(audioGraph);
AUGraphNodeInfo(audioGraph, outputNode, &cdesc, &outputUnit); // obtain the output unit

AudioStreamBasicDescription destFormat = LFPCMStreamDescription(); // set the output format
AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &destFormat, sizeof(destFormat));

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = ZBPlayerAURenderCallback;
callbackStruct.inputProcRefCon = self;

AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct)); // install the render callback

AUGraphInitialize(audioGraph); // initialize the audio graph
CAShow(audioGraph); // print the audio graph for debugging


Playback

• AUGraphStart(audioGraph);

• AUGraphStop(audioGraph);


Sample Code

• https://github.com/zonble/ZBSimplePlayer


Because the Audio Unit API works directly on Linear PCM data, we can also modify the data ourselves to create effects

Friday, August 16, 13

Thanks!
