gasper-kolenc
Utilizing AVFoundation at Dubsmash
● First off - Gasper
● Founded my own mobile dev company back in Slovenia
● Spent the past half a year driving the crazy train that is Dubsmash’s iOS app
● Dubsmash is aiming to revolutionize the way we do video communication
Unavoidable intro
Who the fu*k am I?
● Video capture
● Stitching together several video parts
● Merging sound with video
● Rendering additional layers on top of the video
Video creation process in Dubsmash app
Overview
Video capture
Video capture: It all starts here...
Key components
● Audio player
● Video camera object
Obstacles to overcome
● Different screen sizes and aspect ratios
● Keeping video capture in sync with sound
● Making the video personal by maintaining eye contact
Video capture: VideoCamera class
Just a few publicly exposed functions
● Initialize a new AVCaptureMovieFileOutput
● Rotate camera
● Start and stop capture
Video capture: VideoCamera class
func initializeNewMovieFileOutput() -> AVCaptureMovieFileOutput {
    resetCurrentMovieFileOutput()
    captureSession.beginConfiguration()
    let newMovieFileOutput = AVCaptureMovieFileOutput()
    if captureSession.canAddOutput(newMovieFileOutput) {
        captureSession.addOutput(newMovieFileOutput)
    }
    captureSession.commitConfiguration()
    return newMovieFileOutput
}
Initial video rendering
Initial video rendering: RenderEngine class
RenderEngine class with several video rendering capabilities
● Merging video files
● Adding sound to a video object
● Drawing additional layers on top of video
● Compressing video
Initial video rendering: Merging video parts
● Length of the output video is constrained to audio asset’s duration
● Stitching the video parts together using AVMutableComposition
○ Leads to possible discrepancies between the video and audio length, so we
need to make sure to scale video parts to appropriate lengths
● Using AVMutableVideoCompositionInstruction which contains an
AVMutableVideoCompositionLayerInstruction for every video part in
its layerInstructions property
● In the end export through AVAssetExportSession
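The length adjustment in the second bullet boils down to simple arithmetic. Here is a minimal sketch in plain Swift (modern syntax; the durations are made-up example values, and durationDifferencePerClip matches the name used in the snippets on the following slides):

```swift
// Sketch of spreading the audio/video duration mismatch across clips.
// All values here are illustrative, not taken from the Dubsmash code.
let audioDuration = 10.0             // seconds; the audio asset fixes the output length
let clipDurations = [3.2, 3.2, 3.2]  // seconds; durations of the recorded video parts
let totalClipDuration = clipDurations.reduce(0, +)

// Spread the difference evenly across clips, the way durationDifferencePerClip
// is applied per clip in the scaleTimeRange snippet on a later slide.
let durationDifferencePerClip =
    (audioDuration - totalClipDuration) / Double(clipDurations.count)
let adjustedDurations = clipDurations.map { $0 + durationDifferencePerClip }

print(adjustedDurations)             // each clip stretched by the same amount
print(adjustedDurations.reduce(0, +)) // sums back to the audio duration
```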
Initial video rendering: Merging video parts
do {
    let mutableComposition = AVMutableComposition()
    for videoAsset in videoAssets {
        let videoTrack = mutableComposition.addMutableTrackWithMediaType(
            AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
        guard let videoAssetTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo).first
            else { throw NSError(...) }
        ...
1. Wrapping everything in a do-catch statement
2. Creating an AVMutableComposition
3. Extracting an AVAssetTrack for every video part
Initial video rendering: Merging video parts
let adjustedDuration = CMTime(
    seconds: videoAsset.duration.seconds + durationDifferencePerClip,
    preferredTimescale: videoAsset.duration.timescale)
try videoTrack.insertTimeRange(
    CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration),
    ofTrack: videoAssetTrack,
    atTime: timeOffset)
videoTrack.scaleTimeRange(
    CMTimeRange(start: timeOffset, duration: videoAsset.duration),
    toDuration: adjustedDuration)
4. Scaling AVAssetTrack to appropriate length
Initial video rendering: Merging video parts
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
mainInstruction.layerInstructions.append(layerInstruction)
layerInstruction.setOpacity(
    mainInstruction.layerInstructions.count < videoAssets.count ? 0 : 1,
    atTime: timeOffset + adjustedDuration)

var layerTransform = videoAssetTrack.preferredTransform
// ... stuff with transforms
layerInstruction.setTransform(layerTransform, atTime: kCMTimeZero)
5. Setting appropriate layer instruction
Initial video rendering: Merging video parts
exportSession = AVAssetExportSession(asset: asset, presetName: exportPreset)
exportSession?.videoComposition = videoComposition
exportSession?.outputFileType = AVFileTypeMPEG4
exportSession?.shouldOptimizeForNetworkUse = true

taskCompletion = BFTaskCompletionSource()
let appEnteredBackgroundSignal = // Signal for UIApplicationDidEnterBackgroundNotification
appEnteredBackgroundSignal.subscribeNext { _ in
    cancelExport()
}

exportSession?.exportAsynchronouslyWithCompletionHandler { … }
6. Exporting through AVAssetExportSession
Initial video rendering: Adding sound to video
● Sound and video playing together from different sources results in an apparent
lack of synchronization between the two
● To mitigate this problem we add the sound to the video by creating an
AVMutableComposition object with two AVAssetTracks
let audioAssetTrack = audioAsset.tracksWithMediaType(AVMediaTypeAudio).first
let videoAssetTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo).first
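Continuing from the two lines above, combining the tracks on one composition timeline could look roughly like this (a sketch using the same Swift 2 era API as the rest of the deck; it assumes the optional tracks have already been unwrapped and leaves out error handling):

```swift
import AVFoundation

// Assumed available from the snippet above: audioAssetTrack, videoAssetTrack,
// audioAsset, videoAsset (optionals unwrapped for brevity).
let mutableComposition = AVMutableComposition()

let videoTrack = mutableComposition.addMutableTrackWithMediaType(
    AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
let audioTrack = mutableComposition.addMutableTrackWithMediaType(
    AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

// Insert both tracks starting at time zero; sharing one composition timeline
// is what keeps audio and video in sync during playback and export.
try videoTrack.insertTimeRange(
    CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration),
    ofTrack: videoAssetTrack, atTime: kCMTimeZero)
try audioTrack.insertTimeRange(
    CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration),
    ofTrack: audioAssetTrack, atTime: kCMTimeZero)
```

The resulting mutableComposition can then go through the same AVAssetExportSession path shown on the earlier slides.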
Rendering additional layers
Rendering additional layers: Rendering text on top of video
● Users have the ability to enrich the video through text, filters and stickers
● Code snippet for rendering text:
let textImage = textField.screenshot()
let textFrame = videoContainer.convertRect(textField.frame, fromView: textField.superview)

let videoLayer = addLayerOverlayToVideoComposition(videoComposition)
let textLayer = CALayer()
textLayer.bounds = CGRect(x: 0, y: 0, width: textImage.size.width, height: textImage.size.height)
textLayer.contents = textImage.CGImage
videoLayer.addSublayer(textLayer)
Rendering additional layers: Rendering text on top of video
let parentLayer = CALayer()
parentLayer.bounds = CGRect(origin: CGPointZero, size: exportSize)
parentLayer.anchorPoint = CGPointZero
parentLayer.position = CGPointZero

let videoLayer = CALayer()
videoLayer.bounds = parentLayer.bounds
parentLayer.addSublayer(videoLayer)
videoLayer.position = CGPoint(x: parentLayer.bounds.width / 2, y: parentLayer.bounds.height / 2)

let layer = CALayer()
layer.frame = parentLayer.bounds
parentLayer.addSublayer(layer)

videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
return layer
Common pitfalls
Common pitfalls: Love/hate relationship with AVFoundation
● Doing practically anything AV related while the app is in the background → Crash
○ Your best friend is now:
UIApplication.sharedApplication().applicationState == .Active
● Setting max recorded duration on an AVCaptureMovieFileOutput? Who
cares! AVFoundation certainly doesn’t…
● Something is bound to go wrong at some point; thankfully we have these
descriptive errors popping up:
Error Domain=NSOSStatusErrorDomain Code=-12780
"The operation couldn’t be completed. (OSStatus error -12780.)"
Shameless plug: Looking for mobile devs, iOS & Android
Questions