Video Transitions

A video transition, as the name suggests, is the handover from one video to another: by applying image-processing effects during the switch, the cut between the two clips becomes smoother and better suited to what the user wants.

First, let's review the video composition workflow; a minimal sketch of how steps 3, 4 and 7 fit together follows the list.

  1. Obtain the video resources: AVAsset.
  2. Create the mutable composition object: AVMutableComposition.
  3. Create the video composition: AVMutableVideoComposition. This class describes what is to be edited in the video; it lets you set the render size, render scale and frame duration, and it manages the video composition instructions.
  4. Create a customVideoCompositorClass conforming to the AVVideoCompositing protocol. This class defines the properties and methods of the video compositor, i.e. the object that actually composites the source frames.
  5. Add the source media to the mutable composition as tracks (AVMutableCompositionTrack); typically two kinds are added, an audio track and a video track.
  6. Create the video composition layer instructions (AVMutableVideoCompositionLayerInstruction), which are used to manage how the video tracks are applied and combined: when each sub-video appears and disappears in the overall video, its size, its animation, and so on.
  7. Create the export session (AVAssetExportSession), which renders a new video according to the videoComposition and writes it to a specified file path.
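
The code in the rest of this article focuses on steps 5 and 6, so here is a rough sketch of how steps 3, 4 and 7 are usually wired together. CustomVideoCompositor, outputURL and the completion handling are placeholders, not part of the original sample:

    // A sketch of steps 3, 4 and 7; CustomVideoCompositor and outputURL are placeholders.
    - (void)exportWithTransitionsToURL:(NSURL *)outputURL {
        AVMutableComposition *composition = [AVMutableComposition composition];

        // Step 3: the video composition describes render size and frame duration, and holds the instructions.
        AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
        videoComposition.frameDuration = CMTimeMake(1, 30);   // 30 fps
        videoComposition.renderSize = CGSizeMake(1280, 720);  // output size

        // Step 4: plug in the custom compositor that implements AVVideoCompositing.
        videoComposition.customVideoCompositorClass = [CustomVideoCompositor class];

        // Steps 5 & 6: fill the composition tracks and the instruction list (shown in detail below).
        [self buildTransitionComposition:composition andVideoComposition:videoComposition];

        // Step 7: export a new movie file driven by the video composition.
        AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                               presetName:AVAssetExportPresetHighestQuality];
        exportSession.videoComposition = videoComposition;
        exportSession.outputFileType = AVFileTypeQuickTimeMovie;
        exportSession.outputURL = outputURL;
        [exportSession exportAsynchronouslyWithCompletionHandler:^{
            NSLog(@"Export finished with status: %ld", (long)exportSession.status);
        }];
    }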

Building the transition instructions

The prerequisite for a transition is access to the video frames of both clips involved. So, while building the instruction list, we add an extra instruction for each transition that covers the overlap period and references both video tracks. (The code below uses a custom CustomVideoCompositionInstruction class, which adopts the AVVideoCompositionInstruction protocol, in place of the stock AVMutableVideoCompositionLayerInstruction; a sketch of that class follows the code.)

    - (void)buildTransitionComposition:(AVMutableComposition *)composition andVideoComposition:(AVMutableVideoComposition *)videoComposition {
        NSUInteger clipsCount = self.clips.count;
        CMTime nextClipStartTime = kCMTimeZero;
        // Transition duration: 2 s.
        CMTime transitionDuration = CMTimeMakeWithSeconds(2, 600);

        // Add two video tracks and two audio tracks.
        AVMutableCompositionTrack *compositionVideoTracks[2];
        AVMutableCompositionTrack *compositionAudioTracks[2];
        compositionVideoTracks[0] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        compositionVideoTracks[1] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        compositionAudioTracks[0] = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        compositionAudioTracks[1] = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

        CMTimeRange *timeRanges = alloca(sizeof(CMTimeRange) * clipsCount);
        CMTimeRange *transitionTimeRanges = alloca(sizeof(CMTimeRange) * clipsCount);

        // Place clips into alternating video & audio tracks in the composition so that neighbouring clips can overlap.
        for (int i = 0; i < clipsCount; i++) {
            NSInteger alternatingIndex = i % 2; // alternate between the two tracks (the original indexed with i, which overruns for more than two clips)
            AVAsset *asset = [self.clips objectAtIndex:i];
            CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, [asset duration]);

            AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
            [compositionVideoTracks[alternatingIndex] insertTimeRange:timeRange ofTrack:clipVideoTrack atTime:nextClipStartTime error:nil];

            AVAssetTrack *clipAudioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
            [compositionAudioTracks[alternatingIndex] insertTimeRange:timeRange ofTrack:clipAudioTrack atTime:nextClipStartTime error:nil];

            timeRanges[i] = CMTimeRangeMake(nextClipStartTime, timeRange.duration);
            // Trim the pass-through range of the current clip by the transition duration on each side that has a transition.
            if (i > 0) {
                timeRanges[i].start = CMTimeAdd(timeRanges[i].start, transitionDuration);
                timeRanges[i].duration = CMTimeSubtract(timeRanges[i].duration, transitionDuration);
            }
            if (i + 1 < clipsCount) {
                timeRanges[i].duration = CMTimeSubtract(timeRanges[i].duration, transitionDuration);
            }

            // The next clip starts before the current one ends, overlapping by the transition duration.
            nextClipStartTime = CMTimeAdd(nextClipStartTime, asset.duration);
            nextClipStartTime = CMTimeSubtract(nextClipStartTime, transitionDuration);

            // The transition range covers the overlap between this clip and the next one.
            if (i + 1 < clipsCount) {
                transitionTimeRanges[i] = CMTimeRangeMake(nextClipStartTime, transitionDuration);
            }
        }

        NSMutableArray *instructions = [NSMutableArray array];
        for (int i = 0; i < clipsCount; i++) {
            NSInteger alternatingIndex = i % 2;

            // Pass-through instruction: only the current clip's track is visible.
            CustomVideoCompositionInstruction *videoInstruction = [[CustomVideoCompositionInstruction alloc] initTransitionWithSourceTrackIDs:@[@(compositionVideoTracks[alternatingIndex].trackID)] forTimeRange:timeRanges[i]];
            videoInstruction.trackID = compositionVideoTracks[alternatingIndex].trackID;
            [instructions addObject:videoInstruction];

            // Transition instruction: both tracks are visible and get blended.
            if (i + 1 < clipsCount) {
                CustomVideoCompositionInstruction *transitionInstruction = [[CustomVideoCompositionInstruction alloc] initTransitionWithSourceTrackIDs:@[@(compositionVideoTracks[0].trackID), @(compositionVideoTracks[1].trackID)] forTimeRange:transitionTimeRanges[i]];
                // Current clip's track -> foreground while compositing (fades out).
                transitionInstruction.foregroundTrackID = compositionVideoTracks[alternatingIndex].trackID;
                // Next clip's track -> background while compositing (fades in).
                transitionInstruction.backgroundTrackID = compositionVideoTracks[1 - alternatingIndex].trackID;
                [instructions addObject:transitionInstruction];
            }
        }
        videoComposition.instructions = instructions;
    }
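
CustomVideoCompositionInstruction above is not an AVFoundation class; it is a small custom class adopting the AVVideoCompositionInstruction protocol. Its implementation is not listed in the original, so the following is only a sketch of an interface consistent with how it is used above (trackID, foregroundTrackID, backgroundTrackID and the initializer come from the code; everything else is an assumption):

    // CustomVideoCompositionInstruction.h — a sketch inferred from its usage above, not the original file.
    #import <AVFoundation/AVFoundation.h>

    @interface CustomVideoCompositionInstruction : NSObject <AVVideoCompositionInstruction>

    // Pass-through case: the single visible track.
    @property (nonatomic, assign) CMPersistentTrackID trackID;
    // Transition case: the two tracks being blended.
    @property (nonatomic, assign) CMPersistentTrackID foregroundTrackID;
    @property (nonatomic, assign) CMPersistentTrackID backgroundTrackID;

    // Required by the AVVideoCompositionInstruction protocol.
    @property (nonatomic, readonly) CMTimeRange timeRange;
    @property (nonatomic, readonly) BOOL enablePostProcessing;
    @property (nonatomic, readonly) BOOL containsTweening;
    @property (nonatomic, readonly) NSArray *requiredSourceTrackIDs;
    @property (nonatomic, readonly) CMPersistentTrackID passthroughTrackID;

    - (instancetype)initTransitionWithSourceTrackIDs:(NSArray *)sourceTrackIDs
                                        forTimeRange:(CMTimeRange)timeRange;

    @end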

After these transition instructions have been added, startVideoCompositionRequest: is invoked for every frame that falls inside an instruction's time range, and the request carries the corresponding instruction. From the track IDs held by that instruction we can fetch the source frames of the clips being transitioned, apply custom image processing to them, and thereby generate the transition animation.
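
Besides startVideoCompositionRequest:, the custom compositor has to satisfy the rest of the AVVideoCompositing protocol: the pixel buffer attributes and the render-context callback. Those parts are not shown in the original; a minimal sketch, with ivar names (_renderContextQueue, _renderingQueue, shouldCancelAllRequests, renderContext) assumed to match the code below, could be:

    // Remaining AVVideoCompositing requirements — a sketch; ivar names are assumptions matching the code below.
    - (NSDictionary *)sourcePixelBufferAttributes {
        return @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                  (id)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
                  (id)kCVPixelBufferMetalCompatibilityKey : @YES };
    }

    - (NSDictionary *)requiredPixelBufferAttributesForRenderContext {
        return @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                  (id)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
                  (id)kCVPixelBufferMetalCompatibilityKey : @YES };
    }

    - (void)renderContextChanged:(AVVideoCompositionRenderContext *)newRenderContext {
        // Called when the render context (render size, pixel buffer pool, ...) changes.
        dispatch_sync(_renderContextQueue, ^{
            self.renderContext = newRenderContext;
        });
    }

    - (void)cancelAllPendingVideoCompositionRequests {
        // Flag checked at the top of startVideoCompositionRequest:.
        self.shouldCancelAllRequests = YES;
        dispatch_barrier_async(_renderingQueue, ^{
            self.shouldCancelAllRequests = NO;
        });
    }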

The image processing itself can be implemented with OpenGL or Metal, whichever suits your needs.

    - (void)startVideoCompositionRequest:(nonnull AVAsynchronousVideoCompositionRequest *)request {
        @autoreleasepool {
            dispatch_async(_renderingQueue, ^{
                if (self.shouldCancelAllRequests) {
                    [request finishCancelledRequest];
                } else {
                    NSError *err = nil;
                    // Get the next rendered pixel buffer (returned with a +1 reference).
                    CVPixelBufferRef resultPixels = [self newRenderedPixelBufferForRequest:request error:&err];
                    if (resultPixels) {
                        // Hand the composed frame back to the request, then balance the +1 reference.
                        [request finishWithComposedVideoFrame:resultPixels];
                        CFRelease(resultPixels);
                    } else {
                        [request finishWithError:err];
                    }
                }
            });
        }
    }

    - (CVPixelBufferRef)newRenderedPixelBufferForRequest:(AVAsynchronousVideoCompositionRequest *)request error:(NSError **)errOut {
        CVPixelBufferRef dstPixels = nil;
        CustomVideoCompositionInstruction *currentInstruction = request.videoCompositionInstruction;
        float tweenFactor = factorForTimeInRange(request.compositionTime, request.videoCompositionInstruction.timeRange);
        if (currentInstruction.trackID) {
            // Pass-through: return the source frame of the single visible track.
            // Retain it so this method consistently returns a +1 reference.
            dstPixels = [request sourceFrameByTrackID:currentInstruction.trackID];
            if (dstPixels) {
                CFRetain(dstPixels);
            }
        } else {
            // Transition: fetch both source frames and blend them into a new pixel buffer.
            CVPixelBufferRef currentPixelBuffer1 = [request sourceFrameByTrackID:currentInstruction.foregroundTrackID];
            CVPixelBufferRef currentPixelBuffer2 = [request sourceFrameByTrackID:currentInstruction.backgroundTrackID];
            dstPixels = [self.renderContext newPixelBuffer];
            // OpenGL renderer:
            // [self.oglRenderer renderPixelBuffer:dstPixels foregroundPixelBuffer:currentPixelBuffer1 backgroundPixelBuffer:currentPixelBuffer2 tweenFactor:tweenFactor];
            // Metal renderer:
            [self.metalRenderer renderPixelBuffer:dstPixels foregroundPixelBuffer:currentPixelBuffer1 backgroundPixelBuffer:currentPixelBuffer2 tweenFactor:tweenFactor];
        }
        return dstPixels;
    }
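
factorForTimeInRange used above is a small helper that is not listed in the original. It maps the current composition time to a 0-to-1 progress value within the instruction's time range; a straightforward version might be:

    // Progress of `time` within `range`, clamped to [0, 1]. A sketch; the original helper is not shown.
    static float factorForTimeInRange(CMTime time, CMTimeRange range) {
        CMTime elapsed = CMTimeSubtract(time, range.start);
        float factor = (float)(CMTimeGetSeconds(elapsed) / CMTimeGetSeconds(range.duration));
        return MAX(0.0f, MIN(1.0f, factor));
    }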

Below is a simple opacity (cross-fade) transition example that should make the overall workflow clear.

OpenGL transition

shader
vertex shader

    attribute vec4 position;
    attribute vec2 texCoord;
    varying vec2 texCoordVarying;

    void main()
    {
        gl_Position = position;
        texCoordVarying = texCoord;
    }

fragment shader

    precision mediump float;
    uniform sampler2D Sampler;
    varying highp vec2 texCoordVarying;

    void main()
    {
        vec3 color = texture2D(Sampler, texCoordVarying).rgb;
        gl_FragColor = vec4(color, 1.0);
    }

Setting up the shader program

    - (BOOL)loadShaders {
        GLuint vertShader, fragShader;
        NSString *vertShaderSource, *fragShaderSource;
        NSString *vertShaderPath = [[NSBundle mainBundle] pathForResource:@"transition.vs" ofType:nil];
        NSString *fragShaderPath = [[NSBundle mainBundle] pathForResource:@"transition.fs" ofType:nil];

        // Create the shader program.
        self.program = glCreateProgram();

        // Create and compile the vertex shader.
        vertShaderSource = [NSString stringWithContentsOfFile:vertShaderPath encoding:NSUTF8StringEncoding error:nil];
        if (![self compileShader:&vertShader type:GL_VERTEX_SHADER source:vertShaderSource]) {
            NSLog(@"Failed to compile vertex shader");
            return NO;
        }

        // Create and compile the fragment shader.
        fragShaderSource = [NSString stringWithContentsOfFile:fragShaderPath encoding:NSUTF8StringEncoding error:nil];
        if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER source:fragShaderSource]) {
            NSLog(@"Failed to compile fragment shader");
            return NO;
        }

        // Attach the vertex shader to the program.
        glAttachShader(self.program, vertShader);
        // Attach the fragment shader to the program.
        glAttachShader(self.program, fragShader);

        // Bind attribute locations. This needs to be done prior to linking.
        glBindAttribLocation(self.program, ATTRIB_VERTEX, "position");
        glBindAttribLocation(self.program, ATTRIB_TEXCOORD, "texCoord");

        // Link the program.
        if (![self linkProgram:self.program]) {
            NSLog(@"Failed to link program");
            if (vertShader) {
                glDeleteShader(vertShader);
                vertShader = 0;
            }
            if (fragShader) {
                glDeleteShader(fragShader);
                fragShader = 0;
            }
            if (_program) {
                glDeleteProgram(_program);
                _program = 0;
            }
            return NO;
        }

        // Get uniform locations.
        uniforms[UNIFORM] = glGetUniformLocation(_program, "Sampler");

        // Release the vertex and fragment shaders.
        if (vertShader) {
            glDetachShader(_program, vertShader);
            glDeleteShader(vertShader);
        }
        if (fragShader) {
            glDetachShader(_program, fragShader);
            glDeleteShader(fragShader);
        }
        return YES;
    }

    - (BOOL)compileShader:(GLuint *)shader type:(GLenum)type source:(NSString *)sourceString
    {
        if (sourceString == nil) {
            NSLog(@"Failed to load shader: empty source string");
            return NO;
        }

        GLint status;
        const GLchar *source;
        source = (GLchar *)[sourceString UTF8String];

        *shader = glCreateShader(type);
        glShaderSource(*shader, 1, &source, NULL);
        glCompileShader(*shader);

    #if defined(DEBUG)
        GLint logLength;
        glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLchar *log = (GLchar *)malloc(logLength);
            glGetShaderInfoLog(*shader, logLength, &logLength, log);
            NSLog(@"Shader compile log:\n%s", log);
            free(log);
        }
    #endif

        glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
        if (status == 0) {
            glDeleteShader(*shader);
            return NO;
        }
        return YES;
    }

    - (BOOL)linkProgram:(GLuint)prog
    {
        GLint status;
        glLinkProgram(prog);

    #if defined(DEBUG)
        GLint logLength;
        glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0) {
            GLchar *log = (GLchar *)malloc(logLength);
            glGetProgramInfoLog(prog, logLength, &logLength, log);
            NSLog(@"Program link log:\n%s", log);
            free(log);
        }
    #endif

        glGetProgramiv(prog, GL_LINK_STATUS, &status);
        if (status == 0) {
            return NO;
        }
        return YES;
    }

Rendering

    - (void)renderPixelBuffer:(CVPixelBufferRef)destinationPixelBuffer
        foregroundPixelBuffer:(CVPixelBufferRef)foregroundPixelBuffer
        backgroundPixelBuffer:(CVPixelBufferRef)backgroundPixelBuffer
                  tweenFactor:(float)tween {
        if (!foregroundPixelBuffer || !backgroundPixelBuffer) {
            return;
        }
        [EAGLContext setCurrentContext:self.currentContext];

        CVOpenGLESTextureRef foregroundTexture = [self textureForPixelBuffer:foregroundPixelBuffer];
        CVOpenGLESTextureRef backgroundTexture = [self textureForPixelBuffer:backgroundPixelBuffer];
        CVOpenGLESTextureRef destTexture = [self textureForPixelBuffer:destinationPixelBuffer];

        glUseProgram(self.program);
        glBindFramebuffer(GL_FRAMEBUFFER, self.offscreenBufferHandle);
        glViewport(0, 0, (int)CVPixelBufferGetWidth(destinationPixelBuffer), (int)CVPixelBufferGetHeight(destinationPixelBuffer));

        // First (foreground) texture
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(CVOpenGLESTextureGetTarget(foregroundTexture), CVOpenGLESTextureGetName(foregroundTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Second (background) texture
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(CVOpenGLESTextureGetTarget(backgroundTexture), CVOpenGLESTextureGetName(backgroundTexture));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Attach the destination texture as a color attachment to the offscreen framebuffer.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, CVOpenGLESTextureGetTarget(destTexture), CVOpenGLESTextureGetName(destTexture), 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
            if (foregroundTexture) CFRelease(foregroundTexture);
            if (backgroundTexture) CFRelease(backgroundTexture);
            if (destTexture) CFRelease(destTexture);
            return;
        }

        // Clear
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Vertices
        GLfloat quadVertexData [] = {
            -1.0,  1.0,
             1.0,  1.0,
            -1.0, -1.0,
             1.0, -1.0,
        };
        // Texture coordinates vary from 0 -> 1, whereas vertex data varies from -1 -> 1.
        GLfloat quadTextureData [] = {
            0.5 + quadVertexData[0]/2, 0.5 + quadVertexData[1]/2,
            0.5 + quadVertexData[2]/2, 0.5 + quadVertexData[3]/2,
            0.5 + quadVertexData[4]/2, 0.5 + quadVertexData[5]/2,
            0.5 + quadVertexData[6]/2, 0.5 + quadVertexData[7]/2,
        };

        // Sample the first texture.
        glUniform1i(uniforms[UNIFORM], 0);
        // Set the vertex data.
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, quadVertexData);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, quadTextureData);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);

        // Enable blending.
        glEnable(GL_BLEND);
        // Blend mode: source only.
        glBlendFunc(GL_ONE, GL_ZERO);
        // Draw the foreground.
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Sample the second texture.
        glUniform1i(uniforms[UNIFORM], 1);
        // Set the vertex data.
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, quadVertexData);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, quadTextureData);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);

        // Blend the background over the foreground:
        // GL_CONSTANT_ALPHA uses the alpha value set by glBlendColor.
        glBlendColor(0, 0, 0, tween);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
        // Draw the background.
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glFlush();

        // Release the CV textures and periodically flush the texture cache.
        if (foregroundTexture) CFRelease(foregroundTexture);
        if (backgroundTexture) CFRelease(backgroundTexture);
        if (destTexture) CFRelease(destTexture);
        CVOpenGLESTextureCacheFlush(self.videoTextureCache, 0);
        [EAGLContext setCurrentContext:nil];
    }

    - (CVOpenGLESTextureRef)textureForPixelBuffer:(CVPixelBufferRef)pixelBuffer {
        CVOpenGLESTextureRef texture = NULL;
        CVReturn err;
        if (!_videoTextureCache) {
            NSLog(@"No video texture cache");
            return texture;
        }
        // Periodic texture cache flush every frame.
        CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);

        // CVOpenGLESTextureCacheCreateTextureFromImage creates a GL texture optimally from a CVPixelBufferRef.
        err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           _videoTextureCache,
                                                           pixelBuffer,
                                                           NULL,
                                                           GL_TEXTURE_2D,
                                                           GL_RGBA,
                                                           (int)CVPixelBufferGetWidth(pixelBuffer),
                                                           (int)CVPixelBufferGetHeight(pixelBuffer),
                                                           GL_RGBA,
                                                           GL_UNSIGNED_BYTE,
                                                           0,
                                                           &texture);
        if (!texture || err) {
            NSLog(@"Error creating texture with CVOpenGLESTextureCacheCreateTextureFromImage: %d", err);
        }
        return texture;
    }
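
The renderer above assumes that an EAGLContext (self.currentContext), a texture cache (self.videoTextureCache) and an offscreen framebuffer (self.offscreenBufferHandle) were created elsewhere. That setup is not shown in the original; a minimal sketch reusing those property names could look like this:

    // A minimal one-time setup sketch; property names follow the renderer above and are assumptions.
    - (void)setupOffscreenRenderContext {
        self.currentContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:self.currentContext];

        // Texture cache used by textureForPixelBuffer: to wrap CVPixelBuffers as GL textures.
        CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, self.currentContext, NULL, &_videoTextureCache);
        if (err != kCVReturnSuccess) {
            NSLog(@"Error creating CVOpenGLESTextureCache: %d", err);
        }

        // Offscreen framebuffer; the destination texture is attached to it per frame in renderPixelBuffer:...
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        self.offscreenBufferHandle = fbo;

        [self loadShaders];
        [EAGLContext setCurrentContext:nil];
    }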

Metal transition

shader

    #include <metal_stdlib>
    #import "ShaderTypes.h"
    using namespace metal;

    typedef struct
    {
        float4 clipSpacePosition [[ position ]]; // the [[ position ]] attribute marks this as the clip-space vertex position
        float2 textureCoordinate;
    } RasterizerData;

    vertex RasterizerData vertexShader(uint vertexID [[ vertex_id ]],
                                       constant Vertex *vertexArray [[ buffer(VertexInputIndexVertices) ]]) {
        RasterizerData out;
        out.clipSpacePosition = float4(vertexArray[vertexID].position, 0.0, 1.0);
        out.textureCoordinate = vertexArray[vertexID].textureCoordinate;
        return out;
    }

    fragment float4 samplingShader(RasterizerData input [[stage_in]],
                                   texture2d<float> foregroundTexture [[ texture(FragmentTextureIndexForeground) ]],
                                   texture2d<float> backgroundTexture [[ texture(FragmentTextureIndexbakcground) ]],
                                   constant float &factor [[ buffer(FragmentInputIndexFactor) ]]) {
        constexpr sampler textureSampler (mag_filter::linear,
                                          min_filter::linear);
        float3 foregroundColor = foregroundTexture.sample(textureSampler, input.textureCoordinate).rgb;
        float3 backgroundColor = backgroundTexture.sample(textureSampler, input.textureCoordinate).rgb;
        // Linear cross-fade between the two frames, driven by the tween factor.
        float3 color = foregroundColor * (1.0 - factor) + backgroundColor * factor;
        return float4(color, 1.0);
    }
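
The shader imports ShaderTypes.h, a header shared between the Metal code and the Objective-C renderer that is not listed in the original. A plausible version, matching the names used above (including the original FragmentTextureIndexbakcground spelling), might be:

    // ShaderTypes.h — a sketch inferred from the code above, not the original file.
    #ifndef ShaderTypes_h
    #define ShaderTypes_h

    #include <simd/simd.h>

    typedef enum VertexInputIndex {
        VertexInputIndexVertices = 0,
    } VertexInputIndex;

    typedef enum FragmentTextureIndex {
        FragmentTextureIndexForeground = 0,
        FragmentTextureIndexbakcground = 1,
    } FragmentTextureIndex;

    typedef enum FragmentInputIndex {
        FragmentInputIndexFactor = 0,
    } FragmentInputIndex;

    typedef struct {
        vector_float2 position;          // clip-space x, y
        vector_float2 textureCoordinate; // texture u, v
    } Vertex;

    #endif /* ShaderTypes_h */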

Setting up the pipeline

    - (void)setupPipeline {
        id<MTLLibrary> defaultLibrary = [self.device newDefaultLibrary];
        id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
        id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"samplingShader"];

        MTLRenderPipelineDescriptor *pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
        pipelineDescriptor.vertexFunction = vertexFunction;
        pipelineDescriptor.fragmentFunction = fragmentFunction;
        pipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;

        self.pipelineState = [self.device newRenderPipelineStateWithDescriptor:pipelineDescriptor error:NULL];
        self.commandQueue = [self.device newCommandQueue];
    }

    - (void)setupVertex {
        // Clip-space position (x, y) followed by texture coordinates (u, v); two triangles forming a full-screen quad.
        Vertex quadVertices[] = {
            { {  1.0, -1.0 }, { 1.f, 1.f } },
            { { -1.0, -1.0 }, { 0.f, 1.f } },
            { { -1.0,  1.0 }, { 0.f, 0.f } },
            { {  1.0, -1.0 }, { 1.f, 1.f } },
            { { -1.0,  1.0 }, { 0.f, 0.f } },
            { {  1.0,  1.0 }, { 1.f, 0.f } },
        };
        self.vertices = [self.device newBufferWithBytes:quadVertices
                                                 length:sizeof(quadVertices)
                                                options:MTLResourceStorageModeShared];
        self.numVertices = sizeof(quadVertices) / sizeof(Vertex);
    }

Rendering

    - (void)renderPixelBuffer:(CVPixelBufferRef)destinationPixelBuffer
        foregroundPixelBuffer:(CVPixelBufferRef)foregroundPixelBuffer
        backgroundPixelBuffer:(CVPixelBufferRef)backgroundPixelBuffer
                  tweenFactor:(float)tween {
        id<MTLTexture> destinationTexture = [self textureWithCVPixelBuffer:destinationPixelBuffer];
        id<MTLTexture> foregroundTexture = [self textureWithCVPixelBuffer:foregroundPixelBuffer];
        id<MTLTexture> backgroundTexture = [self textureWithCVPixelBuffer:backgroundPixelBuffer];

        id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

        // Render directly into the destination pixel buffer's texture.
        MTLRenderPassDescriptor *renderDescriptor = [MTLRenderPassDescriptor renderPassDescriptor];
        renderDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);
        renderDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
        renderDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
        renderDescriptor.colorAttachments[0].texture = destinationTexture;

        id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderDescriptor];
        [renderEncoder setRenderPipelineState:self.pipelineState];
        [renderEncoder setVertexBuffer:self.vertices offset:0 atIndex:VertexInputIndexVertices];
        [renderEncoder setFragmentTexture:foregroundTexture atIndex:FragmentTextureIndexForeground];
        [renderEncoder setFragmentTexture:backgroundTexture atIndex:FragmentTextureIndexbakcground];
        [renderEncoder setFragmentBytes:&tween length:sizeof(tween) atIndex:FragmentInputIndexFactor];
        [renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle
                          vertexStart:0
                          vertexCount:self.numVertices]; // draw the full-screen quad
        [renderEncoder endEncoding];                     // finish encoding

        [commandBuffer commit];
        // Wait until the GPU has finished writing into the destination pixel buffer,
        // since the composed frame is handed back to AVFoundation right after this method returns.
        [commandBuffer waitUntilCompleted];
    }

    - (id<MTLTexture>)textureWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer {
        if (pixelBuffer == NULL) {
            return nil;
        }
        id<MTLTexture> texture = nil;
        CVMetalTextureRef metalTextureRef = NULL;
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL,
                                                                    _textureCache,
                                                                    pixelBuffer,
                                                                    NULL,
                                                                    pixelFormat,
                                                                    width,
                                                                    height,
                                                                    0,
                                                                    &metalTextureRef);
        if (status == kCVReturnSuccess) {
            texture = CVMetalTextureGetTexture(metalTextureRef);
            CFRelease(metalTextureRef);
        }
        return texture;
    }
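
The Metal renderer uses _textureCache (a CVMetalTextureCache) and self.device, whose creation is not shown in the original. A minimal initializer sketch using those names:

    // A minimal init sketch; _textureCache and self.device are assumptions taken from the renderer above.
    - (instancetype)init {
        self = [super init];
        if (self) {
            self.device = MTLCreateSystemDefaultDevice();
            // Texture cache used by textureWithCVPixelBuffer: to wrap CVPixelBuffers as Metal textures.
            CVReturn status = CVMetalTextureCacheCreate(kCFAllocatorDefault, NULL, self.device, NULL, &_textureCache);
            if (status != kCVReturnSuccess) {
                NSLog(@"Error creating CVMetalTextureCache: %d", status);
            }
            [self setupPipeline];
            [self setupVertex];
        }
        return self;
    }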

