
How can I add a camera preview to three custom views in iOS Swift? I need to build an app with video-processing features.

My requirement is to create three views, each backed by a camera preview layer. The first view should show the original captured video, the second should show a flipped copy of the original, and the last should show the original with its colors inverted.

I started developing against this requirement. First I created the three views and the required camera-capture properties:

@IBOutlet weak var captureView: UIView!
@IBOutlet weak var flipView: UIView!
@IBOutlet weak var InvertView: UIView!

// Camera capture required properties
var videoDataOutput: AVCaptureVideoDataOutput!
var videoDataOutputQueue: DispatchQueue!
var previewLayer: AVCaptureVideoPreviewLayer!
var captureDevice: AVCaptureDevice!
let session = AVCaptureSession()
var replicationLayer: CAReplicatorLayer!


Then I started the camera session and conformed to AVCaptureVideoDataOutputSampleBufferDelegate:

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func setupAVCapture() {
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else {
            return
        }
        captureDevice = device
        beginSession()
    }

    func beginSession() {
        var deviceInput: AVCaptureDeviceInput?
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            print("error: \(error.localizedDescription)")
        }
        if let deviceInput = deviceInput, session.canAddInput(deviceInput) {
            session.addInput(deviceInput)
        }

        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true

        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.frame = captureView.bounds
        previewLayer.videoGravity = AVLayerVideoGravityResizeAspect

        replicationLayer = CAReplicatorLayer()
        replicationLayer.frame = captureView.bounds
        replicationLayer.instanceCount = 1
        // Each replicated instance is shifted down by the view's height.
        replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, captureView.bounds.size.height, 0.0)

        replicationLayer.addSublayer(previewLayer)
        // A CALayer can have only one superlayer, so each addSublayer call
        // below re-parents the replicator layer; only the last one sticks.
        captureView.layer.addSublayer(replicationLayer)
        flipView.layer.addSublayer(replicationLayer)
        InvertView.layer.addSublayer(replicationLayer)

        session.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // process each captured frame here
    }

    // clean up AVCapture
    func stopCamera() {
        session.stopRunning()
    }
}

Here I used CAReplicatorLayer to show the video capture in the three views. When I set self.replicationLayer.instanceCount to 1, I got this result:

[screenshot: result with instanceCount = 1]

And when I set self.replicationLayer.instanceCount to 3, I got this result:

[screenshot: result with instanceCount = 3]

Please guide me on how to display the captured video in three different views, and share any ideas for flipping and color-inverting the original captured video. Thanks in advance.
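
(Aside: a CALayer can have only one superlayer, so the three addSublayer calls above simply re-parent the replicator layer each time, and only the last call takes effect. If the three views only needed the raw feed, one AVCaptureVideoPreviewLayer per view, all sharing the same session, would be enough; a minimal sketch in Swift 3, assuming the session and outlets above. A preview layer cannot apply Core Image filters such as color inversion, though, which is why the answer below renders the frames itself.)

// Minimal sketch (not the accepted answer): one preview layer per view,
// all driven by the same AVCaptureSession.
func addPreview(to view: UIView) {
    let layer = AVCaptureVideoPreviewLayer(session: session)
    layer.frame = view.bounds
    layer.videoGravity = AVLayerVideoGravityResizeAspect
    view.layer.addSublayer(layer)
}

// Usage:
// addPreview(to: captureView)
// addPreview(to: flipView)
// addPreview(to: InvertView)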

Answer


Finally, I found the answer with the help of the JohnnySlagle/Multiple-Camera-Feeds code.

I implemented the AVCaptureVideoDataOutputSampleBufferDelegate method like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { 
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer); 

    // update the video dimensions information 
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc); 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil]; 

    CGRect sourceExtent = sourceImage.extent; 

    CGFloat sourceAspect = sourceExtent.size.width/sourceExtent.size.height; 


    for (VideoFeedView *feedView in self.feedViews) { 
     CGFloat previewAspect = feedView.viewBounds.size.width/feedView.viewBounds.size.height; 
     // we want to maintain the aspect ratio of the screen size, so we clip the video image 
     CGRect drawRect = sourceExtent; 
     if (sourceAspect > previewAspect) { 
      // use full height of the video image, and center crop the width 
      drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect)/2.0; 
      drawRect.size.width = drawRect.size.height * previewAspect; 
     } else { 
      // use full width of the video image, and center crop the height 
      drawRect.origin.y += (drawRect.size.height - drawRect.size.width/previewAspect)/2.0; 
      drawRect.size.height = drawRect.size.width/previewAspect; 
     } 
     [feedView bindDrawable]; 

     if (_eaglContext != [EAGLContext currentContext]) { 
      [EAGLContext setCurrentContext:_eaglContext]; 
     } 

     // clear eagl view to grey 
     glClearColor(0.5, 0.5, 0.5, 1.0); 
     glClear(GL_COLOR_BUFFER_BIT); 

     // set the blend mode to "source over" so that CI will use that 
     glEnable(GL_BLEND); 
     glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); 

     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); 
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); 
     // This is necessary for non-power-of-two textures 
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); 
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); 

     // view 1: original, view 2: vertical flip, view 3: color invert 
     if (feedView.tag == 1) { 
      if (sourceImage) { 
       [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect]; 
      } 
     } else if (feedView.tag == 2) { 
      // mirror in y, then translate back into the image extent 
      sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)]; 
      sourceImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)]; 
      if (sourceImage) { 
       [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect]; 
      } 
     } else { 
      CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"]; 
      [effectFilter setValue:sourceImage forKey:kCIInputImageKey]; 
      CIImage *invertImage = [effectFilter outputImage]; 
      if (invertImage) { 
       [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect]; 
      } 
     } 
     [feedView display]; 
    } 
} 
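
Since the original question is in Swift, here is a rough Swift 3 sketch of the same per-view drawing (an illustration only: VideoFeedView, viewBounds, and ciContext are assumed to be bridged from the sample project, and drawRect is the aspect-fill rectangle computed above):

// Sketch (hypothetical helper): draw one frame into one feed view,
// mirroring the tag-based branches of the Objective-C loop above.
func render(_ sourceImage: CIImage, into feedView: VideoFeedView, drawRect: CGRect) {
    switch feedView.tag {
    case 1:
        // original frame
        ciContext.draw(sourceImage, in: feedView.viewBounds, from: drawRect)
    case 2:
        // vertical flip: mirror in y, then translate back into the image extent
        var flipped = sourceImage.applying(CGAffineTransform(scaleX: 1, y: -1))
        flipped = flipped.applying(CGAffineTransform(translationX: 0, y: sourceImage.extent.size.height))
        ciContext.draw(flipped, in: feedView.viewBounds, from: drawRect)
    default:
        // color inversion with the built-in CIColorInvert filter
        if let filter = CIFilter(name: "CIColorInvert") {
            filter.setValue(sourceImage, forKey: kCIInputImageKey)
            if let inverted = filter.outputImage {
                ciContext.draw(inverted, in: feedView.viewBounds, from: drawRect)
            }
        }
    }
}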

That applies the filters. Then I added outlets for the three container views:

@property (weak, nonatomic) IBOutlet UIView *video1; 
@property (weak, nonatomic) IBOutlet UIView *video2; 
@property (weak, nonatomic) IBOutlet UIView *video3; 

and then slightly changed setupFeedViews like this:

- (void)setupFeedViews { 
    NSUInteger numberOfFeedViews = 3; 

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) { 
     VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)]; 
     feedView.tag = i+1; 
     switch (i) { 
      case 0: 
       [self.video1 addSubview:feedView]; 
       break; 
      case 1: 
       [self.video2 addSubview:feedView]; 
       break; 
      case 2: 
       [self.video3 addSubview:feedView]; 
       break; 
      default: 
       break; 
     } 
     [self.feedViews addObject:feedView]; 
    } 
} 

That creates the three feed views. It successfully meets my requirement.
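
For completeness, a rough Swift 3 equivalent of that wiring (a sketch: makeFeedView(frame:) is a hypothetical stand-in for the sample's setupFeedViewWithFrame:, and feedViews is assumed to be a Swift array):

// Sketch: tag each feed view 1...3 and place it inside its container view.
func setupFeedViews() {
    let containers: [UIView] = [video1, video2, video3]
    for (index, container) in containers.enumerated() {
        let feedView = makeFeedView(frame: container.bounds) // hypothetical helper
        feedView.tag = index + 1
        container.addSubview(feedView)
        feedViews.append(feedView)
    }
}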