2017-11-13

Tracking face objects + Vision API, iOS 11

I have this simple function to detect faces in an image. I'm using it to convert each VNFaceObservation into a VNDetectedObjectObservation object:

func convertFaceObservationsToDetectedObjects(with observations: [VNFaceObservation]) {
    observations.forEach { observation in
        // Vision bounding boxes are normalized with a bottom-left origin;
        // scale into the image view and flip to UIKit's top-left origin.
        let boundingBox = observation.boundingBox
        let size = CGSize(width: boundingBox.width * self.IMG_VIEW.bounds.width,
                          height: boundingBox.height * self.IMG_VIEW.bounds.height)
        let origin = CGPoint(x: boundingBox.minX * self.IMG_VIEW.bounds.width,
                             y: (1 - boundingBox.minY) * self.IMG_VIEW.bounds.height - size.height)
        let originalRect = CGRect(origin: origin, size: size)

        // Convert the layer-space rect back into normalized metadata-output
        // coordinates, then flip y again for Vision.
        var convertedRect = cameraLayer.metadataOutputRectConverted(fromLayerRect: originalRect)
        convertedRect.origin.y = 1 - convertedRect.origin.y
        let trackingObservation = VNDetectedObjectObservation(boundingBox: convertedRect)

        // `tag` is defined elsewhere in the class.
        self.anotherListOfObservations.append((tag, trackingObservation))
    }
}
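The coordinate handling above is the usual source of bugs here: Vision's `boundingBox` is normalized with the origin at the bottom-left, while UIKit views use a top-left origin. The flip can be checked in isolation as pure math, without any Vision or camera code (function name is mine, not from the code above):

```swift
import Foundation  // CGRect, CGSize, CGPoint

/// Convert a Vision-style normalized rect (origin bottom-left) into a
/// view-space rect (origin top-left) for a view of the given size.
func viewRect(forNormalizedRect r: CGRect, in viewSize: CGSize) -> CGRect {
    let size = CGSize(width: r.width * viewSize.width,
                      height: r.height * viewSize.height)
    let origin = CGPoint(x: r.minX * viewSize.width,
                         y: (1 - r.minY) * viewSize.height - size.height)
    return CGRect(origin: origin, size: size)
}

// Example: a face filling the top-left quarter of the normalized space
// (bottom-left origin: x 0, y 0.5, width 0.5, height 0.5).
let r = viewRect(forNormalizedRect: CGRect(x: 0, y: 0.5, width: 0.5, height: 0.5),
                 in: CGSize(width: 400, height: 400))
// r == CGRect(x: 0, y: 0, width: 200, height: 200)
```

If this round-trips correctly with `metadataOutputRectConverted(fromLayerRect:)`, the rect fed to `VNDetectedObjectObservation` should line up with the original detection.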

Then I'm using this delegate method to try to track the given VNDetectedObjectObservation objects:

extension MyViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        // Build one tracking request per stored observation.
        var listOfRequests: [VNTrackObjectRequest] = []
        for (_, observation) in self.anotherListOfObservations {
            let request = VNTrackObjectRequest(detectedObjectObservation: observation) { [unowned self] request, error in
                self.handle(request, error: error)
            }
            request.trackingLevel = .accurate
            listOfRequests.append(request)
        }

        do {
            // `handler` is a VNSequenceRequestHandler stored on the class.
            try handler.perform(listOfRequests, on: pixelBuffer)
        } catch {
            print(error)
        }
    }
}
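One thing worth checking in a setup like this: the `handler` must be a single `VNSequenceRequestHandler` that lives across `captureOutput` calls (a sequence handler carries tracking state between frames), and each new frame's `VNTrackObjectRequest` should be seeded with the observation the tracker returned for the previous frame, not the original detection. A hedged sketch of what the `handle(_:error:)` completion might do (this helper is not shown in the question, so everything below is an assumption):

```swift
import Vision

extension MyViewController {
    // Hypothetical sketch of the handle(_:error:) completion used above.
    // The tracker's result preserves the uuid of the input observation,
    // which can be used to update the matching stored entry so the next
    // frame starts from the latest tracked position.
    func handle(_ request: VNRequest, error: Error?) {
        guard error == nil,
              let newObservation = request.results?.first as? VNDetectedObjectObservation else {
            return
        }
        if let idx = self.anotherListOfObservations.firstIndex(where: { $0.1.uuid == newObservation.uuid }) {
            self.anotherListOfObservations[idx].1 = newObservation
        }
    }
}
```

Without this feedback loop, every frame re-tracks from the initial bounding box, which typically makes the tracker lose the face as soon as it moves.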

My question is: is this actually possible, or am I doing it completely wrong?

Answer