2017-10-26

Is there a tool or way to get CPU, MEM, and NET metrics of Pods? Is there any tool other than the links below?

  • Monitoring K8s with Prometheus - Pod metrics - deployed it, but could not get useful Pod metrics out of it. The Pod metrics it exposes can be seen here.
  • kubernetes-monitoring-with-prometheus-in-15-minutes - kube-prometheus with the "helm" tool; no Pod metrics. The list of metrics is here.
  • prometheus-kubernetes - but registering the custom service takes forever. I used the yaml file as mentioned in the blog where they talk about the container_cpu metrics, but it did not help.

UPDATE 1

Tried launching the Pod, but I don't see the stats I want:

  • Checked Monitoring K8s with Prometheus here. Installed golang and set GOPATH & GOROOT.

    [email protected]:~$ kubectl create -f prometheus.yaml 
    panic: interface conversion: interface {} is []interface {}, not map[string]interface {} 
    
    goroutine 1 [running]: 
    k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.getObjectKind(0x14dcb20, 0xc420c56480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffff01, 0xc420f6bca0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:111 +0x539 
    k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.(*SchemaValidation).ValidateBytes(0xc4207b01d0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51628, 0x4ed384) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:49 +0x8f 
    k8s.io/kubernetes/pkg/kubectl/validation.ConjunctiveSchema.ValidateBytes(0xc42073cba0, 0x2, 0x2, 0xc420b3ca80, 0x16c, 0x180, 0x4ed029, 0xc420b3ca80) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/validation/schema.go:130 +0x9a 
    k8s.io/kubernetes/pkg/kubectl/validation.(*ConjunctiveSchema).ValidateBytes(0xc42073cbc0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51700, 0x443693) 
        <autogenerated>:3 +0x7d 
    k8s.io/kubernetes/pkg/kubectl/resource.ValidateSchema(0xc420b3ca80, 0x16c, 0x180, 0x2183f80, 0xc42073cbc0, 0x20, 0xc420b51700) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:222 +0x68 
    k8s.io/kubernetes/pkg/kubectl/resource.(*StreamVisitor).Visit(0xc420c2eb00, 0xc420c3d440, 0x218a000, 0xc420c3d4a0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:543 +0x269 
    k8s.io/kubernetes/pkg/kubectl/resource.(*FileVisitor).Visit(0xc420c3d2c0, 0xc420c3d440, 0x0, 0x0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:502 +0x181 
    k8s.io/kubernetes/pkg/kubectl/resource.EagerVisitorList.Visit(0xc420f6bc30, 0x1, 0x1, 0xc420903c50, 0x1, 0xc420903c50) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:211 +0x100 
    k8s.io/kubernetes/pkg/kubectl/resource.(*EagerVisitorList).Visit(0xc420c3d360, 0xc420903c50, 0x7ff854222000, 0x0) 
        <autogenerated>:115 +0x69 
    k8s.io/kubernetes/pkg/kubectl/resource.FlattenListVisitor.Visit(0x2183d00, 0xc420c3d360, 0xc420c2eac0, 0xc420c2eb40, 0xc420c3d401, 0xc420c2eb40) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:417 +0xa3 
    k8s.io/kubernetes/pkg/kubectl/resource.(*FlattenListVisitor).Visit(0xc420c3d380, 0xc420c2eb40, 0x18, 0x18) 
        <autogenerated>:130 +0x69 
    k8s.io/kubernetes/pkg/kubectl/resource.DecoratedVisitor.Visit(0x2183d80, 0xc420c3d380, 0xc420c3d3c0, 0x3, 0x4, 0xc420c3d400, 0xc420386901, 0xc420c3d400) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:325 +0xd8 
    k8s.io/kubernetes/pkg/kubectl/resource.(*DecoratedVisitor).Visit(0xc420903c20, 0xc420c3d400, 0x151b920, 0xc420f6bc60) 
        <autogenerated>:153 +0x73 
    k8s.io/kubernetes/pkg/kubectl/resource.ContinueOnErrorVisitor.Visit(0x2183c80, 0xc420903c20, 0xc420c370e0, 0x7ff854222000, 0x0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:352 +0xf1 
    k8s.io/kubernetes/pkg/kubectl/resource.(*ContinueOnErrorVisitor).Visit(0xc420f6bc50, 0xc420c370e0, 0x40f3f8, 0x60) 
        <autogenerated>:144 +0x60 
    k8s.io/kubernetes/pkg/kubectl/resource.(*Result).Visit(0xc4202c23f0, 0xc420c370e0, 0x6, 0x0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/result.go:95 +0x62 
    k8s.io/kubernetes/pkg/kubectl/cmd.RunCreate(0x21acd60, 0xc420320e40, 0xc42029d440, 0x2182e40, 0xc42000c018, 0x2182e40, 0xc42000c020, 0xc420173000, 0x176f608, 0x4) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:187 +0x4a8 
    k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdCreate.func1(0xc42029d440, 0xc4202aa580, 0x0, 0x2) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:73 +0x17f 
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc42029d440, 0xc4202aa080, 0x2, 0x2, 0xc42029d440, 0xc4202aa080) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x22b 
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420235b00, 0x8000102, 0x0, 0xffffffffffffffff) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x339 
    k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420235b00, 0xc420320e40, 0x2182e00) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b 
    k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0) 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5 
    main.main() 
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22 
    

    prometheus.yaml

    # This scrape config scrapes kubelets
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node

      # couldn't get prometheus to validate the kubelet cert for scraping, so don't bother for now
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      relabel_configs:
      - target_label: __scheme__
        replacement: https
      - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
        target_label: instance
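
The panic above is most likely because this prometheus.yaml is a Prometheus scrape-config fragment (a YAML list at the top level) rather than a Kubernetes manifest with apiVersion/kind, which is what kubectl create -f expects to decode. As a rough sketch, and not necessarily the exact setup from the linked guide, such a config is usually embedded in a full prometheus.yml inside a ConfigMap that the Prometheus Pod mounts; the name and namespace below are hypothetical:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-config        # hypothetical name
      namespace: monitoring          # hypothetical namespace
    data:
      prometheus.yml: |
        scrape_configs:
        # This scrape config scrapes kubelets
        - job_name: 'kubernetes-nodes'
          kubernetes_sd_configs:
          - role: node
          tls_config:
            insecure_skip_verify: true
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          relabel_configs:
          - target_label: __scheme__
            replacement: https
          - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
            target_label: instance

A ConfigMap shaped like this can be created with kubectl create -f and mounted at Prometheus's configuration path; the scrape config on its own is only meaningful to Prometheus, not to kubectl.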
    
    What is the content of that yaml? – Robert

    @Robert Please check, I have updated the question. – Veerendra

    Answers


    You are looking for cadvisor.

    Hi, thanks for the answer. But I need Prometheus! Does it work with Prometheus? – Veerendra

    Yes, cadvisor exports Prometheus metrics for Pods - this blog post should walk you through setting it up: https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/ –
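
    For reference, the per-Pod aggregation that blog post walks through is built from the cAdvisor series; a hedged sketch of two such aggregations, written as Prometheus 2.x recording rules (the group and rule names are illustrative, and the label is pod_name on older kubelet/cAdvisor versions, pod on newer ones):

        groups:
        - name: pod-resources        # illustrative group name
          rules:
          # CPU used by each Pod, in cores, averaged over 5 minutes
          - record: pod_name:container_cpu_usage_seconds_total:rate5m
            expr: 'sum by (pod_name) (rate(container_cpu_usage_seconds_total{pod_name!=""}[5m]))'
          # current memory usage per Pod, in bytes
          - record: pod_name:container_memory_usage_bytes:sum
            expr: 'sum by (pod_name) (container_memory_usage_bytes{pod_name!=""})'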

    Could you check 'UPDATE 1' in the question? I am facing an issue. – Veerendra


    To give a complete answer on top of @brian-brazil's answer: cAdvisor supports Prometheus out of the box. Once it is running, curl http://localhost:8080/metrics to see the metrics it exposes.

  • As the Readme says:

    1. Just run the cAdvisor container, then configure your Prometheus server to pull metrics from its URL (see the sketch below).
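
    Following the step quoted above: once a cAdvisor container is running and publishing port 8080 (the project's README shows a docker run ... --publish=8080:8080 ... google/cadvisor invocation for this), pointing Prometheus at it is a plain scrape job. A minimal sketch, assuming cAdvisor is reachable at localhost:8080:

        scrape_configs:
        - job_name: 'cadvisor'
          static_configs:
          - targets: ['localhost:8080']   # hypothetical address of the cAdvisor container

    Prometheus then scrapes the same /metrics endpoint that the curl above reads.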