Kubernetes does not spread pods onto an available node

I have a GKE cluster with a single node pool of size 2. When I add a third node, none of the pods are distributed onto that third node.

Here are the pods running on the original node pool:
$ kubectl get po -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default attachment-proxy-659bdc84d-ckdq9 1/1 Running 0 10m 10.0.38.3 gke-cluster0-pool-d59e9506-vp6t
default elasticsearch-0 1/1 Running 0 4m 10.0.39.11 gke-cluster0-pool-d59e9506-b7nb
default front-webapp-646bc49675-86jj6 1/1 Running 0 10m 10.0.38.10 gke-cluster0-pool-d59e9506-vp6t
default kafka-0 1/1 Running 3 4m 10.0.39.9 gke-cluster0-pool-d59e9506-b7nb
default mailgun-http-98f8d997c-hhfdc 1/1 Running 0 4m 10.0.38.17 gke-cluster0-pool-d59e9506-vp6t
default stamps-5b6fc489bc-6xtqz 2/2 Running 3 10m 10.0.38.13 gke-cluster0-pool-d59e9506-vp6t
default user-elasticsearch-6b6dd7fc8-b55xx 1/1 Running 0 10m 10.0.38.4 gke-cluster0-pool-d59e9506-vp6t
default user-http-analytics-6bdd49bd98-p5pd5 1/1 Running 0 4m 10.0.39.8 gke-cluster0-pool-d59e9506-b7nb
default user-http-graphql-67884c678c-7dcdq 1/1 Running 0 4m 10.0.39.7 gke-cluster0-pool-d59e9506-b7nb
default user-service-5cbb8cfb4f-t6zhv 1/1 Running 0 4m 10.0.38.15 gke-cluster0-pool-d59e9506-vp6t
default user-streams-0 1/1 Running 0 4m 10.0.39.10 gke-cluster0-pool-d59e9506-b7nb
default user-streams-elasticsearch-c64b64d6f-2nrtl 1/1 Running 3 10m 10.0.38.6 gke-cluster0-pool-d59e9506-vp6t
default zookeeper-0 1/1 Running 0 4m 10.0.39.12 gke-cluster0-pool-d59e9506-b7nb
kube-lego kube-lego-7799f6b457-skkrc 1/1 Running 0 10m 10.0.38.5 gke-cluster0-pool-d59e9506-vp6t
kube-system event-exporter-v0.1.7-7cb7c5d4bf-vr52v 2/2 Running 0 10m 10.0.38.7 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-648rh 2/2 Running 0 14m 10.0.38.2 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-fqjz6 2/2 Running 0 9m 10.0.39.2 gke-cluster0-pool-d59e9506-b7nb
kube-system heapster-v1.4.3-6fc45b6cc4-8cl72 3/3 Running 0 4m 10.0.39.6 gke-cluster0-pool-d59e9506-b7nb
kube-system k8s-snapshots-5699c68696-h8r75 1/1 Running 0 4m 10.0.38.16 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-778977457c-b48w5 3/3 Running 0 4m 10.0.39.5 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-dns-778977457c-sw672 3/3 Running 0 10m 10.0.38.9 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-autoscaler-7db47cb9b7-tjt4l 1/1 Running 0 10m 10.0.38.11 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-proxy-gke-cluster0-pool-d59e9506-b7nb 1/1 Running 0 9m 10.128.0.4 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-proxy-gke-cluster0-pool-d59e9506-vp6t 1/1 Running 0 14m 10.128.0.2 gke-cluster0-pool-d59e9506-vp6t
kube-system kubernetes-dashboard-76c679977c-mwqlv 1/1 Running 0 10m 10.0.38.8 gke-cluster0-pool-d59e9506-vp6t
kube-system l7-default-backend-6497bcdb4d-wkx28 1/1 Running 0 10m 10.0.38.12 gke-cluster0-pool-d59e9506-vp6t
kube-system nginx-ingress-controller-78d546664f-gf6mx 1/1 Running 0 4m 10.0.39.3 gke-cluster0-pool-d59e9506-b7nb
kube-system tiller-deploy-5458cb4cc-26x26 1/1 Running 0 4m 10.0.39.4 gke-cluster0-pool-d59e9506-b7nb
And here is the original two-node node pool:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
gke-cluster0-pool-d59e9506-b7nb Ready <none> 13m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t Ready <none> 18m v1.8.3-gke.0
Then I add another node to the node pool:

$ gcloud container clusters resize cluster0 --node-pool pool --size 3

After the resize, the pods look like this:
$ kubectl get po -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default attachment-proxy-659bdc84d-ckdq9 1/1 Running 0 17m 10.0.38.3 gke-cluster0-pool-d59e9506-vp6t
default elasticsearch-0 1/1 Running 0 10m 10.0.39.11 gke-cluster0-pool-d59e9506-b7nb
default front-webapp-646bc49675-86jj6 1/1 Running 0 17m 10.0.38.10 gke-cluster0-pool-d59e9506-vp6t
default kafka-0 1/1 Running 3 11m 10.0.39.9 gke-cluster0-pool-d59e9506-b7nb
default mailgun-http-98f8d997c-hhfdc 1/1 Running 0 10m 10.0.38.17 gke-cluster0-pool-d59e9506-vp6t
default stamps-5b6fc489bc-6xtqz 2/2 Running 3 16m 10.0.38.13 gke-cluster0-pool-d59e9506-vp6t
default user-elasticsearch-6b6dd7fc8-b55xx 1/1 Running 0 17m 10.0.38.4 gke-cluster0-pool-d59e9506-vp6t
default user-http-analytics-6bdd49bd98-p5pd5 1/1 Running 0 10m 10.0.39.8 gke-cluster0-pool-d59e9506-b7nb
default user-http-graphql-67884c678c-7dcdq 1/1 Running 0 10m 10.0.39.7 gke-cluster0-pool-d59e9506-b7nb
default user-service-5cbb8cfb4f-t6zhv 1/1 Running 0 10m 10.0.38.15 gke-cluster0-pool-d59e9506-vp6t
default user-streams-0 1/1 Running 0 10m 10.0.39.10 gke-cluster0-pool-d59e9506-b7nb
default user-streams-elasticsearch-c64b64d6f-2nrtl 1/1 Running 3 17m 10.0.38.6 gke-cluster0-pool-d59e9506-vp6t
default zookeeper-0 1/1 Running 0 10m 10.0.39.12 gke-cluster0-pool-d59e9506-b7nb
kube-lego kube-lego-7799f6b457-skkrc 1/1 Running 0 17m 10.0.38.5 gke-cluster0-pool-d59e9506-vp6t
kube-system event-exporter-v0.1.7-7cb7c5d4bf-vr52v 2/2 Running 0 17m 10.0.38.7 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-648rh 2/2 Running 0 20m 10.0.38.2 gke-cluster0-pool-d59e9506-vp6t
kube-system fluentd-gcp-v2.0.9-8tb4n 2/2 Running 0 4m 10.0.40.2 gke-cluster0-pool-d59e9506-1rzm
kube-system fluentd-gcp-v2.0.9-fqjz6 2/2 Running 0 15m 10.0.39.2 gke-cluster0-pool-d59e9506-b7nb
kube-system heapster-v1.4.3-6fc45b6cc4-8cl72 3/3 Running 0 11m 10.0.39.6 gke-cluster0-pool-d59e9506-b7nb
kube-system k8s-snapshots-5699c68696-h8r75 1/1 Running 0 10m 10.0.38.16 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-778977457c-b48w5 3/3 Running 0 11m 10.0.39.5 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-dns-778977457c-sw672 3/3 Running 0 17m 10.0.38.9 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-dns-autoscaler-7db47cb9b7-tjt4l 1/1 Running 0 17m 10.0.38.11 gke-cluster0-pool-d59e9506-vp6t
kube-system kube-proxy-gke-cluster0-pool-d59e9506-1rzm 1/1 Running 0 4m 10.128.0.3 gke-cluster0-pool-d59e9506-1rzm
kube-system kube-proxy-gke-cluster0-pool-d59e9506-b7nb 1/1 Running 0 15m 10.128.0.4 gke-cluster0-pool-d59e9506-b7nb
kube-system kube-proxy-gke-cluster0-pool-d59e9506-vp6t 1/1 Running 0 20m 10.128.0.2 gke-cluster0-pool-d59e9506-vp6t
kube-system kubernetes-dashboard-76c679977c-mwqlv 1/1 Running 0 17m 10.0.38.8 gke-cluster0-pool-d59e9506-vp6t
kube-system l7-default-backend-6497bcdb4d-wkx28 1/1 Running 0 17m 10.0.38.12 gke-cluster0-pool-d59e9506-vp6t
kube-system nginx-ingress-controller-78d546664f-gf6mx 1/1 Running 0 11m 10.0.39.3 gke-cluster0-pool-d59e9506-b7nb
kube-system tiller-deploy-5458cb4cc-26x26 1/1 Running 0 11m 10.0.39.4 gke-cluster0-pool-d59e9506-b7nb
The third node has been added and is Ready:

$ kubectl get node
NAME STATUS ROLES AGE VERSION
gke-cluster0-pool-d59e9506-1rzm Ready <none> 3m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-b7nb Ready <none> 14m v1.8.3-gke.0
gke-cluster0-pool-d59e9506-vp6t Ready <none> 19m v1.8.3-gke.0
However, nothing except the pods belonging to DaemonSets gets scheduled onto the newly added node. Why do the pods not spread onto the added node? I would have expected pods to be distributed onto the third node. How do I get the workload to spread onto this third node?
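I assume I could force a spread by hand along the lines of the sketch below (the node and pod names are copied from the listings above; this is only a guessed workaround, not something from my manifests), but I would rather understand why the scheduler does not do this on its own:

# Rough sketch: cordon the most crowded node, delete one pod so its Deployment
# recreates it, and the replacement has to land on one of the other nodes.
$ kubectl cordon gke-cluster0-pool-d59e9506-vp6t
$ kubectl delete pod mailgun-http-98f8d997c-hhfdc
# Re-enable scheduling on the node afterwards
$ kubectl uncordon gke-cluster0-pool-d59e9506-vp6t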
Technically, in terms of the resource requests in my manifests, my entire application fits onto a single node. But when the second node was added, the application was spread across it. So I assumed that when I add a third node, pods would be scheduled onto that node as well; that is not what I am seeing. Only DaemonSets are scheduled onto the third node. I have tried growing and shrinking the node pool, to no avail.
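To check how much room the original nodes actually have left, I can look at the allocated resources on each of them, roughly like this (the node name is one of mine from the listings above):

# Shows requested vs. allocatable CPU/memory on a node; if the node still has
# plenty of headroom, the scheduler has no reason to place new pods elsewhere.
$ kubectl describe node gke-cluster0-pool-d59e9506-vp6t | grep -A 5 "Allocated resources"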
Update
The two preemptible nodes were restarted, and now all of the pods are on one node. What is going on? Is setting resource requests the only way to get the pods to spread out?
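If resource requests really are the only lever, I assume it would come down to raising the requests on each Deployment so that everything can no longer be packed onto one node, e.g. something like the following (the deployment name is inferred from the pod name above, and the values are placeholders, not the requests from my actual manifests):

# Sketch only: bump the requests on one deployment so the scheduler has to
# spread the replicas; repeat for the other deployments as needed.
$ kubectl set resources deployment mailgun-http --requests=cpu=250m,memory=256Mi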