
I am running a Kubernetes 1.2.0 cluster on four Raspberry Pi 2s with Hypriot OS (2015-11-15 stable build). The setup is built for demo purposes. The Pis are networked through a switch that hangs off a consumer-grade router (IP 192.168.1.1), which acts as wireless bridge, DHCP server, and DNS server (it runs DD-WRT configured as local DNS, so the Raspis can be reached by hostname). The install scripts and setup yamls can be found on Github. The Kubernetes cluster hammers the router with an endless stream of gcr.io DNS lookups. What is going wrong, and how can I stop it?

The problem is that the Raspis generate such an enormous volume of DNS lookups on UDP:53 that they overwhelm the router, which reports 2600+ active IP connections: about 1600 from the master node and roughly 300 from each worker node. Nothing is running on the cluster: no deployments, pods, services, or anything else, and internal DNS (SkyDNS) is not installed. I have no clue why all these lookups are needed, but they are fired off in rapid succession. With only four nodes the router can just about keep up, but for the demo I have planned on Friday I need to connect at least four more, which would overwhelm both the router and the cluster.
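For scale, a rough way to gauge the query rate from a single node is to count the outgoing DNS queries over a ten-second window with the same tcpdump used below (eth0 and the exact filter are just what worked for me):

# Count every outgoing DNS query from this node during a 10-second window
$ timeout 10 tcpdump -l -n -i eth0 'udp dst port 53' 2>/dev/null | wc -l
# Same, but counting only the gcr.io lookups
$ timeout 10 tcpdump -l -n -i eth0 'udp dst port 53' 2>/dev/null | grep -c 'gcr\.io'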

To narrow down the problem, I tried to find out which domain my cluster is so desperately trying to resolve. As you can see:

HypriotOS: [email protected] in ~ 
$ tcpdump -vvv -s 0 -l -n port 53 
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 
10:29:39.724300 IP (tos 0x0, ttl 64, id 4189, offset 0, flags [DF], proto UDP (17), length 52) 
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x3e07!] 32499+ A? gcr.io. (24) 
10:29:39.724434 IP (tos 0x0, ttl 64, id 4190, offset 0, flags [DF], proto UDP (17), length 52) 
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x076d!] 46450+ AAAA? gcr.io. (24) 
10:29:39.725011 IP (tos 0x0, ttl 64, id 23734, offset 0, flags [DF], proto UDP (17), length 68) 
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 32499 q: A? gcr.io. 1/0/0 gcr.io. [10s] A 173.194.65.82 (40) 
10:29:39.725226 IP (tos 0x0, ttl 64, id 23735, offset 0, flags [DF], proto UDP (17), length 80) 
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 46450 q: AAAA? gcr.io. 1/0/0 gcr.io. [10s] AAAA 2a00:1450:4013:c00::52 (52) 
10:29:39.730163 IP (tos 0x0, ttl 64, id 4191, offset 0, flags [DF], proto UDP (17), length 52) 
    192.168.1.94.46180 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0xef5b!] 65218+ A? gcr.io. (24) 

the cluster looks up gcr.io, which resolves just fine to 173.194.65.82, and then immediately looks it up again (note the timestamps).
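One contributing factor, as far as I can tell, is that the nodes have no local DNS cache at all, so every single lookup goes straight out to the router. That is easy to check on a node (this is just a sanity check, not a fix):

# Where do the nodes send their DNS queries? (presumably straight to the router)
$ cat /etc/resolv.conf
# Is any local caching resolver running? (probably not on a stock image)
$ pgrep -l 'dnsmasq|nscd|unbound' || echo "no local DNS cache running"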

Does anyone have a clue what is going on and, more importantly, how I can make it stop before it shreds the Raspis and I end up starting a dog-walking service in New Zealand? Some logs are included below, and I will respond quickly to requests for more information. I hope someone can help me. Thanks in advance.

Julian

HypriotOS: [email protected] in ~ 
$ docker logs k8s-master 
I0608 09:19:08.523757  769 server.go:137] Running kubelet in containerized mode (experimental) 
W0608 09:19:39.449996  769 server.go:445] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. 
W0608 09:19:39.450301  769 server.go:406] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. 
I0608 09:19:39.451561  769 plugins.go:71] No cloud provider specified. 
I0608 09:19:39.451704  769 server.go:312] Successfully initialized cloud provider: "" from the config file: "" 
I0608 09:19:39.452446  769 manager.go:132] cAdvisor running in container: "/docker/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62" 
I0608 09:19:41.022249  769 fs.go:109] Filesystem partitions: map[/dev/root:{mountpoint:/rootfs major:179 minor:2 fsType: blockSize:0}] 
E0608 09:19:41.038167  769 machine.go:176] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory 
I0608 09:19:43.098937  769 manager.go:169] Machine: {NumCores:4 CpuFrequency:900000 MemoryCapacity:970452992 MachineID:822a063820bf4276a8c5b4da928a438c SystemUUID:07c0f9c7ac2242e2954579d53e00b836 BootID:3148f74f-555c-4df9-ab12-79e04a88e086 Filesystems:[{Device:/dev/root Capacity:14946500608 Type:vfs Inodes:3796576}] DiskMap:map[179:0:{Name:mmcblk0 Major:179 Minor:0 Size:16021192704 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:b8:27:eb:8b:3c:c6 Speed:100 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[]} {Id:1 Threads:[1] Caches:[]} {Id:2 Threads:[2] Caches:[]} {Id:3 Threads:[3] Caches:[]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} 
I0608 09:19:43.109629  769 manager.go:175] Version: {KernelVersion:4.1.12-hypriotos-v7+ ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:1.9.0 CadvisorVersion: CadvisorRevision:} 
I0608 09:19:43.118227  769 server.go:319] Using root directory: /var/lib/kubelet 
I0608 09:19:43.119828  769 server.go:673] Adding manifest file: /etc/kubernetes/manifests-multi 
I0608 09:19:43.120179  769 file.go:47] Watching path "/etc/kubernetes/manifests-multi" 
I0608 09:19:43.120347  769 server.go:683] Watching apiserver 
W0608 09:19:43.164980  769 kubelet.go:508] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth" 
I0608 09:19:43.165217  769 kubelet.go:276] Hairpin mode set to "hairpin-veth" 
I0608 09:19:44.445117  769 manager.go:244] Setting dockerRoot to /var/lib/docker 
I0608 09:19:44.452306  769 plugins.go:56] Registering credential provider: .dockercfg 
I0608 09:19:44.458106  769 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs" 
I0608 09:19:44.458441  769 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir" 
I0608 09:19:44.458994  769 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd" 
I0608 09:19:44.459312  769 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo" 
I0608 09:19:44.459766  769 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path" 
I0608 09:19:44.460058  769 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs" 
I0608 09:19:44.460314  769 plugins.go:291] Loaded volume plugin "kubernetes.io/secret" 
I0608 09:19:44.460872  769 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi" 
I0608 09:19:44.461310  769 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs" 
I0608 09:19:44.461611  769 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim" 
I0608 09:19:44.462352  769 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd" 
I0608 09:19:44.462801  769 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder" 
I0608 09:19:44.463297  769 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs" 
I0608 09:19:44.463928  769 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api" 
I0608 09:19:44.464562  769 plugins.go:291] Loaded volume plugin "kubernetes.io/fc" 
I0608 09:19:44.465098  769 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker" 
I0608 09:19:44.465609  769 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file" 
I0608 09:19:44.466192  769 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap" 
I0608 09:19:44.481512  769 server.go:632] Started kubelet 
E0608 09:19:44.483696  769 kubelet.go:956] Image garbage collection failed: unable to find data for container/
I0608 09:19:44.483849  769 server.go:109] Starting to listen on 0.0.0.0:10250 
I0608 09:19:44.484162  769 server.go:126] Starting to listen read-only on 0.0.0.0:10255 
E0608 09:19:44.513219  769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping) 
I0608 09:19:44.563938  769 container_manager_linux.go:207] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0 
I0608 09:19:44.564896  769 container_manager_linux.go:207] Updating kernel flag: kernel/panic, expected value: 10, actual value: 0 
I0608 09:19:44.565542  769 container_manager_linux.go:207] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0 
I0608 09:19:44.568361  769 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer 
I0608 09:19:44.568627  769 manager.go:123] Starting to sync pod status with apiserver 
I0608 09:19:44.568820  769 kubelet.go:2356] Starting kubelet main sync loop. 
I0608 09:19:44.568969  769 kubelet.go:2365] skipping pod synchronization - [container runtime is down] 
I0608 09:19:45.499027  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:45.499529  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:45.506507  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:46.039350  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:46.039646  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:46.043880  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:19:46.498331  769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping) 
I0608 09:19:46.966327  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:46.966641  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:46.970968  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:47.512787  769 factory.go:230] Registering Docker factory 
I0608 09:19:47.576324  769 factory.go:97] Registering Raw factory 
I0608 09:19:48.044110  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:48.044409  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:48.049325  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:49.132613  769 manager.go:1003] Started watching for new ooms in manager 
I0608 09:19:49.154846  769 oomparser.go:182] oomparser using systemd 
I0608 09:19:49.172850  769 manager.go:256] Starting recovery of all containers 
I0608 09:19:49.529570  769 manager.go:261] Recovery completed 
I0608 09:19:49.569951  769 kubelet.go:2365] skipping pod synchronization - [container runtime is down] 
I0608 09:19:49.781660  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:49.782820  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:49.796120  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:53.112626  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:53.112966  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:53.117777  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:54.571235  769 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" 
E0608 09:19:54.571618  769 kubelet.go:2307] error getting node: node '192.168.1.84' is not in cache 
I0608 09:19:54.572268  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"e736ec8218e250651b39758f3bbde22d4cdbb343e4118530d5791e4218786970"} 
W0608 09:19:54.586217  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:54.597285  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"277772303bb1fa1c72ebe496016d1a3e00e961d5935c126c5285c0af76fa8456"} 
E0608 09:19:54.609676  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:19:54.678305  769 manager.go:1698] Need to restart pod infra container for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" because it is not found 
I0608 09:19:54.770520  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"a64055838d257678ba5178bc2589f66839971070c6735335682c80785e51c943"} 
I0608 09:19:54.823445  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"33ee7433077053694ff60552c600a535307ccfd0d752a2339c5c739591098d2b"} 
I0608 09:19:54.879917  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"1c51763f63dfa80f6bc634f662710b71bfa341c0c69009067e2c3ae4a8a1673e"} 
I0608 09:19:54.926815  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"087f7e397a98370f3a201e39b49e875c96b3c8290993ed1fc4a42dc848b0680b"} 
I0608 09:19:55.008764  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"5e6ab61a95df5120cec057e515ddb7679de385169b516b7f09d3ede4e9cd2f50"} 
I0608 09:19:55.920613  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"390a981a905d603007fb3009953efa5bba54d26287eeff4c5cbc8983f039134f"} 
E0608 09:19:56.521544  769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping) 
I0608 09:19:57.315403  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"f818c0e9b622947a00cc8cc7ce719846c965bbe47a26c90bd7dcc6ec81c9ef0f"} 
I0608 09:19:59.233783  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"defee550850fd55fc2ecb1a41fdd47129133d0b0b8f1576f8cff0c537022782a"} 
I0608 09:19:59.830736  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:19:59.831073  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:19:59.837299  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:00.511849  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"} 
W0608 09:20:00.518862  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:00.525216  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:00.615637  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"162a0ec1abd0a329ff4f0582a72f2c47b9e99a1fbcc02409861b397f78480d16"} 
E0608 09:20:01.612801  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:02.672719  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
W0608 09:20:04.572065  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:06.527979  769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping) 
I0608 09:20:07.154072  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:20:07.154551  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:20:07.166567  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:10.483245  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"} 
W0608 09:20:10.542522  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:10.548165  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:11.954701  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"} 
W0608 09:20:12.042905  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:12.044221  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:14.288508  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:20:14.288868  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:20:14.300563  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
W0608 09:20:14.574069  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
E0608 09:20:16.536424  769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping) 
I0608 09:20:21.433294  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:20:21.433579  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:20:21.439670  769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:23.007412  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"} 
E0608 09:20:23.094738  769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused 
W0608 09:20:23.094918  769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused 
I0608 09:20:23.112488  769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2) 
E0608 09:20:23.113255  769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" 
I0608 09:20:24.463284  769 kubelet.go:2391] SyncLoop (UPDATE, "api"): "k8s-master-192.168.1.84_default(15a52b5d-2cb3-11e6-ae88-b827eb8b3cc6)" 
I0608 09:20:24.497971  769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2) 
E0608 09:20:24.498876  769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" 
W0608 09:20:27.051713  769 request.go:627] Throttling request took 99.568025ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:27.251946  769 request.go:627] Throttling request took 169.564927ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6 
W0608 09:20:27.451762  769 request.go:627] Throttling request took 141.993996ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:27.651819  769 request.go:627] Throttling request took 175.348684ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:27.851906  769 request.go:627] Throttling request took 169.614146ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6 
W0608 09:20:28.051684  769 request.go:627] Throttling request took 155.040509ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
I0608 09:20:28.573729  769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84 
I0608 09:20:28.574075  769 kubelet.go:1134] Attempting to register node 192.168.1.84 
I0608 09:20:28.745103  769 kubelet.go:1150] Node 192.168.1.84 was previously registered 
W0608 09:20:28.851791  769 request.go:627] Throttling request took 122.413785ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6 
W0608 09:20:29.051663  769 request.go:627] Throttling request took 157.66653ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:29.251789  769 request.go:627] Throttling request took 177.7883ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:29.451806  769 request.go:627] Throttling request took 174.880614ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a 
W0608 09:20:29.651741  769 request.go:627] Throttling request took 147.397079ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a 
W0608 09:20:29.851871  769 request.go:627] Throttling request took 164.236896ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:30.051664  769 request.go:627] Throttling request took 177.139919ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
W0608 09:20:30.251706  769 request.go:627] Throttling request took 176.659299ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f6934f8 
W0608 09:20:30.451679  769 request.go:627] Throttling request took 159.788336ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f7d5ddc 
W0608 09:20:30.651761  769 request.go:627] Throttling request took 154.810042ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a 
W0608 09:20:30.851640  769 request.go:627] Throttling request took 155.878888ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events 
I0608 09:20:37.134464  769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"246f201be0479d48a0a44c4d4f8a95126d73ac04146e3029739cfd1da7d1ee77"} 
E0608 09:20:55.460305  769 fsHandler.go:106] failed to collect filesystem stats - du command failed on /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 with output stdout: 238752 /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 
, stderr: du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/702/fdinfo/19': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/737/fdinfo/19': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/738/fd/19': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fd/3': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fdinfo/3': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fd/4': No such file or directory 
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fdinfo/4': No such file or directory 
- exit status 1 
I0608 09:20:55.460602  769 fsHandler.go:116] `du` on following dirs took 2.515023345s: [/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 /rootfs/var/lib/docker/containers/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62] 

Did you set the '--pod-infra-container-image' flag on the kubelet? If not, it will pull 'gcr.io/google_containers/pause:3.0', which triggers the DNS lookups for gcr.io. –


Yes, the '/lib/systemd/system/k8s-master (or worker).service' file contains the '--pod-infra-container-image=gcr.io/google_containers/pause-arm:2.0' flag. – Juul


And thanks for helping! – Juul
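For anyone checking the same thing, here is a quick way to confirm which pause image the kubelet is actually configured to pull and whether it is already cached locally (the file name is taken from the comment above; adjust it for the worker nodes):

# Show the pause-image flag the kubelet is started with
$ grep -o -- '--pod-infra-container-image=[^ ]*' /lib/systemd/system/k8s-master.service
# Check whether that image is already present locally; if it is,
# starting pods should not require resolving gcr.io on every sync
$ docker images | grep pause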

Answers


I have managed to, errr... "fix" the problem by adding 173.194.65.82 gcr.io to /etc/hosts, which at least keeps the outgoing DNS lookups from hammering the router, since the domain now resolves locally. I think that will do for my demo tomorrow, because then I will at least have a cluster that does not work but also does not DDoS the router.
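Concretely, the workaround amounts to something like this on every node (the IP is just whatever gcr.io resolved to at the time, so it will eventually go stale):

# Pin gcr.io locally so the lookup never leaves the machine.
# Hard-coding a Google IP is fragile, which is why this is only a demo-day workaround.
$ echo '173.194.65.82 gcr.io' | sudo tee -a /etc/hosts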

I know it is horribly ugly, but the tears of grief dripping from my eyes nearly short-circuited one of the Raspis. If anyone has suggestions, I am still interested in fixing the underlying problem!