I installed Kubernetes on Debian (VirtualBox, Mem=5048, Cpu=2). Which direction should I dig to find the error?
I start it with:
kubeadm init --control-plane-endpoint=$HOSTNAME
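(For context: a minimal sketch of the host checks I can rerun on this VM. Assuming containerd as the runtime; that, and the paths, are my assumptions, nothing above confirms them.)
# swap must be off, otherwise the kubelet will not run with default settings
swapon --show
sudo swapoff -a
# IP forwarding and the bridge netfilter module are needed for pod networking
cat /proc/sys/net/ipv4/ip_forward
lsmod | grep br_netfilter
# the container runtime itself must be healthy
systemctl status containerd --no-pager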
At first it was showing the pods, then this started:
kubectl get pods --all-namespaces
The connection to the server node-01:6443 was refused - did you specify the right host or port?
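(Since kubectl only gets connection refused, it seems worth checking whether kube-apiserver is up at all, bypassing kubectl. A sketch, assuming containerd with crictl installed; the socket path is the containerd default, my assumption:)
# is anything listening on 6443?
ss -tlnp | grep 6443
# is the apiserver container running, or crashlooping?
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep apiserver
# then read its logs using the container ID printed above
sudo crictl logs <container-id>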
journalctl -f -u kubelet.service
Dec 17 12:54:22 node-01 kubelet[797]: I1217 12:54:22.422257 797 scope.go:117] "RemoveContainer" containerID="05b0005a56c8d217fdf85fc552ab3ac631003678a90080fccd5a44b7eaf6f99b"
Dec 17 12:54:22 node-01 kubelet[797]: E1217 12:54:22.422718 797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-node-01_kube-system(dcfb8f60d21e77af673e430a4091f4c8)\"" pod="kube-system/kube-controller-manager-node-01" podUID="dcfb8f60d21e77af673e430a4091f4c8"
Dec 17 12:54:24 node-01 kubelet[797]: E1217 12:54:24.438127 797 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 17 12:54:25 node-01 kubelet[797]: I1217 12:54:25.454727 797 scope.go:117] "RemoveContainer" containerID="05b935247939a259d83e6463968bda857cc1c3c28473a684c5efbf7d60163f3b"
Dec 17 12:54:25 node-01 kubelet[797]: E1217 12:54:25.455866 797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-node-01_kube-system(2f88ee07fe379110744672da1a8923b3)\"" pod="kube-system/kube-apiserver-node-01" podUID="2f88ee07fe379110744672da1a8923b3"
Dec 17 12:54:27 node-01 kubelet[797]: I1217 12:54:27.469044 797 scope.go:117] "RemoveContainer" containerID="25b904a2237a91dbbfb79613c19816977b473fc370a30e03c1b5e636a8250216"
Dec 17 12:54:27 node-01 kubelet[797]: E1217 12:54:27.470882 797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-q6qkc_kube-system(5542650e-f8d2-4fa8-82a1-cca6baf42fcc)\"" pod="kube-system/kube-proxy-q6qkc" podUID="5542650e-f8d2-4fa8-82a1-cca6baf42fcc"
Dec 17 12:54:28 node-01 kubelet[797]: E1217 12:54:28.651589 797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.1.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-01?timeout=10s\": dial tcp 192.168.1.10:6443: connect: connection refused" interval="7s"
Dec 17 12:54:28 node-01 kubelet[797]: E1217 12:54:28.780613 797 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/events/kube-scheduler-node-01.18821052a502108b\": dial tcp 192.168.1.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-node-01.18821052a502108b kube-system 3172 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-node-01,UID:eb3bdd36cec9338c0ac264972d3ecaa9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod kube-scheduler-node-01_kube-system(eb3bdd36cec9338c0ac264972d3ecaa9),Source:EventSource{Component:kubelet,Host:node-01,},FirstTimestamp:2025-12-17 12:19:11 -0500 EST,LastTimestamp:2025-12-17 12:52:35.458928337 -0500 EST m=+2165.415672342,Count:142,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node-01,}"
Dec 17 12:54:29 node-01 kubelet[797]: E1217 12:54:29.443856 797 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.485105 797 status_manager.go:895] "Failed to get status for pod" podUID="2f88ee07fe379110744672da1a8923b3" pod="kube-system/kube-apiserver-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.486821 797 status_manager.go:895] "Failed to get status for pod" podUID="dcfb8f60d21e77af673e430a4091f4c8" pod="kube-system/kube-controller-manager-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.488659 797 status_manager.go:895] "Failed to get status for pod" podUID="eb3bdd36cec9338c0ac264972d3ecaa9" pod="kube-system/kube-scheduler-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.489371 797 status_manager.go:895] "Failed to get status for pod" podUID="5542650e-f8d2-4fa8-82a1-cca6baf42fcc" pod="kube-system/kube-proxy-q6qkc" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-proxy-q6qkc\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.490248 797 status_manager.go:895] "Failed to get status for pod" podUID="51c670536858241606c4bad9a5634813" pod="kube-system/etcd-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/etcd-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.518076 797 scope.go:117] "RemoveContainer" containerID="e2a7510faecfe25de69adf5dcfaf75cb5d08f14560540791829c45de00002d64"
Dec 17 12:54:30 node-01 kubelet[797]: E1217 12:54:30.518386 797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-node-01_kube-system(eb3bdd36cec9338c0ac264972d3ecaa9)\"" pod="kube-system/kube-scheduler-node-01" podUID="eb3bdd36cec9338c0ac264972d3ecaa9"
^C
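(The repeated "cni plugin not initialized" lines above mean no CNI plugin has been deployed yet; that by itself explains coredns staying Pending below, though not necessarily the control-plane crash loops. A sketch of the usual fix, with Flannel as just one example of a CNI plugin, my choice for illustration; note that Flannel expects --pod-network-cidr=10.244.0.0/16 to have been passed to kubeadm init, which was not done above:)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml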
[b]kubectl events[/b]
LAST SEEN               TYPE      REASON                    OBJECT          MESSAGE
8m11s                   Normal    Starting                  Node/node-01    Starting kubelet.
8m11s                   Warning   InvalidDiskCapacity       Node/node-01    invalid capacity 0 on image filesystem
8m11s (x8 over 8m11s)   Normal    NodeHasSufficientMemory   Node/node-01    Node node-01 status is now: NodeHasSufficientMemory
8m11s (x8 over 8m11s)   Normal    NodeHasNoDiskPressure     Node/node-01    Node node-01 status is now: NodeHasNoDiskPressure
8m11s (x7 over 8m11s)   Normal    NodeHasSufficientPID      Node/node-01    Node node-01 status is now: NodeHasSufficientPID
8m11s                   Normal    NodeAllocatableEnforced   Node/node-01    Updated Node Allocatable limit across pods
8m2s                    Normal    Starting                  Node/node-01    Starting kubelet.
8m2s                    Warning   InvalidDiskCapacity       Node/node-01    invalid capacity 0 on image filesystem
8m2s                    Normal    NodeAllocatableEnforced   Node/node-01    Updated Node Allocatable limit across pods
8m2s                    Normal    NodeHasSufficientMemory   Node/node-01    Node node-01 status is now: NodeHasSufficientMemory
8m2s                    Normal    NodeHasNoDiskPressure     Node/node-01    Node node-01 status is now: NodeHasNoDiskPressure
8m2s                    Normal    NodeHasSufficientPID      Node/node-01    Node node-01 status is now: NodeHasSufficientPID
7m1s                    Normal    RegisteredNode            Node/node-01    Node node-01 event: Registered Node node-01 in Controller
6m58s                   Normal    Starting                  Node/node-01
4m58s                   Normal    Starting                  Node/node-01
4m34s                   Normal    RegisteredNode            Node/node-01    Node node-01 event: Registered Node node-01 in Controller
3m34s                   Normal    Starting                  Node/node-01
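(The events show the kubelet itself starting over and over, so it seems worth checking whether something on the host is killing it, e.g. the OOM killer on a 5 GB VM. A guess, not a confirmed diagnosis:)
# look for OOM kills on the host
dmesg -T | grep -i -e oom -e 'killed process'
# how many times systemd has restarted the kubelet unit
systemctl show kubelet -p NRestarts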
kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS             RESTARTS         AGE
kube-system   coredns-674b8bbfcf-nvm5q          0/1     Pending            0                46m
kube-system   coredns-674b8bbfcf-zpkbl          0/1     Pending            0                46m
kube-system   etcd-node-01                      1/1     Running            63 (2m23s ago)   46m
kube-system   kube-apiserver-node-01            1/1     Running            68 (2m16s ago)   46m
kube-system   kube-controller-manager-node-01   0/1     CrashLoopBackOff   69 (45s ago)     47m
kube-system   kube-proxy-gxhnz                  1/1     Running            21 (62s ago)     46m
kube-system   kube-scheduler-node-01            1/1     Running            77 (58s ago)     46m
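(With etcd and kube-apiserver at 60+ restarts each, one classic direction on Debian with containerd is a cgroup driver mismatch: kubeadm configures the kubelet for the systemd driver by default, while containerd's stock config.toml ships with SystemdCgroup = false, which makes control-plane containers die every few minutes. A sketch of how to check and fix, assuming containerd's default config path; verify the grep output before running the sed:)
grep -n 'SystemdCgroup' /etc/containerd/config.toml
# if it says false, flip it to true and restart the runtime and the kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl restart kubelet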


