
Nodes with NotReady status

Good evening!

How is everyone?

I'm running into a problem adding instances to an EKS cluster. After registering the instances with the command below, the nodes keep showing up as NotReady:

kubectl apply -f aws-auth-cm.yaml

Contents of aws-auth-cm.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::931833244323:role/nodes-eks-NodeInstanceRole-EX7VOR8CX8XZ
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
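
For reference, after applying it I believe the ConfigMap and node registration can be double-checked like this (a minimal sketch, assuming kubectl is pointed at this cluster):

kubectl describe configmap aws-auth -n kube-system
kubectl get nodes --watch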

Below is the output of kubectl describe on one of the nodes:

Lease:              Failed to get lease: leases.coordination.k8s.io "ip-192-168-125-29.ec2.internal" not found
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Mon, 30 May 2022 00:08:16 -0300   Sun, 29 May 2022 23:36:00 -0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Mon, 30 May 2022 00:08:16 -0300   Sun, 29 May 2022 23:36:00 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 30 May 2022 00:08:16 -0300   Sun, 29 May 2022 23:36:00 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 30 May 2022 00:08:16 -0300   Sun, 29 May 2022 23:36:00 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Mon, 30 May 2022 00:08:16 -0300   Sun, 29 May 2022 23:36:00 -0300   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
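
The last condition suggests the kubelet is waiting on the CNI plugin. In case it's useful, this is a sketch of how the VPC CNI (aws-node) DaemonSet can be inspected, assuming the standard k8s-app=aws-node label used by the Amazon VPC CNI:

kubectl get daemonset aws-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=aws-node -o wide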

Thanks in advance for the help.