Good afternoon Lucas, I ran a describe on one of the Pods and got this result:
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m26s               default-scheduler  Successfully assigned default/vollmed-deployment-857dffc649-c4zc8 to minikube
  Normal   Pulling    2m26s               kubelet            Pulling image "leonardosartorello/vollmed-api:1"
  Normal   Pulled     106s                kubelet            Successfully pulled image "leonardosartorello/vollmed-api:1" in 39.47s (39.47s including waiting). Image size: 1203204265 bytes.
  Warning  Unhealthy  46s (x2 over 66s)   kubelet            Liveness probe failed: Get "http://10.244.0.21:3000/paciente": dial tcp 10.244.0.21:3000: connect: connection refused
  Normal   Killing    46s                 kubelet            Container vollmed-container failed liveness probe, will be restarted
  Normal   Created    45s (x2 over 105s)  kubelet            Created container vollmed-container
  Normal   Started    45s (x2 over 105s)  kubelet            Started container vollmed-container
  Normal   Pulled     45s                 kubelet            Container image "leonardosartorello/vollmed-api:1" already present on machine
  Warning  Unhealthy  6s (x2 over 56s)    kubelet            Liveness probe failed: Get "http://10.244.0.21:3000/paciente": EOF
So it does say the problem is in my livenessProbe, but I've already made a few changes to it, and it currently looks like this:
livenessProbe:
  httpGet:
    path: /paciente
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
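From the events, the container still refuses connections on port 3000 roughly 40 seconds after it starts (Started was 105s ago, the first "connection refused" was 66s ago), so the API apparently isn't listening yet when the first liveness check runs. I was wondering whether a startupProbe would help hold the liveness check off until the app is actually up. A minimal sketch of what I mean (the timing values here are guesses, not something I have deployed):

startupProbe:
  httpGet:
    path: /paciente
    port: 3000
  periodSeconds: 10
  failureThreshold: 12   # allows up to ~120s for the API to start listening

If that works, the initialDelaySeconds on the livenessProbe could probably be reduced, since the liveness check only begins after the startup probe succeeds.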
And in the resources, which are set like this:
resources:
  limits:
    memory: "1024Mi"
    cpu: "1000m"