Solved

Pod connection timeout in Kubernetes

  • 1 December 2022
  • 6 replies
  • 60 views

Userlevel 3
Badge

Hello,

I'm trying to curl from inside the pods of my Kubernetes cluster deployed on Google Cloud Platform, but I get a timeout. When I run the same curl from a node, it works.
Do you have any idea what is causing this problem, knowing that the cluster is configured with a shared VPC?

Thank you in advance for your help


Best answer by malamin 2 December 2022, 21:13


6 replies

Userlevel 6
Badge +6

Could you give more details?

Userlevel 7
Badge +58

Hi @Bouchra.abidar,

as @seijimanoan already wrote, you need to give more details, and even some screenshots if possible.

Userlevel 3
Badge

Thanks for your reply @seijimanoan

I'm deploying a Docker image to test API access on the Kubernetes cluster.

 

When I test access from a node's terminal in my Kubernetes cluster on Google Cloud Platform (SSH into the node VMs), it works, but from inside the pods it does not.

 

dockerfile:

FROM python:3.7-slim
RUN apt-get update -y
RUN apt-get install -y --fix-missing curl
COPY . /home/test
WORKDIR /home/test
CMD ["bash", "/home/test/curl.sh"]

 

I created an Endpoints object and a Service:

apiVersion: v1
kind: Endpoints
metadata:
  name: ipconnexion
subsets:
  - addresses:
      - ip: <ip target>
    ports:
      - port: 8161   # target port
---
kind: Service
apiVersion: v1
metadata:
  name: ipconnexion
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8161
      targetPort: 8161

curl.sh: curl ipconnexion:8161

 

command used to deploy:

kubectl create deployment test --image=myimage

Userlevel 4
Badge

Not sure why you need to create the Endpoints and Service like that. If you are creating the deployment manually as you showed, you can also run `kubectl expose deployment test --port=80 --target-port=8161`, and this will create a proper Service and Endpoints for you. Then you can call it with `curl test`, as you will be making a request to a service, not to a pod directly.

 

In the example you shared, your Service and Endpoints are most likely not targeting your deployment via label selectors; see the sketch below.
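For illustration, here is a minimal sketch of a selector-based Service. It assumes the pods carry the label app=test, which is what `kubectl create deployment test` applies by default, and it uses the same ports as the expose command above:

apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  selector:
    app: test           # matches the label kubectl create deployment puts on the pods
  ports:
    - name: http
      protocol: TCP
      port: 80          # port the Service listens on, so `curl test` works
      targetPort: 8161  # port the container actually serves on

With a selector in place, Kubernetes creates and updates the Endpoints object for you, so you don't have to maintain IP addresses by hand.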

Userlevel 7
Badge +21

Thank you, @Bouchra.abidar for the question.

As @ilias and @seijimanoan rightly mention, always try to come with a log screenshot. It is the best way to find out what the problem is and what the potential solution would be.


My reading of the situation is the same as theirs. From your question, I assume your Kubernetes Engine pod is not responding to public access via the curl command.

A timeout error does not have a single cause; several separate issues can produce it.

First, make sure your deployment is running properly and that the pod is exposed.

 

kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
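Applied to the deployment from this thread, that could look like the following (the ports are assumptions based on the manifests above):

kubectl expose deployment test --port=8161 --target-port=8161
kubectl get service test      # Verify the Service was created.
kubectl get endpoints test    # Verify it picked up the pod IPs.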

Then you need to figure out where it works from. If it works from inside your VPC and not from outside, it's because you created a private GKE cluster: the master is only reachable through the private IP or through the authorized networks.
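One way to check this is to inspect the cluster's private-cluster configuration with gcloud; the cluster name and zone here are placeholders for your own values:

gcloud container clusters describe my-cluster \
    --zone=us-central1-a \
    --format="value(privateClusterConfig)"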

Speaking about authorized networks: as an example, suppose you authorized your office network (192.168.1.0/24). That will fail if what you registered is the private IP range of your office network rather than the public IP your office uses to reach the internet.

To solve that, go to a site that shows you your public IP, then update the authorized networks for your cluster with that IP/32, and try again.
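With gcloud, that update could look like this; the cluster name and IP are placeholders for your own values:

gcloud container clusters update my-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.7/32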

 

If it works from the GCP VM but does not work from your local machine, it's either related to the GCP firewall or your GKE cluster does not have a public IP.

First check if your cluster IP is public; if yes, you need to add a firewall rule that allows traffic over HTTPS (port 443). You can do this with the gcloud tool or via the GCP Console under "Firewall -> Create Firewall Rule".
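As a sketch, such a rule could be created with gcloud like this; the rule name, network, and source range are placeholders to adapt:

gcloud compute firewall-rules create allow-https \
    --network=default \
    --allow=tcp:443 \
    --source-ranges=203.0.113.7/32   # restrict to your own public IP rather than 0.0.0.0/0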

Sometimes intermittent timeouts suggest component performance issues, as opposed to networking problems.

In this scenario, it's important to check the usage and health of the components. You can use the inside-out technique: start by checking the status of the pods. Run the kubectl top and kubectl get commands, as follows:

kubectl top pods   # Check the resource usage of the pods.
kubectl top nodes  # Check the resource usage of the nodes.
kubectl get pods   # Check the state of the pods.

After checking the status, you can also dig further into the logs:

kubectl logs my-deployment-test                       # Logs from the pod's default container.
kubectl logs my-deployment-test -c webserver          # Logs from a specific container.
kubectl logs my-deployment-test -c my-app --previous  # Logs from the previous run of the container.

The --previous flag shows log entries from the previous time the container ran. The existence of such entries suggests that the application did start, but closed because of some issue.

kubectl describe pod my-deployment-test

After looking at the pod logs and describe output, if you see a memory-related issue indicated in the logs, you can remove the memory limit and monitor the application to determine how much memory it actually needs. Once you know the memory usage, you can update the memory limits on the container. If the memory usage continues to increase, determine whether there's a memory leak in the application.
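For reference, a sketch of where memory requests and limits live in a container spec; the names and values here are placeholders, not taken from this thread:

apiVersion: v1
kind: Pod
metadata:
  name: my-deployment-test
spec:
  containers:
    - name: my-app
      image: myimage
      resources:
        requests:
          memory: "256Mi"   # what the scheduler reserves for the container
        limits:
          memory: "512Mi"   # the container is OOM-killed if it exceeds this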

Also, don't forget to check the endpoints behind the service; see the commands just below.
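For the Service from this thread, that could be done like this (names taken from the manifests above):

kubectl get endpoints ipconnexion     # Should list the target IP and port 8161.
kubectl describe service ipconnexion  # Shows the Service's ports and endpoints.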

I hope my case study will be helpful in debugging the error and finding the solution. If it still doesn't work, please send me a log screenshot and a Google Cloud console screenshot with details.

Also, check the relevant Google Cloud documentation.

Userlevel 3
Badge

Thank you @malamin and @lumaks for your feedback

I will test it and keep you informed 

 
