Unable to connect a Tomcat container to a MySQL database container in Kubernetes?

Problem description:

Please don't mark this as a duplicate. I have made some changes this time. Believe me, I have tried the other answers and they don't seem to solve my issue. I am unable to link my Tomcat container with my MySQL database container in Kubernetes.

I built my Tomcat image using this Dockerfile:

FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war

mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql

mysql-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: data-core  
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /docker-entrypoint-initdb.d   # my SQL init script gets copied from the hostPath of the persistent volume
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-initdb-pv-claim
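
The deployment above references `claimName: mysql-initdb-pv-claim`, so that PVC must exist and be bound before the pod can start. A minimal sketch of such a claim (the storage size and access mode here are assumptions, not values from the question):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-initdb-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```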

Tomcat-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort     
  ports:
  - name: myport
    port: 8080
    targetPort: 8080
    nodePort: 30000
  selector:
    app: tomcat
    tier: frontend 

Tomcat-Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
      - image: suji165475/vignesh:tomcatserver   # tomcat image built with the Dockerfile above, with the war file (Spring Boot app) copied to the webapps folder
        name: tomcat
        env:
        - name: DB_PORT_3306_TCP_ADDR
          value: mysql                  #service name of mysql
        - name: DB_ENV_MYSQL_DATABASE
          value: data-core
        - name: DB_ENV_MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 8080
          name: myport
        volumeMounts:
        - name: tomcat-persistent-storage
          mountPath: /var/data
      volumes:
      - name: tomcat-persistent-storage
        persistentVolumeClaim:
          claimName: tomcat-pv-claim
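
Note that the `DB_*` variables only help if the application actually reads them; that they follow the old Docker `--link` naming convention is an assumption based on the image used. To see what JDBC URL such a convention would yield, you can simulate the container environment locally (the URL format here is a hypothetical example, not taken from the app):

```shell
# Simulate the env vars set in Tomcat-Deployment.yaml and print the JDBC URL
# the app is assumed to build from them (format is hypothetical):
DB_PORT_3306_TCP_ADDR=mysql DB_ENV_MYSQL_DATABASE=data-core \
sh -c 'echo "jdbc:mysql://${DB_PORT_3306_TCP_ADDR}:3306/${DB_ENV_MYSQL_DATABASE}"'
```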

I have specified all the environment variables needed to connect them, including the MySQL service name, in the Tomcat deployment. I also made sure to create persistent volumes and claims for both containers. Yet my war file still won't start in Tomcat's manager app.

Are my YAML files correct, or are there still some changes to be made?

NOTE: I am working on the server over a PuTTY terminal.

URL used to access my app in the browser:

206.189.22.155:30000/data-core-0.0.1-SNAPSHOT

Your YAML files are correct. I've recreated the whole environment mentioned in the question and I've got a healthy Tomcat with the application in the Running state.
If anyone else wants to test it, the Tomcat manager username/password are:

 username="the-manager" password="needs-a-new-password-here"

No SEVERE errors were found in the Tomcat log, and I got this response from the application:

{"text":"Data-core"}

which looks like a correct response. I've also got the empty table `sequence` in the MySQL database data-core.

I can guess you've had some kind of connectivity problem, probably caused by the Kubernetes network addon (Calico/Flannel/etc.) not working correctly.

How to troubleshoot it:

  1. To check the setup, all pods can be placed on the same node by creating the PVs for both deployments there.
  2. To test connectivity to the MySQL or Tomcat resources, we can exec into their pods and run tests using simple commands:

$ kubectl exec mysql-pod-name -it -- mysql -hlocalhost -uroot -proot data-core --execute="show tables;"

or just run an additional pod to check whether the Service correctly points to the MySQL pod:

$ kubectl run mysql-client --rm -it --image mysql --restart=Never --command -- mysql -hmysql -uroot -proot data-core --execute="show tables;"

For the Tomcat pod we can use the following commands to check the user passwords and the application response:

   $ kubectl exec -ti tomcat-pod-name -- cat /usr/local/tomcat/conf/tomcat-users.xml

   $ kubectl exec -ti tomcat-pod-name -- curl http://localhost:8080/data-core-0.0.1-SNAPSHOT/

or use a separate pod with curl or wget to check whether the Tomcat Service and NodePort work well:

   $ kubectl run curl -it --rm --image=appropriate/curl --restart=Never  -- curl http://tomcat:8080/data-core-0.0.1-SNAPSHOT/

   $ curl http://Cluster.Node.IP:30000/data-core-0.0.1-SNAPSHOT/

By using the IPs of different nodes you can also check cluster connectivity, because a NodePort Service opens the same port on all cluster nodes, and iptables rules on each node then forward the traffic to the Pod's IP.
If the pod is located on a different node, the Flannel/Calico/etc. network plugin delivers the traffic to the correct node and then to the Pod.
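
Another quick check worth running: confirm that each Service's selector actually matches the pods' labels, since a mismatch produces exactly these symptoms (DNS resolves, but connections fail). For example:

```shell
# An empty ENDPOINTS column means the Service selector matches no running pod,
# so the DNS name "mysql" would resolve but connections to it would fail.
kubectl get endpoints mysql tomcat
# Compare the Service selectors against the pods' actual labels:
kubectl get pods -o wide --show-labels
```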