Kubernetes DNS lookup not working from worker node - connection timed out; no servers could be reached
I have built a new Kubernetes cluster, v1.20.1, with a single master and a single worker node, using the Calico CNI.
I deployed a busybox pod in the default namespace.
# kubectl get pods busybox -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 12m 10.203.0.129 node02 <none> <none>
nslookup does not work:
kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
The cluster is running RHEL 8 with the latest updates.
I followed the steps from https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
The nslookup command cannot reach the nameserver:
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
The resolv.conf file:
# kubectl exec -ti dnsutils -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
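Given this resolv.conf (ndots:5 plus the cluster search domains), two generic follow-up queries can help narrow things down: querying the fully qualified name rules out search-path expansion, and pointing nslookup directly at a CoreDNS pod IP (e.g. 10.203.0.5, one of the kube-dns endpoints shown further down) bypasses the service VIP and kube-proxy. These are standard debugging steps, not cluster-specific commands:
# kubectl exec -ti dnsutils -- nslookup kubernetes.default.svc.cluster.local
# kubectl exec -ti dnsutils -- nslookup kubernetes.default.svc.cluster.local 10.203.0.5
If the second query succeeds while the first times out, the problem is reaching the service IP rather than the CoreDNS pods themselves.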
The DNS pods are running:
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-472vx 1/1 Running 1 85m
coredns-74ff55c5b-c75bq 1/1 Running 1 85m
DNS pod logs:
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
The service is defined:
# kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 86m
I can see the endpoints of the DNS pods:
# kubectl get endpoints kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 10.203.0.5:53,10.203.0.6:53,10.203.0.5:53 + 3 more... 86m
Logging is enabled, but I don't see any traffic coming into the DNS pods:
# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
I can ping the DNS pod:
# kubectl exec -i -t dnsutils -- ping 10.203.0.5
PING 10.203.0.5 (10.203.0.5): 56 data bytes
64 bytes from 10.203.0.5: seq=0 ttl=62 time=6.024 ms
64 bytes from 10.203.0.5: seq=1 ttl=62 time=6.052 ms
64 bytes from 10.203.0.5: seq=2 ttl=62 time=6.175 ms
64 bytes from 10.203.0.5: seq=3 ttl=62 time=6.000 ms
^C
--- 10.203.0.5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 6.000/6.062/6.175 ms
nmap shows the ports as filtered:
# ke netshoot-6f677d4fdf-5t5cb -- nmap 10.203.0.5
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-15 22:29 UTC
Nmap scan report for 10.203.0.5
Host is up (0.0060s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
53/tcp filtered domain
8080/tcp filtered http-proxy
8181/tcp filtered intermapper
Nmap done: 1 IP address (1 host up) scanned in 14.33 seconds
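A "filtered" state from nmap usually means a firewall is silently dropping the packets, rather than the port being closed. Assuming firewalld is active on the worker node (the RHEL 8 default), the following host-level checks can show what is dropping the traffic; this is a generic sketch, not output from this cluster:
# firewall-cmd --state
# firewall-cmd --list-all
# nft list ruleset | grep -i drop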
If I schedule the pod on the master node, nslookup works and nmap shows the port as open:
# ke netshoot -- bash
bash-5.0# nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
nmap -p 53 10.96.0.10
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-15 22:46 UTC
Nmap scan report for kube-dns.kube-system.svc.cluster.local (10.96.0.10)
Host is up (0.000098s latency).
PORT STATE SERVICE
53/tcp open domain
Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
Why is nslookup from a pod running on the worker node not working, and how do I troubleshoot this issue?
I have rebuilt the servers twice, and it's still the same issue.
Thanks,
SR
# cat kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "master01:6443"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/14
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
First of all, please note that according to both the Calico and kubeadm documentation, only CentOS/RHEL 7+ is supported.
By default, RHEL8 uses nftables instead of iptables (we can still use iptables, but "iptables" on RHEL8 actually uses the kernel's nft framework in the background - see "Running Iptables on RHEL 8").
I believe that nftables may be causing this network issue, because as we can find on the nftables adoption page:
Kubernetes does not support nftables yet.
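You can confirm which backend the iptables binary is using on a node; on RHEL 8 the reported version string ends with "(nf_tables)" rather than "(legacy)". This is a generic check, not output from this cluster:
# iptables --version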
Note: For now, I highly recommend that you use RHEL7 instead of RHEL8.
With that in mind, I'll present some information that may help you with RHEL8. I have reproduced your issue and found a solution that works for me.
As a workaround (the individual commands are sketched below):
- First, I opened the ports required by Calico - these ports can be found here under "Network requirements".
- Next, I reverted to the old iptables backend on all cluster nodes. You can easily do so by setting FirewallBackend in /etc/firewalld/firewalld.conf to iptables, as described here.
- Finally, I restarted firewalld to make the new rules active.
I tried nslookup from a Pod running on the worker node (kworker), and it seems to work correctly.
root@kmaster:~# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/web 1/1 Running 0 112s 10.99.32.1 kworker <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.99.0.1 <none> 443/TCP 5m51s <none>
root@kmaster:~# kubectl exec -it web -- bash
root@web:/# nslookup kubernetes.default
Server: 10.99.0.10
Address: 10.99.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.99.0.1
root@web:/#