Linux

Packets not forwarded on CentOS 7 between GRE tunnels

  • April 3, 2020

As shown in the diagram, I have configured GRE tunnels between CentOS machines, with the corresponding route table entries on each of the CentOS machines (a sketch of the tunnel commands follows the diagram):

I don't have enough reputation to post images

Router1——–gre1———Transit-Router———gre2——–Router2

10.2.32.0/24–Router1–10.0.0.1—gre1—10.0.0.2–Transit-Router–11.0.0.2—gre2–11.0.0.1–Router2–10.4.32.0/24
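The question does not include the tunnel configuration itself. A minimal sketch of how such a pair of GRE tunnels is typically created on CentOS 7 with iproute2, assuming the underlay addresses visible in the tcpdump captures below (10.206.90.103 for Router 1, 10.206.83.3 for the Transit Router, 10.206.86.199 for Router 2), would look roughly like this:

# Router 1 (underlay 10.206.90.103): tunnel towards the Transit Router
ip tunnel add gre1 mode gre local 10.206.90.103 remote 10.206.83.3 ttl 255
ip addr add 10.0.0.1/24 dev gre1
ip link set gre1 up

# Transit Router (underlay 10.206.83.3): one tunnel towards each side
ip tunnel add gre1 mode gre local 10.206.83.3 remote 10.206.90.103 ttl 255
ip addr add 10.0.0.2/24 dev gre1
ip link set gre1 up
ip tunnel add gre2 mode gre local 10.206.83.3 remote 10.206.86.199 ttl 255
ip addr add 11.0.0.2/24 dev gre2
ip link set gre2 up

# Router 2 (underlay 10.206.86.199): tunnel towards the Transit Router
ip tunnel add gre2 mode gre local 10.206.86.199 remote 10.206.83.3 ttl 255
ip addr add 11.0.0.1/24 dev gre2
ip link set gre2 up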

I am able to ping the other end of the gre1 tunnel from Router 1:

worker]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.472 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.291 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.319 ms

The traffic reaches the Transit Router over the GRE tunnel (verified with tcpdump proto gre).

Pinging the other end of the gre2 tunnel from Router 2:

worker]# ping 11.0.0.2
PING 11.0.0.2 (11.0.0.2) 56(84) bytes of data.
64 bytes from 11.0.0.2: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 11.0.0.2: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 11.0.0.2: icmp_seq=3 ttl=64 time=0.369 ms
64 bytes from 11.0.0.2: icmp_seq=4 ttl=64 time=0.258 ms

This traffic also flows over the tunnel.

On the Transit Router, I can ping the private addresses of Router 1 and Router 2 after adding route entries (a sketch of these routes follows the captures below):

Transit Router:

[root@vmc-centos conf]# ping 10.2.32.1
PING 10.2.32.1 (10.2.32.1) 56(84) bytes of data.
64 bytes from 10.2.32.1: icmp_seq=1 ttl=64 time=0.589 ms
64 bytes from 10.2.32.1: icmp_seq=2 ttl=64 time=0.380 ms
64 bytes from 10.2.32.1: icmp_seq=3 ttl=64 time=0.383 ms

Router 1:

worker]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:54:36.684864 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 34, length 64
04:54:36.684951 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.2.32.1 > 10.0.0.2: ICMP echo reply, id 20445, seq 34, length 64
04:54:37.684776 IP 10.206.83.3 > 10.206.90.103: GREv0, length 88: IP 10.0.0.2 > 10.2.32.1: ICMP echo request, id 20445, seq 35, length 64

Transit Router:

[root@vmc-centos conf]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.
64 bytes from 10.4.32.1: icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from 10.4.32.1: icmp_seq=2 ttl=64 time=0.325 ms
64 bytes from 10.4.32.1: icmp_seq=3 ttl=64 time=0.354 ms

Router 2:

worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
04:56:57.549823 IP 10.206.83.3 > 10.206.86.199: GREv0, length 88: IP 11.0.0.2 > 10.4.32.1: ICMP echo request, id 20690, seq 24, length 64
04:56:57.549896 IP 10.206.86.199 > 10.206.83.3: GREv0, length 88: IP 10.4.32.1 > 11.0.0.2: ICMP echo reply, id 20690, seq 24, length 64
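The route entries mentioned above are not shown in the question; given the addressing in the diagram they would presumably be along these lines (the interface names gre1 and gre2 are taken from the diagram and are an assumption):

# Transit Router: reach each private network through its tunnel
ip route add 10.2.32.0/24 via 10.0.0.1 dev gre1
ip route add 10.4.32.0/24 via 11.0.0.1 dev gre2

# Router 1: reach Router 2's private network via the transit end of gre1
ip route add 10.4.32.0/24 via 10.0.0.2 dev gre1

# Router 2: reach Router 1's private network via the transit end of gre2
ip route add 10.2.32.0/24 via 11.0.0.2 dev gre2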

But now, when I try to reach Router 2's private network (10.4.32.1) from Router 1, the packets reach the Transit Router but are not forwarded from there to Router 2:

Router 1:

worker]# ping 10.4.32.1
PING 10.4.32.1 (10.4.32.1) 56(84) bytes of data.

Transit Router:

[root@vmc-centos conf]# tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
04:59:06.382024 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 40, length 64
04:59:07.382007 IP 10.206.90.103 > 10.206.83.3: GREv0, length 88: IP 10.0.0.1 > 10.4.32.1: ICMP echo request, id 36131, seq 41, length 64

Router 2:

[root@wdc-10-206-86-199 worker]# sudo tcpdump -i any proto gre -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
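A quick way to rule out a routing problem on the Transit Router before suspecting the firewall is to ask the kernel what it would do with a forwarded packet that arrives on gre1 and is destined for 10.4.32.1 (the interface name is assumed from the diagram):

ip route get 10.4.32.1 from 10.0.0.1 iif gre1
# If this prints a route out of gre2, the routing table is fine and the
# packets are being dropped elsewhere, e.g. in the iptables FORWARD chain.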

IP forwarding is enabled on all machines:

[root@vmc-centos conf]# sudo sysctl -p
net.ipv4.ip_forward = 1
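Since sysctl -p reports the value, the setting is presumably persisted in /etc/sysctl.conf (or a file under /etc/sysctl.d/); the persistent and runtime values can be double-checked like this:

grep ip_forward /etc/sysctl.conf
net.ipv4.ip_forward = 1

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1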

iptables on the Transit Router:

[root@vmc-centos ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     gre  --  anywhere             anywhere            
ACCEPT     gre  --  anywhere             anywhere            

Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     gre  --  anywhere             anywhere            

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere     
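One way to confirm that the forwarded ICMP is dying on this default DROP policy is to watch the FORWARD chain's policy counters in verbose output while the ping from Router 1 is running (the counts below are illustrative):

iptables -L FORWARD -v -n | head -1
Chain FORWARD (policy DROP 120 packets, 10080 bytes)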

Note: I have tried this before and the packets were reaching the other private network. Now that I am trying a different setup, I am missing some configuration.

It looks like the Docker daemon is running on the forwarding machine. By default, in order to isolate containers on different bridges and from the host, Docker installs a default DROP policy on the FORWARD chain in iptables. There is a setting in the Docker daemon to prevent this: set "iptables": false in /etc/docker/daemon.json. See Docker and iptables.
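A minimal sketch of that daemon setting, assuming nothing else needs to be in the file (note that with "iptables": false Docker stops managing firewall rules entirely, so published container ports may then require manual NAT rules):

cat /etc/docker/daemon.json
{
  "iptables": false
}

# Restart the daemon so the setting takes effect
systemctl restart docker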

If you change the default policy to ACCEPT, that will work:

iptables --policy FORWARD ACCEPT

However, when the Docker daemon is restarted (by you, by a docker package upgrade, or by a reboot), the default policy will be changed back to DROP again if you have not changed the Docker daemon's settings.

Quoted from: https://serverfault.com/questions/1010565