CentOS 7, KVM, bridges, VLANs, bonding and no guest traffic
I'm running into an issue where my KVM guests have no network traffic in or out. After many hours of debugging I'm fairly sure I'm missing something essential.
I have a server with two 10 Gbit interfaces, bonded in a failover (active-backup) setup. Relevant configuration and output:
Primary interface:
```
# Autogenerated by /usr/sbin/iface-genconf on 2020-01-09 13:08:26
# backup file moved into /etc/sysconfig/network-scripts/.ens1f0-1578575306
DEVICE=ens1f0
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE='yes'
BOOTPROTO=none
MTU=9000
NM_CONTROLLED=no
```
Secondary interface:
```
# Autogenerated by /usr/sbin/iface-genconf on 2020-01-09 13:08:26
# backup file moved into /etc/sysconfig/network-scripts/.ens1f1-1578575306
DEVICE=ens1f1
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE='yes'
BOOTPROTO=none
MTU=9000
NM_CONTROLLED=no
```
Bond status:
```
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens1f0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
ARP Polling Interval (ms): 500
ARP IP target/s (n.n.n.n form): 10.29.55.102, 10.29.55.103, 10.29.55.104

Slave Interface: dummy0
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 32:15:f6:14:3b:18
Slave queue ID: 0

Slave Interface: ens1f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:bd:3d:cc
Slave queue ID: 0

Slave Interface: ens1f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:bd:3d:cd
Slave queue ID: 0
```
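For reference, the ifcfg file of the bond master itself is not included above; reconstructed from the bonding status it would look roughly like the following sketch (the exact BONDING_OPTS string is an assumption, not a copy of the real file):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- hypothetical sketch, not the actual file
DEVICE=bond0
ONBOOT=yes
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
MTU=9000
NM_CONTROLLED=no
# active-backup with ARP monitoring every 500 ms against the targets listed above
BONDING_OPTS="mode=active-backup arp_interval=500 arp_ip_target=10.29.55.102,10.29.55.103,10.29.55.104"
```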
On top of this bond run two VLAN interfaces: one for management traffic (VLAN 302) and one for public traffic (VLAN 101).
Configuration for VLAN 302:
```
DEVICE=bond0.302
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
VLAN=yes
USERCTL=no
BRIDGE=manbr0
```
Configuration for VLAN 101:
```
DEVICE=bond0.101
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
VLAN=yes
USERCTL=no
BRIDGE=cloudbr0
```
As you can see, each VLAN interface is attached to a bridge: manbr0 for management, cloudbr0 for public traffic.
Configuration for manbr0:
```
DEVICE=manbr0
BOOTPROTO=none
ONBOOT=yes
TYPE=Bridge
USERCTL=no
NAME=manbr0
IPADDR=10.29.55.106
PREFIX=24
GATEWAY=10.29.55.1
```
Configuration for cloudbr0:
```
DEVICE=cloudbr0
BOOTPROTO=none
ONBOOT=yes
TYPE=Bridge
USERCTL=no
DELAY=0
MTU=1500
```

brctl show output:

```
[root@hv2 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
manbr0          8000.da7b71f50c30       no              bond0.302
cloudbr0        8000.da7b71f50c30       no              bond0.101
```
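To double-check the bridge/VLAN wiring and MTUs from the host, commands along these lines can be used (a verification sketch only; the output is omitted):

```
# list all bridge ports and their state
bridge link show

# show detailed link info (VLAN id, bridge membership, MTU)
ip -d link show bond0.101
ip -d link show cloudbr0
```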
So far so good. The management network works like a charm, and if I configure a public IP directly on cloudbr0, everything works as expected. The configured interface:
```
cloudbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 185.x.x.222  netmask 255.255.255.0  broadcast 185.x.x.255
        inet6 fe80::d87b:71ff:fef5:c30  prefixlen 64  scopeid 0x20<link>
        ether da:7b:71:f5:0c:30  txqueuelen 1000  (Ethernet)
        RX packets 37448  bytes 1751235 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
Ping from a remote host:
```
PING 185.X.X.222 (185.X.X.222) 56(84) bytes of data.

--- 185.X.X.222 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9004ms
rtt min/avg/max/mdev = 0.119/0.211/0.793/0.195 ms
```
Up to this point everything looks fine. Now I deploy a VM with that same IP, after removing the IP from cloudbr0 of course:
```
cloudbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet6 fe80::d87b:71ff:fef5:c30  prefixlen 64  scopeid 0x20<link>
        ether da:7b:71:f5:0c:30  txqueuelen 1000  (Ethernet)
        RX packets 80714  bytes 3802772 (3.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 1916 (1.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
So now I have the VM running on the same IP I tested with before, but it is unreachable. When I start a ping from inside the VM to a remote machine and run tcpdump on that remote server, I see the requests coming in:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:53:24.074319 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:25.075022 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:25.465094 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 0, length 64
09:53:25.465146 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 1, length 64
09:53:26.074392 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 2, length 64
09:53:27.074432 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 3, length 64
09:53:28.074464 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 4, length 64
09:53:29.074502 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 5, length 64
09:53:30.074540 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 6, length 64
09:53:31.074610 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 7, length 64
09:53:32.074651 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 8, length 64
09:53:33.074688 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 9, length 64
09:53:34.074730 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:35.074841 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:36.078219 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:38.074966 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:39.078221 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:40.081571 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:42.075139 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:43.078240 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:44.081570 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:46.075282 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:47.078253 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:48.081594 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:50.075420 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:51.078295 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:52.081614 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:54.075748 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:55.078285 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:53:55.465183 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 30, length 64
09:53:55.465223 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 31, length 64
09:53:56.075960 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 32, length 64
09:53:57.076013 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 33, length 64
09:53:58.076086 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 34, length 64
09:53:59.076122 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 35, length 64
09:54:00.076168 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 36, length 64
09:54:01.076178 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 37, length 64
09:54:02.076215 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 38, length 64
09:54:03.076286 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 12035, seq 39, length 64
09:54:04.076339 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:05.078322 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:06.081631 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:08.076504 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:09.078345 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:10.081670 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:12.076686 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:13.078363 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:14.081681 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:16.076836 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:17.078402 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:18.081705 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:20.077154 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:21.078355 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:22.081734 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:24.077303 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
09:54:25.078430 ARP, Request who-has 185.X.X.1 tell 185.X.X.222, length 46
```
The snippet above shows a pattern; note the seq numbers of the ICMP requests. The VM gets 10 ICMP requests out, then 20 fail, then 10 get out again, 20 fail, and so on.
When tcpdumping on the cloudbr0 bridge interface I see the same pattern: 10 replies, 20 missing, 10 replies, 20 missing. Again, note the seq numbers:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cloudbr0, link-type EN10MB (Ethernet), capture size 262144 bytes
09:57:55.464992 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 270, length 64
09:57:55.465005 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 271, length 64
09:57:56.086069 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 272, length 64
09:57:57.086122 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 273, length 64
09:57:58.086181 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 274, length 64
09:57:59.086215 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 275, length 64
09:58:00.086272 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 276, length 64
09:58:01.086238 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 277, length 64
09:58:02.086288 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 278, length 64
09:58:03.086362 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 279, length 64
09:58:25.465128 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 300, length 64
09:58:25.465138 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 301, length 64
09:58:26.087363 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 302, length 64
09:58:27.087421 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 303, length 64
09:58:28.087463 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 304, length 64
09:58:29.087485 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 305, length 64
09:58:30.087799 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 306, length 64
09:58:31.087577 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 307, length 64
09:58:32.087616 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 308, length 64
09:58:33.087610 IP 185.X.X.25 > 185.X.X.222: ICMP echo reply, id 12035, seq 309, length 64
```
When I run tcpdump on the virtual interface I see the same pattern: 10 requests get sent, 20 don't, rinse and repeat. Also, I never see any replies coming back to the virtual interface:
```
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on one-79-0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:12:25.494959 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 12, length 64
11:12:25.494970 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 13, length 64
11:12:25.600742 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 14, length 64
11:12:26.600781 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 15, length 64
11:12:27.600832 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 16, length 64
11:12:28.600875 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 17, length 64
11:12:29.600914 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 18, length 64
11:12:30.600950 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 19, length 64
11:12:31.600986 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 20, length 64
11:12:32.601023 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 21, length 64
11:12:55.494978 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 42, length 64
11:12:55.494987 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 43, length 64
11:12:55.601972 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 44, length 64
11:12:56.602010 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 45, length 64
11:12:57.602051 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 46, length 64
11:12:58.602091 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 47, length 64
11:12:59.602128 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 48, length 64
11:13:00.602163 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 49, length 64
11:13:01.602200 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 50, length 64
11:13:02.602241 IP 185.X.X.222 > 185.X.X.25: ICMP echo request, id 14339, seq 51, length 64
```
My virtual interface "one-74-0" is attached to the bridge:
```
[root@hv2 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
manbr0          8000.da7b71f50c30       no              bond0.302
cloudbr0        8000.da7b71f50c30       no              bond0.101
                                                        one-74-0
```
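Since the replies make it to cloudbr0 but never to the tap device, the bridge's forwarding database (which port each MAC address was learned on) can be inspected with commands like these (a debugging sketch; output not shown):

```
# show which bridge port each learned MAC address sits behind
bridge fdb show br cloudbr0

# legacy equivalent (port numbers instead of interface names)
brctl showmacs cloudbr0
```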
Interestingly, there are a lot of dropped packets on the virtual interface on the KVM host:
```
one-74-0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet6 fe80::fc00:b9ff:fe64:83de  prefixlen 64  scopeid 0x20<link>
        ether fe:00:b9:64:83:de  txqueuelen 1000  (Ethernet)
        RX packets 441  bytes 27054 (26.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1196384  bytes 288740710 (275.3 MiB)
        TX errors 0  dropped 23902  overruns 0  carrier 0  collisions 0
```
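To confirm that these TX drops keep increasing while the ping from the VM is running, the counters can be watched with something like:

```
# print statistics for the tap device every second; drops appear in the TX line
watch -n1 'ip -s link show dev one-74-0'
```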
iptables output:
```
[root@hv2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```
ebtables output:
```
[root@hv2 ~]# ebtables -L
Bridge table: filter

Bridge chain: INPUT, entries: 0, policy: ACCEPT

Bridge chain: FORWARD, entries: 0, policy: ACCEPT

Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
```
ip_forward:
```
[root@hv2 ~]# cat /proc/sys/net/ipv4/ip_forward
1
```
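For completeness, bridged frames are only passed through iptables when the br_netfilter module is loaded and its sysctls are enabled; whether that is the case on this host is not shown above, but it can be checked like this:

```
# only present when the br_netfilter module is loaded
lsmod | grep br_netfilter

# 1 means bridged IPv4/ARP frames are also run through iptables/arptables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-arptables
```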
The qemu process:
```
[root@hv2 ~]# ps -ef | grep qemu
oneadmin  9198     1  6 10:06 ?        00:02:27 /usr/libexec/qemu-kvm -name guest=one-76,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-one-76/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid e1660604-09ce-452a-a8d9-5cfe8d7a88a9 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/lib/one//datastores/103/76/disk.0,format=raw,if=none,id=drive-ide0-0-0,readonly=on,cache=none,discard=unmap,aio=native -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,write-cache=on -drive file=/var/lib/one//datastores/103/76/disk.1,format=raw,if=none,id=drive-ide0-0-1,readonly=on -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:b9:64:83:de,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 0.0.0.0:76 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
```
The guest's virsh XML:
```
<domain type='kvm'>
  <name>one-76</name>
  <uuid>e1660604-09ce-452a-a8d9-5cfe8d7a88a9</uuid>
  <title>sysrescuecd-76</title>
  <metadata>
    <one:vm xmlns:one="http://opennebula.org/xmlns/libvirt/1.0">
      <one:system_datastore><![CDATA[/var/lib/one//datastores/103/76]]></one:system_datastore>
      <one:name><![CDATA[sysrescuecd-76]]></one:name>
      <one:uname><![CDATA[oneadmin]]></one:uname>
      <one:uid>0</one:uid>
      <one:gname><![CDATA[oneadmin]]></one:gname>
      <one:gid>0</one:gid>
      <one:opennebula_version>5.10.1</one:opennebula_version>
      <one:stime>1578992757</one:stime>
      <one:deployment_time>1578992777</one:deployment_time>
    </one:vm>
  </metadata>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source file='/var/lib/one//datastores/103/76/disk.0'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/one//datastores/103/76/disk.1'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='02:00:b9:64:83:de'/>
      <source bridge='cloudbr0'/>
      <target dev='one-76-0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-one-76/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5976' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
    <seclabel type='dynamic' model='dac' relabel='yes'>
      <label>+9869:+9869</label>
      <imagelabel>+9869:+9869</imagelabel>
    </seclabel>
  </devices>
</domain>
```
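The same NIC attachment (type, source bridge, target device, model and MAC) can be confirmed more compactly with virsh; for example (output omitted):

```
# list the guest's network interfaces and the bridge they are attached to
virsh domiflist one-76
```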
Some more interesting observations:
- Tcpdumping inside the VM while pinging it from outside does not show any packets; nothing arrives from the IP I'm pinging from. The packets end up on the bond, but never inside the VM.
- Tcpdumping inside the VM without restricting the source IP to the host I'm pinging does show other broadcast traffic within this VLAN.
- A really interesting one: when I manually run `ifconfig cloudbr0 down; ifconfig cloudbr0 up`, roughly 1 out of 100 times a single ping to the VM's IP gets through.
My feeling is that something is wrong between the bridge and the VM NIC. As mentioned before, when I configure the same IP directly on the bridge everything is fine, so on the hardware/network side everything appears to be working. Any idea what my best next step would be to debug this?
Try fwupd to make sure all your firmware is up to date.
```
# install
dnf install fwupd

# check for compatible devices
fwupdmgr get-devices

# pull in newest update database
fwupdmgr refresh

# gets the list of updates for connected hardware
fwupdmgr get-updates

# run the updates
fwupdmgr update
```
Hope this helps you and others with similar issues.