Context
I'm setting up a server that acts as a central logging collector. Logs are sent via rsyslog to a Fluentd Docker container, and I need the setup to also collect the logs generated by the collector server itself.
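For completeness, the local forwarding rule looks roughly like this (the file name and port 5140 are illustrative here, simplified from the real config):
root@myserver:~# cat /etc/rsyslog.d/50-forward.conf # file name illustrative
# Forward everything to the Fluentd container via the eth1 address
*.* action(type="omfwd" target="10.114.0.5" port="5140" protocol="tcp")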
Problem
Due to specific logging-management requirements, the local logs generated by the collector server must be sent to the container (which runs in host network mode) over the internal network interface (eth1) rather than the public one (eth0), so that they arrive from the appropriate internal IP address. However, despite specifying the eth1 IP address as the destination, the connection is made with the eth0 IP address as its source.
Troubleshooting
For debugging purposes I reproduced the issue with ncat. Here's what I tried:
root@myserver:~# ncat 10.114.0.5 500 # connect to eth1 IP
On the same server, I set up a listener to check incoming connections:
root@myserver:~# ncat -nlvp 500 # receive connection from eth0 IP
Ncat: Version 7.93 ( https://nmap.org/ncat )
Ncat: Listening on :::500
Ncat: Listening on 0.0.0.0:500
Ncat: Connection from 164.X.X.X.
Ncat: Connection from 164.X.X.X:41946.
Despite connecting specifically to 10.114.0.5 (the IP associated with eth1), the incoming connection is registered as coming from the public eth0 IP (164.X.X.X).
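In case it's useful, these are the checks I can run to see which route and source address the kernel actually selects, and whether Docker's NAT rules rewrite anything; I can post their output if helpful:
root@myserver:~# ip route get 10.114.0.5 # route and source address chosen for the target
root@myserver:~# ip rule show # policy-routing rules in effect
root@myserver:~# iptables -t nat -S # Docker-managed NAT rules that might rewrite the source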
Questions
- Why is the connection not using the internal interface (eth1), even though I'm connecting to its address?
- How can I force the server to use eth1 (i.e. the 10.114.0.5 source address) when sending logs to itself? A partial workaround for the ncat test is sketched below, but the real sender is rsyslog.
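For the ncat test, the source address can at least be requested explicitly with Ncat's -s option (whether that bind survives whatever is rewriting the source is exactly what I don't understand):
root@myserver:~# ncat -s 10.114.0.5 10.114.0.5 500 # ask ncat to use the eth1 address as source
That only covers manual tests, though; rsyslog is the real sender, so I'd prefer a fix at the routing level.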
Additional info
Server OS: Debian 12
Relevant interfaces
root@myserver:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether a2:ac:35:ad:8d:99 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 164.X.X.X/20 brd 164.X.X.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.19.0.8/16 brd 10.19.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a0ac:35ff:fead:8d99/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 32:14:3e:16:c0:13 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
inet 10.114.0.5/20 brd 10.114.15.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::3014:3eff:fe16:c013/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:0a:99:c8:61 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
IP routes
root@myserver:~# ip route show
default via 164.X.X.1 dev eth0
10.19.0.0/16 dev eth0 proto kernel scope link src 10.19.0.8
10.114.0.0/20 dev eth1 proto kernel scope link src 10.114.0.5
10.114.0.0/20 dev br-4cd91dafbc4e proto kernel scope link src 10.114.0.1 linkdown
10.114.0.5 dev eth1 scope link src 10.114.0.5
164.X.X.0/20 dev eth0 proto kernel scope link src 164.X.X.X
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
I have also tried changing the default route (sketched below), although I don't think that should matter since the more specific routes above take precedence anyway; in any case, it had no effect.
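The attempt looked roughly like this (illustrative; I restored the original default route afterwards):
root@myserver:~# ip route replace default dev eth1 # no change in the observed source IP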
Any ideas would be greatly appreciated!