Below is a short, simplified diagram of my setup (sorry for the title):
Internet ----- eth0 (1.2.3.4) --- br0 (10.0.0.1)
                                   |
                   +---------------+---------------+
                   |                               |
           veth0 (10.0.0.2)                veth1 (10.0.0.3)
                httpd                           app
I forward external traffic via the usual DNAT/SNAT (using nftables) from eth0 to the appropriate service (e.g. to httpd, if someone talks to port 443 on eth0). I can also talk between containers (e.g. from app to httpd), or between containers and the “host” (via br0), as long as they use the internal IP addresses (10.0.0.x). All of this works without having to turn on route_localnet.
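For context, the working part of the setup looks roughly like this. This is only a minimal sketch: the table name contBr is the one from my rules, but the chain layout, priorities, and the masquerade rule are reconstructed from memory and may differ in detail:

```nft
# sketch of the working DNAT/SNAT forwarding -- chain names and
# priorities are assumptions, only the overall pattern is authoritative
table ip contBr {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        # forward external HTTPS traffic on eth0 to the httpd container
        iif "eth0" tcp dport 443 dnat to 10.0.0.2
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        # rewrite container source addresses on the way out via eth0
        oif "eth0" ip saddr 10.0.0.0/24 masquerade
    }
}
```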
What I can’t seem to make work is the case where, for example, the container app tries to talk to httpd using the external address (1.2.3.4), e.g. app sending a request to 1.2.3.4:443. That it uses 1.2.3.4 rather than the internal 10.0.0.2 is beyond my control (I might be able to fix some cases by adding a split-DNS view, but certainly not all, so it’s not really worth it).
When I try to talk from container app to httpd using the external address, for example with socat (socat -4 - TCP4:1.2.3.4:443), I get EHOSTUNREACH (“No route to host”).
I tried adding a separate DNAT rule for this case, e.g.: nft add rule ip contBr prerouting iif "br0" ip daddr 1.2.3.4 tcp dport { 80, 443 } dnat to 10.0.0.2. The idea was that since this only changes the destination address before routing, the kernel will know where to send the packet during routing, and the source address stays 10.0.0.3, so the reply should find its way back. It doesn’t work, though: now, when talking from app to httpd, I get ETIMEDOUT (“Connection timed out”) – which shouldn’t be caused by any filter rule of mine, since I reject everything I don’t want rather than silently dropping it.
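To make the attempt reproducible, here is that hairpin rule as it would sit in the ruleset. Only the iif/daddr/dport/dnat rule itself is verbatim from my attempt; the surrounding table/chain declaration is an assumption about how the chain is hooked:

```nft
# hairpin DNAT attempt in context -- table/chain layout assumed,
# only the "iif br0 ... dnat" rule is exactly what I added
table ip contBr {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        # redirect container traffic aimed at the external address
        # back to the httpd container
        iif "br0" ip daddr 1.2.3.4 tcp dport { 80, 443 } dnat to 10.0.0.2
    }
}
```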
I’m not sure what exactly to search for, here or elsewhere on the internet.