Mapping IPv6 addresses to Docker Containers

Getting the question up front:

I’m thinking about assigning a bunch of static ipv6 addresses to my container host, then mapping each of those to my containers. I’m trying to figure out if it makes sense. Is there a security angle I’m missing? A maintenance/reliability angle I’m missing? Is something gonna bite me later that I’m not seeing now?

Background

I use a bunch of docker containers at home for home stuff. It’s all generally working, and I’m thinking about making a change. I recently did some solid IPv6 activation around the house, so I have a lot of IPv6 working right. In the past, the main way I multiplexed containers on a single IPv4 address was by ports. So my container host would have IPv4 10.0.0.2 assigned to it. Container A would get 10.0.0.2 port 5000, and container B would get 10.0.0.2 port 6000 and so on. I’d put a load balancing web server like Caddy in front. The Caddy container gets ports 80 and 443 and internally routes to all the various containers. Inside my firewall, if I want to test Container A without passing through Caddy container, I can just hit port 5000 on the 10.0.0.2 port.
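For concreteness, the current port-based setup looks roughly like this (the container names and images here are just placeholders, not my actual services):

```shell
# Each app publishes a distinct host port on the one IPv4 address.
docker run -d --name container-a -p 10.0.0.2:5000:80 some-image-a
docker run -d --name container-b -p 10.0.0.2:6000:80 some-image-b

# Caddy gets the well-known ports and proxies to the apps internally.
docker run -d --name caddy -p 10.0.0.2:80:80 -p 10.0.0.2:443:443 caddy
```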

My Container Host

I have an entire /64 assigned to this one Docker host. Assume it is 2001:db8:2::/64. My container host, which runs Alpine Linux, has an IPv6 config like this in /etc/network/interfaces:

iface eth0 inet6 auto
        hostname boxes
iface eth0 inet6 static
        address 2001:db8:2::5
iface eth0 inet6 static
        address 2001:db8:2::6
iface eth0 inet6 static
        address 2001:db8:2::7
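Assuming this works the way I expect, the per-container mapping would just be an IPv6 host bind in the publish flag, using the addresses from the /64 above (image names are placeholders):

```shell
# Each container publishes port 80 on its own host IPv6 address.
# Brackets are required around IPv6 addresses in -p, so quote the arg.
docker run -d --name container-a -p '[2001:db8:2::5]:80:80' some-image-a
docker run -d --name container-b -p '[2001:db8:2::6]:80:80' some-image-b
docker run -d --name container-c -p '[2001:db8:2::7]:80:80' some-image-c
```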

That means I have three IPs I can assign out (and it’s trivial to just create 20 of these and call it a day). So rather than 10.0.0.2 port 5000, I can have 2001:db8:2::5 port 80, and I can assign an AAAA record for A.containers.example.org that points to 2001:db8:2::5. No need to remember which port that container is on, and I can give every container/IP address a friendly name using AAAA records. Instead of a URL like http://containerhost.example.org:5000/ I can just put http://A.containers.example.org in my browser and go straight to container A. Likewise B.containers.example.org goes straight to container B, and I don’t have to remember that container B is on port 6000.
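The DNS side would then just be a handful of AAAA records, something like this zone fragment (names follow the example above):

```
A.containers.example.org.  IN  AAAA  2001:db8:2::5
B.containers.example.org.  IN  AAAA  2001:db8:2::6
C.containers.example.org.  IN  AAAA  2001:db8:2::7
```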

For public/outside access, I still have https://containerA.example.org/ which points to my Caddy container, terminates TLS, and uses SNI to find the internal container to serve the request.
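A minimal Caddyfile sketch of that public-facing piece might look like this (hostnames and upstream names are from the examples above, and Caddy handles the TLS certificates automatically):

```
containerA.example.org {
    reverse_proxy container-a:80
}
containerB.example.org {
    reverse_proxy container-b:80
}
```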

I feel like I haven’t seen a lot of people do stuff like this, even in examples. So it makes me think there’s some really bad idea that I’m not seeing.

Anyone have thoughts?