I’m starting to get acquainted with Terraform, and I’m currently using it to set up multiple Nextcloud instances in a K8S cluster.
For efficiency’s sake I’m looking to have a single PostgreSQL database backend handle all of them, and I’d like to use the Terraform PostgreSQL provider to handle the creation of databases and role accounts for each Nextcloud instance.
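For reference, here’s roughly what I have in mind for the per-instance resources, using the cyrilgdn/postgresql provider; the host, credentials, and instance names below are placeholders, not my real setup:

```hcl
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

variable "postgres_admin_password" {
  type      = string
  sensitive = true
}

variable "nextcloud_instances" {
  type    = set(string)
  default = ["cloud-a", "cloud-b"] # placeholder instance names
}

# This is the crux of the problem: the provider needs a host it can
# reach at plan/apply time, and I don't want that endpoint to exist
# outside of the Terraform run.
provider "postgresql" {
  host     = "postgres.example.internal" # placeholder
  port     = 5432
  username = "postgres"
  password = var.postgres_admin_password
  sslmode  = "disable"
}

resource "random_password" "db" {
  for_each = var.nextcloud_instances
  length   = 32
}

# One login role per Nextcloud instance.
resource "postgresql_role" "nextcloud" {
  for_each = var.nextcloud_instances
  name     = "nextcloud_${each.key}"
  login    = true
  password = random_password.db[each.key].result
}

# One database per Nextcloud instance, owned by its role.
resource "postgresql_database" "nextcloud" {
  for_each = var.nextcloud_instances
  name     = "nextcloud_${each.key}"
  owner    = postgresql_role.nextcloud[each.key].name
}
```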
Since this particular database will only be used by Nextcloud as a backend, I’d like to avoid exposing it to the internet; there’s simply no need for it. If I were doing this manually I’d just use kubectl exec or kubectl port-forward to reach the database when I need to. From what I can see of the Terraform PostgreSQL provider, though, there doesn’t seem to be an option to create such a link only for the duration of the Terraform run.
Is this something others have run into as well, or should I just bite the bullet, expose the PostgreSQL database, and rely on IP whitelisting to shield it as well as possible?
Update: To address the close reason: I’m well aware that it’s possible to create SSH tunnels or use kubectl port-forward to temporarily expose the PostgreSQL service endpoint. That’s not what I’m asking. I’m asking whether it’s possible to set that up within Terraform, so that the exposure only exists while Terraform runs and is properly cleaned up after Terraform finishes its job.
Second update: I’m apparently not being clear enough. I am not looking for a duct-tape-and-cardboard-tube solution like an SSH tunnel; if I wanted to resort to that I’d simply run kubectl port-forward -n my-namespace svc/postgresql-instance 5432:5432 and tell Terraform to connect to localhost.
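With such a forward running in another shell for the duration of the run, the provider block would just point at the forwarded port, something like this sketch:

```hcl
# Only works while something outside Terraform (e.g. the
# kubectl port-forward above) is listening on localhost:5432
# for the entire plan/apply.
provider "postgresql" {
  host     = "localhost"
  port     = 5432
  username = "postgres"
  password = var.postgres_admin_password
  sslmode  = "disable"
}
```

But that forward lives entirely outside Terraform’s lifecycle, which is exactly what I’m trying to avoid.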
What I’m asking for is a way to do that, or something equivalent, from within Terraform itself. If that’s not possible, then I’ll accept “That isn’t possible” as an answer and find another way. Please stop referring me to the SSH tunnel in replies.