In fact, everything is already connected; I just need to make it dynamic (maybe that's not the right term). I'm working on a system that consists of two separate, independent containers: a .NET Core backend and an Angular frontend (served by nginx).
The Angular app builds its connection URLs from environment variables: the baseUrl variable holds the URL used to reach the backend. For local development this is http://localhost:5000, and everything works great: CORS accepts the calls, JWTs are issued correctly, and the WebAPI returns values.
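For context, this is roughly how the frontend consumes that variable; the service and endpoint names here are just illustrative, not my actual code:

api.service.ts (Angular, sketch)

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../environments/environment';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  // Every backend call is built from the compile-time baseUrl.
  getValues() {
    return this.http.get(`${environment.baseUrl}/api/values`);
  }
}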
This product is meant to run on a single VM per client, and every client will have its own server IP to access the app. The baseUrl for the Angular frontend is hardcoded in environment.prod.ts, which means I'd have to change it for every deployment (client), and I don't think that's the right approach.
I've tried to make this work by using a host alias and passing that alias with the --link flag in docker run, but it didn't work. Something like:
environment.prod.ts (Angular)
export const environment = {
  production: true,
  baseUrl: 'http://backendalias:5000'
};
Running with the command:
docker run --rm -d -p 5500:80 --link=backend_container:backendalias frontendimage
But with this setup the frontend can't reach the backend, and no connection is established.
I need to make this implementation more dynamic so I don't have to change the codebase for every client.
Is this approach right? Do I need to change the Docker network configuration via the --network flag? What do you suggest for this kind of product/system?
Ideally, I want to control this with docker run commands.
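For reference, this is the kind of command-level setup I imagine the --network flag implies; the network name appnet and the image names are just examples:

docker network create appnet
docker run --rm -d --network=appnet --name=backend_container backendimage
docker run --rm -d --network=appnet -p 5500:80 frontendimage

(On a user-defined network, containers can resolve each other by container name, so --link wouldn't be needed.)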
Use docker-compose and Nginx for routing. If you want, I can give you that solution. It would be a generic solution, and you wouldn't need to set things up using env variables.
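For concreteness, a minimal sketch of that direction; the service names and images are placeholders, not a tested setup:

docker-compose.yml (sketch)

version: "3.8"
services:
  backend:
    image: backendimage
    expose:
      - "5000"
  frontend:
    image: frontendimage
    ports:
      - "5500:80"
    depends_on:
      - backend

The idea would be that the frontend's nginx proxies API paths (e.g. /api) to http://backend:5000 over the compose network, so the Angular build can use a relative baseUrl instead of a per-client absolute one.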