Tuesday, November 05, 2019

Angular in Docker Containers for Development

I've been using Google login for authentication in my application. The chain of events is as follows:

  1. In the browser, a Google login appears, where you either enter your account information or select an already logged in Google account.
  2. The Google login libraries talk back and forth and come up with a token.
  3. The app sends the token to the node application, which verifies its validity, extracts the identity of the user, checks it against the allowed users, then responds with the authentication state to the app in the browser.
  4. The angular app watches all this happen in a guard, and when you are authenticated it routes you to wherever you wanted to go.
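Step 3 on the node side has roughly this shape. The verifyIdToken call shown in the comment is from Google's google-auth-library; the isAllowedUser helper, CLIENT_ID, and the allow-list are my own illustration, not code from this post.

```typescript
// Sketch of step 3. The only real library call is google-auth-library's
// verifyIdToken (shown in the comment); isAllowedUser and the names around
// it are illustrative.

// Pure allow-list check: is this verified email one of the allowed users?
function isAllowedUser(email: string | undefined, allowed: string[]): boolean {
  return email !== undefined && allowed.includes(email.toLowerCase());
}

// In the node handler, something like:
//
//   import { OAuth2Client } from "google-auth-library";
//   const client = new OAuth2Client(CLIENT_ID);
//   const ticket = await client.verifyIdToken({ idToken, audience: CLIENT_ID });
//   const email = ticket.getPayload()?.email;
//   respondWith({ authenticated: isAllowedUser(email, ALLOWED_USERS) });
```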
It all works fine, but I was running into two issues.
The first: how do you authenticate a websocket connection? I wrote logic where the token is sent over the socket and the connection is maintained only if the token is valid, but I don't trust my own code when it comes to security.
The second issue is that the normal garbage traffic that hits any server is served a large app bundle, putting an unnecessary load on the server. Even if you lazy load and start with a simple login page, the bundle is not insignificant.

I was foreseeing complications as I built out my app. I wanted security that was simple, audited by people who know what they're doing, covered websockets and api calls, and didn't burden the server.

I ran across an application called oauth2_proxy, which seems to solve my problem. You put your application and all the api routes behind this proxy, which authenticates via the numerous oauth2 services available, including Google.

I set it up and got it working, then realized that I needed something very similar to my server deployment on my development machine. I know from experience that the setup of these things is complex and long, and I wanted to figure it out once, then change a few things and have it ready for deployment. Docker came to mind, partly because oauth2_proxy has a docker image.

So my structure is as follows. I have it basically working; no doubt I'll find a bunch of issues, but that is why I wanted it on a development machine first. I'm using docker-compose to put the whole thing together, and the goal is to have it ready to go with one command.

  1. Nginx as a front-facing proxy. The docker image takes a configuration file, and it routes to the nodejs api applications, websockets, and all the bits and pieces.
  2. Oauth2_proxy for authentication. I'm using the nginx auth_request feature: a request comes into nginx, and on locations needing authentication it calls oauth2_proxy, then routes either to a login page or to the desired route.
  3. A Nestjs server application that handles the api calls.
  4. A second nodejs application that does a bunch of work.
  5. A third nodejs application that serves websockets.
  6. Mongodb as the data store. The websocket microservice subscribes to changes and sends updates to the app in the browser.
  7. For development, a docker image which runs angular-cli's ng serve behind nginx. The nodejs applications are served the same way, meaning they recompile when the code changes.
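The auth_request wiring in item 2 looks roughly like this in the nginx configuration. This is a sketch based on the pattern in the oauth2_proxy documentation; the upstream names (oauth2_proxy, angular) are whatever the services are called in docker-compose, and 4180 is oauth2_proxy's default port.

```nginx
# Sketch of the auth_request wiring; upstream names match docker-compose.
location /oauth2/ {
    proxy_pass       http://oauth2_proxy:4180;
    proxy_set_header X-Real-IP $remote_addr;
}

location = /oauth2/auth {
    proxy_pass              http://oauth2_proxy:4180;
    # auth_request subrequests must not carry a body
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
}

location / {
    # nginx asks oauth2_proxy whether this request is authenticated
    auth_request /oauth2/auth;
    # not authenticated: send the browser into the login flow
    error_page 401 = /oauth2/sign_in;
    proxy_pass http://angular:4200;
}
```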
So how does it look? I'll go through it piece by piece. There were some gnarly bits which swallowed too much time, with dastardly simple solutions obvious only in retrospect.

Setting up a MonoRepo with Nx

When I started poking around with this idea I found that the structure of my application was lacking. Things like shared code between Angular and Nestjs, and the serve and build setup for the node applications, didn't work very well. A very nice solution is the Nx system. It required a bit of work and thought to move things around, but in the end I have a setup where ng serve api starts the node application in development mode. https://nx.dev/angular/getting-started/getting-started shows how to install the system. When you install it, it asks about the structure of your application; I selected Angular with a Nestjs backend. It creates a very nice skeleton.
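Assuming the Nx tooling as it was around Angular 9, the setup is roughly these commands (the workspace name is mine):

```shell
# Create the Nx workspace; the installer prompts for the layout.
# The angular-nest preset gives an Angular frontend with a Nestjs api app.
npx create-nx-workspace myworkspace --preset=angular-nest

# Afterwards, the node application serves in development/watch mode with:
ng serve api
```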

Running Angular Cli in Docker

This is really neat. Here is the Dockerfile.

FROM node

ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME

RUN npm -g install @angular/cli@9.0.0-rc.0

EXPOSE 4200

USER 1000

Put this Dockerfile in the same directory as the package.json file in the Nx structure. I call it Dockerfile.angular, since I have many dockerfiles there.

Then add this service to docker-compose.yml, the docker-compose configuration:

angular:
  container_name: angular
  build:
    context: .
    dockerfile: Dockerfile.angular
  ports:
    - "4200"
  volumes:
    - .:/usr/src/app
  command: ng serve --aot --host 0.0.0.0

The volumes: entry lets the container see the current directory, then ng serve runs and serves the application. I'm using it behind an Nginx proxy, so the port is only visible on the docker network. You might want to map it as 4200:4200 to use it without Nginx.
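For example, to use the dev server directly from the host without Nginx, the ports entry would take the standard compose host:container form:

```yaml
ports:
  - "4200:4200"   # host:container; reachable at http://localhost:4200
```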

The node applications are identical except for the Dockerfile EXPOSE statement, where I set the value to the port that Nestjs listens on. And instead of ng serve, this is what the docker-compose.yml entry looks like:

scripts:
  container_name: scripts
  build:
    context: .
    dockerfile: Dockerfile.scripts.dev
  ports:
    - "3333"
  volumes:
    - .:/usr/src/app
  command: ng serve scripts
  depends_on:
    - mongo-replicator
  secrets:
    - mongodb_username
    - mongodb_userpwd
    - mongodb_path

ng serve scripts runs the node application. There are a couple of things here that I will get into in future posts.
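Docker mounts each secret from the secrets: list as a file under /run/secrets inside the container. A minimal sketch of reading them from node; the readSecret helper and the connection-string assembly are my own illustration, only the secret names come from the compose entry above.

```typescript
// Hypothetical helper for reading Docker secrets, which are mounted as
// plain files under /run/secrets inside the container.
import { readFileSync } from "fs";
import { join } from "path";

function readSecret(name: string, dir: string = "/run/secrets"): string {
  // Each secret is a file named after the secret; trim the trailing newline.
  return readFileSync(join(dir, name), "utf8").trim();
}

// Building a MongoDB connection string from the secrets in the compose file:
//
//   const user = readSecret("mongodb_username");
//   const pwd  = readSecret("mongodb_userpwd");
//   const path = readSecret("mongodb_path");
//   const uri  = `mongodb://${user}:${pwd}@${path}`;
```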

ng serve --aot --host 0.0.0.0: this was one of the sticky things I had to figure out. The default host is localhost, and the websockets for live-reloading the app in the browser won't work unless you set the host correctly.

More to come.

  1. Docker secrets and using them in the various images
  2. Setting up Mongo.
  3. The Oauth2_proxy configuration
  4. Nginx configuration


