Wednesday, November 13, 2019
The Secrets of Docker Secrets
Most web apps need credentials of some kind, and it is a bad idea to put them in your source code, where they get saved to a git repository that everyone can see. Usually they are handled with environment variables, but Docker has come up with what it calls Docker secrets. The idea is deceptively simple in retrospect, but while you are figuring it out it is arcane, and it is difficult to parse what is going on.
Essentially, the secrets function creates in-memory files in the Docker container that hold the secret data. The data can come from local files or from a Docker swarm.
The first thing to know is that the application running in the Docker container needs to be written to take advantage of Docker secrets. Instead of getting a password from an environment variable, it gets the password from the file system at /run/secrets/secretname. Not all available images use this functionality; if they don't describe how to use Docker secrets, they won't work. The files will be created in the container, but the application won't read them.
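As a sketch of what the application side looks like, here is a small Node.js helper that reads a secret from /run/secrets. The fallback to an environment variable is my own convention, not something Docker provides, and the secret name is just an example:

const fs = require('fs');

// Read a Docker secret mounted at /run/secrets/<name>. If it isn't
// there (e.g. running outside Docker), fall back to an environment
// variable of the same name in upper case (my convention, not Docker's).
function readSecret(name) {
  try {
    // Secrets are mounted as plain files; trim the trailing newline.
    return fs.readFileSync(`/run/secrets/${name}`, 'utf8').trim();
  } catch (err) {
    return process.env[name.toUpperCase()];
  }
}

const mongoPassword = readSecret('mongodb_userpwd');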
For a development setup, keeping the files outside of the git source tree works well. I created a folder called serverdata, with dev/ and prod/ folders within. In the dev/ folder, run a command like this for each piece of secret data you will need:
echo "shh, this is a secret" > mysecret.txt

The names simply need to tell you what they do; what the secret is called in the container is set in the Docker configuration. This is what my dev/ folder looks like:
-rw-r--r-- 1 derek derek 66 Nov 5 14:49 mongodb_docker_path
-rw-r--r-- 1 derek derek 6 Oct 22 14:09 mongodb_rootusername
-rw-r--r-- 1 derek derek 13 Oct 22 14:08 mongodb_rootuserpwd
-rw-r--r-- 1 derek derek 18 Oct 22 14:10 mongodb_username
-rw-r--r-- 1 derek derek 14 Oct 22 14:10 mongodb_userpwd
-rw-r--r-- 1 derek derek 73 Oct 22 14:02 oauth2_clientid
-rw-r--r-- 1 derek derek 25 Oct 22 14:02 oauth2_clientsecret
-rw-r--r-- 1 derek derek 14 Oct 22 14:03 oauth2_cookiename
-rw-r--r-- 1 derek derek 25 Oct 22 14:04 oauth2_cookiesecret
-rw-r--r-- 1 derek derek 33 Oct 26 08:27 oauth2_redirecturl
The file names describe what each value is for. I keep some plain configuration details in there as well, not just secrets.
Using Secrets with docker-compose
This is the docker-compose.yml entry that builds a MongoDB container with all the configuration.
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
And the secrets are defined as follows:
secrets:
  mongodb_rootusername:
    file: ../../serverdata/dev/mongodb_rootusername
  mongodb_rootuserpwd:
    file: ../../serverdata/dev/mongodb_rootuserpwd
  mongodb_username:
    file: ../../serverdata/dev/mongodb_username
  mongodb_userpwd:
    file: ../../serverdata/dev/mongodb_userpwd
  mongodb_path:
    file: ../../serverdata/dev/mongodb_docker_path

The secrets: section reads the contents of each file into a named secret, and that name becomes the file name under /run/secrets/ in the container. The Mongo Docker image looks for environment variables with the _FILE suffix and reads the corresponding value from that file in the container's file system. MONGO_INITDB_ROOT_USERNAME_FILE and MONGO_INITDB_ROOT_PASSWORD_FILE are the only two variables the Mongo image supports this way.
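A quick sanity check I find useful (my own suggestion, not part of the original setup) is to list the mounted secrets inside a running container to confirm they arrived:

docker exec mongo-replicator ls /run/secrets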
Of course it gets more complicated. I wanted to watch for changes in the database from my node application for various purposes, and Mongo only supports change streams on a replica set. Fully automating the configuration and initialization of a replicated Mongo setup in Docker requires a second Docker image that waits for the Mongo instances to initialize, then runs a script. So here is the complete docker-compose.yml for setting up the images:
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
  mongo-primary:
    container_name: mongo-primary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    environment:
      MONGO_INITDB_DATABASE: admin
    ports:
      - "27019:27017"
    networks:
      - mongo-cluster
  mongo-secondary:
    container_name: mongo-secondary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    ports:
      - "27018:27017"
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary

The Dockerfile for the mongo-replicator looks like this:
FROM mongo:latest
ADD ./replicate.js /replicate.js
ADD ./seed.js /seed.js
ADD ./setup.sh /setup.sh
CMD ["/setup.sh"]

It is the standard Mongo image with various scripts added to it. Here they are.
replicate.js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-primary:27017" },
    { _id: 1, host: "mongo-secondary:27017" },
  ]
});

seed.js
db.users.updateOne(
  { email: "myemail@address.com" },
  { $set: { email: "myemail@address.com", name: "My Name" } },
  { upsert: true },
);

And finally, what does all the work: setup.sh
#!/usr/bin/env sh
if [ -f /replicated.txt ]; then
  echo "Mongo is already set up"
else
  echo "Setting up mongo replication and seeding initial data..."
  # Wait a few seconds until the mongo server is up
  sleep 10
  mongo mongo-primary:27017 replicate.js
  echo "Replication done..."
  # Wait a few seconds until replication takes effect
  sleep 40
  MONGO_USERNAME=`cat /run/secrets/mongodb_username | tr -d '\n'`
  MONGO_USERPWD=`cat /run/secrets/mongodb_userpwd | tr -d '\n'`
  mongo mongo-primary:27017/triggers <<EOF
rs.slaveOk()
use triggers
db.createUser({ user: "$MONGO_USERNAME", pwd: "$MONGO_USERPWD", roles: [ { role: "dbOwner", db: "admin" }, { role: "readAnyDatabase", db: "admin" }, { role: "readWrite", db: "admin" } ] })
EOF
  mongo mongo-primary:27017/triggers seed.js
  echo "Seeding done..."
  touch /replicated.txt
fi

In the docker-compose.yml, depends_on: orders the startup of the containers, so this one waits until the others are running. It runs replicate.js, which initializes the replica set, then waits for a while. The username and password are read from the /run/secrets/ files, the trailing linefeeds removed, and then the user is created in the Mongo database. Finally, seed.js is called to add more initial data.
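If the hard-coded sleeps ever prove flaky, one suggestion of mine (not part of the original scripts) is to check from the mongo shell that both members actually joined before seeding:

// Run against the primary (mongo mongo-primary:27017); expect to see
// one PRIMARY and one SECONDARY before moving on to createUser and seed.js.
rs.status().members.forEach(function (member) {
  print(member.name + " : " + member.stateStr);
});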
This sets up MongoDB with an admin user and password, as well as a user that the Node.js apps use for reading and writing data.
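To show where this ends up, here is a sketch (not my app's actual code) of the Node.js side: building the connection from the mounted secrets with the official mongodb driver, then opening the change stream that all this replication exists for. The database name triggers comes from setup.sh, and readSecret is the helper sketched earlier.

const { MongoClient } = require('mongodb');

async function watchUsers() {
  // Credentials come from the Docker secrets mounted by docker-compose.
  const user = readSecret('mongodb_username');
  const pwd = readSecret('mongodb_userpwd');
  // Name both members of rs0 so the driver can follow the primary.
  const uri = `mongodb://${user}:${pwd}@mongo-primary:27017,mongo-secondary:27017/triggers?replicaSet=rs0`;
  const client = await MongoClient.connect(uri);

  // Change streams are the whole reason for the replica set.
  const stream = client.db('triggers').collection('users').watch();
  stream.on('change', (change) => {
    console.log('users changed:', change.operationType);
  });
}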
No passwords in my git repository, and an initialized database. This works for my development setup: a Mongo database, replicated so that I can get change streams, with read and write access from the Node.js application.
More to come.
- Using secrets in node.js applications and oauth2_proxy
- The oauth2_proxy configuration
- Nginx configuration to tie the whole mess together
Tuesday, November 05, 2019
Angular in Docker Containers for Development
I've been using the Google login for authentication in my application. The chain of events is as follows:
- In the browser a Google login where you either enter your account information or select from an already logged in Google account.
- The Google login libraries talk back and forth, and come up with a token.
- The app sends the token to the node application, which verifies its validity, extracts the identity of the user, checks it against the allowed users, and responds with the authentication state to the app in the browser (sketched after this list).
- The angular app watches all this happen in a guard, and when you are authenticated routes to wherever you wanted to go.
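For reference, the server-side verification step looks something like this sketch, using Google's google-auth-library for Node.js; the exact code in my app differs, and loading the client ID from an environment variable here is just an assumption:

const { OAuth2Client } = require('google-auth-library');

// The Google OAuth2 client id, however you choose to load it.
const CLIENT_ID = process.env.OAUTH2_CLIENTID;
const authClient = new OAuth2Client(CLIENT_ID);

// Returns the verified email address, or throws if the token is bad.
async function verifyGoogleToken(idToken) {
  const ticket = await authClient.verifyIdToken({ idToken, audience: CLIENT_ID });
  return ticket.getPayload().email;
}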
It all works fine, but I was running into two issues.
The first issue: how do you authenticate a websocket connection? I wrote logic where the token is sent over the socket, and the connection is kept open if the token is valid, something like the sketch below. But I don't trust my own code when it comes to security.
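To make that concrete, the hand-rolled logic was along these lines (a sketch assuming socket.io, which is an assumption on my part here; verifyGoogleToken is the sketch above):

const socketIo = require('socket.io');
const io = socketIo(3001); // standalone socket.io server; the port is made up

// Reject the websocket connection unless a valid Google token was
// passed in the connection query. This is exactly the kind of
// home-grown security check I would rather delegate to a proxy.
io.use(async (socket, next) => {
  try {
    socket.userEmail = await verifyGoogleToken(socket.handshake.query.token);
    next(); // token is valid, keep the connection
  } catch (err) {
    next(new Error('unauthorized')); // drop the connection
  }
});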
The second issue is that the normal garbage traffic that hits any server gets served a large app bundle, putting an unnecessary load on the server. Even if you lazy load and start with a simple login page, the bundle is not insignificant.
I was foreseeing complications as I built out my app. I wanted security that is simple, audited by people who know what they are doing, covers websockets and api calls, and doesn't burden the server.
I ran across an application called oauth2_proxy, which seems to solve my problem. You put your application and all the api routes behind this proxy, which authenticates via any of the numerous OAuth2 services available, including Google.
I set it up and got it working, then realized that I needed something very similar to my server deployment on my development machine. I know from experience that the setup of these things is complex and long, and I wanted to figure it out once, then change a few things and have it ready for deployment. Docker came to mind, partly because oauth2_proxy has a Docker image.
I have it basically working; no doubt I'll find a bunch of issues, but that is why I wanted it on a development machine first. I'm using docker-compose to put the thing together, and the goal is to have everything ready to go with one command. So my structure is as follows:
- Nginx as a front-facing proxy. The Docker image takes a configuration file, and it routes to the nodejs api applications, websockets, and all the bits and pieces.
- Oauth2_proxy for authentication. I'm using the Nginx auth_request feature: when a request comes in, locations that need authentication first call oauth2_proxy, which routes either to a login page or on to the desired route.
- Nestjs server application that handles the api calls
- A second nodejs application that does a bunch of work.
- A third nodejs application that serves websockets.
- Mongodb as the data store. The websocket microservice subscribes to changes and sends updates to the app in the browser.
- For development, a Docker image that runs the Angular CLI's ng serve behind Nginx. The nodejs applications are served the same way, meaning they recompile when the code is changed.
So how does it look? I'll go through this piece by piece. There were some gnarly bits that swallowed too much time, with dastardly simple solutions obvious only in retrospect.
Setting up a MonoRepo with Nx
When I started poking around with this idea, I found that the structure of my application was lacking. Things like sharing code between Angular and Nestjs, and the serve and build setup for the node applications, didn't work very well. A very nice solution is the Nx system. It required a bit of work and thought to move things around, but in the end I have a setup where ng serve api starts the node application in development mode. https://nx.dev/angular/getting-started/getting-started shows how to install the system. When you install it, it asks about the structure of your application; I selected Angular with a Nestjs backend. It creates a very nice skeleton.
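For reference, the install from that guide boils down to one command; the interactive prompt that follows is where you pick the Angular plus Nestjs structure:

npx create-nx-workspace@latest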
Running Angular Cli in Docker
This is really neat. Here is the Dockerfile.
FROM node
ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME
RUN npm -g install @angular/cli@9.0.0-rc.0
EXPOSE 4200
USER 1000
Put this Dockerfile in the same directory as the package.json file in the Nx structure. I call it Dockerfile.angular, since I have many dockerfiles there.
Then, in a docker-compose.yml file, add this service configuration:
angular:
  container_name: angular
  build:
    context: .
    dockerfile: Dockerfile.angular
  ports:
    - "4200"
  volumes:
    - .:/usr/src/app
  command: ng serve --aot --host 0.0.0.0
The volumes: statement lets the container see the current directory; then ng serve runs and serves the application. I'm using it behind an Nginx proxy, so the port is only visible on the Docker network. You might want to publish it as 4200:4200 to use it without Nginx.
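For the record, that variant just changes the ports entry so the host can reach the container directly:

ports:
  - "4200:4200"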
The node applications are identical except for the Dockerfile EXPOSE statement, where I set the value to the port that the Nestjs app listens on. And instead of plain ng serve, this is what the docker-compose.yml entry looks like.
scripts:
  container_name: scripts
  build:
    context: .
    dockerfile: Dockerfile.scripts.dev
  ports:
    - "3333"
  volumes:
    - .:/usr/src/app
  command: ng serve scripts
  depends_on:
    - mongo-replicator
  secrets:
    - mongodb_username
    - mongodb_userpwd
    - mongodb_path

ng serve scripts runs the node application. There are a couple of things here that I will get into in future posts.
ng serve --aot --host 0.0.0.0

This is one of the sticky things I had to figure out. The default host is localhost, and the websockets for live-reloading the app in the browser won't work unless you set this correctly.
More to come.
- Docker secrets and using them in the various images
- Setting up Mongo.
- The Oauth2_proxy configuration
- Nginx configuration