Sunday, April 25, 2021
Ngrx Selectors
I look at selectors in Ngrx as queries on the data store. They are the method of getting data from the store to the components in the shape of an observable.
this.dataobservable = this.store.select(selectorfunction);
The thing to remember is that store.select passes the entire state to the selector function. Let's lay out a state structure and see how selectors would work.
export const getJobState = createFeatureSelector<JobClientState>(jobFeatureKey);
Remember, this.store.select(selector) passes the entire state tree to the selector. If this wasn't a feature state, the first selector would look like this:
export const getJobState = (state: AppState) => state.Job;
where "Job" is jobFeatureKey.
Where do I put this? The file structure of where you place your selectors is important. In this instance, the data from one entity will be needed by another to assemble the data, and if you aren't careful you can create a circular dependency between files. The solution is to build a tree: create a file for each Entity or state property, then create a file where the different entities or properties are combined.
Let's start with the jobs state selectors. This is an Entity state, which exposes selectors for the data. Entity state looks like this:
{ ids: string[], entities: Dictionary<T>}
Dictionary is an object. You access the data with the id, and ids is an array of ids. The id is derived from the object itself; you need a unique id for each entity.
entities[id]
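To make that shape concrete, here is a minimal sketch; the Job type and its fields are invented for illustration:

```typescript
// A sketch of Ngrx Entity state for a hypothetical Job type.
// Dictionary<T> is just an object keyed by id.
interface Job { id: string; title: string; }

const jobState: { ids: string[]; entities: { [id: string]: Job } } = {
  ids: ["a1", "b2"],
  entities: {
    a1: { id: "a1", title: "first job" },
    b2: { id: "b2", title: "second job" },
  },
};

// Lookup is a direct property access, no array search needed.
const job = jobState.entities["a1"];
console.log(job.title); // "first job"
```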
Entity selectors are generated by the schematic, and look like this
export const {
selectIds,
selectEntities,
selectAll,
selectTotal,
} = adapter.getSelectors();
With the feature selector, and the Entity selectors, we can then combine selectors and drill down to the data we want. So for the job state:
export const getJobs = createSelector(getJobState, (state) => state.jobs);
export const getJobIds = createSelector(getJobs, jobs.selectIds);
export const getJobEntities = createSelector(getJobs, jobs.selectEntities);
Each of the JobClientState properties that are Entity State will have the same type of selectors.
What about the Query and Primary states? Query is for the list of jobs to be displayed for the user to select. JobPrimary is the selected job, and has a similar selector.
export const getJobQuery = createSelector(getJobState, (state) => state.jobquery);
When a job is selected, the user navigates to an edit or view url, with the id. The router state is subscribed to, and the jobprimary state is set with that id. The view then uses this selector to get the selected job:
export const getJobPrimary = createSelector(getJobState, (state) => state.jobprimary);
export const JobPrimaryEntity = createSelector(getJobPrimary, getJobEntities, (primary, entities) => entities[primary]);
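As a mental model, createSelector just composes functions: each selector takes the whole state, and combined selectors call the upstream ones and project their results (Ngrx adds memoization on top). A plain-TypeScript sketch with a simplified state shape:

```typescript
// Simplified state shape for illustration only, not the real app state.
interface AppState {
  Job: {
    jobprimary: string;
    entities: { [id: string]: { id: string; name: string } };
  };
}

// Each selector is a function of the whole state tree.
const getJobState = (state: AppState) => state.Job;
const getJobEntities = (state: AppState) => getJobState(state).entities;
const getJobPrimary = (state: AppState) => getJobState(state).jobprimary;

// The combined selector, like JobPrimaryEntity above:
// the id from one slice indexes into the entities of another.
const getPrimaryEntity = (state: AppState) =>
  getJobEntities(state)[getJobPrimary(state)];

const state: AppState = {
  Job: { jobprimary: "a1", entities: { a1: { id: "a1", name: "selected job" } } },
};
console.log(getPrimaryEntity(state).name); // "selected job"
```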
Friday, April 02, 2021
NGRX Effects
"In computer science, an operation, function or expression is said to have a side effect if it modifies some state variable value(s) outside its local environment, that is to say has an observable effect besides returning a value (the main effect) to the invoker of the operation."
In the Redux and Ngrx pattern, application state is modified in one way only: an action is dispatched, and the reducer function returns a modified state. An action that does something other than that is a side effect, and in Ngrx is handled by an Effect.
To work with Ngrx Effects you need two things.
- An understanding of how Effects work and what they can do.
- The ability to work with Rxjs and its operators.
An Effect is created with createEffect, which takes a factory returning an observable, and an options object, { dispatch: boolean }.
export class AuthenticationEffects {
  authenticated$ = createEffect(
    () =>
      this.actions$.pipe(
        filter((action: Action) => action.type === 'the action type we want to find'),
        ...
      ),
    { dispatch: false }
  );
}
When the user is authenticated, such an effect might:
- Navigate to a route.
- Initialize a websocket service.
- Fetch roles and permissions for that user from the api.
- Load some common data for use in the app.
- Notify the user.
Here we have what could be called nested side effects. Effects are a side effect to the action dispatch => reducer => modify state path. Tap is a side effect of the observable stream. Actions => filter the type => do something else without affecting the stream. Tap receives the data emitted, and allows you to do things without affecting the data stream. The next operator in line will receive the same data.
observable.pipe(
  tap(value => console.log('value emitted', value))
)
const myarray = [ 1, 2, 3 ]
const arrayobservable = of(myarray).pipe( // emits the array
  map((items: number[]) => items.map(item => item * 100))
).subscribe(value => console.log(value)) // [ 100, 200, 300 ] one emission

const itemobservable = from(myarray).pipe( // emits each item of the array
  map((item: number) => item * 100)
).subscribe(value => console.log(value)) // 100, 200, 300, three emissions
- mergeMap will run the inner observables in parallel as new emissions arrive, merging each inner result into the stream as it completes.
- switchMap will cancel the running inner observable and run the new one.
- exhaustMap will throw away any new incoming emissions until the inner observable is completed.
- concatMap will queue all the incoming emissions, doing them in sequence, letting each one complete before running the next.
The inner observable has a pipe. You can nest piped operators to your heart's content, but this one has a specific purpose.
Thursday, March 25, 2021
NGRX Actions
export interface Action { type: string }
This is what defines an Action.
Ngrx implements a message passing architecture, where Actions are dispatched.
this.store.dispatch(action)
The mechanism behind this is an ActionSubject, which is a BehaviorSubject with some extensions. When you dispatch an action, it is as simple as this.
this.actionsObserver.next(action);
The listeners subscribe to this action stream, either in the reducers which modify the state, or in an Effect. This simple structure allows you to build a message passing system which defines the data flows in your application.
Here is a list of some things you need to know:
- Actions are defined by { type: string }. The function name of your createAction function does not define the action.
- The string must be unique.
- Every reducer and every Effect sees every Action that is dispatched. This includes forFeature loaded reducers and Effects. Effects and reducers filter for the Actions they are listening for.
- The suggested pattern for the string is "[where it came from] what it does".
- Reducers and effects can listen to multiple Actions to do the same thing. So if you dispatch an action that loads your Contacts list from the contacts module and the invoice module, two different Actions can be defined, with 'where it came from' in the string. The reducers and effects accept a list of actions to respond to.
- Actions can have other properties. The command can be accompanied by data.
You will see the flow of commands and data as your application goes through its function.
Broadly, there are two types of Actions: those that the reducers listen for to modify the state, and those that don't modify state and are listened for in Effects. Reducers are pure functions, so when something asynchronous or a side effect needs to occur, an Effect listens instead. The typical example goes like this:
- Component dispatches an action to load data.
- an Effect listens for the load action, and does an api call to fetch the data.
- the Effect emits an action with the data attached.
- the reducer listens for that action and updates the state.
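The last two steps can be sketched in plain TypeScript; the Contact type and action strings here are made up for illustration:

```typescript
interface Action { type: string; }
interface Contact { id: string; name: string; }

// The success action carries the fetched data as an extra property.
interface LoadContactsSuccess extends Action {
  type: "[Contacts API] Load Success";
  contacts: Contact[];
}

interface ContactsState { contacts: Contact[]; loaded: boolean; }
const initialState: ContactsState = { contacts: [], loaded: false };

// A pure reducer: filters for the action it listens to, and
// returns a new state object rather than mutating the old one.
function contactsReducer(state = initialState, action: Action): ContactsState {
  if (action.type === "[Contacts API] Load Success") {
    const { contacts } = action as LoadContactsSuccess;
    return { ...state, contacts, loaded: true };
  }
  return state;
}

const next = contactsReducer(initialState, {
  type: "[Contacts API] Load Success",
  contacts: [{ id: "1", name: "Ada" }],
} as LoadContactsSuccess);
console.log(next.loaded, next.contacts.length); // true 1
```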
- selected file or files are inserted in the state
- a file or all files can be removed from the state
- a file or list of files are uploaded
- the upload progress is displayed
- success clears the state
- an error sets the status
Saturday, March 20, 2021
NGRX Normalization
Application state is the data required to render the views over time.
One of the things that makes Ngrx difficult is how to structure the state. There is no one answer, because the source, usage and modification of the state is different in every app.
This is how I approach it.
The path between the api and components represents a series of Ngrx actions and functions. This is one direction of the data flow.
- The component dispatches an action
- An effect watches for that action and runs an http call that fetches the data
- On success a second action is dispatched containing the data
- The reducer responds to the action and updates the state
- A selector emits the data in a shape useful for the component
- The component renders the view.
- The day selection component renders the selectedday, which comes from the router url.
- The map renders routes and stop points, using the map functions and classes.
- The table lists the same routes and stop points, with duration, distance, at a specific location identified as an address and/or business location.
Wednesday, December 16, 2020
State as Observables, State as Ngrx.
Observables and Ngrx are complex. As with any technology, it is very very easy to forget what you are trying to accomplish as you wade through the details.
Start and end by thinking "What do I want to accomplish".
These tools are capable of taking a very complex problem and simplifying it. That has been my experience.
But they are also capable of taking a simple situation and making it very complicated.
Start with defining the State. It is the data the view needs to render over time. How would you think about this problem?
Where is the data coming from? Usually an api.
What does the data look like from the api? Usually not what you need for the view, so the observable chain or the reducer functions would take this maybe complex tree and transform it into what your view needs.
How do I know what the data looks like? Tap is your friend. tap(value => console.log('note from where', value)) in the observable chain tells you the shape. As you change it, use a tap to verify.
What shape do I want? Flat and simple. <div *ngFor="let item of items$ | async"> should give you an item that can be passed to a component for viewing or editing. So either in the effect, observable chain or reducer, transform the data into that shape.
If you are fighting with nested arrays and complex objects, make it simple. Create a relation key scheme using Entities so that the selectors are easy and fast. A one time cost of insertion vs. the every time you subscribe cost of transformation.
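That one-time insertion cost can be as small as this sketch, which turns an api array into the { ids, entities } shape; Item and normalize are my placeholder names, not a library API:

```typescript
interface Item { id: string; name: string; }

// Normalize an api array once, at insertion time, so that
// every later lookup is a cheap entities[id] access.
function normalize(items: Item[]): { ids: string[]; entities: { [id: string]: Item } } {
  const entities: { [id: string]: Item } = {};
  for (const item of items) {
    entities[item.id] = item;
  }
  return { ids: items.map((i) => i.id), entities };
}

const normalized = normalize([
  { id: "a", name: "first" },
  { id: "b", name: "second" },
]);
console.log(normalized.entities["b"].name); // "second"
```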
With most complex technical issues, framing a question is often the most difficult thing. The question here is what should my ngrx selector or observable chain emit to make my component simple? When you have answered that, the specific details of how to construct the chain, reducer, selector etc. becomes a matter of coding and testing.
What do I want to accomplish? What is the shape of the data I need?
Wednesday, November 13, 2019
The Secrets of Docker Secrets
Essentially the secrets function creates in-memory files in the docker image that contain the secret data. The data can come from files, or a Docker swarm.
The first thing to know is that the application running in the docker image needs to be written to take advantage of the Docker secrets function. Instead of getting the password from an environment variable, it would get the password from the file system at /run/secrets/secretname. Not all images available use this functionality. If they don't describe how to use Docker secrets, they won't work. The files will be created in the image, but the application won't read them.
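In a node application, that means reading a file instead of process.env. A sketch, assuming the conventional /run/secrets mount path; the readSecret helper name is mine:

```typescript
import { readFileSync } from "node:fs";

// Read a Docker secret from its mounted in-memory file,
// trimming the trailing newline that echo-created files carry.
function readSecret(name: string, dir = "/run/secrets"): string {
  return readFileSync(`${dir}/${name}`, "utf8").trim();
}

// Usage inside a container (path exists only when secrets are configured):
// const password = readSecret("mongodb_userpwd");
```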
For a development setup, having files outside of the git source tree works well. To create a file with a secret, I created a folder called serverdata, with a dev/ and prod/ folder within. In the dev/ folder, run this command with all the secret data you will need:
echo "shh, this is a secret" > mysecret.txt
The names simply need to tell you what they do. What the secret is called in the image is set in the docker configuration. This is what my dev/ folder looks like:
-rw-r--r-- 1 derek derek 66 Nov 5 14:49 mongodb_docker_path
-rw-r--r-- 1 derek derek 6 Oct 22 14:09 mongodb_rootusername
-rw-r--r-- 1 derek derek 13 Oct 22 14:08 mongodb_rootuserpwd
-rw-r--r-- 1 derek derek 18 Oct 22 14:10 mongodb_username
-rw-r--r-- 1 derek derek 14 Oct 22 14:10 mongodb_userpwd
-rw-r--r-- 1 derek derek 73 Oct 22 14:02 oauth2_clientid
-rw-r--r-- 1 derek derek 25 Oct 22 14:02 oauth2_clientsecret
-rw-r--r-- 1 derek derek 14 Oct 22 14:03 oauth2_cookiename
-rw-r--r-- 1 derek derek 25 Oct 22 14:04 oauth2_cookiesecret
-rw-r--r-- 1 derek derek 33 Oct 26 08:27 oauth2_redirecturl
Function and description. I have some configuration details as well.
Using Secrets with docker-compose
This is the docker-compose.yml that builds a mongodb image with all the configuration.
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
And the secrets are defined as follows:
secrets:
  mongodb_rootusername:
    file: ../../serverdata/dev/mongodb_rootusername
  mongodb_rootuserpwd:
    file: ../../serverdata/dev/mongodb_rootuserpwd
  mongodb_username:
    file: ../../serverdata/dev/mongodb_username
  mongodb_userpwd:
    file: ../../serverdata/dev/mongodb_userpwd
  mongodb_path:
    file: ../../serverdata/dev/mongodb_docker_path
The secrets: section reads the contents of each file into a namespace, which becomes the name of the file under /run/secrets/. The Mongo docker image looks for environment variables with the suffix _FILE, then reads the secret from that file in the image file system. Those are the only two variables supported by the Mongo image.
Of course it gets more complicated. I wanted to watch changes in the database from my node application for various purposes. This function is only supported by a Mongo replica set. To fully automate the configuration and initialization of Mongo within Docker images using replication requires a second Docker image that waits for the Mongo images to initialize, then runs a script. So here is the complete docker-compose.yml for setting up the images:
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
  mongo-primary:
    container_name: mongo-primary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    environment:
      MONGO_INITDB_DATABASE: admin
    ports:
      - "27019:27017"
    networks:
      - mongo-cluster
  mongo-secondary:
    container_name: mongo-secondary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    ports:
      - "27018:27017"
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
The Dockerfile for the mongo-replicator looks like this:
FROM mongo:latest
ADD ./replicate.js /replicate.js
ADD ./seed.js /seed.js
ADD ./setup.sh /setup.sh
CMD ["/setup.sh"]
Mongo with various scripts added to it. Here they are.
replicate.js
rs.initiate( {
  _id : "rs0",
  members: [
    { _id: 0, host: "mongo-primary:27017" },
    { _id: 1, host: "mongo-secondary:27017" },
  ]
});
seed.js
db.users.updateOne(
  { email: "myemail@address.com" },
  { $set: { email: "myemail@address.com", name: "My Name" } },
  { upsert: true },
);
and finally, what does all the work, setup.sh:
In the docker-compose.yml the depends_on: orders the creation of the images, so this one waits until the others are done. It runs replicate.js, which initializes the replication set, then waits for a while. The password and username are read from the /run/secrets/ files, the linefeed removed, then the user is created in the mongo database. Then seed.js is called to add more initial data.
#!/usr/bin/env sh
if [ -f /replicated.txt ]; then
  echo "Mongo is already set up"
else
  echo "Setting up mongo replication and seeding initial data..."
  # Wait for a few seconds until the mongo server is up
  sleep 10
  mongo mongo-primary:27017 replicate.js
  echo "Replication done..."
  # Wait for a few seconds until replication takes effect
  sleep 40
  MONGO_USERNAME=`cat /run/secrets/mongodb_username | tr -d '\n'`
  MONGO_USERPWD=`cat /run/secrets/mongodb_userpwd | tr -d '\n'`
  mongo mongo-primary:27017/triggers <<EOF
rs.slaveOk()
use triggers
db.createUser({ user: "$MONGO_USERNAME", pwd: "$MONGO_USERPWD", roles: [ { role: "dbOwner", db: "admin" }, { role: "readAnyDatabase", db: "admin" }, { role: "readWrite", db: "admin" } ] })
EOF
  mongo mongo-primary:27017/triggers seed.js
  echo "Seeding done..."
  touch /replicated.txt
fi
This sets up mongoDb with admin user and password, as well as a user that is used from the node.js apps for reading and writing data.
No passwords in my git repository, and an initialized database. This is working for my development setup, with a mongo database replicated so that I can get change streams, and read and write functions from the node.js application.
More to come.
- Using secrets in node.js applications and oauth2_proxy
- The oauth2_proxy configuration
- Nginx configuration to tie the whole mess together
Tuesday, November 05, 2019
Angular in Docker Containers for Development
- In the browser a Google login where you either enter your account information or select from an already logged in Google account.
- The Google login libraries talk back and forth, and come up with a token.
- The app sends the token to the node application, which verifies its validity, extracts the identification of the user, verifies against the allowed users, then responds with the authentication state to the app in the browser.
- The angular app watches all this happen in a guard, and when you are authenticated routes to wherever you wanted to go.
- Nginx as a front facing proxy. The docker image takes a configuration file, and it routes to the nodejs api applications, websockets and all the bits and pieces.
- Oauth2_proxy for authentication. I'm using the nginx auth_request function where a request comes into nginx, and on the locations needing authentication it calls oauth2_proxy then routes either to a login page or the desired route.
- Nestjs server application that handles the api calls
- A second nodejs application that does a bunch of work.
- A third nodejs application that serves websockets.
- Mongodb as the data store. The websocket microservice subscribes to changes and sends updates to the app in the browser.
- For development, I have a docker image which serves the angular-cli ng serve through nginx. The nodejs applications are also served the same way, meaning they recompile when the code is changed.
FROM node
ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME
RUN npm -g install @angular/cli@9.0.0-rc.0
EXPOSE 4200
USER 1000
Put this Dockerfile in the same directory as the package.json file in the Nx structure. I call it Dockerfile.angular, since I have many dockerfiles there.
Then in a docker-compose.yml file, add the docker-compose configuration:
angular:
  container_name: angular
  build:
    context: .
    dockerfile: Dockerfile.angular
  ports:
    - "4200"
  volumes:
    - .:/usr/src/app
  command: ng serve --aot --host 0.0.0.0
The volumes: statement lets the docker image see the current directory, then you run ng serve and it serves the application. I'm using it from an Nginx proxy, so the port is only seen from the docker network. You might want to expose it 4200:4200 to use it without Nginx.
The node applications are identical except for the Dockerfile EXPOSE statement where I set the value to the port that the Nestjs is watching. And instead of ng serve, this is what the docker-compose.yml looks like.
scripts:
  container_name: scripts
  build:
    context: .
    dockerfile: Dockerfile.scripts.dev
  ports:
    - "3333"
  volumes:
    - .:/usr/src/app
  command: ng serve scripts
  depends_on:
    - mongo-replicator
  secrets:
    - mongodb_username
    - mongodb_userpwd
    - mongodb_path
ng serve scripts runs the node application. There are a couple of things here that I will get into in future posts.
ng serve --aot --host 0.0.0.0 is one of the sticky things I had to figure out. The default host is localhost, and the websockets for live reloading the app in the browser won't work unless you set this correctly.
More to come.
- Docker secrets and using them in the various images
- Setting up Mongo.
- The Oauth2_proxy configuration
- Nginx configuration