Saturday, July 16, 2016

Herding Cats, or Managing Complexity

Determined to find a rational web-based development environment, something stable that works, has support, is flexible enough to deal with future demands, and involves minimal HTML, I ended up using Angular 2 on a Meteor base.

Yes, it is alpha, now RC, not yet released. That hasn't caused many problems, since I'm early in the process and still banging out the basic structure and development strategy.

A few things I've figured out.


  1. JavaScript in the browser has all the characteristics of multithreaded programming without the tools and APIs to deal with it. Once you get above a certain level of complexity it falls over a cliff, and your application can easily do the same. Frameworks such as Angular and React are attempts, very good ones, at imposing order on the chaos.
  2. Tracking down timing bugs, essentially race conditions, takes hours of time, time I don't have. Squiggly brackets nested deep are the devil's work.
  3. Angular 2 is quite remarkable in its power and simplicity. I have found that if something requires figuring out some deep-in-the-weeds function of the framework or platform, I'm making a mistake. I back out of the hole, and find that there is a two-line call that does exactly what I need. The paths are well trodden; almost all the issues I run into have already been figured out by others, so learn the framework. There are bugs of course, and a few shortcomings, but each release candidate seems to sort a few of them out.
  4. ngrx/store and ngrx/effects are very interesting. They take the ideas of Redux, add Observables, and tie it all to Angular 2. The idea of Redux, having the state defined and stored, changed only by dispatching an action, with reducers, pure functions, producing the new state, is brilliant in its simplicity. It is similar to an event handler in GUI application frameworks, but different.
  5. Its power is in what it forces you as a developer to do. Start by defining any and all of the states your application will enter. Each component will have different states, describing all the stages of fetching, waiting for, showing, and editing the data. Each state gets set in a reducer function. This forces a level of simplicity; even the pure functions and the secondary effects get separated, again forcing you to break the process down into discrete steps that can be represented by a data structure containing the state. It seems repetitive; my mind is screaming at the almost identical functions, but keeping the simple structure makes it easier to find mistakes and bugs. And ngrx/store, along with other Redux implementations, lets you capture a step and return to it to sort out bugs.
  6. Observables are strange, confusing, wonderful and powerful. We tend to think of our applications as a point in time being represented on screen: something happens, we react to it, getting to the state at another point in time. Observables describe a stream of data. The stream might have only one piece, or it may have many. My contact information for a customer streams forth from the data source into an observable, which is displayed. If there is a change from any source, the change flows the same way, showing up on the display. That flow can be altered in many ways with the Observable methods. The flows go the other way as well: editing, keystrokes, selections; all the user input flows in, is altered or used in some way, and leads to a change in some aspect of the state.
  7. All this, stacked up with discrete state reducers and observables exposing the state, makes for traceable flows of events and data that can be debugged and fixed. One state of each component is when there is no data; handled gracefully, it removes the source of lots of bugs. Each error state can be defined as well, allowing for graceful failure.
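The state-and-reducer idea in points 4 through 6 can be sketched in plain TypeScript. This is a minimal sketch, not ngrx/store's actual API: the `ContactState` shape, the action names, and the states are hypothetical examples, invented here to show the pattern of naming every stage a component can be in.

```typescript
// Hypothetical component states: every stage the UI can be in gets a name,
// including the no-data and error stages.
type Status = "empty" | "fetching" | "showing" | "editing" | "error";

interface ContactState {
  status: Status;
  contact: { name: string; phone: string } | null;
  error: string | null;
}

// Actions are plain data; dispatching one is the only way the state changes.
type Action =
  | { type: "FETCH" }
  | { type: "FETCH_SUCCESS"; contact: { name: string; phone: string } }
  | { type: "FETCH_FAILURE"; error: string }
  | { type: "EDIT" };

const initial: ContactState = { status: "empty", contact: null, error: null };

// The reducer is a pure function: same state plus same action, same result.
function contactReducer(state: ContactState = initial, action: Action): ContactState {
  switch (action.type) {
    case "FETCH":
      return { ...state, status: "fetching", error: null };
    case "FETCH_SUCCESS":
      return { status: "showing", contact: action.contact, error: null };
    case "FETCH_FAILURE":
      return { ...state, status: "error", error: action.error };
    case "EDIT":
      return { ...state, status: "editing" };
    default:
      return state;
  }
}
```

Because each transition is a plain function of data, a recorded sequence of actions can be replayed step by step to reproduce a bug, which is the capture-and-return trick mentioned above.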
I've set some parameters for the application. Everything is autosaved. It is a design parameter that in any and all situations the user can leave and come back without losing data, and easily get back to where they were. The well-defined states are really helpful here. There are some instances where a distinct confirmation is required: an invoice posted, something sent. That is a different action from saving the state.

A second one is that data entry is a pain in the neck. Some of it needs doing, but it should be only for situations where the experience and thought of the user is required. Otherwise the idea is to present the user with decisions that need to be made, data that needs to be entered, and a place or slate where the experience and knowledge of the user gets applied.

A third one is to, whenever possible, have the data structured and entered in a way that doesn't require distinct fields. Sometimes it is necessary, but that should be determined by the data and its use rather than the application structure. Something entered once with care and verified, then reused by selection or other methods, reduces the scope for error. A characteristic of our business is that much of the time careful data entry is impossible or impractical, so the idea is to choose times or contexts where careful data entry is done; the rest of the time it isn't necessary.

I'm enjoying Material Design, which is in alpha for Angular 2. It makes it trivial to construct nice-looking and smooth-operating applications, and it enforces a certain simplicity and elegance that makes an application useful.

Monday, December 28, 2015

Triggers

Why are almost all software projects structured around the data schema as opposed to the user and environment? This always ends up with the software defining your business as opposed to the business defining the software.

I run a service business, a small shop. We service a variety of commercial and residential customers. Refrigeration, air conditioning, heat pumps, air systems, etc. We deal with a number of client accounts payable and asset management systems. We have a variety of regulatory stuff, maintain some inventory, and turn over lots of jobs, usually having 20 balls in the air at the same time.

I'm maybe 80% there, but the complexity is staggering. I'll tell you what frustrates me about the software that we use.

The more capable it is the more it tries to tell me how to run the business.
The less capable it is, the more difficult it is to set up specific requirements such as sales taxes and the like.
Mobile is terrible. It doesn't get used for good reason. My phone for data entry has a tiny screen by the time the keyboard is up.
I don't want modules, that is database centric. I want processes.
There is about a quarter of an employee's time worth of housekeeping.
I know where I am, what I want, what data I need, it doesn't.
It is easier to build clutter, duplicate jobs, customers, and invoices than to search.

A long time ago, when I started in this business, we had a notebook by the phone. Every call got a note: who, and what it was about. That was the core of the business process; everything started there, either in a conversation or a voice message. This was before cell phones were accessible.

Why does service business software not start with that basic function?

I know why. Three reasons: there is someone else answering calls and dispatching; it is difficult; and the authors don't understand the business.

My goal is to put together an event based business process package that is configurable, knows where I and my technicians are, knows where the customers are, can nag us to do the things that really need to be done, can be set up by a smart business operator to gather data at the moment in time when it is easy and likely to be done.

I get a call from a customer. In time I show up at their site, am there for an hour or more. I make a call to one of my suppliers. Then leave. Next morning freight shows up, and the software asks me which parts are for that customer and job.

When I leave that site and am not moving (not driving), the phone pops something up asking if that was a job. Yes. Equipment worked on can be selected. A description entered. You phoned a supplier, did you order parts? Yes. Done.

The phone rings, another customer. We have a job in progress. One of my techs is close by. We converse and schedule something. I need to order something. No, in addition to the job in progress.

Another call, this time a drop-everything emergency. I get to the site, do the repair. My phone beeps: deadline for ordering is 3pm, you have something to order. It remembers, and the next morning when there is a minute it can be entered. Phones come with built-in bar code reading capabilities. Three of these, two of those, yes, one of each for both jobs.

I don't want to see a map; I know where I am. The next day, everyone has a log of where they were and who they talked to. An email from a vendor with a PDF: estimate for which job? Click. I research some repair manual. Click, attached to this customer. You were at this location and you took photos. Yes, equipment. No time right now to enter details, but housekeeping knows, and someone can clean it up when they have time.

Housekeeping. This email has a phone number in it, one that shows up in the log. Is that about this job? Yes, a contact is born with email, phone etc. All tied to a job.

Something old? Throw it out, a job was created based on a call but there was one already.

This is my day. Chaotic, manic, demanding and tiring. The software I use makes it more so. Five email accounts, multiple devices, a to-do list a mile long every morning.

Some days it is quiet. Here is a list of stuff to clean up. Oops a call, drop everything and go. No problem, it will remind me again. Or a bunch of data entry stuff can be sent to someone else.

I want a solution. I am writing it.

Sunday, December 14, 2014

Photographing Bats

One evening last summer the dogs and I were on the beach near our home. Just as it got dark a group of bats came out of the brush and flew back and forth along the shore feeding on the mayfly hatch. Of course I wanted to get a photo. So began the saga. I learned an enormous amount, spent a substantial sum, and now want to describe it for posterity.



I chanced upon a location where a small number of bats are concentrated in a small area, making it possible to get a photo. The beach was enclosed on one side by bush, on the other by a dock. The access to the beach was a trail, and it seems that bats either head for open space or feel their way by following a boundary of some sort. So during the time of peak activity, about 15 minutes just as it got dark, there was a certain density in the air, making it possible to capture the odd one in a frame.
Another advantage was that if I sat in a particular location, I could see the bats against the western sky and the reflection on the water.

The bats stayed close to shore for 10-15 minutes then spread out over the lake.
It is very dark, the subject moves very quickly in unpredictable ways. And they are small. To capture them in a photo requires artificial light, a very short exposure to freeze movement, and some way to point and trigger the camera to get one in the frame.

Start with exposure. High-speed photography is the art of capturing very fast events: a bullet, a balloon popping, drops of water. The extremely short exposures are created with a flash unit in a dark room. If you look at the specifications for an electronic flash unit, you will see that the duration shortens as the power decreases. At full power the flash duration is close to the synchronization speed of your camera. My Pentax K-3 syncs at 1/180 of a second; my Metz 50 AF-1 at full power has the light on for 1/125 of a second. At 1/4 power it is 1/2000 of a second, much better. That is approaching the exposure speed where you can freeze movement, but still too slow for a bat. I found 1/8 power, or 1/4000 of a second, better. The photo above was at that speed. http://www.gock.net/2012/01/flash-durations-small-strobes/
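It helps to see those shutter-style fractions as milliseconds. A trivial sketch of the conversion, using the durations quoted above for my particular flash unit:

```typescript
// Convert a shutter-style fraction of a second (e.g. 1/4000) to milliseconds.
function fractionToMs(denominator: number): number {
  return 1000 / denominator;
}

// Durations quoted above for this flash unit at each power setting.
const flashDurationMs = {
  full: fractionToMs(125),     // 8 ms: far too long to freeze a bat
  quarter: fractionToMs(2000), // 0.5 ms: getting close
  eighth: fractionToMs(4000),  // 0.25 ms: what the photo above used
};
```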

Now for a lens. You can't focus in real time; they are too quick and you can't see them. So you want to set up a box in space that is in focus and illuminated. I tried different lenses. A 35mm had a wide field of view and captured lots of action, but unfortunately was soft, so the shots were unsatisfactory. A 100mm was sharp, but the area in focus was too far away to illuminate with the flashes that I had; the intensity of light decreases with the square of the distance, and the box was simply too far away. A 50mm lens seemed to give me the best results.


Aperture? Here a depth of field table is useful (http://www.dofmaster.com/dofjs.html). Depth of field is the space in which the lens is in focus. A large aperture, or lower f-stop number, gives a narrow depth of field. The closer your subject, the narrower the depth of field. The longer the lens in mm, the narrower the depth of field. For our purposes, the further away the subject, the shorter the lens, and the higher the f-stop, the larger the box where things are in focus will be. But the further away, the more light you will need, and the smaller the bat will be in your frame; same with a short lens. At higher f-stops you may run into diffraction softening as well. I found f8, focused at 10 feet with a 50mm lens, gave me a box about 4 x 4 3/4 x 3 feet.
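Those numbers can be checked against the standard thin-lens depth-of-field approximation, the same one the DOFMaster calculator uses. A sketch, with one assumption on my part: a circle of confusion of 0.02 mm, a common value for an APS-C sensor like the K-3's.

```typescript
// Standard hyperfocal-distance depth-of-field approximation.
// focalMm: focal length, fStop: aperture, cocMm: circle of confusion,
// subjectMm: focus distance. All lengths in millimetres.
function depthOfField(focalMm: number, fStop: number, cocMm: number, subjectMm: number) {
  const hyperfocal = (focalMm * focalMm) / (fStop * cocMm) + focalMm;
  const near = (hyperfocal * subjectMm) / (hyperfocal + (subjectMm - focalMm));
  const far = (hyperfocal * subjectMm) / (hyperfocal - (subjectMm - focalMm));
  return { nearMm: near, farMm: far, depthMm: far - near };
}

// 50 mm lens at f8, focused at 10 ft (3048 mm), assumed 0.02 mm CoC.
const dof = depthOfField(50, 8, 0.02, 3048);
// Works out to roughly 8.4 ft near limit, 12.4 ft far limit: about 4 ft of
// depth, consistent with the box described above.
```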

One flash isn't enough. I had three: one Metz 50 AF-1 on the camera and two Yongnuo 560 II manual flash units assisting, all on 1/8 power. This was the challenging part. There are a few ways to trigger multiple flashes. 1/4000 is .25 milliseconds. The Yongnuo units are a bit slower than that, but bear with me. Optical triggering, which I used, takes .06 ms, or roughly a quarter of the total exposure time. Wireless radio triggers take about .25 ms to fire, meaning that the master flash would be almost finished before the slave units would begin. I noticed that the shots where the bat was flying across the frame were soft, possibly indicating too long an exposure; that extra .06 ms may have been the difference. In any case, speed is of the essence.
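Putting those latency figures together shows how much of the flash pulse each trigger method wastes. A sketch using the numbers quoted above:

```typescript
// Flash duration at 1/8 power, as above: 1/4000 s = 0.25 ms.
const exposureMs = 1000 / 4000;

// Fraction of the flash pulse already over before a slave unit fires.
function wastedFraction(triggerDelayMs: number): number {
  return Math.min(triggerDelayMs / exposureMs, 1);
}

const opticalWaste = wastedFraction(0.06); // about a quarter of the pulse
const radioWaste = wastedFraction(0.25);   // the whole pulse: master is done
```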

I initially had the flash units close together, but found that the images were unpleasant.



A 5 ft long piece of aluminum angle made a bracket with the camera in the middle and the flash units on either end. It made a very big difference in image quality.



How to capture the varmints? I used a remote shutter trigger so I could sit where they were visible, then held down the button. I would take hundreds of shots and get maybe two dozen with something in the frame, and maybe one or two that were interesting.

This spray-and-pray method has a drawback. The flash units need to recycle between exposures, and will shut off from time to time to cool off, or to allow the battery discharge to catch up. I used Duracell precharged NiMH 1.2 V 2400 mAh AA batteries, freshly charged for each session. They worked very well. I have one set of Eneloops and found they didn't keep up with the Duracells. Even then I would get dark exposures as one or more of the flashes missed.
The solution is to shoot less. I learned discipline, which helped. I also invested in a trigger device, but ran out of time; the weather changed, or the hatch did, and the bats stopped hanging around the shore.

So next year. I am already planning and accumulating. Another two flashes. I will wire them together, getting rid of any latency. At the same distance I might be able to set them to 1/16 power. The wire harness is ready and tested. I need another support, which entails the purchase of a light stand. And I intend to test the trigger device to figure out how to set it up.

I got about 80% of the way there. The shots are decent but not excellent. My goal is to get enough excellent shots where they are also doing something interesting. Just getting one was a challenge. Now to get a good one.

Tuesday, June 15, 2010

Paper paper everywhere

I have an itch. I need to get hundreds of pieces of paper to my bookkeeper, who happens to be on the other side of the country. And as usual, when poking an itch, things happen in interesting ways, opening up possibilities. So start with Python, PyQt and the KDE Python bindings. The plan was to write some kind of bulk scanning application, with the ability to annotate the scans with notes, bundle them up in a PDF, and send them off by email.

Some issues that I ran across. The python-imaging-scan module is quite flaky; repeated scans would lead to crashes. I encapsulated it within a subprocess, which seems to prevent bringing the whole application down, and somehow allows it to do its thing reasonably reliably each time. It still has issues, however. python-keyring-kde works nicely: very simple to use, and it works. I use it to keep the password needed to log in to send emails. python-imaging is very nice; I use it to clip the whitespace from around the images, and it is very easy. Creating a QImage is trivial using the QtGui module. The reportlab PDF creation module is very nice; once you get your mind around the structure, which is much easier than the documentation seems to portray, it is very easy to create a simple PDF.

The code is at code.google.com. There are doubtless many bugs, and I have only tested with the scanner that I have.
I now have a few hundred PDFs containing all kinds of interesting information. They are mostly structured documents, i.e. invoices. Now to figure out some way of parsing and categorizing the data so that I can use it in some way.

Tuesday, November 10, 2009

Truth in Advertising

Jasper's blog (and the seeming tenor of Planet KDE today) seems negative. It isn't. It is the truth. The Linux desktop is, for all its advances and neat stuff, pretty limited.

There are a few reasons for this. Starting from the premise that it, for the most part, is written by individuals or groups to scratch a specific itch:

Major parts of the infrastructure are either missing, old and outdated, or unstable.

The process of filling the gaps and getting things working has required throwing away quite a bit of stuff, requiring updates elsewhere in the stack. Which consumes resources and creates instability and raises the cost of entry.

The amount of basic infrastructural work required to write even a modestly ambitious application limits the field to the devoted and few.

Throw in the tendency of distributions to differentiate themselves with infrastructure projects that try to solve a problem but end up chronically unfinished or poorly thought out for lack of communication, and are many times eclipsed by smaller projects that work well and are well maintained but are refused entry by the NIH syndrome.

And because the whole thing is necessarily in flux, the cost of entry and cost of maintainership rises.

Have no fear. It will get worse before it gets better. By the time the Xorg guys are done, and the kernel guys get a file system and scheduler that works for the desktop, and all the *Kit stuff gets finished, we will have something great. In the meantime, a mess.

What all this means is that we will have new media players every week. We don't have users that require a broad range of applications and functionality, some of whom would write that stuff themselves if the basics were there, because the functionality isn't quite there yet. Linux on the desktop will always be a developer's platform, and as the basics get sorted out, people needing specific functions for vertical or specialized requirements will flock to it, for the simple reason that it is cheap and easy.

It isn't easy now. But it is getting closer. Stuff like akonadi is amazing, opening up possibilities that the pim guys haven't imagined, and making basic function easy for developers. PyQt is simply awesome as a RAD environment. The stuff I'm doing with it makes me wonder why I would ever use C++ again. Fast and stable. The nepomuk stuff is cool, and we are going to see very neat things come out of it. Not some Nepomuk Application, but making it easy for developers to sort and index and connect their data opens possibilities that otherwise would take too many resources to write.

As for attracting and expecting commercial ports of applications to Linux, we might as well wait for the moon. It won't happen. The only way we would get a stable Skype on linux is if someone wrote one. The commercial houses have no interest in doing anything but token support of linux to check some boxes for someone. And they will never fit in with our very nice packaging and updating systems. As it always has been, if we want something good, we have to write it.

I still believe in a rich and thick desktop computer. Web-based applications take me back to edlin, except with colors and pictures. Serious handheld applications remind me of a time at my local credit union, oh, 25 years ago, watching a woman enter information into a dBase table. She typed in a bit of stuff, hit enter, and waited a while for the data to be transferred, to floppy I think. When I say serious, I mean something I can do business with, not texting or even email. The desktop allows a rich experience, and with quad cores, 2-terabyte drives for cheap, and low-cost, seemingly unlimited memory, the possibilities are endless. And we will write them.

Oh, and finally. The stable, well-finished and usable desktops from Apple and Microsoft are as good as they are because a bunch of guys in their basements, in their spare time, made fools of them. OS 9, Windows 95, even NT as a server were pretty bad. They were forced to improve their wares and their processes by free software nipping at their heels. That is something we can all be very proud of.


Wednesday, September 16, 2009

Release

I've packaged up a tarball to make a first release of idfeditor, an application to create and edit the configuration files for Energy Plus building simulation. It can be found at code.google.com/p/energyplus-frontend. The first release is very basic allowing editing of most elements of the configuration file.

Next is to simplify viewing and editing the building geometry. Energy Plus has a few different ways to represent the information. One way is vertices, with a building element made up of a series of 3D points. The points can be relative to a building basepoint or a zone; they can run clockwise or counterclockwise, starting at any of the four corners, etc. Another way uses azimuth (facing angle), tilt, origin, length, and width, again from an origin. It is quite flexible, which means quite complicated and error prone.

Right now I'm working on translating the various input types into xyz space, something suitable for transforming, rotating, and such. The first goal is to get the code to the point where it reads and writes the data reliably with all the defined input types. The math is fun. I vaguely remember learning all this stuff in school, and have the horrifying memory of doing the calculations on a slide rule. I can't honestly say that it is coming back; that would assume there was some memory remaining. It is all pretty basic vector geometry.
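The azimuth/tilt-to-vertices direction of that translation can be sketched with basic vector geometry. To be clear, the conventions below are illustrative, not EnergyPlus's exact ones: I'm assuming azimuth is the outward-facing direction in degrees clockwise from north (+y), tilt is degrees from horizontal (90 = vertical wall), and the origin is the surface's lower-left corner.

```typescript
type Vec3 = [number, number, number];

// Translate an origin / azimuth / tilt / length / height description of a
// rectangular surface into its four corner vertices in xyz space.
function surfaceVertices(
  origin: Vec3, azimuthDeg: number, tiltDeg: number,
  length: number, height: number
): Vec3[] {
  const a = (azimuthDeg * Math.PI) / 180;
  const t = (tiltDeg * Math.PI) / 180;
  // Horizontal direction along the surface: the facing direction rotated 90 degrees.
  const along: Vec3 = [Math.cos(a), -Math.sin(a), 0];
  // Direction "up" the surface: vertical for a wall (tilt 90),
  // horizontal (leaning away from the facing direction) for a flat roof (tilt 0).
  const up: Vec3 = [-Math.sin(a) * Math.cos(t), -Math.cos(a) * Math.cos(t), Math.sin(t)];
  const add = (p: Vec3, d: Vec3, s: number): Vec3 =>
    [p[0] + d[0] * s, p[1] + d[1] * s, p[2] + d[2] * s];
  const b = add(origin, along, length);
  return [origin, b, add(b, up, height), add(origin, up, height)];
}
```

Going the other way, from a vertex list back to azimuth and tilt, is the same geometry in reverse: take the surface normal from the cross product of two edges.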


Monday, September 07, 2009

Almost ready to release

Almost ready for the first release of Energy Plus frontend, idfeditor. code.google.com/p/energyplus-frontend. The link lists the features that are done. I'm in the process of testing by building and editing a simulation, and am finding the odd thing that needs fixing. There are many things I want to do to make creating a simulation easier, but they will have to come in future versions. Anyone who wants to test can checkout the svn tree and run "python idfeditor.py".

One issue that I'm not sure about is regular crashes when calling QFileDialog. It doesn't crash each time it is called, but seemingly at random. The traceback shows a call to free() in Qt. The PyQt folks may have some ideas, but has anyone else run into this? I'm not sure if it is my Arch Linux setup, which is somewhat bleeding edge.

