We would like to test constrained minimization techniques for use in INVERSE. INVERSE LM has trouble with some constrained minimization problems, e.g., case99.inv.inp.
The dataflow for INVERSE might look like this:
- INVERSE parses the input deck and defines a forward model with some unknown parameters. The forward model encapsulates the measured data, a prediction function, and sensitivities. The prediction function maps the unknown parameters to predicted data; the sensitivities are its first derivatives with respect to those parameters. Ideally, second derivatives would also be available. The forward model also encapsulates linear constraints of the form Lx = c and Lx <= c.
- The forward model is passed into a goodness-of-fit function, which compares the measurement to a prediction and provides a scalar metric of the fit. Both the prediction and the fit metric are functions of the unknowns. For this type of problem, the metric is usually chi2; using the sensitivities, we can also estimate the first and second derivatives of chi2. Other work may use the likelihood or log-likelihood as a metric.
- This goodness-of-fit function and the forward model constraints are passed to a constraint handler. This could be a linear penalty, Bardsley's active set approach, or log barriers. The constraint handler combines chi2 (or any other minimization objective) and the constraints into an unconstrained minimization objective (UMO). The UMO is defined such that its minimum satisfies the constraints.
- That UMO is then passed into a minimization step, which could be LM or Newton's method with line search. The step takes the old parameters and returns updated parameters.
- The constraint handler and minimization step alternate. As the solution converges, the UMO changes (e.g., a barrier parameter shrinks or the active set is updated).
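The forward-model contract described above could be captured in a small Python interface. This is a sketch only; the field names (`predict`, `sensitivities`, `L_ineq`, `c_ineq`) are placeholders, not the existing INVERSE API:

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ForwardModel:
    """Bundle of measured data, prediction function, sensitivities, and constraints."""
    data: np.ndarray                                    # measured data d
    predict: Callable[[np.ndarray], np.ndarray]         # x -> predicted data
    sensitivities: Callable[[np.ndarray], np.ndarray]   # x -> Jacobian d(pred)/dx
    L_ineq: Optional[np.ndarray] = None                 # inequality constraints L x <= c
    c_ineq: Optional[np.ndarray] = None

# toy instance: a linear model with the bound x[0] <= 1
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
model = ForwardModel(
    data=np.array([1.0, 2.0, 3.0]),
    predict=lambda x: A @ x,
    sensitivities=lambda x: A,          # constant Jacobian for a linear model
    L_ineq=np.array([[1.0, 0.0]]),
    c_ineq=np.array([1.0]),
)
```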
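The chi2 metric and its derivatives from the sensitivities can be sketched as follows (using the standard Gauss-Newton approximation for the second derivative, which drops the second-order term that exact Hessians would need):

```python
import numpy as np

def chi2_and_derivatives(residual_fn, jacobian_fn, x):
    """chi2 = sum(r^2); gradient and approximate Hessian from the sensitivity matrix J."""
    r = residual_fn(x)          # residuals: prediction minus measurement
    J = jacobian_fn(x)          # sensitivities: dr/dx
    chi2 = float(r @ r)
    grad = 2.0 * J.T @ r        # exact first derivative of chi2
    hess = 2.0 * J.T @ J        # Gauss-Newton approximation of the second derivative
    return chi2, grad, hess

# toy linear model: predict A @ x, measured data d
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
d = np.array([1.0, 2.0, 3.0])
res = lambda x: A @ x - d
jac = lambda x: A
c2, g, H = chi2_and_derivatives(res, jac, np.zeros(2))
```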
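Of the handlers listed, the log barrier is the simplest to sketch. Assuming inequality constraints L x <= c, it folds them into chi2 like so (`make_log_barrier_umo` is a hypothetical name for illustration):

```python
import numpy as np

def make_log_barrier_umo(chi2_fn, L, c, mu):
    """Wrap chi2 and the inequality constraints L @ x <= c into a single
    unconstrained minimization objective (UMO):
        umo(x) = chi2(x) - mu * sum(log(c - L @ x))
    The barrier blows up at the constraint boundary, so any finite minimizer
    is strictly feasible; returning +inf outside lets a line search back off."""
    def umo(x):
        slack = c - L @ x
        if np.any(slack <= 0.0):
            return np.inf
        return chi2_fn(x) - mu * np.sum(np.log(slack))
    return umo

# toy problem: minimize (x - 2)^2 subject to x <= 1
umo = make_log_barrier_umo(lambda x: float((x[0] - 2.0) ** 2),
                           L=np.array([[1.0]]), c=np.array([1.0]), mu=0.1)
```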
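A minimization step in the old-parameters-in, new-parameters-out shape might look like this Newton-with-line-search sketch (the LM variant would instead damp J^T J with a lambda term; neither is the existing INVERSE implementation):

```python
import numpy as np

def newton_step(f, grad, hess, x, armijo=1e-4, shrink=0.5):
    """One Newton step with backtracking line search: solve H p = -g,
    then shrink the step until the Armijo sufficient-decrease test passes.
    Old parameters go in; new parameters come out."""
    g = grad(x)
    p = np.linalg.solve(hess(x), -g)
    t, f0, slope = 1.0, f(x), g @ p
    while f(x + t * p) > f0 + armijo * t * slope and t > 1e-12:
        t *= shrink
    return x + t * p

# one step on a quadratic lands at the exact minimum
x1 = newton_step(lambda x: float(x @ x),
                 lambda x: 2.0 * x,
                 lambda x: 2.0 * np.eye(x.size),
                 np.array([3.0, -2.0]))
```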
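The alternation can be sketched end to end on a 1D barrier problem: minimize (x - 2)^2 subject to x <= 1, whose constrained minimum is x* = 1. The inner Newton loop minimizes the current UMO; the outer loop then shrinks the barrier parameter mu, which changes the UMO and pulls the iterate toward the constraint boundary (a minimal sketch, not the planned implementation):

```python
def solve_barrier_1d(mu, x0=0.0, iters=50):
    """Scalar Newton iteration on the UMO (x - 2)^2 - mu * log(1 - x)."""
    x = x0
    for _ in range(iters):
        g = 2.0 * (x - 2.0) + mu / (1.0 - x)       # first derivative of the UMO
        h = 2.0 + mu / (1.0 - x) ** 2              # second derivative of the UMO
        step = -g / h
        while x + step >= 1.0:                     # keep the iterate strictly feasible
            step *= 0.5
        x += step
    return x

# outer loop: shrink mu so the barrier minimum approaches the constrained minimum x* = 1
x = 0.0
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = solve_barrier_1d(mu, x0=x)
```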
This issue will implement constraint handlers (active set and log barrier) and minimization steps (LM and Newton with line search), then test them on constrained-minimization test functions. The work will be drafted in Python, then migrated to C++.