Commit 72697ce0 authored by Vacaliuc, Bogdan

10: expand RefRoi deep-dive and elaborate consumer tensions



RefRoi (§9a, new): add a full treatment of the C++ pixel integrator.
Covers the 12 properties, the extract2D() flow, the two statistical
modes (uniform mean vs inverse-variance weighted mean), the flat-
index spectrum-layout assumption, and the Q-conversion branch. Calls
out that ErrorWeighting=True is statistically biased toward low-
count bins in Poisson-limited data — which is the physical reason
quicknxsv2 hardcodes False and mr_reduction silently inherits True.

Consumer terminology (§9 preface): clarify that "consumer" in this
document means the codebase-level caller of MRR (quicknxsv2 and
mr_reduction), not the per-function output-reader. Acknowledges the
user's earlier reading as a valid secondary sense and states which
sense applies throughout.

Tensions matrix (§11, replaced): expand the four-plus-two default
disagreements into one paragraph per tension, each with physics,
observable effect, who sees the difference, and hackathon decision
needed. Adds tensions 6 (QStep three-values) and 7 (QuickNXS post-
hoc scale +1.0 inconsistency between CSD and NexusData paths), and
groups all tensions into three archetypes: default-inheritance drift,
intra-project drift, naming/semantic drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent 7c815cc6
+446 −28
@@ -515,6 +515,23 @@ principled division.

## 9. The inner workhorses — C++ and Python primitives MRR calls

> **Terminology note.** From here on, **"consumer"** means a
> *codebase that invokes* `MagnetismReflectometryReduction` — i.e.,
> the caller that constructs MRR's ~35 parameters and then acts on
> MRR's output.  There are two such consumers in scope: `quicknxsv2`
> (the Qt GUI, which calls MRR synchronously in response to UI
> events) and `mr_reduction` (the autoreduction library, which calls
> MRR from `ReductionProcess.reduce_workspace_group`).  This is a
> *project-level* sense of the word, not a per-function sense.  The
> user's earlier reading — "functions that use the objects produced
> by MRR()" — is a valid secondary sense: once MRR produces a
> reflectivity Workspace2D, downstream code (stitching, plotting,
> ORSO export, file writers) consumes that output.  Both senses
> matter: the *parameter-construction* side determines what MRR
> computes; the *output-use* side determines what the scientist sees.
> Throughout this document, "consumer" in the unqualified sense
> refers to the codebase-level caller.

Grouped by what MRR asks them to do:

### Bin and re-bin (Mantid core C++ algorithms)
@@ -531,7 +548,7 @@ Grouped by what MRR asks them to do:

| Algorithm | What it does | Source |
|---|---|---|
| `RefRoi` | **The REF_M/REF_L-specific pixel integrator.** Sums pixels within (xmin, xmax, ymin, ymax) on the 2D detector, optionally integrating across one axis and optionally summing pixels with error-weighted or uniform average.  Also optionally converts wavelength → Q at the workspace level (only used downstream of ConvertUnits).  C++, ~220 LOC — see § 9a below. | `Framework/Reflectometry/src/RefRoi.cpp` |
| `SumSpectra` | Sum all spectra to one.  Used in the TOF-Q path (post-peak-crop) and in the direct-beam `process_data` path (to collapse the multi-pixel normalization). | `Framework/Algorithms/src/SumSpectra.cpp` |
| `CropWorkspace` | Restrict a workspace to `[StartWorkspaceIndex, EndWorkspaceIndex]` — used to crop to the peak pixel range. | `Framework/Algorithms/src/CropWorkspace.cpp` |

@@ -563,6 +580,217 @@ arithmetic ops. A reflectometry-focused fork of Mantid that kept
just these and the Reflectometry framework could support MR reduction
with a small fraction of Mantid's total binary size.

## 9a. `RefRoi` deep-dive — the pixel integrator

`RefRoi.cpp` is 222 lines of C++ in the Mantid Reflectometry framework.
Despite its modest size it encodes **load-bearing scientific choices**
whose effects propagate all the way to R(Q).  Source:
`/media/ssd2/mantid/Framework/Reflectometry/src/RefRoi.cpp`.

### Properties (12)

From `init()`:

| Property | Type | Role |
|---|---|---|
| `InputWorkspace` | Workspace (with `CommonBinsValidator`) | 2D detector workspace (rows = pixels, cols = TOF or wavelength bins). |
| `OutputWorkspace` | Workspace | Result. |
| `NXPixel` | int (default 304) | X-direction pixel count. |
| `NYPixel` | int (default 256) | Y-direction pixel count. |
| `XPixelMin` / `XPixelMax` | int (EMPTY_INT default) | X-ROI bounds (inclusive). |
| `YPixelMin` / `YPixelMax` | int (EMPTY_INT default) | Y-ROI bounds (inclusive). |
| `SumPixels` | bool (default false) | If true, all pixels in the main axis collapse into one histogram. |
| `NormalizeSum` | bool (default false) | If true and SumPixels=true, divides by the number of main-axis pixels. |
| `AverageOverIntegratedAxis` | bool (default false) | Extra division by the integrated-axis pixel count. |
| `ErrorWeighting` | bool (default false) | Switch between uniform mean and inverse-variance weighted mean. |
| `IntegrateY` | bool (default true) | Which detector axis is summed over. |
| `ConvertToQ` / `ScatteringAngle` | bool / double | Post-integration X-axis conversion to momentum transfer. |

### What `extract2D` actually does

The single `exec()` method delegates everything to `extract2D()`.
Pseudo-code (`RefRoi.cpp:94-218`):

```
nHisto = IntegrateY ? NXPixel : NYPixel
# If SumPixels=true, nHisto collapses to 1.

# Iteration ranges:
#   main_axis = the axis NOT integrated over (the output index)
#   integrated_axis = the axis summed over (inner loop)
if IntegrateY:
    main_axis = x-pixels [xmin, xmax] when summing, [0, NXPixel-1] otherwise
    integrated_axis = y-pixels [ymin, ymax]
else:
    main_axis = y-pixels
    integrated_axis = x-pixels

# Main loop:
for i in main_axis_min..main_axis_max:
    output_index = SumPixels ? 0 : i
    for j in integrated_axis_min..integrated_axis_max:
        # flat spectrum index, hardcoded assumption:
        flat_index = IntegrateY ? (NYPixel * i + j) : (NYPixel * j + i)
        Y_in = workspace.y(flat_index)
        E_in = workspace.e(flat_index)
        for t in all TOF bins:
            if SumPixels && NormalizeSum && ErrorWeighting:
                # accumulate signal and squared errors into temp vectors
                signal_vector[t] += Y_in[t]
                error_vector[t] += E_in[t] ** 2
            else:
                Y_out[output_index][t] += Y_in[t]
                E_out[output_index][t] += E_in[t] ** 2  # NOTE: squared
    # If error-weighted, do the inverse-variance average:
    if SumPixels && NormalizeSum && ErrorWeighting:
        for t:
            error_squared = (error_vector[t] == 0) ? 1 : error_vector[t]
            Y_out[0][t] += signal_vector[t] / error_squared
            E_out[0][t] += 1.0 / error_squared

# Normalization / finalization pass.
# n_integrated = integrated-axis pixel count when
# AverageOverIntegratedAxis=true, else 1 (MRR never sets it):
for i in 0..nHisto:
    for t in all bins:
        if SumPixels && NormalizeSum:
            if ErrorWeighting:
                Y_out[i][t] = Y_out[i][t] / E_out[i][t] / n_integrated
                E_out[i][t] = sqrt(1.0 / E_out[i][t]) / n_integrated
            else:
                Y_out[i][t] = Y_out[i][t] / (main_axis_max - main_axis_min + 1) / n_integrated
                E_out[i][t] = sqrt(E_out[i][t]) / (main_axis_max - main_axis_min + 1) / n_integrated
        else:
            E_out[i][t] = sqrt(E_out[i][t])  # convert accumulated sum-of-squares to standard error
```

### The error-weighting math — and why it matters

When `SumPixels=True` and `NormalizeSum=True`:

- **Uniform mean (`ErrorWeighting=False`):**  `Y_out = (Σ Y_in) / N`,
  `E_out = sqrt(Σ E_in²) / N`.  Standard propagation of independent errors
  through a plain average.

- **Inverse-variance weighted mean (`ErrorWeighting=True`):**
  `Y_out = (Σ Y_in/E_in²) / (Σ 1/E_in²)`, `E_out = 1 / sqrt(Σ 1/E_in²)`.
  Classical *inverse-variance weighting* — optimal if the error
  bars accurately reflect measurement variance and the true mean is
  constant across the averaged pixels.

This looks like a benign user choice.  It is not, in the REF_M
detector's actual noise regime:

- Pixel counts `Y` are Poisson-distributed, so `E ≈ sqrt(Y)` and
  `E² ≈ Y`.  The inverse-variance weight per bin is therefore `1/Y`,
  which means **low-count bins dominate the weighted mean**.
  Zero-count bins hit the `error_squared == 0 ? 1 : error_vector[t]`
  fallback at line 182, which substitutes a variance of `1` rather
  than ∞, so zero-count bins get weight 1 rather than zero.

- For a background region where we're estimating a per-pixel rate,
  the physically correct estimator is the sample mean (low-count and
  high-count bins contribute proportionally to their true rate),
  **not** the inverse-variance mean.

In other words, `ErrorWeighting=True` is only appropriate if pixels
are *independent estimates of the same underlying quantity, with
heterogeneous variances not tied to the signal magnitude*.  For
Poisson-limited count data, it is statistically biased toward
low-count bins.
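
The bias is easy to demonstrate numerically.  A minimal sketch in
pure Python (Knuth's Poisson sampler; the rate and sample count are
arbitrary), comparing the two estimators RefRoi offers:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
true_rate = 4.0                       # hypothetical per-pixel background rate
counts = [poisson(true_rate, rng) for _ in range(10_000)]

# ErrorWeighting=False: plain mean, unbiased for a uniform Poisson rate
uniform_mean = sum(counts) / len(counts)

# ErrorWeighting=True: inverse-variance mean with E^2 ≈ Y, using the
# zero-variance fallback weight of 1 for zero-count bins (as in RefRoi)
weights = [1.0 / c if c > 0 else 1.0 for c in counts]
weighted_mean = sum(w * c for w, c in zip(weights, counts)) / sum(weights)

print(f"uniform  = {uniform_mean:.2f}")   # close to true_rate
print(f"weighted = {weighted_mean:.2f}")  # biased well below true_rate
```

The weighted mean lands well below the true rate: exactly the
low-count bias described above, in miniature.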

**This is why `quicknxsv2` hardcodes `ErrorWeightedBackground=False`
while `mr_reduction` inherits the MRR default `True`.**  The
quicknxsv2 author made a physics-informed choice that was never
propagated into mr_reduction's default, or into MRR's default.  Two
consumers produce measurably different R(Q) at low S/N because of
this single inherited default.

### The `AverageOverIntegratedAxis` knob

MRR never sets `AverageOverIntegratedAxis`, so it defaults to `false`.
If any future consumer sets it (perhaps in a different reduction
workflow), an extra division by the number of integrated-axis pixels
is applied on top.  This is *not* exercised on the MR path — worth
a comment in any consolidation library saying "don't."

### The flat-index assumption

Inside the inner loop:

```cpp
int index = integrate_y ? m_nYPixel * i + j : m_nYPixel * j + i;
```

This assumes a **specific pixel-to-spectrum-index mapping**: the
workspace's spectra are laid out with stride `NYPixel` in the X
direction.  This is true for MR and LR instruments (where the IDF
maps pixels to spectra in raster order).  Nothing enforces it:
`CommonBinsValidator` on the input only guarantees shared bin
boundaries, not a particular pixel ordering.  So if the MR
instrument definition file (IDF) ever changes pixel ordering (e.g.
to optimize detector reads), this hardcoded arithmetic would
silently produce the wrong integration.  No runtime check.
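
A toy illustration of the layout assumption, with a hypothetical
4×3 detector and one "TOF bin" per pixel: when spectra really are in
raster order with stride `NYPixel`, the flat index reproduces a
direct 2D row sum; any other ordering silently would not.

```python
NX, NY = 4, 3  # hypothetical tiny detector

# counts[x][y]: one number per pixel (a single "TOF bin")
counts = [[10 * x + y for y in range(NY)] for x in range(NX)]

# Workspace spectra in raster order, the layout RefRoi assumes:
# pixel (x=i, y=j) lives at flat spectrum index NY * i + j
spectra = [counts[x][y] for x in range(NX) for y in range(NY)]

def integrate_y(ymin, ymax):
    # mirrors the IntegrateY=True branch of RefRoi's flat index
    return [sum(spectra[NY * i + j] for j in range(ymin, ymax + 1))
            for i in range(NX)]

# full-band integration matches a direct 2D row sum
assert integrate_y(0, NY - 1) == [sum(row) for row in counts]
```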

### The Q conversion

When `ConvertToQ=true` (MRR never sets this — its `constant_q` branch
instead bypasses RefRoi and computes Qz in Python):

```cpp
XOut0[t] = 4.0 * M_PI * sin(theta * M_PI / 180.0) / XIn0[t_index]
```

A wavelength → Q axis transform using the SINGLE scattering angle
passed in.  The X-axis is also **reversed** (`t_index = size - 1 - t`)
because wavelength and Q are inversely related.  This is a perfectly
fine single-angle approximation but is not used by MRR today.
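
The same transform can be sketched in Python (the wavelength grid
and angle are hypothetical), showing the axis reversal that keeps
the output Q axis ascending:

```python
import math

def wavelength_to_q(wavelengths, theta_deg):
    # single-angle transform Q = 4*pi*sin(theta)/lambda, with the axis
    # reversed because wavelength and Q are inversely related
    q = [4.0 * math.pi * math.sin(math.radians(theta_deg)) / lam
         for lam in wavelengths]
    return q[::-1]

lam = [2.0 + 0.5 * i for i in range(8)]   # hypothetical ascending lambda (Å)
q = wavelength_to_q(lam, 0.7)             # hypothetical scattering angle (deg)
assert all(a < b for a, b in zip(q, q[1:]))  # Q ascending after reversal
```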

### How MRR calls RefRoi (twice per cross-section, when subtracting background)

From `process_data` (MagnetismReflectometryReduction.py:787-828):

1. **Background average** (if `SubtractSignalBackground=True`):
   ```
   average = RefRoi(
       IntegrateY=True, NXPixel=NX, NYPixel=NY, ConvertToQ=False,
       XPixelMin=bck_min, XPixelMax=bck_max,   # X-ROI: background region
       YPixelMin=low_res_min, YPixelMax=low_res_max,
       ErrorWeighting=<from MRR property>,
       SumPixels=True, NormalizeSum=True,
   )
   ```
   Output: a single spectrum (one value per TOF bin) giving the mean
   background per pixel inside the (bck_min..bck_max,
   low_res_min..low_res_max) box.  Because this single spectrum is
   broadcast into the subtraction, it acts as a uniform per-pixel
   background.

2. **Signal integrated over low-res band**:
   ```
   signal = RefRoi(
       IntegrateY=True, NXPixel=NX, NYPixel=NY, ConvertToQ=False,
       YPixelMin=low_res_min, YPixelMax=low_res_max,
       # (no X-ROI restriction — all x-pixels)
       # SumPixels=False → NXPixel spectra in the output
   )
   ```
   Output: `NX` spectra, each integrated over the low-res band.
   Each spectrum is a per-x-pixel TOF histogram.

3. `Minus(signal, average)` — subtracts the broadcast background
   from the per-pixel signal.

When `SubtractSignalBackground=False` (rare), only step 2 runs.
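
The two-call-plus-`Minus` pattern boils down to: one averaged
background spectrum, broadcast across every per-pixel signal
spectrum.  A sketch with hypothetical toy numbers:

```python
# hypothetical: 5 per-x-pixel signal spectra, 4 TOF bins each (step 2 output)
signal = [[float(10 * x + t) for t in range(4)] for x in range(5)]

# step 1 output: ONE averaged background spectrum, 4 TOF bins
background = [1.0, 2.0, 1.5, 1.0]

# step 3, Minus(signal, average): the single background spectrum is
# broadcast, i.e. subtracted from every per-x-pixel spectrum alike
subtracted = [[y - b for y, b in zip(spec, background)] for spec in signal]

assert subtracted[0][0] == -1.0   # 0 - 1
assert subtracted[4][3] == 42.0   # 43 - 1
```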

### Scientific effect summary

RefRoi is where the "peak vs background" distinction first bites.
Its `ErrorWeighting` knob is the single most consequential default
in the entire MR reduction because it silently changes the
interpretation of Poisson-limited background noise.  The hackathon
should (1) agree on one value for this, (2) propagate the agreed
value through both consumers, and (3) ideally upstream the
discussion to the Mantid MR team so MRR's default matches the
consensus.

## 10. Algorithms shipped by mr_reduction / quicknxsv2 — "Mantid adjacents"

`SingleReadoutDeadTimeCorrection` is a `PythonAlgorithm` subclass
@@ -655,33 +883,223 @@ default applies.

### The parameter defaults that genuinely differ in scientific effect

> **Who is "disagreeing."**  The two *consumers* (codebase-level) are
> `quicknxsv2` (the interactive GUI — one process, one user's
> reductions, with MRR's output post-processed via `Scale`) and
> `mr_reduction` (the autoreduce library — batch processing triggered
> by the data acquisition system, writing into
> `/SNS/REF_M/IPTS-.../shared/autoreduce/`).  They share neither code
> nor defaults when calling MRR.  The disagreement is *silent*: a
> scientist who compares the same run's `.dat` file from the
> autoreduce output to the `.dat` file saved from the GUI will see
> numerically different R(Q) curves.  None of the differences is
> wrong per se — each is a defensible choice — but the *absence of
> agreement* is the debt.
Reading the matrix highlights **four MRR-property defaults and two
other controls where the consumers disagree, with a real scientific
consequence**, plus one naming inversion (Tension 5) where the values
agree today but the spelling invites future error.  For each, I state
the physics behind the choice, the observable effect, and who sees it:

---

**Tension 1 — `ErrorWeightedBackground` (MRR property)**

| | Value | MRR behavior |
|---|---|---|
| quicknxsv2 (both call sites) | explicitly `False` | Plain arithmetic mean of per-pixel background, error = sqrt(Σ E²)/N |
| mr_reduction | inherits MRR default `True` | Inverse-variance weighted mean of background |

- **Physics:** see § 9a above. Inverse-variance weighting is biased
  toward low-count bins in Poisson-limited data. Plain mean is the
  right estimator for a uniform rate.  The `True` default in MRR is
  a questionable choice for REF_M's noise regime.
- **Observable:** background estimate shifts by a few percent at
  low signal-to-noise; the absolute R(Q) shifts accordingly at
  low-reflectivity (high-Q) bins — precisely where science decisions
  are usually made.
- **Who sees it:** every autoreduce run vs. every GUI-reduced run.
- **Hackathon decision needed:** which value is correct?  The
  quicknxs choice `False` is the safer physics call for Poisson
  noise; mr_reduction's inherited `True` is the MRR default from
  2018 and has never been scrutinized.

---

**Tension 2 — `CropFirstAndLastPoints` (MRR property)**

| | Value | MRR behavior |
|---|---|---|
| quicknxsv2 NexusData path (data_set.py:862) | explicitly `False` | Q range extends fully to edge of computed bins |
| quicknxsv2 CrossSectionData path (data_set.py:571) | inherits MRR default `True` | First and last non-zero Q bins are removed |
| mr_reduction | inherits MRR default `True` | Same as above |

- **Physics:** the first and last Q bins of a rebinned TOF→Q
  histogram can have poor statistics due to edge binning
  (partial-bin contributions).  Cropping them is a defensible
  safety margin but throws away real data if the binning is uniform
  enough.
- **Observable:** the Q range of the output reflectivity curve differs
  by 2 bins between the NexusData path and everyone else.  For
  auto-stitching, the loss of edge bins changes the overlap regions
  between adjacent runs, which can propagate into the per-run scale
  factors.
- **Who sees it:** any user who compares GUI and autoreduce curves,
  or who exports from the GUI and expects the autoreduce-comparable
  output.
- **Hackathon decision needed:** pick one; ideally `False` (don't
  throw away data — the `CleanupBadData` path already handles truly
  bad points).

---

**Tension 3 — `RoundUpPixel` (MRR property)**

| | Value | MRR behavior |
|---|---|---|
| quicknxsv2 NexusData path | explicitly `False` | In constant-Q branch, `x_distance = (x_pixel_map - ref_pix - 0.5) * pixel_width` |
| quicknxsv2 CrossSectionData path | inherits MRR default `True` | `x_distance = (x_pixel_map - round(ref_pix)) * pixel_width` |
| mr_reduction | inherits MRR default `True` | Same as above |

- **Physics:** `ref_pix` (specular pixel) is typically a fractional
  number like 126.9.  `RoundUpPixel=True` rounds this to integer
  127; `False` keeps the fraction and also shifts by a half-pixel
  offset to account for pixel-center vs. pixel-edge convention.
- **Observable:** only matters when `ConstantQBinning=True` (the
  unusual branch).  If the scientist uses the usual TOF-path, this
  default is ignored.  But when the constant-Q branch is used, the
  per-pixel `theta_f` calculation shifts by ~0.3 pixels worth of
  angle, which is ~0.003° for REF_M — measurable at high-Q.
- **Who sees it:** any user who enables `ConstantQBinning` in the
  GUI.  Autoreduce defaults `const_q_binning=False` so is unaffected
  unless the autoreduce config is changed.
- **Hackathon decision needed:** the fractional-with-offset form
  (`False`) is more physically correct; align on it.
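
The two `x_distance` formulas from the table, evaluated side by side
with a hypothetical `ref_pix` and pixel width, show the sub-pixel
shift directly:

```python
pixel_width = 0.0007   # hypothetical pixel pitch, m
ref_pix = 126.9        # fractional specular pixel
x_pixel = 130

# RoundUpPixel=True: integer reference pixel
d_true = (x_pixel - round(ref_pix)) * pixel_width
# RoundUpPixel=False: fractional reference with half-pixel offset
d_false = (x_pixel - ref_pix - 0.5) * pixel_width

shift_pixels = (d_false - d_true) / pixel_width
print(shift_pixels)   # the fractional-pixel disagreement between the branches
```

For this `ref_pix` the branches disagree by 0.4 of a pixel, the
scale of shift the `theta_f` discussion above refers to.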

---

**Tension 4 — `AcceptNullReflectivity` (MRR property)**

| | Value | MRR behavior |
|---|---|---|
| quicknxsv2 (both paths) | explicitly `True` | Empty-intensity cross-sections return zero-filled curve |
| mr_reduction | inherits MRR default `False` | `raise RuntimeError("The reflectivity is all zeros: check your peak selection")` |

- **Physics:** an all-zero reflectivity curve is meaningful in
  polarized reflectometry — e.g. the On-On cross-section of a
  symmetric system may have zero intensity.  Treating that as an
  error is wrong; treating it as valid data is correct.
- **Observable:** **failure-mode disagreement**, not a numerical
  one.  For a run with a legitimately empty cross-section:
  - quicknxsv2 shows it on the plot as zero (possibly with a small
    shim added so matplotlib can draw a flat line — see `_shift_empty_reflectivity_curve`).
  - mr_reduction errors out and the whole `ReductionProcess` for
    that run fails unless wrapped in try/except.
- **Who sees it:** scientists running polarized experiments.
  Autoreduce was known to fail on null cross-sections until the
  mr_reduction consumer added its own try/except workarounds.
- **Hackathon decision needed:** `True` is the clearly correct
  value.  mr_reduction should pass it explicitly.
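
The failure-mode fork fits in a few lines.  The error string is
MRR's actual message quoted in the table; the function name and
shape are illustrative, not MRR's implementation:

```python
def check_reflectivity(refl, accept_null_reflectivity):
    # mirrors the AcceptNullReflectivity branch: an all-zero curve either
    # passes through (quicknxsv2's choice) or aborts the reduction
    if not any(refl) and not accept_null_reflectivity:
        raise RuntimeError(
            "The reflectivity is all zeros: check your peak selection")
    return refl

zeros = [0.0] * 10
assert check_reflectivity(zeros, accept_null_reflectivity=True) == zeros
try:
    check_reflectivity(zeros, accept_null_reflectivity=False)
except RuntimeError:
    pass  # mr_reduction's inherited default lands here
```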

---

**Tension 5 — `UseSANGLE` (MRR property)**

| | Value | MRR behavior |
|---|---|---|
| quicknxsv2 | `not conf.use_dangle`, and the GUI defaults `use_dangle=False` → so MRR receives `UseSANGLE=True` by default | Use SANGLE (sample angle encoder) for θ |
| mr_reduction | `self.use_sangle`, defaults `True` in `ReductionProcess.__init__` | Use SANGLE |

*Actually the same default.*  But the double negation is confusing:
`use_dangle=False` in the quicknxs Configuration means "don't trust
DANGLE", which translates to `UseSANGLE=True`.  The sign flip is
buried in the call site.  **The risk is not current disagreement
but future refactor error** — anyone reading the quicknxs
`Configuration.use_dangle = False` and assuming it means
`UseSANGLE=False` at the MRR call level is wrong.  This variable
inversion is a debt item in its own right.

- **Physics:** SANGLE = sample stage rotation (encoder on the stage).
  DANGLE = detector arm angle.  Ideally `θ_specular = SANGLE = DANGLE/2`.
  In practice encoders disagree by fractions of a degree.  "Trust
  DANGLE" means derive θ from `(DANGLE - DANGLE0)/2 + pixel offset`;
  "Trust SANGLE" means read the sample encoder directly.
- **Hackathon decision needed:** rename the Configuration attribute
  to match MRR's convention (e.g. `use_sangle` instead of
  `use_dangle`, invert the default), so the sign flip disappears.

---

**Tension 6 — `QStep` (MRR property) — three code paths, three values**

| | Value |
|---|---|
| quicknxsv2 CrossSectionData path | **hardcoded `-0.01`** |
| quicknxsv2 NexusData path | `conf.binning_q_step_run` (user-controlled, default `-0.02`) |
| mr_reduction | `self.q_step` (`ReductionProcess` default `-0.02`) |

- **Physics:** a negative QStep requests log-spaced bins (Mantid's
  `Rebin` convention: each bin edge is `(1 + |QStep|)` times the
  previous), so `-0.01` is 2x finer than `-0.02`.  A **factor-of-two
  difference in Q-binning resolution** between two different places
  inside quicknxsv2.
- **Observable:** the CrossSectionData path is invoked by the
  per-cross-section "reduce one" operation (single-run mode), the
  NexusData path is the main grouped call.  Users who exercise only
  the grouped path see `-0.02`; scripts or fall-through paths that
  hit the CSD method see `-0.01`.
- **Who sees it:** unclear.  The CSD path's `reflectivity()` method
  may be legacy code; we did not trace current callers.  If this
  path is dead, the hardcoded `-0.01` should be removed.  If it is
  live, there is an unjustified intra-project inconsistency.
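
A sketch of that log-binning convention (Mantid `Rebin`-style
negative step, i.e. each edge grows by a factor `1 + |step|`; the Q
range below is hypothetical) makes the factor of two concrete:

```python
def log_bin_edges(qmin, qmax, qstep):
    # negative step => logarithmic binning: x_{i+1} = x_i * (1 + |qstep|)
    assert qstep < 0, "a positive step would mean linear binning"
    edges = [qmin]
    while edges[-1] < qmax:
        edges.append(edges[-1] * (1.0 + abs(qstep)))
    return edges

n_fine = len(log_bin_edges(0.005, 0.1, -0.01)) - 1    # CSD path
n_coarse = len(log_bin_edges(0.005, 0.1, -0.02)) - 1  # everyone else
print(n_fine, n_coarse)   # roughly twice as many bins on the CSD path
```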

---

**Tension 7 — post-hoc QuickNXS compat scaling**

Not an MRR property, but a consequential post-hoc multiplier:

| | Applied? | Formula |
|---|---|---|
| quicknxsv2 NexusData path (data_set.py:890-897) | always | `(Δnorm_x · Δnorm_y / Δpeak · Δlow_res) · 0.005/sin(θ)`, with `+1.0` added to each `max` log value |
| quicknxsv2 CrossSectionData path (data_set.py:620-624) | always | **Same formula WITHOUT `+1.0`** |
| mr_reduction `quicknxs_scaling_factor` in reflectivity_output.py:99-115 | only when `quicknxs_mode=True` in the generated reproducibility script | Same formula WITH `+1.0` |

- **Physics:** this factor scales the MRR output to match the
  historical QuickNXS v1 convention (`0.005/sin(θ)` is a normalization
  to a fixed reference wavelength).  It is the "tax" for backward
  compatibility with users' archival .dat files.
- **Observable:** the `+1.0` / no-`+1.0` inconsistency between the
  two quicknxs code paths is a **~1% scale-factor difference between
  single-run reductions and grouped reductions** — and is clearly
  accidental.
- **Who sees it:** anyone stitching the output across runs will
  see the scale-factor discontinuity between neighbouring Q-ranges
  if one was reduced through each path.  In practice the grouped
  path dominates, but the stale CSD path lurks.
- **Hackathon decision needed:** make the scaling a single named
  function in one file.  Decide `+1.0` or not.  Propagate everywhere.

---

**Summary of the tensions**

The disagreements break into three archetypes:

1. **Default-inheritance drift** (Tensions 1, 2-a, 3, 4): one
   consumer passes the parameter explicitly, the other lets MRR's
   default apply.  Caused by the absence of a shared
   parameter-construction function.
2. **Intra-project drift** (Tensions 2-b, 6, 7): quicknxs has two
   internal paths that disagree with each other.  Caused by lack of
   code consolidation *within* quicknxsv2, not between projects.
3. **Naming/semantic drift** (Tension 5): a UI flag whose name is
   the inverse of the MRR property it ends up controlling.  Caused
   by local-convenience naming at the UI layer without discipline
   toward the physics or the algorithm's property contract.

The hackathon needs to resolve each of these, and more importantly
needs to **institutionalize the resolution** so they don't
resurface.  A single function with an immutable typed parameter
object is the mechanism.
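
As a sketch of that mechanism (the class name, attribute names, and
the agreed values are all illustrative, not decided):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MRRDefaults:
    """Single source of truth for the disputed MRR parameters.

    Values below are placeholders for whatever the hackathon agrees;
    both consumers would build their MRR kwargs from this object.
    """
    error_weighted_background: bool = False   # Tension 1
    crop_first_and_last_points: bool = False  # Tension 2
    round_up_pixel: bool = False              # Tension 3
    accept_null_reflectivity: bool = True     # Tension 4
    use_sangle: bool = True                   # Tension 5 (no name inversion)
    q_step: float = -0.02                     # Tension 6

    def as_mrr_kwargs(self) -> dict:
        # one place where snake_case config maps to MRR property names
        return {
            "ErrorWeightedBackground": self.error_weighted_background,
            "CropFirstAndLastPoints": self.crop_first_and_last_points,
            "RoundUpPixel": self.round_up_pixel,
            "AcceptNullReflectivity": self.accept_null_reflectivity,
            "UseSANGLE": self.use_sangle,
            "QStep": self.q_step,
        }

defaults = MRRDefaults()
kwargs = defaults.as_mrr_kwargs()
```

`frozen=True` means an accidental per-consumer override raises
immediately instead of silently reintroducing the drift.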

### The post-hoc QuickNXS compatibility scaling