Commit 95320c7a authored by Vacaliuc, Bogdan

back-link existing docs to the Mantid deep-dive



- 03: correct MRR from "C++" to "Python" (it's PyAlgorithm), add
  deep-dive pointer, expand the minimum-viable-contract section
  to list the other Mantid algorithms (MRFilterCrossSections,
  MRInspectData, MRGetTheta) and the three-way
  SingleReadoutDeadTimeCorrection registration.
- 03: amend Mantid version mismatch section to note that MR Python
  algorithms are byte-identical between 6.14 and HEAD - the pin
  resolution is packaging, not science.
- 00: add new risk bullet for default-value disagreements between
  consumers; add the SingleReadoutDeadTimeCorrection three-way
  registration as a separate risk.
- 08: renumber R1 as "packaging, not math", add R9 (silent default
  disagreements) and R10 (three-way algorithm registration).
  Amend "investigation did not do: install Mantid" to reflect that
  we DID read the Mantid source.
- README: add 10 to the reading map; add to the one-slide version
  the fact that MRR is readable Python, byte-stable between Mantid
  versions, and that the real consolidation risk is the
  default-value surface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent 2ee37f53
+32 −8
@@ -89,23 +89,47 @@ decision matrix.

## Main risks

1. **Mantid version drift (packaging only).** quicknxsv2 pins
   `mr_reduction==2.17.0` (Mantid 6.14) while `mr_reduction/next` has
   advanced to 6.15.  Important clarification from the Mantid-source
   audit (see [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 0):
   **the MR Python algorithms themselves are byte-identical between
   6.14 and 6.15** — no commits on any MR file in that range.  So
   this is a **packaging** problem, not a reduction-math problem;
   resolve it first for installability reasons, but it does not
   block the consolidation.
2. **Silent default-value disagreements inside the MRR call.** The
   two consumers pass different defaults for `ErrorWeightedBackground`,
   `CropFirstAndLastPoints`, `RoundUpPixel`, `AcceptNullReflectivity`,
   and `UseSANGLE` — each of which changes the reduced R(Q) curve
   or its failure modes.  On top of that, a post-hoc QuickNXS compat
   scaling is always applied in quicknxsv2 but only conditionally in
   mr_reduction.  Unifying MRR's parameter construction must pin
   these choices, not leave them to defaults.  Full matrix in
   [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 11.
3. **Two parallel `data_info.py` implementations** — 413 vs. 1101
   lines.  Neither is a superset.  The differences encode real
   behavioral differences (probably use-case specialization).  These
   must be reconciled by tests, not by diff.  Note that a third
   "canonical" `DataInfo` implementation lives inside Mantid itself
   as `MRInspectData.DataInfo` — the hackathon should evaluate
   whether adopting that via sample-log reads would let both
   consumers delete their copies.
4. **Threading model** — quicknxsv2 does **not** use `QThread` or any
   worker-thread pattern for reduction.  Any move that keeps reduction
   synchronous in the UI thread will ship the same UI-blocking problem.
   Threading is a separate effort orthogonal to modularization but
   may as well be designed-in now.
5. **Autoreduce/livereduce contract** — `mr_reduction` is copied into
   `/SNS/REF_M/shared/autoreduce` and `/SNS/REF_M/shared/livereduce`
   at release time.  Changing `ReductionProcess`'s signature or import
   paths breaks both deployment targets silently until a run fails.
6. **`SingleReadoutDeadTimeCorrection` registered by three packages.**
   `mr_reduction`, `quicknxsv2`, and `lr_reduction` each subscribe an
   algorithm with the same name into Mantid's AlgorithmFactory.  Last
   import wins.  The bodies are near-identical so nothing has broken
   yet, but it is accidental, not principled.  Pick one, delete the
   others.
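The default-disagreement risk (item 2) can be seen in miniature with a toy sketch — hypothetical function and parameter names, not the real MRR signature — where one caller trusts an algorithm's defaults and the other silently overrides them:

```python
# Illustrative toy (NOT the actual Mantid API): two consumers calling the
# same algorithm, one relying on defaults, one overriding them.

def reduce_toy(counts, error_weighted_background=True,
               crop_first_and_last_points=True):
    """Stand-in for an algorithm whose defaults are scientifically meaningful."""
    data = list(counts)
    if crop_first_and_last_points:
        data = data[1:-1]          # drop the edge points
    background = (sum(data) / len(data)) if error_weighted_background else min(data)
    return [c - background for c in data]

# Consumer A (mr_reduction-style): inherits the defaults.
a = reduce_toy([5, 10, 12, 11, 4])

# Consumer B (quicknxsv2-style): forces both off — silently different curve,
# even a different length, from the same input.
b = reduce_toy([5, 10, 12, 11, 4], error_weighted_background=False,
               crop_first_and_last_points=False)

assert a != b  # same algorithm, same data, divergent reduced output
```

The divergence is invisible at either call site alone; it only appears when the two parameter sets are diffed side by side, which is exactly what the deep-dive matrix does.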

## Out-of-scope insights worth noting

+41 −10
@@ -91,13 +91,22 @@ ws = api.MagnetismReflectometryReduction(
)
```

`MagnetismReflectometryReduction` is a **Python** `PythonAlgorithm` in
Mantid (registered via `AlgorithmFactory.subscribe` at the bottom of
`Framework/PythonInterface/plugins/algorithms/MagnetismReflectometryReduction.py`,
859 LOC, pure Python — not C++).  The only C++ dependency it has is
the `RefRoi` primitive.  The
`mr_reduction.mr_reduction.ReductionProcess.reduce_workspace_group()`
calls the **same** Python algorithm with its own (similar but not
identical) parameter construction — *this is the real duplication* in
the reduction pipeline.  Two separate parameter-construction code
paths call the same Python algorithm.

> **Deep-dive:** See [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md)
> for the internal PyExec flow, the 53-property surface, the four
> default values that consumers silently disagree on, and the
> QuickNXS-compat post-scaling that is applied in one consumer and
> not the other.

### Implication

@@ -105,7 +114,9 @@ Whichever modularization strategy wins, **the parameter-construction
logic for `MagnetismReflectometryReduction` must be unified in one
place**.  A back-end library's job is to provide this parameter
construction as a single function, so both the UI and the scripted
(autoreduce) path call it the same way.  The deep-dive doc above
highlights **four default values** that the two current consumers
pass differently — these are the concrete unification targets.
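A minimal sketch of what that single function could look like — the function name and exact signature are hypothetical (no such function exists in either repo today), but the property names mirror the MRR properties discussed above:

```python
# Hypothetical sketch: one parameter-construction function that BOTH the UI
# and the autoreduce path call, so no contested default is left to Mantid.

def build_mrr_parameters(run_ws, *, use_sangle=True,
                         accept_null_reflectivity=False):
    """Pin every contested default in one place.

    The literal True/False values below are placeholders — which value
    actually wins for each property is a scientific sign-off decision.
    """
    return {
        "InputWorkspace": run_ws,
        # Contested defaults, pinned explicitly rather than inherited:
        "ErrorWeightedBackground": True,
        "CropFirstAndLastPoints": True,
        "RoundUpPixel": True,
        "AcceptNullReflectivity": accept_null_reflectivity,
        "UseSANGLE": use_sangle,
    }

params = build_mrr_parameters("REF_M_12345", use_sangle=False)
# Both consumers would then invoke the algorithm the same way:
#   ws = api.MagnetismReflectometryReduction(**params)
```

The point of the dict-returning shape is that the parameter set becomes inspectable and diffable in tests, independently of a live Mantid environment.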

## mr_reduction coupling — only 4 files, but they're the right ones

@@ -226,6 +237,16 @@ The hackathon must either:

This decision precedes any code-level modularization work.

> **Simplification** — the pin mismatch is not a reduction-math
> problem.  `git log v6.14.0..HEAD` on each of the four MR Python
> algorithms (`MagnetismReflectometryReduction`, `MRFilterCrossSections`,
> `MRInspectData`, `MRGetTheta`) returns **zero commits**.  Whatever
> drift exists between 6.14 and 6.15 is in the primitives (`Rebin`,
> `RefRoi`, `ConvertUnits`, etc.) or elsewhere in Mantid, not in the
> MR reduction itself.  So the pinning resolution is a **packaging**
> decision, not a **science** decision.  See
> [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 0.

## io_orso — duplicated concept, not quite duplicated code

`mr_reduction.io_orso` (523 LOC) handles ORSO I/O using
@@ -283,12 +304,22 @@ from mr_reduction.types import MantidWorkspace
# and a __version__ attribute
```

And the Mantid algorithms it must keep registered:
- `MagnetismReflectometryReduction` (Python, provided by Mantid core,
  not this repo — but see [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md)
  for a much deeper treatment of what this actually does and how the
  two consumers differ in their default-value choices)
- `MRFilterCrossSections`, `MRInspectData`, `MRGetTheta` (Python,
  Mantid core — though quicknxsv2 and mr_reduction currently
  **re-implement** most of their functionality in parallel)
- `SingleReadoutDeadTimeCorrection` (NOT Mantid core — registered by
  mr_reduction, quicknxsv2, and lr_reduction in parallel; last
  import wins, pick one and delete the others)
- `RefRoi` (C++, Mantid Reflectometry framework — MRR's only
  C++ dependency)

This is the *literal* dependency contract.  It's small.  The hard
part is not the contract — it's that the hackathon should build a
*richer* API that also deprecates quicknxsv2's own parallel
implementations.  See `06-modularization-strategies.md` and
[10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 12.
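Because the contract is this small, it can be smoke-tested mechanically. A generic sketch (demonstrated on stdlib modules so it runs anywhere; the `mr_reduction` names listed above would be checked the same way once the package is installed):

```python
# Hedged sketch: verify that a module actually exposes the names a
# dependency contract promises.  Generic helper, shown against stdlib
# modules; swap in "mr_reduction.types" and the contract names above
# in a real environment.
import importlib

def missing_contract_names(module_name, expected_names):
    """Return the subset of expected_names the module does NOT expose."""
    mod = importlib.import_module(module_name)
    return [n for n in expected_names if not hasattr(mod, n)]

# Stdlib demonstration:
assert missing_contract_names("math", ["sqrt", "pi"]) == []
assert missing_contract_names("math", ["sqrt", "no_such_name"]) == ["no_such_name"]
```

A one-line test like this per consumer repo would turn a silent import-path break (risk 5 in the risks doc) into an immediate CI failure.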
+81 −6
@@ -6,16 +6,30 @@ triage on Day 1 of the hackathon).

## Risks that matter

### R1 — Mantid version incompatibility blocks installation (but not the math)

**Symptom.** quicknxsv2 pins `mr_reduction==2.17.0` which requires
Mantid 6.14; mr_reduction's `next` branch requires Mantid 6.15.  A
refactor that tries to share code between them will collide at
install time.

**Likelihood:** certain.  This exists today, at HEAD.

**Impact:** blocks Option 2 and Option 3 of the strategy document
at the *packaging* layer.  Must be resolved before code migration
starts, but it is not a numerical/scientific blocker.

**Updated finding (see
[10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 0):**
`git log v6.14.0..HEAD` on the MR Python algorithms
(`MagnetismReflectometryReduction`, `MRFilterCrossSections`,
`MRInspectData`, `MRGetTheta`) shows **zero commits**.  Between
Mantid 6.14 and current `main`, the reduction math is byte-identical.
So whatever "drift" exists between the two pins is in the generic
Mantid primitives (`Rebin`, `ConvertUnits`, `RefRoi`, etc.) or
elsewhere, not in the MR-specific code.  This means the pin
resolution is purely a packaging decision: pick one version range,
make everyone agree, and the scientific outputs are preserved.

**Mitigation:**

@@ -159,6 +173,62 @@ use is in scope.
  types, not REF_M-specific.  That way the decision to include
  lr_reduction is additive, not a rewrite.

### R9 — The two consumers silently pass different MRR defaults (NEW)

**Symptom.** `quicknxsv2` and `mr_reduction` both call
`MagnetismReflectometryReduction` but pass different values (or
different defaults) for parameters that materially affect R(Q):

- `ErrorWeightedBackground`: quicknxs forces `False`, mr_reduction
  inherits default `True` → different background estimate at low S/N.
- `CropFirstAndLastPoints`: quicknxs NexusData path forces `False`,
  mr_reduction inherits default `True` → different Q range at edges.
- `RoundUpPixel`: same pattern as above.
- `AcceptNullReflectivity`: quicknxs forces `True`, mr_reduction
  inherits default `False` → different *failure mode* on an empty
  cross-section.
- `UseSANGLE`: quicknxs UI default is DANGLE (`use_dangle=True`),
  mr_reduction's default is SANGLE — a **different scattering angle
  for every reduction**.
- Post-hoc QuickNXS compat scale factor — always applied in quicknxs,
  conditionally in mr_reduction (`quicknxs_mode=True`).

**Likelihood:** certain.  These are in the current sources.

**Impact:** Any "library consolidation" that doesn't explicitly pin
these to one set of values will silently change somebody's
reduction.

**Mitigation:**

- Make the parameter-construction function *own* these defaults, not
  leave them to MRR.
- Produce a before/after comparison on a suite of reference runs.
- Have the REF_M scientific lead sign off on which default wins for
  each.
- See [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) § 11 for the full matrix.
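The before/after comparison in the mitigation list can be sketched as a point-by-point curve check — a pure-Python stand-in; a real harness would load the ORSO files produced by both pipelines:

```python
# Toy before/after check: compare the reduced R(Q) curve from the old and
# new default sets on a reference run, within a relative tolerance.
import math

def curves_match(r_old, r_new, rel_tol=1e-6):
    """True iff both R(Q) curves agree point-by-point within rel_tol."""
    if len(r_old) != len(r_new):
        return False   # e.g. CropFirstAndLastPoints changed the Q range
    return all(math.isclose(a, b, rel_tol=rel_tol)
               for a, b in zip(r_old, r_new))

reference = [0.98, 0.51, 0.12, 0.031]
assert curves_match(reference, [0.98, 0.51, 0.12, 0.031])
assert not curves_match(reference, [0.98, 0.51, 0.12, 0.030])  # a flipped default
```

Note the length check: several of the disputed defaults (cropping, null-reflectivity handling) change the *shape* of the output, not just the values, so a naive element-wise diff would crash before it could report anything useful.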

### R10 — `SingleReadoutDeadTimeCorrection` registered by three packages (NEW)

**Symptom.** `mr_reduction`, `quicknxsv2`, and `lr_reduction` each
register an algorithm with the exact same name into Mantid's
`AlgorithmFactory`.  Whichever is imported last silently replaces
the earlier registration.

**Likelihood:** certain.  Import-order dependent today.

**Impact:** Low in practice (the three bodies are near-identical),
but fragile: any divergence between copies leads to non-deterministic
behavior depending on import order.  Debugging a "wrong" dead-time
correction would be very painful.

**Mitigation:**

- Pick one canonical copy (mr_reduction's is the newest).  The
  others `import` from it rather than registering their own.
- Add a registration assertion: "algorithm already registered by
  another package" should log a loud warning at import time.

### R8 — Packaging and release flow not designed for three packages

**Symptom.** The existing SAE/conda-forge release pipeline (ABMC-label,
@@ -304,8 +374,13 @@ approve it? Are there external reviewers (e.g. NSCD, CIS)?
   based on what the code says the markers require, not
   observation.  Day 0 of the hackathon: run them.
3. **Install Mantid.**  No live Mantid environment was created.
   Claims about the **behavior** of the Mantid pipeline come from
   reading the source (including the full Mantid source tree at
   `/media/ssd2/mantid`, not just the call sites in quicknxs /
   mr_reduction) — so semantics claims are *source-grounded*, not
   just inferred.  What was not done: running the pipeline on live
   data with instrumentation to confirm runtime behavior.
   See [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md).
4. **Talk to the scientific team.**  Decisions about "which
   DataInfo defaults are authoritative" must come from the
   scientists.  This document makes recommendations based on
+9 −0
@@ -26,6 +26,7 @@ trusting `docs/developer/` in either upstream repo — both are skeletal
| Get the scientific context (MR, REF_M, ORSO, ROI, cross-sections) | [07-scientific-domain-primer.md](07-scientific-domain-primer.md) |
| See what the hackathon must decide, and known unknowns | [08-risks-and-open-questions.md](08-risks-and-open-questions.md) |
| Look up a term / acronym / file name | [09-glossary-and-pointers.md](09-glossary-and-pointers.md) |
| Open the `MagnetismReflectometryReduction` black box — PyExec flow, inner Mantid primitives, the four silent default-value disagreements between consumers | [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md) |

## One-slide version

@@ -46,6 +47,14 @@ trusting `docs/developer/` in either upstream repo — both are skeletal
  but **"which back-end wins?"** — lift quicknxsv2's `data_handling/`
  into a new library, push everything into `mr_reduction`, or craft a
  neutral `mr_core`.  See [06-modularization-strategies.md](06-modularization-strategies.md).
- The `MagnetismReflectometryReduction` "black box" is actually 859 LOC
  of **Python** (not C++) — readable, patchable, and byte-identical
  between Mantid 6.14 and 6.15.  The real consolidation risk is that
  the two consumers pass **different defaults** for four scientifically
  meaningful parameters (`ErrorWeightedBackground`, `CropFirstAndLastPoints`,
  `RoundUpPixel`, `AcceptNullReflectivity`) plus `UseSANGLE` and a
  post-hoc QuickNXS scaling.  Full treatment in
  [10-mantid-algorithms-deep-dive.md](10-mantid-algorithms-deep-dive.md).

## PDF of this knowledge base