This project is mirrored from https://github.com/mantidproject/mantid.git.
  1. Feb 02, 2020
  2. Jan 29, 2020
  3. Jan 28, 2020
    • Add switch to allow the same tracks to be reused for different lambdas · 504bdb4f
      Danny Hindson authored
      A switch has been added to the MCAbsorptionStrategy class that allows a pair
      of before-scatter/after-scatter tracks to be used to calculate the attenuation
      for a range of different lambda values. This will improve the speed of the Monte
      Carlo calculation. The calculation currently sets the switch to reuse the tracks
      without exposing this in the algorithm parameters - I've mainly left the "old"
      approach in place in case any further wavelength-dependent effects are added to
      the absorption calculation, e.g. beam spreading.
      
      In order to add the reuse while retaining the original behaviour (via the switch),
      I had to reverse the order of the event and lambda loops. The previous code looped
      over lambda with an inner loop over events. The event loop is now the outer loop, so
      even with the switch set not to reuse tracks the results will change slightly if
      the simulation isn't converged.
      
      The function MCInteractionVolume::calculateAbsorption has been split into two:
      - track generation is done in calculateBeforeAfterTrack
      - the absorption calculation is done in calculateAbsorption
      
      The relevant unit and system tests have been adjusted. In particular, the tests in
      MCInteractionVolumeTest have been split out so that the track generation and absorption
      calculation are tested separately.
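      
      A minimal Python sketch of the reordered loops described above, assuming illustrative
      helper names; it is not the actual C++ implementation in MCAbsorptionStrategy or
      MCInteractionVolume, just the idea of reusing one track pair across wavelengths.
      
          import math
          import random
      
          def generate_before_after_track():
              # placeholder for MCInteractionVolume::calculateBeforeAfterTrack:
              # build the before-scatter/after-scatter path lengths for one scatter point
              return (random.random(), random.random())
      
          def calculate_absorption(tracks, lam):
              # placeholder for MCInteractionVolume::calculateAbsorption:
              # wavelength-dependent attenuation along the given track pair
              before, after = tracks
              return math.exp(-(before + after) * lam)
      
          def simulate(n_events, lambdas, reuse_tracks=True):
              factors = [0.0] * len(lambdas)
              for _ in range(n_events):              # outer loop: Monte Carlo events
                  tracks = None
                  for i, lam in enumerate(lambdas):  # inner loop: wavelengths
                      if tracks is None or not reuse_tracks:
                          tracks = generate_before_after_track()
                      factors[i] += calculate_absorption(tracks, lam)
              return [f / n_events for f in factors]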
  4. Jan 21, 2020
    • Switch to JSON Parser for SANS State · dab9e698
      David Fairbrother authored
      Switches to using the JSON library for parsing SANS state objects. This
      provides numerous advantages:
      
      - We do not need to maintain a custom (de)serializer
      - JSON is a documented standard
      - We can switch to type hinting in Python 3; currently a significant
      amount of CPU time is spent re-verifying typed params
      - History becomes less brittle and will work from top-level algorithms
      - Allows new code to be written in a more Pythonic way (e.g. not forced
      to use class level variables for them to be serialized)
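      
      A minimal sketch of the idea, assuming a made-up state class rather than the real
      SANS state objects: the standard json module replaces a hand-written (de)serializer.
      
          import json
      
          class WavelengthState:
              # illustrative stand-in for a SANS state object, not Mantid's real classes
              def __init__(self, wavelength_low=2.0, wavelength_high=10.0):
                  self.wavelength_low = wavelength_low
                  self.wavelength_high = wavelength_high
      
          def serialize(state):
              return json.dumps(vars(state))
      
          def deserialize(text):
              return WavelengthState(**json.loads(text))
      
          original = WavelengthState(1.5, 12.5)
          assert vars(deserialize(serialize(original))) == vars(original)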
  5. Jan 20, 2020
    • Improve sampling of scatter points in absorption correction simulation · e2af417f
      Danny Hindson authored
      The sampling is now performed according to the volume of each part of
      the sample/environment that intersects the beam profile.
      
      The following changes have been made:
      
      a) the function IObject::generatePointInObject has been modified so that it returns
      false rather than raising an exception if it fails to generate a point inside
      the object that is also in the active region. This is more efficient when it is
      being called with a maxAttempts value of 1 in an attempt to fairly sample the
      scatter points among the sample and environment components. This involved a change
      in IObject and various child classes (MeshObject, CSGObject, MeshObject2D, Container).
      
      b) the code that calls IObject::generatePointInObject and cycles through the
      various parts of the environment/sample has been moved from SampleEnvironment into
      MCInteractionVolume so that the sample can be included. There is a new function
      MCInteractionVolume::generatePoint that randomly generates a scatter point across the
      sample and environment components (a short sketch of this selection scheme follows at
      the end of this message). Possibly the SampleEnvironment class could be retired
      entirely and just replaced by a vector of IObject items attached to the sample.
      
      c) change CSGObject::generatePointInObject to stop calling the fallback method when the
      maxAttempts parameter equals 1. The fallback method always returns a point if the object's
      bounding box is inside the active area, which doesn't produce the required sampling across
      the environment components. This change has modified the random number sequences used in
      various tests - including the "sample only" tests.
      
      d) added some logging to the simulation to show where the scatter points
      occurred. This shows that for Pearl around 6% of the scatter points are
      in the sample, which is less than the 50% previously assumed in the code.
      
      e) Several changes to the unit tests (MCInteractionVolumeTest.h, MonteCarloAbsorptionTest.h,
      DirectILLSelfShieldingTest.py). The updated sampling means that the absorption corrections
      are slightly different from before for cases with a sample + environment.
      
      For DirectILLSelfShieldingTest, an extra parameter has been added to the underlying algorithm
      (DirectILLSelfShielding) so that this test can continue to use 300 events per point while
      ILLDirectGeometryReductionTest can use 5000 events per point.
      
      f) some changes to system tests (ILLDirectGeometryReductionTest, complexShapeAbsorptions).
      
      The calculation used in ILLDirectGeometryReductionTest wasn't converged (changing the seed
      gave a ~25% change in the output), so the number of events per point has been increased from
      300 to 5000. I didn't increase the number of events further because I didn't want to make the
      runtime of the system test (especially in debug mode) too long.
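      
      A minimal Python sketch, under assumed component names and volumes, of the volume-weighted
      point generation described in (a)-(c) above: pick a component with probability proportional
      to its volume, then make a single bounded attempt to place a point in the active region and
      retry on failure instead of raising. The helpers here are illustrative placeholders, not the
      MCInteractionVolume or SampleEnvironment API.
      
          import random
      
          COMPONENTS = [("sample", 0.4), ("container", 2.1), ("heat_shield", 4.5)]  # (name, volume)
      
          def in_active_region(candidate):
              # placeholder test for "the point also lies inside the beam-intersecting region"
              return candidate[1] < 0.5
      
          def generate_point(rng, max_attempts=1000):
              total = sum(volume for _, volume in COMPONENTS)
              for _ in range(max_attempts):
                  # choose a component with probability proportional to its volume
                  pick = rng.uniform(0.0, total)
                  for name, volume in COMPONENTS:
                      pick -= volume
                      if pick <= 0.0:
                          break
                  # single attempt (maxAttempts == 1) to place a point in that component;
                  # a miss simply fails the attempt rather than raising, and we retry
                  candidate = (name, rng.random())
                  if in_active_region(candidate):
                      return candidate
              raise RuntimeError("no scatter point generated")
      
          print(generate_point(random.Random(1)))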
  6. Jan 17, 2020
  7. Jan 09, 2020
  8. Jan 06, 2020
  9. Jan 03, 2020
  10. Dec 19, 2019
    • Update slicing test reference file · 19fe85b6
      Gemma Guest authored
      The new reference file has the following changes:
      - The IvsQ_binned results contain slightly different values due to
      binning differences from the original tests, which did a manual Rebin of
      the detectors and monitors workspaces. The new workflow algorithm just
      rebins the detectors in the same way as the monitors (see the short sketch
      at the end of this message). If I hack the script to use the old rebin
      params I get exactly the same results, which confirms that the new script
      is functionally correct.
      
      - The new reference includes some interim workspaces (e.g. IvsQ and
      TOF), which were deleted in the original test scripts. These are standard
      outputs, so I think it's better to include them.
      
      - The test script has been updated to not delete the interim workspaces
      mentioned above. It does, however, now delete the interim TRANS_LAM
      workspaces, because the workflow algorithms currently always output
      these for input workspace groups, even though they shouldn't really when
      debug is not on.
      
      Re #25881
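      
      A minimal sketch of matching detector binning to the monitor binning using Mantid's
      RebinToWorkspace; whether the workflow algorithm uses this exact call internally is
      not stated above, and the workspace names are placeholders.
      
          from mantid.simpleapi import RebinToWorkspace
      
          # rebin the detector workspace onto the monitor workspace's bin boundaries
          detectors_rebinned = RebinToWorkspace(WorkspaceToRebin="detectors",
                                                WorkspaceToMatch="monitors",
                                                OutputWorkspace="detectors_rebinned")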
    • Add option to specify which filter algorithm to use · 8aa2e000
      Gemma Guest authored
      There are differences between the old FilterByTime algorithm and the new
      FilterEvents algorithm. This commit adds an option so the user can
      specify which to use and sets the default to use the old one.
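      
      A rough sketch of that kind of switch, assuming a hypothetical helper and flag name;
      only the original FilterByTime path is shown concretely, since the FilterEvents setup
      (splitter workspaces etc.) is not described above.
      
          from mantid.simpleapi import FilterByTime
      
          def slice_workspace(ws, start, stop, use_filter_events=False):
              # hypothetical wrapper: choose between the old and new filter algorithms
              if use_filter_events:
                  # new path would call FilterEvents here (splitter-based filtering)
                  raise NotImplementedError("FilterEvents path not sketched")
              # default path: the original FilterByTime behaviour, times in seconds
              return FilterByTime(InputWorkspace=ws, OutputWorkspace="sliced",
                                  StartTime=start, StopTime=stop)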
      
      Also, when combining the monitors and detectors into a single workspace, we
      were merging the logs, which meant the proton charge differed from the
      previous behaviour, when scientists were doing this step manually. This
      commit changes MergeLogs to false, in line with the way scientists
      previously did this.
      
      I'm not sure this is really an issue because the final proton charge for
      the workspace is taken from the monitor workspace, which contains the
      unfiltered proton charge. We may want to adjust this in future to be the
      proton charge for the filtered slice.
      
      Re #25881
    • Consolidate code to use new base class · 0467cd8c
      Gemma Guest authored
      Remove duplicate code in the original tests and make them use the new
      base class instead. Also fix some problems with the regenerate functions
      to ensure they use the correct test suite.
      
      Update the slicing test result workspace with the correct workspace
      names (the actual workspaces themselves are the same; only the names
      have changed).
      
      Re #25881
    • Add system tests for ISIS reflectometry preprocessing · 1a69b9a0
      Gemma Guest authored
      - The 'Preprocess' test does 'normal' preprocessing, which involves
      loading runs and preparing the transmission workspace.
      - The 'Slicing' test performs time slicing of the input event workspace before
      performing the reduction.
      
      Re #25881
    • Change tests to use new workflow algorithm · 030b2597
      Gemma Guest authored
      Direct use of ReflectometryReductionOneAuto has been superseded by the
      new wrapper algorithm ReflectometryISISLoadAndProcess. The system tests
      have been updated to use this wrapper instead.
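      
      A minimal sketch of calling the wrapper instead of ReflectometryReductionOneAuto
      directly; the property names and run numbers shown are assumptions for illustration,
      not taken from the tests above.
      
          from mantid.simpleapi import ReflectometryISISLoadAndProcess
      
          # load the run, preprocess it and run the reduction in one call
          ReflectometryISISLoadAndProcess(InputRunList="13460",
                                          FirstTransmissionRunList="13463",
                                          ThetaIn=0.7)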
      
      Re #25881
  11. Dec 13, 2019
  12. Dec 11, 2019
  13. Dec 09, 2019
  14. Dec 06, 2019
  15. Dec 03, 2019
  16. Nov 27, 2019
  17. Nov 26, 2019
  18. Nov 22, 2019
  19. Nov 21, 2019
  20. Nov 18, 2019
  21. Nov 14, 2019
  22. Nov 12, 2019
  23. Nov 11, 2019
  24. Nov 08, 2019
  25. Nov 07, 2019
  26. Nov 06, 2019
    • Update system test results · 765f23a1
      Martyn Gigg authored
      The flight paths have been updated slightly, so the d-spacing-related
      results have also changed slightly. Confirmed with C Ridley that this
      change to the old results is acceptable.
  27. Oct 30, 2019