Commit 722da751 authored by Janik Zikovsky


Refs #3984 Took the wiki description from every algorithm's wiki page and placed it in a comment in the CPP file (instead of the header file).
parent 9cf3f998
......@@ -234,15 +234,12 @@ def validate_wiki(args, algos):
wikidoc = find_section_text(lines, "Summary", go_to_end=False, section2="")
if args.show_missing: print wikidoc
# desc = alg._ProxyObject__obj.getWikiDescription()
# # One-time code to add wiki desc to CPP file
# desc = find_section_text(lines, "Description", True, "Introduction")
# # Fallback to the code one
# if len(desc) == 0:
# print "- Wiki Description is missing (in the code)."
# desc = find_section_text(lines, "Description", True, "Introduction")
# if args.show_missing: print desc
# One-time code to add wiki desc to CPP file
desc = find_section_text(lines, "Description", True, "Introduction")
add_wiki_description(algo, desc)
# desc = alg._ProxyObject__obj.getWikiDescription()
# add_wiki_description(algo, desc)
props = alg._ProxyObject__obj.getProperties()
for prop in props:
......
/*WIKI*
This algorithm uses a numerical integration method to calculate attenuation factors resulting from absorption and single scattering in a sample with the material properties given. Factors are calculated for each spectrum (i.e. detector position) and wavelength point, as defined by the input workspace.
The sample is first bounded by a cuboid, which is divided up into small cubes. The cubes whose centres lie within the sample make up the set of integration elements (so you have a kind of 'Lego' model of the sample) and path lengths through the sample are calculated for the centre-point of each element, and a numerical integration is carried out using these path lengths over the volume elements.
Note that the duration of this algorithm is strongly dependent on the element size chosen, and that too small an element size can cause the algorithm to fail because of insufficient memory.
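
The cube subdivision can be pictured with the short Python sketch below. It is only an illustration of the idea, not the algorithm's implementation; the sample shape is represented here by a hypothetical is_inside(x, y, z) predicate and the bounding cuboid by explicit limits.

import numpy as np

def sample_elements(is_inside, bounds, element_size):
    """Return the centre points of the cubes whose centres lie inside the sample."""
    (xlo, xhi), (ylo, yhi), (zlo, zhi) = bounds
    xs = np.arange(xlo + element_size / 2, xhi, element_size)
    ys = np.arange(ylo + element_size / 2, yhi, element_size)
    zs = np.arange(zlo + element_size / 2, zhi, element_size)
    return [(x, y, z) for x in xs for y in ys for z in zs if is_inside(x, y, z)]

# Example: a sphere of radius 1 cm, divided into 1 mm cubes
centres = sample_elements(lambda x, y, z: x * x + y * y + z * z < 0.01 ** 2,
                          [(-0.01, 0.01)] * 3, 0.001)
print(len(centres))   # number of integration elements in the 'Lego' model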
==== Assumptions ====
This algorithm assumes that the (parallel) beam illuminates the entire sample '''unless''' a 'gauge volume' has been defined using the [[DefineGaugeVolume]] algorithm (or by otherwise adding a valid XML string [[HowToDefineGeometricShape | defining a shape]] to a [[Run]] property called "GaugeVolume"). In this latter case only scattering within this volume (and the sample) is integrated, because this is all the detector can 'see'. The full sample is still used for the neutron paths. ('''N.B.''' If your gauge volume is of axis-aligned cuboid shape and fully enclosed by the sample then you will get a more accurate result from the [[CuboidGaugeVolumeAbsorption]] algorithm.)
==== Restrictions on the input workspace ====
The input workspace must have units of wavelength. The [[instrument]] associated with the workspace must be fully defined because detector, source & sample position are needed.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm performs a simple numerical derivative of the values in a sample log.
The 1st-order derivative is simply dy = (y1-y0) / (t1-t0), which is placed in the log at t = (t0+t1)/2.
Higher-order derivatives are obtained by applying the equation above N times. Since this is a simple numerical derivative, you can expect the result to become noisy quickly at higher orders.
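
As a worked illustration of the formula above (a plain Python sketch, not the algorithm's code):

import numpy as np

def log_derivative(t, y, order=1):
    """Apply dy = (y1-y0)/(t1-t0) at t = (t0+t1)/2, repeated 'order' times."""
    for _ in range(order):
        t, y = (t[1:] + t[:-1]) / 2.0, np.diff(y) / np.diff(t)
    return t, y

t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 4.0, 9.0])
print(log_derivative(t, y, order=1))   # midpoints 0.5, 1.5, 2.5 with slopes 1, 3, 5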
*WIKI*/
#include "MantidAlgorithms/AddLogDerivative.h"
#include "MantidKernel/System.h"
#include "MantidKernel/TimeSeriesProperty.h"
......
/*WIKI*
Workspaces contain information in logs. Often these detail what happened to the sample during the experiment. This algorithm allows one named log to be entered.
The log can be either a String, a Number, or a Number Series. If you select Number Series, the current time will be used as the time of the log entry, and the number given in the text will be used as the (only) value.
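
For example, a call from a Mantid Python script might look like the line below; the property names (Workspace, LogName, LogText, LogType) are quoted from memory here and should be checked against the algorithm's property list.

# Assumed property names -- verify against the algorithm's documentation
AddSampleLog(Workspace="ws", LogName="Temperature", LogText="42.5", LogType="Number Series")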
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
The offsets are a correction to the dSpacing values and are applied during the conversion from time-of-flight to dSpacing as follows:
:<math> d = \frac{h}{2m_N} \frac{t.o.f.}{L_{tot} \sin\theta} (1+ \mathrm{offset})</math>
The detector offsets can be obtained from either: an [[OffsetsWorkspace]] where each pixel has one value, the offset; or a .cal file (in the form created by the ARIEL software).
'''Note:''' the workspace that this algorithm outputs is a [[Ragged Workspace]].
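
The conversion applied to each time-of-flight value can be sketched in Python as below. This is only an illustration of the formula (h is the Planck constant and m_N the neutron mass; scipy is used just for these constants), not the algorithm's implementation.

import math
import scipy.constants as const

def tof_to_d(tof_us, L_tot_m, theta_rad, offset):
    """d = h/(2 m_N) * t.o.f. / (L_tot * sin(theta)) * (1 + offset), theta being the Bragg angle."""
    prefactor = const.h / (2.0 * const.m_n)                       # units: m^2/s
    d_metres = prefactor * (tof_us * 1.0e-6) / (L_tot_m * math.sin(theta_rad)) * (1.0 + offset)
    return d_metres * 1.0e10                                      # metres -> Angstrom

print(tof_to_d(tof_us=10000.0, L_tot_m=45.0, theta_rad=math.radians(45.0), offset=0.0))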
==== Restrictions on the input workspace ====
The input workspace must contain histogram or event data where the X unit is time-of-flight and the Y data is raw counts. The [[instrument]] associated with the workspace must be fully defined because detector, source & sample position are needed.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Returns the relative efficiency of the forward detector group compared to the backward detector group. If Alpha is larger than 1, more counts have been collected in the forward group.
This algorithm leaves the input workspace unchanged. To group detectors in a workspace use [[GroupDetectors]].
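
Schematically, the ratio can be pictured as below. This is only an illustration of what a value of Alpha greater than 1 means, not the algorithm's actual calculation (which may apply further corrections).

import numpy as np

def alpha(forward_counts, backward_counts):
    """Ratio of the total counts in the forward group to the backward group."""
    return np.sum(forward_counts) / np.sum(backward_counts)

print(alpha([120, 130, 125], [100, 110, 105]))   # > 1 means more counts in the forward group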
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
*WIKI*/
#include "MantidAlgorithms/ApplyDetailedBalance.h"
#include "MantidKernel/System.h"
#include "MantidKernel/TimeSeriesProperty.h"
......
/*WIKI*
The transmission can be given as a MatrixWorkspace or given directly as numbers. One or the other method must be used.
See [http://www.mantidproject.org/Reduction_for_HFIR_SANS SANS Reduction] documentation for details.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
*WIKI*/
#include "MantidAlgorithms/BinaryOperateMasks.h"
#include "MantidKernel/System.h"
#include "MantidDataObjects/SpecialWorkspace2D.h"
......
/*WIKI*
*WIKI*/
#include "MantidAlgorithms/BlendSq.h"
#include <vector>
#include <math.h>
......
/*WIKI*
See [http://www.mantidproject.org/Reduction_for_HFIR_SANS SANS Reduction] documentation for calculation details.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Calculates the probability of a neutron being transmitted through the sample using detected counts from two monitors, one in front and one behind the sample. A data workspace can be corrected for transmission by [[Divide|dividing]] by the output of this algorithm.
Because the detection efficiency of the monitors can be different, the transmission calculation is done using two runs: one with the sample in place (represented by <math>S</math> below) and a direct run without it (<math>D</math>). The fraction transmitted through the sample, <math>p</math>, is calculated from this formula:
<br>
<br>
<math> p = \frac{S_T}{D_T}\frac{D_I}{S_I} </math>
<br>
<br>
where <math>S_I</math> is the number of counts from the monitor in front of the sample (the incident beam monitor), <math>S_T</math> is the transmission monitor after the sample, etc.
The resulting fraction as a function of wavelength is created as the OutputUnfittedData workspace. However, because of statistical variations it is recommended to use the OutputWorkspace, which is the evaluation of a fit to those transmission fractions. The unfitted data is not affected by the RebinParams or FitMethod properties, but these can be used to refine the fitted data. The RebinParams property is useful when the range of wavelengths passed to CalculateTransmission is different from that of the data to be corrected.
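
The unfitted transmission fraction for each wavelength bin follows directly from the formula above, as in this plain Python sketch (the fit described below is not shown):

import numpy as np

def transmission_fraction(S_I, S_T, D_I, D_T):
    """p = (S_T / D_T) * (D_I / S_I), evaluated per wavelength bin."""
    return (np.asarray(S_T) / np.asarray(D_T)) * (np.asarray(D_I) / np.asarray(S_I))

print(transmission_fraction(S_I=[1000.0, 900.0], S_T=[400.0, 350.0],
                            D_I=[1100.0, 950.0], D_T=[900.0, 800.0]))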
=== Subalgorithms used ===
Uses the algorithm [[linear]] to fit to the calculated transmission fraction.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
See [http://www.mantidproject.org/Reduction_for_HFIR_SANS SANS Reduction] documentation for details.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This is the inverse of the DspacemaptoCal algorithm. The detector offset file created by this algorithm is in the form created by the ARIEL software. The offsets are a correction to the dSpacing values and are applied during the conversion from time-of-flight to dSpacing as follows:
:<math> d = \frac{h}{2m_N} \frac{t.o.f.}{L_{tot} \sin\theta} (1+ \mathrm{offset})</math>
==Usage==
'''Python'''
LoadEmptyInstrument("POWGEN_Definition.xml","POWGEN")
CaltoDspacemap("POWGEN","PG3.cal", "PG3.dat")
'''C++'''
IAlgorithm* alg1 = FrameworkManager::Instance().createAlgorithm("LoadEmptyInstrument");
alg1->setPropertyValue("Filename", "POWGEN_Definition.xml");
alg1->setPropertyValue("OutputWorkspace", "POWGEN");
alg1->execute();
IAlgorithm* alg2 = FrameworkManager::Instance().createAlgorithm("CaltoDspacemap");
alg2->setPropertyValue("InputWorkspace", "POWGEN");
alg2->setPropertyValue("CalibrationFile", "PG3.cal");
alg2->setPropertyValue("DspacemapFile", "PG3.dat");
alg2->execute();
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm can be used to change the time-of-flight bins of a workspace by a specified amount (defined above as the Offset). A possible use of this algorithm is to correct time bins that have been recorded incorrectly.
Optionally, the offset can be applied to only a range of spectra, selected with the IndexMin and IndexMax properties.
The output workspace will be an exact copy of the input workspace except for the changed time bins.
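
In effect, each X value of the selected spectra is shifted by the same constant. A minimal Python sketch of that idea (not Mantid code):

import numpy as np

def change_bin_offset(x_values, offset, index_min=0, index_max=None):
    """Add 'offset' to the time bins of spectra index_min..index_max (inclusive)."""
    x_out = np.array(x_values, dtype=float)
    index_max = len(x_out) - 1 if index_max is None else index_max
    x_out[index_min:index_max + 1] += offset
    return x_out

print(change_bin_offset([[0.0, 10.0, 20.0], [0.0, 10.0, 20.0]], offset=5.0, index_min=1))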
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
*WIKI*/
#include "MantidAlgorithms/ChangeLogTime.h"
#include "MantidDataObjects/EventWorkspace.h"
#include "MantidKernel/TimeSeriesProperty.h"
......
/*WIKI*
Modifies the pulse time (wall-clock time) of all the events in the specified spectra of an EventWorkspace, by adding the given number of seconds.
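
Conceptually, every event in the chosen spectra gains the same offset on its pulse time, as in this small Python sketch (an illustration only, not Mantid code):

from datetime import datetime, timedelta

def shift_pulse_times(pulse_times, offset_seconds):
    """Add a constant offset (in seconds) to a list of event pulse times."""
    return [t + timedelta(seconds=offset_seconds) for t in pulse_times]

print(shift_pulse_times([datetime(2011, 1, 1, 12, 0, 0)], 60.0))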
*WIKI*/
#include "MantidAlgorithms/ChangePulsetime.h"
#include "MantidKernel/System.h"
#include "MantidKernel/ArrayProperty.h"
......
/*WIKI*
Compares two workspaces for equality. This algorithm is mainly intended for use by Mantid developers as part of the testing process.
The data values (X,Y and error) are always checked. The algorithm can also optionally check the axes (this includes the units), the spectra-detector map, the instrument (the name and parameter map) and any bin masking.
In the case of [[EventWorkspace]]s, they are checked to hold identical event lists. Comparisons between an EventList and a Workspace2D always fail.
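
The numerical part of the comparison amounts to checking every X, Y and error value within a tolerance, roughly as in the Python sketch below (the real algorithm also covers axes, instrument, masking and events):

import numpy as np

def data_match(ws1, ws2, tolerance=1e-6):
    """Compare the X, Y and E arrays of two workspaces given as (X, Y, E) tuples."""
    return all(np.allclose(a, b, atol=tolerance) for a, b in zip(ws1, ws2))

ws_a = (np.array([0.0, 1.0]), np.array([5.0, 6.0]), np.array([1.0, 1.0]))
ws_b = (np.array([0.0, 1.0]), np.array([5.0, 6.0]), np.array([1.0, 1.0]))
print(data_match(ws_a, ws_b))   # True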
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm will chop the input workspace into equally sized workspaces, and adjust the X-values given so that they all begin from the same point. This is useful if your raw files contain multiple frames.
=== Identifying Extended Frames ===
[[File:ChopDataIntegrationExplanation.png|frame|Figure 1: Example Monitor Spectrum with Extended Frames]]
If the parameters ''IntegrationRangeLower'', ''IntegrationRangeUpper'' and ''MonitorWorkspaceIndex'' are provided to the algorithm, then it will attempt to identify where in the workspace the frames have been extended.
For example: looking at Figure 1 which shows an input workspace covering 100000 microseconds, we can see that the first frame covers forty thousand, and the other three cover twenty thousand each.
In order for Mantid to determine this programmatically, it integrates over a range (defined by IntegrationRangeLower and IntegrationRangeUpper) for each "chop" of the data. If the relative values of these integrals fall within certain bounds, then the chop is deemed to be a continuation of the previous one rather than a separate frame. If this happens, the chops will be placed in the same workspace within the result group.
The algorithm will only look at the spectrum given by the ''MonitorWorkspaceIndex'' property to determine this. Though it is expected and recommended that you use a monitor spectrum for this purpose, this is not enforced, so you may use a regular detector if you have cause to do so.
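
The frame-identification step can be pictured with the rough Python sketch below; the actual bounds used to decide whether a chop continues the previous frame are internal to the algorithm and not reproduced here.

import numpy as np

def chop_and_integrate(x, y, chop_width, range_lower, range_upper):
    """Integrate the monitor spectrum between range_lower and range_upper within each chop."""
    integrals = []
    for start in np.arange(x[0], x[-1], chop_width):
        mask = (x >= start + range_lower) & (x < start + range_upper)
        integrals.append(y[mask].sum())
    # Chops whose integrals are comparable would be treated as one extended frame
    return integrals

x = np.arange(0.0, 100000.0, 100.0)        # time-of-flight in microseconds
y = np.random.poisson(10.0, x.size)        # fake monitor counts
print(chop_and_integrate(x, y, chop_width=20000.0, range_lower=5000.0, range_upper=10000.0))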
*WIKI*/
#include "MantidAlgorithms/ChopData.h"
#include "MantidAPI/WorkspaceValidators.h"
#include "MantidKernel/MultiThreaded.h"
......
/*WIKI*
This algorithm performs a deep copy of all of the information in the workspace. It maintains events if the input is an [[EventWorkspace]].
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......