Commit f0e39c9a authored by Gigg, Martyn Anthony's avatar Gigg, Martyn Anthony

Pull out wiki sections from cpp files and convert them to reST

The converted text is stored in separate .rst files in the docs directory.
Refs #9562
parent a173e663
/*WIKI*
A composite function is a function containing other functions. It combines the values calculated by the member functions by adding them. The members are indexed from 0 to the number of functions minus 1; the indices are defined by the order in which the functions were added. Composite functions do not have their own parameters; instead they use the parameters of the member functions. Parameter names are formed from the member function's index and its parameter name: f[index].[name]. For example, the name "f0.Sigma" would be given to the "Sigma" parameter of a Gaussian added first to the composite function. If a member function is itself a composite function, the same principle applies: 'f[index].' is prepended to a name, e.g. "f0.f1.Sigma".
The input string to the Fit algorithm for a CompositeFunction is constructed by joining the inputs of the member functions using the semicolon ';' as a separator. For example, the string for two [[Gaussian]]s with tied sigma parameters may look like the following:
name=Gaussian,PeakCentre=0,Height=1,Sigma=0.1,constraints=(0<Sigma<1);name=Gaussian,PeakCentre=1,Height=1,Sigma=0.1;ties=(f1.Sigma=f0.Sigma)
Note that the ties clause is also separated from the function definitions by a semicolon. This is because the tied parameters belong to different functions. Ties between parameters of the same function can be placed inside the member definition, in which case the local parameter names must be used, for example:
name = FunctionType, P1=0, ties=( P2 = 2*P1 ); name = FunctionType, P1=0, ties=( P2 = 3 )
which is equivalent to
name = FunctionType, P1=0; name = FunctionType, P1=0; ties=( f0.P2 = 2*f0.P1, f1.P2 = 3 )
Boundary constraints usually belong in a local function definition, but they can also be moved to the composite function level, i.e. separated by ';'. In this case the full parameter name must be used, for example:
name=Gaussian,PeakCentre=0,Height=1,Sigma=0.1;name=Gaussian,PeakCentre=1,Height=1,Sigma=0.1;ties=(f1.Sigma=f0.Sigma);constraints=(0<f0.Sigma<1)
Mantid defines a number of fitting functions which extend CompositeFunction. These are functions which also include other functions but use different operations to combine them. Examples are [[ProductFunction]] and [[Convolution]]. Everything said about parameters of the CompositeFunction applies to these functions.
Input strings of an extended composite function must start with "composite=FunctionName;" and be followed by the definitions of its members as described for CompositeFunction. For example,
composite=ProductFunction;name=LinearBackground;name=ExpDecay
To define a composite function inside a composite function enclose the inner one in brackets:
name=LinearBackground;(composite=Convolution;name=Resolution;name=Lorentzian)
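For illustration, such a string can be passed straight to the Function property of the [[Fit]] algorithm. The following Python sketch is hypothetical (the workspace name "ws" and the output prefix are placeholders, not part of this page):
'''Python'''
func = ("name=Gaussian,PeakCentre=0,Height=1,Sigma=0.1;"
        "name=Gaussian,PeakCentre=1,Height=1,Sigma=0.1;"
        "ties=(f1.Sigma=f0.Sigma)")
Fit(Function=func, InputWorkspace="ws", Output="fit_result")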
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm uses a numerical integration method to calculate attenuation factors resulting from absorption and single scattering in a sample with the material properties given. Factors are calculated for each spectrum (i.e. detector position) and wavelength point, as defined by the input workspace.
The sample is first bounded by a cuboid, which is divided up into small cubes. The cubes whose centres lie within the sample make up the set of integration elements (so you have a kind of 'Lego' model of the sample) and path lengths through the sample are calculated for the centre-point of each element, and a numerical integration is carried out using these path lengths over the volume elements.
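The element-based integration can be pictured with the following standalone Python sketch. It is illustrative only (a spherical sample, a crude straight-line exit path and made-up numbers), not the algorithm's code:
'''Python'''
import numpy as np
R, h, mu = 0.5, 0.05, 1.2            # sample radius (cm), element size (cm), attenuation coefficient (1/cm) - all made up
axis = np.arange(-R + h/2, R, h)     # cube centres along one axis of the bounding cuboid
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
inside = x**2 + y**2 + z**2 <= R**2  # keep only cubes whose centres lie within the sample
# crude path length from each element to the surface along +z (stand-in for the real beam and detector paths)
path = np.sqrt(np.maximum(R**2 - x[inside]**2 - y[inside]**2, 0.0)) - z[inside]
attenuation = np.mean(np.exp(-mu * path))  # numerical average of the attenuation over the volume elements
print(attenuation)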
Note that the duration of this algorithm is strongly dependent on the element size chosen, and that too small an element size can cause the algorithm to fail because of insufficient memory.
Note that the number density of the sample is in <math> \mathrm{\AA}^{-3} </math>
== Choosing an absorption correction algorithm ==
This flow chart is given as a way of selecting the most appropriate of the absorption correction algorithms. It also shows the algorithms that must be run first in each case. Note that this does not cover the following absorption correction algorithms: [[MonteCarloAbsorption]] (correction factors for a generic sample using a Monte Carlo instead of a numerical integration method), [[MultipleScatteringCylinderAbsorption]] & [[AnvredCorrection]] (corrections in a spherical sample, using a method imported from ISAW). Also, HRPD users can use the [[HRPDSlabCanAbsorption]] to add rudimentary calculations of the effects of the sample holder.
[[File:AbsorptionFlow.png]]
==== Assumptions ====
This algorithm assumes that the (parallel) beam illuminates the entire sample '''unless''' a 'gauge volume' has been defined using the [[DefineGaugeVolume]] algorithm (or by otherwise adding a valid XML string [[HowToDefineGeometricShape | defining a shape]] to a [[Run]] property called "GaugeVolume"). In this latter case only scattering within this volume (and the sample) is integrated, because this is all the detector can 'see'. The full sample is still used for the neutron paths. ('''N.B.''' If your gauge volume is of axis-aligned cuboid shape and fully enclosed by the sample then you will get a more accurate result from the [[CuboidGaugeVolumeAbsorption]] algorithm.)
==== Restrictions on the input workspace ====
The input workspace must have units of wavelength. The [[instrument]] associated with the workspace must be fully defined because detector, source & sample position are needed.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm performs a simple numerical derivative of the values in a sample log.
The 1st order derivative is simply: dy = (y1-y0) / (t1-t0), which is placed in the log at t=(t0+t1)/2
Higher order derivatives are obtained by performing the equation above N times.
Since this is a simple numerical derivative, you can expect the result to quickly
get noisy at higher derivatives.
If any of the times in the logs are repeated, then those repeated time values will be skipped,
and the output derivative log will have fewer points than the input.
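A minimal Python sketch of the 1st-order rule above, with made-up numbers (this is not the algorithm's implementation):
'''Python'''
import numpy as np
t = np.array([0.0, 1.0, 2.0, 4.0])      # log times in seconds (made up)
y = np.array([10.0, 12.0, 11.0, 15.0])  # log values (made up)
dy = np.diff(y) / np.diff(t)            # (y1-y0)/(t1-t0) for each pair of neighbouring points
t_mid = (t[:-1] + t[1:]) / 2.0          # each derivative value is placed at t=(t0+t1)/2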
*WIKI*/
#include "MantidAlgorithms/AddLogDerivative.h"
#include "MantidKernel/System.h"
#include "MantidKernel/TimeSeriesProperty.h"
......@@ -163,4 +149,3 @@ namespace Algorithms
} // namespace Mantid
} // namespace Algorithms
/*WIKI*
Adds an [[IPeak]] to a [[PeaksWorkspace]].
*WIKI*/
#include "MantidAlgorithms/AddPeak.h"
#include "MantidKernel/System.h"
#include "MantidAPI/IPeaksWorkspace.h"
......@@ -148,4 +142,3 @@ namespace Algorithms
} // namespace Mantid
} // namespace Algorithms
/*WIKI*
Workspaces contain information in logs. Often these detail what happened to the sample during the experiment. This algorithm allows one named log to be entered.
The log can be either a String, a Number, or a Number Series. If you select Number Series, the workspace start time will be used as the time of the log entry, and the number in the text used as the (only) value.
If the LogText contains a numeric value, the created log will be of integer type if an integer is passed and floating point (double) otherwise. This applies to both the Number & Number Series options.
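A hedged Python usage sketch; the workspace name "ws" is a placeholder and the property names are assumed from common Mantid usage rather than taken from this page:
'''Python'''
# A single-valued numeric log; choosing LogType="Number Series" would instead create a
# one-point time series stamped with the workspace start time.
AddSampleLog(Workspace="ws", LogName="temperature", LogText="21.5", LogType="Number")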
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Creates/updates a time-series log entry on a chosen workspace. The given timestamp & value are appended to the
named log entry. If the named entry does not exist then a new log is created. A time stamp must be given in
ISO8601 format, e.g. 2010-09-14T04:20:12.
By default, the given value is interpreted as a double and a double series is either created or expected. However,
if the "Type" is set to "int" then the value is interpreted as an integer and an integer is either created
or expected.
*WIKI*/
/*WIKI_USAGE*
'''Python'''
import datetime as dt
# Add an entry for the current time
log_name = "temperature"
log_value = 21.5
AddTimeSeriesLog(inOutWS, Name=log_name, Time=dt.datetime.utcnow().isoformat(), Value=log_value)
*WIKI_USAGE*/
#include "MantidAlgorithms/AddTimeSeriesLog.h"
#include "MantidKernel/DateTimeValidator.h"
#include "MantidKernel/MandatoryValidator.h"
......
/*WIKI*
The offsets are a correction to the dSpacing values and are applied during the conversion from time-of-flight to dSpacing as follows:
:<math> d = \frac{h}{2m_N} \cdot \frac{t.o.f.}{L_{tot} \sin\theta} \cdot (1 + \mathrm{offset})</math>
The detector offsets can be obtained from either: an [[OffsetsWorkspace]] where each pixel has one value, the offset; or a .cal file (in the form created by the ARIEL software).
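As a plain-Python illustration of the conversion (hypothetical flight path, angle and offset; not Mantid's implementation):
'''Python'''
import numpy as np
h = 6.62607015e-34        # Planck constant, J s
m_n = 1.67492749804e-27   # neutron mass, kg
L_tot = 45.0              # total flight path in metres (hypothetical)
theta = np.radians(45.0)  # Bragg angle, i.e. half the scattering angle (hypothetical)
offset = 0.002            # hypothetical offset for one detector
tof = np.array([5000.0, 10000.0]) * 1e-6                              # time of flight in seconds
d = h / (2.0 * m_n) * tof / (L_tot * np.sin(theta)) * (1.0 + offset)  # d-spacing in metres
d_in_angstroms = d * 1e10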
'''Note:''' the workspace that this algorithm outputs is a [[Ragged Workspace]].
==== Restrictions on the input workspace ====
The input workspace must contain histogram or event data where the X unit is time-of-flight and the Y data is raw counts. The [[instrument]] associated with the workspace must be fully defined because detector, source & sample position are needed.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Returns the relative efficiency of the forward detector group compared to the backward detector group. If Alpha is larger than 1, more counts have been collected in the forward group.
This algorithm leaves the input workspace unchanged. To group detectors in a workspace use [[GroupDetectors]].
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......@@ -156,5 +147,3 @@ void AlphaCalc::exec()
} // namespace Algorithm
} // namespace Mantid
/*WIKI*
This algorithm appends the spectra of two workspaces together.
The output workspace from this algorithm will be a copy of the first
input workspace, to which the data from the second input workspace
will be appended.
Workspace data members other than the data (e.g. the instrument) will be copied
from the first input workspace (but if they're not identical anyway,
then you probably shouldn't be using this algorithm!).
==== Restrictions on the input workspace ====
For [[EventWorkspace]]s, there are no restrictions on the input workspaces if ValidateInputs=false.
For [[Workspace2D]]s, the number of bins must be the same in both inputs.
If ValidateInputs is selected, then the input workspaces must also:
* Come from the same instrument
* Have common units
* Have common bin boundaries
==== Spectrum Numbers ====
If there is an overlap in the spectrum numbers of both inputs, then the output
workspace will have its spectrum numbers reset starting at 0 and increasing by
1 for each spectrum.
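A hedged Python usage sketch (the workspace names are placeholders and the property names are assumed from common Mantid usage):
'''Python'''
# Append the spectra of ws2 onto a copy of ws1, checking instrument, units and binning first.
appended = AppendSpectra(InputWorkspace1="ws1", InputWorkspace2="ws2", ValidateInputs=True)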
==== See Also ====
* [[ConjoinWorkspaces]] for joining parts of the same workspace.
*WIKI*/
#include "MantidAlgorithms/AppendSpectra.h"
#include "MantidKernel/System.h"
#include "MantidAPI/WorkspaceValidators.h"
......
/*WIKI*
Updates detector positions from an input table workspace. The positions are applied as absolute positions, so the update can safely be repeated.
The PositionTable must have columns ''Detector ID'' and ''Detector Position''. The entries of the ''Detector ID'' column are integers referring to the detector ID, and the entries of the ''Detector Position'' column are [[V3D]]s giving the position of the detector whose ID is in the same row.
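A hedged Python sketch of building such a table; the column types, the V3D import and the PositionTable property name are assumptions based on typical Mantid table-workspace usage, not taken from this page:
'''Python'''
from mantid.kernel import V3D
table = CreateEmptyTableWorkspace()
table.addColumn("int", "Detector ID")
table.addColumn("V3D", "Detector Position")
table.addRow([1001, V3D(0.0, 0.0, 5.0)])   # detector 1001 moved to an absolute position
ApplyCalibration(Workspace="ws", PositionTable=table)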
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Applies a dead-time correction to each spectrum of a workspace. Define:
<math>{\displaystyle{N}}</math> = true count
<math>{\displaystyle{M}}</math> = measured count
<math>{\displaystyle{t_{dead}}}</math> = dead-time
<math>{\displaystyle{t_{bin}}}</math> = time bin width
<math>{\displaystyle{F}}</math> = Number of good frames
Then this algorithm assumes that the InputWorkspace contains measured counts as a
function of TOF and returns a workspace containing true counts as a function of the
same TOF binning according to
:<math> N = \frac{M}{1 - M \frac{t_{dead}}{t_{bin} F}} </math>
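A one-line numeric illustration of the correction with made-up numbers (plain Python, not the algorithm's code):
'''Python'''
M, t_dead, t_bin, F = 1000.0, 1.0e-8, 1.6e-8, 50000.0   # measured count, dead-time (s), bin width (s), good frames
N = M / (1.0 - M * t_dead / (t_bin * F))                # true count, following the formula above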
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
The fluctuation dissipation theorem [1,2] relates the dynamic susceptibility to the scattering function
<math>\left(1-e^{-\frac{E}{k_B T}}\right) S(\mathbf{q}, E) = \frac{1}{\pi} \chi'' (\mathbf{q}, E) </math>
where <math>E</math> is the energy transfer to the system. The algorithm assumes that the y axis of the
input workspace contains the scattering function <math>S</math>. The y axis of the output workspace will
contain the dynamic susceptibility. The temperature is taken as the mean value of a log attached to the
workspace; alternatively, the temperature can be specified directly. The algorithm will fail if neither option is
valid.
[1] S. W. Lovesey - Theory of Neutron Scattering from Condensed Matter, vol 1
[2] I. A. Zaliznyak and S. H. Lee - Magnetic Neutron Scattering in "Modern techniques for characterizing magnetic materials"
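A plain-Python sketch of the point-by-point conversion with made-up values (k_B in meV/K; not the algorithm's code):
'''Python'''
import numpy as np
k_B = 0.08617                      # Boltzmann constant in meV/K
T = 300.0                          # temperature in K (in the algorithm, from a log or given directly)
E = np.array([-2.0, 0.5, 5.0])     # energy transfer in meV (made up)
S = np.array([1.2, 3.4, 0.7])      # scattering function values (made up)
chi = np.pi * (1.0 - np.exp(-E / (k_B * T))) * S   # dynamic susceptibility chi''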
*WIKI*/
#include "MantidAlgorithms/ApplyDetailedBalance.h"
#include "MantidKernel/System.h"
#include "MantidKernel/TimeSeriesProperty.h"
......@@ -114,4 +98,3 @@ namespace Algorithms
} // namespace Mantid
} // namespace Algorithms
/*WIKI*
The transmission can be given as a MatrixWorkspace or given directly as numbers. One or the other method must be used.
See [http://www.mantidproject.org/Reduction_for_HFIR_SANS SANS Reduction] documentation for details.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......@@ -145,4 +136,3 @@ void ApplyTransmissionCorrection::exec()
} // namespace Algorithms
} // namespace Mantid
/*WIKI*
This algorithm is used to calculate the asymmetry for a muon workspace. The asymmetry is given by:
:<math> Asymmetry = \frac{F-\alpha B}{F+\alpha B} </math>
where F is the front (forward) spectra, B is the back (backward) spectra and α is alpha, the balance parameter.
The errors in F-αB and F+αB are calculated by adding the errors in F and B in quadrature; any error in alpha is ignored. The errors for the asymmetry are then calculated using the fractional error method with the values for the errors in F-αB and F+αB.
The output workspace contains one set of data for the time of flight, the asymmetry and the asymmetry errors.
Note: this algorithm does not perform any grouping; the grouping must be done via the [[GroupDetectors]] algorithm, or by setting auto_group to true when the NeXus file is loaded.
*WIKI*/
/*WIKI_USAGE*
'''Python'''
OutWS = AsymmetryCalc("EmuData","1.0","0,1,2,3,4","16,17,18,19,20")
'''C++'''
IAlgorithm* alg = FrameworkManager::Instance().createAlgorithm("AsymmetryCalc");
alg->setPropertyValue("InputWorkspace", "EmuData");
alg->setPropertyValue("OutputWorkspace", "OutWS");
alg->setPropertyValue("Alpha", "1.0");
alg->setPropertyValue("ForwardSpectra", "0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15");
alg->setPropertyValue("BackwardSpectra", "16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31");
alg->execute();
Workspace* ws = FrameworkManager::Instance().getWorkspace("OutWS");
*WIKI_USAGE*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......@@ -170,5 +140,3 @@ void AsymmetryCalc::exec()
} // namespace Algorithm
} // namespace Mantid
/*WIKI*
The algorithm will calculate a proton_charge weighted average and standard deviation of any log value of numeric series type.
All proton charges earlier than the first log value are ignored. Each proton pulse is counted towards the log value on its left, i.e. the most recent value recorded before the pulse. This
means that if all proton pulses happen before the first value, and FixZero is false, the average and standard deviation are NaN.
If all the proton pulses occur after the last value, and FixZero is false, the average is equal to the last value, and the
standard deviation is zero.
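A plain-Python sketch of a proton-charge weighted mean and standard deviation with made-up numbers (it glosses over the pulse-to-value assignment described above):
'''Python'''
import numpy as np
values = np.array([1.0, 2.0, 4.0])   # log values (made up)
charge = np.array([0.5, 1.0, 2.0])   # proton charge counted against each value (made up)
mean = np.sum(charge * values) / np.sum(charge)
stddev = np.sqrt(np.sum(charge * (values - mean) ** 2) / np.sum(charge))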
*WIKI*/
#include "MantidAlgorithms/AverageLogData.h"
#include "MantidKernel/TimeSeriesProperty.h"
using namespace Mantid::Kernel;
......
/*WIKI*
A binary operation will be conducted on two SpecialWorkspace2D objects (i.e., mask workspaces). The supported binary operations are AND, OR and XOR (exclusive or). The operation is applied between the corresponding spectra of the two input workspaces, i.e.,
:<math> spec_i^{output} = spec_i^{in 1} \times spec_i^{in 2} </math> (AND)
:<math> spec_i^{output} = spec_i^{in 1} + spec_i^{in 2} </math> (OR)
:<math> spec_i^{output} = spec_i^{in 1} \oplus spec_i^{in 2} </math> (XOR)
==Output==
A SpecialWorkspace2D with the same dimensions and geometry as the two input SpecialWorkspace2D workspaces.
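The spectrum-wise operations can be illustrated with plain numpy boolean arrays standing in for the mask values of the two inputs (a sketch only, not the algorithm's code):
'''Python'''
import numpy as np
m1 = np.array([0, 1, 1, 0], dtype=bool)   # mask flags of input workspace 1, one entry per spectrum
m2 = np.array([0, 0, 1, 1], dtype=bool)   # mask flags of input workspace 2
out_and = m1 & m2   # AND
out_or = m1 | m2    # OR
out_xor = m1 ^ m2   # XOR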
*WIKI*/
#include "MantidAlgorithms/BinaryOperateMasks.h"
#include "MantidKernel/System.h"
#include "MantidDataObjects/MaskWorkspace.h"
......@@ -114,4 +103,3 @@ namespace Algorithms
} // namespace Mantid
} // namespace Algorithms
/*WIKI*
Calculates the muon dead time for each spectrum in a workspace.
Define:
<math>{\displaystyle{N}}</math> = true count
<math>{\displaystyle{N_0}}</math> = true count at time zero
<math>{\displaystyle{M}}</math> = measured count
<math>{\displaystyle{t_{dead}}}</math> = dead-time
<math>{\displaystyle{t_{bin}}}</math> = time bin width
<math>{\displaystyle{t_{\mu}}}</math> = Muon decay constant
<math>{\displaystyle{F}}</math> = Number of good frames
The formula used to calculate the dead time for each spectrum is:
:<math>M\exp\left(\frac{t}{t_{\mu}}\right) = N_0 - M N_0 \frac{t_{dead}}{t_{bin} F} </math>
where <math>\displaystyle{M\exp ( t/t_{\mu})}</math> as a function of <math>{\displaystyle{M}}</math> is a straight line with an intercept of <math>{\displaystyle{N_0}}</math> and a slope of <math>{\displaystyle{-N_0\frac{t_{dead}}{t_{bin} F}}}</math>.
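A plain-Python sketch of the straight-line step with made-up data (not the algorithm's implementation):
'''Python'''
import numpy as np
t_mu, t_bin, F = 2.197, 0.016, 50000.0   # muon lifetime (microseconds), bin width (microseconds), good frames - made up apart from the lifetime
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                # times in microseconds (made up)
M = np.array([9000.0, 7200.0, 4600.0, 1900.0, 320.0])  # measured counts (made up)
y = M * np.exp(t / t_mu)                 # left-hand side of the equation above
slope, intercept = np.polyfit(M, y, 1)   # fit a straight line in M
N0 = intercept
t_dead = -slope * t_bin * F / N0         # since the slope is -N0*t_dead/(t_bin*F)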
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......@@ -258,5 +238,3 @@ void CalMuonDeadTime::exec()
} // namespace Algorithm
} // namespace Mantid
/*WIKI*
See [http://www.mantidproject.org/Reduction_for_HFIR_SANS SANS Reduction] documentation for calculation details.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
This algorithm takes a list of spectra and, for each spectrum, calculates an average count rate in the given region, usually a region where there are only background neutrons. This count rate is then subtracted from the counts in all of the spectrum's bins. However, no bin will take a negative value: bins with count rates less than the background are set to zero (and their error is set to the background value).
The average background count rate is estimated in one of two ways. When Mode is set to 'Mean' it is the sum of the values in the bins in the background region divided by the width of the X range. Selecting 'Linear Fit' sets the background value to the height, at the centre of the background region, of a line of best fit through that region.
The error on the background value is only calculated when 'Mean' is used. It is the errors in all the bins in the background region summed in quadrature divided by the number of bins. This background error value is added in quadrature to the errors in each bin.
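A numpy sketch of one reading of the 'Mean' mode with made-up counts (flat binning assumed; not the algorithm's code):
'''Python'''
import numpy as np
counts = np.array([3.0, 2.0, 50.0, 60.0, 4.0, 1.0])   # one spectrum (made up)
bin_width = 0.5                                       # every bin 0.5 units wide (made up)
bkg = counts[-2:]                                     # bins taken as the background region
rate = np.sum(bkg) / (bin_width * len(bkg))           # mean background count rate
corrected = np.clip(counts - rate * bin_width, 0.0, None)   # subtract, never letting a bin go negative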
====ChildAlgorithms used====
The [[Linear]] algorithm is used when Mode = 'Linear Fit'. From the resulting line of best fit, a constant value is taken as the value of the line at the centre of the fitted range.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......
/*WIKI*
Calculates the probability of a neutron being transmitted through the sample using detected counts from two monitors, one in front and one behind the sample. A data workspace can be corrected for transmission by [[Divide|dividing]] by the output of this algorithm.
Because the detection efficiency of the monitors can be different, the transmission calculation is done using two runs: one with the sample (represented by <math>S</math> below) and a direct run without it (<math>D</math>). The fraction transmitted through the sample, <math>f</math>, is calculated from this formula:
<br>
<br>
<math> f = \frac{S_T}{D_T}\frac{D_I}{S_I} </math>
<br>
<br>
where <math>S_I</math> is the number of counts from the monitor in front of the sample (the incident beam monitor), <math>S_T</math> is the transmission monitor after the sample, etc.
The resulting fraction as a function of wavelength is created as the OutputUnfittedData workspace. However, because of statistical variations it is recommended to use the OutputWorkspace, which is the evaluation of a fit to those transmission fractions. The unfitted data are not affected by the RebinParams or FitMethod properties, but these can be used to refine the fitted data. The RebinParams property is useful when the range of wavelengths passed to CalculateTransmission is different from that of the data to be corrected.
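A one-line numeric illustration with made-up monitor counts (plain Python, per wavelength bin):
'''Python'''
S_I, S_T = 1.0e6, 2.4e5   # sample run: incident and transmission monitor counts (made up)
D_I, D_T = 1.1e6, 6.1e5   # direct run: incident and transmission monitor counts (made up)
f = (S_T / D_T) * (D_I / S_I)   # fraction transmitted through the sample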
=== ChildAlgorithms used ===
Uses the [[Linear]] algorithm to fit the calculated transmission fraction.
*WIKI*/
//----------------------------------------------------------------------
// Includes
//----------------------------------------------------------------------
......