Commit 747411e0 authored by Savici, Andrei T., committed by GitHub

Merge pull request #20817 from mantidproject/20185_Python3DocTestPoldiToPredict

Python3 doctest compatibility: PoldiCreatePeaksFromFile to PredictFractionalPeaks
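The conversion follows one pattern throughout the diff: Python 2 print statements with comma-separated arguments become print() calls that build the message with str.format, which parses and prints identically under Python 2 and 3. A minimal sketch of the pattern (the value is illustrative, not taken from the doctests):

```python
# Python 2-only form, rejected by the Python 3 parser:
#     print 'Number of loaded compounds:', compound_count

# Cross-version form used throughout this change: build the full
# string with str.format, then pass it as a single argument.
compound_count = 3  # illustrative value
print('Number of loaded compounds: {}'.format(compound_count))
```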
parents 4de3bba2 1342fd6b
Showing with 37 additions and 37 deletions
@@ -71,11 +71,11 @@ The following usage example takes up the file showed above and passes it to the
     compounds = PoldiCreatePeaksFromFile('PoldiCrystalFileExample.dat', LatticeSpacingMin=0.7)
     compound_count = compounds.getNumberOfEntries()
-    print 'Number of loaded compounds:', compound_count
+    print('Number of loaded compounds: {}'.format(compound_count))
     for i in range(compound_count):
         ws = compounds.getItem(i)
-        print 'Compound ' + str(i + 1) +':', ws.getName(), 'has', ws.rowCount(), 'reflections in the resolution range.'
+        print('Compound {}: {} has {} reflections in the resolution range.'.format(str(i + 1), ws.getName(), ws.rowCount()))

 The script produces a WorkspaceGroup which contains a table with reflections for each compound in the file:
...
@@ -61,7 +61,7 @@ The algorithm requires relatively little input and can be run like this:
     cell_a = np.round(cell.cell(0, 1), 5)
     cell_a_error = np.round(cell.cell(0, 2), 5)
-    print "Refined lattice parameter a =", cell_a, "+/-", cell_a_error
+    print("Refined lattice parameter a = {:.5f} +/- {}".format(cell_a, cell_a_error))

 This will print the following output:
...
@@ -47,7 +47,7 @@ The following small usage example performs a peak fit on the sample data already
         FitPlotsWorkspace = "fit_plots_6904",
         Version=1)
-    print "There are", mtd['fit_plots_6904'].getNumberOfEntries(), "plots available for inspection."
+    print("There are {} plots available for inspection.".format(mtd['fit_plots_6904'].getNumberOfEntries()))

 Output:
...
@@ -152,7 +152,7 @@ The following example shows an example for refinement of lattice parameters usin
     cell_a = np.round(lattice_parameters.cell(0, 1), 5)
     cell_a_error = np.round(lattice_parameters.cell(0, 2), 5)
-    print "Refined lattice parameter a =", cell_a, "+/-", cell_a_error
+    print("Refined lattice parameter a = {:.5f} +/- {}".format(cell_a, cell_a_error))

 The refined lattice parameter is printed at the end:
...
@@ -57,8 +57,8 @@ The following example extracts peaks from the correlation spectrum of a Silicon
         ScatteringContributions="1.0",
         OutputWorkspace="Indexed")
-    print "Indexed_Si contains", mtd['peaks_refined_6904_indexed_Si'].rowCount(), "indexed peaks."
-    print "Number of unindexed peaks:", mtd['peaks_refined_6904_unindexed'].rowCount()
+    print("Indexed_Si contains {} indexed peaks.".format(mtd['peaks_refined_6904_indexed_Si'].rowCount()))
+    print("Number of unindexed peaks: {}".format(mtd['peaks_refined_6904_unindexed'].rowCount()))

 Output:
...
@@ -31,8 +31,8 @@ To load only one POLDI data file (in this case a run from a calibration measurem
     # calibration is a WorkspaceGroup, so we can use getNames() to query what's inside.
     workspaceNames = calibration.getNames()
-    print "Number of data files loaded:", len(workspaceNames)
-    print "Name of data workspace:", workspaceNames[0]
+    print("Number of data files loaded: {}".format(len(workspaceNames)))
+    print("Name of data workspace: {}".format(workspaceNames[0]))

 Since only one run number was supplied, only one workspace is loaded. The name corresponds to the scheme described above:
@@ -50,8 +50,8 @@ Actually, the silicon calibration measurement consists of more than one run, so
     workspaceNames = calibration.getNames()
-    print "Number of data files loaded:", len(workspaceNames)
-    print "Names of data workspaces:", workspaceNames
+    print("Number of data files loaded: {}".format(len(workspaceNames)))
+    print("Names of data workspaces: {}".format(workspaceNames))

 Now all files from the specified range are in the `calibration` WorkspaceGroup:
@@ -69,8 +69,8 @@ But in fact, these data files should not be processed separately, they belong to
     workspaceNames = calibration.getNames()
-    print "Number of data files loaded:", len(workspaceNames)
-    print "Names of data workspaces:", workspaceNames
+    print("Number of data files loaded: {}".format(len(workspaceNames)))
+    print("Names of data workspaces: {}".format(workspaceNames))

 The merged files will receive the name of the last file in the merged range:
@@ -93,8 +93,8 @@ A situation that occurs often is that one sample consists of multiple ranges of
     workspaceNames = calibration.getNames()
-    print "Number of data files loaded:", len(workspaceNames)
-    print "Names of data workspaces:", workspaceNames
+    print("Number of data files loaded: {}".format(len(workspaceNames)))
+    print("Names of data workspaces: {}".format(workspaceNames))

 The result is the same as in the example above, two files are in the WorkspaceGroup:
@@ -115,8 +115,8 @@ On the other hand it is also possible to overwrite an existing WorkspaceGroup, f
     workspaceNames = calibration.getNames()
-    print "Number of data files loaded:", len(workspaceNames)
-    print "Names of data workspaces:", workspaceNames
+    print("Number of data files loaded: {}".format(len(workspaceNames)))
+    print("Names of data workspaces: {}".format(workspaceNames))

 The data loaded in the first call to the algorithm have been overwritten with the merged data set:
...
@@ -46,7 +46,7 @@ This small usage example merges two compatible POLDI-files which have been loade
     # The result has one spectrum with one bin, which contains the total counts.
     counts_6903 = int(total_6903.dataY(0)[0])
-    print "6903 contains a total of", counts_6903, "counts."
+    print("6903 contains a total of {} counts.".format(counts_6903))

     # The same with the second data file
     raw_6904 = LoadSINQFile(Filename = "poldi2013n006904.hdf", Instrument = "POLDI")
@@ -56,7 +56,7 @@ This small usage example merges two compatible POLDI-files which have been loade
     total_6904 = SumSpectra(spectra_6904)
     counts_6904 = int(total_6904.dataY(0)[0])
-    print "6904 contains a total of", counts_6904, "counts."
+    print("6904 contains a total of {} counts.".format(counts_6904))

     # Now PoldiMerge is used to merge the two raw spectra by supplying a list of workspace names.
     raw_summed = PoldiMerge("raw_6903,raw_6904")
@@ -66,8 +66,8 @@ This small usage example merges two compatible POLDI-files which have been loade
     spectra_summed = Integration(histo_summed)
     total_summed = SumSpectra(spectra_summed)
-    print "6903+6904 contains a total of", int(total_summed.dataY(0)[0]), "counts."
-    print "Summing the counts of the single data files leads to", counts_6903 + counts_6904, "counts."
+    print("6903+6904 contains a total of {} counts.".format(int(total_summed.dataY(0)[0])))
+    print("Summing the counts of the single data files leads to {} counts.".format(int(counts_6903 + counts_6904)))

 Output:
...
@@ -70,7 +70,7 @@ A typical peak search procedure would be performed on correlation data, so this
     peaks_6904 = PoldiPeakSearch(correlated_6904)
     # The tableworkspace should contain 14 peaks.
-    print "The correlation spectrum of sample 6904 contains", peaks_6904.rowCount(), "peaks."
+    print("The correlation spectrum of sample 6904 contains {} peaks.".format(peaks_6904.rowCount()))

 Output:
...
@@ -37,8 +37,8 @@ Usage
     summary_6904 = PoldiPeakSummary(mtd["peaks_refined_6904"])
-    print "Number of refined peaks:", summary_6904.rowCount()
-    print "Number of columns that describe a peak:", summary_6904.columnCount()
+    print("Number of refined peaks: {}".format(summary_6904.rowCount()))
+    print("Number of columns that describe a peak: {}".format(summary_6904.columnCount()))

 Output:
...
@@ -33,12 +33,12 @@ In the first example, POLDI data is cropped to the correct workspace size:
     raw_6903 = LoadSINQFile(Filename = "poldi2013n006903.hdf", Instrument = "POLDI")
     LoadInstrument(raw_6903, InstrumentName = "POLDI", RewriteSpectraMap=True)
-    print "The raw data workspace contains", len(raw_6903.readX(0)), "time bins."
+    print("The raw data workspace contains {} time bins.".format(len(raw_6903.readX(0))))
     # Truncate the data
     truncated_6903 = PoldiTruncateData(raw_6903)
-    print "The truncated data workspace contains", len(truncated_6903.readX(0)), "time bins."
+    print("The truncated data workspace contains {} time bins.".format(len(truncated_6903.readX(0))))

 Output:
@@ -62,8 +62,8 @@ The second example also examines the extra time bins:
     extra_6903 = mtd['extra_6903']
     # Examine the workspace a bit
-    print "The extra data workspace contains", extra_6903.getNumberHistograms(), "spectrum."
-    print "The bins contain the following data:", [int(x) for x in extra_6903.readY(0)]
+    print("The extra data workspace contains {} spectrum.".format(extra_6903.getNumberHistograms()))
+    print("The bins contain the following data: {}".format([int(x) for x in extra_6903.readY(0)]))

 Output:
...
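The extra-bins hunk above prints a list built with a comprehension; converting each element to int keeps the printed list identical regardless of how the workspace stores the numbers. A small sketch of that formatting step (the bin values are illustrative, not from the POLDI file):

```python
# Illustrative bin contents as floats, standing in for extra_6903.readY(0)
bin_values = [431.0, 426.0, 0.0]

# int() per element yields a plain list of ints, which str.format
# renders without trailing ".0" noise: [431, 426, 0]
print("The bins contain the following data: {}".format([int(x) for x in bin_values]))
```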
@@ -37,8 +37,8 @@ Usage
     coefficients = [1., 3., 5.] # 1 + 3x + 5x^2
     data_ws = PolynomialCorrection(data_ws, coefficients, Operation="Divide")
-    print "First 5 y values:", data_ws.readY(0)[0:5]
-    print "First 5 error values:", data_ws.readE(0)[0:5]
+    print("First 5 y values: {}".format(data_ws.readY(0)[0:5]))
+    print("First 5 error values: {}".format(data_ws.readE(0)[0:5]))

 .. testoutput::
@@ -58,8 +58,8 @@ Usage
     coefficients = [2., 4.] # 2 + 4x
     data_ws = PolynomialCorrection(data_ws, coefficients, Operation="Multiply")
-    print "First 5 y values:", data_ws.readY(0)[0:5]
-    print "First 5 error values:", data_ws.readE(0)[0:5]
+    print("First 5 y values: {}".format(data_ws.readY(0)[0:5]))
+    print("First 5 error values: {}".format(data_ws.readE(0)[0:5]))

 .. testoutput::
...
@@ -37,8 +37,8 @@ Usage
     data_ws = CreateWorkspace(dataX, dataY, NSpec=2)
     result_ws = Power(data_ws, 2)
-    print "Squared values of first spectrum:", result_ws.readY(0)
-    print "Squared values of second spectrum:", result_ws.readY(1)
+    print("Squared values of first spectrum: {}".format(result_ws.readY(0)))
+    print("Squared values of second spectrum: {}".format(result_ws.readY(1)))

 Output:
...
@@ -28,9 +28,9 @@ Usage
     #Now we are ready to run the correction
     wsCorrected = PowerLawCorrection(ws,C0=3,C1=2)
-    print ("The correction counts and errors are multiplied by function 3*x^2")
+    print("The correction counts and errors are multiplied by function 3*x^2")
     for i in range(0,wsCorrected.blocksize(),10):
-        print ("The correct value in bin %i is %.2f compared to %.2f" % (i,wsCorrected.readY(0)[i],ws.readY(0)[i]))
+        print ("The correct value in bin {} is {:.2f} compared to {:.2f}".format(i, wsCorrected.readY(0)[i], ws.readY(0)[i]))

 Output:
...
@@ -34,7 +34,7 @@ Usage
     IndexPeaks(peaks)
     fractional_peaks = PredictFractionalPeaks(peaks, HOffset=[-0.5,0,0.5],KOffset=0,LOffset=0.2)
-    print "Number of fractional peaks:",fractional_peaks.getNumberPeaks()
+    print("Number of fractional peaks: {}".format(fractional_peaks.getNumberPeaks()))

 .. testoutput:: TopazExample
...
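Two hunks in this diff print a refined lattice parameter with the `{:.5f}` spec after the value has already been rounded with np.round, so the format spec mainly fixes the printed width of the value while the error keeps its default representation. A sketch of that two-step rounding and formatting, using the builtin round in place of np.round (the numbers are illustrative, not real refinement results):

```python
# Illustrative refined value and error; the docs use np.round, but the
# builtin round behaves the same for this purpose.
cell_a = round(5.4311946, 5)      # 5.43119
cell_a_error = round(0.00012, 5)  # 0.00012

# {:.5f} renders the value with exactly five decimals; {} prints the
# error with its default repr.
print("Refined lattice parameter a = {:.5f} +/- {}".format(cell_a, cell_a_error))
```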