pycroscopy issues (https://code.ornl.gov/rvv/pycroscopy/-/issues)

Issue #242: KMeans Clustering not writing to file (2020-12-31)
https://code.ornl.gov/rvv/pycroscopy/-/issues/242
Author: Vasudevan, Rama K. / Created by: sulaymandesai

Hi,

Hope you're well.

I am following this published notebook: https://nbviewer.jupyter.org/github/pycroscopy/papers/blob/master/Notebooks/EM/STEM/Image_Cleaning_Atom_Finding.ipynb
When I try to run the KMeans clustering I have the following error:
```
num_clusters = 4
# num_clusters = 32
estimator = px.processing.Cluster(h5_U, KMeans(n_clusters=num_clusters), num_comps=num_comps)
if estimator.duplicate_h5_groups == []:
    t0 = time()
    h5_kmeans = estimator.compute()
    print('kMeans took {} seconds.'.format(round(time() - t0, 2)))
else:
    h5_kmeans = estimator.duplicate_h5_groups[-1]
    print('Using existing results.')

print('Clustering results in {}.'.format(h5_kmeans.name))

half_wind = int(win_size * 0.5)
# generate a cropped image that was effectively the area used for pattern searching
# Need to get the math right on the counting
cropped_clean_image = clean_image_mat[half_wind:-half_wind + 1, half_wind:-half_wind + 1]

# Plot cluster results: get the Labels dataset
labels_mat = np.reshape(h5_kmeans['Labels'][()], [num_rows, num_cols])

fig, axes = plt.subplots(ncols=2, figsize=(14, 7))
axes[0].imshow(cropped_clean_image, cmap=spiepy.NANOMAP, origin='lower')
axes[0].set_title('Cleaned Image', fontsize=16)
axes[1].imshow(labels_mat, aspect=1, interpolation='none', cmap=spiepy.NANOMAP, origin='lower')
axes[1].set_title('K-means cluster labels', fontsize=16)

for axis in axes:
    axis.get_yaxis().set_visible(False)
    axis.get_xaxis().set_visible(False)

usid.jupyter_utils.save_fig_filebox_button(fig, 'Clustered_Clean_Image.png')
```
```
Consider calling test() to check results before calling compute() which computes on the entire dataset and writes results to the HDF5 file
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_000" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_001" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_002" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Performing clustering on /Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U.
Took 5.76 sec to compute KMeans
Calculated the Mean Response of each cluster.
Took 340.1 msec to calculate mean response per cluster
Writing clustering results to file.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-35-6b9a66d30096> in <module>
7 if estimator.duplicate_h5_groups==[]:
8 t0 = time()
----> 9 h5_kmeans = estimator.compute()
10 print('kMeans took {} seconds.'.format(round(time()-t0, 2)))
11 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in compute(self, rearrange_clusters, override)
226
227 if self.h5_results_grp is None:
--> 228 h5_group = self._write_results_chunk()
229 self.delete_results()
230 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in _write_results_chunk(self)
282 h5_cluster_group = create_results_group(self.h5_main, self.process_name,
283 h5_parent_group=self._h5_target_group)
--> 284 self._write_source_dset_provenance()
285
286 write_simple_attrs(h5_cluster_group, self.parms_dict)
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pyUSID/processing/process.py in _write_source_dset_provenance(self)
793
794 @staticmethod
--> 795 def _map_function(*args, **kwargs):
796 """
797 The function that manipulates the data on a single instance (position). This will be used by
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/sidpy/hdf/hdf_utils.py in write_simple_attrs(h5_obj, attrs, verbose)
371 '{}'.format(type(attrs)))
372 if not isinstance(h5_obj, (h5py.File, h5py.Group, h5py.Dataset)):
--> 373 raise TypeError('h5_obj should be a h5py File, Group or Dataset object'
374 ' but is instead of type '
375 '{}t'.format(type(h5_obj)))
TypeError: h5_obj should be a h5py File, Group or Dataset object but is instead of type <class 'NoneType'>t
```
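The final TypeError suggests that write_simple_attrs was handed None instead of an HDF5 object, i.e. the group for the source-dataset provenance was apparently never created before attributes were written to it. As a minimal sketch (plain h5py, not pycroscopy's actual helper; the function name here is hypothetical), this is the kind of early check that surfaces the problem at the call site:

```python
import h5py

def write_simple_attrs_checked(h5_obj, attrs):
    # Fail early with a clear message instead of deep inside the write call
    if not isinstance(h5_obj, (h5py.File, h5py.Group, h5py.Dataset)):
        raise TypeError('h5_obj should be a h5py File, Group or Dataset object '
                        'but is instead of type {}'.format(type(h5_obj)))
    for key, val in attrs.items():
        h5_obj.attrs[key] = val

# In-memory HDF5 file so this sketch leaves nothing on disk
with h5py.File('demo.h5', 'w', driver='core', backing_store=False) as h5_f:
    h5_grp = h5_f.create_group('U-Cluster_000')
    write_simple_attrs_checked(h5_grp, {'n_clusters': 4})
    stored = dict(h5_grp.attrs)
```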
Any help would be appreciated!

Issue #237: Translator issue with pyUSID update (2020-08-18)
https://code.ornl.gov/rvv/pycroscopy/-/issues/237
Created by: nccreang

When running the FakeBEPSGenerator translator, a KeyError arises: "Can't open attribute (can't locate attribute: 'DC_Offset')".

Issue #238: Generic utility list (2020-08-21)
https://code.ornl.gov/rvv/pycroscopy/-/issues/238
Created by: Mukherjee, Debangshu (mukherjeed@ornl.gov)

@ssomnath @ramav87 I am starting to make a list of domain-agnostic tools that should find a home in pycroscopy:
* Functional fits for spectra
* 2D Gaussian fitting
* Hybrid cross-correlation
* Scan drift correction

Issue #234: Labview H5 Patcher (2020-08-07)
https://code.ornl.gov/rvv/pycroscopy/-/issues/234
Created by: ramav87

The Labview H5 patcher currently looks through spec_dim_labels to gauge how many spectroscopic dimensions there are. This is better determined by looking at the size of the spectroscopic_values dataset instead. Some of the acquisition software mistakenly adds labels to non-existent spectroscopic dimensions, causing translation bugs.

Issue #204: Importing nanoscope 9.4 files fails (2019-05-16)
https://code.ornl.gov/rvv/pycroscopy/-/issues/204
Created by: flounderscore
I cannot import a file created with NanoScope 9.4. The error message is:
> File "...translators\bruker_afm.py", line 321, in _read_image_layer
> data_mat = data_vec.reshape(layer_info['Number of lines'], layer_info['Samps/line'])
> ValueError: cannot reshape array of size 524288 into shape (512,512)
The issue appears to be a bug in the NanoScope software >= 9.2 where all data is 4 bytes per pixel even though the header says otherwise.
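That arithmetic checks out in plain numpy (a hedged sketch with synthetic data, not the translator's actual read code): 524288 = 2 × 512 × 512, which is exactly what you get when 4-byte pixels are read with the 2-byte dtype the header advertises. Re-interpreting the same bytes with the wider dtype recovers the expected shape:

```python
import numpy as np

# Stand-in for the file contents: 512x512 pixels stored as 4 bytes each,
# but read back with the 2-byte dtype the header (wrongly) advertises.
raw_bytes = np.zeros(512 * 512, dtype=np.int32).tobytes()
data_vec = np.frombuffer(raw_bytes, dtype=np.int16)
print(data_vec.size)    # 524288 -> reshape(512, 512) would fail

# Re-interpreting the same bytes as 4-byte integers fixes the count:
data_mat = np.frombuffer(raw_bytes, dtype=np.int32).reshape(512, 512)
print(data_mat.shape)   # (512, 512)
```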
See line 391 in https://sourceforge.net/p/gwyddion/code/HEAD/tree/trunk/gwyddion/modules/file/nanoscope.c#l31

Issue #223: Need function to get all main NSID and USID datasets (2020-06-12)
https://code.ornl.gov/rvv/pycroscopy/-/issues/223
Created by: ssomnath

Issue #222: Electron microscopy translator suite (2020-05-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/222
Created by: ramav87
Currently, we have a mix of translators that are rather poor for electron microscopy files. We need to create a suite of new translators to clean up the problem. Specific classes will be developed for each of the following:
1) Nion (single images are in ndata; multidimensional data goes to h5 otherwise)
2) Digital Micrograph
3) FEI
4) EMD (Berkeley)
At some point, SEM should also be incorporated, along with atom probe tomography. A lot of these functions are available in pyTEMlib (https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/dm3lib_v1_0b.py and https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/file_tools.py).

Issue #221: BEPS notebook only works on first measurement group (2020-05-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/221
Created by: ssomnath

Every time a user changes a measurement parameter during a BE experiment, all subsequent data are written out to a different measurement group and corresponding HDF5 dataset. The BE notebook currently performs fitting and visualization only on the data contained in the first measurement group.
Instead, the notebook should iterate through all available datasets and perform the same operations on them.

Issue #218: Investigate VisPy as a solution to visualization (2020-05-08)
https://code.ornl.gov/rvv/pycroscopy/-/issues/218
Created by: ramav87

Visualization is an issue for large datasets; look into VisPy as a solution.

Issue #219: Image shifting and saving as new dset (2020-05-12)
https://code.ornl.gov/rvv/pycroscopy/-/issues/219
Created by: rajgiriUW
This came up when trying to align some images, but I didn't see anything already in place that re-saves the shifted data. It's very simple, but maybe I can just append it to the old notebook about registration on the site? Or add some simple function that:
a) shifts the array by a specified amount,
b) visualizes pre- and post-shifting,
c) creates a results group ("Shifted"),
d) writes using the pos/spec of the original dataset.

Issue #206: Separate translator(s) for DM3 and DM4 files (2019-06-06)
https://code.ornl.gov/rvv/pycroscopy/-/issues/206
Created by: ssomnath
Currently, DM3 and DM4 translation is being managed by the image, time series, movie, image stack translators. These translators were originally designed to read multiple file formats given the similarities in the operations. However, it is not clear to the end user as to which translator to use given a DM3/4 file. Perhaps the common elements in these translators could be reused or moved into static functions outside a translator class so that they can be shared across translators.
This change will be very important when attempting to build a look-up table that automates the translation process based off file extensions or signatures within the header. Such a feature would be the foundation for both a high-level "load()" function as well as the development of a pipeline that connects (offline) instruments to data facilities.

Issue #169: Request for interactive visualization tool for Dataset results from Process Class (2018-06-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/169
Created by: nmosto
I'm putting in a request for an interactive visualization tool for dataset results from the Process class.

What I was envisioning was a LHS interactive spatial map where you can specify a parameter to visualize (such as max amplitude). The RHS would inherit all of the data associated with a chosen pixel from the LHS, and one can do what they want with it, like apply new functions or make new plots.
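The LHS-map / RHS-detail idea can be sketched with plain matplotlib (a hypothetical illustration, not the requested Process-class visualizer; all data here is synthetic): clicking a pixel on the left replots that pixel's spectrum on the right.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

param_map = np.random.rand(32, 32)       # e.g. max amplitude per pixel
spectra = np.random.rand(32, 32, 128)    # raw data per pixel

fig, (ax_map, ax_detail) = plt.subplots(ncols=2, figsize=(10, 4))
ax_map.imshow(param_map, origin='lower')

def on_click(event):
    # When the LHS map is clicked, replot that pixel's spectrum on the RHS
    if event.inaxes is ax_map:
        col, row = int(event.xdata), int(event.ydata)
        ax_detail.clear()
        ax_detail.plot(spectra[row, col])
        fig.canvas.draw_idle()

fig.canvas.mpl_connect('button_press_event', on_click)
```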
For example, taking the LHS spatial map of a fitting parameter, the RHS could plot the fit using that parameter back over the raw data to check what happened.

Issue #198: Generalized Image Stack translator class (2018-11-27)
https://code.ornl.gov/rvv/pycroscopy/-/issues/198
Created by: ssomnath
The current PtychographyTranslator, MovieTranslator, OneViewTranslator, and ImageTranslator share a fair number of pieces in common. Perhaps a generalized class could be created that would:
1. Maximize code reuse
2. Minimize effort required to write variants of an image stack translator
Also, Dask may come in handy when reading numerous image files, pre-processing (e.g. binning) in memory, and writing the data into HDF5 datasets.
Related to: #197, #196, #194

Issue #196: PtychographyTranslator read_dm3 issue (2018-11-20)
https://code.ornl.gov/rvv/pycroscopy/-/issues/196
Created by: kbschliep
Having an issue using PtychographyTranslator.

In the translate definition within Ptychography.py, image_path is documented as the absolute path to the **folder** holding image files, yet when image_type == '.dm3' it calls read_dm3(image_path).[1] In read_dm3,[2] image_path is a path to a single image file, so some issues occur. I've copy-pasted the relevant code I'm citing below.
1 - translate() code:

```
Parameters
----------
image_path : str
    Absolute path to folder holding the image files
```

2 - read_dm3() code:

```
def read_dm3(image_path, get_parms=True):
    """
    Read an image from a dm3 file into a numpy array

    image_path : str
        Path to the image file
```
In the end, I'm just trying to use the PtychographyTranslator to convert a folder of .dm3 files to an HDF5 file.

This is how I am currently using it (let me know if I'm missing something):

```
## Example
from tkinter import Tk, filedialog

root = Tk()  # opens a window
root.directory = filedialog.askdirectory()  # opens an explorer window so you can find the folder of choice
Data_path = root.directory  # returns folder location 'C:\\Users\\kbs1\\Documents\\Test'
hdf5_filename = 'Test.hdf5'
hdf5_path = Data_path + '/' + hdf5_filename  # returns 'C:\\Users\\kbs1\\Documents\\Test\\Test.hdf5'
tran = PtychographyTranslator()
test_data = tran.translate(hdf5_path, Data_path, image_type='.dm3')
```

Issue #194: Rename PtychographyTranslator to something more generic (2018-11-14)
https://code.ornl.gov/rvv/pycroscopy/-/issues/194
Created by: ssomnath
How about GridofImagesTranslator?
Reminders:
1. Leave "a" PtychographyTranslator in the same location for legacy reasons but add a DeprecationWarning pointing towards the newly named translator.
2. Check existing notebooks to ensure that they are updated as well.

Issue #195: Record pycroscopy version (2018-11-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/195
Created by: CompPhysChris
We record the pyUSID version that was used to create the HDF5 objects when they are written to file. We need to add a similar feature to the pycroscopy Process and analysis classes.

Issue #168: Implement a way to update existing data to new version requirements (2018-06-11)
https://code.ornl.gov/rvv/pycroscopy/-/issues/168
Created by: CompPhysChris
We try to keep updates from breaking backwards compatibility, but some of our new requirements have resulted in results from previous versions no longer being valid. We need to implement a version check and automatic updating system to correct for any changes in the code.
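The sequential-update idea might look like the following sketch (all names and versions here are hypothetical): keep one migration per version and apply every migration between the file's version and the current one, in order.

```python
# Hypothetical registry of file-format migrations, one per version
MIGRATIONS = {
    '0.0.1': lambda meta: {**meta, 'schema': 1},
    '0.0.2': lambda meta: {**meta, 'schema': 2, 'units_checked': True},
}

def upgrade(meta, file_version, current_version, migrations=MIGRATIONS):
    """Apply every migration newer than the file's version, in order."""
    for version in sorted(migrations):
        if file_version < version <= current_version:
            meta = migrations[version](meta)
    return meta

meta = upgrade({'schema': 0}, file_version='0.0.0', current_version='0.0.2')
print(meta)  # {'schema': 2, 'units_checked': True}
```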
These updates need to be tied to a specific version and will be applied sequentially from the version of the file to the current version.

Issue #167: loop_fitter missing checks for existing results (2018-06-11)
https://code.ornl.gov/rvv/pycroscopy/-/issues/167
Created by: ramav87
The do_guess and do_fit methods of BELoopFitter are missing the override options that were added to SHOFitter. This causes an error in the latest BE processing notebook.

Issue #160: NanonisTranslator cannot translate sxm file (2018-06-01)
https://code.ornl.gov/rvv/pycroscopy/-/issues/160
Created by: donpatrice
I have an .sxm file that we would like to read and translate with pycroscopy, but this does not work.
```
>>> nt = NanonisTranslator(sxm_file)
>>> nt.translate()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-240-1db0a86297c7> in <module>()
----> 1 nt.translate()
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in translate(self, data_channels, verbose)
71 """
72 if self.parm_dict is None or self.data_dict is None:
---> 73 self._read_data(self.data_path)
74
75 if data_channels is None:
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in _read_data(self, grid_file_path)
153
154 parm_dict = dict()
--> 155 for key, parm_grid in zip(header_dict['fixed_parameters'] + header_dict['experimental_parameters'],
156 signal_dict['params'].T):
157 parm_dict[key] = parm_grid
KeyError: 'fixed_parameters'
```
The problem seems to be the missing key 'fixed_parameters' in the header_dict.
```
>>> from pycroscopy.io.translators.df_utils.nanonispy.read import Scan
>>> s = Scan(sxm_file)
>>> s.header.keys()
dict_keys(['scan_pixels', 'current>offset (a)', 'scan_time', 'current>gain', 'z-controller>i gain', 'scan_range', 'data_info', 'current>current (a)', 'z-controller>tiplift (m)', 'z-controller', 'current>calibration (a/v)', 'z-controller>controller name', 'scan_file', 'z-controller>switch off delay (s)', 'z-controller>z (m)', 'z-controller>controller status', 'z-controller>time const (s)', 'rec_time', 'scan_offset', 'z-controller>setpoint', 'rec_temp', 'nanonis_version', 'acq_time', 'scanit_type', 'z-controller>setpoint unit', 'z-controller>p gain', 'bias', 'comment', 'scan_dir', 'scan_angle', 'rec_date'])
```
How can I fix this issue?
Thanks a lot.

Issue #130: SignalFilter memory use (2018-04-05)
https://code.ornl.gov/rvv/pycroscopy/-/issues/130
Created by: CompPhysChris
We need a custom memory-use calculation for SignalFilter. Taking the FFT of the input data converts it to complex values, which increases the initial size.

Issue #20: Units and labels for data (2016-12-14)
https://code.ornl.gov/rvv/pycroscopy/-/issues/20
Created by: ssomnath
All main datasets should have appropriate labels and units attributes to tell visualizers / plotters about the data.
This applies to all translators.
The processing and analysis classes should either copy these attributes from the source dataset or create new attributes as appropriate.
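What such attributes might look like, as a minimal sketch in plain h5py (pyUSID's real helpers differ; the attribute names here are illustrative):

```python
import h5py
import numpy as np

# In-memory HDF5 file so this sketch leaves nothing on disk
with h5py.File('demo.h5', 'w', driver='core', backing_store=False) as h5_f:
    h5_dset = h5_f.create_dataset('Raw_Data', data=np.zeros((4, 128)))
    # Tell downstream visualizers / plotters what the numbers mean
    h5_dset.attrs['quantity'] = 'Current'
    h5_dset.attrs['units'] = 'nA'
    units = h5_dset.attrs['units']
```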
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/200029249765524/202072668364428)
Issue #38: Ability to tell a (obviously) noisy image from a relatively clear image using radially averaged correlation function (2016-12-09)
https://code.ornl.gov/rvv/pycroscopy/-/issues/38
Created by: ssomnath

Issue #49: repack in ioHDF5 still crashing (2016-12-14)
https://code.ornl.gov/rvv/pycroscopy/-/issues/49
Created by: ssomnath

Issue #46: Add the information criterion for SHO fits as well (2016-12-14)
https://code.ornl.gov/rvv/pycroscopy/-/issues/46
Created by: ssomnath
AIC / BIC

Issue #47: BEPS NDF translator needs to select N random spectra for Q factor checking (2016-12-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/47
Created by: ssomnath

This is necessary to figure out if we need to take the complex conjugate of the data.

Issue #71: Explore multi-core / faster alternatives to Kmeans and SVD (2017-03-17)
https://code.ornl.gov/rvv/pycroscopy/-/issues/71
Created by: ssomnath
Look at arguments for KMeans such as n_jobs and init to speed up K-means.
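For reference, a hedged sketch with synthetic data: KMeans' n_jobs argument was later deprecated and removed in scikit-learn 1.0, so on modern versions MiniBatchKMeans is the usual speed-up, while init and n_init remain worth tuning.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
X = rng.random((5000, 16)).astype(np.float32)

# Mini-batch K-means trades a little accuracy for a large speed-up on big data;
# 'k-means++' init usually needs fewer restarts than 'random'.
est = MiniBatchKMeans(n_clusters=4, init='k-means++', n_init=3,
                      batch_size=1024, random_state=0)
labels = est.fit_predict(X)
print(labels.shape)  # (5000,)
```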
We may need to look for another package that speeds up SVD.

Issue #82: Improve analysis documentation (2017-04-11)
https://code.ornl.gov/rvv/pycroscopy/-/issues/82
Created by: CompPhysChris

Issue #92: Better version attributes in files (2017-05-24)
https://code.ornl.gov/rvv/pycroscopy/-/issues/92
Created by: ssomnath
Add pycroscopy and underlying package (e.g. scipy) versions to the datasets for any operation. These should be added to datasets and not data groups, to encompass situations such as the Guess operation coming from a different source (e.g. instrumentation software) and the fit coming from either pycroscopy or BEAM, etc.

Issue #101: Ensure model classes accept a max_mem input (2017-07-14)
https://code.ornl.gov/rvv/pycroscopy/-/issues/101
Created by: CompPhysChris
The BE_SHO_model does not. Other models should be checked as well.

Issue #102: Incorrect reshaping on loop projection (2018-01-18)
https://code.ornl.gov/rvv/pycroscopy/-/issues/102
Created by: ramav87

Error in projectLoop batch (in be_loop_model): the order of the reshaped array after loop projection is reversed from what it should be.

Issue #103: MPI implementations for analysis and processing packages (2017-07-18)
https://code.ornl.gov/rvv/pycroscopy/-/issues/103
Created by: ssomnath
The idea is to provide an alternate framework for code to scale to multi-CPU systems. The same scientific analysis and processing functions should be usable in the current multiprocessing implementation and the future MPI implementations. This should ensure that scientists can continue to write / test / execute simple functions, with minimal effort needed to scale the computation to a large number of cores / CPUs.

Issue #105: Igor ibw translation errors in python 3 (2018-03-12)
https://code.ornl.gov/rvv/pycroscopy/-/issues/105
Created by: ssomnath
```
pycroscopy\io\translators\igor_ibw.py in _read_parms(ibw_wave)
    173     Dictionary containing parameters
    174     """
--> 175     parm_string = ibw_wave.get('note').decode('utf-8')
    176     parm_string = parm_string.rstrip('\r')
    177     parm_list = parm_string.split('\r')

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 2200: invalid start byte
```

Issue #113: Colorbar in BE processing notebook (2017-08-09)
https://code.ornl.gov/rvv/pycroscopy/-/issues/113
Created by: ramav87
All image plots need colorbars with units. Consistency across notebooks for this is also something we should work on.

Issue #119: BE Processing Notebook (2017-09-01)
https://code.ornl.gov/rvv/pycroscopy/-/issues/119
Created by: ramav87
Colorbars are needed for visualization of spatial maps, as well as an export ("Save Figure") button for each visualizer.

Issue #120: BE processing Notebook (2017-09-11)
https://code.ornl.gov/rvv/pycroscopy/-/issues/120
Created by: ramav87
Need the ability to view the spatial maps of the loop fit parameters (V+, V-, work of switching, etc.). Currently only visualization of the parameters of the loop fit function (a1, a2, b1, b2, etc.) is allowed.

Issue #135: Plot attributes are hard coded (2018-05-26)
https://code.ornl.gov/rvv/pycroscopy/-/issues/135
Created by: CompPhysChris
Some plot attributes are being hard-coded rather than pulled from the datasets. Example: field names in visualize_sho_results.

Issue #136: BESHOFitter does not find existing results (2018-05-22)
https://code.ornl.gov/rvv/pycroscopy/-/issues/136
Created by: CompPhysChris
The Fitter method _check_for_old_guess does not find completed fits.

Issue #141: 3+ Position support for loop visualizers (2018-04-19)
https://code.ornl.gov/rvv/pycroscopy/-/issues/141
Created by: CompPhysChris
The visualizers for the raw and SHO BE data have been updated to support more than 2 position dimensions. Similar updates need to be made to the loop visualizers.

Issue #140: be_viz_utils.jupyter_visualize_beps_sho not working for relaxation SHO fits (2018-04-17)
https://code.ornl.gov/rvv/pycroscopy/-/issues/140
Created by: ramav87
Need to check dimensional reshaping
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-36-a04eb0cf2113> in <module>()
      1 step_chan = 'DC_Offset'
----> 2 px.be_viz_utils.jupyter_visualize_beps_sho(h5_sho_fit, step_chan)

~\Documents\GitHub\pycroscopy\pycroscopy\viz\be_viz_utils.py in jupyter_visualize_beps_sho(pc_sho_dset, step_chan, resp_func, resp_label, cmap)
    324     bias_slider = ax_bias.axvline(x=step_ind, color='r')
    325
--> 326     img_map, img_cmap = plot_map(ax_map, spatial_map.T, show_xy_ticks=None)
    327
    328     map_title = '{} - {}={}'.format(sho_quantity, step_chan, bias_mat[step_ind][0])

~\Documents\GitHub\pycroscopy\pycroscopy\viz\plot_utils.py in plot_map(axis, img, show_xy_ticks, show_cbar, x_size, y_size, num_ticks, stdevs, cbar_label, tick_font_size, origin, **kwargs)
    430     kwargs.update({'origin': origin})
    431
--> 432     im_handle = axis.imshow(img, **kwargs)
    433
    434     if show_xy_ticks is True:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
   1843                     "the Matplotlib list!)" % (label_namer, func.__name__),
   1844                     RuntimeWarning, stacklevel=2)
-> 1845             return func(ax, *args, **kwargs)
   1846
   1847         inner.__doc__ = _add_data_doc(inner.__doc__,

~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
   5471                               resample=resample, **kwargs)
   5472
-> 5473         im.set_data(X)
   5474         im.set_alpha(alpha)
   5475         if im.get_clip_path() is None:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\image.py in set_data(self, A)
    651         if not (self._A.ndim == 2
    652                 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
--> 653             raise TypeError("Invalid dimensions for image data")
    654
    655         if self._A.ndim == 3:

TypeError: Invalid dimensions for image data
```