# pycroscopy issues
https://code.ornl.gov/rvv/pycroscopy/-/issues

## Issue #192: Out of date notebooks
https://code.ornl.gov/rvv/pycroscopy/-/issues/192 · 2018-10-08 · Vasudevan, Rama K.
*Created by: DancingQuanta*

I am trying to do the Tutorial_04_Interactive_Visualization tutorial, but I found that some things are broken because functions used in the notebooks have been renamed or their functionality has changed.
## Issue #182: AttributeError in gIV Notebook (help wanted)
https://code.ornl.gov/rvv/pycroscopy/-/issues/182 · 2018-07-03 · Vasudevan, Rama K.
*Created by: cuu123*

The notebook can't run this line:

```python
# Load the raw dataset
h5_path = px.io_utils.uiGetFile('*.h5', 'G-mode IV dataset')
```

It fails with:

```
AttributeError: 'module' object has no attribute 'uiGetFile'
```

Is it possible to use `usid` (pyUSID) instead of `px`? But installing pyUSID with conda does not seem to work; the message is:

```
PackagesNotFoundError: The following packages are not available from current channels
```

So how to solve it?
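pyUSID is published on PyPI, so `pip install pyUSID` may succeed where the default conda channels fail. Until the notebook itself is updated, a stand-in for the removed file picker can be written with the standard library alone; the helper below is illustrative and is not part of the pycroscopy or pyUSID API:

```python
# Stdlib-only stand-in for the removed px.io_utils.uiGetFile helper.
# The function name and signature are invented for illustration.

def ui_get_file(pattern='*.h5', title='Select a file'):
    """Open a native file picker and return the chosen path as a string."""
    # Imported lazily so the module still loads on headless machines
    # where tkinter is unavailable.
    import tkinter as tk
    from tkinter import filedialog

    root = tk.Tk()
    root.withdraw()  # hide the empty root window behind the dialog
    path = filedialog.askopenfilename(title=title,
                                      filetypes=[('HDF5 files', pattern)])
    root.destroy()
    return path

# h5_path = ui_get_file('*.h5', 'G-mode IV dataset')
```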
## Issue #169: Request for interactive visualization tool for Dataset results from Process class
https://code.ornl.gov/rvv/pycroscopy/-/issues/169 · 2018-06-15 · Vasudevan, Rama K.
*Created by: nmosto*

I'm putting in a request for an interactive visualization tool for dataset results from the Process class.

What I was envisioning was a left-hand-side (LHS) interactive spatial map where you can specify a parameter to visualize (such as max amplitude). The right-hand side (RHS) would inherit all of the data associated with a pixel chosen on the LHS, and one could do what they want with it, such as apply new functions or make new plots.

For example, taking the LHS spatial map of a fitting parameter, the RHS could plot the fit using that parameter over the raw data to check what happened.

## Issue #198: Generalized Image Stack translator class
https://code.ornl.gov/rvv/pycroscopy/-/issues/198 · 2018-11-27 · Vasudevan, Rama K.
*Created by: ssomnath*
The current PtychographyTranslator, MovieTranslator, OneViewTranslator, ImageTranslator share a fair number of pieces in common. Perhaps a generalized class could be created that would be able to:
1. Maximize code reuse
2. Minimize effort required to write variants of an image stack translator
Also, Dask may come in handy when reading numerous image files, pre-processing them in memory (e.g. binning), and writing the data into HDF5 datasets.
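A possible shape for such a base class, sketched with a template method; all names below are hypothetical, and a real translator would write HDF5 datasets via pyUSID rather than return Python lists:

```python
import os

# Hypothetical generalized base class; none of these names exist in
# pycroscopy. Subclasses supply only the format-specific reader, so the
# code for discovering files, preprocessing, and writing is shared.
class ImageStackTranslator:
    extensions = ('.png', '.tif')  # subclasses override, e.g. ('.dm3',)

    def translate(self, folder, bin_factor=1):
        """Read every matching image in folder, preprocess, and store."""
        paths = sorted(name for name in os.listdir(folder)
                       if name.lower().endswith(self.extensions))
        stack = [self.preprocess(self.read_image(os.path.join(folder, name)),
                                 bin_factor)
                 for name in paths]
        return self.write_stack(stack)

    def read_image(self, path):
        raise NotImplementedError  # format-specific: dm3, tiff, ...

    def preprocess(self, image, bin_factor):
        return image  # shared hook: binning/cropping could go here

    def write_stack(self, stack):
        return stack  # shared HDF5 writing (or Dask-backed I/O) would go here
```

Reading and binning each image could additionally be wrapped in `dask.delayed` inside `translate` to parallelize large stacks.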
Related to: #197, #196, #194

## Issue #199: PIFM and HyperImage Translators
https://code.ornl.gov/rvv/pycroscopy/-/issues/199 · 2019-02-21 · Vasudevan, Rama K.
*Created by: ssomnath*

The base code that makes up the translator already exists [here](https://github.com/rajgiriUW/pifm_translator). We need to finish converting this into a formal pycroscopy translator.

## Issue #196: PtychographyTranslator read_dm3 issue
https://code.ornl.gov/rvv/pycroscopy/-/issues/196 · 2018-11-20 · Vasudevan, Rama K.
*Created by: kbschliep*
Having an issue using PtychographyTranslator.

In the translate definition within Ptychography.py, image_path is documented as the absolute path to the **folder** holding the image files, yet when image_type == '.dm3' it calls read_dm3(image_path).[1] In read_dm3,[2] image_path is the path to a single image file, so some issues occur. I've copy-pasted the relevant code I'm citing below.
1 - translate() docstring:

```
Parameters
----------
image_path : str
    Absolute path to folder holding the image files
```

2 - read_dm3() code:

```python
def read_dm3(image_path, get_parms=True):
    """
    Read an image from a dm3 file into a numpy array

    image_path : str
        Path to the image file
```
In the end, I'm just trying to use the PtychographyTranslator to convert a folder of .dm3 files to an HDF5 file. This is how I am currently using it (let me know if I'm missing something):

```python
from tkinter import Tk, filedialog
from pycroscopy.io.translators import PtychographyTranslator  # import path may vary by version

root = Tk()  # Tk() opens a tkinter root window
root.directory = filedialog.askdirectory()  # explorer window to pick the folder of choice
Data_path = root.directory  # returns folder location, e.g. 'C:\\Users\\kbs1\\Documents\\Test'

hdf5_filename = 'Test.hdf5'
hdf5_path = Data_path + '/' + hdf5_filename  # 'C:\\Users\\kbs1\\Documents\\Test\\Test.hdf5'

tran = PtychographyTranslator()
test_data = tran.translate(hdf5_path, Data_path, image_type='.dm3')
```

## Issue #194: Rename PtychographyTranslator to something more generic
https://code.ornl.gov/rvv/pycroscopy/-/issues/194 · 2018-11-14 · Vasudevan, Rama K.
*Created by: ssomnath*
How about GridofImagesTranslator?
Reminders:
1. Leave "a" PtychographyTranslator in the same location for legacy reasons but add a DeprecationWarning pointing towards the newly named translator.
2. Check existing notebooks to ensure that they are updated as well.

## Issue #195: Record pycroscopy version
https://code.ornl.gov/rvv/pycroscopy/-/issues/195 · 2018-11-15 · Vasudevan, Rama K.
*Created by: CompPhysChris*
We record the pyUSID version that was used to create the HDF5 objects when they are written to file. We need to add a similar feature to the pycroscopy process and analysis classes.

## Issue #183: Loop Fitting on FORC datasets
https://code.ornl.gov/rvv/pycroscopy/-/issues/183 · 2018-07-05 · Vasudevan, Rama K.
*Created by: ramav87*
Loop fitting on FORC datasets produces an error due to a size mismatch between the generated output, which appears to be the length of a particular spectroscopic slice, and the fit h5 object, whose size is predetermined. Editing the fitter class does not appear to be a viable way to fix this issue.

Error specifics: calling loop_fitter on a FORC dataset (25 position dimensions, 2 field dimensions, 8 FORC dimensions) yields the following error:
```
~/Documents/pycroscopy/pycroscopy/analysis/be_loop_fitter.py in do_fit(self, processors, max_mem, solver_type, solver_options, obj_func, get_loop_parameters, h5_guess)
396
397 self.fit = np.hstack(tuple(results))
--> 398 self._set_results()
399
400 self._start_pos = self._end_pos
~/Documents/pycroscopy/pycroscopy/analysis/fitter.py in _set_results(self, is_guess)
185 print('Writing data to positions: {} to {}'.format(self._start_pos, self._end_pos))
186
--> 187 targ_dset[self._start_pos: self._end_pos, :] = source_dset[:,:]
TypeError: Can't broadcast (25, 2) -> (25, 16)
```

## Issue #168: Implement a way to update existing data to new version requirements
https://code.ornl.gov/rvv/pycroscopy/-/issues/168 · 2018-06-11 · Vasudevan, Rama K.
*Created by: CompPhysChris*
We try to keep updates from breaking backwards compatibility, but some of our new requirements have resulted in results from previous versions no longer being valid. We need to implement a version check and automatic updating system to correct for any changes in the code.
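One possible shape for this, sketched with plain dictionaries; the registry, function names, and version strings are all hypothetical, not existing pycroscopy code. Migrations are keyed by version and applied in order from the file's recorded version up to the current one:

```python
# Hypothetical sketch of sequential, version-keyed migrations; none of
# these names exist in pycroscopy. Each entry upgrades data written by
# versions older than its key.
MIGRATIONS = {
    '0.59.0': lambda data: {**data, 'units': data.get('units', 'a.u.')},
    '0.60.0': lambda data: {**data, 'quantity': data.get('quantity', 'unknown')},
}
CURRENT_VERSION = '0.60.0'

def upgrade(data, file_version):
    """Apply every migration newer than file_version, oldest first."""
    # Plain string comparison works for these made-up versions; real
    # code should parse versions properly (e.g. packaging.version).
    for version in sorted(MIGRATIONS):
        if file_version < version:
            data = MIGRATIONS[version](data)
    data['version'] = CURRENT_VERSION
    return data
```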
These updates need to be tied to a specific version and will be applied sequentially from the version of the file to the current version.

## Issue #167: loop_fitter missing checks for existing results
https://code.ornl.gov/rvv/pycroscopy/-/issues/167 · 2018-06-11 · Vasudevan, Rama K.
*Created by: ramav87*
The do_guess and do_fit methods of BELoopFitter are missing the override options that have been added to SHOFitter. This causes an error in the latest BE processing notebook.

## Issue #163: utf-8 problem with NanonisFile
https://code.ornl.gov/rvv/pycroscopy/-/issues/163 · 2018-06-06 · Vasudevan, Rama K.
*Created by: donpatrice*
I have an sxm file that causes some problems when reading it with NanonisFile:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 67: invalid start byte
```
I found that the relevant part differs from [nanonispy](https://github.com/underchemist/nanonispy/blob/master/nanonispy/read.py) by
```
try:
entry = line.strip().decode()
except UnicodeDecodeError:
warnings.warn('{} has non-uft-8 characters, replacing them.'.format(f.name))
entry = line.strip().decode('utf-8', errors='replace')
```
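The behavior of that fallback can be reproduced in isolation with plain Python. The byte string below is invented for illustration; 0xb0 is the Latin-1 degree sign, which is not a valid UTF-8 start byte:

```python
# A header line containing a raw 0xb0 byte (a Latin-1 degree sign),
# which is invalid UTF-8. The byte string is made up for illustration;
# real Nanonis headers differ.
line = b'Angle: 45 \xb0\n'

try:
    entry = line.strip().decode()           # strict UTF-8: raises
except UnicodeDecodeError:
    entry = line.strip().decode('utf-8', errors='replace')

print(entry)  # the bad byte becomes U+FFFD: 'Angle: 45 \ufffd'
```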
In particular, these lines are missing in pycroscopy. Is there a reason for this? Adding these lines solves my problem.

## Issue #160: NanonisTranslator cannot translate sxm file
https://code.ornl.gov/rvv/pycroscopy/-/issues/160 · 2018-06-01 · Vasudevan, Rama K.
*Created by: donpatrice*
I have an sxm file that we would like to read and translate with pycroscopy, but this does not work.
```
>>> nt = NanonisTranslator(sxm_file)
>>> nt.translate()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-240-1db0a86297c7> in <module>()
----> 1 nt.translate()
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in translate(self, data_channels, verbose)
71 """
72 if self.parm_dict is None or self.data_dict is None:
---> 73 self._read_data(self.data_path)
74
75 if data_channels is None:
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in _read_data(self, grid_file_path)
153
154 parm_dict = dict()
--> 155 for key, parm_grid in zip(header_dict['fixed_parameters'] + header_dict['experimental_parameters'],
156 signal_dict['params'].T):
157 parm_dict[key] = parm_grid
KeyError: 'fixed_parameters'
```
The problem seems to be the missing key 'fixed_parameters' in the header_dict.
```
>>> from pycroscopy.io.translators.df_utils.nanonispy.read import Scan
>>> s = Scan(sxm_file)
>>> s.header.keys()
dict_keys(['scan_pixels', 'current>offset (a)', 'scan_time', 'current>gain', 'z-controller>i gain', 'scan_range', 'data_info', 'current>current (a)', 'z-controller>tiplift (m)', 'z-controller', 'current>calibration (a/v)', 'z-controller>controller name', 'scan_file', 'z-controller>switch off delay (s)', 'z-controller>z (m)', 'z-controller>controller status', 'z-controller>time const (s)', 'rec_time', 'scan_offset', 'z-controller>setpoint', 'rec_temp', 'nanonis_version', 'acq_time', 'scanit_type', 'z-controller>setpoint unit', 'z-controller>p gain', 'bias', 'comment', 'scan_dir', 'scan_angle', 'rec_date'])
```
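Until the translator distinguishes grid (.3ds) files from scan (.sxm) files, a guard like the sketch below would at least fail with a clearer message. Only the two key names are taken from the traceback above; the function itself is illustrative, not pycroscopy code:

```python
# Defensive lookup for header keys that exist in Nanonis grid (.3ds)
# headers but not in scan (.sxm) headers. Illustrative only; a real fix
# would live inside NanonisTranslator._read_data.
def get_parm_keys(header_dict):
    missing = [key for key in ('fixed_parameters', 'experimental_parameters')
               if key not in header_dict]
    if missing:
        raise ValueError(
            'Header is missing {}; this looks like a scan (.sxm) file, '
            'but the translator currently expects a grid (.3ds) '
            'file.'.format(missing))
    return header_dict['fixed_parameters'] + header_dict['experimental_parameters']
```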
How can I fix this issue?
Thanks a lot.

## Issue #157: Reshape_to_n_dims for non-square images
https://code.ornl.gov/rvv/pycroscopy/-/issues/157 · 2018-05-26 · Vasudevan, Rama K.
*Created by: rajgiriUW*
Posted on Slack, but for tracking.
The issue is with non-square image files: I think the reshape_to_n_dims function is reordering the data when it shouldn't be, resulting in jagged-looking images (if rows > columns) or an image repeated vertically (if columns > rows).
Here's what happens when running this on an example:

```
Position dimensions: ['X' 'Y']
Position sort order: [0 1]
Spectroscopic Dimensions: ['arb']
Spectroscopic sort order: [0]
Position dimensions (sort applied): ['X' 'Y']
Position dimensionality (sort applied): [256, 128]
Spectroscopic dimensions (sort applied): ['arb']
Spectroscopic dimensionality (sort applied): [1]
After first reshape, labels are ['Y' 'X' 'arb']
Data shape is (128, 256, 1)
Axes will permuted in this order: [1 0 2]
New labels ordering: ['X' 'Y' 'arb']
Dataset now of shape: (256, 128, 1)
```
Suhas seems to think the issue is in the line:

> Axes will permuted in this order: [1 0 2]

since that is changing the dimensions.
I confirmed in the Igor IBW Translator that the position dimensions are being written correctly. It is possible to correct the Translator to fix this, I think, but I then expect the issue to pop up in other translators.
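The scrambling described above can be reproduced with plain Python lists, independent of pycroscopy (dimensions shrunk to 2 x 3 for readability):

```python
# Flatten a rows x cols image row by row, then rebuild it with the two
# dimensions swapped: the values wrap around instead of transposing,
# which is exactly what produces jagged or vertically repeated images.
rows, cols = 2, 3
img = [[r * cols + c for c in range(cols)] for r in range(rows)]  # [[0,1,2],[3,4,5]]
flat = [v for row in img for v in row]                            # [0,1,2,3,4,5]

correct = [flat[r * cols:(r + 1) * cols] for r in range(rows)]    # [[0,1,2],[3,4,5]]
wrong = [flat[c * rows:(c + 1) * rows] for c in range(cols)]      # [[0,1],[2,3],[4,5]]

assert correct == img
assert wrong != [[0, 3], [1, 4], [2, 5]]  # not a transpose, just re-wrapped
```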
## Issue #156: svd_utils.rebuild_svd bug in create_indexed_group
https://code.ornl.gov/rvv/pycroscopy/-/issues/156 · 2018-05-22 · Vasudevan, Rama K.
*Created by: rajgiriUW*
I think this was a bug in commit f327da7d98907b9ba09473055320e12ddb04e122 in updating svd_utils. Line 328:

```python
rebuilt_grp = create_indexed_group('Rebuilt_Data', h5_svd_group.name[1:])
```

throws an error, since create_indexed_group requires an h5py group, not a string. I think this change should work (will test):

```python
rebuilt_grp = create_indexed_group(h5_svd_group, 'Rebuilt_Data')
```

## Issue #152: svd_rebuild error with get_component_slice returning ndarray
https://code.ornl.gov/rvv/pycroscopy/-/issues/152 · 2018-05-08 · Vasudevan, Rama K.
*Created by: rajgiriUW*
In svd_utils.rebuild_svd:

```python
comp_slice, num_comps = get_component_slice(components, total_components=h5_main.shape[1])
```

will cause an error later in:

```python
n_comps = h5_S[comp_slice].size
```

if comp_slice is an ndarray rather than a list. This seems to be an h5py limitation.

## Issue #130: SignalFilter memory use
https://code.ornl.gov/rvv/pycroscopy/-/issues/130 · 2018-04-05 · Vasudevan, Rama K.
*Created by: CompPhysChris*
Need a custom memory-use calculation for SignalFilter. Taking the FFT of the input data converts it to complex values, which increases the initial size.

## Issue #140: be_viz_utils.jupyter_visualize_beps_sho not working for relaxation SHO fits
https://code.ornl.gov/rvv/pycroscopy/-/issues/140 · 2018-04-17 · Vasudevan, Rama K.
*Created by: ramav87*
Need to check dimensional reshaping.

```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-36-a04eb0cf2113> in <module>()
1 step_chan = 'DC_Offset'
----> 2 px.be_viz_utils.jupyter_visualize_beps_sho(h5_sho_fit, step_chan)
~\Documents\GitHub\pycroscopy\pycroscopy\viz\be_viz_utils.py in jupyter_visualize_beps_sho(pc_sho_dset, step_chan, resp_func, resp_label, cmap)
324 bias_slider = ax_bias.axvline(x=step_ind, color='r')
325
--> 326 img_map, img_cmap = plot_map(ax_map, spatial_map.T, show_xy_ticks=None)
327
328 map_title = '{} - {}={}'.format(sho_quantity, step_chan, bias_mat[step_ind][0])
~\Documents\GitHub\pycroscopy\pycroscopy\viz\plot_utils.py in plot_map(axis, img, show_xy_ticks, show_cbar, x_size, y_size, num_ticks, stdevs, cbar_label, tick_font_size, origin, **kwargs)
430 kwargs.update({'origin': origin})
431
--> 432 im_handle = axis.imshow(img, **kwargs)
433
434 if show_xy_ticks is True:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
1843 "the Matplotlib list!)" % (label_namer, func.__name__),
1844 RuntimeWarning, stacklevel=2)
-> 1845 return func(ax, *args, **kwargs)
1846
1847 inner.__doc__ = _add_data_doc(inner.__doc__,
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
5471 resample=resample, **kwargs)
5472
-> 5473 im.set_data(X)
5474 im.set_alpha(alpha)
5475 if im.get_clip_path() is None:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\image.py in set_data(self, A)
651 if not (self._A.ndim == 2
652 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
--> 653 raise TypeError("Invalid dimensions for image data")
654
655 if self._A.ndim == 3:
TypeError: Invalid dimensions for image data
```

## Issue #137: sdist tarball has wrong contents
https://code.ornl.gov/rvv/pycroscopy/-/issues/137 · 2018-03-27 · Vasudevan, Rama K.
*Created by: carlodri*
The source tarball available on PyPI is not what it should be: it contains a number of nested directories, but it doesn't have the structure of a package source. Is this deliberate?

This is strictly related to #134, since conda-forge is based on `sdist` tarballs.

## Issue #141: 3+ Position support for loop visualizers
https://code.ornl.gov/rvv/pycroscopy/-/issues/141 · 2018-04-19 · Vasudevan, Rama K.
*Created by: CompPhysChris*
The visualizers for the raw and SHO BE data have been updated to support more than 2 position dimensions. Similar updates need to be done to the loop visualizers.
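A minimal sketch of the reduction such visualizers need, assuming nested-list data purely for illustration: pin every position dimension beyond the first two at a slider-chosen index to recover a 2D map.

```python
# Reduce an N-dimensional position grid to a 2D map by pinning every
# extra position dimension at a fixed index (as slider widgets would).
# The nested-list layout and function name are illustrative only, not
# pycroscopy API.
def slice_to_2d(data, fixed_indices):
    """data: nested lists with ndim = 2 + len(fixed_indices).
    fixed_indices: one index to pin for each trailing position dimension."""
    for idx in fixed_indices:
        data = [[cell[idx] for cell in row] for row in data]
    return data

# A 2x2 spatial map with one extra position dimension of length 3:
cube = [[[1, 2, 3], [4, 5, 6]],
        [[7, 8, 9], [10, 11, 12]]]
print(slice_to_2d(cube, [1]))  # [[2, 5], [8, 11]]
```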