# pycroscopy issues
https://code.ornl.gov/rvv/pycroscopy/-/issues

## Issue #239: SVD error (created by sulaymandesai, 2020-12-20)
https://code.ornl.gov/rvv/pycroscopy/-/issues/239
Hi,
I have been following the example notebooks on this GitHub page to perform SVD. I get the following error:
```
1 decomposer = px.processing.svd_utils.SVD(h5_main, num_components=100)
----> 2 h5_svd_group = decomposer.compute()
3
4 h5_u = h5_svd_group['U']
5 h5_v = h5_svd_group['V']
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy/processing/svd_utils.py in compute(self, override)
161 """
162 if self.__u is None and self.__v is None and self.__s is None:
--> 163 self.test(override=override)
164
165 if self.h5_results_grp is None:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy/processing/svd_utils.py in test(self, override)
137 raise ValueError('Could not reshape U to N-Dimensional dataset! Error:' + success)
138
--> 139 v_mat, success = reshape_to_n_dims(self.__v, h5_pos=np.expand_dims(np.arange(self.__u.shape[1]), axis=1),
140 h5_spec=self.h5_main.h5_spec_inds)
141 if not success:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pyUSID/io/hdf_utils/model.py in reshape_to_n_dims(h5_main, h5_pos, h5_spec, get_labels, verbose, sort_dims, lazy)
84 else:
85 if not isinstance(h5_main, (h5py.Dataset, np.ndarray, da.core.Array)):
---> 86 raise TypeError('h5_main should either be a h5py.Dataset or numpy array')
87
88 if h5_pos is not None:
TypeError: h5_main should either be a h5py.Dataset or numpy array
```
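For context, the decomposition that this call performs is essentially a truncated SVD on the flattened (positions × spectral points) matrix; the error above is raised while reshaping the results back to N dimensions, after the decomposition itself has succeeded. A standalone numpy sketch of the decomposition step, with all shapes and names assumed for illustration only:

```python
import numpy as np

# Hypothetical stand-in for h5_main: 16 positions x 8 spectral points
data = np.random.rand(16, 8)
num_components = 4

# Truncated SVD, conceptually what SVD(...).compute() produces
u, s, v = np.linalg.svd(data, full_matrices=False)
u, s, v = u[:, :num_components], s[:num_components], v[:num_components]

# U holds per-position abundances, V holds component spectra
print(u.shape, s.shape, v.shape)  # (16, 4) (4,) (4, 8)
```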
Any help would be appreciated!

---

## Issue #233: Automate notebook tests? (created by alex-treebeard, 2020-12-13)
https://code.ornl.gov/rvv/pycroscopy/-/issues/233
Hey there,
Was just checking out your project and thought it may be a good candidate for [treebeard](https://github.com/treebeardtech/treebeard).
Happy to help getting set up if you are interested; otherwise feel free to close 👍

---

## Issue #220: BEOdfTranslator failure (created by ramav87, 2020-05-15)
https://code.ornl.gov/rvv/pycroscopy/-/issues/220
The BE ODF translator is unable to translate certain old files collected in 2010. This error was found for nonlinearity measurements:
```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-6-6c7fce848344> in <module>
      4 translator = px.translators.BEodfTranslator()
      5
----> 6 translator.translate(file_path)

//anaconda3/lib/python3.7/site-packages/pycroscopy/io/translators/be_odf.py in translate(self, file_path, show_plots, save_plots, do_histogram, verbose)
    251
    252         if isBEPS:
--> 253             (UDVS_labs, UDVS_units, UDVS_mat) = self.__build_udvs_table(parm_dict)
    254
    255         # Remove the unused plot group columns before proceeding:

//anaconda3/lib/python3.7/site-packages/pycroscopy/io/translators/be_odf.py in __build_udvs_table(self, parm_dict)
   1014         BE_amp = parm_dict['BE_amplitude_[V]']
   1015
-> 1016         VS_amp = parm_dict['VS_amplitude_[V]']
   1017         VS_offset = parm_dict['VS_offset_[V]']
   1018         # VS_read_voltage = parm_dict['VS_read_voltage_[V]']

KeyError: 'VS_amplitude_[V]'
```
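Old parameter files from 2010 evidently predate the `VS_amplitude_[V]` key. A minimal sketch of a defensive lookup that tolerates the missing key; the dictionary contents and the fallback default are assumptions for illustration, not the translator's actual behavior:

```python
# parm_dict as parsed from an old parameter file; some keys absent (hypothetical values)
parm_dict = {'BE_amplitude_[V]': 1.0, 'VS_offset_[V]': 0.0}

# dict.get() with a default avoids the KeyError for files that predate the key
VS_amp = parm_dict.get('VS_amplitude_[V]', 0.0)

print(VS_amp)  # 0.0 for this old-style file
```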
Will explore this in more detail.

---

## Issue #215: ImageWindowing Failures (created by ramav87, 2020-04-24)
https://code.ornl.gov/rvv/pycroscopy/-/issues/215
ImageWindowing fails for step sizes greater than 1. This needs to be investigated and fixed.

---

## Issue #214: Guess/Fit functions do not find guess (created by ramav87, 2020-03-19)
https://code.ornl.gov/rvv/pycroscopy/-/issues/214
Guess/Fit dual methods such as SHO or Loop Guess do not correctly find existing guesses, causing errors and needless duplication of data and compute. This needs further digging; it appears to be a problem with files that contain multiple channels, each with main datasets in them.

---

## Issue #210: LabViewh5Patcher fails to translate existing h5 files correctly (created by ramav87, 2020-04-24)
https://code.ornl.gov/rvv/pycroscopy/-/issues/210
Incorrectly throws an error that the main dataset is not a USID dataset.

---

## Issue #207: Igor IBW translator - loss of metadata (created by JulianeWeb, 2019-06-24)
https://code.ornl.gov/rvv/pycroscopy/-/issues/207
Hi!
I am using the Igor IBW translator (IgorIBWTranslator) to convert large amount of AFM data.
The ibw file stores a lot of metadata, which can normally be accessed using e.g. Gwyddion or the Igor software. When converting the files to HDF5 using the IgorIBWTranslator, this metadata is lost and other metadata is added instead. How can I change the code so that the metadata (ScanRate, ScanPoints, ScanLines, ScanSize) is carried over into the hdf5 file?
I would appreciate help with this a lot, thanks!

---

## Issue #205: Error during importing pycroscopy (created by ayfliu, 2019-05-24)
https://code.ornl.gov/rvv/pycroscopy/-/issues/205
I have this error when importing pycroscopy (0.60.3):

```
~\PyMOL\envs\PyEMMA\lib\site-packages\pycroscopy\io\hdf_writer.py in <module>
     14 import h5py
     15
---> 16 from pyUSID.io.hdf_utils import assign_group_index, write_simple_attrs, attempt_reg_ref_build, write_region_references
     17 from .virtual_data import VirtualGroup, VirtualDataset, VirtualData
     18 from ..__version__ import version

ImportError: cannot import name 'attempt_reg_ref_build'
```
I tried to look up attempt_reg_ref_build, but I cannot find it in the pyUSID documentation.
I have pyUSID 0.0.6.1 installed with Python 3.6.7.
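Mismatched pycroscopy/pyUSID versions are the usual cause of a removed-API `ImportError` like this, so comparing the installed version pair is a good first diagnostic. A small standalone helper, not part of either package:

```python
import importlib

def installed_version(pkg):
    """Return a package's __version__, 'unknown' if it has none, or None if import fails."""
    try:
        return getattr(importlib.import_module(pkg), '__version__', 'unknown')
    except ImportError:
        return None

# If one package is much newer than the other, upgrading or downgrading
# them together usually resolves removed-API import errors.
for pkg in ('pycroscopy', 'pyUSID'):
    print(pkg, '->', installed_version(pkg))
```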
---

## Issue #203: Processing functions failing to find existing results (created by ramav87, 2019-04-20)
https://code.ornl.gov/rvv/pycroscopy/-/issues/203
Calling SVD or NMF (or any decomposition method) twice with the same parameters does not return the existing results on the second pass. The note given is:

> Note: Decomposition has already been performed PARTIALLY with the same parameters. compute() will resuming computation in the last group below.

despite no changes to the input parameters and completion of the decomposition. This needs to be remedied.

---

## Issue #199: PIFM and HyperImage Translators (created by ssomnath, 2019-02-21)
https://code.ornl.gov/rvv/pycroscopy/-/issues/199
The base code that makes up the translator already exists [here](https://github.com/rajgiriUW/pifm_translator). Need to finish converting this into a formal pycroscopy translator.

---

## Issue #192: Out of date notebooks (created by DancingQuanta, 2018-10-08)
https://code.ornl.gov/rvv/pycroscopy/-/issues/192
I am trying to follow the Tutorial_04_Interactive_Visualization tutorial, but found that some things are broken because functions used in the notebook have been renamed or have changed functionality.

---

## Issue #183: Loop Fitting on FORC datasets (created by ramav87, 2018-07-05)
https://code.ornl.gov/rvv/pycroscopy/-/issues/183
Loop fitting on FORC datasets produces an error due to a size mismatch between the generated output, which appears to be the length of a particular spectroscopic slice, and the fit h5 object, whose size is predetermined. Editing the fitter class does not appear to be a viable option for fixing this issue.

Error specifics: calling loop_fitter on a FORC dataset (25 position dimensions, 2 field dimensions, 8 FORC dimensions) yields the following error:
```
~/Documents/pycroscopy/pycroscopy/analysis/be_loop_fitter.py in do_fit(self, processors, max_mem, solver_type, solver_options, obj_func, get_loop_parameters, h5_guess)
396
397 self.fit = np.hstack(tuple(results))
--> 398 self._set_results()
399
400 self._start_pos = self._end_pos
~/Documents/pycroscopy/pycroscopy/analysis/fitter.py in _set_results(self, is_guess)
185 print('Writing data to positions: {} to {}'.format(self._start_pos, self._end_pos))
186
--> 187 targ_dset[self._start_pos: self._end_pos, :] = source_dset[:,:]
TypeError: Can't broadcast (25, 2) -> (25, 16)
```

---

## Issue #182: AttributeError in gIV Notebook (help wanted) (created by cuu123, 2018-07-03)
https://code.ornl.gov/rvv/pycroscopy/-/issues/182
The notebook can't run this line:

```python
# Load the raw dataset
h5_path = px.io_utils.uiGetFile('*.h5', 'G-mode IV dataset')
```

```
AttributeError: 'module' object has no attribute 'uiGetFile'
```

Is it possible to use usid instead of px? But `conda install pyUSID` does not seem to work; the message is:

```
PackagesNotFoundError: The following packages are not available from current channels
```

So how can this be solved?
---

## Issue #163: utf-8 problem with NanonisFile (created by donpatrice, 2018-06-06)
https://code.ornl.gov/rvv/pycroscopy/-/issues/163
I have an .sxm file that causes some problems when reading it with NanonisFile:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 67: invalid start byte
```
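The failure is easy to reproduce in isolation: byte 0xb0 is the degree sign in Latin-1, but it is not a valid start byte in UTF-8. A standalone sketch with a hypothetical byte string:

```python
# A header line as it might appear in an .sxm file (hypothetical content)
raw = b'Angle: 45 \xb0'

# Strict decoding fails, just as in the error above
try:
    raw.decode()  # equivalent to raw.decode('utf-8')
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False

# errors='replace' substitutes U+FFFD instead of raising
fixed = raw.decode('utf-8', errors='replace')

print(strict_ok)  # False
```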
I found that the relevant part differs from [nanonispy](https://github.com/underchemist/nanonispy/blob/master/nanonispy/read.py) by:

```python
try:
    entry = line.strip().decode()
except UnicodeDecodeError:
    warnings.warn('{} has non-uft-8 characters, replacing them.'.format(f.name))
    entry = line.strip().decode('utf-8', errors='replace')
```
In particular, these lines are missing in pycroscopy. Is there a reason for this? Adding these lines solves my problem.

---

## Issue #157: Reshape_to_n_dims for non-square images (created by rajgiriUW, 2018-05-26)
https://code.ornl.gov/rvv/pycroscopy/-/issues/157
Posted on Slack, but for tracking.
The issue is that for non-square image files, I think the reshape_to_n_dims function is reordering the data when it shouldn't be, resulting in jagged-looking images (if rows > columns) or an image that is repeated vertically (if columns > rows).
Here's what happens when running this on an example:

```
Position dimensions: ['X' 'Y']
Position sort order: [0 1]
Spectroscopic Dimensions: ['arb']
Spectroscopic sort order: [0]
Position dimensions (sort applied): ['X' 'Y']
Position dimensionality (sort applied): [256, 128]
Spectroscopic dimensions (sort applied): ['arb']
Spectroscopic dimensionality (sort applied): [1]
After first reshape, labels are ['Y' 'X' 'arb']
Data shape is (128, 256, 1)
Axes will permuted in this order: [1 0 2]
New labels ordering: ['X' 'Y' 'arb']
Dataset now of shape: (256, 128, 1)
```
Suhas seems to think the issue is in the line

> Axes will permuted in this order: [1 0 2]

since that is changing the dimensions.
I confirmed in the Igor IBW Translator that the position dimensions are being written correctly. It is possible to correct the Translator to fix this, I think, but I then expect the issue to pop up in other translators.
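The symptom can be reproduced with plain numpy: for a non-square image, reshaping the flattened pixels with the row/column order swapped and then transposing yields a scrambled image rather than the original. A small sketch with assumed toy dimensions:

```python
import numpy as np

rows, cols = 2, 3                       # toy non-square image (Y=2 rows, X=3 columns)
flat = np.arange(rows * cols)           # pixels in row-major (C) order: 0..5

correct = flat.reshape(rows, cols)      # [[0, 1, 2], [3, 4, 5]]
scrambled = flat.reshape(cols, rows).T  # wrong reshape + transpose: [[0, 2, 4], [1, 3, 5]]

# Both end up with the same final shape, which is why the bug is easy to miss:
print(correct.shape == scrambled.shape)    # True
print(np.array_equal(correct, scrambled))  # False
```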
---

## Issue #156: svd_utils.rebuild_svd bug in create_indexed_group (created by rajgiriUW, 2018-05-22)
https://code.ornl.gov/rvv/pycroscopy/-/issues/156
I think this bug was introduced in commit f327da7d98907b9ba09473055320e12ddb04e122, which updated svd_utils. Line 328,

```python
rebuilt_grp = create_indexed_group('Rebuilt_Data', h5_svd_group.name[1:])
```

throws an error, since create_indexed_group requires an h5py group, not a string. I think this change should work (will test):

```python
rebuilt_grp = create_indexed_group(h5_svd_group, 'Rebuilt_Data')
```

---

## Issue #152: svd_rebuild error with get_component_slice returning ndarray (created by rajgiriUW, 2018-05-08)
https://code.ornl.gov/rvv/pycroscopy/-/issues/152
In svd_utils.rebuild_svd,

```python
comp_slice, num_comps = get_component_slice(components, total_components=h5_main.shape[1])
```

will cause an error later in

```python
n_comps = h5_S[comp_slice].size
```

if comp_slice is an ndarray rather than a list. This seems to be an h5py limitation.

---

## Issue #139: Add datatype information to PycroData output (created by CompPhysChris, 2018-04-19)
https://code.ornl.gov/rvv/pycroscopy/-/issues/139
Provide an easy way to access the dtype info, especially field names for compound types.

---

## Issue #137: sdist tarball has wrong contents (created by carlodri, 2018-03-27)
https://code.ornl.gov/rvv/pycroscopy/-/issues/137
The source tarball available on PyPI is not what it should be: it contains a number of nested directories, but it doesn't have the structure of a package source. Is this deliberate?

This is strictly related to #134, since conda-forge is based on `sdist` tarballs.

---

## Issue #134: Upload to conda-forge (created by carlodri, 2018-03-29)
https://code.ornl.gov/rvv/pycroscopy/-/issues/134
I was wondering if you were interested in uploading the package to conda-forge, which, as you probably know, is a widespread and robust package management system. If you agree, I can prepare the feedstock recipe.