pycroscopy issues | https://code.ornl.gov/rvv/pycroscopy/-/issues | updated 2020-12-31T15:17:20Z

Issue #242: KMeans Clustering not writing to file
https://code.ornl.gov/rvv/pycroscopy/-/issues/242 | updated 2020-12-31T15:17:20Z | Vasudevan, Rama K.
*Created by: sulaymandesai*
Hi,
Hope you're well.
I am following this published notebook: https://nbviewer.jupyter.org/github/pycroscopy/papers/blob/master/Notebooks/EM/STEM/Image_Cleaning_Atom_Finding.ipynb
When I try to run the KMeans clustering, I get the following error:
```
num_clusters = 4
# num_clusters = 32
estimator = px.processing.Cluster(h5_U, KMeans(n_clusters=num_clusters), num_comps=num_comps)

if estimator.duplicate_h5_groups == []:
    t0 = time()
    h5_kmeans = estimator.compute()
    print('kMeans took {} seconds.'.format(round(time() - t0, 2)))
else:
    h5_kmeans = estimator.duplicate_h5_groups[-1]
    print('Using existing results.')

print('Clustering results in {}.'.format(h5_kmeans.name))

half_wind = int(win_size * 0.5)
# generate a cropped image that was effectively the area used for pattern searching
# Need to get the math right on the counting
cropped_clean_image = clean_image_mat[half_wind:-half_wind + 1, half_wind:-half_wind + 1]

# Plot cluster results - get the labels dataset
labels_mat = np.reshape(h5_kmeans['Labels'][()], [num_rows, num_cols])

fig, axes = plt.subplots(ncols=2, figsize=(14, 7))
axes[0].imshow(cropped_clean_image, cmap=spiepy.NANOMAP, origin='lower')
axes[0].set_title('Cleaned Image', fontsize=16)
axes[1].imshow(labels_mat, aspect=1, interpolation='none', cmap=spiepy.NANOMAP, origin='lower')
axes[1].set_title('K-means cluster labels', fontsize=16)

for axis in axes:
    axis.get_yaxis().set_visible(False)
    axis.get_xaxis().set_visible(False)

usid.jupyter_utils.save_fig_filebox_button(fig, 'Clustered_Clean_Image.png')
```
```
Consider calling test() to check results before calling compute() which computes on the entire dataset and writes results to the HDF5 file
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_000" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_001" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_002" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Performing clustering on /Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U.
Took 5.76 sec to compute KMeans
Calculated the Mean Response of each cluster.
Took 340.1 msec to calculate mean response per cluster
Writing clustering results to file.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-35-6b9a66d30096> in <module>
7 if estimator.duplicate_h5_groups==[]:
8 t0 = time()
----> 9 h5_kmeans = estimator.compute()
10 print('kMeans took {} seconds.'.format(round(time()-t0, 2)))
11 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in compute(self, rearrange_clusters, override)
226
227 if self.h5_results_grp is None:
--> 228 h5_group = self._write_results_chunk()
229 self.delete_results()
230 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in _write_results_chunk(self)
282 h5_cluster_group = create_results_group(self.h5_main, self.process_name,
283 h5_parent_group=self._h5_target_group)
--> 284 self._write_source_dset_provenance()
285
286 write_simple_attrs(h5_cluster_group, self.parms_dict)
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pyUSID/processing/process.py in _write_source_dset_provenance(self)
793
794 @staticmethod
--> 795 def _map_function(*args, **kwargs):
796 """
797 The function that manipulates the data on a single instance (position). This will be used by
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/sidpy/hdf/hdf_utils.py in write_simple_attrs(h5_obj, attrs, verbose)
371 '{}'.format(type(attrs)))
372 if not isinstance(h5_obj, (h5py.File, h5py.Group, h5py.Dataset)):
--> 373 raise TypeError('h5_obj should be a h5py File, Group or Dataset object'
374 ' but is instead of type '
375 '{}t'.format(type(h5_obj)))
TypeError: h5_obj should be a h5py File, Group or Dataset object but is instead of type <class 'NoneType'>t
```
Any help would be appreciated!

Issue #239: SVD error
https://code.ornl.gov/rvv/pycroscopy/-/issues/239 | updated 2020-12-20T20:58:51Z | Vasudevan, Rama K.
*Created by: sulaymandesai*
Hi,
I have been following the example notebooks on this GitHub page to perform SVD. I get the following error:
```
1 decomposer = px.processing.svd_utils.SVD(h5_main, num_components=100)
----> 2 h5_svd_group = decomposer.compute()
3
4 h5_u = h5_svd_group['U']
5 h5_v = h5_svd_group['V']
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy/processing/svd_utils.py in compute(self, override)
161 """
162 if self.__u is None and self.__v is None and self.__s is None:
--> 163 self.test(override=override)
164
165 if self.h5_results_grp is None:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy/processing/svd_utils.py in test(self, override)
137 raise ValueError('Could not reshape U to N-Dimensional dataset! Error:' + success)
138
--> 139 v_mat, success = reshape_to_n_dims(self.__v, h5_pos=np.expand_dims(np.arange(self.__u.shape[1]), axis=1),
140 h5_spec=self.h5_main.h5_spec_inds)
141 if not success:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pyUSID/io/hdf_utils/model.py in reshape_to_n_dims(h5_main, h5_pos, h5_spec, get_labels, verbose, sort_dims, lazy)
84 else:
85 if not isinstance(h5_main, (h5py.Dataset, np.ndarray, da.core.Array)):
---> 86 raise TypeError('h5_main should either be a h5py.Dataset or numpy array')
87
88 if h5_pos is not None:
TypeError: h5_main should either be a h5py.Dataset or numpy array
```
Any help would be appreciated!

Issue #233: Automate notebook tests?
https://code.ornl.gov/rvv/pycroscopy/-/issues/233 | updated 2020-12-13T16:28:02Z | Vasudevan, Rama K.
*Created by: alex-treebeard*
Hey there,
Was just checking out your project and thought it may be a good candidate for [treebeard](https://github.com/treebeardtech/treebeard)
Happy to help getting set up if you are interested, otherwise feel free to close 👍

Issue #238: Generic utility list
https://code.ornl.gov/rvv/pycroscopy/-/issues/238 | updated 2020-08-21T20:55:16Z | Mukherjee, Debangshu (mukherjeed@ornl.gov)

@ssomnath @ramav87 I am starting to make a list of domain-agnostic tools that should find a home in pycroscopy:
* Functional fits for spectra
* 2D Gaussian fitting (sketched below)
* Hybrid cross-correlation
* Scan drift correction
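For illustration, here is a minimal sketch of what the 2D Gaussian fitting item could look like as a generic utility. The names gauss_2d and fit_gauss_2d are hypothetical, not existing pycroscopy functions, and the fit uses scipy.optimize.curve_fit on a flattened image.

```python
# Sketch only: a generic single-peak 2D Gaussian fit. gauss_2d / fit_gauss_2d are
# hypothetical names for the kind of utility proposed in the list above.
import numpy as np
from scipy.optimize import curve_fit


def gauss_2d(xy, amp, x0, y0, sigma_x, sigma_y, offset):
    """Elliptical 2D Gaussian evaluated on (x, y) coordinate grids, returned flattened."""
    x, y = xy
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sigma_x ** 2) +
                       (y - y0) ** 2 / (2 * sigma_y ** 2))) + offset
    return g.ravel()


def fit_gauss_2d(image):
    """Fit a single 2D Gaussian to an image and return the optimal parameters."""
    y, x = np.indices(image.shape)
    guess = [image.max() - image.min(),               # amplitude
             image.shape[1] / 2, image.shape[0] / 2,  # center x, y
             image.shape[1] / 4, image.shape[0] / 4,  # widths
             image.min()]                             # offset
    popt, _ = curve_fit(gauss_2d, (x, y), image.ravel(), p0=guess)
    return popt
```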
Issue #237: Translator issue with pyUSID update
https://code.ornl.gov/rvv/pycroscopy/-/issues/237 | updated 2020-08-18T16:02:24Z | Vasudevan, Rama K.
*Created by: nccreang*

When running the FakeBEPSGenerator translator, a KeyError arises: "Can't open attribute (can't locate attribute: 'DC_Offset')".

Issue #234: Labview H5 Patcher
https://code.ornl.gov/rvv/pycroscopy/-/issues/234 | updated 2020-08-07T18:39:48Z | Vasudevan, Rama K.
*Created by: ramav87*
The Labview H5 patcher currently looks through spec_dim_labels to gauge how many spectroscopic dimensions there are. This is better determined by looking at the size of the spectroscopic_values dataset instead. Some of the acquisition software mistakenly adds labels to non-existent spectroscopic dimensions, causing translation bugs.

Issue #223: Need function to get all main NSID and USID datasets
https://code.ornl.gov/rvv/pycroscopy/-/issues/223 | updated 2020-06-12T22:01:49Z | Vasudevan, Rama K.
*Created by: ssomnath*

Issue #222: Electron microscopy translator suite
https://code.ornl.gov/rvv/pycroscopy/-/issues/222 | updated 2020-05-15T20:27:20Z | Vasudevan, Rama K.
*Created by: ramav87*
Currently, we have a mix of translators that are rather poor for electron microscopy files. A suite of new translators is needed to clean up the problem. Specific classes will be developed for each of the following:
1) Nion (single images are in ndata; multidimensional data goes to h5 otherwise).
2) Digital Micrograph
3) FEI
4) EMD (Berkeley)
At some point, SEM should also be incorporated, along with atom probe tomography. A lot of these functions are available in pyTEMlib (https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/dm3lib_v1_0b.py and https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/file_tools.py).

Issue #220: BEOdfTranslator failure
https://code.ornl.gov/rvv/pycroscopy/-/issues/220 | updated 2020-05-15T19:44:20Z | Vasudevan, Rama K.
*Created by: ramav87*
The BE ODF translator is unable to translate certain old files collected in 2010. This error was found for nonlinearity measurements:
> ---------------------------------------------------------------------------
> KeyError Traceback (most recent call last)
> <ipython-input-6-6c7fce848344> in <module>
> 4 translator = px.translators.BEodfTranslator()
> 5
> ----> 6 translator.translate(file_path)
>
> //anaconda3/lib/python3.7/site-packages/pycroscopy/io/translators/be_odf.py in translate(self, file_path, show_plots, save_plots, do_histogram, verbose)
> 251
> 252 if isBEPS:
> --> 253 (UDVS_labs, UDVS_units, UDVS_mat) = self.__build_udvs_table(parm_dict)
> 254
> 255 # Remove the unused plot group columns before proceeding:
>
> //anaconda3/lib/python3.7/site-packages/pycroscopy/io/translators/be_odf.py in __build_udvs_table(self, parm_dict)
> 1014 BE_amp = parm_dict['BE_amplitude_[V]']
> 1015
> -> 1016 VS_amp = parm_dict['VS_amplitude_[V]']
> 1017 VS_offset = parm_dict['VS_offset_[V]']
> 1018 # VS_read_voltage = parm_dict['VS_read_voltage_[V]']
>
> KeyError: 'VS_amplitude_[V]'
Will explore this in more detail.

Issue #221: BEPS notebook only works on first measurement group
https://code.ornl.gov/rvv/pycroscopy/-/issues/221 | updated 2020-05-15T18:59:44Z | Vasudevan, Rama K.
*Created by: ssomnath*
Every time a user changes a measurement parameter during a BE experiment, all subsequent data are written out to a different measurement group and corresponding HDF5 dataset. The BE notebook currently performs fitting and visualization only on the data contained in the first measurement group.
Instead, the notebook should iterate through all available datasets and perform the same operations on them (a minimal sketch of such a loop is below).
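A rough illustration of that change, assuming pyUSID's hdf_utils.get_all_main() is available; process_measurement() is a hypothetical placeholder for the notebook's existing fitting and visualization cells, not part of pycroscopy:

```python
# Sketch only: loop over every USID main dataset in the file instead of just the first.
# get_all_main() is a pyUSID utility that returns all main datasets in a file;
# process_measurement() stands in for the SHO/loop fitting and plotting steps.
import h5py
import pyUSID as usid


def process_measurement(h5_main):
    # Placeholder for the notebook's fitting and visualization on one dataset
    print('Would fit and plot {} of shape {}'.format(h5_main.name, h5_main.shape))


with h5py.File('beps_measurement.h5', mode='r+') as h5_file:
    for h5_main in usid.hdf_utils.get_all_main(h5_file):
        process_measurement(h5_main)
```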
Issue #219: Image shifting and saving as new dset
https://code.ornl.gov/rvv/pycroscopy/-/issues/219 | updated 2020-05-12T02:09:55Z | Vasudevan, Rama K.
*Created by: rajgiriUW*
This came up when trying to align some images, but I didn't see anything already that resaves the shifted data. It's very simple but maybe I can just append to the old notebook about registration on the site? Or some simple function that just:
a) shifts the array by a specified amount
b) visualizes pre- and post-shifting
c) creates a results group ("Shifted")
d) writes using the pos/spec of the original dataset.

A rough sketch of such a function is below.
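This is a minimal sketch, not an existing pycroscopy API: shift_and_save_image and its (dy, dx) arguments are hypothetical, scipy is assumed for the shift, and the write_main_dataset call that reuses the parent's position/spectroscopic datasets should be checked against the installed pyUSID version.

```python
# Illustrative sketch only. Assumes h5_main is a pyUSID USIDataset holding an image
# and image_2d is its 2D numpy form.
import matplotlib.pyplot as plt
from scipy.ndimage import shift as nd_shift
import pyUSID as usid


def shift_and_save_image(h5_main, image_2d, dy, dx, show_plots=True):
    # a) shift the array by the specified amount
    shifted_2d = nd_shift(image_2d, (dy, dx), order=1, mode='nearest')

    # b) visualize pre- and post-shifting
    if show_plots:
        fig, axes = plt.subplots(ncols=2, figsize=(10, 5))
        axes[0].imshow(image_2d, origin='lower')
        axes[0].set_title('Original')
        axes[1].imshow(shifted_2d, origin='lower')
        axes[1].set_title('Shifted')

    # c) create a results group ("Shifted")
    h5_grp = usid.hdf_utils.create_results_group(h5_main, 'Shifted')

    # d) write the shifted data, reusing the pos/spec ancillary datasets of the original
    h5_shifted = usid.hdf_utils.write_main_dataset(
        h5_grp, shifted_2d.reshape(h5_main.shape), 'Shifted_Data',
        usid.hdf_utils.get_attr(h5_main, 'quantity'),
        usid.hdf_utils.get_attr(h5_main, 'units'),
        None, None,
        h5_pos_inds=h5_main.h5_pos_inds, h5_pos_vals=h5_main.h5_pos_vals,
        h5_spec_inds=h5_main.h5_spec_inds, h5_spec_vals=h5_main.h5_spec_vals)
    return h5_shifted
```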
Issue #218: Investigate VisPy as a solution to visualization
https://code.ornl.gov/rvv/pycroscopy/-/issues/218 | updated 2020-05-08T20:57:12Z | Vasudevan, Rama K.
*Created by: ramav87*
Visualization is an issue for large datasets; look into VisPy as a solution.

Issue #215: ImageWindowing Failures
https://code.ornl.gov/rvv/pycroscopy/-/issues/215 | updated 2020-04-24T20:11:14Z | Vasudevan, Rama K.
*Created by: ramav87*
ImageWindowing fails for step sizes greater than 1. This needs to be investigated and fixed.

Issue #210: LabViewh5Patcher fails to translate existing h5 files correctly
https://code.ornl.gov/rvv/pycroscopy/-/issues/210 | updated 2020-04-24T20:07:31Z | Vasudevan, Rama K.
*Created by: ramav87*
Incorrectly throws an error that the main dataset is not a USID dataset.

Issue #214: Guess/Fit functions do not find guess
https://code.ornl.gov/rvv/pycroscopy/-/issues/214 | updated 2020-03-19T14:57:24Z | Vasudevan, Rama K.
*Created by: ramav87*
Guess/Fit dual methods such as SHO or Loop Guess do not correctly find existing guesses, causing errors. This causes needless duplication of data and compute. Need to dig further. This appears to be a problem when dealing with files containing multiple channels with main datasets in them.

Issue #22: Generic data visualizer
https://code.ornl.gov/rvv/pycroscopy/-/issues/22 | updated 2019-10-15T18:20:30Z | Vasudevan, Rama K.
*Created by: ssomnath*
Should take any MAIN dataset regardless of the dimensionality. Core components implemented in viz.jupyter_utils
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/200029249765524/202079868929510)
Issue #207: Igor IBW translator - loss of metadata
https://code.ornl.gov/rvv/pycroscopy/-/issues/207 | updated 2019-06-24T12:55:44Z | Vasudevan, Rama K.
*Created by: JulianeWeb*
Hi!
I am using the Igor IBW translator (IgorIBWTranslator) to convert a large amount of AFM data.
The ibw file stores a lot of metadata, which can normally be accessed using e.g. Gwyddion or the Igor software. When converting the files into hdf5 using the IgorIBWTranslator, this metadata is lost and other metadata is added instead. How can I change the code so that the metadata (ScanRate, ScanPoints, ScanLines, ScanSize) is added to the hdf5 file?
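For illustration, and not as a change to the translator itself, here is a hedged sketch of one way to pull those entries out of the ibw wave note and attach them to the translated file afterwards. It assumes the igor package's binarywave loader, a 'Key: value' wave-note layout (typical of Asylum Research files), and the 'Measurement_000' group name produced by the translator; all of these are assumptions to verify.

```python
# Sketch only: copy selected wave-note entries from the .ibw file into the translated
# h5 file as attributes. The file paths and the note parsing are assumptions.
import h5py
from igor import binarywave

ibw_path = 'my_scan.ibw'   # hypothetical input file
h5_path = 'my_scan.h5'     # file produced by IgorIBWTranslator
wanted = ('ScanRate', 'ScanPoints', 'ScanLines', 'ScanSize')

ibw_obj = binarywave.load(ibw_path)
note = ibw_obj['wave']['note'].decode('utf-8', errors='ignore')

# Parse 'Key: value' lines from the wave note
parms = {}
for line in note.replace('\r', '\n').split('\n'):
    if ':' in line:
        key, _, value = line.partition(':')
        parms[key.strip()] = value.strip()

with h5py.File(h5_path, mode='r+') as h5_f:
    meas_grp = h5_f['Measurement_000']   # group name assumed from the translator's layout
    for key in wanted:
        if key in parms:
            meas_grp.attrs[key] = parms[key]
```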
I would appreciate help with this a lot, thanks!

Issue #206: Separate translator(s) for DM3 and DM4 files
https://code.ornl.gov/rvv/pycroscopy/-/issues/206 | updated 2019-06-06T15:38:58Z | Vasudevan, Rama K.
*Created by: ssomnath*
Currently, DM3 and DM4 translation is being managed by the image, time series, movie, and image stack translators. These translators were originally designed to read multiple file formats given the similarities in the operations. However, it is not clear to the end user which translator to use given a DM3/4 file. Perhaps the common elements in these translators could be reused or moved into static functions outside a translator class so that they can be shared across translators.
This change will be very important when attempting to build a look-up table that automates the translation process based on file extensions or signatures within the header. Such a feature would be the foundation both for a high-level "load()" function and for a pipeline that connects (offline) instruments to data facilities (a rough sketch of such a look-up table is below).
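A minimal sketch of what that look-up table and load() wrapper could look like; the registry contents, the translator classes listed, and the translate() call are illustrative assumptions rather than the existing pycroscopy API:

```python
# Sketch only: map file extensions to translator classes and dispatch from one load().
# Real dispatch would also need header signatures for extensions (like .h5) that
# several instruments share.
import os
import pycroscopy as px

TRANSLATOR_REGISTRY = {
    '.ibw': px.io.translators.IgorIBWTranslator,
    '.dm3': px.io.translators.ImageTranslator,   # assumed handler for DM3 images
    '.dm4': px.io.translators.ImageTranslator,   # assumed handler for DM4 images
}


def load(file_path, **kwargs):
    """Translate file_path with whichever translator matches its extension."""
    ext = os.path.splitext(file_path)[1].lower()
    try:
        translator_class = TRANSLATOR_REGISTRY[ext]
    except KeyError:
        raise ValueError('No translator registered for "{}" files'.format(ext))
    return translator_class().translate(file_path, **kwargs)
```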
Issue #205: Error during Importing pycroscopy
https://code.ornl.gov/rvv/pycroscopy/-/issues/205 | updated 2019-05-24T14:21:19Z | Vasudevan, Rama K.
*Created by: ayfliu*
I get this error when importing pycroscopy (0.60.3):
~\PyMOL\envs\PyEMMA\lib\site-packages\pycroscopy\io\hdf_writer.py in <module>
14 import h5py
15
---> 16 from pyUSID.io.hdf_utils import assign_group_index, write_simple_attrs, attempt_reg_ref_build, write_region_references
17 from .virtual_data import VirtualGroup, VirtualDataset, VirtualData
18 from ..__version__ import version
ImportError: cannot import name 'attempt_reg_ref_build'
I tried to look up attempt_reg_ref_build but I cannot find it in the pyUSID documentation.
I have pyUSID 0.0.6.1 installed with Python 3.6.7.
Issue #204: Importing nanoscope 9.4 files fails
https://code.ornl.gov/rvv/pycroscopy/-/issues/204 | updated 2019-05-16T14:50:16Z | Vasudevan, Rama K.
*Created by: flounderscore*
I cannot import a file created with NanoScope 9.4. The error message is:
> File "...translators\bruker_afm.py", line 321, in _read_image_layer
> data_mat = data_vec.reshape(layer_info['Number of lines'], layer_info['Samps/line'])
> ValueError: cannot reshape array of size 524288 into shape (512,512)
The issue appears to be a bug in the NanoScope software >= 9.2 where all data is 4 bytes per pixel even though the header says otherwise.
See line 391 in https://sourceforge.net/p/gwyddion/code/HEAD/tree/trunk/gwyddion/modules/file/nanoscope.c#l31
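A hedged sketch of one possible workaround for the reshape in bruker_afm.py, assuming the layer was read as 2-byte integers while NanoScope >= 9.2 actually wrote 4 bytes per pixel; the int16-to-int32 reinterpretation via numpy's view() is an assumption that should be checked against Gwyddion's nanoscope.c before relying on it:

```python
# Sketch only, not a confirmed fix. The helper tolerates the 4-bytes-per-pixel quirk
# by reinterpreting pairs of little-endian int16 values as one int32 per pixel.
import numpy as np


def reshape_layer(data_vec, n_rows, n_cols):
    """Reshape a raw NanoScope layer vector to (n_rows, n_cols)."""
    expected = n_rows * n_cols
    if data_vec.size == 2 * expected and data_vec.dtype == np.int16:
        # Two int16 values per pixel: reinterpret the buffer as one int32 per pixel
        data_vec = data_vec.view(np.int32)
    return data_vec[:expected].reshape(n_rows, n_cols)


# Example with synthetic data: a 512 x 512 image stored as 524288 int16 values
fake_vec = np.zeros(524288, dtype=np.int16)
print(reshape_layer(fake_vec, 512, 512).shape)   # -> (512, 512)
```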