pycroscopy issues (https://code.ornl.gov/rvv/pycroscopy/-/issues)

https://code.ornl.gov/rvv/pycroscopy/-/issues/160
NanonisTranslator cannot translate sxm file (2018-06-01, reported by donpatrice)
I have an .sxm file that we would like to read and translate with pycroscopy, but this does not work.
```
>>> nt = NanonisTranslator(sxm_file)
>>> nt.translate()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-240-1db0a86297c7> in <module>()
----> 1 nt.translate()
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in translate(self, data_channels, verbose)
71 """
72 if self.parm_dict is None or self.data_dict is None:
---> 73 self._read_data(self.data_path)
74
75 if data_channels is None:
~/ownCloudFHI/FHI/github.molgen.mpg.de/saasmi/saasmi/.venv/lib/python3.5/site-packages/pycroscopy/io/translators/nanonis.py in _read_data(self, grid_file_path)
153
154 parm_dict = dict()
--> 155 for key, parm_grid in zip(header_dict['fixed_parameters'] + header_dict['experimental_parameters'],
156 signal_dict['params'].T):
157 parm_dict[key] = parm_grid
KeyError: 'fixed_parameters'
```
The problem seems to be the missing key 'fixed_parameters' in the header_dict.
```
>>> from pycroscopy.io.translators.df_utils.nanonispy.read import Scan
>>> s = Scan(sxm_file)
>>> s.header.keys()
dict_keys(['scan_pixels', 'current>offset (a)', 'scan_time', 'current>gain', 'z-controller>i gain', 'scan_range', 'data_info', 'current>current (a)', 'z-controller>tiplift (m)', 'z-controller', 'current>calibration (a/v)', 'z-controller>controller name', 'scan_file', 'z-controller>switch off delay (s)', 'z-controller>z (m)', 'z-controller>controller status', 'z-controller>time const (s)', 'rec_time', 'scan_offset', 'z-controller>setpoint', 'rec_temp', 'nanonis_version', 'acq_time', 'scanit_type', 'z-controller>setpoint unit', 'z-controller>p gain', 'bias', 'comment', 'scan_dir', 'scan_angle', 'rec_date'])
```
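The KeyError hints that the translator went down the grid-file code path: in nanonispy, 'fixed_parameters' and 'experimental_parameters' appear in .3ds Grid headers, while .sxm Scan headers (as listed above) do not contain them. A hedged sketch of the kind of guard the translator could apply before assuming grid data (the function name is hypothetical):

```
import os

def nanonis_file_kind(file_path, header_keys):
    """Guess which nanonispy reader fits, from extension and header keys."""
    ext = os.path.splitext(file_path)[1].lower()
    if ext == '.3ds' or {'fixed_parameters', 'experimental_parameters'} <= set(header_keys):
        return 'grid'   # what translate() currently assumes
    if ext == '.sxm' or 'scan_pixels' in header_keys:
        return 'scan'   # image data; needs a different code path
    return 'unknown'
```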
How can I fix this issue?
Thanks a lot.

https://code.ornl.gov/rvv/pycroscopy/-/issues/167
loop_fitter missing checks for existing results (2018-06-11, reported by ramav87)
The do_guess and do_fit methods from BELoopFitter are missing the override options that have been added to SHOFitter. This causes an error in the latest BE processing notebook.
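For reference, a minimal sketch of the override pattern in question (the signature and names are illustrative, not the actual SHOFitter API):

```
def do_fit(existing_results, override=False, compute=lambda: 'fresh fit'):
    """Reuse previously written results unless the caller overrides."""
    if existing_results and not override:
        return existing_results[-1]  # reuse the most recent matching group
    return compute()  # recompute and (in the real class) write a new group
```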

https://code.ornl.gov/rvv/pycroscopy/-/issues/168
Implement a way to update existing data to new version requirements (2018-06-11, reported by CompPhysChris)
We try to keep updates from breaking backwards compatibility, but some of our new requirements have resulted in results from previous versions no longer being valid. We need to implement a version check and automatic updating system to correct for any changes in the code.
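A sketch of what such sequential updating could look like (the version numbers and step functions are made up for illustration):

```
# Each migration step is registered against the version it upgrades *from*;
# steps are applied in order from the file's recorded version to the current one.
MIGRATIONS = {}  # {from_version: (to_version, function)}

def migration(from_version, to_version):
    def register(func):
        MIGRATIONS[from_version] = (to_version, func)
        return func
    return register

@migration('0.1', '0.2')
def rename_attrs(data):
    data['renamed'] = True
    return data

@migration('0.2', '0.3')
def add_units(data):
    data['units'] = 'a.u.'
    return data

def update_file(data, file_version, current_version='0.3'):
    version = file_version
    while version != current_version:
        to_version, func = MIGRATIONS[version]
        data = func(data)
        version = to_version
    return data
```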
These updates need to be tied to a specific version and will be applied sequentially from the version of the file to the current version.

https://code.ornl.gov/rvv/pycroscopy/-/issues/169
Request for interactive visualization tool for Dataset results from Process Class (2018-06-15, reported by nmosto)
I'm putting in a request for an interactive visualization tool for dataset results from the Process class.
What I was envisioning was an interactive spatial map on the left-hand side (LHS), where you can specify a parameter to visualize (such as max amplitude). The right-hand side (RHS) would inherit all of the data associated with a pixel chosen on the LHS, so one can do whatever they want with it, such as applying new functions or making new plots.
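The pixel-to-data linkage underlying that idea can be sketched without any GUI code (names hypothetical); in an actual tool, pixel_spectrum would be wired to a matplotlib button_press_event callback that redraws the RHS axes:

```
import numpy as np

def pixel_spectrum(data_cube, row, col):
    """RHS payload: the full spectral vector at one spatial pixel."""
    return data_cube[row, col, :]

rng = np.random.default_rng(0)
data_cube = rng.random((10, 10, 64))   # (rows, cols, spectral points)
param_map = data_cube.max(axis=-1)     # LHS: e.g. max amplitude per pixel
```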
For example, the LHS could show a spatial map of a fitting parameter, and the RHS could plot the fit using that parameter over the raw data to check what happened.

https://code.ornl.gov/rvv/pycroscopy/-/issues/194
Rename PtychographyTranslator to something more generic (2018-11-14, reported by ssomnath)
How about GridofImagesTranslator?
Reminders:
1. Leave "a" PtychographyTranslator in the same location for legacy reasons but add a DeprecationWarning pointing towards the newly named translator.
2. Check existing notebooks to ensure that they are updated as well.

https://code.ornl.gov/rvv/pycroscopy/-/issues/195
Record pycroscopy version (2018-11-15, reported by CompPhysChris)
We record the pyUSID version that was used to create the HDF5 objects when they are written to file. We need to add a similar feature to the pycroscopy Process and Analysis classes.
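A minimal sketch of the bookkeeping (the attribute naming is an assumption; a real implementation would write these via sidpy's write_simple_attrs, seen later in this file):

```
import sys

def stamp_versions(attrs, tool_versions):
    """Record tool versions on a results group.

    `attrs` stands in for an h5py attribute manager; any mapping works here.
    """
    for package, version in tool_versions.items():
        attrs['{}_version'.format(package)] = version
    attrs['python_version'] = sys.version.split()[0]
    return attrs
```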

https://code.ornl.gov/rvv/pycroscopy/-/issues/196
PtychographyTranslator read_dm3 issue (2018-11-20, reported by kbschliep)
I am having an issue using PtychographyTranslator.
In the translate() definition within Ptychography.py, image_path is documented as the absolute path to the **folder** holding the image files, yet when image_type == '.dm3' it calls read_dm3(image_path).[1] In read_dm3(),[2] image_path is expected to be the path to a single image file, so issues occur. I've copy-pasted the relevant code I'm citing below.
1 - translate() code:
```
Parameters
----------
image_path : str
    Absolute path to folder holding the image files
```
2 - read_dm3() code:
```
def read_dm3(image_path, get_parms=True):
    """
    Read an image from a dm3 file into a numpy array

    image_path : str
        Path to the image file
    """
```
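Until the two are reconciled, a workaround sketch (hypothetical helper, not part of pycroscopy) is to enumerate the .dm3 files yourself and hand read_dm3() one file at a time:

```
import os

def list_dm3_files(folder):
    """Sorted absolute paths of every .dm3 file directly inside a folder."""
    return sorted(os.path.join(folder, name)
                  for name in os.listdir(folder)
                  if name.lower().endswith('.dm3'))
```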
In the end I'm just trying to use the PtychographyTranslation to convert a folder of .dm3 to an hdf5.
This is how I am currently using it (let me know if I'm missing something):
## Example

```
from tkinter import Tk, filedialog
# import path may vary by pycroscopy version
from pycroscopy.io.translators import PtychographyTranslator

root = Tk()  # Tk() is a function in tkinter that opens a window
root.directory = filedialog.askdirectory()  # opens an explorer window so you can find the folder of choice
Data_path = root.directory  # returns folder location, e.g. 'C:\\Users\\kbs1\\Documents\\Test'
hdf5_filename = 'Test.hdf5'
hdf5_path = Data_path + '/' + hdf5_filename  # returns 'C:\\Users\\kbs1\\Documents\\Test\\Test.hdf5'
tran = PtychographyTranslator()
test_data = tran.translate(hdf5_path, Data_path, image_type='.dm3')
```

https://code.ornl.gov/rvv/pycroscopy/-/issues/198
Generalized Image Stack translator class (2018-11-27, reported by ssomnath)
The current PtychographyTranslator, MovieTranslator, OneViewTranslator, ImageTranslator share a fair number of pieces in common. Perhaps a generalized class could be created that would be able to:
1. Maximize code reuse
2. Minimize effort required to write variants of an image stack translator
Also, Dask may come in handy when reading numerous image files, pre-processing them in memory (e.g., binning), and writing the data into HDF5 datasets.
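A rough sketch of the shape such a generalized class might take (all names are hypothetical):

```
class ImageStackTranslatorBase:
    """Common read-loop; subclasses supply only the per-file decoder."""

    def read_image(self, path):
        raise NotImplementedError  # format-specific: .dm3, .png, OneView, ...

    def preprocess(self, image):
        return image  # hook for e.g. binning before the HDF5 write

    def translate(self, paths):
        # shared pipeline; a real version would write each result to HDF5
        return [self.preprocess(self.read_image(p)) for p in paths]


class DummyTranslator(ImageStackTranslatorBase):
    def read_image(self, path):
        return ('decoded', path)  # stand-in for an actual image reader
```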
Related to: #197, #196, #194

https://code.ornl.gov/rvv/pycroscopy/-/issues/204
Importing NanoScope 9.4 files fails (2019-05-16, reported by flounderscore)
I cannot import a file created with NanoScope 9.4. The error message is:
> File "...translators\bruker_afm.py", line 321, in _read_image_layer
> data_mat = data_vec.reshape(layer_info['Number of lines'], layer_info['Samps/line'])
> ValueError: cannot reshape array of size 524288 into shape (512,512)
The issue appears to be a bug in the NanoScope software >= 9.2 where all data is 4 bytes per pixel even though the header says otherwise.
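A hedged sketch of the workaround implied by the Gwyddion source (names hypothetical): trust the buffer length, not the header, when deciding bytes per pixel:

```
import numpy as np

def read_image_layer(raw_bytes, n_rows, n_cols, header_bpp=2):
    """Reshape a raw layer, overriding the header's bytes-per-pixel when the
    actual buffer length implies a different (e.g. 4-byte) element size."""
    implied_bpp = len(raw_bytes) // (n_rows * n_cols)
    bpp = implied_bpp if implied_bpp in (1, 2, 4, 8) else header_bpp
    dtype = {1: np.int8, 2: np.int16, 4: np.int32, 8: np.int64}[bpp]
    return np.frombuffer(raw_bytes, dtype=dtype).reshape(n_rows, n_cols)
```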
See line 391 in https://sourceforge.net/p/gwyddion/code/HEAD/tree/trunk/gwyddion/modules/file/nanoscope.c#l31

https://code.ornl.gov/rvv/pycroscopy/-/issues/206
Separate translator(s) for DM3 and DM4 files (2019-06-06, reported by ssomnath)
Currently, DM3 and DM4 translation is being managed by the image, time series, movie, image stack translators. These translators were originally designed to read multiple file formats given the similarities in the operations. However, it is not clear to the end user as to which translator to use given a DM3/4 file. Perhaps the common elements in these translators could be reused or moved into static functions outside a translator class so that they can be shared across translators.
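As an illustration of that refactor (the parsing details here are invented, not the real DM3 format): the shared piece becomes a plain function any translator can call:

```
def parse_header_fields(raw_header):
    """Toy stand-in for shared header-parsing logic pulled out of the
    image/movie/stack translator classes."""
    return dict(item.split('=', 1)
                for item in raw_header.split(';')
                if '=' in item)
```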
This change will be very important when attempting to build a look-up table that automates the translation process based on file extensions or signatures within the header. Such a feature would be the foundation for both a high-level "load()" function as well as a pipeline that connects (offline) instruments to data facilities.

https://code.ornl.gov/rvv/pycroscopy/-/issues/218
Investigate VisPy as a solution to visualization (2020-05-08, reported by ramav87)
Visualization is an issue for large datasets; look into VisPy as a solution.

https://code.ornl.gov/rvv/pycroscopy/-/issues/219
Image shifting and saving as new dset (2020-05-12, reported by rajgiriUW)
This came up when trying to align some images, but I didn't see anything already that resaves the shifted data. It's very simple but maybe I can just append to the old notebook about registration on the site? Or some simple function that just:
a) shifts the array by a specified amount,
b) visualizes pre- and post-shifting,
c) creates a results group ("Shifted"), and
d) writes using the pos/spec of the original dataset.
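Step (a) is a one-liner sketch (np.roll wraps around at the edges; a real implementation might pad with NaNs instead):

```
import numpy as np

def shift_image(image, d_row, d_col):
    """Shift an image by whole-pixel amounts (wraps at the edges)."""
    return np.roll(image, (d_row, d_col), axis=(0, 1))
```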

https://code.ornl.gov/rvv/pycroscopy/-/issues/221
BEPS notebook only works on first measurement group (2020-05-15, reported by ssomnath)
Every time a user changes a measurement parameter during a BE experiment, all subsequent data are written out to a different measurement group and corresponding HDF5 dataset. The BE notebook currently performs fitting and visualization only on the data contained in the first measurement group.
Instead, the notebook should iterate through all available datasets and perform the same operations on them.
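The iteration itself could look something like this sketch (group-name prefix follows the "Measurement_000" convention seen elsewhere in these files; the helper name is hypothetical):

```
def iter_measurement_groups(h5_file):
    """Yield every 'Measurement_*' group, not just Measurement_000.

    Works on an h5py.File or any mapping with the same interface.
    """
    for name in sorted(h5_file.keys()):
        if name.startswith('Measurement_'):
            yield h5_file[name]
```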

https://code.ornl.gov/rvv/pycroscopy/-/issues/222
Electron microscopy translator suite (2020-05-15, reported by ramav87)
Currently, we have a mix of translators that are rather poor for electron microscopy files. We need to create a suite of new translators to clean up the problem. Specific classes will be developed for each of the following:
1) Nion (single images are in ndata, multidimensional goes to h5, otherwise).
2) Digital Micrograph
3) FEI
4) EMD (Berkeley)
At some point, SEM should also be incorporated, along with atom probe tomography. A lot of these functions are available in pyTEMlib (https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/dm3lib_v1_0b.py and https://github.com/gduscher/pyTEMlib/blob/master/pyTEMlib/file_tools.py).

https://code.ornl.gov/rvv/pycroscopy/-/issues/223
Need function to get all main NSID and USID datasets (2020-06-12, reported by ssomnath)

https://code.ornl.gov/rvv/pycroscopy/-/issues/234
Labview H5 Patcher (2020-08-07, reported by ramav87)
Labview H5 patcher currently looks through spec_dim_labels to gauge how many spectroscopic dimensions there are. This is better determined by looking at the size of the spectroscopic_values dataset instead. Some of the acquisition software mistakenly adds labels to non-existent spectroscopic dimensions, causing translation bugs.
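The proposed check is essentially a one-liner sketch (layout per the USID convention: one row of Spectroscopic_Values per spectroscopic dimension):

```
import numpy as np

def count_spec_dims(spec_values):
    """Number of spectroscopic dimensions implied by the values dataset."""
    return np.atleast_2d(np.asarray(spec_values)).shape[0]
```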

https://code.ornl.gov/rvv/pycroscopy/-/issues/237
Translator issue with pyUSID update (2020-08-18, reported by nccreang)
When running the FakeBEPSGenerator translator, the following error arises: KeyError: "Can't open attribute (can't locate attribute: 'DC_Offset')".

https://code.ornl.gov/rvv/pycroscopy/-/issues/238
Generic utility list (2020-08-21, reported by Mukherjee, Debangshu)
@ssomnath @ramav87 I am starting to make a list of domain-agnostic tools that should find a home in pycroscopy:
* Functional fits for spectra
* 2D Gaussian fitting
* Hybrid cross-correlation
* Scan drift correction
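As a starting point for the 2D Gaussian item, a moment-based parameter estimator (pure NumPy; a full implementation would refine these estimates with a least-squares fit):

```
import numpy as np

def estimate_gaussian_2d(image):
    """Moment-based (amplitude, x0, y0, sigma_x, sigma_y) estimates for a
    single bright 2D peak; useful as the initial guess for a real fit."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    y, x = np.indices(image.shape)
    x0 = (x * image).sum() / total
    y0 = (y * image).sum() / total
    sx = np.sqrt(((x - x0) ** 2 * image).sum() / total)
    sy = np.sqrt(((y - y0) ** 2 * image).sum() / total)
    return image.max(), x0, y0, sx, sy
```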

https://code.ornl.gov/rvv/pycroscopy/-/issues/242
KMeans Clustering not writing to file (2020-12-31, reported by sulaymandesai)
Hi,
Hope you're well.
I am following this published notebook: https://nbviewer.jupyter.org/github/pycroscopy/papers/blob/master/Notebooks/EM/STEM/Image_Cleaning_Atom_Finding.ipynb
When I try to run the KMeans clustering I have the following error:
```
num_clusters = 4
# num_clusters = 32
estimator = px.processing.Cluster(h5_U, KMeans(n_clusters=num_clusters), num_comps=num_comps)

if estimator.duplicate_h5_groups == []:
    t0 = time()
    h5_kmeans = estimator.compute()
    print('kMeans took {} seconds.'.format(round(time()-t0, 2)))
else:
    h5_kmeans = estimator.duplicate_h5_groups[-1]
    print('Using existing results.')

print('Clustering results in {}.'.format(h5_kmeans.name))

half_wind = int(win_size*0.5)
# generate a cropped image that was effectively the area that was used for pattern searching
# Need to get the math right on the counting
cropped_clean_image = clean_image_mat[half_wind:-half_wind + 1, half_wind:-half_wind + 1]

# Plot cluster results: get the labels dataset
labels_mat = np.reshape(h5_kmeans['Labels'][()], [num_rows, num_cols])

fig, axes = plt.subplots(ncols=2, figsize=(14, 7))
axes[0].imshow(cropped_clean_image, cmap=spiepy.NANOMAP, origin='lower')
axes[0].set_title('Cleaned Image', fontsize=16)
axes[1].imshow(labels_mat, aspect=1, interpolation='none', cmap=spiepy.NANOMAP, origin='lower')
axes[1].set_title('K-means cluster labels', fontsize=16)

for axis in axes:
    axis.get_yaxis().set_visible(False)
    axis.get_xaxis().set_visible(False)

usid.jupyter_utils.save_fig_filebox_button(fig, 'Clustered_Clean_Image.png')
```
```
Consider calling test() to check results before calling compute() which computes on the entire dataset and writes results to the HDF5 file
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_000" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_001" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Group: <HDF5 group "/Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U-Cluster_002" (0 members)> had neither the status HDF5 dataset or the legacy attribute: "last_pixel".
Performing clustering on /Measurement_000/Channel_000/Plane_Mean_Subtracted_Data-Windowing_000/Image_Windows-SVD_000/U.
Took 5.76 sec to compute KMeans
Calculated the Mean Response of each cluster.
Took 340.1 msec to calculate mean response per cluster
Writing clustering results to file.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-35-6b9a66d30096> in <module>
7 if estimator.duplicate_h5_groups==[]:
8 t0 = time()
----> 9 h5_kmeans = estimator.compute()
10 print('kMeans took {} seconds.'.format(round(time()-t0, 2)))
11 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in compute(self, rearrange_clusters, override)
226
227 if self.h5_results_grp is None:
--> 228 h5_group = self._write_results_chunk()
229 self.delete_results()
230 else:
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pycroscopy-0.60.7-py3.8.egg/pycroscopy/processing/cluster.py in _write_results_chunk(self)
282 h5_cluster_group = create_results_group(self.h5_main, self.process_name,
283 h5_parent_group=self._h5_target_group)
--> 284 self._write_source_dset_provenance()
285
286 write_simple_attrs(h5_cluster_group, self.parms_dict)
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/pyUSID/processing/process.py in _write_source_dset_provenance(self)
793
794 @staticmethod
--> 795 def _map_function(*args, **kwargs):
796 """
797 The function that manipulates the data on a single instance (position). This will be used by
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/sidpy/hdf/hdf_utils.py in write_simple_attrs(h5_obj, attrs, verbose)
371 '{}'.format(type(attrs)))
372 if not isinstance(h5_obj, (h5py.File, h5py.Group, h5py.Dataset)):
--> 373 raise TypeError('h5_obj should be a h5py File, Group or Dataset object'
374 ' but is instead of type '
375 '{}t'.format(type(h5_obj)))
TypeError: h5_obj should be a h5py File, Group or Dataset object but is instead of type <class 'NoneType'>t
```
Any help would be appreciated!
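No answer is recorded here, but as a hedged debugging aid (an assumption, not a confirmed diagnosis: the final TypeError means a None reached write_simple_attrs, which can happen when an HDF5 handle is stale or the file was opened read-only), one could fail earlier with a clearer message:

```
def require_h5_object(obj, what='results group'):
    """Pre-flight check: raise a readable error instead of letting None
    reach write_simple_attrs deep inside compute()."""
    if obj is None:
        raise ValueError('{} is None; check that the HDF5 file is still open '
                         'and was opened in r+ mode'.format(what))
    return obj
```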