1. Check if the same process has already been performed with the same parameters. If so, throw an exception when the process is initialized - this is better than catching it at the notebook stage (see the sketch after this list).
2. (Gracefully) Abort and resume processing.
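A possible sketch of item 1 is shown below; the `ExampleProcess` class, its constructor arguments, and the `_duplicate_exists` helper are all hypothetical illustrations, not existing pycroscopy API.

``` python
class ExampleProcess(object):
    """Hypothetical process that refuses to redo an identical computation."""

    def __init__(self, h5_main, **parms):
        self.h5_main = h5_main
        self.parms = parms
        # fail fast at initialization rather than at the notebook stage (item 1)
        if self._duplicate_exists():
            raise ValueError('This computation was already performed on this '
                             'dataset with the same parameters.')

    def _duplicate_exists(self):
        # hypothetical check: compare the requested parameters against the
        # attributes stored with any previously written results groups
        previous = getattr(self.h5_main, 'previous_results', [])
        return any(dict(grp.attrs) == self.parms for grp in previous)
```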
* consolidate _get_component_slice used in Cluster with duplicate in svd_utils
* Legacy processes **MUST** extend Process:
  * Image Windowing
  * Image Cleaning
* sklearn wrapper classes:
  * Cluster
  * Decomposition
  * The computation will continue to be performed by sklearn, so there is no need to use parallel_compute() or to resume computation.
* Own classes:
  * Image Windowing
  * Image Cleaning
  * As time permits, ensure that these can resume processing
* At the very least, all of these MUST implement the check for previous computations
* As time permits, ensure that these can resume processing
* Absorb functionality from Process into Model
* Bayesian GIV should actually be an analysis <-- depends on above
* Reorganize processing and analysis - promote / demote classes, etc.
* multi-node computing capability in parallel_compute
* Demystify analysis / optimize. Use parallel_compute instead of optimize, guess_methods, and fit_methods
* Consistency in the naming and placement of attributes (channel or measurement group) in all translators - some put attributes at the measurement level, others at the channel level! hyperspy appears to create data groups solely for the purpose of organizing metadata in a tree structure!
* Consider developing a generic curve fitting class a la `hyperspy <http://nbviewer.jupyter.org/github/hyperspy/hyperspy-demos/blob/master/Fitting_tutorial.ipynb>`_
Convert the source image file into a pycroscopy-compatible hierarchical data format (HDF5, or .h5) file. This simple translation gives you access to the powerful data functions within pycroscopy.
#### H5 files:
* are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
* are readily compatible with high-performance computing facilities
* scale very efficiently from a few kilobytes to several terabytes
* can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc. (a short Python example follows below)
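As a quick illustration of the first and last points, here is a minimal sketch using only `h5py` and `numpy`; the file name `example.h5` and the group / dataset names are made up for this illustration.

%% Cell type:code id: tags:
``` python
import numpy as np
import h5py

# write a dataset inside a group, plus metadata stored as attributes
with h5py.File('example.h5', 'w') as h5_f:
    grp = h5_f.create_group('Measurement_000')
    dset = grp.create_dataset('Raw_Data', data=np.random.rand(64, 64))
    grp.attrs['temperature_K'] = 300
    dset.attrs['units'] = 'a.u.'

# read it back - the same file is equally accessible from Matlab, C/C++, Igor Pro, etc.
with h5py.File('example.h5', 'r') as h5_f:
    print(h5_f['Measurement_000/Raw_Data'].shape)
    print(dict(h5_f['Measurement_000'].attrs))
```
%% Cell type:markdown id: tags:
Now perform the translation for the chosen image: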
%% Cell type:code id: tags:
``` python
# Check if an HDF5 file with the chosen image already exists and only translate if needed.
# NOTE: image_path and px (pycroscopy) are assumed to come from earlier cells; the .h5 path
# and internal dataset path below are assumptions based on pycroscopy's usual conventions.
import os
import h5py
h5_path = os.path.splitext(image_path)[0] + '.h5'
need_translation = True
if os.path.exists(h5_path):
    try:
        h5_file = h5py.File(h5_path, 'r+')
        # translators typically write the image to Measurement_000/Channel_000/Raw_Data
        h5_raw = h5_file['Measurement_000/Channel_000/Raw_Data']
        need_translation = False
        print('HDF5 file with Raw_Data found. No need to translate.')
    except KeyError:
        print('Raw Data not found.')
else:
    print('No HDF5 file found.')
if need_translation:
    # Initialize the Image Translator
    tl = px.ImageTranslator()
    # create an H5 file that has the image information in it and get the reference to the dataset
    h5_raw = tl.translate(image_path)
    # create a reference to the file
    h5_file = h5_raw.file
print('HDF5 file is located at {}.'.format(h5_file.filename))
```
%% Cell type:markdown id: tags:
### Inspect the contents of this h5 data file
The file contents are stored in a tree structure, just like files on a contemporary computer.
The data is stored as a 2D matrix (position, spectroscopic value) regardless of the dimensionality of the data.
In the case of these 2D images, the data is stored as an N x 1 dataset.
The main dataset is always accompanied by four ancillary datasets that explain the position and spectroscopic value of any given element in the dataset.
In the case of 2D images, the positions are arranged as row0-col0, row0-col1, ... row0-colN, row1-col0, ...
The spectroscopic information is trivial since the data at any given pixel is just a scalar value.
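The short sketch below (a minimal example, assuming the `h5_file` handle from the translation cell above) walks the tree with `h5py` and prints every dataset with its shape; the `n_cols` value used to demonstrate the row-major position ordering is hypothetical and should be read from the position ancillary datasets in practice.

%% Cell type:code id: tags:
``` python
import h5py

# walk the HDF5 tree and print each group / dataset with its shape
def show_item(name, obj):
    if isinstance(obj, h5py.Dataset):
        print('{}  ->  shape {}'.format(name, obj.shape))
    else:
        print('{}/'.format(name))

h5_file.visititems(show_item)

# Positions are flattened row by row, so pixel (row, col) of an image with
# n_cols columns corresponds to this row of the N x 1 main dataset:
n_cols = 128  # hypothetical image width
row, col = 3, 7
print('pixel ({}, {}) -> position index {}'.format(row, col, row * n_cols + col))
```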