Converting the raw data into a pycroscopy compatible hierarchical data format (HDF5 or .h5) file gives you access to the fast fitting algorithms and powerful analysis functions within pycroscopy.
#### H5 files:
* are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
* are readily compatible with high-performance computing facilities
* scale very efficiently from a few kilobytes to several terabytes
* can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc.
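To make the container idea concrete, here is a tiny, self-contained h5py sketch; the file name and contents are arbitrary illustrations, not part of the experiment.
%% Cell type:code id: tags:
``` python
import h5py
import numpy as np

# Groups act like folders, datasets hold the data matrices, and attributes hold metadata
with h5py.File('toy_container.h5', mode='w') as h5_file:
    grp = h5_file.create_group('Measurement_000')                     # a "folder"
    dset = grp.create_dataset('Raw_Data', data=np.random.rand(4, 8))  # a data matrix
    dset.attrs['units'] = 'V'                                         # metadata on the dataset
    grp.attrs['IO_rate_Hz'] = 4000000                                 # an experimental parameter
```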
#### You can load either of the following:
* Any .mat or .txt parameter file from the original experiment
* A .h5 file generated from the raw data using pycroscopy - skips translation
You can select the desired file type by choosing the second option in the pull-down menu at the bottom right of the file window.
%% Cell type:code id: tags:
``` python
# Select either the raw experiment parameter file or an already translated .h5 file
input_file_path = px.io_utils.uiGetFile(caption='Select translated .h5 file or raw experiment data',
                                        file_filter='Parameters for raw G-Line data (*.txt);; \
                                        Translated file (*.h5)')
```
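The cell that actually translates the selected file and opens the resulting HDF5 file is not reproduced in this excerpt. The sketch below shows one plausible way to get from `input_file_path` to the `hdf` handle used by `print_tree` further down; the `GLineTranslator` class and the use of `h5py.File` here are assumptions, not something taken from this notebook.
%% Cell type:code id: tags:
``` python
import h5py
import pycroscopy as px

# Assumption: a raw parameter file still needs translation, whereas a file that was
# already translated by pycroscopy can be opened directly.
if input_file_path.endswith('.h5'):
    h5_path = input_file_path
else:
    # Hypothetical translator call - the exact class and translate() signature may differ
    h5_path = px.io.translators.GLineTranslator().translate(input_file_path)

# Open the file; hdf.file is the handle handed to the tree-printing cell below
hdf = h5py.File(h5_path, mode='r+')
```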
The file contents are stored in a tree structure, just like files on a conventional computer.
The data is stored as a 2D matrix (position, spectroscopic value) regardless of the dimensionality of the data. Thus, the positions are arranged as row0-col0, row0-col1, ..., row0-colN, row1-col0, ..., and the data for each position is stored in the order it was chronologically collected.
The main dataset is always accompanied by four ancillary datasets that explain the position and spectroscopic value of any given element in the dataset.
Note that G-mode data is acquired line-by-line rather than pixel-by-pixel.
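To make the flattened layout concrete, the sketch below folds the 2D main dataset back into its position dimensions using the Position_Indices ancillary dataset. The dataset paths are assumptions for illustration only, and a complete grid of positions is assumed; the ancillary dataset names follow the pycroscopy convention described above.
%% Cell type:code id: tags:
``` python
import numpy as np

# Assumed paths - the actual group and dataset names in this file may differ
h5_main = hdf.file['Measurement_000/Channel_000/Raw_Data']
h5_pos_inds = hdf.file['Measurement_000/Channel_000/Position_Indices']

# Each column of Position_Indices is one position dimension (fastest-varying first);
# the number of unique indices per column gives the length of that dimension.
pos_dim_sizes = [len(np.unique(h5_pos_inds[:, dim]))
                 for dim in range(h5_pos_inds.shape[1])]

# Positions are flattened as row0-col0, row0-col1, ..., so the fastest-varying
# dimension goes last when folding the position axis back out.
n_spectral = h5_main.shape[1]
data_nd = np.reshape(h5_main[()], pos_dim_sizes[::-1] + [n_spectral])
print('Reshaped from {} to {}'.format(h5_main.shape, data_nd.shape))
```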
%% Cell type:code id: tags:
``` python
print('Datasets and datagroups within the file:\n------------------------------------')
px.io.hdf_utils.print_tree(hdf.file)
print('\nThe main dataset:\n------------------------------------')
```
"# Loading, reshaping, visualizing data using pycroscopy\n",
"### Suhas Somnath, Chris R. Smith and Stephen Jesse\n",
"The Center for Nanophase Materials Science and The Institute for Functional Imaging for Materials <br>\n",
"Oak Ridge National Laboratory<br>\n",
"8/01/2017\n",
"\n",
"Here, we will demonstrate how to load, reshape, and visualize multidimensional imaging datasets. For this example, we will load a three dimensional Band Excitation imaging dataset acquired from an atomic force microscope. "
]
},
%% Cell type:code id: tags:
``` python
# Make sure pycroscopy and wget are installed
# set up notebook to show plots within the notebook
%matplotlib inline
```
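The install and import statements this cell alludes to are not included in the excerpt. A minimal version might look like the sketch below; the exact package list is an assumption.
%% Cell type:code id: tags:
``` python
# Uncomment on the first run to install the two packages mentioned above
# !pip install pycroscopy wget

# Imports assumed by the rest of the notebook
import h5py
import numpy as np
import matplotlib.pyplot as plt
import pycroscopy as px
import wget
```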
## Load pycroscopy compatible file

For simplicity we will use a dataset that has already been translated from its original data format into a pycroscopy compatible hierarchical data format (HDF5 or H5) file.

#### HDF5 or H5 files:
* are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
* are readily compatible with high-performance computing facilities
* scale very efficiently from a few kilobytes to several terabytes
* can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc.

Python uses the h5py library to read, write, and access HDF5 files.
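As a generic illustration of reading an HDF5 file with h5py (the file name below is a placeholder, not the dataset downloaded next), a file can be opened and its contents walked much like a folder tree:
%% Cell type:code id: tags:
``` python
import h5py

# Placeholder file name - substitute the path to any pycroscopy .h5 file
with h5py.File('example.h5', mode='r') as h5_file:

    def describe(name, obj):
        # h5py calls this once for every group and dataset in the file
        kind = 'Group' if isinstance(obj, h5py.Group) else 'Dataset'
        print(kind, name)

    h5_file.visititems(describe)
```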
%% Cell type:code id: tags:
``` python
# Downloading the example file from the pycroscopy Github project
```