Commit 6c4ece4e authored by syz's avatar syz

Merge branch 'cades_dev' of https://github.com/pycroscopy/pycroscopy into cades_dev_local

parents 14ab6511 9d3fe2d9
......@@ -11,18 +11,16 @@ Documentation
* Include examples in documentation
* Links to references for all functions and methods used in our workflows.
Short tutorials on how to use pycroscopy
Fundamental tutorials on how to use pycroscopy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Access h5 files
* Find a specific dataset/group in the file
* chunking the main dataset
* Links to tutorials on how to use PyCharm, Git, etc.
Longer examples (via specific scientific use cases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* A tour of the many functions in hdf_utils and io_utils since these functions need data to show / explain them.
* A tour of the hdf_utils functions used for writing h5 files since these functions need data to show / explain them.
* chunking the main dataset
* A tour of the io_utils functions since these functions need data to show / explain them.
* A tour of plot_utils
* pycroscopy package organization - a short writeup on what is where and the differences between the process / analysis submodules
* How to write your own analysis class based on the (to-be simplified) Model class
* Links to tutorials on how to use PyCharm, Git, etc.
Rama's (older and more applied / specific) tutorial goals
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......
......@@ -411,7 +411,7 @@ sphinx_gallery_conf = dict(examples_dirs='../examples',
reference_url=dict(pycroscopy=None,
matplotlib='https://matplotlib.org',
numpy='https://docs.scipy.org/doc/numpy',
scipy='https://docs.scipy.org/doc/scipy',
scipy='https://docs.scipy.org/doc/scipy/reference',
h5py='http://docs.h5py.org/en/latest/'),
# directory where function granular galleries are stored
backreferences_dir='_autosummary/backreferences',
......
......@@ -15,7 +15,7 @@ Introduction
In pycroscopy, all position dimensions of a dataset are collapsed into the first dimension and all other
(spectroscopic) dimensions are collapsed to the second dimension to form a two dimensional matrix. The ancillary
matrices, namely the spectroscopic indices and values matrix as well as the position indicies and values matrices
matrices, namely the spectroscopic indices and values matrix as well as the position indices and values matrices
will be essential for reshaping the data back to its original N dimensional form and for slicing multidimensional
datasets.
......@@ -54,8 +54,8 @@ import pycroscopy as px
# imaging datasets, a single spectrum is acquired at each location in a two dimensional grid of spatial locations.
# Thus, BE imaging datasets have two position dimensions (X, Y) and one spectroscopic dimension (frequency - against
# which the spectrum is recorded). The BEPS dataset used in this example has a spectrum for each combination of
# three other paramaters (DC offset, Field, and Cycle). Thus, this dataset has three new spectral
# dimensions in addition to the spectra itself. Hence, this dataet becomes a 2+4 = 6 dimensional dataset
# three other parameters (DC offset, Field, and Cycle). Thus, this dataset has three new spectral
# dimensions in addition to the spectrum itself. Hence, this dataset becomes a 2+4 = 6 dimensional dataset.
# download the raw data file from Github:
h5_path = 'temp_3.h5'
......@@ -118,6 +118,8 @@ def myfun(pos_index, spec_index):
        print(dim_name, ':', h5_pos_ind[pos_index, dim_ind])
    for dim_ind, dim_name in enumerate(spec_labels):
        print(dim_name, ':', h5_spec_ind[dim_ind, spec_index])

interact(myfun, pos_index=(0, h5_main.shape[0]-1, 1), spec_index=(0, h5_main.shape[1]-1, 1))
#########################################################################
......@@ -175,13 +177,14 @@ for dim_ind, axis, dim_label, dim_array in zip(range(h5_spec_ind.shape[0]), rhs_
def describe_dimensions(h5_aux):
    for name, unit in zip(px.hdf_utils.get_attr(h5_aux, 'labels'),
                          px.hdf_utils.get_attr(h5_aux, 'units')):
        print(name, '[', unit, ']')
print('Position dimension names and units:')
describe_dimensions(h5_pos_ind)
print('\nSpectrocopic dimension names and units:')
print('\nSpectroscopic dimension names and units:')
describe_dimensions(h5_spec_ind)
#########################################################################
......@@ -269,7 +272,7 @@ fig, axis = plt. subplots()
axis.imshow(np.abs(spectrogram3), origin='lower')
axis.set_xlabel('Frequency Index')
axis.set_ylabel('DC Offset Index')
axis.set_title('Spectrogram Amplitude');
axis.set_title('Spectrogram Amplitude')
#########################################################################
# Approach 2 - N-dimensional form
......@@ -283,16 +286,19 @@ print('Shape of the N-dimensional dataset:', ds_nd.shape)
print(labels)
#########################################################################
# Now that we have the data in its original N dimensional form, we can easily slice the dataset:
spectrogram2 = ds_nd[2, 3, :, :, 0, 1]
# Now the spectrogram is of order (frequency x DC_Offset).
spectrogram2 = spectrogram2.T
# Now the spectrogram is of order (DC_Offset x frequency)
fig, axis = plt.subplots()
axis.imshow(np.abs(spectrogram2), origin='lower')
axis.set_xlabel('Frequency Index')
axis.set_ylabel('DC Offset Index')
axis.set_title('Spectrogram Amplitude');
axis.set_title('Spectrogram Amplitude')
#########################################################################
# Approach 3 - slicing the 2D matrix
......@@ -301,10 +307,10 @@ axis.set_title('Spectrogram Amplitude');
# This approach is hands-on and requires that we be very careful with the indexing and slicing. Nonetheless,
# the process is actually fairly intuitive. We rely entirely upon the spectroscopic and position ancillary datasets
# to find the indices for slicing the dataset. Unlike the main dataset, the ancillary datasets are very small and
# can be stored easily in memory. Once the slicing indices are calculated, we __only read the desired portion of
# `main` data to memory__. Thus the amount of data loaded into memory is only the amount that we absolutely need.
# __This is the only approach that can be applied to slice very large datasets without ovwhelming memory overheads__.
# The comments for each line explain the entire process comprehensively
# can be stored easily in memory. Once the slicing indices are calculated, we *only read the desired portion of
# `main` data to memory*. Thus the amount of data loaded into memory is only the amount that we absolutely need.
# *This is the only approach that can be applied to slice very large datasets without overwhelming memory overheads*.
# The comments for each line explain the entire process comprehensively.
#
# Get only the spectroscopic dimension names:
......@@ -312,21 +318,22 @@ spec_dim_names = px.hdf_utils.get_attr(h5_spec_ind, 'labels')
# Find the row in the spectroscopic indices that corresponds to the dimensions we want to slice:
cycle_row_ind = np.where(spec_dim_names == 'Cycle')[0][0]
# Find the row correspoding to field in the same way:
# Find the row corresponding to field in the same way:
field_row_ind = np.where(spec_dim_names == 'Field')[0][0]
# Find all the spectral indices corresponding to the second cycle:
desired_cycle = h5_spec_ind[cycle_row_ind] == 1
# Do the same to find the spectral indicies for the first field:
# Do the same to find the spectral indices for the first field:
desired_field = h5_spec_ind[field_row_ind] == 0
# Now find the indices where the cycle = 1 and the field = 0 using a logical AND statement:
spec_slice = np.logical_and(desired_cycle, desired_field)
# We will use the same approach to find the position indices
# corresponding to the row index of 3 and colum index of 2:
pos_dim_names = px.hdf_utils.get_attr(h5_pos_ind,'labels')
# corresponding to the row index of 3 and column index of 2:
pos_dim_names = px.hdf_utils.get_attr(h5_pos_ind, 'labels')
x_col_ind = np.where(pos_dim_names == 'X')[0][0]
y_col_ind = np.where(pos_dim_names == 'Y')[0][0]
......@@ -352,7 +359,7 @@ print('Sliced data is of shape:', data_vec.shape)
# For this we need to find the size of the data in the DC_offset and Frequency dimensions:
dc_dim_ind = np.where(spec_dim_names == 'DC_Offset')[0][0]
# Find the row correspoding to field in the same way:
# Find the row corresponding to field in the same way:
freq_dim_ind = np.where(spec_dim_names == 'Frequency')[0][0]
dc_dim_size = spec_dim_sizes[dc_dim_ind]
......@@ -366,7 +373,7 @@ print('We need to reshape the vector by the tuple:', (dc_dim_size, freq_dim_size
# The dimensions in the ancillary datasets may or may not be arranged from fastest to slowest even though that is
# part of the requirements. We can still account for this. In the event that we don't know the order in which to
# reshape the data vector because we don't know which dimension varies faster than the other(s), we would need to
# sort the dimensions by how fast their indices change. Fortuantely, pycroscopy has a function called `px.hdf_utils.
# sort the dimensions by how fast their indices change. Fortunately, pycroscopy has a function called `px.hdf_utils.
# get_sort_order` that does just this. Knowing the sort order, we can easily reshape correctly in an automated manner.
# We will do this below.
......@@ -376,7 +383,7 @@ print('Spectroscopic dimensions arranged as is:\n',
spec_dim_names)
print('Dimension indices arranged from fastest to slowest:',
spec_sort_order)
print('Dimension namess now arranged from fastest to slowest:\n',
print('Dimension names now arranged from fastest to slowest:\n',
spec_dim_names[spec_sort_order])
if spec_sort_order[dc_dim_ind] > spec_sort_order[freq_dim_ind]:
......
%% Cell type:markdown id: tags:
# Band Excitation data processing using pycroscopy
### Suhas Somnath, Chris R. Smith, Stephen Jesse
The Center for Nanophase Materials Science and The Institute for Functional Imaging of Materials <br>
Oak Ridge National Laboratory<br>
2/10/2017
%% Cell type:markdown id: tags:
## Configure the notebook
%% Cell type:code id: tags:
``` python
!pip install -U numpy matplotlib ipython ipywidgets pycroscopy
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import
# Import necessary libraries:
# General utilities:
import sys
import os
import shutil
# Computation:
import numpy as np
import h5py
# Visualization:
# import ipympl
import matplotlib.pyplot as plt
import matplotlib.widgets as mpw
from IPython.display import display, clear_output, HTML
import ipywidgets as widgets
# Finally, pycroscopy itself
sys.path.append('..')
import pycroscopy as px
# set up notebook to show plots within the notebook
%matplotlib notebook
# Make Notebook take up most of page width
display(HTML(data="""
<style>
div#notebook-container { width: 95%; }
div#menubar-container { width: 65%; }
div#maintoolbar-container { width: 99%; }
</style>
"""))
```
%% Cell type:markdown id: tags:
## Set some basic parameters for computation
This notebook performs some functional fitting whose duration can be substantially decreased by using more memory and CPU cores. We have provided default values below but you may choose to change them if necessary.
%% Cell type:code id: tags:
``` python
max_mem = 1024*8 # Maximum memory to use, in Mbs. Default = 1024
max_cores = None # Number of logical cores to use in fitting. None uses all but 2 available cores.
max_mem = 1024*2 # Maximum memory to use, in MB. Default = 1024
max_cores = 2 # Number of logical cores to use in fitting. None uses all but 2 available cores.
```
%% Cell type:markdown id: tags:
## Make the data pycroscopy compatible
Converting the raw data into a pycroscopy-compatible hierarchical data format (HDF or .h5) file gives you access to the fast fitting algorithms and powerful analysis functions within pycroscopy.
#### H5 files:
* are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
* are readily compatible with high-performance computing facilities
* scale very efficiently from a few kilobytes to several terabytes
* can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc.
#### You can load either of the following:
* Any .mat or .txt parameter file from the original experiment
* A .h5 file generated from the raw data using pycroscopy - skips translation
You can select the desired file type by choosing the second option in the pull-down menu at the bottom right of the file window.
%% Cell type:code id: tags:
``` python
input_file_path = px.io_utils.uiGetFile(caption='Select translated .h5 file or raw experiment data',
file_filter='Parameters for raw BE data (*.txt *.mat *xls *.xlsx);; \
Translated file (*.h5)')
(data_dir, data_name) = os.path.split(input_file_path)
(data_dir, filename) = os.path.split(input_file_path)
if copy_input_file:
    _, ext = os.path.splitext(filename)
    temp_path = os.path.join(data_dir, 'temp_file'+ext)
    if os.path.exists(temp_path):
        os.remove(temp_path)
    shutil.copy2(input_file_path, temp_path)
    input_file_path = temp_path

if input_file_path.endswith('.h5'):
    # No translation here
    h5_path = input_file_path
    force = False  # Set this to true to force patching of the datafile.
    tl = px.LabViewH5Patcher()
    hdf = tl.translate(h5_path, force_patch=force)
else:
    # Set the data to be translated
    data_path = input_file_path
    (junk, base_name) = os.path.split(data_dir)
    # Check if the data is in the new or old format. Initialize the correct translator for the format.
    if base_name == 'newdataformat':
        (junk, base_name) = os.path.split(junk)
        translator = px.BEPSndfTranslator(max_mem_mb=max_mem)
    else:
        translator = px.BEodfTranslator(max_mem_mb=max_mem)
    if base_name.endswith('_d'):
        base_name = base_name[:-2]
    # Translate the data
    h5_path = translator.translate(data_path, show_plots=True, save_plots=False)
    hdf = px.ioHDF5(h5_path)

print('Working on:\n' + h5_path)
h5_main = px.hdf_utils.getDataSet(hdf.file, 'Raw_Data')[0]
```
%% Cell type:markdown id: tags:
##### Inspect the contents of this h5 data file
The file contents are stored in a tree structure, just like files on a conventional computer.
The data is stored as a 2D matrix (position, spectroscopic value) regardless of the dimensionality of the data. Thus, the positions will be arranged as row0-col0, row0-col1... row0-colN, row1-col0... and the data for each position is stored as it was chronologically collected.
The main dataset is always accompanied by four ancillary datasets that explain the position and spectroscopic value of any given element in the dataset.
%% Cell type:code id: tags:
``` python
print('Datasets and datagroups within the file:\n------------------------------------')
px.io.hdf_utils.print_tree(hdf.file)
print('\nThe main dataset:\n------------------------------------')
print(h5_main)
print('\nThe ancillary datasets:\n------------------------------------')
print(hdf.file['/Measurement_000/Channel_000/Position_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Position_Values'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nMetadata or attributes in a datagroup\n------------------------------------')
for key in hdf.file['/Measurement_000'].attrs:
    print('{} : {}'.format(key, hdf.file['/Measurement_000'].attrs[key]))
```
%% Cell type:markdown id: tags:
## Get some basic parameters from the H5 file
This information will be vital for further analysis and visualization of the data.
%% Cell type:code id: tags:
``` python
h5_pos_inds = px.hdf_utils.getAuxData(h5_main, auxDataName='Position_Indices')[-1]
pos_sort = px.hdf_utils.get_sort_order(np.transpose(h5_pos_inds))
pos_dims = px.hdf_utils.get_dimensionality(np.transpose(h5_pos_inds), pos_sort)
pos_labels = np.array(px.hdf_utils.get_attr(h5_pos_inds, 'labels'))[pos_sort]
print(pos_labels, pos_dims)
parm_dict = hdf.file['/Measurement_000'].attrs
is_ckpfm = hdf.file.attrs['data_type'] == 'cKPFMData'
if is_ckpfm:
    num_write_steps = parm_dict['VS_num_DC_write_steps']
    num_read_steps = parm_dict['VS_num_read_steps']
    num_fields = 2
```
%% Cell type:markdown id: tags:
## Visualize the raw data
Use the sliders below to visualize spatial maps (2D only for now) and spectrograms.
For simplicity, all the spectroscopic dimensions such as frequency, excitation bias, cycle, field, etc. have been collapsed to a single slider.
%% Cell type:code id: tags:
``` python
px.be_viz_utils.jupyter_visualize_be_spectrograms(h5_main)
```
%% Cell type:code id: tags:
``` python
sho_fit_points = 5 # The number of data points at each step to use when fitting
h5_sho_group = px.hdf_utils.findH5group(h5_main, 'SHO_Fit')
sho_fitter = px.BESHOmodel(h5_main, parallel=True)
if len(h5_sho_group) == 0:
    print('No SHO fit found. Doing SHO Fitting now')
    h5_sho_guess = sho_fitter.do_guess(strategy='complex_gaussian', processors=max_cores, options={'num_points': sho_fit_points})
    h5_sho_fit = sho_fitter.do_fit(processors=max_cores)
else:
    print('Taking previous SHO results already present in file')
    h5_sho_guess = h5_sho_group[-1]['Guess']
    try:
        h5_sho_fit = h5_sho_group[-1]['Fit']
    except KeyError:
        print('Previously computed guess found. Now computing fit')
        h5_sho_fit = sho_fitter.do_fit(processors=max_cores, h5_guess=h5_sho_guess)
```
%% Cell type:markdown id: tags:
## Visualize the SHO results
Here, we visualize the parameters for the SHO fits. BE-line (3D) data is visualized via simple spatial maps of the SHO parameters while more complex BEPS datasets (4+ dimensions) can be visualized using a simple interactive visualizer below.
You can choose to visualize the guesses for the SHO function or the final fit values via the first line of the cell below.
Use the sliders below to inspect the BE response at any given location.
%% Cell type:code id: tags:
``` python
h5_sho_spec_inds = px.hdf_utils.getAuxData(h5_sho_fit, auxDataName='Spectroscopic_Indices')[0]
sho_spec_labels = px.io.hdf_utils.get_attr(h5_sho_spec_inds, 'labels')
if is_ckpfm:
    # It turns out that the read voltage index starts from 1 instead of 0
    # Also the VDC indices are NOT repeating. They are just rising monotonically
    write_volt_index = np.argwhere(sho_spec_labels == 'write_bias')[0][0]
    read_volt_index = np.argwhere(sho_spec_labels == 'read_bias')[0][0]
    h5_sho_spec_inds[read_volt_index, :] -= 1
    h5_sho_spec_inds[write_volt_index, :] = np.tile(np.repeat(np.arange(num_write_steps), num_fields), num_read_steps)
(Nd_mat, success, nd_labels) = px.io.hdf_utils.reshape_to_Ndims(h5_sho_fit, get_labels=True)
print('Reshape Success: ' + str(success))
print(nd_labels)
print(Nd_mat.shape)
```
%% Cell type:code id: tags:
``` python
use_sho_guess = False
use_static_viz_func = False
if use_sho_guess:
    sho_dset = h5_sho_guess
else:
    sho_dset = h5_sho_fit

data_type = px.io.hdf_utils.get_attr(hdf.file, 'data_type')
if data_type == 'BELineData' or len(pos_dims) != 2:
    use_static_viz_func = True
    step_chan = None
else:
    vs_mode = px.io.hdf_utils.get_attr(h5_main.parent.parent, 'VS_mode')
    if vs_mode not in ['AC modulation mode with time reversal',
                       'DC modulation mode']:
        use_static_viz_func = True
    else:
        if vs_mode == 'DC modulation mode':
            step_chan = 'DC_Offset'
        else:
            step_chan = 'AC_Amplitude'

if not use_static_viz_func:
    try:
        # use interactive visualization
        px.be_viz_utils.jupyter_visualize_beps_sho(sho_dset, step_chan)
    except:
        # fall back to the static plots below if the interactive visualizer fails
        print('There was a problem with the interactive visualizer')
        use_static_viz_func = True
if use_static_viz_func:
    # show plots of SHO results vs. applied bias
    px.be_viz_utils.visualize_sho_results(sho_dset, show_plots=True,
                                          save_plots=False)
```
%% Cell type:markdown id: tags:
## Fit loops to a function
This is applicable only to DC voltage spectroscopy datasets from BEPS. The PFM hysteresis loops in this dataset will be projected to maximize the loop area and then fitted to a function.
Note: This computation generally takes a while for reasonably sized datasets.
%% Cell type:code id: tags:
``` python
# Do the Loop Fitting on the SHO Fit dataset
loop_success = False
h5_loop_group = px.hdf_utils.findH5group(h5_sho_fit, 'Loop_Fit')
if len(h5_loop_group) == 0:
    try:
        loop_fitter = px.BELoopModel(h5_sho_fit, parallel=True)
        print('No loop fits found. Fitting now....')
        h5_loop_guess = loop_fitter.do_guess(processors=max_cores, max_mem=max_mem)
        h5_loop_fit = loop_fitter.do_fit(processors=max_cores, max_mem=max_mem)
        loop_success = True
    except ValueError:
        print('Loop fitting is applicable only to DC spectroscopy datasets!')
else:
    loop_success = True
    print('Taking previously computed loop fits')
    h5_loop_guess = h5_loop_group[-1]['Guess']
    h5_loop_fit = h5_loop_group[-1]['Fit']
    h5_loop_group = h5_loop_fit.parent
```
%% Cell type:markdown id: tags:
## Prepare datasets for visualization
%% Cell type:code id: tags:
``` python
# Prepare some variables for plotting loops fits and guesses
# Plot the Loop Guess and Fit Results
if loop_success:
    h5_projected_loops = h5_loop_guess.parent['Projected_Loops']
    h5_proj_spec_inds = px.hdf_utils.getAuxData(h5_projected_loops,
                                                auxDataName='Spectroscopic_Indices')[-1]
    h5_proj_spec_vals = px.hdf_utils.getAuxData(h5_projected_loops,
                                                auxDataName='Spectroscopic_Values')[-1]

    # reshape the vdc_vec into DC_step by Loop
    sort_order = px.hdf_utils.get_sort_order(h5_proj_spec_inds)
    dims = px.hdf_utils.get_dimensionality(h5_proj_spec_inds[()],
                                           sort_order[::-1])
    vdc_vec = np.reshape(h5_proj_spec_vals[h5_proj_spec_vals.attrs['DC_Offset']], dims).T

    # Also reshape the projected loops to Positions-DC_Step-Loop
    proj_nd, _ = px.hdf_utils.reshape_to_Ndims(h5_projected_loops)
    proj_3d = np.reshape(proj_nd, [h5_projected_loops.shape[0],
                                   proj_nd.shape[2], -1])
```
%% Cell type:markdown id: tags:
## Visualize Loop fits
%% Cell type:code id: tags:
``` python
use_static_plots = False
if loop_success:
    if not use_static_plots:
        try:
            px.be_viz_utils.jupyter_visualize_beps_loops(h5_projected_loops, h5_loop_guess, h5_loop_fit)
        except:
            print('There was a problem with the interactive visualizer')
            use_static_plots = True
    if use_static_plots:
        for iloop in range(h5_loop_guess.shape[1]):
            fig, ax = px.be_viz_utils.plot_loop_guess_fit(vdc_vec[:, iloop], proj_3d[:, :, iloop],
                                                          h5_loop_guess[:, iloop], h5_loop_fit[:, iloop],
                                                          title='Loop {} - All Positions'.format(iloop))
```
%% Cell type:markdown id: tags:
## Loop Parameters
We will now load the loop parameters calculated from the fit and plot them.
%% Cell type:code id: tags:
``` python
h5_loop_parameters = h5_loop_group['Fit_Loop_Parameters']
px.viz.be_viz_utils.jupyter_visualize_parameter_maps(h5_loop_parameters)
```
%% Cell type:code id: tags:
``` python
map_parm = 'Work of Switching'
plot_cycle = 0
plot_position = (int(pos_dims[0]/2), int(pos_dims[1]/2))
plot_bias_step = 0
px.viz.be_viz_utils.plot_loop_sho_raw_comparison(h5_loop_parameters, map_parm, plot_cycle, plot_position, plot_bias_step)
# display(px.viz.plot_utils.save_fig_filebox_button(fig, 'plot.png'))
```
%% Cell type:markdown id: tags:
## Save and close
* Save the .h5 file that we are working on by closing it. <br>
* Also, consider exporting this notebook as an HTML file. <br> To do this, go to File >> Download as >> HTML
* Finally, consider saving this notebook if necessary
%% Cell type:code id: tags:
``` python
hdf.close()
```
......
%% Cell type:markdown id: tags:
# Image cleaning and atom finding using pycroscopy
### Suhas Somnath, Chris R. Smith, Stephen Jesse
The Center for Nanophase Materials Science and The Institute for Functional Imaging of Materials <br>
Oak Ridge National Laboratory<br>
1/19/2017
%% Cell type:markdown id: tags:
## Configure the notebook first
%% Cell type:code id: tags:
``` python
!pip install -U numpy scipy scikit-image h5py matplotlib ipython ipywidgets pycroscopy
# set up notebook to show plots within the notebook
%matplotlib notebook
# Import necessary libraries:
# General utilities:
import os
import sys
from time import time
from scipy.misc import imsave
# Computation:
import numpy as np
import h5py
from skimage import measure
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
# Visualization:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from mpl_toolkits.axes_grid1 import make_axes_locatable
from IPython.display import display, HTML
import ipywidgets as widgets
from mpl_toolkits.axes_grid1 import ImageGrid
# Finally, pycroscopy itself
sys.path.append('..')
import pycroscopy as px
# Make Notebook take up most of page width
display(HTML(data="""
<style>
div#notebook-container { width: 95%; }
div#menubar-container { width: 65%; }
div#maintoolbar-container { width: 99%; }
</style>
"""))
```
%% Cell type:markdown id: tags:
## Load the image that will be cleaned:
%% Cell type:code id: tags: