Commit fffc90fb authored by CompPhysChris's avatar CompPhysChris

Merge remote-tracking branch 'remotes/origin/master' into cades_dev

# Conflicts:
#	examples/plot_multidimensional_data.py
#	jupyter_notebooks/Spectral_Unmixing_1pos_1spec.ipynb
#	jupyter_notebooks/Spectral_Unmixing_2pos_1spec.ipynb
#	pycroscopy/io/hdf_utils.py
parents 929f9cdd 4ba8f9fb
......@@ -6,9 +6,12 @@ pycroscopy
What is pycroscopy?
-------------------
pycroscopy is a `python <http://www.python.org/>`_ package for image processing and scientific analysis of imaging modalities such as multi-frequency scanning probe microscopy,
scanning tunneling spectroscopy, x-ray diffraction microscopy, and transmission electron microscopy.
Classes implemented here are ported to a high performance computing platform at `Oak Ridge National Laboratory (ORNL) <http://www.ornl.gov/>`_.
pycroscopy is a `python <http://www.python.org/>`_ package for image processing and scientific analysis of imaging modalities such as multi-frequency scanning probe microscopy, scanning tunneling spectroscopy, x-ray diffraction microscopy, and transmission electron microscopy.
With `pycroscopy <https://pycroscopy.github.io/pycroscopy/>`_ we aim to:
1. provide a community-developed, open standard for data formatting
2. provide a framework for developing data analysis routines
3. significantly lower the barrier to advanced data analysis procedures by simplifying I/O, processing, visualization, etc.
To learn more about the motivation, general structure, and philosophy of pycroscopy, please read this `short introduction <https://github.com/pycroscopy/pycroscopy/blob/master/docs/pycroscopy_2017_07_11.pdf>`_.
......@@ -18,7 +21,7 @@ The package structure is simple, with 4 main modules:
1. `io`: Input/Output from custom & proprietary microscope formats to HDF5.
2. `processing`: Multivariate Statistics, Machine Learning, and Filtering.
3. `analysis`: Model-dependent analysis of information.
4. `viz`: Plotting functions and custom interactive jupyter widgets
4. `viz`: Plotting functions and interactive jupyter widgets to visualize multidimensional data
Once a user converts their microscope's data format into an HDF5 format, by simply extending some of the classes in `io`, the user gains access to the rest of the utilities present in `pycroscopy.*`.
......@@ -59,45 +62,41 @@ Compatibility
* Pycroscopy was initially developed in python 2 but all current / future development for pycroscopy will be on python 3.5+. Nonetheless, we will do our best to ensure continued compatibility with python 2.
* We currently do not support 32 bit architectures
API and Documentation
---------------------
* See our `homepage <https://pycroscopy.github.io/pycroscopy/>`_ for more information
* Our API (documentation for our functions and classes) is available `here <https://pycroscopy.github.io/pycroscopy/index.html>`_
* Details regarding pycroscopy's `data format <https://github.com/pycroscopy/pycroscopy/blob/master/docs/Pycroscopy_Data_Formatting.pdf>`_ for HDF5 are also available in the docs. You can check out how we are able to represent multidimensional datasets of arbitrary sizes.
Examples and Resources
----------------------
* We use `jupyter <http://jupyter.org>`_ notebooks for our scientific workflows. This `youtube video <https://www.youtube.com/watch?v=HW29067qVWk>`_ provides a nice overview on jupyter notebooks.
* We host many `jupyter notebooks <https://github.com/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/>`_ of popular scientific workflows and many of them are tied to journal publications (see below).
Getting Started
---------------
* Follow the instructions above to install pycroscopy
* See how we use pycroscopy for our scientific research in these `jupyter notebooks <https://github.com/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/>`_. Many of them are linked to journal publications listed below.
* Please see the official `jupyter <http://jupyter.org>`_ website for more information about notebooks. This `youtube video <https://www.youtube.com/watch?v=HW29067qVWk>`_ provides a nice overview of jupyter notebooks.
* See our `examples <https://pycroscopy.github.io/pycroscopy/auto_examples/index.html>`_ to get started on using and writing your own pycroscopy functions
* Videos and other tutorials are available at the `Institute For Functional Imaging of Materials <http://ifim.ornl.gov/resources.html>`_
* For more information about our functions and classes, please see our `API <https://pycroscopy.github.io/pycroscopy/pycroscopy.html>`_
* We have many translators that transform data from popular microscope data formats to pycroscopy compatible .h5 files. We also have `tutorials to get you started on importing your data to pycroscopy <https://pycroscopy.github.io/pycroscopy/auto_examples/plot_translator_tutorial.html#sphx-glr-auto-examples-plot-translator-tutorial-py>`_.
* Details regarding the definition, implementation, and guidelines for pycroscopy's `data format <https://github.com/pycroscopy/pycroscopy/blob/master/docs/Data_Format.md>`_ for `HDF5 <https://github.com/pycroscopy/pycroscopy/blob/master/docs/Pycroscopy_Data_Formatting.pdf>`_ are also available.
Journal Papers using pycroscopy
-------------------------------
1. `Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography <https://www.nature.com/articles/srep26348>`_ by S. Jesse et al., Scientific Reports (2015); jupyter notebook `here 1 <ttps://raw.githubusercontent.com/pycroscopy/pycroscopy/master/jupyter_notebooks/Ptychography.ipynb>`_
1. `Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography <https://www.nature.com/articles/srep26348>`_ by S. Jesse et al., Scientific Reports (2015); jupyter notebook `here 1 <http://nbviewer.jupyter.org/github/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/Ptychography.ipynb>`_
 
2. `Rapid mapping of polarization switching through complete information acquisition <http://www.nature.com/articles/ncomms13290>`_ by S. Somnath et al., Nature Communications (2016); jupyter notebook `here 2 <ttps://raw.githubusercontent.com/pycroscopy/pycroscopy/master/jupyter_notebooks/G_mode_filtering.ipynb>`_
2. `Rapid mapping of polarization switching through complete information acquisition <http://www.nature.com/articles/ncomms13290>`_ by S. Somnath et al., Nature Communications (2016); jupyter notebook `here 2 <http://nbviewer.jupyter.org/github/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/G_mode_filtering.ipynb>`_
 
3. `Improving superconductivity in BaFe2As2-based crystals by cobalt clustering and electronic uniformity <http://www.nature.com/articles/s41598-017-00984-1>`_ by L. Li et al., Scientific Reports (2017); jupyter notebook `here 3 <ttps://raw.githubusercontent.com/pycroscopy/pycroscopy/master/jupyter_notebooks/STS_LDOS.ipynb>`_
3. `Improving superconductivity in BaFe2As2-based crystals by cobalt clustering and electronic uniformity <http://www.nature.com/articles/s41598-017-00984-1>`_ by L. Li et al., Scientific Reports (2017); jupyter notebook `here 3 <http://nbviewer.jupyter.org/github/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/STS_LDOS.ipynb>`_
 
4. `Direct Imaging of the Relaxation of Individual Ferroelectric Interfaces in a Tensile-Strained Film <http://onlinelibrary.wiley.com/doi/10.1002/aelm.201600508/full>`_ by L. Li et al.; Advanced Electronic Materials (2017), jupyter notebook `here 4 <https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/jupyter_notebooks/BE_Processing.ipynb>`_
4. `Direct Imaging of the Relaxation of Individual Ferroelectric Interfaces in a Tensile-Strained Film <http://onlinelibrary.wiley.com/doi/10.1002/aelm.201600508/full>`_ by L. Li et al.; Advanced Electronic Materials (2017), jupyter notebook `here 4 <http://nbviewer.jupyter.org/github/pycroscopy/pycroscopy/blob/master/jupyter_notebooks/BE_Processing.ipynb>`_
5. Many more coming soon....
International conferences and workshops using pycroscopy
--------------------------------------------------------
* Aug 8 2017 @ 10:45 AM - Microscopy and Microanalysis conference - poster session
* Aug 9 2017 @ 8:30 - 10:00 AM - Microscopy and Microanalysis conference; X40 - Tutorial session on `Large Scale Data Acquisition and Analysis for Materials Imaging and Spectroscopy <http://microscopy.org/MandM/2017/program/tutorials.cfm>`_ by S. Jesse and S. V. Kalinin
* Oct 31 2017 @ 6:30 PM - American Vacuum Society conference; Session: SP-TuP1; poster 1641
* Dec 2017 - Materials Research Society conference
News
----
* Oct 31 2017 @ 6:30 PM - American Vacuum Society conference; Session: SP-TuP1; poster 1641
* Aug 9 2017 @ 8:30 - 10:00 AM - Microscopy and Microanalysis conference; X40 - Tutorial session on `Large Scale Data Acquisition and Analysis for Materials Imaging and Spectroscopy <http://microscopy.org/MandM/2017/program/tutorials.cfm>`_ by S. Jesse and S. V. Kalinin
* Aug 8 2017 @ 10:45 AM - Microscopy and Microanalysis conference - poster session
* Apr 2017 - Lecture on `atom finding <https://physics.appstate.edu/events/aberration-corrected-stem-teaching-machines-and-atomic-forge>`_
* Dec 2016 - Poster + `abstract <https://mrsspring.zerista.com/poster/member/85350>`_ at the 2017 Spring Materials Research Society (MRS) conference
Contact us
----------
* We are interested in collaborating with industry members to integrate pycroscopy into instrumentation or analysis software.
* We are interested in collaborating with industry members to integrate pycroscopy into instrumentation or analysis software, and can help in exporting data to pycroscopy compatible .h5 files.
* We can work with you to convert your file formats into pycroscopy compatible HDF5 files and help you get started with data analysis.
* Join our slack project at https://pycroscopy.slack.com to discuss pycroscopy
* Feel free to get in touch with us at pycroscopy (at) gmail [dot] com
......
......@@ -46,11 +46,11 @@ Done:
* How to write (back) to H5
* Spectral Unmixing with pycroscopy
* Basic introduction to loading data in pycroscopy
* Handling multidimensional (6D) datasets
* Visualizing data (interactively using widgets) (needs some tiny automation in the end)
Pending:
* Handling multidimensional (6D) datasets - work in progress
* Visualizing data (interactively using widgets) - yet to begin
* How to write your own parallel computing function using the (yet to be written) process module
......
......@@ -119,6 +119,7 @@ packages:
:template: module.rst
.. toctree::
Data_Format
pycroscopy
auto_examples/index
......
......@@ -15,7 +15,7 @@ Introduction
In pycroscopy, all position dimensions of a dataset are collapsed into the first dimension and all other
(spectroscopic) dimensions are collapsed to the second dimension to form a two dimensional matrix. The ancillary
matricies, namely the spectroscopic indices and values matrix as well as the position indicies and values matrices
matrices, namely the spectroscopic indices and values matrix as well as the position indices and values matrices
will be essential for reshaping the data back to its original N dimensional form and for slicing multidimensional
datasets.
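The flattening convention described here can be sketched with plain numpy. The dimension names and sizes below are hypothetical, not taken from the example file; the point is only that the collapse to 2D and the reshape back are lossless as long as the dimension sizes and ordering are recorded (which is what the ancillary matrices do):

```python
import numpy as np

# Hypothetical 4D dataset: a 3 x 4 spatial grid, with a 5-step bias sweep
# measured over 6 frequency bins at every location
n_y, n_x, n_bias, n_freq = 3, 4, 5, 6
nd_data = np.random.rand(n_y, n_x, n_bias, n_freq)

# Collapse the position dimensions into axis 0 and the spectroscopic
# dimensions into axis 1, giving the two dimensional main dataset
main_2d = nd_data.reshape(n_y * n_x, n_bias * n_freq)

# Knowing the per-dimension sizes and ordering, the inverse reshape
# recovers the original N dimensional form exactly
restored = main_2d.reshape(n_y, n_x, n_bias, n_freq)
assert np.array_equal(restored, nd_data)
```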
......@@ -64,6 +64,8 @@ if os.path.exists(h5_path):
os.remove(h5_path)
_ = wget.download(url, h5_path, bar=None)
#########################################################################
# Open the file in read-only mode
h5_file = h5py.File(h5_path, mode='r')
......@@ -93,12 +95,12 @@ h5_pos_val = px.hdf_utils.getAuxData(h5_main, 'Position_Values')[0]
#
# The position datasets are shaped as [spatial points, dimension] while the spectroscopic datasets are shaped as
# [dimension, spectral points]. Clearly the first axis of the position dataset and the second axis of the spectroscopic
# datasets match the correponding sizes of the main dataset.
# datasets match the corresponding sizes of the main dataset.
#
# Again, the sum of the position and spectroscopic dimensions results in the 6 dimensions originally described above.
#
# Essentially, there is a unique combination of position and spectroscopic parameters for each cell in the two
# dimensionam main dataset. The interactive widgets below illustrate this point. The first slider represents the
# dimensional main dataset. The interactive widgets below illustrate this point. The first slider represents the
# position dimension while the second represents the spectroscopic dimension. Each position index can be decoded
# to a set of X and Y indices and values while each spectroscopic index can be decoded into a set of frequency,
# DC offset, field, and FORC parameters.
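The decoding described above is essentially an unravelling of a flat index over the per-dimension sizes. A minimal numpy sketch, with dimension sizes that are illustrative rather than those of the example file:

```python
import numpy as np

# Illustrative dimension sizes, ordered slowest to fastest varying
pos_dims = (3, 5)        # (Y, X)
spec_dims = (2, 4, 6)    # (FORC, DC offset, frequency)

# A flat position index decodes into one index per position dimension
y_ind, x_ind = np.unravel_index(7, pos_dims)
assert (y_ind, x_ind) == (1, 2)

# Likewise, a flat spectroscopic index decodes into one index per
# spectroscopic dimension
forc_ind, dc_ind, freq_ind = np.unravel_index(29, spec_dims)
assert (forc_ind, dc_ind, freq_ind) == (1, 0, 5)
```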
......@@ -110,25 +112,26 @@ print('Spectroscopic Datasets of shape:', h5_spec_ind.shape)
spec_labels = px.hdf_utils.get_formatted_labels(h5_spec_ind)
pos_labels = px.hdf_utils.get_formatted_labels(h5_pos_ind)
def myfun(pos_index, spec_index):
for dim_ind, dim_name in enumerate(pos_labels):
print(dim_name,':',h5_pos_ind[pos_index, dim_ind])
print(dim_name, ':', h5_pos_ind[pos_index, dim_ind])
for dim_ind, dim_name in enumerate(spec_labels):
print(dim_name,':',h5_spec_ind[dim_ind, spec_index])
interact(myfun, pos_index=(0,h5_main.shape[0]-1, 1), spec_index=(0,h5_main.shape[1]-1, 1));
print(dim_name, ':', h5_spec_ind[dim_ind, spec_index])
interact(myfun, pos_index=(0, h5_main.shape[0]-1, 1), spec_index=(0, h5_main.shape[1]-1, 1))
#########################################################################
# Visualizing the ancillary datasets
# ==================================
#
# The plots below show how the position and spectrocopic dimensions vary. Due to the high dimensionality of the
# The plots below show how the position and spectroscopic dimensions vary. Due to the high dimensionality of the
# spectroscopic dimensions, the variation of each dimension has been plotted separately.
#
# How we interpret these plots:
# =============================
#
# **Positions**: For each Y index, the X index ramps up from 0 to 4 and repeats. Essentially, this means that for
# a given Y index, there were multiple measurments (different values of X)
# a given Y index, there were multiple measurements (different values of X)
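That ramp-and-repeat pattern is what a repeat over the slow dimension and a tile over the fast dimension produce. A small sketch assuming a hypothetical 4 x 5 grid (sizes are illustrative only):

```python
import numpy as np

# Illustrative grid: 4 Y rows (slow dimension) by 5 X columns (fast dimension)
n_y, n_x = 4, 5
y_ind = np.repeat(np.arange(n_y), n_x)   # 0 0 0 0 0 1 1 1 1 1 ...
x_ind = np.tile(np.arange(n_x), n_y)     # 0 1 2 3 4 0 1 2 3 4 ...

# For every Y index, X ramps from 0 to 4 and then repeats
assert list(x_ind[:6]) == [0, 1, 2, 3, 4, 0]
assert list(y_ind[:6]) == [0, 0, 0, 0, 0, 1]
```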
#
# **Spectroscopic**: The plot for `FORC` shows that the next fastest dimension - `DC offset` was varied 6 times.
# Correspondingly, the plot for `DC offset` shows that this dimension ramps up from 0 to a little less than
......
......@@ -78,82 +78,44 @@ import pycroscopy as px
# ==========================================
# We will begin by downloading the data file from Github, followed by reshaping and decimation of the dataset
data_file_path = 'temp_um.h5'
# download the data file from Github:
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/NanoIR.txt'
data_file_path = 'temp.txt'
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'
_ = wget.download(url, data_file_path, bar=None)
#data_file_path = px.io.uiGetFile(filter='Anasys NanoIR text export (*.txt)')
# Load the data from file to memory
data_mat = np.loadtxt(data_file_path, delimiter ='\t', skiprows =1 )
print('Data currently of shape:', data_mat.shape)
hdf = px.ioHDF5(data_file_path)
h5_file = hdf.file
# Only every fifth column is of interest (position)
data_mat = data_mat[:, 1::5]
# The data is structured as [wavelength, position]
# nans cannot be handled in most of these decompositions. So set them to be zero.
data_mat[np.isnan(data_mat)]=0
# Finally, taking the transpose of the matrix to match [position, wavelength]
data_mat = data_mat.T
num_pos = data_mat.shape[0]
spec_pts = data_mat.shape[1]
print('Data currently of shape:', data_mat.shape)
x_label = 'Spectral dimension'
y_label = 'Intensity (a.u.)'
#####################################################################################
# Convert to H5
# =============
# Now we will take our numpy array holding the data and use the NumpyTranslator in pycroscopy to
# write it to an h5 file.
print('Contents of data file:')
print('----------------------')
px.hdf_utils.print_tree(h5_file)
print('----------------------')
folder_path, file_name = os.path.split(data_file_path)
file_name = file_name[:-4] + '_'
h5_meas_grp = h5_file['Measurement_000']
h5_path = os.path.join(folder_path, file_name + '.h5')
# Extracting some basic parameters:
num_rows = px.hdf_utils.get_attr(h5_meas_grp,'grid_num_rows')
num_cols = px.hdf_utils.get_attr(h5_meas_grp,'grid_num_cols')
# Use NumpyTranslator to convert the data to h5
tran = px.io.NumpyTranslator()
h5_path = tran.translate(h5_path, data_mat, num_pos, 1, scan_height=spec_pts, scan_width=1,
qty_name='Intensity', data_unit='a.u', spec_name=x_label,
spatial_unit='a.u.', data_type='NanoIR')
# Getting a reference to the main dataset:
h5_main = h5_meas_grp['Channel_000/Raw_Data']
h5_file = h5py.File(h5_path, mode='r+')
# See if a tree has been created within the hdf5 file:
px.hdf_utils.print_tree(h5_file)
# Extracting the X axis - vector of frequencies
h5_spec_vals = px.hdf_utils.getAuxData(h5_main,'Spectroscopic_Values')[-1]
freq_vec = np.squeeze(h5_spec_vals.value) * 1E-3
#####################################################################################
# Extracting the data and parameters
# ==================================
# All necessary information to understand, plot, analyze, and process the data is present in the H5 file now. Here, we show how to extract some basic parameters to plot the data
print('Data currently of shape:', h5_main.shape)
h5_main = h5_file['Measurement_000/Channel_000/Raw_Data']
h5_spec_vals = px.hdf_utils.getAuxData(h5_main,'Spectroscopic_Values')[0]
h5_pos_vals = px.hdf_utils.getAuxData(h5_main,'Position_Values')[0]
x_label = px.hdf_utils.get_formatted_labels(h5_spec_vals)[0]
y_label = px.hdf_utils.get_formatted_labels(h5_pos_vals)[0]
descriptor = px.hdf_utils.get_data_descriptor(h5_main)
x_label = 'Frequency (kHz)'
y_label = 'Amplitude (a.u.)'
#####################################################################################
# Visualize the Amplitude Data
# ============================
# Note that we are not hard-coding / writing any tick labels / axis labels by hand. All the necessary information was present in the H5 file
fig, axis = plt.subplots(figsize=(8,5))
px.plot_utils.plot_map(axis, h5_main, cmap='inferno')
axis.set_title('Raw data - ' + descriptor)
axis.set_xlabel(x_label)
axis.set_ylabel(y_label)
vec = h5_spec_vals[0]
cur_x_ticks = axis.get_xticks()
for ind in range(1,len(cur_x_ticks)-1):
cur_x_ticks[ind] = h5_spec_vals[0, ind]
axis.set_xticklabels([str(val) for val in cur_x_ticks]);
# Note that we are not hard-coding / writing any tick labels / axis labels by hand.
# All the necessary information was present in the H5 file
px.viz.be_viz_utils.jupyter_visualize_be_spectrograms(h5_main)
#####################################################################################
# 1. Singular Value Decomposition (SVD)
......@@ -170,23 +132,46 @@ axis.set_xticklabels([str(val) for val in cur_x_ticks]);
# * V - Eigenvectors sorted by variance in descending order
# * U - corresponding abundance maps
# * S - Variance or importance of each of these components
#
# Advantage of pycroscopy:
# ------------------------
# Notice that we are working with a complex valued dataset. Passing the complex values as is to SVD would result in
# complex valued eigenvectors / endmembers as well as abundance maps. Complex valued abundance maps are not physical.
# Thus, one would need to restructure the data such that it is real-valued only.
#
# One solution is to stack the real value followed by the magnitude of the imaginary component before passing to SVD.
# After SVD, the real-valued eigenvectors would need to be treated as the concatenation of the real and imaginary
# components. So, the eigenvectors would need to be restructured to get back the complex valued eigenvectors.
#
# **Pycroscopy handles all these data transformations (both for the source dataset and the eigenvectors)
# automatically.** In general, pycroscopy handles compound / complex valued datasets everywhere possible
#
# Furthermore, while it is not discussed in this example, pycroscopy also writes the results from SVD back to
# the same source h5 file including all relevant links to the source dataset and other ancillary datasets
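The stacking transform described above can be sketched in plain numpy. This is a simplified illustration on a random complex matrix, stacking the raw imaginary part rather than its magnitude, and is not pycroscopy's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_spec = 16, 8   # hypothetical dataset sizes
cmplx_data = (rng.standard_normal((n_pos, n_spec))
              + 1j * rng.standard_normal((n_pos, n_spec)))

# Stack real and imaginary parts side by side so SVD only sees real values
real_stacked = np.hstack([cmplx_data.real, cmplx_data.imag])   # (16, 16)
U, S, Vh = np.linalg.svd(real_stacked, full_matrices=False)

# The abundance maps (U) stay real-valued and physical; each eigenvector is
# re-assembled from its real half and its imaginary half
V_cmplx = Vh[:, :n_spec] + 1j * Vh[:, n_spec:]
assert not np.iscomplexobj(U)
assert V_cmplx.shape == (16, 8)
```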
h5_svd_group = px.doSVD(h5_main, num_comps=256)
h5_svd_grp = px.processing.doSVD(h5_main)
h5_u = h5_svd_group['U']
h5_v = h5_svd_group['V']
h5_s = h5_svd_group['S']
U = h5_svd_grp['U']
S = h5_svd_grp['S']
V = h5_svd_grp['V']
# Since the two spatial dimensions (x, y) have been collapsed to one, we need to reshape the abundance maps:
abun_maps = np.reshape(h5_u[:,:25], (num_rows, num_cols, -1))
# Visualize the variance / statistical importance of each component:
px.plot_utils.plotScree(S, title='Note the exponential drop of variance with number of components')
px.plot_utils.plotScree(h5_s, title='Note the exponential drop of variance with number of components')
# Visualize the eigenvectors:
px.plot_utils.plot_loops(np.arange(spec_pts), V, x_label=x_label, y_label=y_label, plots_on_side=3,
subtitles='Component', title='SVD Eigenvectors', evenly_spaced=False);
first_evecs = h5_v[:9, :]
px.plot_utils.plot_loops(freq_vec, np.abs(first_evecs), x_label=x_label, y_label=y_label, plots_on_side=3,
subtitles='Component', title='SVD Eigenvectors (Amplitude)', evenly_spaced=False)
px.plot_utils.plot_loops(freq_vec, np.angle(first_evecs), x_label=x_label, y_label='Phase (rad)', plots_on_side=3,
subtitles='Component', title='SVD Eigenvectors (Phase)', evenly_spaced=False)
# Visualize the abundance maps:
px.plot_utils.plot_loops(np.arange(num_pos), np.transpose(U), plots_on_side=3,
subtitles='Component', title='SVD Abundances', evenly_spaced=False);
px.plot_utils.plot_map_stack(abun_maps, num_comps=9, heading='SVD Abundance Maps',
color_bar_mode='single', cmap='inferno')
#####################################################################################
# 2. KMeans Clustering
......@@ -200,25 +185,14 @@ px.plot_utils.plot_loops(np.arange(num_pos), np.transpose(U), plots_on_side=3,
#
# Set the number of clusters below
num_comps = 4
num_clusters = 4
estimators = px.Cluster(h5_main, 'KMeans', num_comps=num_comps)
estimators = px.Cluster(h5_main, 'KMeans', n_clusters=num_clusters)
h5_kmeans_grp = estimators.do_cluster(h5_main)
h5_kmeans_labels = h5_kmeans_grp['Labels']
h5_kmeans_mean_resp = h5_kmeans_grp['Mean_Response']
fig, axes = plt.subplots(ncols=2, figsize=(18, 8))
for clust_ind, end_member in enumerate(h5_kmeans_mean_resp):
axes[0].plot(end_member+(500*clust_ind), label='Cluster #' + str(clust_ind))
axes[0].legend(bbox_to_anchor=[1.05, 1.0], fontsize=12)
axes[0].set_title('K-Means Cluster Centers', fontsize=14)
axes[0].set_xlabel(x_label, fontsize=14)
axes[0].set_ylabel(y_label, fontsize=14)
axes[1].plot(h5_kmeans_labels)
axes[1].set_title('KMeans Labels', fontsize=14)
axes[1].set_xlabel('Position', fontsize=14)
axes[1].set_ylabel('Label')
px.plot_utils.plot_cluster_h5_group(h5_kmeans_grp)
#####################################################################################
# 3. Non-negative Matrix Factorization (NMF)
......@@ -232,20 +206,18 @@ axes[1].set_ylabel('Label')
num_comps = 4
# Make sure the data is non-negative:
data_mat[h5_main[()] < 0] = 0
# get the non-negative portion of the dataset
data_mat = np.abs(h5_main)
model = NMF(n_components=num_comps, init='random', random_state=0)
model.fit(data_mat)
fig, axis = plt.subplots()
for comp_ind, end_member in enumerate(model.components_):
axis.plot(end_member + comp_ind * 50,
label = 'NMF Component #' + str(comp_ind))
fig, axis = plt.subplots(figsize=(5.5, 5))
px.plot_utils.plot_line_family(axis, freq_vec, model.components_, label_prefix='NMF Component #')
axis.set_xlabel(x_label, fontsize=12)
axis.set_ylabel(y_label, fontsize=12)
axis.set_title('NMF Components', fontsize=14)
axis.legend(bbox_to_anchor=[1.0,1.0], fontsize=12);
axis.legend(bbox_to_anchor=[1.0, 1.0], fontsize=12)
#####################################################################################
# 4. NFINDR
......@@ -278,13 +250,14 @@ axis.legend(bbox_to_anchor=[1.0,1.0], fontsize=12);
num_comps = 4
# get the amplitude component of the dataset
data_mat = np.abs(h5_main)
nfindr_results = eea.nfindr.NFINDR(data_mat, num_comps)  # Find endmembers
end_members = nfindr_results[0]
fig, axis = plt.subplots()
for comp_ind, end_member in enumerate(end_members):
axis.plot(end_member + comp_ind * 1000,
label = 'NFINDR Component #' + str(comp_ind))
fig, axis = plt.subplots(figsize=(5.5, 5))
px.plot_utils.plot_line_family(axis, freq_vec, end_members, label_prefix='NFINDR endmember #')
axis.set_title('NFINDR Endmembers', fontsize=14)
axis.set_xlabel(x_label, fontsize=12)
axis.set_ylabel(y_label, fontsize=12)
......@@ -295,21 +268,14 @@ fcls = amp.FCLS()
# Find abundances:
amap = fcls.map(data_mat[np.newaxis, :, :], end_members)
# Reshaping amap to match those of conventional endmembers
amap = np.squeeze(amap).T
# Reshaping amap
amap = np.reshape(np.squeeze(amap), (num_rows, num_cols, -1))
fig2, axis2 = plt.subplots()
for comp_ind, abundance in enumerate(amap):
axis2.plot(abundance, label = 'NFIND R Component #' + str(comp_ind) )
axis2.set_title('Abundances', fontsize=14)
axis2.set_xlabel(x_label, fontsize=12)
axis2.set_ylabel('Abundance (a. u.)', fontsize=12)
axis2.legend(bbox_to_anchor=[1.0,1.0], fontsize=12);
px.plot_utils.plot_map_stack(amap, heading='NFINDR Abundance maps', cmap=plt.cm.inferno,
color_bar_mode='single');
#####################################################################################
# Delete the temporarily downloaded file
os.remove(data_file_path)
# Close and delete the h5_file
h5_file.close()
os.remove(h5_path)
os.remove(data_file_path)
......@@ -88,7 +88,7 @@ import pycroscopy as px
# ===========================
# Download the data file from Github:
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/STS.asc'
data_file_path = 'temp.asc'
data_file_path = 'temp_1.asc'
if os.path.exists(data_file_path):
os.remove(data_file_path)
_ = wget.download(url, data_file_path, bar=None)
......@@ -240,4 +240,4 @@ with h5py.File(h5_path, mode='r') as h5_file:
# Remove both the original and translated files:
os.remove(h5_path)
os.remove(data_file_path)
\ No newline at end of file
os.remove(data_file_path)
......@@ -102,7 +102,7 @@ from pycroscopy.io.translators.omicron_asc import AscTranslator
# pycroscopy H5 file.
# download the raw data file from Github:
data_file_path = 'temp.asc'
data_file_path = 'temp_2.asc'
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/STS.asc'
if os.path.exists(data_file_path):
os.remove(data_file_path)
......