Commit 80ab5609 authored by Unknown

Merge remote-tracking branch 'origin/sxm_dev' into sxm_dev

parents a51e415f 6d4f362a
@@ -7,7 +7,7 @@ Pycroscopy
What?
------
* pycroscopy is a `python <http://www.python.org/>`_ package for processing, analyzing, and visualizing multidimensional imaging and spectroscopy data.
-* pycroscopy uses the **Universal Spectroscopy and Imaging Data (USID)** `model <https://pycroscopy.github.io/pyUSID/data_format.html>`_ as its foundation, which:
+* pycroscopy uses the **Universal Spectroscopy and Imaging Data (USID)** `model <../../USID/index.html>`_ as its foundation, which:
* facilitates the representation of any spectroscopic or imaging data regardless of its origin, modality, size, or dimensionality.
* enables the development of instrument- and modality- agnostic data processing and analysis algorithms.
@@ -58,7 +58,7 @@ As we see it, there are a few opportunities in scientific imaging (that surely a
How?
-----
-* pycroscopy uses the `Universal Spectroscopy and Imaging Data model <https://pycroscopy.github.io/pyUSID/data_format.html>`_ that facilitates the storage of data, regardless
+* pycroscopy uses the `Universal Spectroscopy and Imaging Data model <../../USID/index.html>`_ that facilitates the storage of data, regardless
of dimensionality (conventional 1D spectra and 2D images to 9D hyperspectral datasets and beyond!) or instrument of origin (AFMs, STEMs, Raman spectroscopy etc.).
* This generalized representation of data allows us to write a single and
generalized version of analysis and processing functions that can be applied to any kind of data.
@@ -17,6 +17,7 @@ Contributors
* `@ssomnath <https://github.com/ssomnath>`_ (Suhas Somnath)
* `@CompPhysChris <https://github.com/CompPhysChris>`_ (Chris R. Smith)
* `@nlaanait <https://github.com/nlaanait>`_ (Numan Laanait)
* `@stephenjesse <https://github.com/stephenjesse>`_ (Stephen Jesse)
* `@ianton86 <https://github.com/ianton86>`_ (Anton Ievlev)
* `@ramav87 <https://github.com/ramav87>`_ (Rama K. Vasudevan)
* `@Liambcollins <https://github.com/Liambcollins>`_ (Liam Collins)
@@ -27,9 +28,8 @@ Contributors
* `@nmosto <https://github.com/nmosto>`_ (Nick Mostovych)
* `@rajgiriUW <https://github.com/rajgiriUW>`_ (Raj Giridharagopal)
* `@donpatrice <https://github.com/donpatrice>`_ (Patrik Marschalik)
* Arpitha Nagaraj for our logo
-* and many more
+* and `many more <https://github.com/pycroscopy/pycroscopy/graphs/contributors>`_
Acknowledgements
----------------
@@ -8,7 +8,7 @@ Getting Started
* See `tutorials <https://pycroscopy.github.io/pyUSID/auto_examples/index.html>`_ to get started on using and writing your own pyUSID functions that power pycroscopy
* We already have `many translators <./translators.html>`_ that transform data from popular microscope data formats to pycroscopy compatible HDF5 files.
* pyUSID also has a `tutorial <https://pycroscopy.github.io/pyUSID/auto_examples/beginner/plot_numpy_translator.html>`_ to get you started on importing your other data to pycroscopy.
-* Details regarding the definition and guidelines for the Universal Spectroscopy and Imaging Data `(USID) <https://pycroscopy.github.io/pyUSID/data_format.html>`_ model and implementation in HDF5 are also available in pyUSID's documentation.
+* Details regarding the definition and guidelines for the Universal Spectroscopy and Imaging Data `(USID) <../../USID/index.html>`_ model and implementation in HDF5 are also available in pyUSID's documentation.
* Please see our document on the `organization of pycroscopy <./package_organization.html>`_ to find out more on what is where and why.
* If you are interested in contributing your code to pycroscopy, please look at our `guidelines <https://pycroscopy.github.io/pyUSID/contribution_guidelines.html>`_
* If you need detailed documentation on all our classes, functions, etc., please visit our `API <./api.html>`_
@@ -71,11 +71,19 @@ Workshops on pycroscopy
pycroscopy at International conferences
---------------------------------------
2018
~~~~
* Nov 25-30 2018 - poster at MRS Fall meeting at Boston
* Oct 25 2018 @ 6 PM - 8 PM - poster - NS-ThP19 - Room Hall B - at `American Vacuum Society 2018 meeting <http://www.avssymposium.org/Schedule/SessionSchedule.aspx?sessionCode=NS-ThP>`_
* May 16-18 2018 - Poster at `ORNL Software Expo <https://software.ornl.gov/expo/program>`_
* May 18 2018 - **Invited** `talk <https://github.com/pycroscopy/pycroscopy/blob/master/docs/USID_pyUSID_pycroscopy.pdf>`_ at `ImageXD <http://www.imagexd.org/programs/imagexd2018/>`_
* Feb 28 2018 - Webinar on `Jupyter for Supporting a Materials Imaging User Facility (and beyond) <https://www.exascaleproject.org/event/jupyter/>`_. see this `Youtube video <https://www.youtube.com/watch?v=aKah_O5OZdE&t=31m53s>`_
2017
~~~~
* Nov 29 2017 @ 8-10 PM - `Poster <https://mrsfall.zerista.com/event/member/432978>`_ at the Materials Research Society Fall 2017 Meeting
* Oct 31 2017 @ 6:30 PM - American Vacuum Society conference; Session: SP-TuP1; `poster 1641 <http://www2.avs.org/symposium2017/Papers/Paper_SP-TuP1.html>`_
* Aug 8 2017 @ 10:45 AM - Microscopy and Microanalysis conference - `poster <https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C6F6D85EF7367C058B66B4B709AD61ED/S1431927617001805a.pdf/pycroscopy_an_open_source_approach_to_microscopy_and_microanalysis_in_the_age_of_big_data_and_open_science.pdf>`_.
* Apr 2017 - Lecture on `atom finding <https://physics.appstate.edu/events/aberration-corrected-stem-teaching-machines-and-atomic-forge>`_
2016
~~~~
* Dec 2016 - Poster + `abstract <https://mrsspring.zerista.com/poster/member/85350>`_ at the 2017 Spring Materials Research Society (MRS) conference
Data Translators
=================
-* Pycroscopy uses ``Translators`` to extract data and metadata from files (often measurement data stored in instrument-generated proprietary file formats) and write them into `Universal Spectroscopy and Imaging Data (USID) HDF5 files <../../pyUSID/data_format.html>`_.
+* Pycroscopy uses ``Translators`` to extract data and metadata from files (often measurement data stored in instrument-generated proprietary file formats) and write them into `Universal Spectroscopy and Imaging Data (USID) HDF5 files <../../USID/index.html>`_.
* You can write your own ``Translator`` easily by following `this example <https://pycroscopy.github.io/pyUSID/auto_examples/beginner/plot_numpy_translator.html>`_ on our sister project's documentation.
* Below is a list of ``Translators`` already available in pycroscopy to translate data.
* These translators can be accessed via ``pycroscopy.io.translators`` or ``pycroscopy.translators``
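The translator workflow described above (read a proprietary record, emit a standardized structure) can be sketched in plain Python. The class and key names below are illustrative stand-ins, not pycroscopy's actual ``Translator`` API:

```python
# Minimal sketch of the translator pattern: read one proprietary record and
# emit a standardized, USID-like structure. Class and key names are
# illustrative stand-ins, not pycroscopy's actual API.
class MinimalTranslator:
    def translate(self, raw_record):
        # Map instrument-specific keys onto a standardized layout.
        return {
            'main_data': raw_record['signal'],
            'quantity': raw_record.get('channel_name', 'Unknown'),
            'units': raw_record.get('channel_units', 'a.u.'),
        }

raw = {'signal': [0.1, 0.2, 0.3], 'channel_name': 'Height', 'channel_units': 'nm'}
standardized = MinimalTranslator().translate(raw)
```

Real translators additionally write the standardized result into an HDF5 file via pyUSID's writing utilities.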
@@ -755,7 +755,7 @@ class BELoopFitter(Fitter):
if dim == self._fit_dim_name:
fit_dim_slice.append(slice(None))
fit_dim_slice[0] = idim
-elif dim in ['FORC', 'FORC_repeat']:
+elif dim in ['FORC', 'FORC_repeat', 'FORC_Cycle']:
continue
else:
fit_dim_slice.append(slice(0, 1))
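The slice-assembly logic in this hunk (keep the fit dimension whole, skip FORC-style dimensions, take only the first element of everything else) can be illustrated standalone. The dimension names below are hypothetical, and the `fit_dim_slice[0] = idim` assignment is omitted for brevity:

```python
def build_fit_slices(dim_names, fit_dim,
                     skip_dims=('FORC', 'FORC_repeat', 'FORC_Cycle')):
    """Keep the fit dimension whole, drop FORC-style dimensions entirely,
    and take only the first element along every other dimension."""
    slices = []
    for dim in dim_names:
        if dim == fit_dim:
            slices.append(slice(None))   # everything along the fit dimension
        elif dim in skip_dims:
            continue                     # FORC dimensions are excluded
        else:
            slices.append(slice(0, 1))   # first element only
    return tuple(slices)

fit_slices = build_fit_slices(['DC_Offset', 'FORC_Cycle', 'Field'], 'DC_Offset')
```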
@@ -56,11 +56,12 @@ class NanonisFile(object):
'sxm', or 'dat'.
"""
-if self.fname[-3:] == '3ds':
+_, fname_ext = os.path.splitext(self.fname)
+if fname_ext == '.3ds':
return 'grid'
-elif self.fname[-3:] == 'sxm':
+elif fname_ext == '.sxm':
return 'scan'
-elif self.fname[-3:] == 'dat':
+elif fname_ext == '.dat':
return 'spec'
else:
raise UnhandledFileError(
@@ -760,5 +761,6 @@ def _is_valid_file(fname, ext):
"""
Detect if invalid file is being initialized by class.
"""
-if fname[-3:] != ext:
+_, fname_ext = os.path.splitext(fname)
+if fname_ext[1:] != ext:
raise UnhandledFileError('{} is not a {} file'.format(fname, ext))
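`os.path.splitext` is more robust than slicing the last three characters of a filename: it handles extensions of any length and always includes the leading dot. A standalone sketch of the corrected detection logic (the mapping mirrors the hunks above but is a simplified stand-in for `NanonisFile`):

```python
import os

def detect_filetype(fname):
    """Classify a Nanonis-style file by its extension (illustrative mapping)."""
    _, ext = os.path.splitext(fname)
    mapping = {'.3ds': 'grid', '.sxm': 'scan', '.dat': 'spec'}
    if ext not in mapping:
        raise ValueError('{} is not a supported file'.format(fname))
    return mapping[ext]
```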
@@ -13,7 +13,7 @@ from igor import binarywave as bw
from pyUSID.io.translator import Translator, \
generate_dummy_main_parms # Because this class extends the abstract Translator class
-from pyUSID.io.write_utils import VALUES_DTYPE, Dimension
+from pyUSID.io.write_utils import VALUES_DTYPE, Dimension, clean_string_att
from pyUSID.io.hdf_utils import create_indexed_group, write_main_dataset, write_simple_attrs, write_ind_val_dsets
@@ -65,7 +65,7 @@ class IgorIBWTranslator(Translator):
# Get the data to figure out if this is an image or a force curve
images = ibw_wave.get('wData')
-if images.shape[2] != len(chan_labels):
+if images.shape[-1] != len(chan_labels):
chan_labels = chan_labels[1:] # for layer 0 null set errors in older AR software
if images.ndim == 3: # Image stack
@@ -98,11 +98,11 @@ class IgorIBWTranslator(Translator):
# Find the channel that corresponds to either Z sensor or Raw:
try:
chan_ind = chan_labels.index('ZSnsr')
-spec_data = np.atleast_2d(VALUES_DTYPE(images[chan_ind]))
+spec_data = VALUES_DTYPE(images[chan_ind]).squeeze()
except ValueError:
try:
chan_ind = chan_labels.index('Raw')
-spec_data = np.atleast_2d(VALUES_DTYPE(images[chan_ind]))
+spec_data = VALUES_DTYPE(images[chan_ind]).squeeze()
except ValueError:
# We don't expect to come here. If we do, spectroscopic values remains as is
spec_data = np.arange(images.shape[2])
@@ -127,6 +127,9 @@ class IgorIBWTranslator(Translator):
# Prepare the list of raw_data datasets
for chan_data, chan_name, chan_unit in zip(images, chan_labels, chan_units):
if verbose:
print('channel', chan_name)
print('unit', chan_unit)
chan_grp = create_indexed_group(meas_grp, 'Channel')
write_main_dataset(chan_grp, np.atleast_2d(chan_data), 'Raw_Data',
@@ -229,6 +232,8 @@ class IgorIBWTranslator(Translator):
chan = chan.decode(codec)
if chan.lower().rfind('trace') > 0:
labels[chan_ind] = chan[:chan.lower().rfind('trace') + 5]
else:
labels[chan_ind] = chan
# Figure out (default) units
if chan.startswith('Phase'):
default_units.append('deg')
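The label and unit inference in this hunk can be sketched as standalone helpers. These are simplified illustrations of the logic, not the translator's actual methods, and the `'a.u.'` fallback is an assumption:

```python
def clean_channel_label(chan):
    """Trim anything after 'Trace'/'Retrace' in a channel name; leave
    other names untouched (mirrors the rfind-based logic above)."""
    pos = chan.lower().rfind('trace')
    return chan[:pos + 5] if pos > 0 else chan

def default_unit(chan):
    # Guess a unit from the channel-name prefix; 'a.u.' is an assumed fallback.
    return 'deg' if chan.startswith('Phase') else 'a.u.'
```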
@@ -8,7 +8,7 @@ from __future__ import division, print_function, absolute_import
import numpy as np
import sys
-if sys.version_info.major == 3 and sys.version_info.minor == 6:
+if sys.version_info.major == 3 and sys.version_info.minor >= 6:
disable_histogram = True
else:
disable_histogram = False
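The corrected check still requires `major == 3`, so a hypothetical Python 4 would fall through to the `else` branch. Comparing `sys.version_info` against a tuple is the idiomatic, future-proof form:

```python
import sys

# Tuple comparison covers major and minor in one test and keeps working
# past the 3.x series, unlike checking the minor version alone.
disable_histogram = sys.version_info >= (3, 6)
```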
@@ -25,9 +25,20 @@ from pyUSID import USIDataset
class SVD(Process):
-def __init__(self, h5_main, num_components=None):
+def __init__(self, h5_main, num_components=None, **kwargs):
"""
Perform the SVD decomposition on the selected dataset and write the results to h5 file.
-super(SVD, self).__init__(h5_main)
Parameters
----------
h5_main : USIDataset
Dataset to be decomposed.
num_components : int, optional
Number of components to decompose h5_main into. Default None.
kwargs
Arguments to be sent to Process
"""
+super(SVD, self).__init__(h5_main, **kwargs)
self.process_name = 'SVD'
'''
@@ -42,7 +53,12 @@ class SVD(Process):
num_components = min(n_samples, n_features)
else:
num_components = min(n_samples, n_features, num_components)
self.num_components = num_components
# Check that we can actually compute the SVD with the selected number of components
self._check_available_mem()
self.parms_dict = {'num_components': num_components}
self.duplicate_h5_groups, self.partial_h5_groups = self._check_for_duplicates()
@@ -190,6 +206,44 @@ class SVD(Process):
h5_v.attrs[key] = svd_ref
def _check_available_mem(self):
"""
Check that there is enough memory to perform the SVD decomposition.
Returns
-------
sufficient_mem : bool
True if enough memory is available, False otherwise.
"""
if self.verbose:
print('Checking memory availability.')
n_samples, n_features = self.h5_main.shape
s_mem_per_comp = np.float32(0).itemsize
u_mem_per_comp = np.float32(0).itemsize * n_samples
v_mem_per_comp = self.h5_main.dtype.itemsize * n_features
mem_per_comp = s_mem_per_comp + u_mem_per_comp + v_mem_per_comp
avail_mem = 0.75 * self._max_mem_mb * 1024 ** 2
free_mem = avail_mem - self.h5_main.__sizeof__()
if free_mem <= 0:
error_message = 'Cannot load main dataset into memory.\n' + \
'Available memory is {}. Dataset needs {}.'.format(avail_mem,
self.h5_main.__sizeof__())
raise MemoryError(error_message)
if self.verbose:
print('Memory available for SVD is {}.'.format(free_mem))
print('Memory needed per component is {}.'.format(mem_per_comp))
cant_svd = (free_mem - self.num_components * mem_per_comp) <= 0
if cant_svd:
max_comps = int(free_mem // mem_per_comp)
error_message = 'Not enough free memory for performing SVD with requested number of components.\n' + \
'Maximum possible number of components is {}.'.format(max_comps)
raise MemoryError(error_message)
###############################################################################
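The per-component bookkeeping in `_check_available_mem` (one float32 singular value, a float32 column of U per sample, and a row of V in the source dtype per feature) can be reduced to a standalone estimate. Function names and the 4-byte float32 assumption below are illustrative:

```python
def svd_mem_per_component(n_samples, n_features, data_itemsize, f32=4):
    """Bytes one SVD component needs: a float32 singular value,
    a float32 column of U (one value per sample), and a row of V
    in the source dtype (one value per feature)."""
    return f32 + f32 * n_samples + data_itemsize * n_features

def max_components(free_bytes, n_samples, n_features, data_itemsize):
    # Largest number of components that fits in the remaining memory budget.
    return free_bytes // svd_mem_per_component(n_samples, n_features, data_itemsize)
```

For example, a 10 x 5 float32 dataset costs 4 + 40 + 20 = 64 bytes per component, so a 640-byte budget allows at most 10 components.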