Commit b6a34eea authored by Unknown

Adding gen_modules and auto_examples to documentation

parent d65b420e
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
An Example on writing examples
==============================
The example docstring will be printed as text along with the code of the example.
It should contain a description of the example code.
Only examples whose file names begin with plot_ will be run when generating the docs.
%% Cell type:code id: tags:
``` python
# Code source: Chris Smith
# License: MIT
print('I am an example')
```
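%% Cell type:markdown id: tags:
For reference, here is a minimal sketch (not part of the generated gallery) of what an executable gallery example could look like. Because its file name would begin with plot_, Sphinx-Gallery would run it and embed the resulting figure in the docs. The file name plot_sine_example.py and its contents are purely illustrative.
%% Cell type:code id: tags:
``` python
"""
A hypothetical plot_ example
============================
Files whose names begin with plot_ are executed by Sphinx-Gallery and any
figures they produce are embedded in the generated documentation.
"""
# Code source: illustrative sketch (hypothetical file: plot_sine_example.py)
# License: MIT
import numpy as np
import matplotlib.pyplot as plt

# A simple figure so that Sphinx-Gallery has something to capture
x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x))
plt.title('Figure produced by a plot_* example')
plt.show()
```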
"""
An Example on writing examples
==============================
The example docstring will be printed as text along with the code of the example.
It should contain a description of the example code.
Only examples whose file names begin with plot_ will be run when generating the docs.
"""
# Code source: Chris Smith
# License: MIT
print('I am an example')
.. _sphx_glr_auto_examples_example_example.py:
An Example on writing examples
==============================
The example docstring will be printed as text along with the code of the example.
It should contain a description of the example code.
Only examples whose file names begin with plot_ will be run when generating the docs.
.. code-block:: python
# Code source: Chris Smith
# License: MIT
print('I am an example')
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. container:: sphx-glr-footer
.. container:: sphx-glr-download
:download:`Download Python source code: example_example.py <example_example.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: example_example.ipynb <example_example.ipynb>`
.. rst-class:: sphx-glr-signature
`Generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_
:orphan:
==================
Pycroscopy Gallery
==================
.. raw:: html
<div class="sphx-glr-thumbcontainer" tooltip="The example docstring will be printed as text along with the code of the example. It should con...">
.. only:: html
.. figure:: /auto_examples/images/thumb/sphx_glr_example_example_thumb.png
:ref:`sphx_glr_auto_examples_example_example.py`
.. raw:: html
</div>
.. toctree::
:hidden:
/auto_examples/example_example
.. raw:: html
<div class="sphx-glr-thumbcontainer" tooltip="">
.. only:: html
.. figure:: /auto_examples/images/thumb/sphx_glr_microdata_example_thumb.png
:ref:`sphx_glr_auto_examples_microdata_example.py`
.. raw:: html
</div>
.. toctree::
:hidden:
/auto_examples/microdata_example
.. raw:: html
<div class="sphx-glr-thumbcontainer" tooltip="Conventionally, the h5py package is used to create, read, write, and modify h5 files.">
.. only:: html
.. figure:: /auto_examples/images/thumb/sphx_glr_load_dataset_example_thumb.png
:ref:`sphx_glr_auto_examples_load_dataset_example.py`
.. raw:: html
</div>
.. toctree::
:hidden:
/auto_examples/load_dataset_example
.. raw:: html
<div style='clear:both'></div>
.. container:: sphx-glr-footer
.. container:: sphx-glr-download
:download:`Download all examples in Python source code: auto_examples_python.zip <//home/challtdow/workspace/pycroscopy/docs/auto_examples/auto_examples_python.zip>`
.. container:: sphx-glr-download
:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//home/challtdow/workspace/pycroscopy/docs/auto_examples/auto_examples_jupyter.zip>`
.. rst-class:: sphx-glr-signature
`Generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
# Load Dataset
Conventionally, the h5py package is used to create, read, write, and modify h5 files.
Pycroscopy uses h5py to read hdf5 files and its ioHDF5 subpackage (a light wrapper around h5py)
to create files and write back to them. Please see the example on writing hdf5 files for more
information on creating and writing to h5 files using pycroscopy.
If you intend to modify or add data to an existing file, it is recommended that the file be opened
using ioHDF5. The same h5py handles can easily be obtained from ioHDF5.
Note that ioHDF5 always opens files in 'r+' mode, which allows the file to be modified.
In this example, we will load the Raw_Data dataset from the hdf5 file.
%% Cell type:code id: tags:
``` python
# Code source: pycroscopy
# License: MIT
from __future__ import division, print_function, absolute_import, unicode_literals
import h5py
try:
# This package is not part of anaconda and may need to be installed.
import wget
except ImportError:
import pip
pip.main(['install', 'wget'])
import wget
from os import remove
import pycroscopy as px
# Downloading the file from the pycroscopy Github project
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'
h5_path = 'temp.h5'
_ = wget.download(url, h5_path)
# h5_path = px.io_utils.uiGetFile(caption='Select .h5 file', filter='HDF5 file (*.h5)')
# Read the file using h5py:
h5_file1 = h5py.File(h5_path, 'r')
# Look at the contents of the file:
px.hdf_utils.print_tree(h5_file1)
# Access the "Raw_Data" dataset from its absolute path
h5_raw1 = h5_file1['Measurement_000/Channel_000/Raw_Data']
print('h5_raw1: ', h5_raw1)
# We can get to the same dataset through relative paths:
# Access the Measurement_000 group first
h5_meas_grp = h5_file1['Measurement_000']
print('h5_meas_grp:', h5_meas_grp)
# Now we can access the "Channel_000" group via the h5_meas_grp object
h5_chan_grp = h5_meas_grp['Channel_000']
# And finally, the same raw dataset can be accessed as:
h5_raw1_alias_1 = h5_chan_grp['Raw_Data']
print('h5_raw1_alias_1:', h5_raw1_alias_1)
# Another way to get this dataset is via functions written in pycroscopy:
h5_dsets = px.hdf_utils.getDataSet(h5_file1, 'Raw_Data')
print('h5_dsets:', h5_dsets)
# In this case, there is only a single Raw_Data, so we can access it simply as:
h5_raw1_alias_2 = h5_dsets[0]
print('h5_raw1_alias_2:', h5_raw1_alias_2)
# Let's just check to see if these are indeed aliases of the same dataset:
print('All aliases of the same dataset?', h5_raw1 == h5_raw1_alias_1 and h5_raw1 == h5_raw1_alias_2)
# Let's close this file
h5_file1.close()
# Load the dataset with pycroscopy
hdf = px.ioHDF5(h5_path)
# Getting the same h5py handle to the file:
h5_file2 = hdf.file
h5_raw2 = h5_file2['Measurement_000/Channel_000/Raw_Data']
print('h5_raw2:', h5_raw2)
h5_file2.close()
# Delete the temporarily downloaded h5 file:
remove(h5_path)
```
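%% Cell type:markdown id: tags:
As a quick, self-contained sketch (not part of the original example; the file name mode_check.h5 is hypothetical), note that the handle returned by ioHDF5 is an ordinary h5py.File, so its mode attribute can be inspected to confirm the 'r+' behavior mentioned above.
%% Cell type:code id: tags:
``` python
# Minimal sketch: create a throw-away h5 file with h5py, reopen it through
# ioHDF5 and check the mode reported by the underlying h5py.File handle.
import h5py
import numpy as np
import pycroscopy as px
from os import remove

demo_path = 'mode_check.h5'  # hypothetical temporary file
with h5py.File(demo_path, 'w') as h5_tmp:
    h5_tmp.create_dataset('Dummy_Data', data=np.arange(10))

hdf_demo = px.ioHDF5(demo_path)
print('Mode reported by h5py:', hdf_demo.file.mode)  # expected: 'r+'
hdf_demo.file.close()

# Clean up the temporary file
remove(demo_path)
```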
"""
============
Load Dataset
============
Conventionally, the h5py package is used to create, read, write, and modify h5 files.
Pycroscopy uses h5py to read hdf5 files and its ioHDF5 subpackage (a light wrapper around h5py)
to create files and write back to them. Please see the example on writing hdf5 files for more
information on creating and writing to h5 files using pycroscopy.
If you intend to modify or add data to an existing file, it is recommended that the file be opened
using ioHDF5. The same h5py handles can easily be obtained from ioHDF5.
Note that ioHDF5 always opens files in 'r+' mode, which allows the file to be modified.
In this example, we will load the Raw_Data dataset from the hdf5 file.
"""
# Code source: pycroscopy
# License: MIT
from __future__ import division, print_function, absolute_import, unicode_literals
import h5py
try:
# This package is not part of anaconda and may need to be installed.
import wget
except ImportError:
import pip
pip.main(['install', 'wget'])
import wget
from os import remove
import pycroscopy as px
# Downloading the file from the pycroscopy Github project
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'
h5_path = 'temp.h5'
_ = wget.download(url, h5_path)
# h5_path = px.io_utils.uiGetFile(caption='Select .h5 file', filter='HDF5 file (*.h5)')
# Read the file using h5py:
h5_file1 = h5py.File(h5_path, 'r')
# Look at the contents of the file:
px.hdf_utils.print_tree(h5_file1)
# Access the "Raw_Data" dataset from its absolute path
h5_raw1 = h5_file1['Measurement_000/Channel_000/Raw_Data']
print('h5_raw1: ', h5_raw1)
# We can get to the same dataset through relative paths:
# Access the Measurement_000 group first
h5_meas_grp = h5_file1['Measurement_000']
print('h5_meas_grp:', h5_meas_grp)
# Now we can access the "Channel_000" group via the h5_meas_grp object
h5_chan_grp = h5_meas_grp['Channel_000']
# And finally, the same raw dataset can be accessed as:
h5_raw1_alias_1 = h5_chan_grp['Raw_Data']
print('h5_raw1_alias_1:', h5_raw1_alias_1)
# Another way to get this dataset is via functions written in pycroscopy:
h5_dsets = px.hdf_utils.getDataSet(h5_file1, 'Raw_Data')
print('h5_dsets:', h5_dsets)
# In this case, there is only a single Raw_Data, so we can access it simply as:
h5_raw1_alias_2 = h5_dsets[0]
print('h5_raw1_alias_2:', h5_raw1_alias_2)
# Let's just check to see if these are indeed aliases of the same dataset:
print('All aliases of the same dataset?', h5_raw1 == h5_raw1_alias_1 and h5_raw1 == h5_raw1_alias_2)
# Let's close this file
h5_file1.close()
# Load the dataset with pycroscopy
hdf = px.ioHDF5(h5_path)
# Getting the same h5py handle to the file:
h5_file2 = hdf.file
h5_raw2 = h5_file2['Measurement_000/Channel_000/Raw_Data']
print('h5_raw2:', h5_raw2)
h5_file2.close()
# Delete the temporarily downloaded h5 file:
remove(h5_path)
.. _sphx_glr_auto_examples_load_dataset_example.py:
============
Load Dataset
============
Conventionally, the h5py package is used to create, read, write, and modify h5 files.
Pycroscopy uses h5py to read hdf5 files and its ioHDF5 subpackage (a light wrapper around h5py)
to create files and write back to them. Please see the example on writing hdf5 files for more
information on creating and writing to h5 files using pycroscopy.
If you intend to modify or add data to an existing file, it is recommended that the file be opened
using ioHDF5. The same h5py handles can easily be obtained from ioHDF5.
Note that ioHDF5 always opens files in 'r+' mode, which allows the file to be modified.
In this example, we will load the Raw_Data dataset from the hdf5 file.
.. code-block:: python
# Code source: pycroscopy
# License: MIT
from __future__ import division, print_function, absolute_import, unicode_literals
import h5py
try:
# This package is not part of anaconda and may need to be installed.
import wget
except ImportError:
import pip
pip.main(['install', 'wget'])
import wget
from os import remove
import pycroscopy as px
# Downloading the file from the pycroscopy Github project
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'
h5_path = 'temp.h5'
_ = wget.download(url, h5_path)
# h5_path = px.io_utils.uiGetFile(caption='Select .h5 file', filter='HDF5 file (*.h5)')
# Read the file using h5py:
h5_file1 = h5py.File(h5_path, 'r')
# Look at the contents of the file:
px.hdf_utils.print_tree(h5_file1)
# Access the "Raw_Data" dataset from its absolute path
h5_raw1 = h5_file1['Measurement_000/Channel_000/Raw_Data']
print('h5_raw1: ', h5_raw1)
# We can get to the same dataset through relative paths:
# Access the Measurement_000 group first
h5_meas_grp = h5_file1['Measurement_000']
print('h5_meas_grp:', h5_meas_grp)
# Now we can access the "Channel_000" group via the h5_meas_grp object
h5_chan_grp = h5_meas_grp['Channel_000']
# And finally, the same raw dataset can be accessed as:
h5_raw1_alias_1 = h5_chan_grp['Raw_Data']
print('h5_raw1_alias_1:', h5_raw1_alias_1)
# Another way to get this dataset is via functions written in pycroscopy:
h5_dsets = px.hdf_utils.getDataSet(h5_file1, 'Raw_Data')
print('h5_dsets:', h5_dsets)
# In this case, there is only a single Raw_Data, so we can access it simply as:
h5_raw1_alias_2 = h5_dsets[0]
print('h5_raw1_alias_2:', h5_raw1_alias_2)
# Let's just check to see if these are indeed aliases of the same dataset:
print('All aliases of the same dataset?', h5_raw1 == h5_raw1_alias_1 and h5_raw1 == h5_raw1_alias_2)
# Let's close this file
h5_file1.close()
# Load the dataset with pycroscopy
hdf = px.ioHDF5(h5_path)
# Getting the same h5py handle to the file:
h5_file2 = hdf.file
h5_raw2 = h5_file2['Measurement_000/Channel_000/Raw_Data']
print('h5_raw2:', h5_raw2)
h5_file2.close()
# Delete the temporarily downloaded h5 file:
remove(h5_path)
**Total running time of the script:** ( 0 minutes 0.000 seconds)
.. container:: sphx-glr-footer
.. container:: sphx-glr-download
:download:`Download Python source code: load_dataset_example.py <load_dataset_example.py>`
.. container:: sphx-glr-download
:download:`Download Jupyter notebook: load_dataset_example.ipynb <load_dataset_example.ipynb>`
.. rst-class:: sphx-glr-signature
`Generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_
%% Cell type:code id: tags:
``` python
%matplotlib inline
```
%% Cell type:markdown id: tags:
# Writing to hdf5 using the Microdata objects
%% Cell type:code id: tags:
``` python
# Code source: Chris Smith -- cq6@ornl.gov
# License: MIT
import numpy as np
import pycroscopy as px
```
%% Cell type:markdown id: tags:
Create some MicroDatasets and MicroDataGroups that will be written to the file.
With h5py, groups and datasets must be created from the top down,
but the Microdata objects allow us to build them in any order and link them later.
%% Cell type:code id: tags:
``` python
# First create some data
data1 = np.random.rand(5, 7)
```
%% Cell type:markdown id: tags:
Now use the array to build the dataset. This dataset will live
directly under the root of the file. The MicroDataset class also implements the
compression and chunking parameters from h5py.Dataset.
%% Cell type:code id: tags:
``` python
ds_main = px.MicroDataset('Main_Data', data=data1, parent='/')
```
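%% Cell type:markdown id: tags:
For reference, this is roughly what compression and chunking look like on a plain h5py.Dataset (a sketch in h5py, not the pycroscopy API; the file name compression_demo.h5 is hypothetical). The MicroDataset constructor exposes analogous parameters, as noted above.
%% Cell type:code id: tags:
``` python
# Sketch in plain h5py: gzip compression with a chunk shape of one row at a time.
import h5py
import numpy as np

with h5py.File('compression_demo.h5', 'w') as h5_demo:  # hypothetical file
    h5_demo.create_dataset('Main_Data', data=np.random.rand(5, 7),
                           compression='gzip', chunks=(1, 7))
```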
%% Cell type:markdown id: tags:
We can also create an empty dataset and write the values in later.
With this method, it is necessary to specify the dtype and maxshape keyword arguments.
%% Cell type:code id: tags:
``` python
ds_empty = px.MicroDataset('Empty_Data', data=[], dtype=np.float32, maxshape=[7, 5, 3])
```
%% Cell type:markdown id: tags:
We can also create groups and add other MicroData objects as children.
If the group's parent is not given, it will be set to root.
%% Cell type:code id: tags:
``` python
data_group = px.MicroDataGroup('Data_Group', parent='/')
root_group = px.MicroDataGroup('/')
# After creating the group, we then add an existing object as its child.
data_group.addChildren([ds_empty])
root_group.addChildren([ds_main, data_group])
```
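%% Cell type:markdown id: tags:
To see why the any-order construction above is convenient, here is a minimal sketch of building the same tree with h5py alone (illustrative only; the file name h5py_topdown_demo.h5 is hypothetical). With plain h5py, the file and the parent group must exist before any of their children can be created.
%% Cell type:code id: tags:
``` python
# Plain h5py requires top-down creation: file first, then group, then datasets.
import h5py
import numpy as np

with h5py.File('h5py_topdown_demo.h5', 'w') as h5_demo:  # hypothetical file
    h5_demo.create_dataset('Main_Data', data=np.random.rand(5, 7))
    h5_grp = h5_demo.create_group('Data_Group')           # parent group first
    h5_grp.create_dataset('Empty_Data', shape=(7, 5, 3), dtype=np.float32)
```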
%% Cell type:markdown id: tags:
The showTree method allows us to view the data structure before the hdf5 file is
created.
%% Cell type:code id: tags:
``` python
root_group.showTree()
```
%% Cell type:markdown id: tags:
Now that we have created the objects, we can write them to an hdf5 file.
%% Cell type:code id: tags:
``` python
# First we specify the path to the file
h5_path = 'microdata_test.h5'
# Then we use the ioHDF5 class to build the file from our objects.
hdf = px.ioHDF5(h5_path)
```
%% Cell type:markdown id: tags:
The writeData method builds the hdf5 file using the structure defined by the
MicroData objects. It returns a list of references to all h5py objects in the
new file.
%% Cell type:code id: tags:
``` python
h5_refs = hdf.writeData(root_group, print_log=True)
# We can use these references to get the h5py dataset and group objects
h5_main = px.io.hdf_utils.getH5DsetRefs(['Main_Data'], h5_refs)[0]
h5_empty = px.io.hdf_utils.getH5DsetRefs(['Empty_Data'], h5_refs)[0]
```
%% Cell type:markdown id: tags:
Compare the data in our dataset to the original.
%% Cell type:code id: tags: