Commit 65aaab37 authored by Unknown

Small documentation update: added docs/_build back to .gitignore

parent b6a34eea
@@ -69,6 +69,7 @@ instance/
# Sphinx documentation
docs/_static
docs/_build
# PyBuilder
target/
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "kernelspec": {
      "language": "python",
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.5.2",
      "mimetype": "text/x-python",
      "file_extension": ".py",
      "pygments_lexer": "ipython3",
      "nbconvert_exporter": "python",
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      }
    }
  },
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "%matplotlib inline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\nAn Example on writing examples\n==============================\n\nThe example docstring will be printed as text along with the code of the example.\nIt should contain the description of the example code.\nOnly examples that begin with plot_* will be run when generating the docs.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Code source: Chris Smith\n# License: MIT\nprint('I am an example')"
      ]
    }
  ]
}
\ No newline at end of file
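
For reference, notebooks like the one above are generated from plain Python example scripts in the sphinx-gallery style described in its markdown cell. A minimal sketch of such a script, assuming those conventions (the file name plot_example.py is illustrative; files must begin with plot_ to be executed during the docs build):

```python
"""
An Example on writing examples
==============================

The module docstring is rendered as text alongside the code of the example.
"""
# Code source: Chris Smith
# License: MIT

# Scripts whose names begin with plot_ are run when the docs are
# generated, so this output appears in the rendered example page.
print('I am an example')
```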
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "kernelspec": {
      "language": "python",
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.5.2",
      "mimetype": "text/x-python",
      "file_extension": ".py",
      "pygments_lexer": "ipython3",
      "nbconvert_exporter": "python",
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      }
    }
  },
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "%matplotlib inline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Load Dataset\n\n\nConventionally, the h5py package is used to create, read, write, and modify h5 files.\n\nPycroscopy uses h5py to read hdf5 files and its ioHDF5 subpackage (a light wrapper around h5py)\nto create / write back to the file. Please see the example on writing hdf5 files for more information on creating and\nwriting to h5 files using pycroscopy.\n\nIn the event that modification / addition of data to the existing file is of interest,\nit is recommended that the file be opened using ioHDF5. The same h5py handles can be obtained easily from ioHDF5.\nNote that ioHDF5 always opens files in 'r+' mode, which allows modification of the file.\n\nIn this example, we will be loading the Raw_Data dataset from the hdf5 file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Code source: pycroscopy\n# License: MIT\n\nfrom __future__ import division, print_function, absolute_import, unicode_literals\nimport h5py\ntry:\n    # This package is not part of anaconda and may need to be installed.\n    import wget\nexcept ImportError:\n    import pip\n    pip.main(['install', 'wget'])\n    import wget\n\nfrom os import remove\nimport pycroscopy as px\n\n# Downloading the file from the pycroscopy Github project\nurl = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'\nh5_path = 'temp.h5'\n_ = wget.download(url, h5_path)\n\n# h5_path = px.io_utils.uiGetFile(caption='Select .h5 file', filter='HDF5 file (*.h5)')\n\n# Read the file using h5py:\nh5_file1 = h5py.File(h5_path, 'r')\n\n# Look at the contents of the file:\npx.hdf_utils.print_tree(h5_file1)\n\n# Access the \"Raw_Data\" dataset from its absolute path\nh5_raw1 = h5_file1['Measurement_000/Channel_000/Raw_Data']\nprint('h5_raw1: ', h5_raw1)\n\n# We can get to the same dataset through relative paths:\n\n# Access the Measurement_000 group first\nh5_meas_grp = h5_file1['Measurement_000']\nprint('h5_meas_grp:', h5_meas_grp)\n\n# Now we can access the \"Channel_000\" group via the h5_meas_grp object\nh5_chan_grp = h5_meas_grp['Channel_000']\n\n# And finally, the same raw dataset can be accessed as:\nh5_raw1_alias_1 = h5_chan_grp['Raw_Data']\nprint('h5_raw1_alias_1:', h5_raw1_alias_1)\n\n# Another way to get this dataset is via functions written in pycroscopy:\nh5_dsets = px.hdf_utils.getDataSet(h5_file1, 'Raw_Data')\nprint('h5_dsets:', h5_dsets)\n\n# In this case, there is only a single Raw_Data, so we can access it simply as:\nh5_raw1_alias_2 = h5_dsets[0]\nprint('h5_raw1_alias_2:', h5_raw1_alias_2)\n\n# Let's just check to see if these are indeed aliases of the same dataset:\nprint('All aliases of the same dataset?', h5_raw1 == h5_raw1_alias_1 and h5_raw1 == h5_raw1_alias_2)\n\n# Let's close this file\nh5_file1.close()\n\n# Load the dataset with pycroscopy\nhdf = px.ioHDF5(h5_path)\n\n# Getting the same h5py handle to the file:\nh5_file2 = hdf.file\n\nh5_raw2 = h5_file2['Measurement_000/Channel_000/Raw_Data']\nprint('h5_raw2:', h5_raw2)\n\nh5_file2.close()\n\n# Delete the temporarily downloaded h5 file:\nremove(h5_path)"
      ]
    }
  ]
}
\ No newline at end of file
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "kernelspec": {
      "language": "python",
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.5.2",
      "mimetype": "text/x-python",
      "file_extension": ".py",
      "pygments_lexer": "ipython3",
      "nbconvert_exporter": "python",
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      }
    }
  },
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "%matplotlib inline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Writing to hdf5 using the Microdata objects\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Code source: Chris Smith -- cq6@ornl.gov\n# License: MIT\n\nimport numpy as np\nimport pycroscopy as px"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Create some MicroDatasets and MicroDataGroups that will be written to the file.\nWith h5py, groups and datasets must be created from the top down,\nbut the Microdata objects allow us to build them in any order and link them later.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# First create some data\ndata1 = np.random.rand(5, 7)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now use the array to build the dataset. This dataset will live\ndirectly under the root of the file. The MicroDataset class also implements the\ncompression and chunking parameters from h5py.Dataset.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "ds_main = px.MicroDataset('Main_Data', data=data1, parent='/')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can also create an empty dataset and write the values in later.\nWith this method, it is necessary to specify the dtype and maxshape keyword arguments.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "ds_empty = px.MicroDataset('Empty_Data', data=[], dtype=np.float32, maxshape=[7, 5, 3])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can also create groups and add other MicroData objects as children.\nIf the group's parent is not given, it will be set to root.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "data_group = px.MicroDataGroup('Data_Group', parent='/')\n\nroot_group = px.MicroDataGroup('/')\n\n# After creating the group, we then add an existing object as its child.\ndata_group.addChildren([ds_empty])\nroot_group.addChildren([ds_main, data_group])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The showTree method allows us to view the data structure before the hdf5 file is\ncreated.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "root_group.showTree()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now that we have created the objects, we can write them to an hdf5 file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# First we specify the path to the file\nh5_path = 'microdata_test.h5'\n\n# Then we use the ioHDF5 class to build the file from our objects.\nhdf = px.ioHDF5(h5_path)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The writeData method builds the hdf5 file using the structure defined by the\nMicroData objects. It returns a list of references to all h5py objects in the\nnew file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "h5_refs = hdf.writeData(root_group, print_log=True)\n\n# We can use these references to get the h5py dataset and group objects\nh5_main = px.io.hdf_utils.getH5DsetRefs(['Main_Data'], h5_refs)[0]\nh5_empty = px.io.hdf_utils.getH5DsetRefs(['Empty_Data'], h5_refs)[0]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Compare the data in our dataset to the original.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "print(np.allclose(h5_main[()], data1))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As mentioned above, we can now write to the Empty_Data object.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "data2 = np.random.rand(*h5_empty.shape)\nh5_empty[:] = data2[:]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now that we are using h5py objects, we must use flush to write the data to the file\nafter it has been altered.\nWe need the file object to do this. It can be accessed as an attribute of the\nhdf object.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "h5_file = hdf.file\nh5_file.flush()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now that we are done, we should close the file so that it can be accessed elsewhere.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "h5_file.close()"
      ]
    }
  ]
}
\ No newline at end of file
@@ -18,6 +18,8 @@ The package structure is simple, with 4 main modules:
Once a user converts their microscope's data format into HDF5 by extending some of the classes in `io`, they gain access to the rest of the utilities present in `pycroscopy.*`; a minimal conversion is sketched below.
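
A minimal sketch of that conversion, using only the `MicroDataset`, `MicroDataGroup`, and `ioHDF5` utilities demonstrated in the example notebooks in this commit (the file name and dataset layout here are illustrative, not a prescribed format):

.. code-block:: python

    import numpy as np
    import pycroscopy as px

    # Illustrative only: pretend this array was read from a microscope's raw file.
    raw_counts = np.random.rand(64, 128)

    # Describe the dataset and a group to hold it, then link them together.
    ds_raw = px.MicroDataset('Raw_Data', data=raw_counts)
    meas_grp = px.MicroDataGroup('Measurement_000', parent='/')
    meas_grp.addChildren([ds_raw])

    # Build the HDF5 file from this description, then close it.
    hdf = px.ioHDF5('converted_data.h5')
    h5_refs = hdf.writeData(meas_grp, print_log=True)
    hdf.file.close()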
2. Pycroscopy Modules
---------------------
.. currentmodule:: pycroscopy
.. autosummary::