{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Basic data analysis - Spectral Unmixing, KMeans, PCA, NFINDR etc.\n",
    "\n",
    "R. K. Vasudevan<sup>1,2</sup>, S. Somnath<sup>3</sup>\n",
    "\n",
    "* <sup>1</sup>Center for Nanophase Materials Sciences\n",
    "* <sup>2</sup>Institute for Functional Imaging of Materials \n",
    "* <sup>3</sup>Advanced Data and Workflows Group\n",
    "\n",
    "Oak Ridge National Laboratory, Oak Ridge TN 37831, USA\n",
    "\n",
    "#### In this notebook we load some spectral data, and perform basic data analysis, including:\n",
    "* KMeans Clustering\n",
    "* Non-negative Matrix Factorization\n",
    "* Singular Value Decomposition\n",
    "* NFINDR"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Software Prerequisites:\n",
    "* Standard distribution of __Anaconda__ (includes numpy, scipy, matplotlib and scikit-learn)\n",
    "* __pysptools__ (will automatically be installed in the next step)\n",
    "* __cvxopt__ for fully constrained least squares fitting\n",
    "    * install in a terminal via __`conda install -c https://conda.anaconda.org/omnia cvxopt`__\n",
    "* __pycroscopy__ : Though pycroscopy is mainly used here for plotting, its true capabilities lie in seamlessly performing these analyses on any imaging dataset (regardless of origin, size, or complexity) and storing the results back into the same file, among other things"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Installing software prerequisites via the python package index:\n",
    "!pip install -U numpy matplotlib scikit-learn pysptools wget\n",
    "\n",
    "#Import packages\n",
    "\n",
    "# Ensure that this code works on both python 2 and python 3\n",
    "from __future__ import division, print_function, absolute_import, unicode_literals\n",
    "\n",
    "# basic numeric computation:\n",
    "import numpy as np\n",
    "\n",
    "# The package used for creating and manipulating HDF5 files:\n",
    "import h5py\n",
    "\n",
    "# Plotting and visualization:\n",
    "import matplotlib.pyplot as plt\n",
    "from mpl_toolkits.axes_grid1 import make_axes_locatable\n",
    "\n",
    "# for downloading files:\n",
    "import wget\n",
    "import os\n",
    "\n",
    "# multivariate analysis:\n",
    "from sklearn.decomposition import NMF\n",
    "from pysptools import eea\n",
    "import pysptools.abundance_maps as amp\n",
    "from pysptools.eea import nfindr\n",
    "\n",
    "# finally import pycroscopy:\n",
    "import pycroscopy as px\n",
    "\n",
    "# configure the notebook to place plots after code cells within the notebook:\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Data\n",
    "\n",
    "In this example, we will work on __Infrared (IR) Spectra__ data obtained from an Anasys Instruments Nano IR, as one of the simplest examples of data. This dataset contains a single IR spectrum collected at each position along a single line of spatial points. Thus, this two-dimensional dataset has one position dimension (let's say X) and one spectral dimension (wavelength). \n",
    "\n",
    "In the event that the spectra were collected on a 2D grid of spatial locations (two spatial dimensions - X, Y), the resulting three-dimensional dataset (X, Y, wavelength) would need to be reshaped to a two-dimensional dataset of (position, wavelength), since this is the standard format accepted by statistical analysis, machine learning, and spectral unmixing algorithms. The same reshaping would be needed if there were more than two spectroscopic dimensions.\n",
    "\n",
    "#### Working with the specific Nano IR dataset:\n",
    "We will begin by downloading the data file from Github, followed by reshaping and decimation of the dataset"
   ]
  },
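  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reshaping described above can be sketched on a small random stand-in array (hypothetical dimensions, not the actual NanoIR data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: a hypothetical (X, Y, wavelength) stack of spectra\n",
    "demo_3d = np.random.rand(4, 5, 100)\n",
    "\n",
    "# Flatten the two spatial dimensions into a single position dimension\n",
    "demo_2d = demo_3d.reshape(-1, demo_3d.shape[-1])\n",
    "print(demo_2d.shape)  # (20, 100) -> (position, wavelength)"
   ]
  },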
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# download the data file from Github:\n",
    "url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/NanoIR.txt'\n",
    "data_file_path = 'temp.txt'\n",
    "_ = wget.download(url, data_file_path)\n",
    "#data_file_path = px.io.uiGetFile(filter='Anasys NanoIR text export (*.txt)')\n",
    "\n",
    "# Load the data from file to memory\n",
    "data_mat = np.loadtxt(data_file_path, delimiter='\\t', skiprows=1)\n",
    "print('Data currently of shape:', data_mat.shape)\n",
    "\n",
    "# Only every fifth column is of interest (position)\n",
    "data_mat = data_mat[:, 1::5]\n",
    "\n",
    "# The data is structured as [wavelength, position]\n",
    "\n",
    "# NaNs cannot be handled by most of these decompositions, so set them to zero.\n",
    "data_mat[np.isnan(data_mat)] = 0\n",
    "\n",
    "# Finally, taking the transpose of the matrix to match [position, wavelength]\n",
    "data_mat = data_mat.T\n",
    "\n",
    "num_pos = data_mat.shape[0]\n",
    "spec_pts = data_mat.shape[1]\n",
    "print('Data currently of shape:', data_mat.shape)\n",
    "\n",
    "x_label = 'Spectral dimension'\n",
    "y_label = 'Intensity (a.u.)'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convert to H5\n",
    "Now we will take our numpy array holding the data and use the NumpyTranslator in pycroscopy to write it to an h5 file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "folder_path, file_name = os.path.split(data_file_path)\n",
    "file_name = file_name[:-4] + '_'\n",
    "\n",
    "h5_path = os.path.join(folder_path, file_name + '.h5')\n",
    "\n",
    "# Use NumpyTranslator to convert the data to h5\n",
    "tran = px.io.NumpyTranslator()\n",
    "h5_path = tran.translate(h5_path, data_mat, num_pos, 1, scan_height=spec_pts, scan_width=1,\n",
    "                         qty_name='Intensity', data_unit='a.u', spec_name=x_label, \n",
    "                         spatial_unit='a.u.', data_type='NanoIR')\n",
    "\n",
    "h5_file = h5py.File(h5_path, mode='r+')\n",
    "\n",
    "# See if a tree has been created within the hdf5 file:\n",
    "px.hdf_utils.print_tree(h5_file)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Extracting the data and parameters\n",
    "All the information necessary to understand, plot, analyze, and process the data is now present in the H5 file. Here, we show how to extract some basic parameters to plot the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "h5_main = h5_file['Measurement_000/Channel_000/Raw_Data']\n",
    "h5_spec_vals = px.hdf_utils.getAuxData(h5_main,'Spectroscopic_Values')[0]\n",
    "h5_pos_vals = px.hdf_utils.getAuxData(h5_main,'Position_Values')[0]\n",
    "x_label = px.hdf_utils.get_formatted_labels(h5_spec_vals)[0]\n",
    "y_label = px.hdf_utils.get_formatted_labels(h5_pos_vals)[0]\n",
    "descriptor = px.hdf_utils.get_data_descriptor(h5_main)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize the Amplitude Data\n",
    "Note that we are not hard-coding any tick or axis labels by hand. All the necessary information was present in the H5 file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axis = plt.subplots(figsize=(8,5))\n",
    "px.plot_utils.plot_map(axis, h5_main, cmap='inferno')\n",
    "axis.set_title('Raw data - ' + descriptor)\n",
    "axis.set_xlabel(x_label)\n",
    "axis.set_ylabel(y_label)\n",
    "vec = h5_spec_vals[0]\n",
    "cur_x_ticks = axis.get_xticks()\n",
    "for ind in range(1,len(cur_x_ticks)-1):\n",
    "    cur_x_ticks[ind] = h5_spec_vals[0, ind]\n",
    "axis.set_xticklabels([str(val) for val in cur_x_ticks]);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Singular Value Decomposition (SVD)\n",
    "\n",
    "SVD is an eigenvector decomposition that is defined statistically, and therefore typically produces non-physical eigenvectors. Consequently, eigenvectors and abundance maps from SVD must be interpreted with care. Nonetheless, it is a good method for quickly visualizing the major trends in a dataset, since the resulting eigenvectors are sorted in descending order of variance or importance. Furthermore, SVD is also well suited to data cleaning, through reconstruction of the dataset using only the first N (most significant) components. \n",
    "\n",
    "SVD results in three matrices:\n",
    "* V - Eigenvectors sorted by variance in descending order\n",
    "* U - corresponding abundance maps\n",
    "* S - Variance or importance of each of these components"
   ]
  },
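  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The relationship between U, S, and V can be sketched with plain numpy on a small random matrix (an illustration with hypothetical shapes, independent of the pycroscopy routine used below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: SVD of a small random (position x wavelength) matrix\n",
    "demo_A = np.random.rand(6, 50)\n",
    "demo_U, demo_S, demo_V = np.linalg.svd(demo_A, full_matrices=False)\n",
    "\n",
    "# Rows of demo_V are the eigenvectors; demo_U holds the per-position weights.\n",
    "# Using all components reproduces the data exactly:\n",
    "print(np.allclose((demo_U * demo_S) @ demo_V, demo_A))  # True"
   ]
  },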
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "h5_svd_grp = px.processing.doSVD(h5_main)\n",
    "\n",
    "U = h5_svd_grp['U']\n",
    "S = h5_svd_grp['S']\n",
    "V = h5_svd_grp['V']\n",
    "\n",
    "# Visualize the variance / statistical importance of each component:\n",
    "px.plot_utils.plotScree(S, title='Note the exponential drop of variance with number of components')\n",
    "\n",
    "# Visualize the eigenvectors:\n",
    "px.plot_utils.plot_loops(np.arange(spec_pts), V, x_label=x_label, y_label=y_label, plots_on_side=3, \n",
    "                         subtitles='Component', title='SVD Eigenvectors', evenly_spaced=False);\n",
    "\n",
    "# Visualize the abundance maps:\n",
    "px.plot_utils.plot_loops(np.arange(num_pos), np.transpose(U), plots_on_side=3, \n",
    "                         subtitles='Component', title='SVD Abundances', evenly_spaced=False);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. KMeans Clustering\n",
    "\n",
    "KMeans clustering is a quick and easy method to determine the types of spectral responses present in the data. It is not a decomposition method, but a basic clustering method. The user inputs the number of clusters (sets) to partition the data into. The algorithm then finds the optimal labeling (i.e., the assignment of each spectrum to the k<sup>th</sup> set) such that the within-cluster sum of squares is minimized.\n",
    "\n",
    "Set the number of clusters below"
   ]
  },
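  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch, the same clustering can be run directly with scikit-learn on random stand-in data (hypothetical shapes, independent of the pycroscopy wrapper used below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: KMeans on 100 random stand-in spectra of 16 points each\n",
    "from sklearn.cluster import KMeans\n",
    "\n",
    "demo_spectra = np.random.rand(100, 16)\n",
    "demo_km = KMeans(n_clusters=4, n_init=10).fit(demo_spectra)\n",
    "\n",
    "# labels_ assigns each spectrum to a cluster; cluster_centers_ are the mean responses\n",
    "print(demo_km.labels_.shape, demo_km.cluster_centers_.shape)  # (100,) (4, 16)"
   ]
  },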
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_comps = 4\n",
    "\n",
    "estimators = px.Cluster(h5_main, 'KMeans', num_comps=num_comps)\n",
    "h5_kmeans_grp = estimators.do_cluster(h5_main)\n",
    "h5_kmeans_labels = h5_kmeans_grp['Labels']\n",
    "h5_kmeans_mean_resp = h5_kmeans_grp['Mean_Response']\n",
    "\n",
    "fig, axes = plt.subplots(ncols=2,figsize=(18,8))\n",
    "\n",
    "px.plot_utils.plot_line_family(axes[0], np.arange(h5_kmeans_mean_resp.shape[1]),\n",
    "                               h5_kmeans_mean_resp, label_prefix='Cluster',\n",
    "                               y_offset=500)\n",
    "axes[0].legend(bbox_to_anchor = [1.05,1.0], fontsize=12)\n",
    "axes[0].set_title('K-Means Cluster Centers', fontsize=14)\n",
    "axes[0].set_xlabel(x_label, fontsize=14)\n",
    "axes[0].set_ylabel(y_label, fontsize=14)\n",
    "\n",
    "axes[1].plot(h5_kmeans_labels)\n",
    "axes[1].set_title('KMeans Labels', fontsize=14)\n",
    "axes[1].set_xlabel('Position', fontsize=14)\n",
    "axes[1].set_ylabel('Label');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Non-negative Matrix Factorization (NMF)\n",
    "\n",
    "NMF, or non-negative matrix factorization, is a method that is useful for unmixing spectral data. It only works on data with positive real values. It operates by approximately determining the factors (matrices) W and H for a given matrix V, as shown below\n",
    "\n",
    "![NMF](https://upload.wikimedia.org/wikipedia/commons/f/f9/NMF.png)"
   ]
  },
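  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The factorization V ≈ W H can be sketched with scikit-learn's NMF on a small non-negative random matrix (hypothetical shapes, independent of the pycroscopy wrapper used below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: factor a non-negative random matrix V into W (abundances) and H (endmembers)\n",
    "demo_V = np.random.rand(100, 16)\n",
    "demo_nmf = NMF(n_components=4, init='random', random_state=0, max_iter=500)\n",
    "demo_W = demo_nmf.fit_transform(demo_V)  # shape (100, 4)\n",
    "demo_H = demo_nmf.components_            # shape (4, 16)\n",
    "print(demo_W.shape, demo_H.shape)"
   ]
  },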
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_comps = 4\n",
    "\n",
    "model = px.Decomposition(h5_main, 'NMF', n_components=num_comps)\n",
    "h5_nmf_grp = model.doDecomposition()\n",
    "\n",
    "h5_endmembers = h5_nmf_grp['Components']\n",
    "h5_projections = h5_nmf_grp['Projection']\n",
    "\n",
    "fig, axes = plt.subplots(ncols=2, figsize=(12, 6))\n",
    "px.plot_utils.plot_line_family(axes[0], np.arange(h5_endmembers.shape[1]),\n",
    "                               h5_endmembers, label_prefix='Endmember',\n",
    "                               y_offset=50)\n",
    "axes[0].set_xlabel(x_label, fontsize=12)\n",
    "axes[0].set_ylabel(y_label, fontsize=12)\n",
    "axes[0].legend()\n",
    "axes[0].set_title('Endmembers', fontsize=14)\n",
    "px.plot_utils.plot_line_family(axes[1], np.arange(h5_projections.shape[0]),\n",
    "                               h5_projections[()].T, label_prefix='Projection',\n",
    "                               y_offset=40)\n",
    "axes[1].legend()\n",
    "axes[1].set_xlabel('Positions', fontsize=12)\n",
    "axes[1].set_title('Abundances', fontsize=14);\n",
    "fig.suptitle('NMF Components', fontsize=18);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. NFINDR\n",
    "\n",
    "NFINDR is a geometric decomposition technique that can aid in determining the constituent spectra in data. The basic idea is as follows. Assume that at any point <i>x</i>, the measured spectrum <i>A(w,x)</i> is a linear superposition of <i>k</i> 'pure' spectra, i.e.\n",
    "\n",
    "<i>A(w,x)</i> = c<sub>0</sub>(x)a<sub>0</sub> + c<sub>1</sub>(x)a<sub>1</sub> + ... + c<sub>k</sub>(x)a<sub>k</sub>\n",
    "\n",
    "In this case, our task consists of first determining the pure spectra {a<sub>0</sub>,...,a<sub>k</sub>}, and then determining the coefficients {c<sub>0</sub>,...,c<sub>k</sub>}. NFINDR determines the 'pure' spectra by first projecting the data into a low-dimensional sub-space (typically using PCA), and then taking the convex hull of the points in this space. Points are then picked at random along the convex hull and the volume of the simplex that they form is determined. If (k+1) pure spectra are needed, the data is reduced to (k) dimensions for this purpose. The points that maximize the volume of the simplex are taken as the most representative pure spectra available in the dataset. One way to think of this is that any spectrum lying within the given volume can be represented as a superposition of these constituent spectra; thus maximizing this volume allows the purest spectra to be determined. \n",
    "\n",
    "The second task is to determine the coefficients. This is done using fully constrained least squares optimization, which includes the sum-to-one constraint, to allow quantitative comparisons to be made. More information can be found in the paper below:\n",
    "\n",
    "[Winter, Michael E. \"N-FINDR: An algorithm for fast autonomous spectral end-member determination in hyperspectral data.\" SPIE's International Symposium on Optical Science, Engineering, and Instrumentation. International Society for Optics and Photonics, 1999.](http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=994814)"
   ]
  },
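  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The linear mixing model above can be sketched with synthetic data: a few hypothetical pure spectra a<sub>k</sub> combined with sum-to-one coefficients c<sub>k</sub>(x):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: synthetic linear mixing A(w, x) = sum_k c_k(x) * a_k\n",
    "demo_pure = np.random.rand(3, 50)                  # three hypothetical pure spectra\n",
    "demo_coeffs = np.random.dirichlet(np.ones(3), 20)  # sum-to-one coefficients per position\n",
    "demo_mixed = demo_coeffs @ demo_pure               # 20 mixed spectra\n",
    "print(demo_mixed.shape, np.allclose(demo_coeffs.sum(axis=1), 1))  # (20, 50) True"
   ]
  },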
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_comps = 4\n",
    "\n",
    "nfindr_results = eea.nfindr.NFINDR(data_mat, num_comps) #Find endmembers\n",
    "end_members = nfindr_results[0]\n",
    "\n",
    "fig, axis = plt.subplots()\n",
    "for comp_ind, end_member in enumerate(end_members):\n",
    "    axis.plot(end_member + comp_ind * 1000, \n",
    "              label = 'NFINDR Component #' + str(comp_ind))\n",
    "axis.set_title('NFINDR Endmembers', fontsize=14)\n",
    "axis.set_xlabel(x_label, fontsize=12)\n",
    "axis.set_ylabel(y_label, fontsize=12)\n",
    "axis.legend(bbox_to_anchor=[1.0,1.0], fontsize=12)\n",
    "\n",
    "# fully constrained least squares model:\n",
    "fcls = amp.FCLS() \n",
    "# Find abundances:\n",
    "amap = fcls.map(data_mat[np.newaxis, :, :], end_members) \n",
    "\n",
    "# Reshaping amap to match those of conventional endmembers\n",
    "amap = np.squeeze(amap).T\n",
    "\n",
    "fig2, axis2 = plt.subplots()\n",
    "for comp_ind, abundance in enumerate(amap):\n",
    "    axis2.plot(abundance, label='NFINDR Component #' + str(comp_ind))\n",
    "axis2.set_title('Abundances', fontsize=14)\n",
    "axis2.set_xlabel(x_label, fontsize=12)\n",
    "axis2.set_ylabel('Abundance (a. u.)', fontsize=12)\n",
    "axis2.legend(bbox_to_anchor=[1.0,1.0], fontsize=12);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Delete the temporarily downloaded file\n",
    "os.remove(data_file_path)\n",
    "# Close and delete the h5_file\n",
    "h5_file.close()\n",
    "os.remove(h5_path)"
   ]
  },
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [default]",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}