Commit dd54fdaa authored by cianciosa's avatar cianciosa Committed by Cianciosa, Mark

Add documentation for equilibria, dispersion functions, command line...

Add documentation for equilibria, dispersion functions, command line arguments, and a framework tutorial.
parent a3ea80f6
+1 −0
@@ -311,6 +311,7 @@ if (DOXYGEN_FOUND)
    set (DOXYGEN_EXCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/LLVM ${CMAKE_CURRENT_SOURCE_DIR}/build)
    set (DOXYGEN_GENERATE_TREEVIEW YES)
    set (DOXYGEN_USE_MATHJAX YES)
    set (DOXYGEN_IMAGE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/graph_docs)

    doxygen_add_docs (doc)
endif ()

graph_docs/Efit.png

0 → 100755
+92.3 KiB
+0 −4
@@ -7,10 +7,6 @@
 * This page details the <a href="http://www.cmake.org">cmake</a> based build
 * system.
 *
 * Documentation is divided into two parts.
 * * User documentation.
 * * Developer documentation.
 *
 * @section build_system_user User Guide
 * The following section is for users of the framework.
 *
+92 −4
/*!
 * @page general_concepts General Concepts
 * @tableofcontents
 * @section general_concepts_introduction Introduction
 * This page documents general concepts of the graph_framework.
 *
 * @section general_concepts_definitions Definitions
 * * <b>leaf_node</b>            A leaf on the graph.
 * * <b>node</b>                 A leaf or branch on the graph tree.
 * * <b>graph</b>                A data structure connecting leaf nodes.
 * * <b>reduce</b>               A transformation of the graph to remove leaf_nodes.
 * * <b>auto differentiation</b> A transformation of the graph to build derivatives.
@@ -17,14 +17,102 @@
 * * <b>converge_item</b>        A kernel that is run until a convergence test is met.
 * * <b>workflow</b>             A series of work items.
 * * <b>backend</b>              The device the kernel is run on.
 * * <b>recursion</b>            See definition of recursion.
 * * <b>safe math</b>            Run time checks to avoid off-normal conditions.
 * * <b>API</b>                  Application programming interface.
 * * <b>Host</b>                 The place where kernels are launched from.
 * * <b>Device</b>               The device side where kernels are run.
 *
 * @section general_concepts_graph Graph
 * The graph_framework operates by building a tree structure of math operations.
 * In tree form it is easy to traverse the nodes in the graph. Take the example
 * of the equation of a line.
 * @f{equation}{y=mx + b@f}
 * This equation consists of five nodes. The ends of the tree are classified
 * as either variables @f$x@f$ or constants @f$m,b@f$. These leaf_nodes are
 * connected by branch nodes for the multiply and addition operations. The output
 * @f$y@f$ represents the entire graph of operations.
 * @image{} html line_graph.png "The graph stucture for y = mx + b."
 * Evaluation of a graph starts from the topmost node, in this case the @f$+@f$
 * operation. Evaluation of a node is not performed until all of its subnodes
 * have been evaluated, starting with the left operand. Evaluation proceeds by
 * recursively evaluating left operands until the left-most leaf_node, @f$m@f$,
 * is reached.
 * @image{} html line_graph_eval1.png ""
 * Once @f$m@f$ is evaluated, the result is returned to its parent node, then
 * the right operand is evaluated.
 * @image{} html line_graph_eval2.png ""
 * Evaluation is repeated until every node in the graph is evaluated.
 * @image{} html line_graph_eval_final.png ""
 *
 * @section general_concepts_diff Auto Differentiation
 * The previous @ref general_concepts_graph "section" showed how a graph can be
 * evaluated. This same evaluation can be applied to build graphs of a
 * function's derivative. Let's say that we want to take the derivative
 * @f$\frac{\partial y}{\partial x}@f$. This is achieved by recursively
 * descending the graph until the bottom left-most leaf_node is reached. Then a
 * new graph is built starting with @f$\frac{\partial m}{\partial x}=0@f$.
 * Applying the first half of the chain rule we build a new graph for @f$0x@f$.
 * @image{} html line_graph_dydf1.png ""
 * Then we take the derivative of the right operand and apply the second half
 * of the chain rule to build a new graph for @f$0x + m\cdot 1@f$.
 * @image{} html line_graph_dydf2.png ""
 * Evaluation is repeated recursively until the full graph has been evaluated.
 * @image{} html line_graph_dydf_final.png ""
 *
 * @section general_concepts_reduction Reduction
 * The final expression for @f$\frac{\partial y}{\partial x}@f$ contains many
 * unnecessary nodes. Instead of building full graphs, we can simplify and
 * eliminate nodes as we build them. For instance, when the expression
 * @f$0x@f$ is created, it can be immediately reduced to a single node.
 * @image{} html line_graph_reduce1.png ""
 * Applying all possible reductions reduces the final expression to
 * @f$\frac{\partial y}{\partial x}=m@f$.
 * @image{} html line_graph_reduce_final.png ""
 * By reducing graphs as they are built, we can eliminate nodes one by one.
 *
 * @section general_concepts_compile Compile
 * Once graph expressions are built, they can be compiled into a compute kernel.
 * Using the same recursive evaluation, we can visit each node of a graph and
 * create a line of kernel source code. There are three important parts to
 * creating kernels: inputs, outputs, and maps. These three concepts define
 * buffers on the compute device and how they are changed. Compute kernels can
 * be generated from multiple outputs and maps.
 *
 * @subsection general_concepts_compile_inputs Inputs
 * Inputs are the variable nodes that define the graph. In the line example
 * @f$\frac{\partial y}{\partial x}@f$, the input variable would be the node
 * for @f$x@f$. Some graphs have no inputs. The graph for
 * @f$\frac{\partial y}{\partial x}=m@f$ has eliminated all the variable nodes
 * in the graph.
 *
 * @subsection general_concepts_compile_outputs Outputs
 * Outputs are the end results of the graph. These are the values we want to
 * compute. Any node of a graph is a potential output. For each output a device
 * buffer is generated to store the results of the evaluation. For nodes that
 * are not used as outputs, no buffers need to be created since those results
 * are never stored.
 *
 * @subsection general_concepts_compile_maps Maps
 * Maps enable the results of an output node to be stored in an input node. This
 * is used for a wide variety of steps. For instance, take a gradient descent
 * step.
 * @f{equation}{y = y + \frac{\partial f}{\partial x}@f}
 * In this case the output of the expression
 * @f$y + \frac{\partial f}{\partial x}@f$
 * can be mapped to update @f$y@f$.
 *
 * @section general_concepts_workflow Workflows
 * Sequences of kernels are evaluated in a workflow. A workflow is defined from
 * work items, each of which wraps a kernel call.
 *
 * @section general_concepts_safe_math Safe Math
 * There are some conditions where, mathematically, a graph should evaluate to a
 * normal number but, when evaluated using floating point precision, can produce
 * <tt>Inf</tt> or <tt>NaN</tt>. An example of this is the
 * @f$\exp\left(x\right)@f$ function. For large argument values,
 * @f$\exp\left(x\right)@f$ overflows the maximum representable floating point
 * value and returns <tt>Inf</tt>. @f$\frac{1}{\exp\left(x\right)}@f$ should
 * evaluate to zero; however, in floating point this becomes a <tt>NaN</tt>.
 * Safe math avoids
 * problems like this by checking for large argument values. But since these
 * run time checks slow down kernel evaluation, most of the time safe math
 * should be avoided.
 */
+8.84 KiB