llvm/docs/ReleaseNotes.rst

which ensure vector-friendly data layout, explicit vectorization and compact
representation of the program. The project uses the LLVM infrastructure for
optimization and code generation.

Likely
------

`Likely <http://www.liblikely.org>`_ is an embeddable just-in-time Lisp for
image recognition and heterogeneous architectures. Algorithms are just-in-time
compiled using LLVM's MCJIT infrastructure to execute on single or
multi-threaded CPUs and potentially OpenCL SPIR or CUDA enabled GPUs. Likely
exploits the observation that while image processing and statistical learning
kernels must be written generically to handle any matrix datatype, at runtime
they tend to be executed repeatedly on the same type. Likely also seeks to
explore new optimizations for statistical learning algorithms by moving them
from an offline model generation step to a compile-time simplification of a
function (the learning algorithm) with constant arguments (the training set).

Additional Information
======================
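The specialization idea behind Likely can be illustrated without a JIT at all. The following is a minimal sketch, not Likely's actual API or implementation: a kernel written generically over element types is "compiled" (here, just bound into a closure) the first time it is called with a given type, and that specialized variant is cached and reused, mirroring how a JIT produces one optimized kernel per matrix datatype.

```python
# Hypothetical sketch of per-type kernel specialization (not Likely's API).
# A generic kernel is specialized once per element type seen at runtime,
# then the cached specialization handles all later calls with that type.

_specializations = {}

def specialize(kernel, elem_type):
    """Stand-in for JIT compilation: build a type-specific variant once."""
    key = (kernel.__name__, elem_type)
    if key not in _specializations:
        # A real JIT would emit type-specialized machine code here;
        # we just close over the concrete element type.
        def specialized(data):
            return kernel(data, elem_type)
        _specializations[key] = specialized
    return _specializations[key]

def scale_generic(data, elem_type):
    """A 'generic' kernel: doubles each element, for any numeric type."""
    return [elem_type(x) * elem_type(2) for x in data]

def scale(data):
    # Dispatch on the element type observed at runtime, then run the
    # cached specialization for that type.
    elem_type = type(data[0])
    return specialize(scale_generic, elem_type)(data)

print(scale([1, 2, 3]))    # ints   -> [2, 4, 6]
print(scale([1.5, 2.5]))   # floats -> [3.0, 5.0]
```

The payoff in a real JIT is that the per-type compilation cost is paid once, while the common case (the same datatype on every call, as the paragraph above observes) runs the already-optimized kernel with no generic dispatch inside the loop.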