Commit 21aa0a73 authored by Dmitry I. Lyakh

Merge branch 'devel_dil' into cotengra

parents 48b433ac a3e218be
......@@ -4,6 +4,7 @@ add_library(${LIBRARY_NAME}
exatn.cpp
exatn_service.cpp
ServiceRegistry.cpp
quantum.cpp
num_server.cpp
reconstructor.cpp
optimizer.cpp
......
/** ExaTN::Numerics: General client header (free function API)
REVISION: 2021/07/13
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
/** Rationale:
1. Vector space and subspace registration [spaces.hpp, space_register.hpp]:
(a) Any unnamed vector space is automatically associated with a pre-registered
anonymous vector space with id = SOME_SPACE = 0.
(b) Any explicitly registered (named) vector space has id > 0.
......@@ -19,56 +19,60 @@ Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
2. Index labels:
(a) Any registered subspace can be assigned a symbolic index label serving as a placeholder for it;
an index label can only refer to a single registered (named) subspace it is associated with.
3. Tensor [tensor.hpp, tensor_composite.hpp]:
(a) A tensor is defined by its name, shape and signature.
(b) Tensor shape is an ordered tuple of tensor dimension extents.
(c) Tensor signature is an ordered tuple of {space_id,subspace_id} pairs
for each tensor dimension. In case space_id = SOME_SPACE, subspace_id
is simply the base offset in the anonymous vector space (min = 0).
(d) Additionally, a subset of tensor dimensions can be assigned an isometry property;
any tensor may have no more than two disjoint isometric dimension groups.
4. Tensor operation [tensor_operation.hpp]:
(a) Tensor operation is a mathematical operation on one or more tensor arguments.
(b) Evaluating a tensor operation means computing the value of all its output tensors,
given all input tensors.
5. Tensor network [tensor_network.hpp]:
(a) Tensor network is a graph of tensors in which vertices are the tensors
and (directed) edges are uniquely associated with tensor dimensions, showing
which of them are contracted between any pair of tensors (vertices).
By default, each edge connects two dimensions in two separate tensors (vertices),
although these tensors themselves may be identical. Partial/full traces within
a tensor are allowed, although they must be supported by the processing backend
in order to be actually computed.
(b) The same tensor may be present in a given tensor network multiple times
via different vertices, either normal or conjugated.
(c) Each tensor network has an implicit output tensor collecting all open edges from
all input tensors (uncontracted tensor dimensions). Evaluating the tensor network
means computing the value of this output tensor, given all input tensors.
(d) The conjugation operation applied to a tensor network performs complex conjugation
of all constituent input tensors, but does not apply to the output tensor per se
because the output tensor is simply the result of the full contraction of the tensor
network. The conjugation operation also reverses the direction of all edges, unless
they are undirected.
(e) An input tensor may be present in multiple tensor networks and its lifetime
is not bound to the lifetime of any tensor network it belongs to.
6. Tensor network expansion [tensor_expansion.hpp]:
(a) Tensor network expansion is a linear combination of tensor networks
with some complex coefficients. The output tensors of all constituent
tensor networks must be congruent (same shape and signature). Evaluating
the tensor network expansion means computing the sum of all these output tensors
scaled by their respective (complex) coefficients.
(b) A tensor network expansion may either belong to the primary (ket) or dual (bra) space.
The conjugation operation transitions the tensor network expansion between
the ket and bra spaces (it applies to each constituent tensor network).
(c) A single tensor network may enter multiple tensor network expansions and its
lifetime is not bound by the lifetime of any tensor network expansion it belongs to.
7. Tensor network operator [tensor_operator.hpp]:
(a) Tensor network operator is a linear combination of tensors and/or tensor networks
where each tensor (or tensor network) component associates some of its open edges
with a primary (ket) space and some with a dual (bra) space, thus establishing a
map between the two (generally unrelated) spaces. Therefore, a tensor network
operator has a ket shape and a bra shape. All components of a tensor network
operator must adhere to the same ket/bra shapes.
(b) A tensor network operator may act on a ket tensor network expansion if its ket
shape matches the shape of that tensor network expansion.
A tensor network operator may act on a bra tensor network expansion if its bra
shape matches the shape of that tensor network expansion.
(c) A full contraction may be formed between a bra tensor network expansion,
a tensor network operator, and a ket tensor network expansion if the bra shape
of the tensor network operator matches the shape of the bra tensor network
......@@ -76,6 +80,38 @@ Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
of the ket tensor network expansion.
(d) Any contraction of a tensor network operator with a ket/bra tensor network
expansion (or both) forms another tensor network expansion.
8. Tensor processing [exatn_numerics.hpp]:
(a) A tensor can be allocated storage and processed at any time after its formal definition.
(b) Tensor storage allocation is called tensor creation. A tensor can either be created across all
MPI processes or within a specified group of them. The subset of MPI processes participating
in the tensor creation operation defines its domain of existence, meaning that only these
MPI processes are aware of the existence of the created tensor. Note that the concrete
physical distribution of the tensor body among the MPI processes is hidden from the user
(either fully replicated or fully distributed or a mix of the two).
(c) All tensor operands of any non-unary tensor operation must have the same domain of existence,
otherwise the code is non-compliant, resulting in undefined behavior.
(d) By default, the tensor body is replicated across all MPI processes in its domain of existence.
The user also has an option to create a distributed tensor by specifying which dimensions of
this tensor to split into segments, thus inducing a block-wise decomposition of the tensor body.
Each tensor dimension chosen for splitting must be given its splitting depth, that is, the number
of recursive bisections applied to that dimension (a depth of D results in 2^D segments).
As a consequence, the total number of tensor blocks will be a power of 2. Because of this,
the size of the domain of existence of the corresponding composite tensor must also be a power of 2.
In general, the user is also allowed to provide a Lambda predicate to select which tensor blocks
should be discarded during the creation of a composite tensor, resulting in a block-sparse storage.
(e) An explicit call to the tensor destruction operation is needed for freeing the tensor storage space.
Without an explicit tensor destruction call, tensor storage will be freed automatically by the
internal garbage collector at some point before the program termination.
(f) Tensor creation generally does not initialize a tensor to any value. Setting a tensor to some value
requires calling the tensor initialization operation.
(g) Any other unary tensor operation can be implemented as a tensor transformation operation with
a specific transformation functor.
(h) Tensor addition is the main binary tensor operation which also implements tensor copy
when the output tensor operand is initialized to zero.
(i) Tensor contraction and tensor decomposition are the main ternary tensor operations,
being the inverse of each other.
(j) All higher-level tensor operations (evaluation of tensor networks and tensor network expansions)
are decomposed into lists of elementary tensor operations which are subsequently executed.
**/
#ifndef EXATN_NUMERICS_HPP_
......@@ -209,6 +245,21 @@ inline bool registerTensorIsometry(const std::string & name,
{return numericalServer->registerTensorIsometry(name,iso_dims0,iso_dims1);}
/** Returns TRUE if the calling process is within the existence domain
of all given tensors, FALSE otherwise. **/
template <typename... Args>
inline bool withinTensorExistenceDomain(Args&&... tensor_names) //in: tensor names
{return numericalServer->withinTensorExistenceDomain(std::forward<Args>(tensor_names)...);}
/** Returns the process group associated with the given tensors.
The calling process must be within the tensor existence domain,
which must be the same for all tensors. **/
template <typename... Args>
inline const ProcessGroup & getTensorProcessGroup(Args&&... tensor_names) //in: tensor names
{return numericalServer->getTensorProcessGroup(std::forward<Args>(tensor_names)...);}
//////////////////////////
// TENSOR OPERATION API //
//////////////////////////
......@@ -545,6 +596,19 @@ inline bool insertTensorSliceSync(const std::string & tensor_name, //in: tensor
{return numericalServer->insertTensorSliceSync(tensor_name,slice_name);}
/** Assigns one tensor to another congruent one (makes a copy of a tensor).
If the output tensor with the given name does not exist, it will be created.
Note that the output tensor must either exist or not exist across all
participating processes; otherwise the behavior is undefined! **/
inline bool copyTensor(const std::string & output_name, //in: output tensor name
const std::string & input_name) //in: input tensor name
{return numericalServer->copyTensor(output_name,input_name);}
inline bool copyTensorSync(const std::string & output_name, //in: output tensor name
const std::string & input_name) //in: input tensor name
{return numericalServer->copyTensorSync(output_name,input_name);}
/** Performs tensor addition: tensor0 += tensor1 * alpha **/
template<typename NumericType>
inline bool addTensors(const std::string & addition, //in: symbolic tensor addition specification
......@@ -977,16 +1041,26 @@ inline const ProcessGroup & getDefaultProcessGroup()
{return numericalServer->getDefaultProcessGroup();}
/** Returns the current process group comprising solely the current MPI process and its own self-communicator. **/
inline const ProcessGroup & getCurrentProcessGroup()
{return numericalServer->getCurrentProcessGroup();}
/** Returns the local rank of the MPI process in a given process group, or -1 if it does not belong to it. **/
inline int getProcessRank(const ProcessGroup & process_group)
{return numericalServer->getProcessRank(process_group);}
/** Returns the global rank of the current MPI process in the default process group. **/
inline int getProcessRank()
{return numericalServer->getProcessRank();}
/** Returns the number of MPI processes in a given process group. **/
inline int getNumProcesses(const ProcessGroup & process_group)
{return numericalServer->getNumProcesses(process_group);}
/** Returns the total number of MPI processes in the default process group. **/
inline int getNumProcesses()
{return numericalServer->getNumProcesses();}
......
/** ExaTN::Numerics: Numerical server
REVISION: 2021/07/18
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
......@@ -118,7 +118,62 @@ using numerics::FunctorPrint;
using TensorMethod = talsh::TensorFunctor<Identifiable>;
//Numerical Server:
/** Returns the closest owner id (process rank) for a given subtensor. **/
unsigned int subtensor_owner_id(unsigned int process_rank, //in: current process rank
unsigned int num_processes, //in: total number of processes
unsigned long long subtensor_id, //in: id of the required subtensor
unsigned long long num_subtensors); //in: total number of subtensors
/** Returns the range of subtensors [begin,end] owned by the specified process. **/
std::pair<unsigned long long, unsigned long long> owned_subtensors(
unsigned int process_rank, //in: target process rank
unsigned int num_processes, //in: total number of processes
unsigned long long num_subtensors); //in: total number of subtensors
//Composite tensor mapper (helper):
class CompositeTensorMapper: public TensorMapper{
public:
CompositeTensorMapper(unsigned int current_rank_in_group,
unsigned int num_processes_in_group,
const std::unordered_map<std::string,std::shared_ptr<Tensor>> & local_tensors):
current_process_rank_(current_rank_in_group), group_num_processes_(num_processes_in_group),
local_tensors_(local_tensors) {}
virtual ~CompositeTensorMapper() = default;
virtual unsigned int subtensorOwnerId(unsigned long long subtensor_id,
unsigned long long num_subtensors) const override
{
return subtensor_owner_id(current_process_rank_,group_num_processes_,subtensor_id,num_subtensors);
}
virtual std::pair<unsigned long long, unsigned long long> ownedSubtensors(unsigned int process_rank,
unsigned long long num_subtensors) const override
{
return owned_subtensors(process_rank,group_num_processes_,num_subtensors);
}
virtual bool isLocalSubtensor(unsigned long long subtensor_id,
unsigned long long num_subtensors) const override
{
return (subtensorOwnerId(subtensor_id,num_subtensors) == current_process_rank_);
}
virtual bool isLocalSubtensor(const Tensor & subtensor) const override
{
return (local_tensors_.find(subtensor.getName()) != local_tensors_.cend());
}
private:
unsigned int current_process_rank_; //rank of the current process (in some process group)
unsigned int group_num_processes_; //total number of processes (in some process group)
const std::unordered_map<std::string,std::shared_ptr<Tensor>> & local_tensors_; //locally stored tensors
};
//Numerical server:
class NumServer final {
public:
......@@ -191,15 +246,27 @@ public:
/** Returns the default process group comprising all MPI processes and their communicator. **/
const ProcessGroup & getDefaultProcessGroup() const;
/** Returns the current process group comprising solely the current MPI process and its own self-communicator. **/
const ProcessGroup & getCurrentProcessGroup() const;
/** Returns the local rank of the MPI process in a given process group, or -1 if it does not belong to it. **/
int getProcessRank(const ProcessGroup & process_group) const;
/** Returns the global rank of the current MPI process in the default process group. **/
int getProcessRank() const;
/** Returns the number of MPI processes in a given process group. **/
int getNumProcesses(const ProcessGroup & process_group) const;
/** Returns the total number of MPI processes in the default process group. **/
int getNumProcesses() const;
/** Returns a composite tensor mapper for a given process group. **/
std::shared_ptr<TensorMapper> getTensorMapper(const ProcessGroup & process_group) const;
/** Returns a composite tensor mapper for the default process group (all processes). **/
std::shared_ptr<TensorMapper> getTensorMapper() const;
/** Registers an external tensor method. **/
void registerTensorMethod(const std::string & tag,
std::shared_ptr<TensorMethod> method);
......@@ -253,8 +320,10 @@ public:
const Subspace * getSubspace(const std::string & subspace_name) const;
/** Submits an individual tensor operation for processing. **/
bool submit(std::shared_ptr<TensorOperation> operation); //in: tensor operation for numerical evaluation
/** Submits an individual (simple or composite) tensor operation for processing.
Composite tensor operations require an implementation of the TensorMapper interface. **/
bool submit(std::shared_ptr<TensorOperation> operation, //in: tensor operation for numerical evaluation
std::shared_ptr<TensorMapper> tensor_mapper); //in: tensor mapper (for composite tensor operations only)
/** Submits a tensor network for processing (evaluating the output tensor-result).
If the output (result) tensor has not been created yet, it will be created and
......@@ -343,6 +412,30 @@ public:
const std::vector<unsigned int> & iso_dims0, //in: tensor dimensions forming the isometry (group 0)
const std::vector<unsigned int> & iso_dims1); //in: tensor dimensions forming the isometry (group 1)
/** Returns TRUE if the calling process is within the existence domain of all given tensors, FALSE otherwise. **/
template <typename... Args>
bool withinTensorExistenceDomain(const std::string & tensor_name, Args&&... tensor_names) const //in: tensor names
{
if(!withinTensorExistenceDomain(tensor_name)) return false;
return withinTensorExistenceDomain(std::forward<Args>(tensor_names)...);
}
bool withinTensorExistenceDomain(const std::string & tensor_name) const; //in: tensor name
/** Returns the process group associated with the given tensors.
The calling process must be within the tensor existence domain,
which must be the same for all tensors. **/
template <typename... Args>
const ProcessGroup & getTensorProcessGroup(const std::string & tensor_name, Args&&... tensor_names) const //in: tensor names
{
const auto & tensor_domain = getTensorProcessGroup(tensor_name);
const auto & other_tensors_domain = getTensorProcessGroup(std::forward<Args>(tensor_names)...);
assert(other_tensors_domain == tensor_domain);
return tensor_domain;
}
const ProcessGroup & getTensorProcessGroup(const std::string & tensor_name) const; //in: tensor name
/** Declares, registers, and actually creates a tensor via the processing backend.
See numerics::Tensor constructors for different creation options. **/
template <typename... Args>
......@@ -594,6 +687,16 @@ public:
bool insertTensorSliceSync(const std::string & tensor_name, //in: tensor name
const std::string & slice_name); //in: slice name
/** Assigns one tensor to another congruent one (makes a copy of a tensor).
If the output tensor with the given name does not exist, it will be created.
Note that the output tensor must either exist or not exist across all
participating processes; otherwise the behavior is undefined! **/
bool copyTensor(const std::string & output_name, //in: output tensor name
const std::string & input_name); //in: input tensor name
bool copyTensorSync(const std::string & output_name, //in: output tensor name
const std::string & input_name); //in: input tensor name
/** Performs tensor addition: tensor0 += tensor1 * alpha **/
template<typename NumericType>
bool addTensors(const std::string & addition, //in: symbolic tensor addition specification
......@@ -780,6 +883,7 @@ private:
std::unordered_map<std::string,std::shared_ptr<Tensor>> tensors_; //registered tensors (by CREATE operation)
std::list<std::shared_ptr<Tensor>> implicit_tensors_; //tensors created implicitly by the runtime (for garbage collection)
std::unordered_map<std::string,ProcessGroup> tensor_comms_; //process group associated with each tensor
std::string contr_seq_optimizer_; //tensor contraction sequence optimizer invoked when evaluating tensor networks
bool contr_seq_caching_; //regulates whether or not to cache pseudo-optimal tensor contraction orders for later reuse
......@@ -796,13 +900,14 @@ private:
int num_processes_; //total number of parallel processes in the dedicated MPI communicator
int process_rank_; //rank of the current parallel process in the dedicated MPI communicator
int global_process_rank_; //rank of the current parallel process in MPI_COMM_WORLD
MPICommProxy intra_comm_; //dedicated MPI intra-communicator used to initialize the Numerical Server
std::shared_ptr<TensorMapper> default_tensor_mapper_; //default composite tensor mapper (across all parallel processes)
std::shared_ptr<ProcessGroup> process_world_; //default process group comprising all MPI processes and their communicator
std::shared_ptr<ProcessGroup> process_self_; //current process group comprising solely the current MPI process and its own communicator
std::shared_ptr<runtime::TensorRuntime> tensor_rt_; //tensor runtime (for actual execution of tensor operations)
BytePacket byte_packet_; //byte packet for exchanging tensor meta-data
double time_start_; //time stamp of the Numerical Server start
bool validation_tracing_; //validation tracing flag (for debugging)
};
/** Numerical service singleton (numerical server) **/
......@@ -838,7 +943,13 @@ bool NumServer::createTensor(const ProcessGroup & process_group,
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::CREATE);
op->setTensorOperand(std::make_shared<Tensor>(name,std::forward<Args>(args)...));
std::dynamic_pointer_cast<numerics::TensorOpCreate>(op)->resetTensorElementType(element_type);
submitted = submit(op,getTensorMapper(process_group));
if(submitted){
if(process_group != getDefaultProcessGroup()){
auto saved = tensor_comms_.emplace(std::make_pair(name,process_group));
assert(saved.second);
}
}
}else{
std::cout << "#ERROR(exatn::createTensor): Missing data type!" << std::endl;
}
......@@ -857,8 +968,14 @@ bool NumServer::createTensorSync(const ProcessGroup & process_group,
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::CREATE);
op->setTensorOperand(std::make_shared<Tensor>(name,std::forward<Args>(args)...));
std::dynamic_pointer_cast<numerics::TensorOpCreate>(op)->resetTensorElementType(element_type);
submitted = submit(op,getTensorMapper(process_group));
if(submitted){
if(process_group != getDefaultProcessGroup()){
auto saved = tensor_comms_.emplace(std::make_pair(name,process_group));
assert(saved.second);
}
submitted = sync(*op);
}
}else{
std::cout << "#ERROR(exatn::createTensor): Missing data type!" << std::endl;
}
......@@ -957,25 +1074,26 @@ bool NumServer::addTensors(const std::string & addition,
iter = tensors_.find(tensor_name);
if(iter != tensors_.end()){
auto tensor1 = iter->second;
const auto & process_group = getTensorProcessGroup(tensor0->getName(),tensor1->getName());
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::ADD);
op->setTensorOperand(tensor0,complex_conj0);
op->setTensorOperand(tensor1,complex_conj1);
op->setIndexPattern(addition);
op->setScalar(0,std::complex<double>(alpha));
parsed = submit(op,getTensorMapper(process_group));
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::addTensors): Tensor " << tensor_name << " not found in tensor addition: "
// << addition << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::addTensors): Invalid argument#1 in tensor addition: "
<< addition << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::addTensors): Tensor " << tensor_name << " not found in tensor addition: "
// << addition << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::addTensors): Invalid argument#0 in tensor addition: "
......@@ -1014,26 +1132,27 @@ bool NumServer::addTensorsSync(const std::string & addition,
iter = tensors_.find(tensor_name);
if(iter != tensors_.end()){
auto tensor1 = iter->second;
const auto & process_group = getTensorProcessGroup(tensor0->getName(),tensor1->getName());
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::ADD);
op->setTensorOperand(tensor0,complex_conj0);
op->setTensorOperand(tensor1,complex_conj1);
op->setIndexPattern(addition);
op->setScalar(0,std::complex<double>(alpha));
parsed = submit(op,getTensorMapper(process_group));
if(parsed) parsed = sync(*op);
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::addTensors): Tensor " << tensor_name << " not found in tensor addition: "
// << addition << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::addTensors): Invalid argument#1 in tensor addition: "
<< addition << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::addTensors): Tensor " << tensor_name << " not found in tensor addition: "
// << addition << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::addTensors): Invalid argument#0 in tensor addition: "
......@@ -1077,35 +1196,36 @@ bool NumServer::contractTensors(const std::string & contraction,
iter = tensors_.find(tensor_name);
if(iter != tensors_.end()){
auto tensor2 = iter->second;
const auto & process_group = getTensorProcessGroup(tensor0->getName(),tensor1->getName(),tensor2->getName());
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::CONTRACT);
op->setTensorOperand(tensor0,complex_conj0);
op->setTensorOperand(tensor1,complex_conj1);
op->setTensorOperand(tensor2,complex_conj2);
op->setIndexPattern(contraction);
op->setScalar(0,std::complex<double>(alpha));
parsed = submit(op,getTensorMapper(process_group));
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#2 in tensor contraction: "
<< contraction << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#1 in tensor contraction: "
<< contraction << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#0 in tensor contraction: "
......@@ -1149,36 +1269,37 @@ bool NumServer::contractTensorsSync(const std::string & contraction,
iter = tensors_.find(tensor_name);
if(iter != tensors_.end()){
auto tensor2 = iter->second;
const auto & process_group = getTensorProcessGroup(tensor0->getName(),tensor1->getName(),tensor2->getName());
std::shared_ptr<TensorOperation> op = tensor_op_factory_->createTensorOp(TensorOpCode::CONTRACT);
op->setTensorOperand(tensor0,complex_conj0);
op->setTensorOperand(tensor1,complex_conj1);
op->setTensorOperand(tensor2,complex_conj2);
op->setIndexPattern(contraction);
op->setScalar(0,std::complex<double>(alpha));
parsed = submit(op,getTensorMapper(process_group));
if(parsed) parsed = sync(*op);
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#2 in tensor contraction: "
<< contraction << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#1 in tensor contraction: "
<< contraction << std::endl;
}
}else{
parsed = true;
//std::cout << "#ERROR(exatn::NumServer::contractTensors): Tensor " << tensor_name << " not found in tensor contraction: "
// << contraction << std::endl;
}
}else{
std::cout << "#ERROR(exatn::NumServer::contractTensors): Invalid argument#0 in tensor contraction: "
......
/** ExaTN:: Variational optimizer of a closed symmetric tensor network expansion functional
REVISION: 2021/06/22
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
......@@ -98,6 +98,9 @@ bool TensorNetworkOptimizer::optimize(const ProcessGroup & process_group)
bra_vector_expansion.rename(vector_expansion_->getName()+"Bra");
TensorExpansion operator_expectation(bra_vector_expansion,*vector_expansion_,*tensor_operator_);
operator_expectation.rename("OperatorExpectation");
for(auto net = operator_expectation.begin(); net != operator_expectation.end(); ++net){
net->network->rename("OperExpect" + std::to_string(std::distance(operator_expectation.begin(),net)));
}
if(TensorNetworkOptimizer::debug > 1){
std::cout << "#DEBUG(exatn::TensorNetworkOptimizer): Operator expectation expansio