exatn (ORNL Quantum Computing Institute)

Commit 9a2ebd03 authored Oct 30, 2021 by Dmitry I. Lyakh
Implemented multi-state eigensolver via Optimizer.

Signed-off-by: Dmitry I. Lyakh <quant4me@gmail.com>
Parent: dbb55657
Changes: 9
development.txt
ISSUES:
 - Automatically generated unique tensor names have local scope,
   thus tensors with automatically generated names will have
   different names across different processes. The only way to
   ensure proper tensor correspondence in global tensor operations
   is to make sure all participating processes execute the same
   algorithm in a consistent fashion, like SIMD. That is, the order
   of tensor operations across all participating processes must be
   consistent such that every encountered global tensor operation
   receives the same tensor operand irrespective of the difference
   in the locally generated tensor name. Special care needs to be
   taken when iterating over associative tensor containers, to
   ensure that the keys are consistent across all participating
   processes. For example, automatically generated tensor names
   cannot serve as keys in an iteration procedure since they are
   inconsistent across different processes, whereas tensor Ids can
   serve as keys since they are normally consistent across different
   processes, the container having been built with the same
   structure on all processes.
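   To make the consistency requirement concrete, below is a minimal,
   hypothetical C++ illustration (not ExaTN code; all names are invented):
   a container keyed by tensor id gives every process the same iteration
   order, whereas one keyed by the locally auto-generated name does not.

   #include <map>
   #include <memory>
   #include <string>

   struct Tensor { std::string name; }; //auto-generated name: differs across processes

   //Deterministic: std::map iterates in ascending tensor-id order on every
   //process, so each global (collective) tensor operation encountered in the
   //loop receives the same operand on all ranks:
   void touchAllGlobally(const std::map<unsigned long, std::shared_ptr<Tensor>> & tensors_by_id)
   {
    for(const auto & entry: tensors_by_id){
     //globalTensorOperation(*entry.second); //hypothetical collective call
     (void)entry; //placeholder to keep the sketch self-contained
    }
   }
   //An std::unordered_map keyed by the auto-generated tensor name would be
   //visited in a process-dependent order and desynchronize the ranks.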
BUGS:
 - 32-bit integer MPI message chunking issue in the backend.
 - Fix the bug(s) in the tensor order reduction mechanism in the
   TalshExecutor backend.
FEATURES:
...
...
@@ -14,42 +35,32 @@ FEATURES:
4. Limit the Host buffer size to be similar to the combined devices buffer size;
5. Introduce manual knobs to keep tensors on GPU;
- Implement tensor operator builders.
- TensorExpansion: Constructor that converts a TensorOperator
  into a TensorExpansion.
- TensorNetwork: Subnetwork replacement method:
  Contract replaced tensors, then replace the contracted
  tensor with a new tensor (sub)network.
- Implement SAVE/LOAD API for TensorExpansion.
- Introduce parallelization over tensor networks within a tensor expansion.
- Implement TensorNetwork slice computing Generator.
- Implement b-D procedure.
- Implement DIIS convergence accelerator.
- Tensor network bond dimension adaptivity in solvers.
- Implement conjugate gradient optimization procedure.
- Implement constrained optimization for isometric tensors.
- Fix index slicing logic (switch to two passes; a short sketch follows this list):
  Pass 1: Compute the throughput volume (TH) for all indices:
          TH(index) = sum of tensor volumes over all tensors containing the index;
  Pass 2: Slice the indices with maximal throughput volume.
- Implement parameterized non-linear optimization:
  dTensorFunctional/dParameter = dTensorFunctional/dTensor * dTensor/dParameter
- Introduce DAG nodes/dependencies with multiple output operands.
- Implement the guided k-way partitioning algorithm.
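The index-slicing item above is algorithmic, so here is a minimal sketch of the
two-pass heuristic (hypothetical types; not the ExaTN implementation):

#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct TensorInfo { std::vector<std::string> indices; std::size_t volume; };

std::vector<std::string> chooseSliceIndices(const std::vector<TensorInfo> & tensors,
                                            std::size_t num_slices)
{
 std::map<std::string, std::size_t> th; //Pass 1: throughput volume per index
 for(const auto & tensor: tensors)
  for(const auto & index: tensor.indices) th[index] += tensor.volume;
 std::vector<std::pair<std::string, std::size_t>> ranked(th.begin(), th.end());
 std::sort(ranked.begin(), ranked.end(), //Pass 2: maximal throughput volume first
           [](const auto & a, const auto & b){ return a.second > b.second; });
 std::vector<std::string> chosen;
 for(std::size_t i = 0; i < std::min(num_slices, ranked.size()); ++i)
  chosen.push_back(ranked[i].first);
 return chosen;
}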
src/exatn/exatn_numerics.hpp
/** ExaTN::Numerics: General client header (free function API)
REVISION: 2021/10/30
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -476,6 +476,14 @@ inline bool initTensorsRndSync(TensorNetwork & tensor_network) //inout: tensor n
{return numericalServer->initTensorsRndSync(tensor_network);}

/** Initializes all input tensors in a given tensor network expansion to a random value. **/
inline bool initTensorsRnd(TensorExpansion & tensor_expansion) //inout: tensor network expansion
{return numericalServer->initTensorsRnd(tensor_expansion);}

inline bool initTensorsRndSync(TensorExpansion & tensor_expansion) //inout: tensor network expansion
{return numericalServer->initTensorsRndSync(tensor_expansion);}

/** Initializes special tensors present in the tensor network. **/
inline bool initTensorsSpecial(TensorNetwork & tensor_network) //inout: tensor network
{return numericalServer->initTensorsSpecial(tensor_network);}
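A minimal usage sketch of the new expansion-level wrappers (illustrative only:
"ansatz" stands for a TensorExpansion composed elsewhere whose input tensors
have already been created):

//exatn::TensorExpansion ansatz = ...; //composed and created elsewhere
bool ok = exatn::initTensorsRndSync(ansatz); //randomize all input tensors across all component networks
assert(ok);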
...
...
@@ -971,7 +979,7 @@ inline bool balanceNormalizeNorm2Sync(const ProcessGroup & process_group, //in:
/** Duplicates a given tensor network as a new tensor network copy.
    The name of the tensor network copy will be prepended with an underscore
    whereas all copies of the input tensors will be renamed with their unique (local) hashes. **/
inline std::shared_ptr<TensorNetwork> duplicateSync(const TensorNetwork & network) //in: tensor network
{return numericalServer->duplicateSync(network);}
...
...
@@ -981,7 +989,8 @@ inline std::shared_ptr<TensorNetwork> duplicateSync(const ProcessGroup & process
/** Duplicates a given tensor network expansion as a new tensor network expansion copy.
    The name of the tensor network expansion copy will be prepended with an underscore
    whereas all copies of the input tensors will be renamed with their unique (local) hashes. **/
inline std::shared_ptr<TensorExpansion> duplicateSync(const TensorExpansion & expansion) //in: tensor expansion
{return numericalServer->duplicateSync(expansion);}
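A short usage sketch of the duplication semantics described above (illustrative
variable names):

//Given an existing TensorExpansion 'expansion' with created tensors:
auto copy = exatn::duplicateSync(expansion); //independent copy: copy->getName() == "_" + expansion.getName()
//The copied input tensors carry new hash-based names, so the copy can be
//modified without affecting the original expansion.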
...
...
src/exatn/num_server.cpp
/** ExaTN::Numerics: Numerical server
REVISION: 2021/10/30
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -8,6 +8,7 @@ Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
#include "tensor_range.hpp"
#include "timers.hpp"

#include <unordered_set>
#include <complex>
#include <vector>
#include <stack>
...
...
@@ -1564,11 +1565,13 @@ bool NumServer::initTensorRndSync(const std::string & name)
bool NumServer::initTensorsRnd(TensorNetwork & tensor_network)
{
 bool success = true;
 std::unordered_set<std::string> tensor_names;
 for(auto tens = tensor_network.cbegin(); tens != tensor_network.cend(); ++tens){
  auto tensor = tens->second.getTensor();
  const auto & tens_name = tensor->getName();
  if(tens->first != 0){ //input tensor
   if(tensorAllocated(tens_name)){
    //auto res = tensor_names.emplace(tens_name);
    success = initTensorRnd(tens_name);
   }else{
    success = false;
...
...
@@ -1578,17 +1581,24 @@ bool NumServer::initTensorsRnd(TensorNetwork & tensor_network)
  }
  if(!success) break;
 }
 if(success){
  for(const auto & tens_name: tensor_names){
   success = initTensorRnd(tens_name);
   if(!success) break;
  }
 }
 return success;
}
bool NumServer::initTensorsRndSync(TensorNetwork & tensor_network)
{
 bool success = true;
 std::unordered_set<std::string> tensor_names;
 for(auto tens = tensor_network.cbegin(); tens != tensor_network.cend(); ++tens){
  auto tensor = tens->second.getTensor();
  const auto & tens_name = tensor->getName();
  if(tens->first != 0){ //input tensor
   if(tensorAllocated(tens_name)){
    //auto res = tensor_names.emplace(tens_name);
    success = initTensorRndSync(tens_name);
   }else{
    success = false;
...
...
@@ -1598,6 +1608,69 @@ bool NumServer::initTensorsRndSync(TensorNetwork & tensor_network)
  }
  if(!success) break;
 }
 if(success){
  for(const auto & tens_name: tensor_names){
   success = initTensorRndSync(tens_name);
   if(!success) break;
  }
 }
 return success;
}
bool NumServer::initTensorsRnd(TensorExpansion & tensor_expansion)
{
 bool success = true;
 std::unordered_set<std::string> tensor_names;
 for(auto tensor_network = tensor_expansion.cbegin(); tensor_network != tensor_expansion.cend(); ++tensor_network){
  for(auto tens = tensor_network->network->cbegin(); tens != tensor_network->network->cend(); ++tens){
   auto tensor = tens->second.getTensor();
   const auto & tens_name = tensor->getName();
   if(tens->first != 0){ //input tensor
    if(tensorAllocated(tens_name)){
     //auto res = tensor_names.emplace(tens_name);
     success = initTensorRnd(tens_name);
    }else{
     success = false;
    }
   }else{ //output tensor
    if(tensorAllocated(tens_name)) success = initTensor(tens_name,0.0);
   }
   if(!success) break;
  }
 }
 if(success){
  for(const auto & tens_name: tensor_names){
   success = initTensorRnd(tens_name);
   if(!success) break;
  }
 }
 return success;
}
bool NumServer::initTensorsRndSync(TensorExpansion & tensor_expansion)
{
 bool success = true;
 std::unordered_set<std::string> tensor_names;
 for(auto tensor_network = tensor_expansion.cbegin(); tensor_network != tensor_expansion.cend(); ++tensor_network){
  for(auto tens = tensor_network->network->cbegin(); tens != tensor_network->network->cend(); ++tens){
   auto tensor = tens->second.getTensor();
   const auto & tens_name = tensor->getName();
   if(tens->first != 0){ //input tensor
    if(tensorAllocated(tens_name)){
     //auto res = tensor_names.emplace(tens_name);
     success = initTensorRndSync(tens_name);
    }else{
     success = false;
    }
   }else{ //output tensor
    if(tensorAllocated(tens_name)) success = initTensorSync(tens_name,0.0);
   }
   if(!success) break;
  }
 }
 if(success){
  for(const auto & tens_name: tensor_names){
   success = initTensorRndSync(tens_name);
   if(!success) break;
  }
 }
 return success;
}
...
...
@@ -3241,23 +3314,27 @@ std::shared_ptr<TensorNetwork> NumServer::duplicateSync(const ProcessGroup & pro
{
 unsigned int local_rank; //local process rank within the process group
 if(!process_group.rankIsIn(process_rank_,&local_rank)) return std::shared_ptr<TensorNetwork>(nullptr); //process is not in the group: Do nothing
 bool success = true;
 const auto tens_elem_type = network.getTensorElementType();
 auto network_copy = makeSharedTensorNetwork(network,true);
 assert(network_copy);
 network_copy->rename("_" + network.getName());
 std::unordered_map<std::string,std::shared_ptr<Tensor>> tensor_copies;
 for(auto tensor = network_copy->begin(); tensor != network_copy->end(); ++tensor){ //replace input tensors by their copies
  if(tensor->first != 0){
   const auto & tensor_name = tensor->second.getName(); //original tensor name
   auto iter = tensor_copies.find(tensor_name);
   if(iter == tensor_copies.end()){
    auto res = tensor_copies.emplace(std::make_pair(tensor_name,makeSharedTensor(*(tensor->second.getTensor()))));
    assert(res.second);
    iter = res.first;
    iter->second->rename(); //new (automatically generated) tensor name
    success = createTensor(iter->second,tens_elem_type); assert(success);
    success = copyTensor(iter->second->getName(),tensor_name); assert(success);
   }
   tensor->second.replaceStoredTensor(iter->second);
  }
 }
 success = sync(process_group); assert(success);
 return network_copy;
}
...
...
@@ -3271,11 +3348,13 @@ std::shared_ptr<TensorExpansion> NumServer::duplicateSync(const ProcessGroup & p
{
 unsigned int local_rank; //local process rank within the process group
 if(!process_group.rankIsIn(process_rank_,&local_rank)) return std::shared_ptr<TensorExpansion>(nullptr); //process is not in the group: Do nothing
 auto expansion_copy = makeSharedTensorExpansion("_" + expansion.getName(),expansion.isKet());
 assert(expansion_copy);
 for(auto component = expansion.cbegin(); component != expansion.cend(); ++component){
  auto dup_network = duplicateSync(process_group,*(component->network));
  assert(dup_network);
  auto success = expansion_copy->appendComponent(dup_network,component->coefficient);
  assert(success);
 }
 return expansion_copy;
}
...
...
src/exatn/num_server.hpp
/** ExaTN::Numerics: Numerical server
REVISION: 2021/10/30
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -664,6 +664,11 @@ public:
 bool initTensorsRndSync(TensorNetwork & tensor_network); //inout: tensor network
 /** Initializes all input tensors in a given tensor network expansion to a random value. **/
 bool initTensorsRnd(TensorExpansion & tensor_expansion); //inout: tensor network expansion
 bool initTensorsRndSync(TensorExpansion & tensor_expansion); //inout: tensor network expansion
 /** Initializes special tensors present in the tensor network. **/
 bool initTensorsSpecial(TensorNetwork & tensor_network); //inout: tensor network
...
...
@@ -962,14 +967,15 @@ public:
 /** Duplicates a given tensor network as a new tensor network copy.
     The name of the tensor network copy will be prepended with an underscore
     whereas all copies of the input tensors will be renamed with their unique (local) hashes. **/
 std::shared_ptr<TensorNetwork> duplicateSync(const TensorNetwork & network); //in: tensor network
 std::shared_ptr<TensorNetwork> duplicateSync(const ProcessGroup & process_group, //in: chosen group of MPI processes
                                              const TensorNetwork & network); //in: tensor network
 /** Duplicates a given tensor network expansion as a new tensor network expansion copy.
     The name of the tensor network expansion copy will be prepended with an underscore
     whereas all copies of the input tensors will be renamed with their unique (local) hashes. **/
 std::shared_ptr<TensorExpansion> duplicateSync(const TensorExpansion & expansion); //in: tensor expansion
 std::shared_ptr<TensorExpansion> duplicateSync(const ProcessGroup & process_group, //in: chosen group of MPI processes
...
src/exatn/optimizer.cpp
/** ExaTN:: Variational optimizer of a closed symmetric tensor network expansion functional
REVISION: 2021/10/30
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -74,12 +74,28 @@ std::shared_ptr<TensorExpansion> TensorNetworkOptimizer::getSolution(std::comple
}

std::shared_ptr<TensorExpansion> TensorNetworkOptimizer::getSolution(unsigned int root_id,
                                                                     std::complex<double> * average_expect_val) const
{
 assert(root_id < eigenvalues_.size());
 if(average_expect_val != nullptr) *average_expect_val = eigenvalues_[root_id];
 return eigenvectors_[root_id];
}

std::complex<double> TensorNetworkOptimizer::getExpectationValue() const
{
 return average_expect_val_;
}

std::complex<double> TensorNetworkOptimizer::getExpectationValue(unsigned int root_id) const
{
 assert(root_id < eigenvalues_.size());
 return eigenvalues_[root_id];
}

bool TensorNetworkOptimizer::optimize()
{
 return optimize(exatn::getDefaultProcessGroup());
...
...
@@ -92,6 +108,54 @@ bool TensorNetworkOptimizer::optimize(const ProcessGroup & process_group)
}

bool TensorNetworkOptimizer::optimize(unsigned int num_roots)
{
 return optimize(exatn::getDefaultProcessGroup(),num_roots);
}

bool TensorNetworkOptimizer::optimize(const ProcessGroup & process_group, unsigned int num_roots)
{
 bool success = true;
 auto original_operator = tensor_operator_;
 for(unsigned int root_id = 0; root_id < num_roots; ++root_id){
  success = initTensorsRndSync(*vector_expansion_); assert(success);
  bool synced = sync(process_group); assert(synced);
  success = optimize(process_group);
  synced = sync(process_group); assert(synced);
  if(!success) break;
  const auto expect_val = getExpectationValue();
  eigenvalues_.emplace_back(expect_val);
  auto solution_vector = duplicateSync(process_group,*vector_expansion_); assert(solution_vector);
  success = normalizeNorm2Sync(*solution_vector,1.0); assert(success);
  eigenvectors_.emplace_back(solution_vector);
  for(auto ket_net = solution_vector->begin(); ket_net != solution_vector->end(); ++ket_net){
   ket_net->network->markOptimizableNoTensors();
  }
  const auto num_legs = solution_vector->getRank();
  std::vector<std::pair<unsigned int, unsigned int>> ket_pairing(num_legs);
  for(unsigned int i = 0; i < num_legs; ++i) ket_pairing[i] = std::make_pair(i,i);
  std::vector<std::pair<unsigned int, unsigned int>> bra_pairing(num_legs);
  for(unsigned int i = 0; i < num_legs; ++i) bra_pairing[i] = std::make_pair(i,i);
  auto projector = makeSharedTensorOperator("EigenProjector" + std::to_string(root_id));
  for(auto ket_net = solution_vector->cbegin(); ket_net != solution_vector->cend(); ++ket_net){
   for(auto bra_net = solution_vector->cbegin(); bra_net != solution_vector->cend(); ++bra_net){
    success = projector->appendComponent(ket_net->network,bra_net->network,ket_pairing,bra_pairing,
                                         (-expect_val) * std::conj(ket_net->coefficient) * (bra_net->coefficient));
    assert(success);
   }
  }
  auto proj_hamiltonian = combineTensorOperators(*tensor_operator_,*projector); assert(proj_hamiltonian);
  tensor_operator_ = proj_hamiltonian;
 }
 tensor_operator_ = original_operator;
 return success;
}

bool TensorNetworkOptimizer::optimize_sd(const ProcessGroup & process_group)
{
 constexpr bool NORMALIZE_WITH_METRICS = true; //whether to normalize tensor network factors with metrics or not
...
...
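For orientation (a reading aid, not part of the source): the multi-root loop above
implements a standard deflation scheme, stated here in the same plain notation as
the optimizer header comment. Once root k converges with eigenvalue E_k and
normalized eigenvector x_k, the projector components assembled with coefficient
(-expect_val) * conj(c_ket) * c_bra amount to the operator update

 H_{k+1} = H_k - E_k * |x_k><x_k|

which shifts the found root's eigenvalue to zero while leaving all other eigenpairs
intact, so the next macro-iteration converges to the next extreme root (assuming
the targeted extreme eigenvalues are well separated from zero).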
@@ -160,6 +224,7 @@ bool TensorNetworkOptimizer::optimize_sd(const ProcessGroup & process_group)
 }
 //Prepare derivative environments for all optimizable tensors in the vector expansion:
 environments_.clear();
 std::unordered_set<std::string> tensor_names;
 //Loop over the tensor networks constituting the tensor network vector expansion:
 for(auto network = vector_expansion_->cbegin(); network != vector_expansion_->cend(); ++network){
...
...
src/exatn/optimizer.hpp
/** ExaTN:: Variational optimizer of a closed symmetric tensor network expansion functional
REVISION: 2021/10/27
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -10,7 +10,7 @@ Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
 vectors formed by the same tensor network expansion, this tensor network
 variational optimizer will optimize the tensor factors constituting the
 bra/ket tensor network vectors to arrive at an extremum of that functional,
 targeting its minimum (or maximum):
 E = <x|H|x> / <x|x>, where H is a tensor network operator, and x is a
 tensor network expansion that delivers an extremum to the functional.
**/
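For context (standard linear algebra, not part of the header): E above is the
Rayleigh quotient, whose stationary points satisfy H|x> = E|x>. Extremizing it
over a tensor network ansatz x therefore yields approximate eigenpairs of H,
which is what turns the multi-root optimize() below into an eigensolver.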
...
...
@@ -61,16 +61,33 @@ public:
 /** Resets the number of micro-iterations. **/
 void resetMicroIterations(unsigned int micro_iterations = DEFAULT_MICRO_ITERATIONS);
 /** Optimizes the given closed symmetric tensor network expansion
     functional for its minimum (or maximum). **/
 bool optimize();
 bool optimize(const ProcessGroup & process_group); //in: executing process group
 /** Returns the optimized tensor network expansion forming the optimal
     bra/ket vectors delivering an extremum to the functional. **/
 std::shared_ptr<TensorExpansion> getSolution(std::complex<double> * average_expect_val = nullptr) const;
 /** Returns the achieved expectation value of the optimized functional. **/
 std::complex<double> getExpectationValue() const;
 /** Performs a consecutive tensor network functional optimization
     delivering multiple extreme eigenvalues/eigenvectors. **/
 bool optimize(unsigned int num_roots); //in: number of extreme roots to find
 bool optimize(const ProcessGroup & process_group, //in: executing process group
               unsigned int num_roots); //in: number of extreme roots to find
 /** Returns a specific extreme root (eigenvalue/eigenvector pair). **/
 std::shared_ptr<TensorExpansion> getSolution(unsigned int root_id,
                                              std::complex<double> * average_expect_val = nullptr) const;
 /** Returns a specific extreme eigenvalue. **/
 std::complex<double> getExpectationValue(unsigned int root_id) const;
 /** Enables/disables coarse-grain parallelization over tensor networks. **/
 void enableParallelization(bool parallel = true);
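A compact usage sketch of the new multi-root API (hamiltonian, ansatz, and
accuracy are assumed to have been set up as in the tester further below):

exatn::TensorNetworkOptimizer optimizer(hamiltonian,ansatz,accuracy);
bool converged = optimizer.optimize(4); //four extreme roots via successive deflation
if(converged){
 for(unsigned int root_id = 0; root_id < 4; ++root_id){
  auto eigenvalue = optimizer.getExpectationValue(root_id); //extreme eigenvalue
  auto eigenvector = optimizer.getSolution(root_id); //corresponding tensor network expansion
 }
}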
...
...
@@ -95,6 +112,9 @@ private:
  std::complex<double> expect_value; //current expectation value
 };

 std::vector<std::shared_ptr<TensorExpansion>> eigenvectors_; //extreme eigenvectors
 std::vector<std::complex<double>> eigenvalues_; //extreme eigenvalues
 std::shared_ptr<TensorOperator> tensor_operator_; //tensor network operator
 std::shared_ptr<TensorExpansion> vector_expansion_; //tensor network expansion to optimize (bra/ket vector)
 unsigned int max_iterations_; //max number of macro-iterations
...
src/exatn/tests/NumServerTester.cpp
...
...
@@ -18,7 +18,7 @@
#include "errors.hpp"

//Test activation:
#define EXATN_TEST0
#define EXATN_TEST1
#define EXATN_TEST2
#define EXATN_TEST3
...
...
@@ -48,9 +48,9 @@
//#define EXATN_TEST27 //requires input file from source
//#define EXATN_TEST28 //requires input file from source
#define EXATN_TEST29
#define EXATN_TEST30
//#define EXATN_TEST31 //requires input file from source
#define EXATN_TEST32
#ifdef EXATN_TEST0
...
...
@@ -3603,6 +3603,7 @@ TEST(NumServerTester, ExcitedMCVQE) {
 const int max_bond_dim = std::min(static_cast<int>(std::pow(2,num_spin_sites/2)),bond_dim_lim);
 const int arity = 2;
 const std::string tn_type = "TTN"; //MPS or TTN
 const double accuracy = 2e-4;
//exatn::resetLoggingLevel(2,2); //debug
...
...
@@ -3646,11 +3647,11 @@ TEST(NumServerTester, ExcitedMCVQE) {
 success = exatn::createTensorsSync(*vec_net2,TENS_ELEM_TYPE); assert(success);
 success = exatn::initTensorsRndSync(*vec_net2); assert(success);
 std::cout << "Ok" << std::endl;

#if 0
 //Ground state search for the original Hamiltonian:
 std::cout << "Ground state search for the original Hamiltonian:" << std::endl;
 exatn::TensorNetworkOptimizer::resetDebugLevel(1,0);
 exatn::TensorNetworkOptimizer optimizer0(hamiltonian0,vec_tns0,accuracy);
 success = exatn::sync(); assert(success);
 bool converged = optimizer0.optimize();
 success = exatn::sync(); assert(success);
...
...
@@ -3674,7 +3675,7 @@ TEST(NumServerTester, ExcitedMCVQE) {
                                     ket_pairing,bra_pairing,-expect_val0);
 auto hamiltonian1 = exatn::combineTensorOperators(*hamiltonian0,*projector0);
 exatn::TensorNetworkOptimizer::resetDebugLevel(1,0);
 exatn::TensorNetworkOptimizer optimizer1(hamiltonian1,vec_tns1,accuracy);
 success = exatn::sync(); assert(success);
 converged = optimizer1.optimize();
 success = exatn::sync(); assert(success);
...
...
@@ -3694,7 +3695,7 @@ TEST(NumServerTester, ExcitedMCVQE) {
                                     ket_pairing,bra_pairing,-expect_val1);
 auto hamiltonian2 = exatn::combineTensorOperators(*hamiltonian1,*projector1);
 exatn::TensorNetworkOptimizer::resetDebugLevel(1,0);
 exatn::TensorNetworkOptimizer optimizer2(hamiltonian2,vec_tns2,accuracy);
 success = exatn::sync(); assert(success);
 converged = optimizer2.optimize();
 success = exatn::sync(); assert(success);
...
...
@@ -3706,6 +3707,28 @@ TEST(NumServerTester, ExcitedMCVQE) {
 }
 const auto expect_val2 = optimizer2.getExpectationValue();
 std::cout << "Expectation value = " << expect_val2 << std::endl;
#endif

 //Ground and three excited states in one call:
 std::cout << "Ground and three excited states search for the original Hamiltonian:" << std::endl;
 exatn::TensorNetworkOptimizer::resetDebugLevel(1,0);
 vec_net0->markOptimizableAllTensors();
 success = exatn::initTensorsRndSync(*vec_tns0); assert(success);
 exatn::TensorNetworkOptimizer optimizer3(hamiltonian0,vec_tns0,accuracy);
 success = exatn::sync(); assert(success);
 bool converged = optimizer3.optimize(4);
 success = exatn::sync(); assert(success);
 if(exatn::getProcessRank() == 0){
  if(converged){
   std::cout << "Search succeeded:" << std::endl;
   for(unsigned int root_id = 0; root_id < 4; ++root_id){
    std::cout << "Expectation value " << root_id << " = "
              << optimizer3.getExpectationValue(root_id) << std::endl;
   }
  }else{
   std::cout << "Search failed!" << std::endl;
   assert(false);
  }
 }
}
//Synchronize:
...
...
src/numerics/tensor_operator.cpp
/** ExaTN::Numerics: Tensor operator
REVISION: 2021/10/29
Copyright (C) 2018-2021 Dmitry I. Lyakh (Liakh)
Copyright (C) 2018-2021 Oak Ridge National Laboratory (UT-Battelle) **/
...
...
@@ -31,14 +31,7 @@ TensorOperator::TensorOperator(const std::string & name,
                               const std::complex<double> coefficient):
 name_(name)
{
 auto success = appendComponent(ket_network,bra_network,ket_pairing,bra_pairing,coefficient);
 assert(success);
}
...
...
@@ -192,6 +185,23 @@ bool TensorOperator::appendSymmetrizeComponent(std::shared_ptr<Tensor> tensor,
}
bool TensorOperator::appendComponent(std::shared_ptr<TensorNetwork> ket_network,
                                     std::shared_ptr<TensorNetwork> bra_network,
                                     const std::vector<std::pair<unsigned int, unsigned int>> & ket_pairing,
                                     const std::vector<std::pair<unsigned int, unsigned int>> & bra_pairing,
                                     const std::complex<double> coefficient)
{
 auto shifted_bra_pairing = bra_pairing;
 const auto shift = ket_network->getRank();
 for(auto & pairing: shifted_bra_pairing) pairing.second += shift;
 auto combined_network = makeSharedTensorNetwork(*ket_network,true,ket_network->getName());
 combined_network->conjugate();
 auto success = combined_network->appendTensorNetwork(TensorNetwork(*bra_network,true,bra_network->getName()),{});