Commit 46e85b21 authored by Benjamin Huber

updated testcases, tutorial and docu for new ALS_SPD variant instead of old ALS

parent 126ae781
Pipeline #108 failed with stages
......@@ -10,6 +10,8 @@ Potentially breaking changes are marked with an exclamation point '!' at the beg
* ! Reworked the IndexedTensor* index assignment.
* ! Changed resize_dimension to allow slate insertion at the first position.
* Added TTTensor::random with callback function to manipulate the singular values of all matricisations.
* Rewrote the ALS algorithm for better readability.
* Added fully functional (multi-site) DMRG and alternating steepest descent algorithms.
* Support for low level factorisation calls for Tensor class.
* Several bug fixes, including SVD factor handling, SparseTensor summation, Tensor resize_dimension, TN evaluation,....
* Added several test cases.
......
#=================================================================================================
# Compiler Options
#=================================================================================================
# Xerus can be compiled either with the G++ or the Clang++ frontend of the LLVM. The default is
# to use G++. Uncomment the next line to use Clang++ instead.
# USE_CLANG = TRUE
# Xerus can be compiled either with G++ or the Clang++ frontend of the LLVM.
# Set the CXX variable to the one you want to use.
CXX = g++
# CXX = clang++
#=================================================================================================
# Optimization
......
......@@ -7,13 +7,3 @@ be compared and identified uniquely, they store a unique id of which the first 1
10 bits overflow) it can thus lead to collisions, and indices that were shared between threads and were meant to be separate suddenly appear equal to all algorithms.
## R-Value References
Any movable object of the type `xerus::IndexedTensorMovable<T>` is assumed to be manipulable by the receiving functions without influence on any outside storage.
Storing an r-value reference to such an object (`xerus::IndexedTensorMovable<T> &&a = ...`) will thus break the library. Badly. Just don't do it...
This is because these objects typically arise as the return values of operators on indexed tensors. E.g. `A(i,j)*B(j,k)` will create a temporary of exactly this type. Being of
the type `xerus::IndexedTensorMovable<T>` and an r-value reference signals to further functions (like further multiplications) that this is a temporary and may thus be recycled to
reduce the total number of temporary objects created.
......@@ -206,16 +206,19 @@ namespace xerus {
}
};
/// default variant of the single-site ALS algorithm using the lapack solver
/// default variant of the single-site ALS algorithm for non-symmetric operators using the lapack solver
extern const ALSVariant ALS;
/// default variant of the single-site ALS algorithm for symmetric positive-definite operators using the lapack solver
extern const ALSVariant ALS_SPD;
/// default variant of the two-site DMRG algorithm using the lapack solver
/// default variant of the two-site DMRG algorithm for non-symmetric operators using the lapack solver
extern const ALSVariant DMRG;
/// default variant of the two-site DMRG algorithm for symmetric positive-definite operators using the lapack solver
extern const ALSVariant DMRG_SPD;
/// default variant of the alternating steepest descent
/// default variant of the alternating steepest descent for non-symmetric operators
extern const ALSVariant ASD;
/// default variant of the alternating steepest descent for symmetric positive-definite operators
extern const ALSVariant ASD_SPD;
}
......@@ -2,12 +2,8 @@
# Uses: USE_CLANG,
# Set Compiler used
ifeq ($(strip $(CXX)),)
ifdef USE_CLANG
CXX = clang++
else
CXX = g++
endif
ifneq (,$(findstring clang, $(CXX)))
USE_CLANG = true
endif
......
......@@ -128,13 +128,12 @@ UNIT_TEST(ALS, tutorial,
C(i&0) = A(i/2, j/2) * B(j&0);
X = xerus::TTTensor::random(stateDims, 2, rnd, dist);
xerus::ALSVariant ALSb(xerus::ASD_SPD);
PerformanceData pd(true);
PerformanceData pd(false);
// ALSb.printProgress = true;
// ALSb.useResidualForEndCriterion = true;
// std::vector<value_t> perfdata;
ALSb(A, X, C, 1e-12, pd);
ALS_SPD(A, X, C, 1e-12, pd);
TEST(frob_norm(A(i/2, j/2)*X(j&0) - C(i&0)) < 1e-4);
......
......@@ -32,7 +32,7 @@ UNIT_TEST(Tutorials, quick_start,
std::normal_distribution<double> dist (0.0, 1.0);
xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3), rnd, dist);
xerus::ALS(ttA, ttx, ttb);
xerus::ALS_SPD(ttA, ttx, ttb);
xerus::Index i,j,k;
......
......@@ -400,7 +400,7 @@ namespace xerus {
TensorNetwork ALSVariant::construct_local_RHS(ALSVariant::ALSAlgorithmicData& _data) const {
Index cr1, cr2, cr3, cr4, r1, r2, r3, r4, n1, n2, n3, n4, x;
TensorNetwork BTilde;
if (assumeSPD) {
if (assumeSPD || !_data.A) {
BTilde(n1,r1) = _data.rhsCache.left.back()(r1,n1);
for (size_t p=0; p<sites; ++p) {
BTilde(n1^(p+1), n2, cr1) = BTilde(n1^(p+1), r1) * _data.b.get_component(_data.currIndex+p)(r1, n2, cr1);
......
......@@ -41,8 +41,8 @@ int main() {
std::normal_distribution<double> dist (0.0, 1.0);
xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3), rnd, dist);
// and solve the system with the default ALS algorithm
xerus::ALS(ttA, ttx, ttb);
// and solve the system with the default ALS algorithm for symmetric positive definite operators
xerus::ALS_SPD(ttA, ttx, ttb);
// to perform arithmetic operations we need to define some indices
xerus::Index i,j,k;
......
......@@ -48,9 +48,10 @@ of a random number generator and distribution that will be used for the creation
\until ttx
With these three tensors (the operator `ttA`, the right-hand-side `ttb` and the initial guess `ttx`)
we can now perform the ALS algorithm to solve for `ttx`
we can now perform the ALS algorithm to solve for `ttx` (note that the _SPD suffix selects the variant of the ALS
that assumes the given operators are symmetric positive definite)
\skipline xerus::ALS
\skipline xerus::ALS_SPD
To verify the calculation performed by the ALS we will need to perform some arithmetic operations.
As these require the definition of (relative) index orderings in the tensors, we define some indices
......
......@@ -41,13 +41,11 @@ int main() {
// the rank of A increased in the last operation:
std::cout << "The rank of A*A^T is " << A.ranks() << std::endl;
// create a new variant of the ALS algorithm
xerus::ALSVariant ALSb(xerus::ALS);
// that will print its progress
ALSb.printProgress = true;
// create a performance data object to keep track of the current algorithm progress (and print it to cout)
xerus::PerformanceData perfData(true);
// apply the ALS algorithm to the new system A*X=B and try to converge up to a relative error of @f$ 10^{-4} @f$
ALSb(A, X, B, 1e-4);
ALS_SPD(A, X, B, 1e-4, perfData);
// as the ALS will not modify the rank of X, the residual will most likely not be zero in the end
// here i&n denotes that i should be a multiindex spanning all but n indices of the given tensor
......