Commit d000dfd7 authored by Fuchsi*

quickstart (qtt) example in new documentation and in python

parent 8df6eb99
......@@ -17,6 +17,7 @@
!*.sh
!*.rb
!*.py
!*.md
!*.xml
......
/**
* @file
* @short the source code to the "Quick-Start" guide
*/
#include <xerus.h>
int main() {
......
import xerus as xe

# construct the stiffness matrix A using a fill function
def A_fill(idx):
    if idx[0] == idx[1]:
        return 2
    elif idx[1] == idx[0]+1 or idx[1]+1 == idx[0]:
        return -1
    else:
        return 0

A = xe.Tensor([512,512], A_fill)

# and divide it by h^2 = multiply it with N^2
A *= 512*512

# reinterpret the 512x512 tensor as a 2^18 tensor
# and create the (Q)TT decomposition of it
A.reinterpret_dimensions([2,]*18)
ttA = xe.TTOperator(A)

# and verify its rank
print("ttA ranks:", ttA.ranks())

# the right hand side of the equation both as Tensor and in (Q)TT format
b = xe.Tensor([512], lambda: 1)
b.reinterpret_dimensions([2,]*9)
ttb = xe.TTTensor(b)

# construct a random initial guess of rank 3 for the ALS algorithm
ttx = xe.TTTensor.random([2,]*9, [3,]*8)

# and solve the system with the default ALS algorithm for symmetric positive definite operators
xe.ALS_SPD(ttA, ttx, ttb)

# to perform arithmetic operations we need to define some indices
i,j,k = xe.indices(3)

# calculate the residual of the just solved system to evaluate its accuracy
# here i^9 denotes a multi-index named i of dimension 9 (i.e. spanning 9 indices of the respective tensors)
residual = xe.frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) )
print("residual:", residual)

# as a comparison solve the system exactly using the Tensor / operator
x = xe.Tensor()
x(j^9) << b(i^9) / A(i^9, j^9)

# and calculate the Frobenius norm of the difference
# here i&0 denotes a multi-index large enough to fully index the respective tensors
# the subtraction of different formats will default to Tensor subtraction such that
# the TTTensor ttx will be evaluated to a Tensor prior to subtraction
print("error:", xe.frob_norm(x(i&0) - ttx(i&0)))
---
layout: post
title: "QTT Decomposition"
date: 1000-12-10
topic: "Quickstart"
section: "Examples"
---
__tabsInit
# QTT Decomposition
This guide can be used as a quick start into `xerus`. It introduces some basic functionality of the library,
demonstrates the general layout and is enough for very basic applications. It is recommended to also have a look
at the more detailed guides for all classes one wishes to use though - or at the doxygen class documentation for details on all functions.
It is assumed that you have already obtained and compiled the library itself as well as know how to link against it.
If this is not the case, please refer to the [building xerus](building_xerus) page.
In the following we will solve a FEM equation arising from the heat equation using a QTT decomposition and the ALS algorithm.
To start we create the stiffness matrix as a dense (i.e. not sparse or decomposed) tensor.
To define the entries we pass a function to the constructor of the `Tensor` object that will be
called for every entry with a vector of size_t integers defining the indices of the current entry.
As it is simpler to think of the stiffness matrix in its original form rather than in the QTT form, we will
set the dimensions to 512x512.
__tabsStart
~~~ cpp
xerus::Tensor A({512,512}, [](const std::vector<size_t> &idx){
	if (idx[0] == idx[1]) {
		return 2.0;
	} else if (idx[1] == idx[0]+1 || idx[1]+1 == idx[0]) {
		return -1.0;
	} else {
		return 0.0;
	}
});
~~~
__tabsMid
~~~ python
def A_fill(idx):
    if idx[0] == idx[1]:
        return 2
    elif idx[1] == idx[0]+1 or idx[1]+1 == idx[0]:
        return -1
    else:
        return 0

A = xerus.Tensor([512,512], A_fill)
~~~
__tabsEnd
To account for the @f$ h^2 @f$ factor that we have ignored so far we simply multiply the operator by @f$ N^2 @f$.
__tabsStart
~~~ cpp
A *= 512*512;
~~~
__tabsMid
~~~ python
A *= 512*512
~~~
__tabsEnd
By reinterpreting the dimension and thus effectively treating the tensor as a @f$ 2^{18} @f$ instead of a @f$ 512^2 @f$ tensor,
the decomposition into a `TTOperator` will give us the stiffness matrix in the QTT format.
__tabsStart
~~~ cpp
A.reinterpret_dimensions(std::vector<size_t>(18, 2));
xerus::TTOperator ttA(A);
~~~
__tabsMid
~~~ python
A.reinterpret_dimensions([2,]*18)
ttA = xerus.TTOperator(A)
~~~
__tabsEnd
As the Laplace operator is representable exactly in a low-rank QTT format, the rank of this `ttA` should be low after this construction.
We can verify this by printing the ranks:
__tabsStart
~~~ cpp
using xerus::misc::operator<<; // to be able to pipe vectors
std::cout << "ttA ranks: " << ttA.ranks() << std::endl;
~~~
__tabsMid
~~~ python
print("ttA ranks:", ttA.ranks())
~~~
__tabsEnd
For the right-hand side we perform similar operations to obtain a QTT decomposed vector @f$ b_i = 1 \forall i @f$.
As the generating function needs no index information, we create a `[]()->double` lambda function:
__tabsStart
~~~ cpp
xerus::Tensor b({512}, []() {
	return 1.0;
});
b.reinterpret_dimensions(std::vector<size_t>(9, 2));
xerus::TTTensor ttb(b);
~~~
__tabsMid
~~~ python
b = xerus.Tensor([512], lambda: 1)
b.reinterpret_dimensions([2,]*9)
ttb = xerus.TTTensor(b)
~~~
__tabsEnd
To have an initial vector for the ALS algorithm we create a random TTTensor of the desired rank
(3 in this case - note that this is the exact rank of the solution).
__tabsStart
~~~ cpp
xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3));
~~~
__tabsMid
~~~ python
ttx = xerus.TTTensor.random([2,]*9, [3,]*8)
~~~
__tabsEnd
With these three tensors (the operator `ttA`, the right-hand-side `ttb` and the initial guess `ttx`)
we can now perform the ALS algorithm to solve for `ttx` (note that the `_SPD` suffix selects the variant of the ALS
that assumes the given operator to be symmetric positive definite):
__tabsStart
~~~ cpp
xerus::ALS_SPD(ttA, ttx, ttb);
~~~
__tabsMid
~~~ python
xerus.ALS_SPD(ttA, ttx, ttb)
~~~
__tabsEnd
To verify the calculation performed by the ALS we need some arithmetic operations.
As these require the definition of (relative) index orderings in the tensors, we define some indices:
__tabsStart
~~~ cpp
xerus::Index i,j,k;
~~~
__tabsMid
~~~ python
i,j,k = xerus.indices(3)
~~~
__tabsEnd
and use these in calculations like `A(i,j)*x(j) - b(i)`. Note though that our tensors are of a higher
degree due to the QTT decomposition, so we need to specify the corresponding dimension of the
multi-indices i, j and k, e.g. with `i^9` to denote a multi-index of dimension 9.
__tabsStart
~~~ cpp
double residual = frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) );
std::cout << "residual: " << residual << std::endl;
~~~
__tabsMid
~~~ python
residual = xerus.frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) )
print("residual:", residual)
~~~
__tabsEnd
To get the actual error of the ALS solution (rather than just the residual) we can calculate the exact solution
of the system using the dense `Tensor` objects A, x and b and the `/` operator:
__tabsStart
~~~ cpp
xerus::Tensor x;
x(j^9) = b(i^9) / A(i^9, j^9);
~~~
__tabsMid
~~~ python
x = xerus.Tensor()
x(j^9) << b(i^9) / A(i^9, j^9)
~~~
__tabsEnd
In the comparison of this exact solution `x` and the ALS solution `ttx` the TTTensor will automatically be
cast to a Tensor object to allow the subtraction. Here we can use another index shorthand: `i&0` which denotes
a multiindex of large enough dimension to fully index the respective tensor object.
__tabsStart
~~~ cpp
std::cout << "error: " << frob_norm(x(i&0) - ttx(i&0)) << std::endl;
~~~
__tabsMid
~~~ python
print("error:", xerus.frob_norm(x(i&0) - ttx(i&0)))
~~~
__tabsEnd
The expected output of this whole program now looks something like:
~~~
ttA ranks: { 3, 3, 3, 3, 3, 3, 3, 3 }
residual: 4.93323e-11
error: 1.48729e-20
~~~
We could now also print the solution with `x.to_string()` or store it in a file to plot it with another program (see the sketch below).
We could change our operator to define other FEM systems and many more things. As this is only a very short
introduction though, we will stop here and refer the interested reader to either the following more detailed guides or
to their own curiosity.
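As a minimal sketch of the file export mentioned above (the file name is arbitrary and the linear element access `x[k]` is the element access described in the Tensor documentation):
~~~ python
# write index/value pairs of the exact solution x to a file for plotting
# note: x has dimensions [2,]*9 at this point, but linear indexing x[k]
# still addresses all 512 entries in order
with open("solution.dat", "w") as out:
    for k in range(512):
        out.write("{}\t{}\n".format(k, x[k]))
~~~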
The full source code of this tutorial looks as follows:
__tabsStart
~~~ cpp
{% include examples/qtt.cpp %}
~~~
__tabsMid
~~~ python
{% include examples/qtt.py %}
~~~
__tabsEnd
---
layout: post
title: "General Tensor Networks"
date: 2000-11-16
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# General Tensor Networks
---
layout: post
title: "Riemannian Algorithms"
date: 2000-11-18
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# TTTangentVectors and Riemannian Algorithms
---
layout: post
title: "Tensor Completion / Recovery"
date: 2000-11-20
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Algorithms for Tensor Competion and Recovery
---
layout: post
title: "ALS and Performance Data"
date: 2000-11-22
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Alternating Algorithms and Performance Data
---
layout: post
title: "TT-Tensors"
date: 2000-11-24
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Tensor Train / MPS Tensors and Operators
---
layout: post
title: "Decompositions and Linear Equations"
date: 2000-11-26
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Decompositions and Linear Equations
---
layout: post
title: "Indices and Equations"
date: 2000-11-28
topic: "Basic Usage"
section: "Documentation"
---
......@@ -179,6 +179,11 @@ u(i&0) << A(i/2, j/2) * v(j&0)
~~~
__tabsEnd
Notice how the declarations of multi-indices with `&` or `/` are always relative to the current tensor. As such, an index (such as
`j` in the above example) can appear with different modifiers. The important thing is that each occurrence of every index spans
an equal number of modes and that these modes are of equal dimensions (in the example the placement of `j` requires
that the degree of `v` is equal to half the degree of `A` and that its dimensions are equal to the second half of the dimensions of `A`).
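As a minimal sketch of such relative modifiers (the tensor names and dimensions here are illustrative assumptions, not part of the example above):
~~~ python
import xerus as xe

i, j = xe.indices(2)

# illustrative tensors: A with four modes of dimension 2, v with two modes of dimension 2
A = xe.Tensor.random([2,2,2,2])
v = xe.Tensor.random([2,2])
u = xe.Tensor()

# j**2 spans two modes in both A and v; i&0 spans all remaining modes of u,
# so u becomes a tensor of degree 2
u(i&0) << A(i**2, j**2) * v(j&0)
~~~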
## Blockwise Construction of Tensors
A common use for indexed expressions is to construct tensors in a blockwise fashion. In the following example we were able to
calculate the tensor `comp` whenever the first index was fixed, either by numerical construction (`A` and `B`) or by showing
......@@ -191,10 +196,10 @@ __tabsStart
// comp(0, :,:) = A+identity
// comp(1, :,:) = B+identity
// comp(2, :,:) = identity
comp(i, j, k) =
	  xerus::Tensor::dirac({3}, 0)(i) * A(j, k)
	+ xerus::Tensor::dirac({3}, 1)(i) * B(j, k)
	+ xerus::Tensor::ones({3})(i) * xerus::Tensor::identity({64,64})(j, k);
comp(i, j^2) =
	  xerus::Tensor::dirac({3}, 0)(i) * A(j^2)
	+ xerus::Tensor::dirac({3}, 1)(i) * B(j^2)
	+ xerus::Tensor::ones({3})(i) * xerus::Tensor::identity({64,64})(j^2);
~~~
__tabsMid
~~~ python
......@@ -202,9 +207,9 @@ __tabsMid
# comp(0, :,:) = A+identity
# comp(1, :,:) = B+identity
# comp(2, :,:) = identity
comp(i, j, k) << \
	  xerus.Tensor.dirac([3], 0)(i) * A(j, k) \
	+ xerus.Tensor.dirac([3], 1)(i) * B(j, k) \
	+ xerus.Tensor.ones([3])(i) * xerus.Tensor.identity([64,64])(j, k)
comp(i, j**2) << \
	  xerus.Tensor.dirac([3], 0)(i) * A(j**2) \
	+ xerus.Tensor.dirac([3], 1)(i) * B(j**2) \
	+ xerus.Tensor.ones([3])(i) * xerus.Tensor.identity([64,64])(j**2)
~~~
__tabsEnd
---
layout: post
title: "The Tensor Class"
date: 2000-11-30
topic: "Basic Usage"
section: "Documentation"
---
......@@ -194,7 +194,7 @@ V[[1,1]] = 1.0 # equivalently: V[3] = 1.0
~~~
__tabsEnd
Explicitly constructing a tensor in a similar blockwise fashion will be covered in the tutorial on [indices and equations](indices)
## Sparse and Dense Representations
......@@ -544,8 +544,8 @@ __tabsEnd
The average user of `xerus` does not need to worry about this internal mechanism. This changes though as soon as you need access
to the underlying data structures, e.g. to call `blas` or `lapack` routines not supported by `xerus` or to convert objects from
other libraries to and from `xerus::Tensor` objects. If you do, make sure to check out the documentation
for the following functions (c++ only):
* [.has_factor()](\ref xerus::Tensor::has_factor()) and [.apply_factor()](\ref xerus::Tensor::apply_factor())
* [.get_dense_data()](\ref xerus::Tensor::get_dense_data()); [.get_unsanitized_dense_data()](\ref xerus::Tensor::get_unsanitized_dense_data()); [.get_internal_dense_data()](\ref xerus::Tensor::get_internal_dense_data())
* [.get_sparse_data()](\ref xerus::Tensor::get_sparse_data()); [.get_unsanitized_sparse_data()](\ref xerus::Tensor::get_unsanitized_sparse_data()); [.get_internal_sparse_data()](\ref xerus::Tensor::get_internal_sparse_data())
.footer-menu {
	/* position: relative; */
	display: inline-block;
}
.footer-menu:focus {
	/* clicking on label should toggle the menu */
	pointer-events: none;
}
.footer-menu:focus + .footer-menu-content {
	/* opacity is 1 in opened state (see below) */
	opacity: 1;
	visibility: visible;
	/* don't let pointer-events affect descendant elements */
	pointer-events: auto;
}
.footer-menu-content {
	position: fixed;
	top: 0;
	left: 0;
	bottom: 0;
	right: 0;
	z-index: -10000;
	text-align: left;
	/* use opacity to fake immediate toggle */
	opacity: 0;
	visibility: hidden;
	transition: visibility 0.5s;
}
.navbar-header {
	z-index: 10;
}
.navbar-footer {
	z-index: 10;
}