Commit 8df6eb99 authored by Ben Huber's avatar Ben Huber

indices and equations tutorial

parent 5312807e
Pipeline #719 passed with stages in 8 minutes and 11 seconds
@@ -2,7 +2,6 @@ module Jekyll
class TabsConverter < Converter
safe true
priority :low
@@ctr = 0
def matches(ext)
ext =~ /^\.md$/i
@@ -13,11 +12,14 @@ module Jekyll
end
def convert(content)
@@ctr += 1
content.gsub('<p>__tabsInit</p>', "<input id=\"tab1\" type=\"radio\" checked><input id=\"tab2\" type=\"radio\">")
content.gsub('<p>__tabsInit</p>', "<input id=\"tab1\" type=\"radio\" name=\"tabs\" checked><input id=\"tab2\" type=\"radio\" name=\"tabs\">")
.gsub('<p>__tabsStart</p>', "<div id=\"tabs\"><label for=\"tab1\">C++</label><label for=\"tab2\">Python</label><div id=\"content\"><section id=\"content1\">")
.gsub('<p>__tabsMid</p>', "</section><section id=\"content2\">")
.gsub('<p>__tabsEnd</p>', "</section></div></div>")
.gsub('<p>__dangerStart</p>', "<div class=\"alert alert-danger\">")
.gsub('<p>__dangerEnd</p>', "</div>")
.gsub('<p>__warnStart</p>', "<div class=\"alert alert-warning\">")
.gsub('<p>__warnEnd</p>', "</div>")
end
end
end
@@ -15,7 +15,7 @@ section: "Documentation"
The library uses a macro of the form `XERUS_LOG(level, msg)` to print messages to `cout`. It allows the use of arbitrary log levels without any need to declare them beforehand
and supports the typical piping syntax for the messages.
~~~.cpp
~~~ cpp
XERUS_LOG(als_warning, "The ALS encountered a mishap " << variable_1 << " <= " << variable_2);
~~~
The following warning levels are predefined: `fatal`, `critical`, `error`, `warning`, `info`, `debug`. The default config file `config.mk.default` defines the preprocessor
@@ -32,16 +32,16 @@ in the `errors/` subfolder (if it exists) on any `error`, `critical` or `fatal`
The `XERUS_REQUIRE(condition, additional_msg)` macro replaces assertions in the `xerus` library. It uses the above `XERUS_LOG` functionality and can thus use piping-style messages just like the `XERUS_LOG`
macro. It is equivalent to the following definition:
~~~.cpp
~~~ cpp
XERUS_REQUIRE(condition, additional_msg) = if (!(condition)) { XERUS_LOG(fatal, additional_msg); }
~~~
There is a large number of such checks in the library. All of them can be turned off by defining `DEBUG += -D XERUS_DISABLE_RUNTIME_CHECKS` in the `config.mk` file.
## UNIT_TEST
## Unit Tests
Compiling with `make test` creates an executable that includes all functions defined within `xerus::UnitTest` objects.
~~~.cpp
~~~ cpp
static xerus::misc::UnitTest objectName("Group", "Name", [](){
// source code of the test
// likely includes (several) tests of the form TEST(condition);
@@ -65,7 +65,7 @@ The information of this callstack is only available if the application was compi
The exceptions used by `xerus` have the additional capability to accept piped information that will be included in the `.what()` string. To include a callstack, it thus suffices to
write
~~~.cpp
~~~ cpp
XERUS_THROW(xerus::misc::generic_error() << "callstack:\n" << xerus::misc::get_call_stack());
~~~
The used macro will additionally include the source file and line as well as the function name in which the exception was thrown.
@@ -57,14 +57,14 @@ but can become significant when very small tensors are being used and the time f
In such cases it can be useful to replace such equations (especially ones as simple as the above) with explicit statements
of contractions and reshuffles. For the above equation that would simply be
~~~.cpp
~~~ cpp
contract(A, B, false, C, false, 1);
~~~
i.e. read as: contract two tensors and store the result in `A`; left-hand side `B`, not transposed; right-hand side `C`, not transposed; contract a single mode.
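For degree-2 tensors, contracting a single mode in this way is just an ordinary matrix product. The following plain-Python sketch (illustration only, not xerus code, with made-up 2x2 data) spells out the index sum that such a call performs:

~~~ python
# Semantics of contract(A, B, false, C, false, 1) for matrices:
# A[i][k] = sum_j B[i][j] * C[j][k]
B = [[1, 2], [3, 4]]
C = [[5, 6], [7, 8]]
A = [[sum(B[i][j] * C[j][k] for j in range(2)) for k in range(2)]
     for i in range(2)]
# A == [[19, 22], [43, 50]]
~~~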
If it is necessary to reshuffle a tensor to be able to contract it in such a way, e.g. `A(i,j,k) = B(i,k,j)`, this can be done
with the `reshuffle` function.
~~~.cpp
~~~ cpp
reshuffle(A, B, {0,2,1});
~~~
......
@@ -5,9 +5,206 @@ date: 2000-11-28 16:25:06 +0000
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Indices and Equations
One of the main features of `xerus` is the ability to write arbitrary tensor contractions in an Einstein-notation-like form.
This allows one to write very readable source code that closely resembles the mathematical formulas behind the algorithms.
## Indices
To write indexed equations, we first have to declare the variables that we will use as indices.
__tabsStart
~~~ cpp
xerus::Index i,j,k,l;
~~~
__tabsMid
~~~ python
i,j,k,l = xerus.indices(4)
~~~
__tabsEnd
## Simple Equations
The most basic equations utilizing indexed expressions are reshufflings and contractions. Assuming `A` is a tensor of degree 2
and `b` is a tensor of degree 1, we can write:
__tabsStart
~~~ cpp
// transposing a matrix
A(i,j) = A(j,i);
// a simple matrix-vector product
xerus::Tensor c;
c(i) = A(i,j) * b(j);
~~~
<br>
__tabsMid
~~~ python
# transposing a matrix
A(i,j) << A(j,i)
# a simple matrix-vector product
c = xerus.Tensor()
c(i) << A(i,j) * b(j)
~~~
As Python does not allow overriding the `=` operator, we fall back to another one: read the left-shift operator `<<` as assignment.
__tabsEnd
In analogy to the Einstein notation, such an expression contracts those modes on the right-hand side that are indexed by the same
index and assigns the result to the left-hand side. If necessary, the result is reshuffled first to obtain the index order denoted on the
left-hand side.
The left-hand side (`c` in the above example) is not required to have the right degree or dimensions beforehand. The type of `c`, on the
other hand, does change the meaning of the equation and therefore has to be set correctly. E.g. if `c` is a `xerus::TensorNetwork`, no
contraction is performed, as `A(i,j)*b(j)` is in itself a valid tensor network. See also the tutorials on [TT-Tensors](tttensors)
and [Tensor Networks](tensornetworks) for details on the respective assignments.
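To make the contraction rule concrete, here is a plain-Python sketch (independent of xerus, with made-up data) of the matrix-vector product from above: the shared index `j` is summed over, the free index `i` remains.

~~~ python
# c(i) = A(i,j) * b(j): sum over the shared index j, keep the free index i
A = [[1, 2], [3, 4]]
b = [5, 6]
c = [sum(A[i][j] * b[j] for j in range(len(b))) for i in range(len(A))]
# c == [17, 39]
~~~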
__warnStart
Unless runtime checks have explicitly been disabled during compilation of `xerus` (see [Optimizations](optimization)), invalid
indexed expressions will produce runtime errors (in the form of `xerus::misc::generic_error`s being thrown as exceptions).
__tabsStart
~~~ cpp
try {
c(j) = A(i,j) * b(j); // runtime error!
} catch(xerus::misc::generic_error &e) {
std::cout << "something went wrong: " << e.what() << std::endl;
}
~~~
__tabsMid
~~~ python
try:
c(j) << A(i,j) * b(j) # runtime error!
except xerus.misc.generic_error as err:
print("something went wrong:", err.what())
~~~
__tabsEnd
__warnEnd
__dangerStart
**Warning!** While it is possible to assign an indexed expression to a (non-indexed) variable, you should **NOT** do it.
Unless you know very well what you are doing, this **will** lead to unexpected results!
__tabsStart
~~~ cpp
// do NOT do this!
auto evil = A(i,j) * b(j);
~~~
__tabsMid
~~~ python
# do NOT do this!
evil = A(i,j) * b(j)
~~~
__tabsEnd
__dangerEnd
Summation, subtraction, multiplication by scalars and calculating a norm all work as one would expect in such equations.
__tabsStart
~~~ cpp
// perform a single gradient step of stepsize alpha
// x' = x + \alpha * A^T * (b - A*x)
x(i) = x(i) + alpha * A(j,i) * (b(j) - A(j,k)*x(k));
~~~
__tabsMid
~~~ python
# perform a single gradient step of stepsize alpha
# x' = x + \alpha * A^T * (b - A*x)
x(i) << x(i) + alpha * A(j,i) * (b(j) - A(j,k)*x(k))
~~~
__tabsEnd
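Written out with explicit index sums, the gradient step above reads as follows. This plain-Python sketch (illustration only, with made-up 2x2 data, not xerus code) mirrors the indexed equation term by term:

~~~ python
# x' = x + alpha * A^T * (b - A*x)
alpha = 0.5
A = [[2.0, 0.0], [0.0, 2.0]]
b = [4.0, 4.0]
x = [1.0, 1.0]
# residual(j) = b(j) - A(j,k) * x(k)
residual = [b[j] - sum(A[j][k] * x[k] for k in range(2)) for j in range(2)]
# x(i) <- x(i) + alpha * A(j,i) * residual(j)   (note the transposed A)
x = [x[i] + alpha * sum(A[j][i] * residual[j] for j in range(2))
     for i in range(2)]
# x == [3.0, 3.0]
~~~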
## High-Dimensional Equations
Writing high-dimensional equations in Einstein notation with individual indices can be cumbersome. What is more, in generic
functions we might not even know the degree of the tensor arguments before runtime. To still be able to use
indexed equations in these settings, `xerus` provides multi-indices that can span anything between zero and all modes of a tensor.
To denote such multi-indices, the regular indices declared as above are modified by operators inside the indexed expressions.
__tabsStart
~~~ cpp
i^d // an index that spans d modes
i&d // an index that spans all but d modes
i/n // an index that spans degree()/n modes
~~~
__tabsMid
~~~ python
i^d # an index that spans d modes
i**d # an index that spans d modes
i&d # an index that spans all but d modes
i/n # an index that spans degree()/n modes
~~~
__tabsEnd
As it might be difficult to tell how many modes the indices on the left hand side of the equation will span, it is not necessary
to declare them as multi-indices.
__tabsStart
~~~ cpp
// contract the last mode of A with the first of B
C(i,k) = A(i&1, j) * B(j, k&1);
// C is now of degree A.degree()+B.degree()-2
~~~
__tabsMid
~~~ python
# contract the last mode of A with the first of B
C(i,k) << A(i&1, j) * B(j, k&1)
# C is now of degree A.degree()+B.degree()-2
~~~
__tabsEnd
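As a plain-Python sketch (illustration only, not xerus code) of what `C(i,k) << A(i&1, j) * B(j, k&1)` computes when `A` is 2x3 and `B` is 3x2x2: the last mode of `A` is summed against the first mode of `B`, and all remaining modes survive.

~~~ python
# C(i, k1, k2) = sum_j A(i, j) * B(j, k1, k2)
A = [[1, 0, 2], [0, 1, 0]]                          # 2x3
B = [[[j + 1, 0], [0, j + 1]] for j in range(3)]    # 3x2x2
C = [[[sum(A[i][j] * B[j][k1][k2] for j in range(3))
       for k2 in range(2)] for k1 in range(2)] for i in range(2)]
# C is 2x2x2, i.e. A.degree() + B.degree() - 2 == 3 modes
~~~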
The division `i/n` is useful, for example, to write equations with high-dimensional operators such as [TT-Operators](tttensors),
for which the indices are ordered by default such that the application of the operator can be written in analogy to
matrix-vector products as:
__tabsStart
~~~ cpp
// assumes e.g.: TTTensors u,v; TTOperator A; of any compatible degree
u(i&0) = A(i/2, j/2) * v(j&0);
~~~
__tabsMid
~~~ python
# assumes e.g.: TTTensors u,v; TTOperator A; of any compatible degree
u(i&0) << A(i/2, j/2) * v(j&0)
~~~
__tabsEnd
## Blockwise Construction of Tensors
A common use for indexed expressions is to construct tensors in a blockwise fashion. In the following example, the tensor `comp` is
known whenever its first index is fixed: either by numerical construction (`A` and `B`) or because it can be shown
mathematically to equal a well-known tensor (here the identity matrix). The full tensor can thus be constructed
with the help of the named constructors of `xerus::Tensor` (see the [Tensor tutorial](tensor)) as follows.
__tabsStart
~~~ cpp
// construct comp s.th.:
// comp(0, :,:) = A+identity
// comp(1, :,:) = B+identity
// comp(2, :,:) = identity
comp(i, j, k) =
xerus::Tensor::dirac({3}, 0)(i) * A(j, k)
+ xerus::Tensor::dirac({3}, 1)(i) * B(j, k)
+ xerus::Tensor::ones({3})(i) * xerus::Tensor::identity({64,64})(j, k);
~~~
__tabsMid
~~~ python
# construct comp s.th.:
# comp(0, :,:) = A+identity
# comp(1, :,:) = B+identity
# comp(2, :,:) = identity
comp(i, j, k) << \
xerus.Tensor.dirac([3], 0)(i) * A(j, k) \
+ xerus.Tensor.dirac([3], 1)(i) * B(j, k) \
+ xerus.Tensor.ones([3])(i) * xerus.Tensor.identity([64,64])(j, k)
~~~
__tabsEnd
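The same blockwise construction can be sketched in plain Python (illustration only, with small 2x2 blocks instead of 64x64, not xerus code): the dirac factors select the slice of the first index, and the ones tensor adds the identity to every slice.

~~~ python
# comp(i,j,k) = dirac_0(i)*A(j,k) + dirac_1(i)*B(j,k) + ones(i)*identity(j,k)
n = 2
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
ident = [[1 if j == k else 0 for k in range(n)] for j in range(n)]
comp = [[[(A[j][k] if i == 0 else 0)
          + (B[j][k] if i == 1 else 0)
          + ident[j][k]
          for k in range(n)] for j in range(n)] for i in range(3)]
# comp[0] == A + identity, comp[1] == B + identity, comp[2] == identity
~~~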
@@ -11,7 +11,7 @@ __tabsInit
The basic building block of this library and of all tensor network methods is the tensor, represented by the class `xerus::Tensor`.
To simplify working with these objects, `xerus` contains a number of helper functions that allow the quick creation and
modification of sparse and dense tensors. In the following we list the most important ones, but we advise you to also read
the tutorials on [indices and equations](\ref md_indices) and [decompositions]() to have the full toolset with which to work on individual
the tutorials on [indices and equations](indices) and [decompositions](decompositions) to have the full toolset with which to work on individual
tensors.
## Creation of Tensors
@@ -22,9 +22,7 @@ __tabsStart
// creates a degree 0 tensor
A = xerus::Tensor();
~~~
__tabsMid
~~~ python
# creates a degree 0 tensor
A = xerus.Tensor()
@@ -42,11 +40,9 @@ B = xerus::Tensor(3);
// creates a sparse 2x2x2 tensor without any entries
C = xerus::Tensor({2,2,2});
~~~
__tabsMid
~~~ python
# creates a 1x1x1 tensor with entry 0
B = xerus.Tensor(3)
@@ -94,6 +90,7 @@ xerus::Tensor::kronecker({3,4,3,4});
xerus::Tensor::dirac({2,2,2}, {1,1,1});
// a 4x4x4 tensor with i.i.d. Gaussian random values
xerus::Tensor::random({4,4,4});
@@ -78,3 +78,19 @@ input#tab2:checked ~ #tabs label[for=tab2] {
background: #f1f1f1;
color: #444;
}
input#tab1:checked ~ div #tabs #content #content1,
input#tab2:checked ~ div #tabs #content #content2 {
opacity: 1;
z-index: 100;
}
input#tab1:checked ~ div #tabs label[for=tab1] {
background: #f1f1f1;
color: #444;
}
input#tab2:checked ~ div #tabs label[for=tab2] {
background: #f1f1f1;
color: #444;
}