Commit 426a48e2 authored by Ben Huber

TTTensor documentation mostly done

parent 8ba035b2
Pipeline #758 passed with stages in 8 minutes and 32 seconds
......@@ -232,7 +232,7 @@ clean:
rm -fr build
-rm -f $(TEST_NAME)
-rm -f include/xerus.h.gch
-rm -r doc/html
make -C doc clean
......
......@@ -16,7 +16,9 @@ doc: .FORCE parseDoxytags findDoxytag
jekyll build --source jekyll/ --destination html/
clean:
-rm -r html
-rm -rf html
-rm -f parseDoxytags findDoxytag
-rm -f xerus.tags xerus.tagfile
serve: .FORCE parseDoxytags findDoxytag
-mkdir html
......
......@@ -25,10 +25,8 @@ int main() {
std::cout << "ttA ranks: " << ttA.ranks() << std::endl;
// the right hand side of the equation both as Tensor and in (Q)TT format
auto b = xerus::Tensor::ones({512});
b.reinterpret_dimensions(std::vector<size_t>(9, 2));
xerus::TTTensor ttb(b);
auto b = xerus::Tensor::ones(std::vector<size_t>(9, 2));
auto ttb = xerus::TTTensor::ones(b.dimensions);
// construct a random initial guess of rank 3 for the ALS algorithm
xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3));
......
......@@ -23,9 +23,8 @@ ttA = xe.TTOperator(A)
print("ttA ranks:", ttA.ranks())
# the right hand side of the equation both as Tensor and in (Q)TT format
b = xe.Tensor.ones([512])
b.reinterpret_dimensions([2,]*9)
ttb = xe.TTTensor(b)
b = xe.Tensor.ones([2,]*9)
ttb = xe.TTTensor.ones(b.dimensions)
# construct a random initial guess of rank 3 for the ALS algorithm
ttx = xe.TTTensor.random([2,]*9, [3,]*8)
......
......@@ -20,6 +20,8 @@ class TabsConverter < Converter
.gsub('<p>__dangerEnd</p>', "</div>")
.gsub('<p>__warnStart</p>', "<div class=\"alert alert-warning\">")
.gsub('<p>__warnEnd</p>', "</div>")
.gsub('<p>__infoStart</p>', "<div class=\"alert alert-info\">")
.gsub('<p>__infoEnd</p>', "</div>")
.gsub('__breakFix1</a></p>', "")
.gsub('<p>__breakFix2', "</a>")
.gsub('__version', %x( git describe --tags --always --abbrev=0 ) )
......
......@@ -90,22 +90,18 @@ print("ttA ranks:", ttA.ranks())
~~~
__tabsEnd
For the right-hand-side we perform similar operations to obtain a QTT decomposed vector $ b_i = 1 \forall i $.
As the generating function needs no index information, we create a `[]()->double` lambda function:
For the right-hand side we will take a simple tensor that is equal to 1 at every position. As this is a commonly used tensor,
we can simply use the named constructor provided by `xerus`.
__tabsStart
~~~ cpp
auto b = xerus::Tensor::ones({512});
b.reinterpret_dimensions(std::vector<size_t>(9, 2));
xerus::TTTensor ttb(b);
auto b = xerus::Tensor::ones(std::vector<size_t>(9, 2));
auto ttb = xerus::TTTensor::ones(b.dimensions);
~~~
__tabsMid
~~~ python
b = xerus.Tensor.ones([512])
b.reinterpret_dimensions([2,]*9)
ttb = xerus.TTTensor(b)
b = xerus.Tensor.ones([2,]*9)
ttb = xerus.TTTensor.ones(b.dimensions)
~~~
__tabsEnd
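The change above replaces the reshape-based construction with a direct one. As a plain NumPy analogue (an illustration only, not the xerus API), both routes yield the same order-9 hypercube of ones:

```python
import numpy as np

# Old tutorial approach: a flat vector of 512 ones, reinterpreted as a
# 2x2x...x2 (order-9) tensor -- the analogue of reinterpret_dimensions.
b_old = np.ones(512).reshape([2] * 9)

# New tutorial approach: create the tensor with the QTT dimensions [2]*9 directly.
b_new = np.ones([2] * 9)

print(b_old.shape == b_new.shape)    # the two constructions agree in shape
print(np.array_equal(b_old, b_new))  # ...and entrywise
```

In xerus the second route additionally avoids converting a full `Tensor` into a `TTTensor`, since `TTTensor::ones` builds the (rank-1) TT representation directly.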
......
This diff is collapsed.
......@@ -33,6 +33,8 @@ To build the python bindings you will furthermore need the python development he
dnf install python2-numpy python-devel
~~~
To build the documentation (i.e. this homepage) locally you will need `jekyll` and `doxygen`.
After downloading the source it is necessary to set a number of options that are specific to your system and needs. All following build steps assume that these
options are set in the `config.mk` file. Reasonable default values can be found in the `config.mk.default` file.
......
......@@ -23,7 +23,7 @@ The key features include:
* Full python bindings with very similar syntax for an easy transition to and from C++.
* Calculation with tensors of arbitrary orders using an intuitive Einstein-like notation `A(i,j) = B(i,k,l) * C(k,j,l);`.
* Full implementation of the Tensor-Train decomposition (MPS) with all necessary capabilities (including algorithms like ALS, ADF and CG).
* Lazy evaluation of multiple tensor contractions featuring heuristics to find the most effective contraction order.
* Lazy evaluation of (multiple) tensor contractions featuring heuristics to automatically find efficient contraction orders.
* Direct integration of `blas` and `lapack` as high-performance linear algebra backends.
* Fast sparse tensor calculation by usage of the `suiteSparse` sparse matrix capabilities.
* Capabilities to handle arbitrary Tensor Networks.
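The Einstein-like notation in the list above corresponds to a standard tensor contraction over the shared indices. As a rough NumPy analogue (an illustration only, not the xerus API; dimensions chosen arbitrarily):

```python
import numpy as np

# A(i,j) = B(i,k,l) * C(k,j,l): sum over the repeated indices k and l.
rng = np.random.default_rng(0)
B = rng.random((4, 5, 6))   # indices (i, k, l)
C = rng.random((5, 3, 6))   # indices (k, j, l)

A = np.einsum('ikl,kjl->ij', B, C)
print(A.shape)  # (4, 3)
```

In xerus the contraction order for such expressions is not fixed by the notation; the lazy-evaluation machinery picks it heuristically, which is the point of the bullet above.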
......
......@@ -159,7 +159,7 @@ namespace xerus {
* @brief Randomly constructs a TTNetwork with the given dimensions and ranks limited by the given rank.
* @details The entries of the component tensors are sampled independently using the provided random generator and distribution.
* @param _dimensions the dimensions of the TTNetwork to be created.
* @param _ranks the maximal allowed rank.
* @param _rank the maximal allowed rank.
* @param _rnd the random engine to be passed to the constructor of the component tensors.
* @param _dist the random distribution to be passed to the constructor of the component tensors.
*/
......