Commit 573be0c8 authored by Ben Huber

documentation todos

parent bd082c76
Pipeline #733 passed with stages in 8 minutes and 40 seconds
......@@ -31,6 +31,9 @@ this overhead is to write your application in c++ instead of python. Most instructions are
similar in c++, so a transition might be simpler than you think. Simply check out the rest of the tutorials to compare the code
snippets.
This transition is particularly useful if you wrote your own numerical algorithms in python. If most of the runtime is spent
inside one of `xerus`'s own algorithms like the `ALS`, it is likely not worth much.
## Compiling Xerus with High Optimizations
By default the library already compiles with high optimization settings (corresponding basically to `-O3`), as there is rarely
any reason to use lower settings for numerical code. If you want to spend a significant amount of CPU hours in numerical code
......@@ -58,16 +61,25 @@ but can become significant when very small tensors are being used and the time f
In such cases it can be useful to replace such equations (especially ones that are as simple as above) with the explicit statement
of contractions and reshuffles. For the above equation that would simply be
~~~ cpp
// equivalent to A(i,k) = B(i,j^2)*C(j^2,k)
contract(A, B, false, C, false, 2);
~~~
i.e., read as: contract two tensors and store the result in A; left-hand side B, not transposed; right-hand side C, not transposed; contract two modes.
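For illustration, a complete sketch comparing the two notations might look as follows (the dimensions and tensor contents are chosen arbitrarily, not taken from the text above):
~~~ cpp
// a minimal sketch (dimensions chosen arbitrarily): the same contraction
// written as an indexed expression and as a direct contract() call
xerus::Tensor B = xerus::Tensor::random({16, 8, 8});
xerus::Tensor C = xerus::Tensor::random({8, 8, 16});
xerus::Tensor A;

xerus::Index i, j, k;
A(i, k) = B(i, j^2) * C(j^2, k);            // indexed expression

xerus::contract(A, B, false, C, false, 2);  // same contraction, no index bookkeeping
~~~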
If it is necessary to reshuffle a tensor to be able to contract it in such a way, this can be done
with the `reshuffle` function.
~~~ cpp
// equivalent to: A(i,j,k) = B(i,k,j)
reshuffle(A, B, {0,2,1});
~~~
Decompositions similarly have their lower-level calls. They require properly reshuffled tensors and you have to provide a
`splitPosition`, i.e. the number of modes that will be represented by the left-hand side of the result.
~~~ cpp
// equivalent to: (Q(i,j,r), R(r,k)) = xerus::QR(A(i,j,k))
calculate_qr(Q, R, A, 2);
~~~
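As a sketch with arbitrary dimensions (the input tensor here is illustrative, not from the text above):
~~~ cpp
// a minimal sketch (dimensions chosen arbitrarily)
xerus::Tensor A = xerus::Tensor::random({4, 5, 6});
xerus::Tensor Q, R;

// split after the first two modes: Q gets dimensions {4, 5, r}, R gets {r, 6}
xerus::calculate_qr(Q, R, A, 2);
~~~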
It is our opinion that code written with these functions instead of indexed expressions is often much harder to understand,
and the speedup is typically small... but just in case you really want to, you now have the option to use them.
---
layout: post
title: "General Tensor Networks"
date: 2000-10-30
topic: "Advanced Usage"
section: "Documentation"
---
__tabsInit
# General Tensor Networks
__warnStart
This Chapter is still a **work in progress**.
Currently missing are sections on:
* creating TN
* constraints
* (full) contractions
* nodes and links
* round_edge, move_core
* fix_mode, remove_slate, resize_mode
* sanitize
__warnEnd
......@@ -7,3 +7,14 @@ section: "Documentation"
---
__tabsInit
# TTTangentVectors and Riemannian Algorithms
__warnStart
This Chapter is still a **work in progress**.
Currently missing are sections on:
* using steepestDescent and CG
* creating TTTangentVectors
* retractions
* vector transport
__warnEnd
......@@ -7,3 +7,13 @@ section: "Documentation"
---
__tabsInit
# Algorithms for Tensor Completion and Recovery
__warnStart
This Chapter is still a **work in progress**.
Currently missing are sections on:
* Singlepoint and rankOneMeasurementSets
* using IHT and ADF to solve completion / recovery problems
__warnEnd
......@@ -7,3 +7,15 @@ section: "Documentation"
---
__tabsInit
# Alternating Algorithms and Performance Data
__warnStart
This Chapter is still a **work in progress**.
Currently missing are sections on:
* introduction to alternating algorithms
* using ALS(_SPD), ASD, DMRG to solve least squares problems
* changing algorithm options via new algorithm objects
* using PerformanceData to log and/or output the progress of an algorithm
* defining your own local solvers
__warnEnd
......@@ -7,3 +7,25 @@ section: "Documentation"
---
__tabsInit
# Tensor Train / MPS Tensors and Operators
__warnStart
This Chapter is still a **work in progress**.
Currently missing are sections on:
* introduction with short explanation of the TT format
* creation of TT Tensors
* casting from / to Tensor
* operator[] for single and multiple positions (-> SinglepointMeasurementSets or cast)
* rounding and ranks
* canonicalization / core positions
* accessing and changing components
* fix_mode, remove_slate, resize_mode
* entrywise and dyadic products
* TTOperators, .transpose
* advanced: lazy evaluation, access to nodes and require_correct_format()
__warnEnd
......@@ -63,9 +63,9 @@and $r$ the rank of the input we have the following resulting objects:
<tr>
<th style="border-top: none;">Decomposition</th>
<th style="border-top: none; border-left: 2px solid #ddd;">Property</th>
<th style="border-top: none;">Dimension</th>
<th style="border-top: none;">Dimensions</th>
<th style="border-top: none; border-left: 2px solid #ddd;">Property</th>
<th style="border-top: none;">Dimension</th>
<th style="border-top: none;">Dimensions</th>
</tr>
</thead>
<tbody>
......
......@@ -5,6 +5,12 @@ section: "Documentation"
# Documentation
__warnStart
This Documentation is still a **work in progress**. Several sections are currently missing but will hopefully be added shortly.
__warnEnd
This is the semi-complete documentation of the `xerus` library. It does not provide you with precise function declarations or
class hierarchies (check out the [doxygen documentation](doxygen) for those) but instead focuses on small working code snippets
to demonstrate `xerus`'s capabilities.
......
......@@ -189,8 +189,6 @@ namespace xerus {
/**
* @brief Returns a new copy of the network.
*/
virtual TensorNetwork* get_copy() const;
......@@ -273,7 +271,7 @@ namespace xerus {
* @brief Read the value at a specific position.
* @details This allows the efficient calculation of a single entry of the TensorNetwork, by first fixing the external dimensions
* and then completely contracting the network. Do NOT use this as a manual cast to Tensor (there is an explicit cast for that).
* @param _positions the position of the entry to be read assuming a single node.
* @returns the calculated value (NO reference)
*/
value_t operator[](const std::vector<size_t>& _positions) const;
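/* Usage sketch (illustrative, not part of the original header; the TTTensor
 * construction is an assumption): reading one entry without a full cast.
 *   xerus::TTTensor T = xerus::TTTensor::random({10, 10, 10}, {4, 4});
 *   xerus::value_t e = T[{2, 3, 5}];  // fixes external dimensions, then contracts
 */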
......@@ -285,7 +283,6 @@ namespace xerus {
* @brief Performs the entrywise multiplication with a constant @a _factor.
* @details Internally this only results in a change in the global factor.
* @param _factor the factor.
*/
virtual void operator*=(const value_t _factor);
......@@ -294,7 +291,6 @@ namespace xerus {
* @brief Performs the entrywise division by a constant @a _divisor.
* @details Internally this only results in a change in the global factor.
* @param _divisor the divisor.
*/
virtual void operator/=(const value_t _divisor);
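/* Usage sketch (illustrative, not part of the original header; the TTTensor
 * construction is an assumption): both operators only rescale the global
 * factor, no component tensor is rewritten.
 *   xerus::TTTensor T = xerus::TTTensor::random({10, 10, 10}, {4, 4});
 *   T *= 2.0;  // every entry multiplied by 2
 *   T /= 4.0;  // every entry divided by 4
 */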
......