in particular, setting the [sparsityFactor](__doxyref(xerus::Tensor::sparsityFactor)) to 0 will disable this feature.
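As a minimal sketch (assuming, as linked above, that `sparsityFactor` is a static member of `xerus::Tensor`), disabling the automatic conversion might look like this:

~~~ cpp
// Sketch: disable the automatic sparse -> dense conversion globally
// by setting the static sparsityFactor to 0.
xerus::Tensor::sparsityFactor = 0;
~~~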
__tabsStart
~~~ cpp
...
...
in dense representation. You should thus manually convert overly full sparse Tensors.
To do this there are a number of ways to interact with the representation of `xerus::Tensor` objects. Above we already saw that
the constructors can be used to explicitly construct sparse (default behaviour) or dense tensors. For already existing objects
you can use the member functions [.is_sparse()](__doxyref(xerus::Tensor::is_sparse)) and [.is_dense()](__doxyref(xerus::Tensor::is_dense)) to query their representation. To change the representation, call the
member functions [.use_dense_representation()](__doxyref(xerus::Tensor::use_dense_representation)) or [.use_sparse_representation()](__doxyref(xerus::Tensor::use_sparse_representation)) to convert the tensor in place, or [.dense_copy()](__doxyref(xerus::Tensor::dense_copy)) or
[.sparse_copy()](__doxyref(xerus::Tensor::sparse_copy)) to obtain a new tensor object with the desired representation.
To make more informed decisions about whether such a conversion might be useful, the tensor objects can be queried for the number of
defined entries with [.sparsity()](__doxyref(xerus::Tensor::sparsity)) or for the number of non-zero entries with [.count_non_zero_entries()](__doxyref(xerus::Tensor::count_non_zero_entries)).
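The queries and conversions above can be combined as in the following sketch (the tensor sizes and the single entry are arbitrary illustrative choices):

~~~ cpp
xerus::Tensor T({1000, 1000});          // sparse by default
T[{5, 7}] = 1.0;                        // define a single entry

bool sparse = T.is_sparse();            // query the current representation
size_t defined = T.sparsity();          // number of defined entries
size_t nonZero = T.count_non_zero_entries();

T.use_dense_representation();           // convert in place
xerus::Tensor S = T.sparse_copy();      // new object in sparse representation
~~~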
__tabsStart
~~~ cpp
...
...
__tabsEnd
## Output and Storing
Probably the most common queries to the Tensor class are its degree with [.degree()](__doxyref(xerus::Tensor::degree))
as well as its precise dimensions by accessing [.dimensions](__doxyref(xerus::Tensor::dimensions)).
To obtain a human readable string representation of the tensor, [.to_string()](__doxyref(xerus::Tensor::to_string)) can be used.
Note that it is meant purely for debugging purposes, in particular for smaller objects; it is not generally possible to
reconstruct the original tensor from this output.
Storing Tensors in files such that they can be reconstructed exactly is instead possible with [save_to_file()](__doxyref(xerus::misc::save_to_file))
and [load_from_file()](__doxyref(xerus::misc::load_from_file)) respectively.
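A short sketch of these queries and of the file round-trip (the dimensions and filename are arbitrary; the templated `load_from_file<xerus::Tensor>` form is an assumption about the API):

~~~ cpp
xerus::Tensor A = xerus::Tensor::random({3, 4, 5});
size_t d = A.degree();                    // 3
size_t n = A.dimensions[1];               // 4
std::cout << A.to_string() << std::endl;  // debugging output only

// exact, reconstructible serialization to disk
xerus::misc::save_to_file(A, "A.tensor");
xerus::Tensor B = xerus::misc::load_from_file<xerus::Tensor>("A.tensor");
~~~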
__tabsStart
~~~ cpp
...
...
__tabsEnd
## Operators and Modifications
We have already seen the most basic method of modifying a tensor via the [operator[]](__doxyref(xerus::Tensor::operator[])). With it
and the index notation presented in the [indices and equations](indices) tutorial, most desired manipulations can be
represented. Some of them would still be cumbersome though, so `xerus` includes several helper functions to make your life easier.
The purpose of this section is to present the most important ones.
...
...
__tabsEnd
This operation is obviously only possible when the total number of entries remains unchanged.
If you want to change the dimensions of a tensor such that the total size changes, you have to specify how to do this. `xerus`
provides three functions to help you in such a case: [.resize_mode()](__doxyref(xerus::Tensor::resize_mode)) changes the dimension
of a single mode by adding zero slates or removing existing slates at a given position; [.fix_mode()](__doxyref(xerus::Tensor::fix_mode))
reduces the tensor to an object of degree d-1 that corresponds to the slate, selected in the call to the function; finally
[.remove_slate()](__doxyref(xerus::Tensor::remove_slate)) is a simplified version of `.resize_mode()` that removes a single slate
from the tensor, reducing the dimension of the specified mode by one.
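A sketch of all three operations in sequence (mode indices and sizes are arbitrary; the third argument of `.resize_mode()` is assumed to be the position at which slates are inserted or removed):

~~~ cpp
xerus::Tensor A = xerus::Tensor::random({2, 3, 4});

A.resize_mode(1, 5, 0);   // grow mode 1 from 3 to 5 by inserting zero slates; dims {2, 5, 4}
A.fix_mode(0, 1);         // fix mode 0 to slate 1; degree drops from 3 to 2, dims {5, 4}
A.remove_slate(0, 0);     // remove slate 0 of mode 0; dims {4, 4}
~~~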
__tabsStart
...
...
print(A)
__tabsEnd
At the moment the Hadamard product is not available in an indexed notation (due to a lack of overloadable operators). Its
behaviour can instead be achieved with [entrywise_product()](__doxyref(xerus::misc::entrywise_product)).
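For instance (a small sketch; shapes are arbitrary and the namespace follows the link above):

~~~ cpp
xerus::Tensor A = xerus::Tensor::random({2, 2});
xerus::Tensor B = xerus::Tensor::random({2, 2});

// Hadamard product: C[{i, j}] == A[{i, j}] * B[{i, j}]
xerus::Tensor C = xerus::misc::entrywise_product(A, B);
~~~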
__tabsStart
~~~ cpp
...
...
The average user of `xerus` does not need to worry about this internal mechanism. You may however want direct access
to the underlying data structures, e.g. to call `blas` or `lapack` routines not supported by `xerus`, or to convert objects from
other libraries to and from `xerus::Tensor` objects. If you do, make sure to check out the documentation
for the following functions (C++ only):
* [.has_factor()](__doxyref(xerus::Tensor::has_factor)) and [.apply_factor()](__doxyref(xerus::Tensor::apply_factor))