# Transferring ownership of raw data directly is not possible from within Python.
...
...
@@ -155,22 +175,28 @@ T = xerus.Tensor.from_ndarray(numpy.eye(2))
# alternatively, the function also accepts Python's native nested lists
U = xerus.Tensor.from_ndarray([[1,0],[0,1]])
~~~
__tabsEnd
Last but not least, it is possible to populate the entries of a tensor by explicitly accessing them. During this process, an
initially sparse zero tensor, as created by the default constructors, will automatically be converted to a dense object as soon
as `xerus` deems this to be preferable.
__tabsStart
~~~ cpp
// creating an identity matrix by explicitly setting non-zero entries
xerus::Tensor V({2,2});
V[{0,0}] = 1.0; // equivalently: V[0] = 1.0;
V[{1,1}] = 1.0; // equivalently: V[3] = 1.0;
~~~
__tabsMid
~~~ python
# creating an identity matrix by explicitly setting non-zero entries
V = xerus.Tensor([2,2])
V[[0,0]] = 1.0  # equivalently: V[0] = 1.0
V[[1,1]] = 1.0  # equivalently: V[3] = 1.0
~~~
__tabsEnd
Explicitly constructing a tensor in a similar blockwise fashion will be covered in the tutorial on [indices and equations](\ref md_indices).
...
...
@@ -180,23 +206,33 @@ seems to be preferable. For this it does not matter whether the number of entrie
contraction of tensors or as the result of decompositions.
This behaviour can be modified by changing the global setting:
__tabsStart
~~~ cpp
// tell xerus to convert sparse tensors to dense if 1 in 4 entries is non-zero
xerus::Tensor::sparsityFactor = 4;
~~~
__tabsMid
~~~ python
# tell xerus to convert sparse tensors to dense if 1 in 4 entries is non-zero
xerus.Tensor.sparsityFactor = 4
~~~
__tabsEnd
In particular, setting the [sparsityFactor](\ref xerus::Tensor::sparsityFactor) to 0 will disable this feature:
__tabsStart
~~~ cpp
// stop xerus from automatically converting sparse tensors to dense
xerus::Tensor::sparsityFactor = 0;
~~~
__tabsMid
~~~ python
# stop xerus from automatically converting sparse tensors to dense
xerus.Tensor.sparsityFactor = 0
~~~
__tabsEnd
Note, though, that calculations with effectively dense Tensors that are stored in a sparse representation are typically much slower than
in the dense representation. You should thus manually convert overly full sparse Tensors to the dense representation.
...
...
@@ -208,6 +244,8 @@ member functions [.use_dense_representation()](\ref xerus::Tensor::use_dense_rep
To make more informed decisions about whether such a conversion might be useful, the tensor objects can be queried for the number of
defined entries with [.sparsity()](\ref xerus::Tensor::sparsity()) or for the number of non-zero entries with [.count_non_zero_entries()](\ref xerus::Tensor::count_non_zero_entries()).
@@ -283,6 +330,7 @@ A = xerus.Tensor.identity([100,100])
print(xerus.one_norm(A), xerus.frob_norm(A))
# expected output: "100 10"
~~~
__tabsEnd
To obtain a human-readable string representation of the tensor, [.to_string()](\ref xerus::Tensor::to_string()) can be used.
...
...
@@ -291,6 +339,8 @@ reconstruct the original tensor from this output.
Storing Tensors to files such that they can be reconstructed exactly is instead possible with [save_to_file()](\ref xerus::misc::save_to_file())
and [load_from_file()](\ref xerus::misc::load_from_file()) respectively.