Commit d387576c authored by Ben Huber

c++ and python tabs in documentation

parent f11aa5e6
Pipeline #717 passed with stages in 8 minutes and 5 seconds
......@@ -16,9 +16,13 @@
!*.sh
!*.rb
!*.md
!*.xml
!*.dox
!*.css
!*.js
# ...even if they are in subdirectories
!*/
......
......@@ -15,6 +15,6 @@ permalink: /:title
highlighter: rouge
# highlighter: coderay
markdown: kramdown
# gems: ['jekyll-paginate']
# gems: ['TabsConverter']
# exclude: ['README.md', 'LICENSE']
module Jekyll
  # Converts the __tabsInit / __tabsStart / __tabsMid / __tabsEnd markers used in the
  # documentation pages into the radio-button and label markup expected by the tab CSS.
  class TabsConverter < Converter
    safe true
    priority :low
    @@ctr = 0  # page counter, currently unused

    def matches(ext)
      ext =~ /^\.md$/i
    end

    def output_ext(ext)
      ".html"
    end

    def convert(content)
      @@ctr += 1
      content.gsub('<p>__tabsInit</p>', "<input id=\"tab1\" type=\"radio\" checked><input id=\"tab2\" type=\"radio\">")
             .gsub('<p>__tabsStart</p>', "<div id=\"tabs\"><label for=\"tab1\">C++</label><label for=\"tab2\">Python</label><div id=\"content\"><section id=\"content1\">")
             .gsub('<p>__tabsMid</p>', "</section><section id=\"content2\">")
             .gsub('<p>__tabsEnd</p>', "</section></div></div>")
    end
  end
end
......@@ -5,7 +5,7 @@ date: 2000-11-30 16:25:06 +0000
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# The Tensor Class
The basic building block of this library and of all Tensor Network methods is the Tensor, represented by the class `xerus::Tensor`.
......@@ -17,19 +17,25 @@ tensors.
## Creation of Tensors
The most basic tensor can be created with the empty constructor:
__tabsStart
~~~ cpp
// creates a degree 0 tensor
A = xerus::Tensor();
~~~
__tabsMid
~~~ python
# creates a degree 0 tensor
A = xerus.Tensor()
~~~
__tabsEnd
It is of degree 0 and represents the single number 0. Similarly, the constructors that take either the degree or a vector of
dimensions as input create (sparse) tensors that are equal to 0 everywhere:
__tabsStart
~~~ cpp
// creates a 1x1x1 tensor with entry 0
B = xerus::Tensor(3);
......@@ -37,6 +43,9 @@ B = xerus::Tensor(3);
// creates a sparse 2x2x2 tensor without any entries
C = xerus::Tensor({2,2,2});
~~~
<br />
__tabsMid
~~~ python
# creates a 1x1x1 tensor with entry 0
......@@ -46,9 +55,11 @@ B = xerus.Tensor(3)
C = xerus.Tensor([2,2,2])
# equivalently: xerus.Tensor(dim=[2,2,2])
~~~
__tabsEnd
The latter of these can be forced to create a dense tensor instead, which can either be initialized to 0 or left uninitialized:
__tabsStart
~~~ cpp
// creates a dense 2x2x2 tensor with all entries set to 0
D = xerus::Tensor({2,2,2}, xerus::Tensor::Representation::Dense);
......@@ -56,6 +67,7 @@ D = xerus::Tensor({2,2,2}, xerus::Tensor::Representation::Dense);
// creates a dense 2x2x2 tensor with uninitialized entries
E = xerus::Tensor({2,2,2}, xerus::Tensor::Representation::Dense, xerus::Tensor::Initialisation::None);
~~~
__tabsMid
~~~ python
# creates a dense 2x2x2 tensor with all entries set to 0
D = xerus.Tensor(dim=[2,2,2], repr=xerus.Tensor.Representation.Dense)
......@@ -63,8 +75,11 @@ D = xerus.Tensor(dim=[2,2,2], repr=xerus.Tensor.Representation.Dense)
# creates a dense 2x2x2 tensor with uninitialized entries
E = xerus.Tensor(dim=[2,2,2], repr=xerus.Tensor.Representation.Dense, init=xerus.Tensor.Initialisation.None)
~~~
__tabsEnd
Other commonly used tensors (apart from the 0 tensor) are available through named constructors:
__tabsStart
~~~ cpp
// a 2x3x4 tensor with all entries = 1
xerus::Tensor::ones({2,3,4});
......@@ -88,6 +103,7 @@ xerus::Tensor::random({4,4,4}, 10);
// a (4x4) x (4x4) random orthogonal operator drawn according to the Haar measure
xerus::Tensor::random_orthogonal({4,4},{4,4});
~~~
__tabsMid
~~~ python
# a 2x3x4 tensor with all entries = 1
xerus.Tensor.ones([2,3,4])
......@@ -112,10 +128,13 @@ xerus.Tensor.random([4,4,4], n=10)
# a (4x4) x (4x4) random orthogonal operator drawn according to the Haar measure
xerus.Tensor.random_orthogonal([4,4],[4,4])
~~~
__tabsEnd
If the entries of the tensor should be calculated externally, it is possible in C++ to either pass the raw data directly (as
`std::unique_ptr<double>` or `std::shared_ptr<double>`; see the section 'Ownership of Data and Advanced Usage' for the latter!)
or use a callback / lambda function to populate the entries:
__tabsStart
~~~ cpp
std::unique_ptr<double> ptr = foo();
// transfer ownership of the data to the Tensor object of size 2x2x2
......@@ -136,6 +155,7 @@ H = xerus::Tensor({16,16,16}, 16, [](size_t num, size_t max) -> std::pair<size_t
return std::pair<size_t,double>(num*17, double(num)/double(max));
});
~~~
__tabsMid
~~~ python
# Transferring ownership of raw data directly is not possible from within Python.
......@@ -155,22 +175,28 @@ T = xerus.Tensor.from_ndarray(numpy.eye(2))
# alternatively the function also accepts Python's native lists
U = xerus.Tensor.from_ndarray([[1,0], [0,1]])
~~~
__tabsEnd
Last but not least, it is possible to populate the entries of a tensor by explicitly accessing them. During this process, an
initially sparse 0 tensor, as created by the default constructors, will automatically be converted to a dense object as soon
as `xerus` deems this preferable.
__tabsStart
~~~ cpp
// creating an identity matrix by explicitly setting non-zero entries
V = xerus::Tensor({2,2});
V[{0,0}] = 1.0; // equivalently: V[0] = 1.0;
V[{1,1}] = 1.0; // equivalently: V[3] = 1.0;
~~~
__tabsMid
~~~ python
# creating an identity matrix by explicitly setting non-zero entries
V = xerus.Tensor([2,2])
V[[0,0]] = 1.0 # equivalently: V[0] = 1.0
V[[1,1]] = 1.0 # equivalently: V[3] = 1.0
~~~
__tabsEnd
Explicitly constructing a tensor in a similar blockwise fashion will be covered in the tutorial on [indices and equations](\ref md_indices).
......@@ -180,23 +206,33 @@ seems to be preferable. For this it does not matter whether the number of entrie
contraction of tensors or as the result of decompositions.
This behaviour can be modified by changing the global setting
__tabsStart
~~~ cpp
// tell xerus to convert sparse tensors to dense if 1 in 4 entries are non-zero
xerus::Tensor::sparsityFactor = 4;
~~~
__tabsMid
~~~ python
# tell xerus to convert sparse tensors to dense if 1 in 4 entries are non-zero
xerus.Tensor.sparsityFactor = 4
~~~
__tabsEnd
In particular, setting the [sparsityFactor](\ref xerus::Tensor::sparsityFactor) to 0 will disable this feature.
__tabsStart
~~~ cpp
// stop xerus from automatically converting sparse tensors to dense
xerus::Tensor::sparsityFactor = 0;
~~~
__tabsMid
~~~ python
# stop xerus from automatically converting sparse tensors to dense
xerus.Tensor.sparsityFactor = 0
~~~
__tabsEnd
Note, though, that calculations with non-sparse Tensors that are stored in a sparse representation are typically much slower than
in dense representation. You should thus manually convert overly full sparse Tensors to the dense representation.
......@@ -208,6 +244,8 @@ member functions [.use_dense_representation()](\ref xerus::Tensor::use_dense_rep
To make more informed decisions about whether a conversion might be useful, the tensor objects can be queried for the number of
defined entries with [.sparsity()](\ref xerus::Tensor::sparsity()) or for the number of non-zero entries with [.count_non_zero_entries()](\ref xerus::Tensor::count_non_zero_entries()).
__tabsStart
~~~ cpp
// create a sparse tensor with 100 random entries
W = xerus::Tensor::random({100,100}, 100);
......@@ -224,6 +262,7 @@ W.use_dense_representation();
// query its sparsity. likely output: "10000 100"
std::cout << W.sparsity() << ' ' << W.count_non_zero_entries() << std::endl;
~~~
__tabsMid
~~~ python
# create a sparse tensor with 100 random entries
W = xerus.Tensor.random(dim=[100,100], n=100)
......@@ -240,12 +279,15 @@ W.use_dense_representation()
# query its sparsity. likely output: "10000 100"
print(W.sparsity(), W.count_non_zero_entries())
~~~
__tabsEnd
## Output and Storing
Probably the most common queries to the Tensor class are for its degree, obtained with [.degree()](\ref xerus::Tensor::degree()),
and for its precise dimensions, accessed via [.dimensions](\ref xerus::Tensor::dimensions).
__tabsStart
~~~ cpp
// construct a random 3x4x5x6 tensor
A = xerus::Tensor::random({3, 4, 5, 6});
......@@ -256,6 +298,7 @@ using xerus::misc::operator<<;
std::cout << "degree: " << A.degree() << " dim: " << A.dimensions << std::endl;
// expected output: "degree: 4 dim: {3, 4, 5, 6}"
~~~
__tabsMid
~~~ python
# construct a random 3x4x5x6 tensor
A = xerus.Tensor.random([3, 4, 5, 6])
......@@ -264,11 +307,14 @@ A = xerus.Tensor.random([3, 4, 5, 6])
print("degree:", A.degree(), "dim:", A.dimensions())
# expected output: "degree: 4 dim: [3, 4, 5, 6]"
~~~
__tabsEnd
Another common query is for the norm of a tensor. At the moment `xerus` provides member functions for the
two most commonly used norms: [.frob_norm()](\ref xerus::Tensor::frob_norm()) (or equivalently
`frob_norm(const Tensor&)`) to obtain the Frobenius norm and [.one_norm()](\ref xerus::Tensor::one_norm()) (or equivalently
`one_norm(const Tensor&)`) to obtain the p=1 norm of the tensor.
__tabsStart
~~~ cpp
A = xerus::Tensor::identity({100,100});
......@@ -276,6 +322,7 @@ A = xerus::Tensor::identity({100,100});
std::cout << one_norm(A) << ' ' << frob_norm(A) << std::endl;
// expected output: "10000 100"
~~~
__tabsMid
~~~ python
A = xerus.Tensor.identity([100,100])
......@@ -283,6 +330,7 @@ A = xerus.Tensor.identity([100,100])
print(xerus.one_norm(A), xerus.frob_norm(A))
# expected output: "10000 100"
~~~
__tabsEnd
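The member-function forms mentioned above work the same way; a minimal C++ sketch, reusing the identity tensor from the previous example:

~~~ cpp
A = xerus::Tensor::identity({100,100});

// the member functions yield the same values as the free functions above
std::cout << A.one_norm() << ' ' << A.frob_norm() << std::endl;
// expected output: "10000 100"
~~~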
To obtain a human-readable string representation of the tensor, [.to_string()](\ref xerus::Tensor::to_string()) can be used.
......@@ -291,6 +339,8 @@ reconstruct the original tensor from this output.
Storing Tensors to files such that they can be reconstructed exactly from those files is instead possible with [save_to_file()](\ref xerus::misc::save_to_file())
and [load_from_file()](\ref xerus::misc::load_from_file()).
__tabsStart
~~~ cpp
// construct a random 3x3 tensor
A = xerus::Tensor::random({3, 3});
......@@ -306,6 +356,7 @@ std::cout << "original tensor: " << A.to_string() << std::endl
<< "loaded tensor: " << B.to_string() << std::endl
<< "error: " << frob_norm(B-A) << std::endl;
~~~
__tabsMid
~~~ python
# construct a random 3x3 tensor
A = xerus.Tensor.random([3, 3])
......@@ -321,7 +372,7 @@ print("original tensor:", A)
print("loaded tensor:", B)
print("error:", xerus.frob_norm(B-A))
~~~
__tabsEnd
......@@ -336,6 +387,8 @@ Naturally tensor objects can be used in arithmetic expressions whereever this is
sized tensors as well as multiplication and division by scalars. Note that tensors in general do not have the canonical
isomorphism to operators that matrices have. As such, matrix multiplication cannot be generalized trivially to tensors. Instead
you will have to use indexed equations to express such contractions in `xerus`.
__tabsStart
~~~ cpp
xerus::Tensor A = xerus::Tensor::random({100});
......@@ -353,6 +406,7 @@ B -= A;
std::cout << "Distance on the unit sphere: " << B.frob_norm() << std::endl;
~~~
__tabsMid
~~~ python
A = xerus.Tensor.random([100])
......@@ -370,23 +424,29 @@ B -= A
print("Distance on the unit sphere: ", B.frob_norm())
~~~
__tabsEnd
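The scalar operations mentioned above use the ordinary arithmetic operators; a minimal C++ sketch, assuming the usual operator overloads for scalars:

~~~ cpp
xerus::Tensor A = xerus::Tensor::random({100});

// multiplication and division by scalars
xerus::Tensor B = 2.0 * A;
B = B / 2.0;

// addition and subtraction of equally sized tensors
xerus::Tensor C = A + B;
C -= B;
~~~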
Many theoretical results use flattenings or expansions (vectorizations or tensorizations) to reduce tensor problems to ones with
vectors and matrices. While this is not as common in practical applications, it can still be useful at times. Because the data
that the computer uses internally does not need to be reordered, such a reinterpretation of a tensor object is constant in
time.
__tabsStart
~~~ cpp
A = xerus::Tensor::identity({2,2});
// flatten the identity matrix to the vector (1, 0, 0, 1)
A.reinterpret_dimensions({4});
~~~
__tabsMid
~~~ python
A = xerus.Tensor.identity([2,2])
# flatten the identity matrix to the vector (1, 0, 0, 1)
A.reinterpret_dimensions([4])
~~~
__tabsEnd
This operation is obviously only possible when the total number of entries remains unchanged.
If you want to change the dimensions of a tensor such that the total size changes, you have to specify how to do this. `xerus`
......@@ -395,6 +455,8 @@ of a single mode by adding zero slates or removing existing slates at a given po
reduces the tensor to an object of degree d-1 that corresponds to the slate selected in the call to the function; finally,
[.remove_slate()](\ref xerus::Tensor::remove_slate()) is a simplified version of `.resize_mode()` that removes a single slate
from the tensor, reducing the dimension of the specified mode by one.
__tabsStart
~~~ cpp
// start with a 2x2 matrix filled with ones
A = xerus::Tensor::ones({2,2});
......@@ -408,6 +470,7 @@ A.fix_mode(1, 0);
// expected output: "1.0 0.0 1.0"
std::cout << A.to_string() << std::endl;
~~~
__tabsMid
~~~ python
# start with a 2x2 matrix filled with ones
A = xerus.Tensor.ones([2,2])
......@@ -421,9 +484,12 @@ A.fix_mode(mode=1, value=0)
# expected output: "1.0 0.0 1.0"
print(A)
~~~
__tabsEnd
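The [.remove_slate()](\ref xerus::Tensor::remove_slate()) variant mentioned above works analogously to `.fix_mode()` but keeps the degree of the tensor; a minimal C++ sketch, where the argument order `(mode, position)` is assumed by analogy with `.fix_mode()`:

~~~ cpp
// start again from a 2x2 matrix filled with ones
A = xerus::Tensor::ones({2,2});

// remove the first slate of mode 1, leaving a 2x1 matrix of ones
A.remove_slate(1, 0);
~~~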
At the moment the Hadamard product is not available in an indexed notation (due to a lack of overloadable operators). Its
behaviour can instead be achieved with [entrywise_product()](\ref xerus::entrywise_product()).
__tabsStart
~~~ cpp
// constructing a tensor with i.i.d. entries sampled from a chi-squared distribution (note: this is not the most efficient way to achieve this!) by
// 1. constructing a tensor with Gaussian i.i.d. entries
......@@ -431,6 +497,7 @@ A = xerus::Tensor::random({10,10,10,10});
// 2. performing an entrywise product of this tensor with itself
A = entrywise_product(A, A);
~~~
__tabsMid
~~~ python
# constructing a tensor with i.i.d. entries sampled from a chi-squared distribution (note: this is not the most efficient way to achieve this!) by
# 1. constructing a tensor with Gaussian i.i.d. entries
......@@ -438,11 +505,13 @@ A = xerus.Tensor.random([10,10,10,10])
# 2. performing an entrywise product of this tensor with itself
A = xerus.entrywise_product(A, A)
~~~
__tabsEnd
## Ownership of Data and Advanced Usage
To reduce the number of unnecessary copies of tensors, `xerus` can share the underlying data arrays among several tensors and
even evaluate multiplication with scalars lazily (or include it in the next contraction or other modification).
__tabsStart
~~~ cpp
A = xerus::Tensor::random({100,100});
......@@ -458,6 +527,7 @@ B.reinterpret_dimensions({10, 10, 10, 10});
B[{2,2,2,2}] = 0.0;
// A remains unchanged and does not share the data with B anymore
~~~
__tabsMid
~~~ python
A = xerus.Tensor.random([100,100])
......@@ -473,6 +543,7 @@ B.reinterpret_dimensions([10, 10, 10, 10])
B[[2,2,2,2]] = 0.0
# A remains unchanged and does not share the data with B anymore
~~~
__tabsEnd
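A minimal C++ sketch of the lazy scalar evaluation mentioned at the beginning of this section; it is assumed here that the factor is kept alongside the shared data rather than applied immediately:

~~~ cpp
A = xerus::Tensor::random({100,100});

// B shares A's data; the factor 2.0 can be stored lazily and applied
// during the next contraction or modification of B
B = 2.0 * A;
~~~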
The average user of `xerus` does not need to worry about this internal mechanism. This changes, though, as soon as you need access
to the underlying data structures, e.g. to call `blas` or `lapack` routines not supported by `xerus` or to convert objects from
......
#tabs {
margin: 0 auto;
width: 100%; /* container width */
}
input {
height: 0;
visibility: hidden;
margin: 0;
}
#tabs label {
background: #f9f9f9; /* tab background */
border-radius: .25em .25em 0 0;
color: #888; /* tab text color */
cursor: pointer;
display: block;
float: left;
font-size: 1em; /* tab text size */
height: 2.5em;
line-height: 2.5em;
margin-right: .25em;
padding: 0 1.5em;
margin-bottom: 0;
text-align: center;
}
#tabs label:hover, #tabs label:active {
background: #ddd; /* tab background on hover */
color: #666; /* tab text color on hover */
}
input:checked + label {
position: relative;
z-index: 6;
}
#content {
display: flex;
flex-direction: column;
min-height: 2em;
background: #f1f1f1; /* content background */
border-radius: 0 .25em .25em .25em;
padding: 0;
position: relative;
width: 100%;
z-index: 5;
}
#content section {
opacity: 0;
padding: 0.5em;
position: absolute;
z-index: -100;
}
#content #content1 {
position: relative;
}
input#tab1:checked ~ #tabs #content #content1,
input#tab2:checked ~ #tabs #content #content2 {
opacity: 1;
z-index: 100;
}
input#tab1:checked ~ #tabs label[for=tab1] {
background: #f1f1f1;
color: #444;
}
input#tab2:checked ~ #tabs label[for=tab2] {
background: #f1f1f1;
color: #444;
}