Commit bd082c76, authored May 23, 2017 by Ben Huber
Decomposition and Solve documentation
parent b639ba9b
Pipeline #731 passed in 8 minutes and 28 seconds

Showing 3 changed files with 168 additions and 13 deletions:
- doc/new/_posts/2000-11-26-decompositions.md (+153, -2)
- src/xerus/python/factorizations.cpp (+6, -11)
- src/xerus/python/tensor.cpp (+9, -0)

doc/new/_posts/2000-11-26-decompositions.md
---
layout: post
title: "Decompositions and Solve"
date: 2000-11-26
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# Decompositions and Solve
In matrix calculus, the decomposition of a matrix into a product of several matrices with special properties (e.g. into an orthogonal and a triangular matrix (QR), or into orthogonal matrices and a diagonal matrix of singular values (SVD)) is among the most powerful tools for devising numerical algorithms. For tensors of higher degree it is necessary to indicate along which modes the decomposition is supposed to happen, so `xerus` uses the notation of indexed equations explained in the previous chapter ([Indices and Equations](indices)).
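Distributing modes in this way is, conceptually, a matricization: some modes are grouped as rows, the rest as columns, and the resulting matrix is factorized. A sketch of the idea with numpy (not xerus itself):

```python
import numpy as np

np.random.seed(0)
A = np.random.rand(2, 3, 4, 5)            # modes i, j, k, l
# group modes (i, k) as rows and (j, l) as columns ...
M = A.transpose(0, 2, 1, 3).reshape(2 * 4, 3 * 5)
# ... and apply an ordinary matrix QR to the matricization
Q, R = np.linalg.qr(M)
assert np.allclose(Q.T @ Q, np.eye(Q.shape[1]))   # Q has orthonormal columns
assert np.allclose(Q @ R, M)                      # exact factorization
```

`xerus` performs this bookkeeping for you; the indices attached to the result tensors specify exactly how the modes are distributed.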
## QR Decompositions
To provide an intuitive approach to decompositions, `xerus` denotes them as the assignment of multiple tensors with a single operator. Here `(Q, R) = QR(A)` reads as "Q and R are defined as the QR decomposition of A", even though we naturally have to provide indices to make this line well defined:
__tabsStart
~~~ cpp
// perform QR decomposition of A and store result in Q and R
(Q(i,r), R(r,j)) = xerus::QR(A(i,j));
~~~
__tabsMid
~~~ py
# perform QR decomposition of A and store result in Q and R
(Q(i,r), R(r,j)) << xerus.QR(A(i,j))
~~~
__tabsEnd
In these decompositions we can distribute the modes of the original tensor as we please, but to be well defined, `Q` and `R` must share exactly one common index.
__tabsStart
~~~ cpp
// well defined QR decomposition of a degree 4 tensor
(Q(i,r,k), R(l,r,j)) = xerus::QR(A(i,j,k,l));
// invalid: (Q(i,r,s), R(r,s,j)) = xerus::QR(A(i,j));
~~~
__tabsMid
~~~ py
# well defined QR decomposition of a degree 4 tensor
(Q(i,r,k), R(l,r,j)) << xerus.QR(A(i,j,k,l))
# invalid: (Q(i,r,s), R(r,s,j)) << xerus.QR(A(i,j))
~~~
__tabsEnd
For convenience `xerus` defines four variants of the QR decomposition. Assuming the input is of size $m \times n$, with $min = \operatorname{min}(m,n)$ and $r$ the rank of the input, we obtain the following resulting objects:
<table class="table">
  <thead>
    <tr>
      <th style="border: none;"></th>
      <th style="border-bottom: none; border-left: 2px solid #ddd;">Left-Hand-Side</th>
      <th style="border: none;"></th>
      <th style="border-bottom: none; border-left: 2px solid #ddd; ">Right-Hand-Side</th>
      <th style="border: none;"></th>
    </tr>
    <tr>
      <th style="border-top: none;">Decomposition</th>
      <th style="border-top: none; border-left: 2px solid #ddd;">Property</th>
      <th style="border-top: none;">Dimension</th>
      <th style="border-top: none; border-left: 2px solid #ddd;">Property</th>
      <th style="border-top: none;">Dimension</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: right"><code class="highlighter-rouge">xerus.QR</code></td>
      <td style="border-left: 2px solid #ddd;">orthogonal</td>
      <td>$m \times min$</td>
      <td style="border-left: 2px solid #ddd;">upper triangular</td>
      <td>$min \times n$</td>
    </tr>
    <tr>
      <td style="text-align: right"><code class="highlighter-rouge">xerus.RQ</code></td>
      <td style="border-left: 2px solid #ddd;">upper triangular</td>
      <td>$m \times min$</td>
      <td style="border-left: 2px solid #ddd;">orthogonal</td>
      <td>$min \times n$</td>
    </tr>
    <tr>
      <td style="text-align: right"><code class="highlighter-rouge">xerus.QC</code></td>
      <td style="border-left: 2px solid #ddd;">orthogonal</td>
      <td>$m \times r$</td>
      <td style="border-left: 2px solid #ddd;">upper or lower triangular</td>
      <td>$r \times n$</td>
    </tr>
    <tr>
      <td style="text-align: right"><code class="highlighter-rouge">xerus.CQ</code></td>
      <td style="border-left: 2px solid #ddd;">upper or lower triangular</td>
      <td>$m \times r$</td>
      <td style="border-left: 2px solid #ddd;">orthogonal</td>
      <td>$r \times n$</td>
    </tr>
  </tbody>
</table>
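The dimension and property claims in the `xerus.QR` row can be checked against the ordinary matrix case; a sketch with numpy (which has no direct analogue of the rank-revealing `QC`/`CQ` variants):

```python
import numpy as np

np.random.seed(0)
m, n = 6, 4
A = np.random.rand(m, n)
Q, R = np.linalg.qr(A)          # reduced QR, analogous to the xerus.QR row
mn = min(m, n)

assert Q.shape == (m, mn) and R.shape == (mn, n)   # m x min and min x n
assert np.allclose(Q.T @ Q, np.eye(mn))            # Q has orthonormal columns
assert np.allclose(R, np.triu(R))                  # R is upper triangular
assert np.allclose(Q @ R, A)                       # exact reconstruction
```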
## Singular Value Decompositions
The Singular Value Decomposition in `xerus` is called very much like the `QR` decomposition:
__tabsStart
~~~ cpp
// calculate the SVD of A and store the resulting matrices in U, S and Vt
(U(i,r1), S(r1,r2), Vt(r2,j)) = xerus::SVD(A(i,j));
~~~
__tabsMid
~~~ py
# calculate the SVD of A and store the resulting matrices in U, S and Vt
(U(i,r1), S(r1,r2), Vt(r2,j)) << xerus.SVD(A(i,j))
~~~
__tabsEnd
In this form it is rank-revealing (so `S` is of dimension $r \times r$ instead of $\operatorname{min}(m,n) \times \operatorname{min}(m,n)$) and exact, but it is possible to pass optional arguments to use it as a truncated SVD.
__tabsStart
~~~ cpp
// calculate the SVD, truncated to at most 5 singular values
size_t numSingularVectors = 5;
// or until a singular value is smaller than 0.01 times the maximal singular value
double epsilon = 0.01;
(U(i,r1), S(r1,r2), Vt(r2,j)) = xerus::SVD(A(i,j), numSingularVectors, epsilon);
~~~
__tabsMid
~~~ py
# calculate the SVD, truncated to at most 5 singular values
numSingularVectors = 5
# or until a singular value is smaller than 0.01 times the maximal singular value
epsilon = 0.01
(U(i,r1), S(r1,r2), Vt(r2,j)) << xerus.SVD(A(i,j), maxRank=numSingularVectors, eps=epsilon)
~~~
__tabsEnd
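The truncation rule described above (cap the rank, and drop singular values below a relative threshold) can be sketched with numpy rather than xerus; the parameter names `max_rank` and `eps` here simply mirror the `maxRank`/`eps` arguments above:

```python
import numpy as np

def truncated_svd(A, max_rank, eps):
    """Keep at most max_rank singular values, dropping any smaller
    than eps times the largest one."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = min(max_rank, int(np.sum(s > eps * s[0])))
    return U[:, :keep], np.diag(s[:keep]), Vt[:keep, :]

np.random.seed(0)
A = np.random.rand(8, 6)
U, S, Vt = truncated_svd(A, max_rank=5, eps=0.01)
# at most 5 singular values survive, so the middle factor is at most 5 x 5
assert S.shape[0] == S.shape[1] <= 5
assert U.shape[0] == 8 and Vt.shape[1] == 6
```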
## Solving Linear Equations
A common application for matrix decompositions is to solve matrix equations of the form $A \cdot x = b$ for $x$ via QR, LU, LDL$^T$ or Cholesky decompositions. In `xerus` this is again provided in indexed notation, via `operator/`.
__tabsStart
~~~ cpp
// solve A(i,j)*x(j) = b(i) for x
x(j) = b(i) / A(i,j);
~~~
__tabsMid
~~~ py
# solve A(i,j)*x(j) = b(i) for x
x(j) << b(i) / A(i,j)
~~~
__tabsEnd
Depending on the representation and some properties of `A`, `xerus` will automatically choose one of the above-mentioned decompositions to solve the system.
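The semantics of `x(j) = b(i) / A(i,j)` correspond to the familiar linear solve; a numpy analogue (where a suitable factorization such as LU is likewise chosen internally):

```python
import numpy as np

# x(j) = b(i) / A(i,j) corresponds to solving A x = b for x
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)    # here x == [2., 3.]
assert np.allclose(A @ x, b)
```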
src/xerus/python/factorizations.cpp
~~~ diff
...
@@ -35,10 +35,13 @@ void expose_factorizations() {
 	});
 	class_<SVD, bases<TensorFactorisation>, boost::noncopyable>("SVD_temporary", boost::python::no_init);
-	def("SVD", +[](IndexedTensor<Tensor> &_rhs)->TensorFactorisation*{
-		return new SVD(std::move(_rhs));
+	def("SVD", +[](IndexedTensor<Tensor> &_rhs, size_t _maxRank, double _eps)->TensorFactorisation*{
+		return new SVD(std::move(_rhs), _maxRank, _eps);
 	}, return_value_policy<manage_new_object,	// result is treated as a new object
-		with_custodian_and_ward_postcall<0,1>>());	// but the argument will not be destroyed before the result is destroyed
+		with_custodian_and_ward_postcall<0,1>>(),	// but the argument will not be destroyed before the result is destroyed
+		(arg("source"), arg("maxRank")=std::numeric_limits<size_t>::max(), arg("eps")=EPSILON)
+	);
 	class_<QR, bases<TensorFactorisation>, boost::noncopyable>("QR_temporary", boost::python::no_init);
 	def("QR", +[](IndexedTensor<Tensor> &_rhs)->TensorFactorisation*{
...
@@ -64,12 +67,4 @@ void expose_factorizations() {
 	}, return_value_policy<manage_new_object,	// result is treated as a new object
 		with_custodian_and_ward_postcall<0,1>>());	// but the argument will not be destroyed before the result is destroyed
-	enum_<Tensor::Representation>("Representation", "Possible representations of Tensor objects.")
-		.value("Dense", Tensor::Representation::Dense)
-		.value("Sparse", Tensor::Representation::Sparse)
-	;
-	enum_<Tensor::Initialisation>("Initialisation", "Possible initialisations of new Tensor objects.")
-		.value("Zero", Tensor::Initialisation::Zero)
-		.value("None", Tensor::Initialisation::None)
-	;
 }
~~~
src/xerus/python/tensor.cpp
~~~ diff
...
@@ -26,6 +26,15 @@
 #include "misc.h"

 void expose_tensor() {
+	enum_<Tensor::Representation>("Representation", "Possible representations of Tensor objects.")
+		.value("Dense", Tensor::Representation::Dense)
+		.value("Sparse", Tensor::Representation::Sparse)
+	;
+	enum_<Tensor::Initialisation>("Initialisation", "Possible initialisations of new Tensor objects.")
+		.value("Zero", Tensor::Initialisation::Zero)
+		.value("None", Tensor::Initialisation::None)
+	;
+
 	{
 		scope Tensor_scope = class_<Tensor>("Tensor",
 			"a non-decomposed Tensor in either sparse or dense representation"
...
~~~
...
Write
Preview
Markdown
is supported
0%
Try again
or
attach a new file
.
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment