---
layout: post
title: "The Tensor Class"
date: 2000-11-30
topic: "Basic Usage"
section: "Documentation"
---
__tabsInit
# The Tensor Class

The basic building block of this library and of all Tensor Network methods is the Tensor, represented by the class `xerus::Tensor`.
To simplify the work with these objects, `xerus` contains a number of helper functions that allow the quick creation and 
modification of sparse and dense tensors. In the following we will list the most important ones but advise you to also read
the tutorials on [indices and equations](/indices) and [decompositions](/decompositions) to have the full toolset with which to work on individual 
tensors.

## Creation of Tensors
The most basic tensors can be created with the empty constructor

__tabsStart
~~~ cpp
// creates a degree 0 tensor
A = xerus::Tensor();
~~~
__tabsMid
~~~ python
# creates a degree 0 tensor
A = xerus.Tensor()
~~~
__tabsEnd


It is of degree 0 and represents the single number 0. Similarly, the constructors that take either the degree or a vector of 
dimensions as input create (sparse) tensors that are equal to 0 everywhere

__tabsStart
~~~ cpp
// creates a 1x1x1 tensor with entry 0
B = xerus::Tensor(3);

// creates a sparse 2x2x2 tensor without any entries
C = xerus::Tensor({2,2,2});
~~~
__tabsMid
~~~ python
# creates a 1x1x1 tensor with entry 0
B = xerus.Tensor(3)

# creates a sparse 2x2x2 tensor without any entries
C = xerus.Tensor([2,2,2])
# equivalently: xerus.Tensor(dim=[2,2,2])
~~~
__tabsEnd

The latter of these can be forced to create a dense tensor instead, which can either be initialized to 0 or left uninitialized

__tabsStart
~~~ cpp
// creates a dense 2x2x2 tensor with all entries set to 0
D = xerus::Tensor({2,2,2}, xerus::Tensor::Representation::Dense);

// creates a dense 2x2x2 tensor with uninitialized entries
E = xerus::Tensor({2,2,2}, xerus::Tensor::Representation::Dense, xerus::Tensor::Initialisation::None);
~~~
__tabsMid
~~~ python
# creates a dense 2x2x2 tensor with all entries set to 0
D = xerus.Tensor(dim=[2,2,2], repr=xerus.Tensor.Representation.Dense)

# creates a dense 2x2x2 tensor with uninitialized entries
E = xerus.Tensor(dim=[2,2,2], repr=xerus.Tensor.Representation.Dense, init=xerus.Tensor.Initialisation.None)
~~~
__tabsEnd

Other commonly used tensors (apart from the 0 tensor) are available through named constructors:

__tabsStart
~~~ cpp
// a 2x3x4 tensor with all entries = 1
xerus::Tensor::ones({2,3,4});

// an (3x4) x (3x4) identity operator
xerus::Tensor::identity({3,4,3,4});

// a 3x4x3x4 tensor with superdiagonal = 1 (where all 4 indices coincide) and = 0 otherwise
xerus::Tensor::kronecker({3,4,3,4});

// a 2x2x2 tensor with a 1 in position {1,1,1} and 0 everywhere else
xerus::Tensor::dirac({2,2,2}, {1,1,1});


// a 4x4x4 tensor with i.i.d. Gaussian random values
xerus::Tensor::random({4,4,4});

// a 4x4x4 sparse tensor with 10 random entries in uniformly distributed random positions
xerus::Tensor::random({4,4,4}, 10);

// a (4x4) x (4x4) random orthogonal operator drawn according to the Haar measure
xerus::Tensor::random_orthogonal({4,4},{4,4});
~~~
__tabsMid
~~~ python
# a 2x3x4 tensor with all entries = 1
xerus.Tensor.ones([2,3,4])

# an (3x4) x (3x4) identity operator
xerus.Tensor.identity([3,4,3,4])

# a 3x4x3x4 tensor with superdiagonal = 1 (where all 4 indices coincide) and = 0 otherwise
xerus.Tensor.kronecker([3,4,3,4])

# a 2x2x2 tensor with a 1 in position {1,1,1} and 0 everywhere else
xerus.Tensor.dirac([2,2,2], [1,1,1])
# equivalently xerus.Tensor.dirac(dim=[2,2,2], pos=[1,1,1])


# a 4x4x4 tensor with i.i.d. Gaussian random values
xerus.Tensor.random([4,4,4])

# a 4x4x4 sparse tensor with 10 random entries in uniformly distributed random positions
xerus.Tensor.random([4,4,4], n=10)

# a (4x4) x (4x4) random orthogonal operator drawn according to the Haar measure
xerus.Tensor.random_orthogonal([4,4],[4,4])
~~~
__tabsEnd

If the entries of the tensor should be calculated externally, it is possible in C++ to either pass the raw data directly (as
`std::unique_ptr<double>` or `std::shared_ptr<double>`; see the section 'Ownership of Data and Advanced Usage' for the latter!) 
or to use a callback / lambda function to populate the entries:

__tabsStart
~~~ cpp
std::unique_ptr<double> ptr = foo();
// transfer ownership of the data to the Tensor object of size 2x2x2
// NOTE: make sure that the dimensions are correct as there is no way for xerus to check this!
F = xerus::Tensor({2,2,2}, std::move(ptr));

// create a dense 3x3x3 tensor with every entry populated by a callback (lambda) function
G = xerus::Tensor({3,3,3}, [](const std::vector<size_t> &_idx) -> double {
	// somehow derive the entry from the index positions in _idx
	double result = double( _idx[0] * _idx[1] * _idx[2] );
	return result;
});

// create a sparse 16x16x16 tensor with 16 entries determined by a callback (lambda) function
H = xerus::Tensor({16,16,16}, 16, [](size_t num, size_t max) -> std::pair<size_t, double> {
	// insert number 1/5, 2/5, 3/5, 4/5, 5/5
	// at positions 0 (= {0,0,0}), 17 (= {0,1,1}), 34 (= {0,2,2}) ... respectively
	return std::pair<size_t,double>(num*17, double(num)/double(max));
});
~~~
__tabsMid
~~~ python
# Transferring ownership of raw data directly is not possible from within Python.

# create a dense 3x3x3 tensor with every entry populated by a callback (lambda) function
G = xerus.Tensor.from_function([3,3,3], lambda idx: idx[0]*idx[1]*idx[2])

# Creating a sparse tensor from a callback function in Python is not supported by xerus.
# This behaviour can easily be achieved by setting the corresponding values explicitly as seen below though.
~~~

In Python, raw data structures are not directly compatible with those used internally by `xerus`. Tensors can be constructed from 
`numpy.ndarray` objects though. This function will also implicitly accept Python's native array objects.
~~~ python
# convert a numpy tensor (identity matrix) to a xerus.Tensor object
T = xerus.Tensor.from_ndarray(numpy.eye(2))

# alternatively the function also accepts pythons native arrays
U = xerus.Tensor.from_ndarray([[1,0], [0,1]])
~~~
__tabsEnd

Last but not least, it is possible to populate the entries of a tensor by explicitly accessing them. During this process, an 
initially sparse 0 tensor as it is created by the default constructors will automatically be converted to a dense object as soon
as `xerus` deems this to be preferable.

__tabsStart
~~~ cpp
// creating an identity matrix by explicitly setting non-zero entries
V = xerus::Tensor({2,2});
V[{0,0}] = 1.0; // equivalently: V[0] = 1.0;
V[{1,1}] = 1.0; // equivalently: V[3] = 1.0;
~~~
__tabsMid
~~~ python
# creating an identity matrix by explicitly setting non-zero entries
V = xerus.Tensor([2,2])
V[[0,0]] = 1.0 # equivalently: V[0] = 1.0
V[[1,1]] = 1.0 # equivalently: V[3] = 1.0
~~~
__tabsEnd

Explicitly constructing a tensor in a similar blockwise fashion will be covered in the tutorial on [indices and equations](/indices).


## Sparse and Dense Representations
The last example already mentioned that `xerus` will dynamically convert sparse tensors to their dense counterpart when this
seems preferable. For this it does not matter whether the number of entries increased due to explicit access, summation or
contraction of tensors, or as the result of decompositions.

This behaviour can be modified by changing the global setting

__tabsStart
~~~ cpp
// tell xerus to convert sparse tensors to dense if 1 in 4 entries are non-zero
xerus::Tensor::sparsityFactor = 4;
~~~
__tabsMid
~~~ python
# tell xerus to convert sparse tensors to dense if 1 in 4 entries are non-zero
xerus.Tensor.sparsityFactor = 4
~~~
__tabsEnd

In particular, setting the [sparsityFactor](__doxyref(xerus::Tensor::sparsityFactor)) to 0 will disable this feature.

__tabsStart
~~~ cpp
// stop xerus from automatically converting sparse tensors to dense
xerus::Tensor::sparsityFactor = 0;
~~~
__tabsMid
~~~ python
# stop xerus from automatically converting sparse tensors to dense
xerus.Tensor.sparsityFactor = 0
~~~
__tabsEnd

Note though that calculations with non-sparse Tensors that are stored in a sparse representation are typically much slower than
in dense representation. You should thus manually convert overly full sparse Tensors to the dense representation.

To do this there are a number of ways to interact with the representation of `xerus::Tensor` objects. Above we already saw that
the constructors can be used to explicitly construct sparse (default behaviour) or dense tensors. For already existing objects
you can use the member functions [.is_sparse()](__doxyref(xerus::Tensor::is_sparse)) and [.is_dense()](__doxyref(xerus::Tensor::is_dense)) to query their representation. To change the representation, call the 
member functions [.use_dense_representation()](__doxyref(xerus::Tensor::use_dense_representation)) or [.use_sparse_representation()](__doxyref(xerus::Tensor::use_sparse_representation)) to change it in place, or [.dense_copy()](__doxyref(xerus::Tensor::dense_copy)) or 
[.sparse_copy()](__doxyref(xerus::Tensor::sparse_copy)) to obtain new tensor objects with the desired representation.
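
A minimal sketch of these queries and conversions, using the constructors introduced above:

~~~ cpp
// create a sparse tensor with 100 random entries
xerus::Tensor S = xerus::Tensor::random({100,100}, 100);

// query the current representation; expected output: "true false"
std::cout << std::boolalpha << S.is_sparse() << ' ' << S.is_dense() << std::endl;

// obtain a dense copy while leaving S itself in the sparse representation
xerus::Tensor D = S.dense_copy();

// or convert S itself to the dense representation in place
S.use_dense_representation();
~~~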

To make more informed decisions about whether a conversion might be useful, the tensor objects can be queried for the number of
defined entries with [.sparsity()](__doxyref(xerus::Tensor::sparsity)) or for the number of non-zero entries with [.count_non_zero_entries()](__doxyref(xerus::Tensor::count_non_zero_entries)).

__tabsStart
~~~ cpp
// create a sparse tensor with 100 random entries
W = xerus::Tensor::random({100,100}, 100);
// query its sparsity. likely output: "100 100"
std::cout << W.sparsity() << ' ' << W.count_non_zero_entries() << std::endl;

// store an explicit 0 value in the sparse representation
W[{0,0}] = 0.0;
// query its sparsity. likely output: "101 100"
std::cout << W.sparsity() << ' ' << W.count_non_zero_entries() << std::endl;

// convert the tensor to dense representation
W.use_dense_representation();
// query its sparsity. likely output: "10000 100"
std::cout << W.sparsity() << ' ' << W.count_non_zero_entries() << std::endl;
~~~
__tabsMid
~~~ python
# create a sparse tensor with 100 random entries
W = xerus.Tensor.random(dim=[100,100], n=100)
# query its sparsity. likely output: "100 100"
print(W.sparsity(), W.count_non_zero_entries())

# store an explicit 0 value in the sparse representation
W[[0,0]] = 0.0
# query its sparsity. likely output: "101 100"
print(W.sparsity(), W.count_non_zero_entries())

# convert the tensor to dense representation
W.use_dense_representation()
# query its sparsity. likely output: "10000 100"
print(W.sparsity(), W.count_non_zero_entries())
~~~
__tabsEnd



## Output and Storing
Probably the most common queries to the Tensor class are for its degree via [.degree()](__doxyref(xerus::Tensor::degree))
and for its precise dimensions via [.dimensions](__doxyref(xerus::Tensor::dimensions)).

__tabsStart
~~~ cpp
// construct a random 3x4x5x6 tensor
A = xerus::Tensor::random({3, 4, 5, 6});

// use xerus' stream operator<< to be able to print vectors to std::cout
using xerus::misc::operator<<;
// query its degree and dimensions
std::cout << "degree: " << A.degree() << " dim: " << A.dimensions << std::endl;
// expected output: "degree: 4 dim: {3, 4, 5, 6}"
~~~
__tabsMid
~~~ python
# construct a random 3x4x5x6 tensor
A = xerus.Tensor.random([3, 4, 5, 6])

# query its degree and dimensions
print("degree:", A.degree(), "dim:", A.dimensions())
# expected output: "degree: 4 dim: [3, 4, 5, 6]"
~~~
__tabsEnd

Another useful and commonly used query is for the norm of a tensor. At the moment `xerus` provides member functions for the
two most commonly used norms: [.frob_norm()](__doxyref(xerus::Tensor::frob_norm)) (or equivalently
`frob_norm(const Tensor&)`) to obtain the Frobenius norm and [.one_norm()](__doxyref(xerus::Tensor::one_norm)) (or equivalently
`one_norm(const Tensor&)`) to obtain the p=1 norm of the tensor.

__tabsStart
~~~ cpp
A = xerus::Tensor::identity({100,100});

// query the tensor for its p=1 and p=2 norm
std::cout << one_norm(A) << ' ' << frob_norm(A) << std::endl;
// expected output: "10000 100"
~~~
__tabsMid
~~~ python
A = xerus.Tensor.identity([100,100])

# query the tensor for its p=1 and p=2 norm
print(xerus.one_norm(A), xerus.frob_norm(A))
# expected output: "10000 100"
~~~
__tabsEnd


To obtain a human readable string representation of the tensor, [.to_string()](__doxyref(xerus::Tensor::to_string)) can be used.
Note that it is meant purely for debugging purposes, in particular for smaller objects, and that it is not possible to adequately 
reconstruct the original tensor from this output.
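
For example, a minimal sketch for inspecting a small tensor:

~~~ cpp
// print a human readable representation; only sensible for small objects
xerus::Tensor T = xerus::Tensor::identity({2,2});
std::cout << T.to_string() << std::endl;
~~~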

Storing tensors to files such that they can be reconstructed exactly is instead possible with [save_to_file()](__doxyref(xerus::misc::save_to_file))
and [load_from_file()](__doxyref(xerus::misc::load_from_file)) respectively.

__tabsStart
~~~ cpp
// construct a random 3x3 tensor
A = xerus::Tensor::random({3, 3});

// store the Tensor to the file "tensor.dat"
xerus::misc::save_to_file(A, "tensor.dat");

// load the Tensor from the file
xerus::Tensor B = xerus::misc::load_from_file<xerus::Tensor>("tensor.dat");

// check for correct reconstruction
std::cout << "original tensor: " << A.to_string() << std::endl
          << "loaded tensor: " << B.to_string() << std::endl
          << "error: " << frob_norm(B-A) << std::endl;
~~~
__tabsMid
~~~ python
# construct a random 3x3 tensor
A = xerus.Tensor.random([3, 3])

# store the Tensor to the file "tensor.dat"
xerus.misc.save_to_file(A, "tensor.dat")

# load the Tensor from the file
B = xerus.misc.load_from_file("tensor.dat")

# check for correct reconstruction
print("original tensor:", A)
print("loaded tensor:", B)
print("error:", xerus.frob_norm(B-A))
~~~
__tabsEnd




## Operators and Modifications
We have already seen the most basic method of modifying a tensor via the [operator[]](__doxyref(xerus::Tensor::operator[])). With it
and the index notation presented in the [indices and equations](/indices) tutorial, most desired manipulations can be 
represented. Some of them would still be cumbersome though, so `xerus` includes several helper functions to make your life easier.
The purpose of this section is to present the most important ones.

Naturally, tensor objects can be used in arithmetic expressions wherever this is well defined: addition and subtraction of equally
sized tensors as well as multiplication and division by scalars. Note that tensors in general do not have the canonical
isomorphism to operators that matrices have, so matrix multiplication cannot be generalized trivially to tensors. Instead
you will have to use indexed equations to express such contractions in `xerus` (see the sketch after the following example).

__tabsStart
~~~ cpp
xerus::Tensor A = xerus::Tensor::random({100});

// normalizing the vector
A = A / frob_norm(A);

// adding a small perturbation
xerus::Tensor B = A + 1e-5 * xerus::Tensor::random({100});

// normalizing B
B /= frob_norm(B);

// determining the change from A to B
B -= A;

std::cout << "Distance on the unit sphere: " << B.frob_norm() << std::endl;
~~~
__tabsMid
~~~ python
A = xerus.Tensor.random([100])

# normalizing the vector
A = A / xerus.frob_norm(A)

# adding a small perturbation
B = A + 1e-5 * xerus.Tensor.random([100])

# normalizing B
B /= B.frob_norm()

# determining the change from A to B
B -= A

print("Distance on the unit sphere: ", B.frob_norm())
~~~
__tabsEnd
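
As noted above, contractions such as an ordinary matrix product are expressed with indexed equations. A minimal sketch (the index notation itself is explained in the [indices and equations](/indices) tutorial):

~~~ cpp
xerus::Tensor A = xerus::Tensor::random({10, 15});
xerus::Tensor B = xerus::Tensor::random({15, 20});
xerus::Tensor C;
xerus::Index i, j, k;

// contract over the shared index j, i.e. the ordinary matrix product C = A*B
C(i, k) = A(i, j) * B(j, k);
~~~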

Many theoretical results use flattenings or expansions (vectorizations or tensorizations) to reduce tensor problems to ones involving
only vectors and matrices. While this is not as common in practical applications, it can still be useful at times. Because the data
stored internally does not need to be reordered, such a reinterpretation of the tensor object is constant in
time.

__tabsStart
~~~ cpp
A = xerus::Tensor::identity({2,2});

// flatten the identity matrix to the vector (1, 0, 0, 1)
A.reinterpret_dimensions({4});
~~~
__tabsMid
~~~ python
A = xerus.Tensor.identity([2,2])

# flatten the identity matrix to the vector (1, 0, 0, 1)
A.reinterpret_dimensions([4])
~~~
__tabsEnd

This operation is obviously only possible when the total number of entries remains unchanged.

If you want to change the dimensions of a tensor such that the total size changes, you have to specify how to do this. `xerus`
provides three functions to help you in such a case: [.resize_mode()](__doxyref(xerus::Tensor::resize_mode)) changes the dimension
of a single mode by adding zero slates or removing existing slates at a given position; [.fix_mode()](__doxyref(xerus::Tensor::fix_mode))
reduces the tensor to an object of degree d-1 that corresponds to the slate selected in the call to the function; finally
[.remove_slate()](__doxyref(xerus::Tensor::remove_slate)) is a simplified version of `.resize_mode()` that removes a single slate
from the tensor, reducing the dimension of the specified mode by one.

__tabsStart
~~~ cpp
// start with a 2x2 matrix filled with ones
A = xerus::Tensor::ones({2,2});

// insert another row (consisting of zeros) i.e. increase dimension of mode 0 to 3
A.resize_mode(0, 3, 1);

// select the first column i.e. reduce to fixed value 0 for mode 1
A.fix_mode(1, 0);

// expected output: "1.0 0.0 1.0"
std::cout << A.to_string() << std::endl;
~~~
__tabsMid
~~~ python
# start with a 2x2 matrix filled with ones
A = xerus.Tensor.ones([2,2])

# insert another row (consisting of zeros) i.e. increase dimension of mode 0 to 3
A.resize_mode(mode=0, newDim=3, cutPos=1)

# select the first column i.e. reduce to fixed value 0 for mode 1
A.fix_mode(mode=1, value=0)

# expected output: "1.0 0.0 1.0"
print(A)
~~~
__tabsEnd
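
For completeness, a small sketch of [.remove_slate()](__doxyref(xerus::Tensor::remove_slate)), assuming the same argument order as the calls above (the mode first, then the position of the slate to remove):

~~~ cpp
// start again with a 2x2 matrix filled with ones
A = xerus::Tensor::ones({2,2});

// remove the first row, i.e. the slate at position 0 of mode 0
A.remove_slate(0, 0);

// A is now a 1x2 matrix; expected output: "1.0 1.0"
std::cout << A.to_string() << std::endl;
~~~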

At the moment the Hadamard product is not available in an indexed notation (due to a lack of overloadable operators). Its
behaviour can instead be achieved with [entrywise_product()](__doxyref(xerus::misc::entrywise_product)).

__tabsStart
~~~ cpp
// constructing a tensor with i.i.d. entries sampled from a Chi-Squared distribution by (Note: this is not the most efficient way to achieve this!)
// 1. constructing a tensor with Gaussian i.i.d. entries
A = xerus::Tensor::random({10,10,10,10});
// 2. performing an entrywise product of this tensor with itself
A = entrywise_product(A, A);
~~~
__tabsMid
~~~ python
# constructing a tensor with i.i.d. entries sampled from a Chi-Squared distribution by (Note: this is not the most efficient way to achieve this!)
# 1. constructing a tensor with Gaussian i.i.d. entries
A = xerus.Tensor.random([10,10,10,10])
# 2. performing an entrywise product of this tensor with itself
A = xerus.entrywise_product(A, A)
~~~
__tabsEnd

## Ownership of Data and Advanced Usage
To reduce the number of unnecessary copies of tensors, `xerus` can share the underlying data arrays among several tensors and
even evaluate multiplication with scalars lazily (or include them in the next contraction or other modification).

__tabsStart
~~~ cpp
A = xerus::Tensor::random({100,100});

// after the copy, the underlying data array is shared among A and B
xerus::Tensor B(A);

// the array is still shared, even after multiplication with scalars
B *= 3.141;
// as well as after dimension reinterpretation
B.reinterpret_dimensions({10, 10, 10, 10});

// any change in the stored data of A or B will then copy the data to ensure that the other is not changed
B[{2,2,2,2}] = 0.0;
// A remains unchanged and does not share the data with B anymore
~~~
__tabsMid
~~~ python
A = xerus.Tensor.random([100,100])

# after the copy, the underlying data array is shared among A and B
B = xerus.Tensor(A)

# the array is still shared, even after multiplication with scalars
B *= 3.141
# as well as after dimension reinterpretation
B.reinterpret_dimensions([10, 10, 10, 10])

# any change in the stored data of A or B will then copy the data to ensure that the other is not changed
B[[2,2,2,2]] = 0.0
# A remains unchanged and does not share the data with B anymore
~~~
__tabsEnd

The average user of `xerus` does not need to worry about this internal mechanism. This changes though as soon as you need access
to the underlying data structures, e.g. to call `blas` or `lapack` routines not supported by `xerus`, or to convert objects from
other libraries to and from `xerus::Tensor` objects. If you do, make sure to check out the documentation
for the following functions (C++ only):
* [.has_factor()](__doxyref(xerus::Tensor::has_factor)) and [.apply_factor()](__doxyref(xerus::Tensor::apply_factor))
* [.get_dense_data()](__doxyref(xerus::Tensor::get_dense_data)); [.get_unsanitized_dense_data()](__doxyref(xerus::Tensor::get_unsanitized_dense_data)); [.get_internal_dense_data()](__doxyref(xerus::Tensor::get_internal_dense_data))
* [.get_sparse_data()](__doxyref(xerus::Tensor::get_sparse_data)); [.get_unsanitized_sparse_data()](__doxyref(xerus::Tensor::get_unsanitized_sparse_data)); [.get_internal_sparse_data()](__doxyref(xerus::Tensor::get_internal_sparse_data))
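
As a rough illustration only (a sketch that assumes `.get_dense_data()` switches to the dense representation, applies any pending scalar factor and returns a pointer to the entries, and that the public member `.size` holds the total number of entries; please verify these semantics in the linked documentation):

~~~ cpp
#include <cblas.h>  // any BLAS implementation with a C interface

xerus::Tensor A = xerus::Tensor::random({10,10});

// obtain raw access to the (dense) entries of A
double *raw = A.get_dense_data();

// call a routine that xerus does not wrap itself, here scaling every entry by 2
cblas_dscal(static_cast<int>(A.size), 2.0, raw, 1);
~~~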

__warnStart

A note about **thread safety**: accessing different Tensor objects at the same time is always safe - even if the underlying
data structure is shared. The same cannot be said about accessing the same object though. Depending on the [sparsityFactor](__doxyref(xerus::Tensor::sparsityFactor)),
even an access via the [operator[]](__doxyref(xerus::Tensor::operator[])) can lead to a change in the internal representation. If
you really have to read from the same Tensor object from several threads, make sure to only use non-modifying functions (e.g. by
accessing the Tensor only via a `const` reference).

__warnEnd
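
A read-only access pattern along these lines might look as follows (a sketch; it assumes that `frob_norm()` is such a non-modifying member function):

~~~ cpp
#include <thread>
#include <vector>

xerus::Tensor A = xerus::Tensor::random({100,100});
const xerus::Tensor &constA = A;  // read-only view on A

std::vector<std::thread> workers;
for (size_t t = 0; t < 4; ++t) {
	// every thread only calls non-modifying (const) member functions
	workers.emplace_back([&constA]() {
		const double n = constA.frob_norm();
		(void) n;
	});
}
for (auto &w : workers) { w.join(); }
~~~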