---
layout: post
title: "Signal Cascade Markov Model"
date: 1000-10-10
topic: "Examples"
section: "Examples"
---
__tabsInit
# Signal Cascade Markov Model

In this example we want to solve a Markovian master equation corresponding to a genetic signal cascade. We will use an implicit Euler
method in the time direction, using the ALS algorithm to solve the individual steps. The operator will be constructed
according to the SLIM decomposition derived in [[P. Gelß et al., 2017]](https://arxiv.org/abs/1611.03755) (cf. Example 4.1 therein).

## Transition Matrices

Our solution tensor $X[i_1, i_2, \dots]$ represents the likelihood that there are $i_1$ copies of protein one, $i_2$ copies of protein two
and so on. As the likelihood of very large $i_j$ decays rapidly, we can restrict ourselves to a finite tensor with $i_j \in \\{0,1,\dots,n_j\\}$,
represented in our source code as

__tabsStart
~~~ cpp
const size_t MAX_NUM_PER_SITE = 32;
~~~
__tabsMid
~~~ py
import xerus as xe

MAX_NUM_PER_SITE = 32
~~~
__tabsEnd

For the different events we can now define matrices with the corresponding action. We will denote by $M$ the creation of a
new protein: the diagonal entries $-1$ remove the probability mass of the current number of proteins, while the subdiagonal entries $1$ move it to the state with one protein more.

__tabsStart
~~~ cpp
Tensor create_M() {
	Tensor M = -1*Tensor::identity({MAX_NUM_PER_SITE, MAX_NUM_PER_SITE});
	for (size_t i = 0; i < MAX_NUM_PER_SITE-1; ++i) {
		M[{i+1, i}] = 1.0;
	}
	return M;
}
~~~
__tabsMid
~~~ py
def create_M():
	M = -1 * xe.Tensor.identity([MAX_NUM_PER_SITE, MAX_NUM_PER_SITE])
	for i in xrange(MAX_NUM_PER_SITE-1) :
		M[[i+1, i]] = 1.0
	return M
~~~
__tabsEnd
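
As a quick illustration (a hedged Python-only sketch, not part of the original code, assuming `import xerus as xe` as above), applying $M$ to a basis vector $e_k$ moves rate $1$ from state $k$ to state $k+1$:

~~~ py
# Minimal sketch: M applied to the basis vector e_2 (two proteins present).
i, j = xe.indices(2)
M = create_M()
e2 = xe.Tensor.dirac([MAX_NUM_PER_SITE], 2)
v = xe.Tensor()
v(i) << M(i, j) * e2(j)
print(v[[2]], v[[3]])  # -1.0 (rate out of state 2) and 1.0 (rate into state 3)
~~~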

The probability of creation of a protein $x_i$ is given in terms of the number of proteins $x_{i-1}$ as $\frac{x_{i-1}}{5+x_{i-1}}$,
so we need another matrix $L$ that holds these probabilities, such that we can later construct the corresponding two-site TT operator as $L\otimes M$.

__tabsStart
~~~ cpp
Tensor create_L() {
	Tensor L({MAX_NUM_PER_SITE, MAX_NUM_PER_SITE});
	L.modify_diagonal_entries([](value_t& _x, const size_t _i) { 
		_x = double(_i)/double(_i+5); 
	});
	return L;
}
~~~
__tabsMid
~~~ py
def create_L():
	L = xe.Tensor([MAX_NUM_PER_SITE, MAX_NUM_PER_SITE])
	for i in xrange(MAX_NUM_PER_SITE) :
		L[[i,i]] = i / (i+5.0)
	return L
~~~
__tabsEnd
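
To make the two-site structure concrete, here is a hedged sketch (Python only, not in the original text) that builds $L\otimes M$ as a dense tensor via index notation; the diagonal factor $L$ scales the creation event on site $i$ by $x_{i-1}/(5+x_{i-1})$:

~~~ py
# Minimal sketch of the two-site operator L (x) M as a dense tensor.
i1, i2, j1, j2 = xe.indices(4)
L = create_L()
M = create_M()
LM = xe.Tensor()
LM(i1, i2, j1, j2) << L(i1, j1) * M(i2, j2)
# e.g. creation of the second protein while two copies of the first exist:
print(LM[[2, 1, 2, 0]])  # = L[2,2] * M[1,0] = 2/7
~~~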


The corresponding destruction could be expressed similarly, but as the probability of destruction in our example only depends on the
number of proteins $x_i$ itself, we will instead use a single matrix denoted by $S$ that already includes these probabilities.
Relative to the number of proteins $x_i$, the destruction probability in this example is given as $0.07\, x_i$; we thus have:

__tabsStart
~~~ cpp
Tensor create_S() {
	Tensor S({MAX_NUM_PER_SITE, MAX_NUM_PER_SITE});
	
	// Set diagonal
	for (size_t i = 0; i < MAX_NUM_PER_SITE; ++i) {
		S[{i, i}] = -double(i);
	}
	
	// Set offdiagonal
	for (size_t i = 0; i < MAX_NUM_PER_SITE-1; ++i) {
		S[{i, i+1}] = double(i+1);
	}
	return 0.07*S;
}
~~~
__tabsMid
~~~ py
def create_S():
	S = xe.Tensor([MAX_NUM_PER_SITE, MAX_NUM_PER_SITE])
	
	# set diagonal
	for i in xrange(MAX_NUM_PER_SITE) :
		S[[i,i]] = -i
	
	# set offdiagonal
	for i in xrange(MAX_NUM_PER_SITE-1) :
		S[[i,i+1]] = i+1
	
	return 0.07*S
~~~
__tabsEnd
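
As a small sanity check (a hedged addition, not part of the original example), the columns of $S$ sum to zero, i.e. destruction events conserve total probability mass:

~~~ py
# Each column of S sums to zero: the -0.07*i leaving state i exactly matches
# the 0.07*i arriving in state i-1.
i, j = xe.indices(2)
S = create_S()
colSums = xe.Tensor()
colSums(j) << xe.Tensor.ones([MAX_NUM_PER_SITE])(i) * S(i, j)
print(xe.frob_norm(colSums))  # ~0
~~~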


## System Operator

The operator corresponding to the full system of $N$ proteins can now be expressed in terms of these matrices. As shown in
the above-mentioned paper, it is given by

$$ A = \begin{bmatrix} S^* & L & I \end{bmatrix} \otimes \begin{bmatrix} I & 0 & 0 \\ M & 0 & 0 \\ S & L & I \end{bmatrix} \otimes \cdots \otimes \begin{bmatrix} I & 0 & 0 \\ M & 0 & 0 \\ S & L & I \end{bmatrix} \otimes \begin{bmatrix} I \\ M \\ S \end{bmatrix} $$

where $S^*$ includes the creation of the first protein (which does not depend on any other protein) as $S^* = 0.7\cdot M + S$.
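
Multiplying out the block matrices makes this structure transparent. For $N = 3$, for example, the product evaluates to

$$ A = S^* \otimes I \otimes I + L \otimes M \otimes I + I \otimes S \otimes I + I \otimes L \otimes M + I \otimes I \otimes S, $$

i.e. every protein decays independently (the $S$ terms), the first protein is additionally created at a constant rate (contained in $S^*$) and every further protein $x_i$ is created at a rate controlled by its predecessor (the $L \otimes M$ terms acting on sites $i-1$ and $i$).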

The construction of this operator in `xerus` is straightforward: we first construct the individual matrices, use them to
build the components as given in the formula above, and then simply set them via `.set_component` as the components of
our operator.

__tabsStart
~~~ cpp
TTOperator create_operator(const size_t _degree) { 
	const Index i, j, k, l;
	
	// Create matrices
	const Tensor M = create_M();
	const Tensor S = create_S();
	const Tensor L = create_L();
	const Tensor Sstar = 0.7*M + S;
	const Tensor I = Tensor::identity({MAX_NUM_PER_SITE, MAX_NUM_PER_SITE});
	
	// Create empty TTOperator
	TTOperator A(2*_degree);
	
	// Create first component
	Tensor comp;
	comp(i, j, k, l) = 
		Sstar(j, k) * Tensor::dirac({1, 3}, 0)(i, l) 
		+   L(j, k) * Tensor::dirac({1, 3}, 1)(i, l) 
		+   I(j, k) * Tensor::dirac({1, 3}, 2)(i, l);
    
	A.set_component(0, comp); 
	
	// Create middle components
	comp(i, j, k, l) =
		  I(j, k) * Tensor::dirac({3, 3}, {0, 0})(i, l)
		+ M(j, k) * Tensor::dirac({3, 3}, {1, 0})(i, l)
		+ S(j, k) * Tensor::dirac({3, 3}, {2, 0})(i, l)
		+ L(j, k) * Tensor::dirac({3, 3}, {2, 1})(i, l)
		+ I(j, k) * Tensor::dirac({3, 3}, {2, 2})(i, l);
	
	for(size_t c = 1; c+1 < _degree; ++c) {
		A.set_component(c, comp);
	}
	
	// Create last component
	comp(i, j, k, l) = 
		  I(j, k) * Tensor::dirac({3, 1}, 0)(i, l) 
		+ M(j, k) * Tensor::dirac({3, 1}, 1)(i, l) 
		+ S(j, k) * Tensor::dirac({3, 1}, 2)(i, l);
    
	A.set_component(_degree-1, comp);
	
	return A;
}
~~~
__tabsMid
~~~ py
def create_operator(degree):
	i,j,k,l = xe.indices(4)
	
	# create matrices
	M = create_M()
	S = create_S()
	L = create_L()
	Sstar = 0.7*M + S
	I = xe.Tensor.identity([MAX_NUM_PER_SITE, MAX_NUM_PER_SITE])
	
	# create empty TTOperator
	A = xe.TTOperator(2*degree)
	
	# create first component
	comp = xe.Tensor()
	comp(i, j, k, l) << \
		Sstar(j, k) * xe.Tensor.dirac([1, 3], 0)(i, l) \
		+   L(j, k) * xe.Tensor.dirac([1, 3], 1)(i, l) \
		+   I(j, k) * xe.Tensor.dirac([1, 3], 2)(i, l)
	
	A.set_component(0, comp)
	
	# create middle components
	comp(i, j, k, l) << \
		  I(j, k) * xe.Tensor.dirac([3, 3], [0, 0])(i, l) \
		+ M(j, k) * xe.Tensor.dirac([3, 3], [1, 0])(i, l) \
		+ S(j, k) * xe.Tensor.dirac([3, 3], [2, 0])(i, l) \
		+ L(j, k) * xe.Tensor.dirac([3, 3], [2, 1])(i, l) \
		+ I(j, k) * xe.Tensor.dirac([3, 3], [2, 2])(i, l)
	
	for c in xrange(1, degree-1) :
		A.set_component(c, comp)
	
	# create last component
	comp(i, j, k, l) << \
		  I(j, k) * xe.Tensor.dirac([3, 1], 0)(i, l) \
		+ M(j, k) * xe.Tensor.dirac([3, 1], 1)(i, l) \
		+ S(j, k) * xe.Tensor.dirac([3, 1], 2)(i, l)
	
	A.set_component(degree-1, comp)
	
	return A
~~~
__tabsEnd
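
Before using the operator we can sanity-check it (a hedged sketch, not part of the original example): starting from the empty state $(0,\dots,0)$ the only possible event is the creation of the first protein at rate $0.7$, so $A e_0$ should have entry $-0.7$ at $(0,\dots,0)$ and $+0.7$ at $(1,0,\dots,0)$:

~~~ py
# Hedged sanity check of create_operator; assumes TTTensor.dirac accepts a
# multi-index position just like the dense Tensor version does.
A = create_operator(4)
dims = [MAX_NUM_PER_SITE]*4
i, j = xe.indices(2)
x0 = xe.TTTensor.dirac(dims, 0)
print(float(xe.TTTensor.dirac(dims, [0,0,0,0])(i&0) * A(i/2, j/2) * x0(j&0)))  # -0.7
print(float(xe.TTTensor.dirac(dims, [1,0,0,0])(i&0) * A(i/2, j/2) * x0(j&0)))  # 0.7
~~~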



## Implicit Euler

To solve the master equation in the time domain we will use a simple implicit Euler method. In every step we have to solve
$ (I-\tau A) x_{i+1} = x_i $ for $x_{i+1}$. To do so, we will use the ALS method built into `xerus`. From previous experiments
we know that the `_SPD` (for "symmetric positive semi-definite") variant of the ALS works fine in this setting even though
the operator is not symmetric. To define the parameters only once, we create our own ALS variant.

For the purpose of this example we will keep an eye on the residual after each step to verify that these claims actually
hold, and we will store the result of every step so that we can plot the mean concentrations over time in the end.

To ensure that the entries of the solution tensor actually represent probabilities, we will also normalize the tensor in every
step such that its one-norm equals 1. This norm is usually hard to calculate, but under the assumption that all entries
are positive we can express it as a simple contraction with a ones-tensor.

__tabsStart
~~~ cpp
double one_norm(const TTTensor &_x) {
	Index j;
	return double(_x(j&0) * TTTensor::ones(_x.dimensions)(j&0));
}

std::vector<TTTensor> implicit_euler(const TTOperator& _A, TTTensor _x, 
			const double _stepSize, const size_t _n) 
{ 
	const TTOperator op = TTOperator::identity(_A.dimensions)-_stepSize*_A;
	
	Index j,k;
	auto ourALS = ALS_SPD;
	ourALS.convergenceEpsilon = 1e-4;
	ourALS.numHalfSweeps = 100;
	
	std::vector<TTTensor> results;
	TTTensor nextX = _x;
	results.push_back(_x);
    
	for(size_t i = 0; i < _n; ++i) {
		ourALS(op, nextX, _x);
        
		// Normalize
		double norm = one_norm(nextX);
		nextX /= norm;
		
		XERUS_LOG(iter, "Done itr " << i 
				<< " residual: " << frob_norm(op(j/2,k/2)*nextX(k&0) - _x(j&0)) 
				<< " one-norm: " << norm);
		
		_x = nextX;
		results.push_back(_x);
	}
	
	return results;
}
~~~
__tabsMid
~~~ py
def one_norm(x):
	i = xe.Index()
	return float(x(i&0) * xe.TTTensor.ones(x.dimensions)(i&0))

def implicit_euler(A, x, stepSize, n):
	op = xe.TTOperator.identity(A.dimensions) - stepSize*A
	
	j,k = xe.indices(2)
	ourALS = xe.ALS_SPD
	ourALS.convergenceEpsilon = 1e-4
	ourALS.numHalfSweeps = 100
	
	results = [x]
	nextX = xe.TTTensor(x)
	
	for i in xrange(n) :
		ourALS(op, nextX, x)
		
		# normalize
		norm = one_norm(nextX)
		nextX /= norm
		
		print("done itr", i, \
			"residual:", xe.frob_norm(op(j/2,k/2)*nextX(k&0) - x(j&0)), \
			"one-norm:", norm)
		
		x = xe.TTTensor(nextX) # ensure it is a copy
		results.append(x)
	
	return results
~~~
__tabsEnd
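
As a quick hedged check of the one-norm trick (not part of the original code), a Dirac tensor carries all of its mass in a single entry of value one, so its one-norm must be exactly one:

~~~ py
print(one_norm(xe.TTTensor.dirac([MAX_NUM_PER_SITE]*5, 0)))  # 1.0
~~~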


All we have to do now is to provide a starting point (no proteins, with probability 1) and call the appropriate functions.
As our implicit Euler method as it stands has no means of adapting the rank of the solution, we increase the rank from the start
by adding a small perturbation to the starting configuration. To do this we have to convert the (sparse) Dirac TTTensor to
a dense representation, because sparse TT components cannot (yet) be added to other TTTensors (see the [TTTensor](/tttensors) documentation).

__tabsStart
~~~ cpp
int main() {
	const size_t numProteins = 10;
	const size_t numSteps = 200;
	const double stepSize = 1.0;
	const size_t rankX = 3;
	
	const auto A = create_operator(numProteins);
	
	auto start = TTTensor::dirac(
			std::vector<size_t>(numProteins, MAX_NUM_PER_SITE), 
			0
	);
	start.use_dense_representations();
	start += 1e-14 * TTTensor::random(
			start.dimensions, 
			std::vector<size_t>(start.degree()-1, rankX-1)
	);
	
	const auto results = implicit_euler(A, start, stepSize, numSteps);
}
~~~
__tabsMid
~~~ py
numProteins = 10
numSteps = 200
stepSize = 1.0
rankX = 3

A = create_operator(numProteins)

start = xe.TTTensor.dirac([MAX_NUM_PER_SITE]*numProteins, 0)
start.use_dense_representations()
start += 1e-14 * xe.TTTensor.random( \
		start.dimensions, \
		[rankX-1]*(start.degree()-1))

results = implicit_euler(A, start, stepSize, numSteps)
~~~
__tabsEnd


## Output
At this point we have calculated the solution tensors and can start to compute quantities of interest from them. For the
purpose of this example we will simply calculate the mean concentration of every protein at every timestep. To obtain the
mean we weight the mode corresponding to the protein in question with the number of proteins it represents
(the vector $(0, 1, 2, \dots)$) and sum over all other protein configurations (i.e. contract a ones-vector onto those modes).
We will use the most general `TensorNetwork` class to write these contractions. This way `xerus` can be lazy and only
perform the actual contractions (and decide upon a contraction order) once we query the final value.

__tabsStart
~~~ cpp
double get_mean_concentration(const TTTensor& _res, const size_t _i) { 
	const Index k,l;
	TensorNetwork result(_res);
	const Tensor weights({MAX_NUM_PER_SITE}, [](const size_t _k){ 
		return double(_k); 
	});
	const Tensor ones = Tensor::ones({MAX_NUM_PER_SITE});
	
	for (size_t j = 0; j < _res.degree(); ++j) {
		if (j == _i) {
			result(l&0) = result(k, l&1) * weights(k);
		} else {
			result(l&0) = result(k, l&1) * ones(k);
		}
	}
	// at this point the degree of 'result' is 0, so there is only one entry
	return result[{}]; 
}
~~~
__tabsMid
~~~ py
def get_mean_concentration(x, i):
	k,l = xe.indices(2)
	result = xe.TensorNetwork(x)
	weights = xe.Tensor.from_function([MAX_NUM_PER_SITE], lambda idx: idx[0])
	ones = xe.Tensor.ones([MAX_NUM_PER_SITE])
	
	for j in xrange(x.degree()) :
		if j == i :
			result(l&0) << result(k, l&1) * weights(k)
		else :
			result(l&0) << result(k, l&1) * ones(k)
	
	# at this point the degree of 'result' is 0, so there is only one entry
	return result[[]]
~~~
__tabsEnd
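
For example (a hypothetical usage line, with `results` as returned by `implicit_euler` above), the mean copy number of the first protein in the final time step is obtained as:

~~~ py
print(get_mean_concentration(results[-1], 0))
~~~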

Observing the evolution of concentrations over time is now a simple matter of iterating over all solution steps and proteins.

__tabsStart
~~~ cpp
void print_mean_concentrations_to_file(const std::vector<TTTensor> &_result) {
	std::fstream out("mean.dat", std::fstream::out);
	for (const auto& res : _result) {
		for (size_t k = 0; k < res.degree(); ++k) {
			out << get_mean_concentration(res, k) << ' ';
		}
		out << std::endl;
	}
}
~~~
__tabsMid
~~~ py
def print_mean_concentrations_to_file(results):
	f = open("mean.dat", 'w')
	for res in results :
		for k in xrange(res.degree()) :
			f.write(str(get_mean_concentration(res, k))+' ')
		f.write('\n')
	f.close()
~~~
__tabsEnd


The solution shows nice saturation curves for all individual proteins:

<center><img src="/images/cascade.png" alt="Ten saturation curves with different onsets."></center>

## Complete Source Code
The full source code of this example reads as follows:

__tabsStart
~~~ cpp
{% include examples/cascade.cpp %}
~~~
__tabsMid
~~~ python
{% include examples/cascade.py %}
~~~
__tabsEnd