In probability theory, the matrix analytic method is a technique for computing the stationary probability distribution of a Markov chain that has a repeating structure (after some point) and a state space that grows unboundedly in no more than one dimension.[1][2] Such models are often described as M/G/1-type Markov chains because they can describe transitions in an M/G/1 queue.[3][4] The method is a more complicated version of the matrix geometric method and is the classical solution method for M/G/1 chains.[5]
An M/G/1-type stochastic matrix is one of the form[3]

P = \begin{pmatrix}
B_0 & B_1 & B_2 & B_3 & \cdots \\
A_0 & A_1 & A_2 & A_3 & \cdots \\
    & A_0 & A_1 & A_2 & \cdots \\
    &     & A_0 & A_1 & \cdots \\
    &     &     & \ddots & \ddots
\end{pmatrix}
where B_i and A_i are k × k matrices. (Note that unmarked matrix entries represent zeroes.) Such a matrix describes the embedded Markov chain in an M/G/1 queue.[6][7] If P is irreducible and positive recurrent then the stationary distribution is given by the solution to the equations[3]

\pi P = \pi, \qquad \pi \mathbf{e} = 1,
where e represents a vector of suitable dimension with all values equal to 1. Matching the block structure of P, π is partitioned into π_0, π_1, π_2, …. To compute these probabilities, the stochastic matrix G is computed such that[3]

G = \sum_{i=0}^{\infty} A_i G^i.
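To make the block structure and the stationary equations concrete, the following NumPy sketch builds a finite truncation of an M/G/1-type matrix from invented 2 × 2 blocks (three B blocks and three A blocks chosen to give a downward drift) and solves the truncated chain by brute force. The truncation level, the block values and every name in the snippet are assumptions made purely for illustration; the result is only an approximation of π_0, which the matrix analytic method computes exactly.

import numpy as np

# Invented 2 x 2 blocks; sum(B_i) and sum(A_i) are each row stochastic,
# and the A blocks give a downward mean drift (positive recurrence).
B = [np.array([[0.40, 0.20], [0.30, 0.30]]),
     np.array([[0.20, 0.10], [0.10, 0.20]]),
     np.array([[0.05, 0.05], [0.05, 0.05]])]
A = [np.array([[0.20, 0.10], [0.10, 0.20]]),
     np.array([[0.30, 0.20], [0.20, 0.30]]),
     np.array([[0.10, 0.10], [0.10, 0.10]])]
k, levels = 2, 80                               # truncate the infinite chain at `levels` levels

P = np.zeros((levels * k, levels * k))
P[0:k, 0:len(B) * k] = np.hstack(B)             # boundary row of B blocks
for lvl in range(1, levels):
    for j, Aj in enumerate(A):                  # block A_j moves level lvl to level lvl - 1 + j
        tgt = lvl - 1 + j
        if tgt < levels:
            P[lvl * k:(lvl + 1) * k, tgt * k:(tgt + 1) * k] = Aj
P /= P.sum(axis=1, keepdims=True)               # rows cut by the truncation are rescaled

# Brute-force stationary vector of the truncated chain: left eigenvector for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.abs(np.real(v[:, np.argmin(np.abs(w - 1))]))
pi /= pi.sum()
print(pi[:k])                                   # rough approximation of pi_0

For an infinite state space this brute-force route requires an arbitrary truncation, which is exactly what the computation via G avoids.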
G is called the auxiliary matrix.[8] The matrices \overline{A}_i and \overline{B}_i are defined by[3]

\overline{A}_i = \sum_{j=i}^{\infty} A_j G^{j-i}, \qquad \overline{B}_i = \sum_{j=i}^{\infty} B_j G^{j-i};
then π_0 is found by solving[3]

\pi_0 = \pi_0 \overline{B}_0, \qquad \pi_0 \mathbf{e} + \sum_{i=1}^{\infty} \pi_i \mathbf{e} = 1

(the first equation states that π_0 is a stationary vector of the chain censored on level 0, whose transition matrix is \overline{B}_0, and the second fixes its scale),
and the π_i for i ≥ 1 are given by Ramaswami's formula,[3]

\pi_i = \left( \pi_0 \overline{B}_i + \sum_{j=1}^{i-1} \pi_j \overline{A}_{i+1-j} \right) \left( I - \overline{A}_1 \right)^{-1}, \qquad i \geq 1,

a numerically stable relationship first published by Vaidyanathan Ramaswami in 1988.[9]
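The following is a minimal sketch of how the boundary solve and Ramaswami's recursion above can be carried out, assuming finitely many nonzero B_i and A_i blocks and a precomputed G (for instance from the iteration sketched after the next sentence). The function and helper names are invented for this example, and the normalisation is approximate because only a finite number of level blocks is summed.

import numpy as np

def stationary_by_ramaswami(B, A, G, n_levels):
    """Boundary solve for pi_0 followed by Ramaswami's recursion (a sketch).

    B, A are lists of the finitely many nonzero k x k blocks, G is the
    auxiliary matrix, n_levels is how many blocks pi_0, ..., pi_{n-1} to return.
    """
    k = G.shape[0]

    def bar(blocks, i):
        # Xbar_i = sum_{j >= i} X_j G^(j - i); an empty sum gives the zero matrix
        acc, Gpow = np.zeros((k, k)), np.eye(k)
        for j in range(i, len(blocks)):
            acc += blocks[j] @ Gpow
            Gpow = Gpow @ G
        return acc

    # pi_0 (up to scale) is a stationary vector of Bbar_0, the chain censored on level 0
    K = bar(B, 0)
    w, v = np.linalg.eig(K.T)
    pi0 = np.abs(np.real(v[:, np.argmin(np.abs(w - 1))]))

    # Ramaswami's recursion: pi_i = (pi_0 Bbar_i + sum_j pi_j Abar_{i+1-j}) (I - Abar_1)^{-1}
    inv = np.linalg.inv(np.eye(k) - bar(A, 1))
    pis = [pi0]
    for i in range(1, n_levels):
        acc = pis[0] @ bar(B, i)
        for j in range(1, i):
            acc = acc + pis[j] @ bar(A, i + 1 - j)
        pis.append(acc @ inv)

    total = sum(p.sum() for p in pis)      # approximate normalisation over n_levels blocks
    return [p / total for p in pis]

The recursion involves only nonnegative quantities, which is usually cited as the reason for its numerical stability.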
There are two popular iterative methods for computing G: functional iteration and cyclic reduction.[10][11]
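As a sketch of the first approach, assuming finitely many nonzero A_i blocks (the helper name below is invented), the defining equation of G can be applied repeatedly starting from the zero matrix, which converges to the minimal nonnegative solution.

import numpy as np

def g_by_functional_iteration(A, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration G <- sum_i A_i G^i, started from the zero matrix."""
    k = A[0].shape[0]
    G = np.zeros((k, k))
    for _ in range(max_iter):
        G_new = np.zeros((k, k))
        Gpow = np.eye(k)
        for Ai in A:
            G_new += Ai @ Gpow   # accumulate A_i G^i
            Gpow = Gpow @ G
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G

# For a recurrent chain the computed G should be (row) stochastic:
# G = g_by_functional_iteration(A); print(G.sum(axis=1))

Functional iteration converges only linearly; cyclic reduction is the usual choice when faster (quadratic) convergence is needed.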