The multivariate stable distribution is a multivariate probability distribution that generalises the univariate stable distribution to higher dimensions. Every one-dimensional linear projection of the vector, and hence every marginal, is univariate stable. As in the univariate case, the distribution is defined in terms of its characteristic function.
| Multivariate stable | |
|---|---|
| Probability density function | Heatmap showing a multivariate (bivariate) stable distribution with α = 1.1 |
| Parameters | $\alpha \in (0, 2]$ – exponent; $\delta \in \mathbb{R}^d$ – shift/location vector; $\Lambda(s)$ – a finite spectral measure on the unit sphere |
| Support | $x \in \mathbb{R}^d$ |
| PDF | (no analytic expression) |
| CDF | (no analytic expression) |
| Variance | Infinite when $\alpha < 2$ |
| CF | see text |
The multivariate stable distribution can also be thought of as an extension of the multivariate normal distribution. It has a parameter, α, defined over the range 0 < α ≤ 2; the case α = 2 is equivalent to the multivariate normal distribution. It has an additional skew parameter that allows for non-symmetric distributions, whereas the multivariate normal distribution is symmetric.
Definition
Let $\mathbb{S} = \{u \in \mathbb{R}^{d} : |u| = 1\}$ be the Euclidean unit sphere in $\mathbb{R}^{d}$. A random vector $X$ has a multivariate stable distribution, denoted $X \sim S(\alpha, \Lambda, \delta)$, if the joint characteristic function of $X$ is[1]
- $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\left\{-\int_{s\in\mathbb{S}}\left\{\left|u^{T}s\right|^{\alpha} + i\,\nu\!\left(u^{T}s,\alpha\right)\right\}\Lambda(ds) + i\,u^{T}\delta\right\},$
where 0 < α < 2, and for $y \in \mathbb{R}$,
- $\nu(y,\alpha) = \begin{cases} -\operatorname{sign}(y)\,\tan\!\left(\tfrac{\pi\alpha}{2}\right)\left|y\right|^{\alpha} & \alpha \ne 1, \\ \tfrac{2}{\pi}\,y\,\ln\left|y\right| & \alpha = 1. \end{cases}$
This is essentially the result of Feldheim,[2] that any stable random vector can be characterized by a spectral measure $\Lambda$ (a finite measure on $\mathbb{S}$) and a shift vector $\delta \in \mathbb{R}^{d}$.
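To make the definition concrete, the following sketch (an illustration, not taken from the cited references) evaluates ν and the joint characteristic function when the spectral measure is approximated by a finite set of point masses on the sphere; the atom locations, weights and helper names are hypothetical choices for the example. The same sum reappears below as the discrete-spectral-measure case.

```python
import numpy as np

def nu(y, alpha):
    """Imaginary part nu(y, alpha) of the integrand in the joint characteristic function."""
    y = np.asarray(y, dtype=float)
    if alpha != 1:
        return -np.sign(y) * np.tan(np.pi * alpha / 2) * np.abs(y) ** alpha
    out = np.zeros_like(y)
    nz = y != 0                                  # convention: y * ln|y| -> 0 as y -> 0
    out[nz] = (2 / np.pi) * y[nz] * np.log(np.abs(y[nz]))
    return out

def joint_cf(u, alpha, points, weights, delta):
    """E exp(i u^T X) for a spectral measure with mass weights[j] at the unit vector points[j]."""
    u = np.asarray(u, dtype=float)
    proj = points @ u                            # u^T s_j for every atom s_j
    integral = np.sum((np.abs(proj) ** alpha + 1j * nu(proj, alpha)) * weights)
    return np.exp(-integral + 1j * np.dot(u, delta))

# Hypothetical bivariate example: four atoms on the unit circle, alpha = 1.5.
points = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
weights = np.array([0.5, 0.5, 0.3, 0.3])
print(joint_cf(np.array([0.7, -0.2]), 1.5, points, weights, np.zeros(2)))
```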
Parametrization using projections
Another way to describe a stable random vector is in terms of projections. For any vector $u$, the projection $u^{T}X$ is univariate $\alpha$-stable with some skewness $\beta(u)$, scale $\gamma(u)$ and some shift $\delta(u)$. The notation $X \sim S\!\left(\alpha, \beta(\cdot), \gamma(\cdot), \delta(\cdot)\right)$ is used if $X$ is stable with $u^{T}X \sim s\!\left(\alpha, \beta(u), \gamma(u), \delta(u)\right)$ for every $u \in \mathbb{R}^{d}$. This is called the projection parametrization.
The spectral measure determines the projection parameter functions by:
- $\gamma(u) = \left(\int_{\mathbb{S}}\left|u^{T}s\right|^{\alpha}\Lambda(ds)\right)^{1/\alpha},$
- $\beta(u) = \dfrac{\int_{\mathbb{S}}\left|u^{T}s\right|^{\alpha}\operatorname{sign}\!\left(u^{T}s\right)\Lambda(ds)}{\gamma(u)^{\alpha}},$
- $\delta(u) = \begin{cases} u^{T}\delta & \alpha \ne 1, \\ u^{T}\delta - \tfrac{2}{\pi}\int_{\mathbb{S}} u^{T}s \,\ln\left|u^{T}s\right|\,\Lambda(ds) & \alpha = 1. \end{cases}$
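As a hedged illustration of the projection parametrization, the sketch below computes γ(u), β(u) and δ(u) for a spectral measure given by point masses; the function and variable names are hypothetical, and the α = 1 branch uses the logarithmic correction shown above.

```python
import numpy as np

def projection_parameters(u, alpha, points, weights, delta):
    """Return (gamma(u), beta(u), delta(u)) for the projection u^T X, where the
    spectral measure has mass weights[j] at the unit vector points[j]."""
    u = np.asarray(u, dtype=float)
    proj = points @ u                                         # u^T s_j
    gamma_a = np.sum(np.abs(proj) ** alpha * weights)         # gamma(u)^alpha
    beta = np.sum(np.sign(proj) * np.abs(proj) ** alpha * weights) / gamma_a
    if alpha != 1:
        shift = float(u @ delta)
    else:
        nz = proj != 0
        shift = float(u @ delta) - (2 / np.pi) * np.sum(
            proj[nz] * np.log(np.abs(proj[nz])) * weights[nz])
    return gamma_a ** (1 / alpha), beta, shift
```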
Special cases
There are special cases where the multivariate characteristic function takes a simpler form. Define the characteristic exponent of a univariate stable marginal as
- $\omega(y\,|\,\alpha,\beta) = \begin{cases} \left|y\right|^{\alpha}\left[1 - i\beta\tan\!\left(\tfrac{\pi\alpha}{2}\right)\operatorname{sign}(y)\right] & \alpha \ne 1, \\ \left|y\right|\left[1 + i\beta\tfrac{2}{\pi}\operatorname{sign}(y)\ln\left|y\right|\right] & \alpha = 1. \end{cases}$
Isotropic multivariate stable distribution
Here the characteristic function is $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\!\left\{-\gamma_{0}^{\alpha}\left|u\right|^{\alpha} + i\,u^{T}\delta\right\}$. The spectral measure is a scalar multiple of the uniform distribution on the sphere, leading to radial/isotropic symmetry.[3] For the Gaussian case this corresponds to independent components, but this is not the case when $\alpha < 2$. Isotropy is a special case of ellipticity (see the next paragraph); just take $\Sigma$ to be a multiple of the identity matrix.
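A minimal sketch of this characteristic function, with hypothetical argument names (γ0 is the common scale):

```python
import numpy as np

def isotropic_cf(u, alpha, gamma0, delta):
    """Isotropic stable CF: exp(-gamma0^alpha * |u|^alpha + i u^T delta)."""
    u = np.asarray(u, dtype=float)
    return np.exp(-(gamma0 ** alpha) * np.linalg.norm(u) ** alpha + 1j * np.dot(u, delta))
```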
Elliptically contoured multivariate stable distribution
The elliptically contoured multivariate stable distribution is a special symmetric case of the multivariate stable distribution. $X$ is α-stable and elliptically contoured iff it has joint characteristic function $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\!\left\{i\,u^{T}\delta - \left(u^{T}\Sigma u\right)^{\alpha/2}\right\}$ for some shift vector $\delta \in \mathbb{R}^{d}$ (equal to the mean when it exists) and some positive semidefinite matrix $\Sigma$ (akin to a correlation matrix, although the usual definition of correlation fails to be meaningful). Note the relation to the characteristic function of the multivariate normal distribution: $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\!\left\{i\,u^{T}\delta - u^{T}\Sigma u\right\}$, obtained when α = 2.
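The sketch below evaluates this characteristic function and draws samples via the standard sub-Gaussian construction X = δ + √A·G, where A is a totally skewed positive (α/2)-stable variable and G is Gaussian with covariance 2Σ. It assumes SciPy's levy_stable in its default S1 parametrization, whose scale convention matches the one used here; all names and values are illustrative rather than taken from the references. Taking Σ proportional to the identity recovers the isotropic case above.

```python
import numpy as np
from scipy.stats import levy_stable

def elliptical_cf(u, alpha, Sigma, delta):
    """CF exp(i u^T delta - (u^T Sigma u)^(alpha/2)) of an elliptically contoured stable vector."""
    u = np.asarray(u, dtype=float)
    return np.exp(1j * np.dot(u, delta) - float(u @ Sigma @ u) ** (alpha / 2))

def sample_elliptical_stable(alpha, Sigma, delta, size, seed=None):
    """Sub-Gaussian sampler (valid for 0 < alpha < 2): X = delta + sqrt(A) * G, with
    A positive (alpha/2)-stable (beta = 1, scale cos(pi*alpha/4)^(2/alpha)) and G ~ N(0, 2*Sigma)."""
    rng = np.random.default_rng(seed)
    d = len(delta)
    A = levy_stable.rvs(alpha / 2, 1.0,
                        scale=np.cos(np.pi * alpha / 4) ** (2 / alpha),
                        size=size, random_state=rng)
    G = rng.multivariate_normal(np.zeros(d), 2 * np.asarray(Sigma), size=size)
    return np.asarray(delta) + np.sqrt(A)[:, None] * G
```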
Independent components
The marginals are independent with $X_{j} \sim S\!\left(\alpha, \beta_{j}, \gamma_{j}, \delta_{j}\right)$ iff the characteristic function is
- $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\!\left\{-\sum_{j=1}^{m}\omega\!\left(u_{j}\,|\,\alpha,\beta_{j}\right)\gamma_{j}^{\alpha} + i\,u^{T}\delta\right\}.$
Observe that when α = 2 this reduces again to the multivariate normal; note that the independent case and the isotropic case do not coincide when α < 2. The independent-components case is a special case of a discrete spectral measure (see the next paragraph), with the spectral measure supported on the standard unit vectors; a numerical sketch is given below.
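A small self-contained sketch of this characteristic function, with illustrative names; ω is the marginal exponent defined above.

```python
import numpy as np

def omega(y, alpha, beta):
    """Marginal characteristic exponent omega(y | alpha, beta)."""
    if alpha != 1:
        return abs(y) ** alpha * (1 - 1j * beta * np.tan(np.pi * alpha / 2) * np.sign(y))
    if y == 0:
        return 0.0
    return abs(y) * (1 + 1j * beta * (2 / np.pi) * np.sign(y) * np.log(abs(y)))

def independent_components_cf(u, alpha, betas, gammas, delta):
    """Joint CF of a stable vector with independent components X_j ~ S(alpha, beta_j, gamma_j, delta_j)."""
    total = sum(omega(uj, alpha, bj) * gj ** alpha for uj, bj, gj in zip(u, betas, gammas))
    return np.exp(-total + 1j * np.dot(u, delta))
```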
Discrete
If the spectral measure is discrete with mass $\lambda_{j}$ at $s_{j} \in \mathbb{S}$, $j = 1,\ldots,m$, the characteristic function is
- $\operatorname{E}\exp\!\left(i\,u^{T}X\right) = \exp\!\left\{-\sum_{j=1}^{m}\omega\!\left(u^{T}s_{j}\,|\,\alpha,1\right)\lambda_{j} + i\,u^{T}\delta\right\}.$
Linear properties
If $X \sim S\!\left(\alpha, \beta(\cdot), \gamma(\cdot), \delta(\cdot)\right)$ is $d$-dimensional $\alpha$-stable, $A$ is an $m \times d$ matrix, and $b \in \mathbb{R}^{m}$, then $AX + b$ is $m$-dimensional $\alpha$-stable with scale function $\gamma\!\left(A^{T}u\right)$, skewness function $\beta\!\left(A^{T}u\right)$, and location function $\delta\!\left(A^{T}u\right) + u^{T}b$. This follows because the projection of $AX + b$ onto $u$ is $u^{T}(AX + b) = \left(A^{T}u\right)^{T}X + u^{T}b$.
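A brief sketch with hypothetical example values, showing how the parameters of a projection of AX + b reduce to the projection functions evaluated at A^T u:

```python
import numpy as np

def projection_scale(v, alpha, points, weights):
    """gamma(v) for a spectral measure with mass weights[j] at the unit vector points[j]."""
    return np.sum(np.abs(points @ v) ** alpha * weights) ** (1 / alpha)

alpha = 1.5
points = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])  # atoms of Lambda
weights = np.array([0.5, 0.5, 0.3, 0.3])
delta = np.zeros(2)

A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])   # 3 x 2 linear map
b = np.array([0.1, -0.2, 0.0])
u = np.array([1.0, -1.0, 0.5])                        # direction in R^3

v = A.T @ u                                           # u^T (A X + b) = v^T X + u^T b
scale = projection_scale(v, alpha, points, weights)   # gamma(A^T u)
shift = v @ delta + u @ b                             # delta(A^T u) + u^T b  (alpha != 1)
print(scale, shift)
```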
Inference in the independent-component model
Bickson and Guestrin have shown how to perform inference in closed form in a linear model (or equivalently a factor analysis model) involving independent-component stable distributions.[4]
More specifically, let $X_{1},\ldots,X_{n}$ be a family of i.i.d. unobserved univariate factors drawn from a stable distribution. Given a known linear relation matrix $A$ of size $n \times n$, the observations $Y_{i}$ are assumed to be distributed as a convolution of the hidden factors $X_{j}$, hence $Y_{i} = \sum_{j=1}^{n} A_{ij} X_{j}$. The inference task is to compute the most likely hidden factors $X_{j}$, given the linear relation matrix $A$ and the observations $Y_{i}$. This task can be computed in closed form in O(n³).
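The closed-form inference algorithm itself is given in the cited paper (and implemented in the MATLAB package listed under Resources); the sketch below only sets up the generative model it assumes, using SciPy's levy_stable with illustrative parameter values.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n, alpha = 4, 1.2

# Hidden i.i.d. stable factors X_1, ..., X_n (symmetric, unit scale, for simplicity).
X = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)

# Known linear relation matrix A (n x n); each observation Y_i is a weighted
# sum (convolution) of the hidden factors: Y_i = sum_j A_ij X_j.
A = rng.standard_normal((n, n))
Y = A @ X
# The inference task described above is to recover the most likely X from A and Y.
```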
An application for this construction is multiuser detection with stable, non-Gaussian noise.
Resources
- Mark Veillette's stable distribution MATLAB package: http://www.mathworks.com/matlabcentral/fileexchange/37514
- The plots on this page were generated using Danny Bickson's inference-in-linear-stable-models MATLAB package: https://www.cs.cmu.edu/~bickson/stable
Notes
- ^ J. Nolan, Multivariate stable densities and distribution functions: general and elliptical case, BundesBank Conference, Eltville, Germany, 11 November 2005. See also http://academic2.american.edu/~jpnolan/stable/stable.html
- ^ Feldheim, E. (1937). Étude de la stabilité des lois de probabilité. Ph.D. thesis, Faculté des Sciences de Paris, Paris, France.
- ^ User manual for STABLE 5.1 Matlab version, Robust Analysis Inc., http://www.RobustAnalysis.com
- ^ D. Bickson and C. Guestrin. Inference in linear models with multivariate heavy-tails. In Neural Information Processing Systems (NIPS) 2010, Vancouver, Canada, Dec. 2010. https://www.cs.cmu.edu/~bickson/stable/