
## Accurate solution of structured least squares problems via rank-revealing decompositions

Source: Society for Industrial and Applied Mathematics
Publisher: Society for Industrial and Applied Mathematics

Type: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article

Published 07/2013
ENG

Search relevance: 86.54%

Keywords: accurate solutions; least squares problems; multiplicative perturbation theory; rank-revealing decompositions; structured matrices; Moore–Penrose pseudoinverse; Mathematics

Least squares problems $\min_x \|b - Ax\|_2$, where the matrix $A \in \mathbb{C}^{m \times n}$ ($m \ge n$) has some particular structure, arise frequently in applications. Polynomial data fitting is a well-known instance of problems that yield highly structured matrices, but many other examples exist. Very often, structured matrices have huge condition numbers $\kappa_2(A) = \|A\|_2 \|A^\dagger\|_2$ (where $A^\dagger$ is the Moore–Penrose pseudoinverse of $A$), and therefore standard algorithms fail to compute accurate minimum 2-norm solutions of least squares problems. In this work, we introduce a framework that allows us to compute minimum 2-norm solutions of many classes of structured least squares problems accurately, i.e., with errors $\|\hat{x}_0 - x_0\|_2 / \|x_0\|_2 = O(u)$, where $u$ is the unit roundoff, independently of the magnitude of $\kappa_2(A)$ for most vectors $b$. The cost of these accurate computations is $O(n^2 m)$ flops, i.e., roughly the same cost as standard algorithms for least squares problems. The approach in this work relies on first computing an accurate rank-revealing decomposition of $A$, an idea that has been widely used in recent decades to compute...
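The failure mode described in this abstract is easy to reproduce with polynomial data fitting. The sketch below is our own illustration, not the paper's algorithm: it contrasts the normal equations, whose error scales with $\kappa_2(A)^2$, against a minimum 2-norm solution computed through the SVD, one standard rank-revealing decomposition. (The paper's contribution goes further, computing such decompositions accurately even when $\kappa_2(A)$ is enormous.)

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): polynomial data fitting
# yields a Vandermonde matrix whose condition number grows quickly with the
# polynomial degree, so the choice of algorithm matters.
t = np.linspace(0.0, 1.0, 40)
A = np.vander(t, 10, increasing=True)        # 40 x 10 Vandermonde matrix
rng = np.random.default_rng(0)
x_true = rng.standard_normal(10)
b = A @ x_true

# Normal equations: squaring the condition number loses roughly twice as
# many digits as the conditioning of A itself would dictate.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Minimum 2-norm solution through the SVD (a rank-revealing decomposition):
# x0 = V diag(1/sigma_i) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(np.linalg.cond(A))                     # large even at degree 9
print(np.linalg.norm(x_svd - x_true) / np.linalg.norm(x_true))
```

Even at this modest degree the SVD route recovers the coefficients to near working precision, while the normal-equations route visibly degrades.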


## Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 36.28%

Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.
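The sample-compress-factor pipeline the abstract describes can be sketched in a few lines. This is our own minimal illustration of the scheme (function and parameter names are ours), not the authors' reference implementation:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    """Minimal sketch of a randomized partial SVD: sample, compress, factor."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis capturing A's action
    B = Q.T @ A                          # compress A to the identified subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)  # small, deterministic SVD
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Usage: an exactly rank-10 matrix is recovered essentially to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 100))
U, s, Vt = randomized_svd(A, k=10, seed=2)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)
```

The oversampling parameter pads the random test matrix with a few extra columns so that, with high probability, the sampled subspace captures the whole dominant range.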


## Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 26.25%

Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well-known variations of such decompositions and transformations that have been proposed in the literature. We present an algorithm to compute the CUP decomposition of a matrix, adapted from the LSP algorithm of Ibarra, Moran and Hui (1982), and show reductions from the other most common Gaussian elimination based matrix transformations and decompositions to the CUP decomposition. We discuss the advantages of the CUP algorithm over other existing algorithms by studying time and space complexities: the asymptotic time complexity is rank sensitive, and, comparing the constants of the leading terms, the algorithms for computing matrix invariants based on the CUP decomposition are always at least as good except in one case. We also show that the CUP algorithm, as well as the computation of other invariants such as transformation to reduced column echelon form using the CUP algorithm, all work in place, allowing, for example, the inverse of a matrix to be computed in the same storage as the input matrix.

Comment: 35 pages
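As a concrete, deliberately naive illustration of what "revealing the rank profile" means, here is exact Gaussian elimination over the rationals that records the pivot columns. This is our own sketch for intuition only; it shares neither the rank-sensitive asymptotics nor the in-place behavior of the CUP algorithm itself:

```python
from fractions import Fraction

def column_rank_profile(rows):
    """Exact elimination over Q; returns rank and the column rank profile,
    i.e. the indices of the lexicographically first independent columns."""
    A = [[Fraction(x) for x in row] for row in rows]
    m = len(A)
    n = len(A[0]) if m else 0
    profile, r = [], 0                    # r = rank found so far
    for j in range(n):
        # look for a nonzero pivot in column j, at or below row r
        piv = next((i for i in range(r, m) if A[i][j] != 0), None)
        if piv is None:
            continue                      # column j is dependent on earlier ones
        A[r], A[piv] = A[piv], A[r]       # row swap, as in LUP/CUP-style factorizations
        for i in range(r + 1, m):         # eliminate below the pivot, exactly
            f = A[i][j] / A[r][j]
            for jj in range(j, n):
                A[i][jj] -= f * A[r][jj]
        profile.append(j)
        r += 1
    return r, profile

rank, profile = column_rank_profile([[1, 2, 3],
                                     [2, 4, 6],
                                     [0, 0, 1]])
print(rank, profile)  # 2 [0, 2]
```

Because the second row is twice the first, column 1 yields no new pivot, so the profile skips it and records columns 0 and 2.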


## Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

Source: Society for Industrial and Applied Mathematics
Publisher: Society for Industrial and Applied Mathematics

Type: Article; PeerReviewed
Format: application/pdf

Published 2011

Search relevance: 36.28%

Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis.

The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix...
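For dense matrices whose singular values decay slowly, the basic random sketch can be sharpened with a few steps of power (subspace) iteration, applying A and its transpose alternately with re-orthonormalization between steps. A hedged illustration of that refinement (function and parameter names are ours):

```python
import numpy as np

def randomized_range_finder(A, k, n_iter=2, seed=None):
    """Our sketch of a range finder with power iterations for flat spectra."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A.T @ Q)      # re-orthonormalize between applications
        Q, _ = np.linalg.qr(A @ Q)        # of A and A^T to stay numerically sound
    return Q                              # m x k orthonormal basis

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 80))        # flat spectrum: the hard case for sketching
Q = randomized_range_finder(A, k=20, seed=4)
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)
s = np.linalg.svd(A, compute_uv=False)
# err can never drop below the best rank-20 error s[20], but with a couple of
# power iterations it typically lands within a small factor of it
print(err, s[20])
```

Each power iteration amplifies the gap between the dominant and trailing singular values, so far fewer samples suffice when the raw spectrum decays slowly.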


## Finding Structure with Randomness: Stochastic Algorithms for Constructing Approximate Matrix Decompositions

Source: California Institute of Technology
Publisher: California Institute of Technology

Type: Report or Paper; PeerReviewed
Format: application/pdf

Published 09/2009

Search relevance: 36.28%

Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.

The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix...
