Page 1 of results: 15 digital items found in 0.008 seconds

Finding Near Rank Deficiency in Matrix Products

Stewart, Michael
Source: Australian National University Publisher: Australian National University
Type: Working/Technical Paper Format: 263774 bytes; 356 bytes; application/pdf; application/octet-stream
EN_AU
Search relevance: 36.22%
This paper gives a theorem characterizing approximately minimal norm rank one perturbations E and F that make the product $(A + E)(B + F)^T$ rank deficient. The theorem is stated in terms of the smallest singular value of a particular matrix chosen from a parameterized family of matrices by solving a nonlinear equation. Consequently, it is analogous to the special case of the Eckart-Young theorem describing the minimal perturbation that induces an order one rank deficiency. While the theorem does not naturally extend to higher order rank deficiencies, it can be used to compute a complete orthogonal product decomposition to give improved practical reliability in revealing the numerical rank of $AB^T$.
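For reference, the order-one special case of the Eckart-Young theorem alluded to above (a standard fact, not a result of the paper) reads: for a full-column-rank matrix $A$ with smallest singular value $\sigma_{\min}$ and corresponding singular vectors $u_{\min}$, $v_{\min}$,
$\min \{ \|E\|_2 : A + E \text{ is rank deficient} \} = \sigma_{\min}(A)$, attained by the rank-one perturbation $E = -\sigma_{\min}\, u_{\min} v_{\min}^T$.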

A completely rank revealing quotient URV decomposition

Stewart, Michael
Source: Australian National University Publisher: Australian National University
Type: Working/Technical Paper Format: 211688 bytes; 356 bytes; application/pdf; application/octet-stream
EN_AU
Search relevance: 66.39%
This paper introduces a completely rank revealing complete orthogonal quotient decomposition for a pair of rectangular matrices. It reliably reveals an approximation of the minimum distance from a matrix pair with a prescribed quotient SVD structure. The approximation gives the minimum distance up to a small constant factor that is independent of the sizes of the matrices. Consequently the decomposition is well suited to the recovery of the non-generic quotient SVD structure of a pair of matrices that have been corrupted by errors. The use of a completely rank revealing decomposition is shown to improve estimates of matrix range space intersections for a system identification problem.
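For context (standard background, not taken from the paper), one common formulation of the quotient (generalized) SVD of a pair $(A, B)$ with the same number of columns is: there exist unitary $U$, $V$ and a nonsingular $X$ such that $U^H A X = C$ and $V^H B X = S$, with $C$ and $S$ nonnegative diagonal; when $B$ is square and nonsingular, the quotients $c_i / s_i$ are the singular values of $A B^{-1}$. The decomposition above aims to recover this structure reliably when the pair has been perturbed.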

Direct-sequence spread-spectrum acoustic communications with CRV Decomposition

Angelopoulos, Pavlos.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral thesis
Search relevance: 46.04%
Approved for public release; distribution is unlimited. Direct-Sequence Spread-Spectrum (DS-SS) is among the preferred modulation techniques for military applications. DS-SS offers three greatly desired characteristics: it allows for the development of Low Probability of Detection (LPD) and Low Probability of Intercept (LPI) systems and has very good performance in fading channels. This thesis investigates the performance of the "Cross-Product RV (CRV) decomposition" as the basis of blind-equalization algorithms. The CRV is a rank-revealing decomposition alternative to the Eigenvalue Decomposition (EVD) that can provide a recursively updated estimate of the signal and noise subspace at a reduced computational cost. The CRV updating algorithm is implemented in MATLAB and evaluated in a previously proposed communication scheme intended for use in an underwater acoustic network called Seaweb. The underwater channel is modeled with the Monterey-Miami Parabolic Equation Model (MMPE) for various multipath perturbations. The receiver performance is examined using Monte Carlo simulation. Bit-error rates versus signal-to-noise ratio are presented for various noise and receiver synchronization assumptions.

FORTRAN subroutines for updating the QR decomposition

Gragg, William B.; Reichel, Lothar
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Report
EN_US
Search relevance: 36.15%
We present FORTRAN subroutines that update the QR decomposition in a numerically stable manner when A is modified by a matrix of rank one, or when a row or a column is inserted or deleted. These subroutines are modifications of the Algol procedures in Daniel et al. [5]. We also present a subroutine that permutes the columns of A so that the elements in the lower right corner of R will generally be small if the columns of A are nearly linearly dependent. This subroutine is an implementation of the rank revealing QR decomposition scheme recently proposed by Chan [3]. The subroutines have been written to perform well on a vector computer. Key words and phrases: algorithms, QR decomposition, updating, subset selection, computer programs.
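As an aside, the same kinds of QR updates are available today in SciPy; a minimal sketch of a rank-one modification (this uses SciPy, not the Fortran routines from the report):

    import numpy as np
    from scipy.linalg import qr, qr_update

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 5))
    Q, R = qr(A)                        # full QR factorization, A = Q @ R

    # Rank-one modification A + u v^T without refactorizing from scratch
    u = rng.standard_normal(8)
    v = rng.standard_normal(5)
    Q1, R1 = qr_update(Q, R, u, v)
    assert np.allclose(Q1 @ R1, A + np.outer(u, v))

    # scipy.linalg.qr_insert and qr_delete handle inserted or deleted rows and columns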

A new perturbation bound for the LDU factorization of diagonally dominant matrices

Dailey, Megan; Martínez Dopico, Froilán C.; Ye, Qiang
Source: Society for Industrial and Applied Mathematics Publisher: Society for Industrial and Applied Mathematics
Type: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article
Published 07/2014 ENG
Search relevance: 66.17%
This work introduces a new perturbation bound for the L factor of the LDU factorization of (row) diagonally dominant matrices computed via the column diagonal dominance pivoting strategy. This strategy yields L and U factors which are always well-conditioned and, so, the LDU factorization is guaranteed to be a rank-revealing decomposition. The new bound together with those for the D and U factors in [F. M. Dopico and P. Koev, Numer. Math., 119 (2011), pp. 337–371] establish that if diagonally dominant matrices are parameterized via their diagonally dominant parts and off-diagonal entries, then tiny relative componentwise perturbations of these parameters produce tiny relative normwise variations of L and U and tiny relative entrywise variations of D when column diagonal dominance pivoting is used. These results will allow us to prove in a follow-up work that such perturbations also lead to strong perturbation bounds for many other problems involving diagonally dominant matrices.; Research supported in part by Ministerio de Economía y Competitividad of Spain under grant MTM2012-32542.
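As a rough illustration of the object being bounded (a sketch only: it uses SciPy's partial-pivoting LU, not the column diagonal dominance pivoting strategy analyzed in the paper), an LDU factorization can be extracted as follows:

    import numpy as np
    from scipy.linalg import lu

    # A small row diagonally dominant matrix
    A = np.array([[ 4.0, -1.0,  1.0],
                  [ 1.0,  5.0, -2.0],
                  [-1.0,  2.0,  6.0]])

    P, L, U = lu(A)                          # A = P @ L @ U with partial pivoting
    D = np.diag(np.diag(U))                  # diagonal factor
    U_unit = U / np.diag(U)[:, None]         # unit upper triangular factor
    assert np.allclose(A, P @ L @ D @ U_unit)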

Accurate solution of structured least squares problems via rank-revealing decompositions

Castro González, Nieves; Ceballos Cañón, Johan Armando; Martínez Dopico, Froilán C.; Molera, Juan M.
Source: Society for Industrial and Applied Mathematics Publisher: Society for Industrial and Applied Mathematics
Type: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article
Published 07/2013 ENG
Search relevance: 76.23%
Least squares problems $\min_x \|b - Ax\|_2$, where the matrix $A \in \mathbb{C}^{m \times n}$ ($m \ge n$) has some particular structure, arise frequently in applications. Polynomial data fitting is a well-known instance of problems that yield highly structured matrices, but many other examples exist. Very often, structured matrices have huge condition numbers $\kappa_2(A) = \|A\|_2 \|A^\dagger\|_2$ ($A^\dagger$ is the Moore-Penrose pseudoinverse of $A$) and therefore standard algorithms fail to compute accurate minimum 2-norm solutions of least squares problems. In this work, we introduce a framework that allows us to compute minimum 2-norm solutions of many classes of structured least squares problems accurately, i.e., with errors $\|\hat{x}_0 - x_0\|_2 / \|x_0\|_2 = O(u)$, where $u$ is the unit roundoff, independently of the magnitude of $\kappa_2(A)$ for most vectors $b$. The cost of these accurate computations is $O(n^2 m)$ flops, i.e., roughly the same cost as standard algorithms for least squares problems. The approach in this work relies on computing first an accurate rank-revealing decomposition of $A$, an idea that has been widely used in recent decades to compute...
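A minimal sketch of the general idea, solving a polynomial-fitting least squares problem through a rank-revealing (column-pivoted) QR factorization rather than the normal equations; this is generic column pivoting, not the accurate structure-exploiting decompositions the paper relies on:

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(0)
    m, n = 50, 8
    t = np.linspace(0.0, 1.0, m)
    A = np.vander(t, n, increasing=True)     # ill-conditioned Vandermonde matrix
    b = rng.standard_normal(m)

    # Rank-revealing step: column-pivoted QR, so that A[:, piv] = Q @ R
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > 1e-12 * np.abs(R[0, 0])))   # numerical rank estimate

    # Basic least squares solution built from the leading r-by-r block of R
    y = solve_triangular(R[:r, :r], Q[:, :r].T @ b)
    x = np.zeros(n)
    x[piv[:r]] = y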

LU factorization with panel rank revealing pivoting and its communication avoiding version

Khabou, Amal; Demmel, James W.; Grigori, Laura; Gu, Ming
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published 12/08/2012
Search relevance: 46.25%
We present the LU decomposition with panel rank revealing pivoting (LU_PRRP), an LU factorization algorithm based on strong rank revealing QR panel factorization. LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP). Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to pathological cases and easily solves the Wilkinson matrix and the Foster matrix. We also present CALU_PRRP, a communication avoiding version of LU_PRRP that minimizes communication. CALU_PRRP is based on tournament pivoting, with the selection of the pivots at each step of the tournament being performed via strong rank revealing QR factorization. CALU_PRRP is more stable than CALU, the communication avoiding version of GEPP. CALU_PRRP is also more stable in practice and is resistant to pathological cases on which GEPP and CALU fail.; Comment: No. RR-7867 (2012)
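To make the pathological cases concrete, the classic Wilkinson-type example referenced above triggers exponential element growth under partial pivoting; a small sketch that measures the growth factor (this illustrates the GEPP failure mode, not the LU_PRRP algorithm itself):

    import numpy as np
    from scipy.linalg import lu

    def wilkinson_growth_matrix(n):
        # Ones on the diagonal, -1 below the diagonal, ones in the last column:
        # GEPP produces element growth of 2**(n-1) on this matrix.
        A = np.eye(n) - np.tril(np.ones((n, n)), -1)
        A[:, -1] = 1.0
        return A

    A = wilkinson_growth_matrix(30)
    _, _, U = lu(A)
    growth = np.abs(U).max() / np.abs(A).max()
    print(growth)    # approximately 2**29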

Subspace Iteration Randomization and Singular Value Problems

Gu, Ming
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published 10/08/2014
Search relevance: 26.11%
A classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of lower rank. The truncated singular value decomposition (SVD) is known to provide the best such approximation for any given fixed rank. However, the SVD is also known to be very costly to compute. Among the different approaches in the literature for computing low-rank approximations, randomized algorithms have attracted researchers' recent attention due to their surprising reliability and computational efficiency in different application areas. Typically, such algorithms are shown to compute with very high probability low-rank approximations that are within a constant factor from optimal, and are known to perform even better in many practical situations. In this paper, we present a novel error analysis that considers randomized algorithms within the subspace iteration framework and show with very high probability that highly accurate low-rank approximations as well as singular values can indeed be computed quickly for matrices with rapidly decaying singular values. Such matrices appear frequently in diverse application areas such as data analysis, fast structured matrix computations and fast direct methods for large sparse linear systems of equations and are the driving motivation for randomized methods. Furthermore...
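A minimal sketch of the kind of randomized subspace iteration the analysis covers (a generic textbook variant, not the paper's specific algorithm or error bounds):

    import numpy as np

    def randomized_subspace_iteration(A, k, oversample=10, iters=2, seed=0):
        """Approximate rank-k SVD via randomized subspace (power) iteration."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))
        for _ in range(iters):                  # power iterations sharpen the subspace
            Q, _ = np.linalg.qr(A.T @ Q)
            Q, _ = np.linalg.qr(A @ Q)
        B = Q.T @ A                             # small (k + oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]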

Minimal Residual Methods for Complex Symmetric, Skew Symmetric, and Skew Hermitian Systems

Choi, Sou-Cheng
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 25.85%
While there is no lack of efficient Krylov subspace solvers for Hermitian systems, there are few for complex symmetric, skew symmetric, or skew Hermitian systems, which are increasingly important in modern applications including quantum dynamics, electromagnetics, and power systems. For a large consistent complex symmetric system, one may apply a non-Hermitian Krylov subspace method disregarding the symmetry of $A$, or a Hermitian Krylov solver on the equivalent normal equation or an augmented system twice the original dimension. These have the disadvantages of increasing either memory, conditioning, or computational costs. An exception is a special version of QMR by Freund (1992), but that may be affected by non-benign breakdowns unless look-ahead is implemented; furthermore, it is designed for only consistent and nonsingular problems. For skew symmetric systems, Greif and Varah (2009) adapted CG for nonsingular skew symmetric linear systems that are necessarily and restrictively of even order. We extend the symmetric and Hermitian algorithms MINRES and MINRES-QLP by Choi, Paige and Saunders (2011) to complex symmetric, skew symmetric, and skew Hermitian systems. In particular, MINRES-QLP uses a rank-revealing QLP decomposition of the tridiagonal matrix from a three-term recurrent complex-symmetric Lanczos process. Whether the systems are real or complex...
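As a concrete instance of the normal-equations workaround mentioned above, and of its drawback that the conditioning is squared, a real skew-symmetric system can be handed to a symmetric Krylov solver as follows; this sketches the generic approach the paper improves upon, not MINRES-QLP itself:

    import numpy as np
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(0)
    n = 100                                   # even order, so A can be nonsingular
    M = rng.standard_normal((n, n))
    A = M - M.T                               # real skew-symmetric matrix
    b = rng.standard_normal(n)

    # Normal equations: A^T A = -A @ A is symmetric positive definite when A is
    # nonsingular, so CG applies -- at the cost of squaring the condition number.
    x, info = cg(A.T @ A, A.T @ b, maxiter=5000)
    print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))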

A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices

Martinsson, Per-Gunnar; Voronin, Sergey
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 46.18%
This manuscript describes a technique for computing partial rank-revealing factorizations, such as, e.g., a partial QR factorization or a partial singular value decomposition. The method takes as input a tolerance $\varepsilon$ and an $m\times n$ matrix $A$, and returns an approximate low rank factorization of $A$ that is accurate to within precision $\varepsilon$ in the Frobenius norm (or some other easily computed norm). The rank $k$ of the computed factorization (which is an output of the algorithm) is in all examples we examined very close to the theoretically optimal $\varepsilon$-rank. The proposed method is inspired by the Gram-Schmidt algorithm, and has the same $O(mnk)$ asymptotic flop count. However, the method relies on randomized sampling to avoid column pivoting, which allows it to be blocked, and hence accelerates practical computations by reducing communication. Numerical experiments demonstrate that the accuracy of the scheme is for every matrix that was tried at least as good as column-pivoted QR, and is sometimes much better. Computational speed is also improved substantially, in particular on GPU architectures.
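A minimal sketch in the spirit of the blocked, tolerance-driven scheme described above (the names and the specific stopping rule here are illustrative, not the authors' algorithm):

    import numpy as np

    def rand_qb(A, tol, block=10, seed=0):
        """Build Q, B block by block until ||A - Q @ B||_F <= tol (approximately)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Q = np.zeros((m, 0))
        B = np.zeros((0, n))
        E = A.astype(float).copy()                       # residual, deflated as blocks are added
        while np.linalg.norm(E, 'fro') > tol and Q.shape[1] < min(m, n):
            Y = E @ rng.standard_normal((n, block))      # sample the residual's range
            Qi, _ = np.linalg.qr(Y)
            Qi, _ = np.linalg.qr(Qi - Q @ (Q.T @ Qi))    # reorthogonalize against earlier blocks
            Bi = Qi.T @ E
            Q = np.hstack([Q, Qi])
            B = np.vstack([B, Bi])
            E -= Qi @ Bi
        return Q, B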

Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

Halko, Nathan; Martinsson, Per-Gunnar; Tropp, Joel A.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 26.15%
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis.
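A compact sketch of the two-stage pattern described above, a randomized range finder followed by a deterministic factorization of the compressed matrix (a generic variant for illustration; see the paper for the analyzed algorithms and error bounds):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k, p = 200, 300, 20, 10
    # Test matrix with rapidly decaying singular values
    A = (rng.standard_normal((m, 40)) * (0.5 ** np.arange(40))) @ rng.standard_normal((40, n))

    # Stage 1: random sampling identifies a subspace capturing most of A's action
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
    # Stage 2: compress A to that subspace and factor the small matrix deterministically
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ Ub

    err = np.linalg.norm(A - U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :], 2) / np.linalg.norm(A, 2)
    print(err)   # small: the sampled subspace captured nearly all of A's action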

Revealing latent factors of temporal networks for mesoscale intervention in epidemic spread

Gauvin, Laetitia; Panisson, André; Barrat, Alain; Cattuto, Ciro
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published 12/01/2015
Search relevance: 25.85%
The customary perspective to reason about epidemic mitigation in temporal networks hinges on the identification of nodes with specific features or network roles. The ensuing individual-based control strategies, however, are difficult to carry out in practice and ignore important correlations between topological and temporal patterns. Here we adopt a mesoscopic perspective and present a principled framework to identify collective features at multiple scales and rank their importance for epidemic spread. We use tensor decomposition techniques to build an additive representation of a temporal network in terms of mesostructures, such as cohesive clusters and temporally-localized mixing patterns. This representation allows us to determine the impact of individual mesostructures on epidemic spread and to assess the effect of targeted interventions that remove chosen structures. We illustrate this approach using high-resolution social network data on face-to-face interactions in a school and show that our method affords the design of effective mesoscale interventions.
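A small sketch of the underlying operation, assuming the third-party tensorly library and random toy data (the paper's actual pipeline, including how components are ranked for epidemic impact, is more involved):

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac

    rng = np.random.default_rng(0)
    T = rng.random((30, 30, 60))                 # toy (node, node, time) contact tensor
    cp = non_negative_parafac(tl.tensor(T), rank=4, n_iter_max=200)
    A, B, C = cp.factors                         # node memberships (A, B) and temporal activity (C)
    # Each rank-one component A[:, r] x B[:, r] x C[:, r] is one additive mesostructure;
    # removing a component from the reconstruction simulates a targeted intervention.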

Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

Jeannerod, Claude-Pierre; Pernet, Clément; Storjohann, Arne
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 26.2%
Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well known variations of such decompositions and transformations that have been proposed in the literature. We present an algorithm to compute the CUP decomposition of a matrix, adapted from the LSP algorithm of Ibarra, Moran and Hui (1982), and show reductions from the other most common Gaussian elimination based matrix transformations and decompositions to the CUP decomposition. We discuss the advantages of the CUP algorithm over other existing algorithms by studying time and space complexities: the asymptotic time complexity is rank sensitive, and comparing the constants of the leading terms, the algorithms for computing matrix invariants based on the CUP decomposition are always at least as good except in one case. We also show that the CUP algorithm, as well as the computation of other invariants such as transformation to reduced column echelon form using the CUP algorithm, all work in place, allowing for example to compute the inverse of a matrix on the same storage as the input matrix.; Comment: 35 pages
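For a concrete (if naive) view of what revealing the rank profile means, exact Gaussian elimination over the rationals with sympy returns the pivot columns, i.e. the column rank profile, directly; this is plain exact row reduction, not the asymptotically fast CUP algorithm of the paper:

    from sympy import Matrix

    M = Matrix([[1, 2, 2, 3],
                [2, 4, 1, 3],
                [3, 6, 3, 6]])
    R, pivots = M.rref()      # reduced row echelon form, computed exactly
    print(pivots)             # (0, 2): the column rank profile of M
    print(M.rank())           # 2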

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

Halko, N.; Martinsson, P. G.; Tropp, J. A.
Source: Society for Industrial and Applied Mathematics Publisher: Society for Industrial and Applied Mathematics
Type: Article; PeerReviewed Format: application/pdf
Published 2011
Search relevance: 26.2%
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix...

Finding Structure with Randomness: Stochastic Algorithms for Constructing Approximate Matrix Decompositions

Halko, N.; Martinsson, P. G.; Tropp, J. A.
Source: California Institute of Technology Publisher: California Institute of Technology
Type: Report or Paper; PeerReviewed Format: application/pdf
Published 09/2009
Search relevance: 26.2%
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m x n matrix. (i) For a dense input matrix...