Page 1 of results: 3128 digital items found in 0.013 seconds

## Controle e otimização em tempo real de um reator trifasico; Real-time optimization and control of a three-phase slurry reactor

Source: Biblioteca Digital da Unicamp
Publisher: Biblioteca Digital da Unicamp

Type: Doctoral thesis
Format: application/pdf

Published 23/02/2006
Language: PT

Search relevance: 36.28%

#Mathematical optimization#Process control#Real-time control#Chemical reactors#Hydrogenation#Experimental design

The main objective of this work was the evaluation of classical and hierarchical real-time control strategies. The hydrogenation of o-cresol in a three-phase catalytic slurry reactor with a solid catalyst was used as a case study to validate the real-time optimization proposals, owing to the inherent importance of this type of process: three-phase catalytic reactors appear in many processes of the chemical, petrochemical, and related industries, and are increasingly preferred over more conventional reactor types. This work investigated the performance of several versions of the advanced control algorithm DMC (Dynamic Matrix Control), in its quadratic (QDMC - Quadratic Dynamic Matrix Control) and adaptive (STQDMC - Self-Tuning Quadratic Dynamic Matrix Control) forms, under different control strategies (feedback, feedforward, and mixed). The process was represented by deterministic and statistical models. An alternative methodology, based on response-surface analysis principles, was proposed for defining the optimal model parameters over the operating range valid for this study. Given the importance of process optimization for meeting operational and economic objectives...
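The DMC family evaluated in this thesis computes control moves from a step-response model of the plant. Below is a minimal sketch of the unconstrained DMC control law only; the step-response coefficients, horizons, and error vector are hypothetical, and the thesis's QDMC additionally solves a constrained quadratic program:

```python
import numpy as np

# Hypothetical unit-step response of the process (first-order-like rise).
step = np.array([0.2, 0.5, 0.75, 0.9, 1.0, 1.0])

P, M = 6, 2          # prediction and control horizons
lam = 0.1            # move-suppression weight

# Dynamic matrix G: column j holds the step response delayed by j samples.
G = np.zeros((P, M))
for j in range(M):
    G[j:, j] = step[:P - j]

e = np.ones(P)       # predicted setpoint error over the horizon (hypothetical)

# Unconstrained DMC law: du = (G'G + lam*I)^{-1} G' e
du = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ e)
```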


## Fast Methods for Bimolecular Charge Optimization

Source: MIT - Massachusetts Institute of Technology
Publisher: MIT - Massachusetts Institute of Technology

Type: Journal article
Format: 12188 bytes; application/pdf

Language: EN_US

Search relevance: 36.18%

#Hessian-implicit optimization#bimolecular charge optimization#boundary element method#primal-dual interior point method#Krylov based iterative solver

We report a Hessian-implicit optimization method to quickly solve the charge optimization problem over protein molecules: given a ligand and its complex with a receptor, determine the ligand charge distribution that minimizes the electrostatic free energy of binding. The new optimization couples the boundary element method (BEM) with a primal-dual interior point method (PDIPM); initial results suggest that the method scales much better than previous methods. The quadratic objective function is the electrostatic free energy of binding, where the Hessian matrix serves as an operator that maps the charges to the potential. The unknowns are the charge values at the charge points, and they are limited by equality and inequality constraints that model physical considerations, i.e., conservation of charge. In previous approaches, a finite-difference method was used to model the Hessian matrix, which requires significant computational effort to remove grid-based inaccuracies. In the novel approach, BEM is used instead, with precorrected FFT (pFFT) acceleration to compute the potential induced by the charges. This part will be explained in detail by Shihhsien Kuo in another talk. Even though the Hessian matrix can be calculated an order of magnitude faster than in previous approaches...
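The charge optimization described here is a quadratic program whose equality constraints encode conservation of charge. As a toy illustration only (not the paper's BEM/PDIPM machinery), the equality-constrained core can be solved directly from the KKT system; all matrices below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical toy problem: minimize 0.5 q'Hq + c'q  s.t.  sum(q) = 0.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # stand-in for the electrostatic Hessian (SPD)
c = np.array([1.0, -2.0, 0.5])
A = np.ones((1, 3))                  # equality constraint: total charge = 0
b = np.zeros(1)

# KKT system: [[H, A'], [A, 0]] [q; mu] = [-c; b]
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
q, mu = sol[:3], sol[3]
```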


## A simulation-based resource optimization and time reduction model using design structure matrix

Source: Massachusetts Institute of Technology
Publisher: Massachusetts Institute of Technology

Type: Doctoral thesis
Format: 95 p.

Language: ENG

Search relevance: 46.08%

Project scheduling is an important research and application area in engineering management. Recent research in this area addresses resource constraints as well as stochastic durations. This thesis presents a simulation-based optimization model for solving resource-constrained product development project scheduling problems. The model uses the design structure matrix (DSM) to represent the information exchange among the various tasks of a project. Instead of a simple binary precedence relationship, the DSM can also quantify the extent of interactions. In particular, these interactions are characterized by rework probabilities, rework impacts, and learning. As a result, modeling based on the DSM allows iterations to take place. This stochastic characteristic is not well addressed in the earlier project scheduling literature. Adding resource factors to DSM simulation is a relatively new topic. We not only model the constraints posed by resource requirements, but also explore the effect of allocating different amounts of resources on iterations. A genetic algorithm (GA) is chosen to optimize the model over a weighted sum of a set of heuristics. GAs are known for their robustness in solving many types of problems. While the usual branch-and-bound method depends on problem-specific information to generate tight bounds...
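Rework probabilities in a DSM make task iterations recur. Under the simplifying assumption that a task is redone with a fixed probability after each pass, the number of passes is geometric, so the expected duration is d/(1-p); a minimal sketch of that arithmetic (not the thesis's full simulation model, and the task data are hypothetical):

```python
def expected_duration(base, p_rework):
    """Expected total duration of a task redone with probability p after each pass.

    The pass count is geometric, so E[total] = base * sum(p^k) = base / (1 - p).
    """
    assert 0 <= p_rework < 1
    return base / (1.0 - p_rework)

# Hypothetical two-task chain with independent rework loops.
total = expected_duration(10.0, 0.5) + expected_duration(4.0, 0.2)
```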


## Computing matrix symmetrizers. Part 2: new methods using eigendata and linear means; a comparison

Source: Elsevier
Publisher: Elsevier

Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/article

Published 10/07/2015
Language: ENG

Search relevance: 56.09%

#Symmetric matrix factorization#symmetrizer#symmetrizer computation#eigenvalue method#linear equation#principal subspace computation#matrix optimization#numerical algorithm#MATLAB code#Mathematics

Over any field F every square matrix A can be factored into the product of two symmetric matrices as A = S_1 · S_2 with S_i = S_i^T ∈ F^(n,n), and either factor can be chosen nonsingular, as was discovered by Frobenius in 1910. Frobenius' symmetric matrix factorization has lain almost dormant for a century. The first successful method for computing matrix symmetrizers, i.e., symmetric matrices S such that SA is symmetric, was inspired in 2013 by an iterative linear systems algorithm of Huang and Nong (2010) [29, 30]. The resulting iterative algorithm has solved this computational problem over R and C, but at high computational cost. This paper develops and tests another linear equations solver, as well as eigen- and principal vector or Schur Normal Form based algorithms for solving the matrix symmetrizer problem numerically. Four new eigendata-based algorithms use, respectively, SVD-based principal vector chain constructions, Gram-Schmidt orthogonalization techniques, the Arnoldi method, or the Schur Normal Form of A in their formulations. They are helped by Datta's 1973 method that symmetrizes unreduced Hessenberg matrices directly. The eigendata-based methods work well and quickly for generic matrices A and create well-conditioned matrix symmetrizers through eigenvector dyad accumulation. But all of the eigen-based methods have differing deficiencies with matrices A that have ill-conditioned or complicated eigenstructures with nontrivial Jordan normal forms. Our symmetrizer studies for matrices with ill-conditioned eigensystems lead to two open problems of matrix optimization.; This research was partially supported by the Ministerio de Economía y Competitividad of Spain through the research grant MTM2012-32542.
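For a diagonalizable matrix A = VDV^{-1} with real eigendata, S = (VV^T)^{-1} is one classical symmetrizer, since SA = V^{-T}DV^{-1} is symmetric. A quick numerical check of this fact on a toy matrix (an illustration, not one of the paper's algorithms):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # hypothetical matrix with real, distinct eigenvalues

w, V = np.linalg.eig(A)           # A = V diag(w) V^{-1}
S = np.linalg.inv(V @ V.T)        # symmetrizer: S = V^{-T} V^{-1}
SA = S @ A                        # symmetric, since SA = V^{-T} diag(w) V^{-1}
```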


## Perron vector optimization applied to search engines

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 09/11/2011

Search relevance: 36.18%

In recent years, Google's PageRank optimization problems have been extensively studied. In that case, the ranking is given by the invariant measure of a stochastic matrix. In this paper, we consider the more general situation in which the ranking is determined by the Perron eigenvector of a nonnegative, but not necessarily stochastic, matrix, in order to cover Kleinberg's HITS algorithm. We also give some results for Tomlin's HOTS algorithm. The problem then consists in finding an optimal outlink strategy subject to design constraints and for a given search engine.
We study the relaxed versions of these problems, which means that we accept weighted hyperlinks. We provide an efficient algorithm for the computation of the matrix of partial derivatives of the criterion, which uses the low-rank property of this matrix. We give a scalable algorithm that couples gradient and power iterations and yields a local minimum of the Perron vector optimization problem. We prove convergence by considering it as an approximate gradient method.
We then show that optimal linkage strategies for the HITS and HOTS optimization problems satisfy a threshold property. We report numerical results on fragments of the real web graph for these search engine optimization problems.; Comment: 28 pages...
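The Perron eigenvector that these problems optimize over can itself be computed by power iteration, which the paper couples with gradient steps. A standalone power-iteration sketch for a nonnegative, non-stochastic matrix (the matrix below is hypothetical):

```python
import numpy as np

def perron_vector(M, iters=200):
    """Power iteration for the Perron eigenvector of a nonnegative matrix M."""
    x = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        x = M @ x
        x /= x.sum()              # normalize so the iterate stays bounded
    return x

M = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])   # nonnegative but not stochastic (HITS-style setting)
v = perron_vector(M)
rho = (M @ v) / v                 # every entry approximates the spectral radius
```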


## Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.04%

#Mathematics - Optimization and Control#Computer Science - Learning#Mathematics - Numerical Analysis#Statistics - Computation#Statistics - Machine Learning

In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized $vector$
minimization problem are equivalent to those of an SPQN regularized $matrix$
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized $vector$
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular...
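The Schatten-$p$ quasi-norm at the heart of these problems can be evaluated directly from an SVD; a minimal sketch (the example matrix is hypothetical):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p (quasi-)norm: (sum of p-th powers of the singular values)^(1/p)."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

X = np.array([[3.0, 0.0],
              [0.0, 4.0]])   # singular values 4 and 3
```

For p = 2 this recovers the Frobenius norm, and for p = 1 the nuclear norm; for 0 < p < 1 it is the nonconvex quasi-norm regularizer studied above.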


## Matrix-Free Convex Optimization Modeling

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 36.18%

We introduce a convex optimization modeling framework that transforms a
convex optimization problem expressed in a form natural and convenient for the
user into an equivalent cone program in a way that preserves fast linear
transforms in the original problem. By representing linear functions in the
transformation process not as matrices, but as graphs that encode composition
of linear operators, we arrive at a matrix-free cone program, i.e., one whose
data matrix is represented by a linear operator and its adjoint. This cone
program can then be solved by a matrix-free cone solver. By combining the
matrix-free modeling framework and cone solver, we obtain a general method for
efficiently solving convex optimization problems involving fast linear
transforms.
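The matrix-free idea is that a solver needs only matvec and adjoint routines, never an explicit matrix. An independent small illustration with SciPy's `LinearOperator` (not the paper's framework): the operator I + D'D, for D a forward difference, is applied entirely through matvecs and handed to conjugate gradients.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64

def matvec(x):
    """Apply A = I + D'D (D = forward difference) without forming a matrix."""
    x = np.asarray(x).ravel()
    d = np.diff(x)                                            # D x
    dtd = np.concatenate([[-d[0]], d[:-1] - d[1:], [d[-1]]])  # D'D x
    return x + dtd

A = LinearOperator((n, n), matvec=matvec, rmatvec=matvec)  # symmetric: adjoint = matvec
b = np.random.default_rng(0).standard_normal(n)
x, info = cg(A, b, atol=1e-10)                             # matrix-free CG solve
```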


## Formulas for calculating the extremal ranks and inertias of a matrix-valued function subject to matrix equation restrictions

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 13/01/2013

Search relevance: 36.16%

#Mathematics - Optimization and Control#Mathematics - Numerical Analysis#15A24, 15B57, 49K30, 65K10, 90C11, 90C22

Matrix rank and inertia optimization problems are a class of discontinuous
optimization problems in which the decision variables are matrices running over
certain matrix sets, while the ranks and inertias of the variable matrices are
taken as integer-valued objective functions. In this paper, we establish a
group of explicit formulas for calculating the maximal and minimal values of
the rank and inertia objective functions of the Hermitian matrix expression
$A_1 - B_1XB_1^{*}$ subject to the common Hermitian solution of a pair of
consistent matrix equations $B_2XB^{*}_2 = A_2$ and $B_3XB_3^{*} = A_3$, and
Hermitian solution of the consistent matrix equation $B_4X= A_4$, respectively.
Many consequences are obtained; in particular, necessary and sufficient conditions are established for the triple matrix equations $B_1XB^{*}_1 =A_1$, $B_2XB^{*}_2 = A_2$ and $B_3XB^{*}_3 = A_3$ to have a common Hermitian solution, as well as necessary and sufficient conditions for the two matrix equations $B_1XB^{*}_1 =A_1$ and $B_4X = A_4$ to have a common Hermitian solution.; Comment: 21 pages. arXiv admin note: text overlap with arXiv:1301.0986
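The inertia of a Hermitian matrix is the triple (n_+, n_0, n_-) counting its positive, zero, and negative eigenvalues, so both objective functions above can be read off numerically from an eigenvalue decomposition; a minimal sketch (the example matrix is hypothetical):

```python
import numpy as np

def inertia(H, tol=1e-10):
    """Inertia (n_plus, n_zero, n_minus) of a Hermitian matrix H."""
    w = np.linalg.eigvalsh(H)
    return ((w > tol).sum(), (np.abs(w) <= tol).sum(), (w < -tol).sum())

H = np.diag([2.0, 0.0, -1.0, -3.0])   # Hermitian example; rank = n_plus + n_minus
```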


## Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse Optimization and Noisy Matrix Decomposition

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 36.2%

We propose an efficient ADMM method with guarantees for high-dimensional
problems. We provide explicit bounds for the sparse optimization problem and
the noisy matrix decomposition problem. For sparse optimization, we establish
that the modified ADMM method has an optimal convergence rate of
$\mathcal{O}(s\log d/T)$, where $s$ is the sparsity level, $d$ is the data
dimension and $T$ is the number of steps. This matches with the minimax lower
bounds for sparse estimation. For matrix decomposition into sparse and low rank
components, we provide the first guarantees for any online method, and prove a
convergence rate of $\tilde{\mathcal{O}}((s+r)\beta^2(p) /T) +
\mathcal{O}(1/p)$ for a $p\times p$ matrix, where $s$ is the sparsity level,
$r$ is the rank and $\Theta(\sqrt{p})\leq \beta(p)\leq \Theta(p)$. Our
guarantees match the minimax lower bound with respect to $s,r$ and $T$. In
addition, we match the minimax lower bound with respect to the matrix dimension
$p$, i.e. $\beta(p)=\Theta(\sqrt{p})$, for many important statistical models
including the independent noise model, the linear Bayesian network and the
latent Gaussian graphical model under some conditions. Our ADMM method is based
on epoch-based annealing and consists of inexpensive steps which involve
projections onto simple norm balls. Experiments show that for both sparse
optimization and matrix decomposition problems...
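For the sparse part, the inexpensive steps in such ADMM schemes reduce to soft-thresholding, the proximal operator of the l1 norm; a minimal sketch of that single operation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Prox of tau*||.||_1: shrink each entry toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([3.0, -0.5, 0.2, -2.0])
y = soft_threshold(x, 1.0)
```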


## Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 05/04/2012

Search relevance: 36.14%

In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix $X$ formed as the sum of an unknown diagonal matrix
and an unknown low rank positive semidefinite matrix, decompose $X$ into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given points
$v_1,v_2,...,v_n\in \R^k$ (where $n > k$) determine whether there is a centered
ellipsoid passing \emph{exactly} through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace $U$ that
ensures any positive semidefinite matrix $L$ with column space $U$ can be
recovered from $D+L$ for any diagonal matrix $D$ using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them.; Comment: 20 pages
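The centered-ellipsoid question asks for a positive definite M with v_i' M v_i = 1 for every point; dropping the positivity requirement, this is a linear system in the entries of M that a least-squares solve can recover (positive definiteness must then be checked separately). A sketch on points of the unit circle:

```python
import numpy as np

def fit_centered_quadric(points):
    """Solve v_i' M v_i = 1 for a symmetric M, in least squares over its entries."""
    k = points.shape[1]
    rows = [np.outer(v, v).flatten() for v in points]
    m, *_ = np.linalg.lstsq(np.array(rows), np.ones(len(points)), rcond=None)
    M = m.reshape(k, k)
    return (M + M.T) / 2   # symmetrize; check positive definiteness separately

pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0],
                [0.0, -1.0], [np.sqrt(0.5), np.sqrt(0.5)]])  # all on the unit circle
M = fit_centered_quadric(pts)    # recovers the identity, i.e. the unit circle
```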


## Sensing Matrix Optimization for Block-Sparse Decoding

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 08/09/2010

Search relevance: 46.04%

Recent work has demonstrated that using a carefully designed sensing matrix rather than a random one can improve the performance of compressed sensing. In
particular, a well-designed sensing matrix can reduce the coherence between the
atoms of the equivalent dictionary, and as a consequence, reduce the
reconstruction error. In some applications, the signals of interest can be well
approximated by a union of a small number of subspaces (e.g., face recognition
and motion segmentation). This implies the existence of a dictionary which
leads to block-sparse representations. In this work, we propose a framework for
sensing matrix design that improves the ability of block-sparse approximation
techniques to reconstruct and classify signals. This method is based on
minimizing a weighted sum of the inter-block coherence and the sub-block
coherence of the equivalent dictionary. Our experiments show that the proposed
algorithm significantly improves signal recovery and classification ability of
the Block-OMP algorithm compared to sensing matrix optimization methods that do
not employ block structure.
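The coherence quantities minimized here generalize the mutual coherence of a dictionary, i.e. the largest absolute inner product between distinct normalized atoms; a minimal sketch (the dictionary is hypothetical):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |<d_i, d_j>| over distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)          # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)       # ignore self-correlations
    return G.max()

D = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])    # 2x3 dictionary; coherence = 1/sqrt(2)
mu = mutual_coherence(D)
```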


## Convex Optimization without Projection Steps

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.22%

#Mathematics - Optimization and Control#Computer Science - Artificial Intelligence#Computer Science - Systems and Control#90C22, 90C20, 90C25#F.2.2#I.5.1#G.1.3

For the general problem of minimizing a convex function over a compact convex
domain, we will investigate a simple iterative approximation algorithm based on
the method by Frank & Wolfe 1956, that does not need projection steps in order
to stay inside the optimization domain. Instead of a projection step, the
linearized problem defined by a current subgradient is solved, which gives a
step direction that will naturally stay in the domain. Our framework
generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual
analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to
arbitrary convex domains. We give a convergence proof guaranteeing
{\epsilon}-small duality gap after O(1/{\epsilon}) iterations.
The method allows us to understand the sparsity of approximate solutions for
any l1-regularized convex optimization problem (and for optimization over the
simplex), expressed as a function of the approximation quality. We obtain
matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for
l1-problems. The same bounds apply to low-rank semidefinite optimization with
bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well.
As another application, we obtain sparse matrices of O(1/{\epsilon}) non-zero
entries as {\epsilon}-approximate solutions when optimizing any convex function
over a class of diagonally dominant symmetric matrices.
We show that our proposed first-order method also applies to nuclear norm and
max-norm matrix optimization problems. For nuclear norm regularized
optimization...
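On the probability simplex, the Frank & Wolfe linear subproblem is solved by the vertex with the smallest gradient coordinate, so the iterate stays feasible without any projection. A minimal sketch on a hypothetical quadratic objective, with the standard 2/(t+2) step size:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    """Frank-Wolfe over the probability simplex; the LMO picks the best vertex."""
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0             # linear minimizer over the simplex
        gamma = 2.0 / (t + 2.0)           # standard open-loop step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Hypothetical problem: minimize ||x - c||^2 over the simplex, c interior.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```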


## A Rank-Corrected Procedure for Matrix Completion with Fixed Basis Coefficients

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.13%

#Mathematics - Optimization and Control#Computer Science - Information Theory#Computer Science - Numerical Analysis#Statistics - Machine Learning

For the problems of low-rank matrix completion, the efficiency of the
widely-used nuclear norm technique may be challenged under many circumstances,
especially when certain basis coefficients are fixed, for example, the low-rank
correlation matrix completion in various fields such as the financial market
and the low-rank density matrix completion from the quantum state tomography.
To seek a solution of high recovery quality beyond the reach of the nuclear
norm, in this paper, we propose a rank-corrected procedure using a nuclear
semi-norm to generate a new estimator. For this new estimator, we establish a
non-asymptotic recovery error bound. More importantly, we quantify the
reduction of the recovery error bound for this rank-corrected procedure.
Compared with the one obtained for the nuclear norm penalized least squares
estimator, this reduction can be substantial (around 50%). We also provide
necessary and sufficient conditions for rank consistency in the sense of Bach
(2008). Very interestingly, these conditions are highly related to the concept
of constraint nondegeneracy in matrix optimization. As a byproduct, our results
provide a theoretical foundation for the majorized penalty method of Gao and
Sun (2010) and Gao (2010) for structured low-rank matrix optimization problems.
Extensive numerical experiments demonstrate that our proposed rank-corrected
procedure can simultaneously achieve a high recovery accuracy and capture the
low-rank structure.; Comment: 51 pages...
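The nuclear-norm penalized estimator that this procedure improves upon is commonly computed with singular value thresholding, the proximal operator of the nuclear norm; a minimal sketch (the example matrix is hypothetical):

```python
import numpy as np

def svt(X, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.diag([5.0, 1.0, 0.5])
Y = svt(X, 1.0)   # thresholding keeps only the dominant direction
```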


## Low-rank matrix completion by Riemannian optimization---extended version

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 17/09/2012

Search relevance: 36.17%

#Mathematics - Optimization and Control#Mathematics - Differential Geometry#Mathematics - Numerical Analysis

The matrix completion problem consists of finding or approximating a low-rank
matrix based on a few samples of this matrix. We propose a new algorithm for
matrix completion that minimizes the least-square distance on the sampling set
over the Riemannian manifold of fixed-rank matrices. The algorithm is an
adaptation of classical non-linear conjugate gradients, developed within the
framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space
of matrices. In particular, we describe how metric projection can be used as
retraction and how vector transport lets us obtain the conjugate search
directions. Finally, we prove convergence of a regularized version of our
algorithm under the assumption that the restricted isometry property holds for
incoherent matrices throughout the iterations. The numerical experiments
indicate that our approach scales very well for large-scale problems and
compares favorably with the state-of-the-art, while outperforming most existing
solvers.; Comment: This report is the extended version of the manuscript. It differs
only by the addition of Appendix A
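The metric projection used as a retraction here is, concretely, the truncated SVD (by the Eckart-Young theorem); a minimal sketch with a hypothetical rank-2 matrix:

```python
import numpy as np

def project_rank_k(X, k):
    """Metric projection onto rank-k matrices: the truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))   # rank-2 ground truth
X = project_rank_k(A + 1e-8 * rng.standard_normal((6, 6)), 2)   # retract a nearby point
```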


## LM-CMA: an Alternative to L-BFGS for Large Scale Black-box Optimization

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 01/11/2015

Search relevance: 36.16%

The limited memory BFGS method (L-BFGS) of Liu and Nocedal (1989) is often
considered to be the method of choice for continuous optimization when first-
and/or second- order information is available. However, the use of L-BFGS can
be complicated in a black-box scenario where gradient information is not
available and therefore should be numerically estimated. The accuracy of this
estimation, obtained by finite-difference methods, is often problem-dependent and may lead to premature convergence of the algorithm.
In this paper, we demonstrate an alternative to L-BFGS, the limited memory
Covariance Matrix Adaptation Evolution Strategy (LM-CMA) proposed by Loshchilov
(2014). The LM-CMA is a stochastic derivative-free algorithm for numerical
optimization of non-linear, non-convex optimization problems. Inspired by the
L-BFGS, the LM-CMA samples candidate solutions according to a covariance matrix
reproduced from $m$ direction vectors selected during the optimization process.
The decomposition of the covariance matrix into Cholesky factors reduces the memory complexity to $O(mn)$, where $n$ is the number of decision
variables. The time complexity of sampling one candidate solution is also
$O(mn)$, but scales as only about 25 scalar-vector multiplications in practice.
The algorithm has an important property of invariance w.r.t. strictly
increasing transformations of the objective function...


## Analytical solutions to some optimization problems on ranks and inertias of matrix-valued functions subject to linear matrix inequalities

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 06/01/2013

Search relevance: 36.18%

Matrix rank and inertia optimization problems are a class of discontinuous
optimization problems, in which the decision variables are matrices running
over certain feasible matrix sets, while the ranks and inertias of the variable
matrices are taken as integer-valued objective functions. In this paper, we
establish a group of explicit formulas for calculating the maximal and minimal
values of the rank- and inertia-objective functions of the Hermitian matrix
expression $A_1 - B_1XB_1^{*}$ subject to the linear matrix inequality
$B_2XB_2^{*} \succcurlyeq A_2$ $(B_2XB_2^{*} \preccurlyeq A_2)$ in the L\"owner
partial ordering, and give applications of these formulas in characterizing
behaviors of some constrained matrix-valued functions.; Comment: 22 pages


## Fixed-rank matrix factorizations and Riemannian low-rank optimization

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 36.18%

Motivated by the problem of learning a linear regression model whose
parameter is a large fixed-rank non-symmetric matrix, we consider the
optimization of a smooth cost function defined on the set of fixed-rank
matrices. We adopt the geometric framework of optimization on Riemannian
quotient manifolds. We study the underlying geometries of several well-known
fixed-rank matrix factorizations and then exploit the Riemannian quotient
geometry of the search space in the design of a class of gradient descent and
trust-region algorithms. The proposed algorithms generalize our previous
results on fixed-rank symmetric positive semidefinite matrices, apply to a
broad range of applications, scale to high-dimensional problems and confer a
geometric basis to recent contributions on the learning of fixed-rank
non-symmetric matrices. We make connections with existing algorithms in the
context of low-rank matrix completion and discuss relative usefulness of the
proposed framework. Numerical experiments suggest that the proposed algorithms
compete with the state-of-the-art and that manifold optimization offers an
effective and versatile framework for the design of machine learning algorithms
that learn a fixed-rank matrix.


## Spectral Operators of Matrices

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 10/01/2014

Search relevance: 36.15%

The class of matrix optimization problems (MOPs) has been recognized in
recent years to be a powerful tool by researchers far beyond the optimization
community to model many important applications involving structured low rank
matrices. This trend can be credited to some extent to the exciting
developments in the emerging field of compressed sensing. The L\"owner
operator, which generates a matrix valued function by applying a
single-variable function to each of the singular values of a matrix, has played
an important role for a long time in solving matrix optimization problems.
However, the classical theory developed for L\"owner operators has become
inadequate in these recent applications. The main objective of this paper is to
provide some necessary theoretical foundations for designing numerical methods
for solving the MOP. This goal is achieved by introducing and conducting a
thorough study on a new class of matrix valued functions, coined as spectral
operators of matrices. Several fundamental properties of spectral operators,
including the well-definedness, continuity, directional differentiability,
Fr\'{e}chet-differentiability, local Lipschitz continuity, $\rho$-order B(ouligand)-differentiability ($0<\rho\leq 1$), $\rho$-order G-semismoothness ($0<\rho\leq 1$), and the characterization of Clarke's generalized Jacobian...
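A spectral operator of the Löwner type applies a scalar function to each singular value while keeping the singular vectors fixed; a minimal numerical sketch (the example matrix and function are hypothetical):

```python
import numpy as np

def spectral_op(X, f):
    """Apply a scalar function f to the singular values of X: U f(Sigma) V'."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(f(s)) @ Vt

X = np.array([[0.0, 2.0],
              [3.0, 0.0]])     # hypothetical matrix with singular values 3 and 2
Y = spectral_op(X, np.sqrt)    # same singular vectors, square-rooted singular values
```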


## Quadratic Optimization with Orthogonality Constraints: Explicit Lojasiewicz Exponent and Linear Convergence of Line-Search Methods

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 05/10/2015

Search relevance: 45.92%

#Mathematics - Optimization and Control#Computer Science - Learning#Computer Science - Numerical Analysis#Mathematics - Numerical Analysis#90C26, 90C46, 90C52

A fundamental class of matrix optimization problems that arise in many areas
of science and engineering is that of quadratic optimization with orthogonality
constraints. Such problems can be solved using line-search methods on the
Stiefel manifold, which are known to converge globally under mild conditions.
To determine the convergence rate of these methods, we give an explicit
estimate of the exponent in a Lojasiewicz inequality for the (non-convex) set
of critical points of the aforementioned class of problems. By combining such
an estimate with known arguments, we are able to establish the linear
convergence of a large class of line-search methods. A key step in our proof is
to establish a local error bound for the set of critical points, which may be
of independent interest.
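A line-search step on the Stiefel manifold can use the QR factorization as a retraction, so every iterate stays exactly orthonormal. A minimal gradient-ascent sketch for the quadratic trace(X'AX) (the data and fixed step size are hypothetical; this is an illustration, not the paper's method):

```python
import numpy as np

def qr_retract(X):
    """QR retraction: map a full-rank matrix back onto the Stiefel manifold."""
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))    # fix column signs for a canonical factor

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                     # symmetric objective matrix (hypothetical)
X = qr_retract(rng.standard_normal((5, 2)))   # random start on St(5, 2)

for _ in range(200):
    G = 2 * A @ X                                # Euclidean gradient of trace(X'AX)
    xi = G - X @ ((X.T @ G + G.T @ X) / 2)       # project onto the tangent space
    X = qr_retract(X + 0.1 * xi)                 # ascent step, then retract
```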


## Tensor Principal Component Analysis via Convex Optimization

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.09%

This paper is concerned with the computation of the principal components for
a general tensor, known as the tensor principal component analysis (PCA)
problem. We show that the general tensor PCA problem is reducible to its
special case where the tensor in question is super-symmetric with an even
degree. In that case, the tensor can be embedded into a symmetric matrix. We
prove that if the tensor is rank-one, then the embedded matrix must be rank-one
too, and vice versa. The tensor PCA problem can thus be solved by means of
matrix optimization under a rank-one constraint, for which we propose two
solution methods: (1) imposing a nuclear norm penalty in the objective to
enforce a low-rank solution; (2) relaxing the rank-one constraint by
Semidefinite Programming. Interestingly, our experiments show that both methods
yield a rank-one solution with high probability, thereby solving the original
tensor PCA problem to optimality with high probability. To further cope with
the size of the resulting convex optimization models, we propose to use the
alternating direction method of multipliers, which reduces significantly the
computational efforts. Various extensions of the model are considered as well.
