Page 1 of results, 117 digital items found in 0.001 seconds

Estimação e testes de hipóteses em calibração comparativa; Estimation and hypothesis testing in comparative calibration

Oliveira, Paulo Tadeu Meira e Silva de
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's thesis Format: application/pdf
Published on 21/12/2001 PT
Search relevance: 27.69%
It is known from the literature that comparative calibration is linked to comparing the efficiency of measurement instruments. In this work we discuss estimation and hypothesis testing in comparative calibration models. To estimate the model parameters, we consider the EM algorithm and the BFGS algorithm of the Ox program. Tests for some hypotheses of interest are implemented using the likelihood ratio and Wald statistics. Simulation studies are used to compare the procedures. An application is presented to a data set consisting of tree height measurements taken with three, four, and five hypsometers.
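
As a rough illustration of this kind of likelihood-based estimation, the sketch below maximizes a toy Gaussian log-likelihood with SciPy's BFGS implementation; the model, data, and names are illustrative assumptions, not the dissertation's Ox code.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x):
    """Negative log-likelihood of N(mu, sigma^2) for observations x."""
    mu, log_sigma = params            # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * log_sigma

# synthetic "instrument readings" standing in for hypsometer measurements
x = np.random.default_rng(0).normal(loc=20.0, scale=1.5, size=100)
fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(x,), method="BFGS")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

A likelihood ratio statistic of the kind used in the dissertation would then compare twice the difference of such maximized log-likelihoods under the null and alternative models.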

An adaptive model order reduction for nonlinear dynamical problems; Um modelo de redução de ordem adaptativo para problemas dinâmicos não-lineares.

Nigro, Paulo Salvador Britto
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Doctoral thesis Format: application/pdf
Published on 21/03/2014 EN
Search relevance: 27.69%
Model order reduction is necessary even at a time when parallel processing is available in almost any personal computer. Recent model reduction methods are useful tools for reducing the processing cost of a problem. This work describes a combination of POD (Proper Orthogonal Decomposition) and Ritz vectors that achieves an efficient Galerkin projection which changes during processing, comparing the evolution of the error and the convergence rate between the full space and the projection space, and additionally checking the stability of the projection space, leading to a more efficient adaptive model order reduction for nonlinear dynamical problems. The model reduction is supported by a secant formulation, updated by the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method to accelerate convergence of the model, and by a tangent formulation to correct the projection space. Furthermore, this research shows that the method permits correction of the reduced model at low cost, especially when classical POD is no longer able to represent the solution accurately.
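
For reference, the BFGS secant update mentioned here has the standard textbook form (a general statement, not specific to this thesis):

B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),

which preserves symmetry and, whenever y_k^{\top} s_k > 0, positive definiteness of the approximation.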

O método L-BFGS com fatoração incompleta para a resolução de problemas de minimização

Mendonça, Melissa Weber
Source: Florianópolis, SC Publisher: Florianópolis, SC
Type: Master's thesis Format: vii, 72 leaves; tables, graphs
POR
Search relevance: 27.69%
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro de Ciências Físicas e Matemáticas, Programa de Pós-Graduação em Matemática e Computação Científica. In this work we study the solution of unconstrained minimization problems by quasi-Newton methods, in particular the BFGS method, proposed in the 1960s by Broyden, Fletcher, Goldfarb and Shanno, as well as its generalization to large-scale problems, the so-called L-BFGS method, proposed by Nocedal in the 1980s. We present the classical convergence results for both methods. In the L-BFGS method, the restart matrix used is of great importance in determining the method's convergence. With this in mind, we propose a new restart matrix based on the incomplete Cholesky factorization technique for symmetric positive definite matrices, and we place the incomplete factorization in its historical context as a preconditioner for solving linear systems with the Conjugate Gradient method. We present numerical tests in which the incomplete Cholesky decomposition of the problem's Hessian matrix is computed at some iterations of the algorithm, and in which we obtain faster convergence than with other previously proposed matrices.
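
A minimal sketch of the standard L-BFGS two-loop recursion, with the initial matrix exposed as a callable so that a restart matrix (for example, one built from an incomplete Cholesky factor) could be plugged in; the interface and names are ours, not the thesis's.

import numpy as np

def lbfgs_direction(grad, s_list, y_list, h0_solve):
    """Two-loop recursion: returns -H*grad, where H is the L-BFGS
    inverse-Hessian approximation seeded by h0_solve (the action of H0)."""
    q = grad.copy()
    alphas = []
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):  # newest to oldest
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    r = h0_solve(q)  # apply the initial inverse Hessian H0
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r

With this interface, h0_solve could be lambda q: (s.dot(y) / y.dot(y)) * q for the usual scaling seed, or two triangular solves against an incomplete Cholesky factor, which is where a restart matrix like the one proposed in this dissertation would enter.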

A otimização dos pesos das observações geodésicas por um problema de valor próprio inverso: solução pelo método de Newton e quase Newton - BFGS.

Dalmolin, Quintino; De Oliveira, Reginaldo
Source: Universidade Federal do Paraná Publisher: Universidade Federal do Paraná
Type: Journal article Format: text/html
Published on 01/12/2010 PT
Search relevance: 27.18%
This paper presents the optimization of observation weights based on an inverse eigenvalue problem, together with two methods for its solution: the classical Newton method (linear approximation) and the quasi-Newton BFGS (Broyden-Fletcher-Goldfarb-Shanno) method, which uses quadratic models to minimize a function. The fundamentals of the optimization of geodetic observation weights, formulated through the analysis of the eigenvalues of the covariance matrix of the estimated parameters, are also presented. The proposed methods are applied to three local two-dimensional geodetic networks, and the results are analyzed and discussed.

A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations

Li, Yong; Yuan, Gonglin; Wei, Zengxin
Source: Public Library of Science Publisher: Public Library of Science
Type: Journal article
Published on 07/05/2015 EN
Search relevance: 27.5%
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. Numerical results on the test problems show that the method is competitive with the norm method.
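
The trust-region subproblem referred to above has the standard form (a textbook statement; here B_k stands for the L-M-BFGS approximation):

\min_{d \in \mathbb{R}^n} \; m_k(d) = g_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d
\quad \text{subject to} \quad \|d\| \le \Delta_k,

where g_k is the current gradient (or a residual-based surrogate for nonlinear equations) and \Delta_k is the trust-region radius.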

Um estudo de buscas unidirecionais aplicadas ao método BFGS

Panonceli, Diego Manoel
Source: Universidade Federal do Paraná Publisher: Universidade Federal do Paraná
Type: Dissertation Format: 117 leaves: ill., tables, graphs, some in color; application/pdf
PORTUGUÊS
Search relevance: 27.82%
Advisor: Prof. Dr. Lucas Garcia Pedroso; Co-advisor: Prof. Dr. Rodolfo Gotardi Begiato. Dissertation (Master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Matemática. Defense: Curitiba, 27/02/2015. Includes references. Abstract: Iterative processes for minimizing a continuous function often perform line searches. Line searches are important for guaranteeing the global convergence of optimization methods. In this work we analyze several line searches proposed in the literature and their theoretical results. We place particular emphasis on the classical monotone searches of Armijo, of Wolfe, and of Goldstein, as well as the nonmonotone searches of Grippo, Lampariello and Lucidi, of Dai, and of Zhang and Hager. Nonmonotone line searches, unlike monotone ones, allow several consecutive increases in the objective function. The monotone line searches of Zhang, Zhou and Li and of Shi and Shen, and the nonmonotone searches of Diniz-Ehrhardt, Martínez and Raydan, of Cheng and Li, of Yin and Du, and of Shi and Shen, are also covered in the text. All of the searches studied in this work were compared in terms of performance through their use in the BFGS algorithm for unconstrained optimization. Each search was tested in several versions...
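
As a concrete reference point, here is a minimal sketch of the classical Armijo backtracking search mentioned above, in generic Python; the constants and names are illustrative defaults, not the dissertation's.

def armijo_backtracking(f, grad_fx, x, d, fx, c1=1e-4, shrink=0.5, max_iter=50):
    """Backtrack until the Armijo sufficient-decrease condition
    f(x + t*d) <= f(x) + c1*t*<grad f(x), d> holds for a descent direction d."""
    slope = grad_fx.dot(d)      # must be negative for a descent direction
    t = 1.0                     # quasi-Newton methods try the unit step first
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c1 * t * slope:
            return t
        t *= shrink
    return t

The Wolfe conditions additionally impose a curvature test on grad f(x + t*d)·d, which is what guarantees y·s > 0 and hence a well-defined BFGS update.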

A quasi-Newton approach to nonsmooth convex optimization problems in machine learning

Yu, J.; Vishwanathan, S.; Gunter, S.; Schraudolph, N.
Source: MIT Press Publisher: MIT Press
Type: Journal article
Published in 2010 EN
Search relevance: 27.5%
We extend the well-known BFGS quasi-Newton method and its memory-limited variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the resulting subBFGS algorithm is globally convergent in objective function value. We apply its memory-limited variant (subLBFGS) to L2-regularized risk minimization with the binary hinge loss. To extend our algorithm to the multiclass and multilabel settings, we develop a new, efficient, exact line search algorithm. We prove its worst-case time complexity bounds, and show that our line search can also be used to extend a recently developed bundle method to the multiclass and multilabel settings. We also apply the direction-finding component of our algorithm to L1-regularized risk minimization with logistic loss. In all these contexts our methods perform comparably to or better than specialized state-of-the-art solvers on a number of publicly available data sets. An open source implementation of our algorithms is freely available.; Jin Yu, S.V.N. Vishwanathan...
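
To make one ingredient concrete: a subgradient of the L2-regularized binary hinge loss they minimize can be computed as below (a standard calculation in our notation, not the authors' code).

import numpy as np

def hinge_subgradient(w, X, y, lam):
    """A subgradient of f(w) = (lam/2)*||w||^2 + (1/n)*sum_i max(0, 1 - y_i*<w, x_i>).
    Where a margin equals exactly 1 the subdifferential is set-valued; this
    implementation picks the element that treats those tied terms as active."""
    margins = y * (X @ w)                    # y_i * <w, x_i>
    active = margins <= 1.0                  # terms contributing -y_i * x_i
    g = -(y[active, None] * X[active]).sum(axis=0) / X.shape[0]
    return lam * w + g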

Historical development of the BFGS secant method and its characterization properties

Papakonstantinou, Joanna Maria
Source: Rice University Publisher: Rice University
Type: Thesis; Text Format: application/pdf
ENG
Search relevance: 27.99%
The BFGS secant method is the preferred secant method for finite-dimensional unconstrained optimization. The first part of this research recounts the historical development of secant methods in general and the BFGS secant method in particular. Many people believe that the secant method arose from Newton's method using finite difference approximations to the derivative. We compile historical evidence revealing that a special case of the secant method predated Newton's method by more than 3000 years. We trace the evolution of secant methods from 18th-century B.C. Babylonian clay tablets and the Egyptian Rhind Papyrus. Modifications to Newton's method yielding secant methods are discussed, and methods we believe influenced and led to the construction of the BFGS secant method are explored. In the second part of our research, we examine the construction of several rank-two secant update classes that have not received much recognition in the literature. Our study of the underlying mathematical principles and characterizations inherent in the update classes led to theorems, and their proofs, concerning secant updates. One class of symmetric rank-two updates that we investigate is the Dennis class. We demonstrate how it can be derived from the general rank-one update formula in a purely algebraic manner, without using Powell's method of iterated projections as Dennis did. The literature abounds with update classes; we show how some are related and show containment when possible. We derive the general formula that could be used to represent all symmetric rank-two secant updates. From this...

Comparison of tactile and chromatic confocal measurements of aspherical lenses for form metrology

EL HAYEK, Nadim; NOUIRA, Hichem; ANWER, Nabil; DAMAK, Mohamed; GIBARU, Olivier
Source: Arts et Métiers ParisTech Publisher: Arts et Métiers ParisTech
EN_US
Search relevance: 27.18%
Both contact and non-contact probes are often used in dimensional metrology applications, especially for roughness, form and surface profile measurements. To perform such measurements with a nanometer level of accuracy, LNE (the French National Metrology Institute, NMI) has developed a high-precision profilometer traceable to the SI meter definition. The architecture of the machine contains a short and stable metrology frame dissociated from the supporting frame, fully respecting the Abbe principle. The metrology loop incorporates three Renishaw laser interferometers and is equipped with either a chromatic confocal probe or a tactile probe to achieve measurements at a nanometric level of uncertainty. The machine allows in-situ calibration of the probes by means of a differential laser interferometer taken as a reference. In this paper, both the architecture and the operation of LNE's high-precision profilometer are detailed. A brief comparison of the behavior of the chromatic confocal and tactile probes is presented. Optical and tactile scans of an aspherical surface are performed, and the large volumes of data are processed using the L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) algorithm. Fitting results are compared with respect to the evaluated residual errors, which reflect the form defects of the surface.; EMRP
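
A toy illustration of the fitting step described above: least-squares fitting of a simple rotationally symmetric model to measured points using SciPy's L-BFGS-B; the model, data, and names here are placeholders, not LNE's processing chain.

import numpy as np
from scipy.optimize import minimize

def residuals(params, r, z):
    """Residuals of a toy even-asphere model z(r) = c*r^2 + a4*r^4."""
    c, a4 = params
    return z - (c * r**2 + a4 * r**4)

def fit_surface(r, z, x0=(0.0, 0.0)):
    obj = lambda p: 0.5 * np.sum(residuals(p, r, z) ** 2)  # least-squares cost
    res = minimize(obj, x0, method="L-BFGS-B")
    return res.x, residuals(res.x, r, z)  # parameters and form-error residuals

# synthetic demo data standing in for a measured aspherical profile
r = np.linspace(0.0, 1.0, 200)
z = 0.5 * r**2 - 0.05 * r**4 + 1e-6 * np.random.default_rng(0).standard_normal(r.size)
params, form_error = fit_surface(r, z)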

A Modified BFGS Formula Using a Trust Region Model for Nonsmooth Convex Minimizations

Cui, Zengru; Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie; Wang, Xiaoliang; Duan, Xiabin
Source: Public Library of Science Publisher: Public Library of Science
Type: Journal article
Published on 26/10/2015 EN
Search relevance: 27.69%
This paper proposes a modified BFGS formula using a trust region model for solving nonsmooth convex minimizations, based on the Moreau-Yosida regularization (smoothing) approach and a new secant equation with a BFGS update formula. The algorithm uses function value and gradient information to compute the Hessian approximation. The Hessian matrix is updated by the BFGS formula rather than by using second-order information about the function, thus decreasing the workload and time involved in the computation. Under suitable conditions, the algorithm converges globally to an optimal solution. Numerical results show that this algorithm can successfully solve nonsmooth unconstrained convex problems.
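
The Moreau-Yosida regularization invoked above is the standard construction: for a convex function f and a parameter \lambda > 0,

F(x) = \min_{z \in \mathbb{R}^n} \left\{ f(z) + \frac{1}{2\lambda} \|z - x\|^2 \right\},

which is continuously differentiable even when f is not, with \nabla F(x) = (x - p(x))/\lambda, where p(x) is the unique minimizer; smooth BFGS-type machinery can then be applied to F.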

Global Convergence of Online Limited Memory BFGS

Mokhtari, Aryan; Ribeiro, Alejandro
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 06/09/2014
Search relevance: 27.18%
Global convergence of an online (stochastic) limited-memory version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method for solving optimization problems with stochastic objectives that arise in large-scale machine learning is established. Lower and upper bounds on the Hessian eigenvalues of the sample functions are shown to suffice to guarantee that the curvature approximation matrices have bounded determinants and traces, which, in turn, permits establishing convergence to optimal arguments with probability 1. Numerical experiments on support vector machines with synthetic data showcase reductions in convergence time relative to stochastic gradient descent algorithms, as well as reductions in storage and computation relative to other online quasi-Newton methods. Experimental evaluation on a search engine advertising problem corroborates that these advantages also manifest in practical applications.; Comment: 37 pages
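
In symbols, the sufficient condition cited above can be written as follows (our notation, paraphrasing the abstract): there exist constants 0 < m \le M such that

m I \preceq \nabla^2 f(x, \theta) \preceq M I \quad \text{for all iterates } x \text{ and samples } \theta,

where f(x, \theta) denotes a sample function; these bounds keep the traces and determinants of the curvature approximation matrices under control.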

A Globally and Superlinearly Convergent Modified BFGS Algorithm for Unconstrained Optimization

Yang, Yaguang
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 24/12/2012
Search relevance: 28.22%
In this paper, a modified BFGS algorithm is proposed. The modified BFGS matrix estimates a modified Hessian matrix which is a convex combination of an identity matrix, as in the steepest descent algorithm, and a Hessian matrix, as in the Newton algorithm. The coefficient of the convex combination in the modified BFGS algorithm is chosen dynamically in every iteration. It is proved that, for any twice differentiable nonlinear function (convex or non-convex), the algorithm is globally convergent to a stationary point. If the stationary point is a local optimizer where the Hessian is strongly positive definite in a neighborhood of the optimizer, the iterates will eventually enter and stay in that neighborhood, and the modified BFGS algorithm reduces to the BFGS algorithm there. Therefore, the modified BFGS algorithm is superlinearly convergent. Moreover, the computational cost of the modified BFGS in each iteration is almost the same as that of BFGS. Numerical tests on the CUTE test set are reported. The performance of the modified BFGS algorithm implemented in our MATLAB function is compared to the BFGS algorithm implemented in the MATLAB Optimization Toolbox function, a limited memory BFGS implemented as L-BFGS, a descent conjugate gradient algorithm implemented as CG-Descent 5.3...
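
In our notation, the modified Hessian that the BFGS matrix estimates here is the per-iteration convex combination

\tilde{H}_k = \theta_k I + (1 - \theta_k)\, \nabla^2 f(x_k), \qquad \theta_k \in [0, 1],

with \theta_k chosen dynamically: \theta_k = 1 recovers the steepest-descent model and \theta_k = 0 the Newton model.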

MSS: MATLAB Software for L-BFGS Trust-Region Subproblems for Large-Scale Optimization

Erway, Jennifer B.; Marcia, Roummel F.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 27.69%
A MATLAB implementation of the Moré-Sorensen sequential (MSS) method is presented. The MSS method computes the minimizer of a quadratic function defined by a limited-memory BFGS matrix subject to a two-norm trust-region constraint. This solver is an adaptation of the Moré-Sorensen direct method to an L-BFGS setting for large-scale optimization. The MSS method makes use of a recently proposed stable fast direct method for solving large shifted BFGS systems of equations [13, 12] and is able to compute solutions to any user-defined accuracy. This MATLAB implementation is a matrix-free iterative method for large-scale optimization. Numerical experiments on the CUTEr test set [3, 16] suggest that using the MSS method as a trust-region subproblem solver can require significantly fewer function and gradient evaluations than a trust-region method using the Steihaug-Toint solver.

LM-CMA: an Alternative to L-BFGS for Large Scale Black-box Optimization

Loshchilov, Ilya
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 01/11/2015
Search relevance: 27.99%
The limited-memory BFGS method (L-BFGS) of Liu and Nocedal (1989) is often considered to be the method of choice for continuous optimization when first- and/or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and therefore must be numerically estimated. The accuracy of this estimation, obtained by finite difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. In this paper, we demonstrate an alternative to L-BFGS, the limited-memory Covariance Matrix Adaptation Evolution Strategy (LM-CMA) proposed by Loshchilov (2014). LM-CMA is a stochastic derivative-free algorithm for numerical optimization of nonlinear, non-convex problems. Inspired by L-BFGS, LM-CMA samples candidate solutions according to a covariance matrix reproduced from $m$ direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the memory complexity to $O(mn)$, where $n$ is the number of decision variables. The time complexity of sampling one candidate solution is also $O(mn)$, but amounts to only about 25 scalar-vector multiplications in practice. The algorithm has the important property of invariance w.r.t. strictly increasing transformations of the objective function...

Statistically adaptive learning for a general class of cost functions (SA L-BFGS)

Purpura, Stephen; Hillard, Dustin; Hubenthal, Mark; Walsh, Jim; Golder, Scott; Smith, Scott
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 27.5%
We present a system that enables rapid model experimentation for tera-scale machine learning with trillions of non-zero features, billions of training examples, and millions of parameters. Our contribution to the literature is a new method (SA L-BFGS) for adapting batch L-BFGS to perform in near real-time by using statistical tools to balance the contributions of previous weights, old training examples, and new training examples to achieve fast convergence with few iterations. The result is, to our knowledge, the most scalable and flexible linear learning system reported in the literature, beating standard practice with the current best system (Vowpal Wabbit and AllReduce). Using the KDD Cup 2012 data set from Tencent, Inc., we provide experimental results to verify the performance of this method.; Comment: 7 pages, 2 tables

RES: Regularized Stochastic BFGS Algorithm

Mokhtari, Aryan; Ribeiro, Alejandro
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 29/01/2014
Search relevance: 27.69%
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computing the inverses of objective function Hessians incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.; Comment: 13 pages
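
A schematic of the stochastic curvature information described above: the correction pair is formed from stochastic gradients, here evaluated on the same sample at both iterates so that the pair reflects curvature rather than sampling noise (a common device in stochastic quasi-Newton methods; this sketch is ours, not the paper's RES update, which additionally regularizes the Hessian approximation).

def stochastic_curvature_pair(grad_batch, x_new, x_old, batch):
    """Form (s, y) from stochastic gradients on a *fixed* minibatch,
    so y reflects curvature of the sampled objective, not gradient noise."""
    s = x_new - x_old
    y = grad_batch(x_new, batch) - grad_batch(x_old, batch)
    return s, y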

Inappropriate use of L-BFGS, Illustrated on frame field design

Ray, Nicolas; Sokolov, Dmitry
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 12/08/2015
Search relevance: 27.69%
L-BFGS is a hill-climbing method that is guaranteed to converge only for convex problems. In computer graphics, it is often used as a black-box solver for a more general class of nonlinear problems, including problems having many local minima. Some works obtain very nice results by solving such difficult problems with L-BFGS. Surprisingly, the method is able to escape local minima: our interpretation is that the approximation of the Hessian is smoother than the real Hessian, making it possible to evade the local minima. We analyze the behavior of L-BFGS on the design of 2D frame fields. This involves an energy function that is infinitely differentiable, strongly nonlinear, and has many local minima. Moreover, the local minima have a clear visual interpretation: they correspond to different frame field topologies. We observe that the performance of L-BFGS is almost unpredictable: it is very competitive when the field is sampled on the primal graph, but really poor when it is sampled on the dual graph.; Comment: 6 pages, 3 figures

Shifted L-BFGS Systems

Erway, Jennifer B.; Jain, Vibhor; Marcia, Roummel F.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 27.5%
We investigate fast direct methods for solving systems of the form (B + G)x = y, where B is a limited-memory BFGS matrix and G is a symmetric positive-definite matrix. These systems, which we refer to as shifted L-BFGS systems, arise in several settings, including trust-region methods and preconditioning techniques for interior-point methods. We show that under mild assumptions, the system (B + G)x = y can be solved in an efficient and stable manner via a recursion that requires only vector inner products. We consider various shift matrices G and demonstrate the effectiveness of the recursion methods in numerical experiments.

Limited-memory BFGS Systems with Diagonal Updates

Erway, Jennifer B.; Marcia, Roummel F.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Search relevance: 27.5%
In this paper, we investigate a formula for solving systems of the form (B + σI)x = y, where B is a limited-memory BFGS quasi-Newton matrix and σ is a positive constant. These types of systems arise naturally in large-scale optimization, in trust-region methods as well as in doubly-augmented Lagrangian methods. We show that provided a simple condition holds on B_0 and σ, the system (B + σI)x = y can be solved via a recursion formula that requires only vector inner products. This formula has complexity M²n, where M is the number of L-BFGS updates and n >> M is the dimension of x.
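
The two papers above derive inner-product-only recursions; as a rough illustration of why such shifted L-BFGS solves are cheap, here is a Sherman-Morrison-Woodbury-based alternative (our substitute technique, not the authors' recursion), assuming the usual compact representation B = γI + Ψ M Ψᵀ with a thin Ψ and a small invertible M.

import numpy as np

def solve_shifted(gamma, Psi, Mmat, sigma, y):
    """Solve (B + sigma*I) x = y with B = gamma*I + Psi @ Mmat @ Psi.T,
    via Sherman-Morrison-Woodbury; only a small (2M x 2M) dense system
    is factored.  Assumes Mmat is invertible."""
    a = gamma + sigma                              # diagonal part of B + sigma*I
    # (a*I + Psi M Psi^T)^{-1} y
    #   = (1/a) * (y - Psi (a*M^{-1} + Psi^T Psi)^{-1} Psi^T y)
    inner = a * np.linalg.inv(Mmat) + Psi.T @ Psi  # small k x k matrix
    return (y - Psi @ np.linalg.solve(inner, Psi.T @ y)) / a

The inner solve involves only a 2M x 2M matrix, so the cost is dominated by the two thin matrix-vector products with Ψ, consistent with the low-order complexity quoted above.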

A Linearly-Convergent Stochastic L-BFGS Algorithm

Moritz, Philipp; Nishihara, Robert; Jordan, Michael I.
Source: Cornell University Publisher: Cornell University
Type: Journal article
Published on 09/08/2015
Search relevance: 27.5%
We propose a new stochastic L-BFGS algorithm and prove a linear convergence rate for strongly convex functions. Our algorithm draws heavily on a recent stochastic variant of L-BFGS proposed in Byrd et al. (2014), as well as a recent approach to variance reduction for stochastic gradient descent from Johnson and Zhang (2013). We demonstrate experimentally that our algorithm performs well on large-scale convex and non-convex optimization problems, exhibiting linear convergence and rapidly solving the optimization problems to high levels of precision. Furthermore, we show that our algorithm performs well for a wide range of step sizes, often differing by several orders of magnitude.; Comment: 13 pages, 3 figures
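
The variance-reduction ingredient borrowed from Johnson and Zhang (2013) replaces the raw stochastic gradient with an estimate anchored at a reference point; a minimal sketch in our notation (not the paper's code):

def svrg_gradient(grad_i, x, x_ref, mu_ref, i):
    """Variance-reduced gradient estimate (Johnson & Zhang, 2013):
    unbiased for the full gradient at x, with variance that shrinks as
    x and the reference point x_ref both approach the optimum.
    grad_i(x, i) is the gradient of the i-th sample function;
    mu_ref is the full gradient precomputed once at x_ref."""
    return grad_i(x, i) - grad_i(x_ref, i) + mu_ref

The proposed algorithm combines estimates of this form with L-BFGS curvature information, which is what yields the linear convergence rate claimed above.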