
Improved likelihood inference for the shape parameter in Weibull regression

SILVA, Michel Ferreira Da; FERRARI, Silvia L. P.; CRIBARI-NETO, Francisco
Fonte: TAYLOR & FRANCIS LTD Publicador: TAYLOR & FRANCIS LTD
Tipo: Artigo de Revista Científica
ENG
Relevância na Pesquisa
36.59%
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1-39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343-365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33-53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655-661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
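For orientation, the quantity the two adjustments modify is the ordinary profile log-likelihood for the shape parameter, with the regression coefficients maximized out at each fixed shape value. Below is a minimal, hedged sketch of that baseline (no censoring, no adjustment) on simulated data; all names and the design are illustrative, not the paper's setup.

```python
# Profile log-likelihood for the Weibull shape parameter k in a
# regression model log(scale_i) = x_i' beta (illustrative sketch only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, shape_true = np.array([1.0, 0.5]), 2.0
y = np.exp(X @ beta_true) * rng.weibull(shape_true, size=n)

def loglik(beta, k):
    lam = np.exp(X @ beta)                 # Weibull scale per observation
    z = y / lam
    return np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

def profile_loglik(k):
    # maximise over the nuisance regression coefficients at fixed shape k
    res = minimize(lambda b: -loglik(b, k), x0=np.zeros(2), method="BFGS")
    return -res.fun

ks = np.linspace(1.2, 3.5, 40)
prof = np.array([profile_loglik(k) for k in ks])
print(f"profile MLE of the shape ~ {ks[prof.argmax()]:.2f}")
```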

Signed likelihood ratio tests in the Birnbaum-Saunders regression model

LEMONTE, Artur J.; FERRARI, Silvia L. P.
Fonte: ELSEVIER SCIENCE BV Publicador: ELSEVIER SCIENCE BV
Tipo: Artigo de Revista Científica
ENG
Relevância na Pesquisa
36.45%
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and the construction of confidence limits for a single parameter of interest. We focus on the small sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented.; Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
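For reference, the unadjusted statistic in question is the standard signed likelihood root; a common generic form of its higher-order correction is shown alongside. The paper's three adjusted statistics are specific versions of this kind of correction and are not reproduced here.

```latex
% Signed likelihood root for a scalar interest parameter \psi, with
% nuisance parameters \lambda profiled out:
r(\psi_0) = \operatorname{sign}(\hat{\psi} - \psi_0)
  \sqrt{2\,\bigl\{\ell(\hat{\psi},\hat{\lambda})
                  - \ell(\psi_0,\hat{\lambda}_{\psi_0})\bigr\}}
  \;\dot\sim\; N(0,1).
% Generic higher-order adjustment (Barndorff-Nielsen type), with u a
% model-dependent correction term:
r^{*}(\psi_0) = r(\psi_0)
  + \frac{1}{r(\psi_0)}\log\frac{u(\psi_0)}{r(\psi_0)}.
```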

Improved likelihood inference in beta regression

FERRARI, Silvia L. P.; PINHEIRO, Eliane C.
Fonte: TAYLOR & FRANCIS LTD Publicador: TAYLOR & FRANCIS LTD
Tipo: Artigo de Revista Científica
ENG
Relevância na Pesquisa
36.45%
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented in standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.; Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
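A hedged sketch of the unadjusted likelihood ratio test in a beta regression with logit link and mean-precision parametrization — the statistic whose small-sample behaviour the paper corrects. The simulated data and every name below are illustrative only.

```python
# Unadjusted LR test for a slope in a logit-link beta regression
# (mean-precision parametrisation); not the paper's adjusted statistic.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2, beta as beta_dist

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
mu_true = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))
phi_true = 30.0
y = beta_dist.rvs(mu_true * phi_true, (1 - mu_true) * phi_true,
                  random_state=rng)

def negll(params, X):
    b, phi = params[:-1], np.exp(params[-1])   # log-precision for positivity
    mu = 1 / (1 + np.exp(-(X @ b)))
    p, q = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(p) - gammaln(q)
                   + (p - 1) * np.log(y) + (q - 1) * np.log1p(-y))

X1 = np.column_stack([np.ones(n), x])          # full model
X0 = np.ones((n, 1))                           # null model: slope = 0
f1 = minimize(negll, np.zeros(3), args=(X1,), method="BFGS")
f0 = minimize(negll, np.zeros(2), args=(X0,), method="BFGS")
w = 2 * (f0.fun - f1.fun)                      # likelihood ratio statistic
print(f"LR = {w:.2f}, p = {chi2.sf(w, df=1):.4f}")
```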

Ajustes para o teste da razão de verossimilhanças em modelos de regressão beta; Adjusted likelihood ratio statistics in beta regression models

Pinheiro, Eliane Cantinho
Fonte: Biblioteca Digitais de Teses e Dissertações da USP Publicador: Biblioteca Digitais de Teses e Dissertações da USP
Tipo: Dissertação de Mestrado Formato: application/pdf
Publicado em 23/03/2009 PT
Relevância na Pesquisa
36.45%
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modeling continuous proportions that are affected by independent variables. We derive Skovgaard's (Scandinavian Journal of Statistics 28 (2001) 3-32) adjusted likelihood ratio statistics in this class of models. We show that the adjustment terms have a simple compact form that can be easily implemented in standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
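For orientation, Skovgaard's (2001) adjustments have the generic forms below, where w is the usual likelihood ratio statistic and ξ is a model-dependent correction factor; the dissertation's contribution is the explicit, compact form of ξ for beta regression, which is not reproduced here.

```latex
% Skovgaard's two adjusted likelihood ratio statistics (generic forms;
% \xi collects observed/expected information quantities):
w^{*}  = w \left( 1 - \frac{\log \xi}{w} \right)^{2},
\qquad
w^{**} = w - 2 \log \xi .
```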

Verossimilhança hierárquica em modelos de fragilidade; Hierarchical likelihood in frailty models

Amorim, William Nilson de
Fonte: Biblioteca Digitais de Teses e Dissertações da USP Publicador: Biblioteca Digitais de Teses e Dissertações da USP
Tipo: Dissertação de Mestrado Formato: application/pdf
Publicado em 12/02/2015 PT
Relevância na Pesquisa
36.59%
Estimation methods for frailty models have been widely discussed in the statistical literature because of their extensive use in survival analysis. Several methods for estimating the parameters of these models have been developed: estimation procedures based on the EM algorithm, Markov chain Monte Carlo, estimation via partial likelihood, penalized likelihood, quasi-likelihood, among others. An alternative that has recently come into use is hierarchical likelihood. The main goal of this work was to study the advantages and disadvantages of hierarchical likelihood for inference in frailty models relative to penalized likelihood, currently the most widely used method. We applied both methodologies to a real data set, using the statistical packages available in the R software, and carried out a simulation study to compare the bias and mean squared error of the estimates from each approach. According to the results, the two methodologies produced very similar estimates, especially for the fixed effects. From a practical standpoint, the largest difference found was the running time of the estimation algorithm...
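To fix ideas, the data structure both methods are fitted to is clustered survival data with a shared frailty per group. A minimal simulation sketch, with a gamma frailty and all parameter values illustrative:

```python
# Simulating shared gamma-frailty survival data (exponential baseline
# hazard, one covariate, independent censoring); illustrative values.
import numpy as np

rng = np.random.default_rng(2)
n_groups, group_size = 50, 4
theta = 0.5                                          # frailty variance
frailty = rng.gamma(1 / theta, theta, size=n_groups) # mean 1, variance theta
z = rng.normal(size=(n_groups, group_size))          # fixed-effect covariate
beta = 0.7
rate = frailty[:, None] * np.exp(beta * z)           # shared frailty per group
t = rng.exponential(1 / rate)                        # latent event times
c = rng.exponential(2.0, size=t.shape)               # censoring times
time, event = np.minimum(t, c), (t <= c).astype(int)
print(f"censoring fraction: {1 - event.mean():.2f}")
```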

Empirical Likelihood-Based Inference for Multiple Regression and Treatment Comparison

Su, Haiyan; Liang, Hua
Fonte: Universidade de Rochester Publicador: Universidade de Rochester
Tipo: Tese de Doutorado
ENG
Relevância na Pesquisa
36.65%
Thesis (Ph.D.)--University of Rochester. School of Medicine and Dentistry. Dept. of Biostatistics and Computational Biology, 2009.; Parameter estimation and statistical inference are generally used in the analysis of epidemiological and biomedical data. Traditional parametric methods often impose the assumption of normality on the data. When this assumption is violated, methods based on the normal approximation can give biased results. Furthermore, normal approximation-based inference methods require estimation of the asymptotic variance, which may be difficult in semi-parametric or non-linear models. Empirical likelihood is a good alternative for making statistical inference about the parameters when the distribution of the data is unspecified. To derive statistical inference for the parameter of interest, we develop empirical likelihood-based methods along with the Bartlett correction to improve the coverage probability of the parameter. The contributions we make to the existing literature in this dissertation comprise two parts. In the first part, we develop an empirical likelihood-based inference for multiple regression models and show that the empirical likelihood ratio statistic follows a chi-square limiting distribution for several model settings. For high-dimensional parameter vectors...
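The core computation behind empirical likelihood inference for a mean-type parameter is Owen's convex dual problem for the Lagrange multiplier. A self-contained sketch (the Bartlett-correction step developed in the dissertation is not included):

```python
# Owen-style empirical likelihood ratio statistic for a vector mean:
# find the Lagrange multiplier by maximising the concave dual, then
# form -2 log R(mu), asymptotically chi-square with d degrees of freedom.
import numpy as np
from scipy.optimize import minimize

def el_ratio_stat(x, mu):
    z = x - mu                                   # x has shape (n, d)
    def neg_dual(lam):
        arg = 1 + z @ lam
        if np.any(arg <= 1e-10):
            return 1e10                          # keep implied weights positive
        return -np.sum(np.log(arg))
    lam = minimize(neg_dual, np.zeros(x.shape[1]), method="Nelder-Mead").x
    return 2 * np.sum(np.log(1 + z @ lam))

rng = np.random.default_rng(3)
x = rng.normal(size=(30, 2))
print(f"-2 log R = {el_ratio_stat(x, np.zeros(2)):.2f}")  # ~ chi2(2) under H0
```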

Revisiting optimization algorithms for maximum likelihood estimation

Mai, Anh Tien
Fonte: Université de Montréal Publicador: Université de Montréal
Tipo: Thèse ou Mémoire numérique / Electronic Thesis or Dissertation
EN
Relevância na Pesquisa
36.45%
Among the methods for estimating the parameters of probability distributions in statistics, maximum likelihood is one of the most popular techniques since, under mild conditions, the resulting estimators are consistent and asymptotically efficient. Maximum likelihood problems can be treated as nonlinear, possibly nonconvex, programming problems, for which two major classes of solution methods are trust-region techniques and line-search methods. Moreover, it is possible to exploit the structure of these problems in an attempt to accelerate the convergence of these methods, under certain assumptions. In this work, we revisit some classical and recently developed approaches in nonlinear optimization in the particular context of maximum likelihood estimation. We also develop new algorithms to solve this problem, reconsidering different Hessian approximation techniques, and propose new step-size computation methods, particularly within line-search algorithms. These include algorithms that allow us to change the Hessian approximation and to adapt the step length along a fixed search direction. Finally...
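The two algorithm families the thesis revisits are both available in off-the-shelf optimizers, so the contrast can be sketched directly: the same negative log-likelihood handed to a line-search quasi-Newton method and to a trust-region method. The gamma model below is illustrative.

```python
# The same negative log-likelihood optimised by a line-search method
# (BFGS) and a trust-region method (trust-constr); gamma(a, s) model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
y = rng.gamma(shape=3.0, scale=1.5, size=200)

def nll(p):
    a, s = np.exp(p)                       # log-parametrised for positivity
    return -np.sum((a - 1) * np.log(y) - y / s - gammaln(a) - a * np.log(s))

for method in ("BFGS", "trust-constr"):
    res = minimize(nll, x0=np.zeros(2), method=method)
    a_hat, s_hat = np.exp(res.x)
    print(f"{method:12s} a_hat={a_hat:.3f} s_hat={s_hat:.3f}")
```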

Likelihood inference for small variance components

Stern, Steven E; Welsh, A. H
Fonte: Universidade Nacional da Austrália Publicador: Universidade Nacional da Austrália
Tipo: Working/Technical Paper Formato: 206647 bytes; application/pdf
EN_AU
Relevância na Pesquisa
36.56%
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum likelihood estimates and that the likelihood ratio test based on the local asymptotic approximation has higher power than the likelihood ratio test based on the usual chi-squared approximation. They examine the finite sample properties of the proposed intervals by means of a simulation study.
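A sketch of the boundary phenomenon the authors study, using statsmodels' MixedLM (the local-asymptotic confidence intervals proposed in the paper are not implemented here): with a small true random-intercept variance, the ML estimate tends toward the zero boundary more often than REML.

```python
# REML vs. ML estimation of a small random-intercept variance with
# statsmodels; illustrative simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_groups, m = 30, 5
u = rng.normal(scale=0.2, size=n_groups)          # small random-effect sd
g = np.repeat(np.arange(n_groups), m)
df = pd.DataFrame({"g": g, "y": 1.0 + u[g] + rng.normal(size=n_groups * m)})

for reml in (True, False):
    fit = smf.mixedlm("y ~ 1", df, groups=df["g"]).fit(reml=reml)
    var_u = float(fit.cov_re.iloc[0, 0])          # random-intercept variance
    print(f"{'REML' if reml else 'ML  '}: {var_u:.4f}")
```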

Partitioned likelihood support and the evaluation of data set conflict

Lee, M.; Hugall, A.
Fonte: Taylor & Francis Ltd Publicador: Taylor & Francis Ltd
Tipo: Artigo de Revista Científica
Publicado em //2003 EN
Relevância na Pesquisa
36.56%
In simultaneous analyses of multiple data partitions, the trees relevant when measuring support for a clade are the optimal tree, and the best tree lacking the clade (i.e., the most reasonable alternative). The parsimony-based method of partitioned branch support (PBS) “forces” each data set to arbitrate between the two relevant trees. This value is the amount each data set contributes to clade support in the combined analysis, and can be very different from the support apparent in separate analyses. The approach used in PBS can also be employed in likelihood: a simultaneous analysis of all data retrieves the maximum likelihood tree, and the best tree without the clade of interest is also found. Each data set is fitted to the two trees and the log-likelihood difference calculated, giving “partitioned likelihood support” (PLS) for each data set. These calculations can be performed regardless of the complexity of the ML model adopted. The significance of PLS can be evaluated using a variety of resampling methods, such as the Kishino-Hasegawa test, the Shimodaira-Hasegawa test, or likelihood weights, although the appropriateness and assumptions of these tests remain debated. [Artiodactyls; cetaceans; Kishino-Hasegawa test; partitioned branch support; partitioned likelihood support; Shimodaira-Hasegawa test; Templeton test.]; M. S. Y. Lee and A. F. Hugall
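The PLS bookkeeping itself is simple once per-site log-likelihoods under the two relevant trees are available from a phylogenetics package; the arrays below are placeholders standing in for that output.

```python
# Partitioned likelihood support: for each data partition, sum the
# per-site log-likelihood differences between the ML tree and the best
# tree lacking the clade. The per-site values are placeholders here.
import numpy as np

rng = np.random.default_rng(6)
n_sites = 1000
lnL_ml_tree = -2.0 + 0.1 * rng.normal(size=n_sites)        # placeholder
lnL_alt_tree = lnL_ml_tree - 0.002 + 0.05 * rng.normal(size=n_sites)

partitions = {"mtDNA": slice(0, 600), "nuclear": slice(600, 1000)}
for name, s in partitions.items():
    pls = lnL_ml_tree[s].sum() - lnL_alt_tree[s].sum()
    print(f"PLS[{name}] = {pls:+.2f}")
```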

A conditional likelihood approach to residual maximum likelihood estimation in generalized linear models

Smyth, G.; Verbyla, A.
Fonte: Wiley Publicador: Wiley
Tipo: Artigo de Revista Científica
Publicado em //1996 EN
Relevância na Pesquisa
36.66%
Residual maximum likelihood (REML) estimation is often preferred to maximum likelihood estimation as a method of estimating covariance parameters in linear models because it takes account of the loss of degrees of freedom in estimating the mean and produces unbiased estimating equations for the variance parameters. In this paper it is shown that REML has an exact conditional likelihood interpretation, where the conditioning is on an appropriate sufficient statistic to remove dependence on the nuisance parameters. This interpretation clarifies the motivation for REML and generalizes directly to non-normal models in which there is a low dimensional sufficient statistic for the fitted values. The conditional likelihood is shown to be well defined and to satisfy the properties of a likelihood function, even though this is not generally true when conditioning on statistics which depend on parameters of interest. Using the conditional likelihood representation, the concept of REML is extended to generalized linear models with varying dispersion and canonical link. Explicit calculation of the conditional likelihood is given for the one-way layout. A saddlepoint approximation for the conditional likelihood is also derived.; Gordon K. Smyth and Arunas P. Verbyla
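The degrees-of-freedom point is easiest to see in the ordinary normal linear model, where ML divides the residual sum of squares by n and REML by n − p:

```python
# Worked example: ML vs. REML variance estimates in a normal linear model.
import numpy as np

rng = np.random.default_rng(7)
n, p = 25, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)
print(f"ML:   {rss / n:.3f}")        # biased downwards
print(f"REML: {rss / (n - p):.3f}")  # accounts for estimating beta
```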

Empirical maximum likelihood kriging: The general case

Pardo-Iguzquiza, E.; Dowd, P.
Fonte: Kluwer Academic/Plenum Publ Publicador: Kluwer Academic/Plenum Publ
Tipo: Artigo de Revista Científica
Publicado em //2005 EN
Relevância na Pesquisa
36.54%
Although linear kriging is a distribution-free spatial interpolator, its efficiency is maximal only when the experimental data follow a Gaussian distribution. Transformation of the data to normality has thus always been appealing. The idea is to transform the experimental data to normal scores, krige values in the “Gaussian domain” and then back-transform the estimates and uncertainty measures to the “original domain.” An additional advantage of the Gaussian transform is that spatial variability is easier to model from the normal scores because the transformation reduces effects of extreme values. There are, however, difficulties with this methodology, particularly, choosing the transformation to be used and back-transforming the estimates in such a way as to ensure that the estimation is conditionally unbiased. The problem has been solved for cases in which the experimental data follow some particular type of distribution. In general, however, it is not possible to verify distributional assumptions on the basis of experimental histograms calculated from relatively few data and where the uncertainty is such that several distributional models could fit equally well. For the general case, we propose an empirical maximum likelihood method in which transformation to normality is via the empirical probability distribution function. Although the Gaussian domain simple kriging estimate is identical to the maximum likelihood estimate...
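The normal-score transform the method builds on can be sketched directly: ranks mapped through the empirical CDF into Gaussian quantiles, with back-transformation through the empirical quantile function. A hedged sketch; the kriging and conditional-unbiasedness machinery of the paper is not shown.

```python
# Normal-score transform via the empirical CDF and its inverse.
import numpy as np
from scipy.stats import norm

def to_normal_scores(x):
    ranks = np.argsort(np.argsort(x)) + 1       # ranks 1..n
    return norm.ppf(ranks / (len(x) + 1))       # avoids probabilities 0 and 1

def back_transform(z, x_ref):
    # map Gaussian values back through the empirical quantiles of x_ref
    return np.quantile(x_ref, norm.cdf(z))

x = np.random.default_rng(8).lognormal(size=200)  # skewed sample
z = to_normal_scores(x)                           # "Gaussian domain" values
x_back = back_transform(z, x)                     # approximate inverse map
```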

EMLK2D: a computer program for spatial estimation using empirical maximum likelihood kriging

Pardo-Iguzquiza, E.; Dowd, P.
Fonte: Pergamon-Elsevier Science Ltd Publicador: Pergamon-Elsevier Science Ltd
Tipo: Artigo de Revista Científica
Publicado em //2005 EN
Relevância na Pesquisa
36.45%
The authors describe a Fortran-90 program for empirical maximum likelihood kriging. More efficient estimates are obtained by solving the estimation problem in the ‘Gaussian domain’ (i.e., using the normal scores of the experimental data), where the simple kriging estimate is equivalent to the maximum likelihood estimate and to the conditional expectation. The transform to normality is done using the empirical cumulative probability distribution function. A Bayesian approach is adopted to ensure a conditionally unbiased estimate, which is obtained as the mean of the posterior distribution. The posterior distribution also provides a complete specification of the probability of the variable and thus provides the basis for a more realistic evaluation of uncertainty by various methods: inverting Gaussian confidence intervals, confidence intervals measured from the posterior distribution, variance measured from the posterior distribution or intervals obtained using the likelihood ratio statistic. A detailed case study is used to demonstrate the use of the program.; http://www.sciencedirect.com/science/journal/00983004; Eulogio Pardo-Igúzquiza and Peter A. Dowd

Phylogeny of snakes (Serpentes): combining morphological and molecular data in likelihood, Bayesian and parsimony analyses

Lee, M.; Hugall, A.; Lawson, R.; Scanlon, J.
Fonte: Cambridge University Press Publicador: Cambridge University Press
Tipo: Artigo de Revista Científica
Publicado em //2007 EN
Relevância na Pesquisa
36.59%
The phylogeny of living and fossil snakes is assessed using likelihood and parsimony approaches and a dataset combining 263 morphological characters with mitochondrial (2693 bp) and nuclear (1092 bp) gene sequences. The ‘no common mechanism’ (NCMr) and ‘Markovian’ (Mkv) models were employed for the morphological partition in likelihood analyses; likelihood scores in the NCMr model were more closely correlated with parsimony tree lengths. Both models accorded relatively less weight to the molecular data than did parsimony, with the effect being milder in the NCMr model. Partitioned branch and likelihood support values indicate that the mtDNA and nuclear gene partitions agree more closely with each other than with morphology. Despite differences between data partitions in phylogenetic signal, analytic models, and relative weighting, the parsimony and likelihood analyses all retrieved the following widely accepted groups: scolecophidians, alethinophidians, cylindrophiines, macrostomatans (sensu lato) and caenophidians. Anilius alone emerged as the most basal alethinophidian; the combined analyses resulted in a novel and stable position of uropeltines and cylindrophiines as the second-most basal clade of alethinophidians. The limbed marine pachyophiids...

Likelihood reinstates Archaeopteryx as a primitive bird

Lee, M.; Worthy, T.
Fonte: The Royal Society Publicador: The Royal Society
Tipo: Artigo de Revista Científica
Publicado em //2012 EN
Relevância na Pesquisa
36.45%
The widespread view that Archaeopteryx was a primitive (basal) bird has been recently challenged by a comprehensive phylogenetic analysis that placed Archaeopteryx with deinonychosaurian theropods. The new phylogeny suggested that typical bird flight (powered by the front limbs only) either evolved at least twice, or was lost/modified in some deinonychosaurs. However, this parsimony-based result was acknowledged to be weakly supported. Maximum-likelihood and related Bayesian methods applied to the same dataset yield a different and more orthodox result: Archaeopteryx is restored as a basal bird with bootstrap frequency of 73 per cent and posterior probability of 1. These results are consistent with a single origin of typical (forelimb-powered) bird flight. The Archaeopteryx–deinonychosaur clade retrieved by parsimony is supported by more characters (which are on average more homoplasious), whereas the Archaeopteryx–bird clade retrieved by likelihood-based methods is supported by fewer characters (but on average less homoplasious). Both positions for Archaeopteryx remain plausible, highlighting the hazy boundary between birds and advanced theropods. These results also suggest that likelihood-based methods (in addition to parsimony) can be useful in morphological phylogenetics.; Michael S. Y. Lee and Trevor H. Worthy

A Bootstrap Likelihood approach to Bayesian Computation

Zhu, Weixuan; Marín Diazaraque, Juan Miguel; Leisen, Fabrizio
Fonte: Universidade Carlos III de Madrid Publicador: Universidade Carlos III de Madrid
Tipo: info:eu-repo/semantics/draft; info:eu-repo/semantics/workingPaper
Publicado em 01/09/2014 ENG
Relevância na Pesquisa
36.54%
Recently, an increasing amount of literature has focused on Bayesian computational methods to address problems with intractable likelihoods. These algorithms are known as Approximate Bayesian Computation (ABC) methods. One of the problems with these algorithms is that their performance depends on the tuning of some parameters, such as the summary statistics, the distance and the tolerance level. To bypass this problem, an alternative method based on empirical likelihood was introduced by Mengersen et al. (2013), which can be easily implemented when a set of constraints, related to the moments of the distribution, is known. However, the choice of the constraints is crucial and sometimes challenging, in the sense that it determines the convergence properties of the empirical likelihood. To overcome this problem, we propose an alternative method based on a bootstrap likelihood approach. The method is easy to implement and in some cases is faster than the other approaches. The performance of the algorithm is illustrated with examples in population genetics, time series and a recent non-explicit bivariate Beta distribution. Finally, we test the method on simulated and real random field data.
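A rough sketch of a bootstrap-likelihood surface in the spirit of Davison, Hinkley and Worton, which this kind of method builds on: first-level bootstrap replicates of the statistic play the role of parameter values, and a second-level bootstrap estimates the density of the statistic at its observed value. Purely illustrative; the paper embeds such a surface in an ABC-style computation.

```python
# Nested-bootstrap sketch of a bootstrap likelihood for a normal mean.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)
data = rng.normal(1.0, 1.0, size=100)
t_obs = data.mean()                        # observed statistic

thetas, logliks = [], []
for _ in range(200):                       # first-level bootstrap
    d1 = rng.choice(data, size=data.size, replace=True)
    # second-level bootstrap: density of the statistic "under" d1.mean()
    t2 = np.array([rng.choice(d1, size=d1.size, replace=True).mean()
                   for _ in range(100)])
    thetas.append(d1.mean())
    logliks.append(np.log(gaussian_kde(t2)(t_obs)[0]))

print(f"surface maximised near theta = {thetas[int(np.argmax(logliks))]:.2f}")
```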

Estimação de máxima verossimilhança para processo de nascimento puro espaço-temporal com dados parcialmente observados; Maximum likelihood estimation for a space-time pure birth process with missing data

Daniela Bento Fonsechi Goto
Fonte: Biblioteca Digital da Unicamp Publicador: Biblioteca Digital da Unicamp
Tipo: Dissertação de Mestrado Formato: application/pdf
Publicado em 10/09/2008 PT
Relevância na Pesquisa
36.54%
The goal of this dissertation is to study maximum likelihood estimation for spatial pure birth processes under two different sampling schemes: a) permanent observation over an interval [0, T]; b) observation of the process only after a fixed time T. Under scheme b) the birth times of the points are unknown; only their locations are observed (missing data). For the non-homogeneous pure birth process on a compact set, the likelihood function can be written through the projection method described by Garcia and Kurtz (2008), as a projection of the likelihood function. The projected likelihood can be interpreted as an expectation, and Monte Carlo methods can be used to estimate the parameters. Results on almost-sure convergence and convergence in distribution are obtained for the approximation to the maximum likelihood estimator. Simulation studies show that the approximations are adequate.
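The key computational device — writing the projected likelihood as an expectation and approximating it by Monte Carlo — can be sketched generically. Everything below (the weight function, the simulating distribution) is a placeholder, not Garcia and Kurtz's projection formula.

```python
# Generic Monte Carlo likelihood approximation: when L(theta) can be
# written as an expectation E[W(theta, U)] over simulable randomness U,
# an average over draws of U estimates it. W here is a placeholder
# standing in for the projected-likelihood integrand.
import numpy as np

rng = np.random.default_rng(10)

def mc_likelihood(theta, n_sim=5000):
    u = rng.exponential(1.0, size=n_sim)   # stand-in for latent birth times
    w = theta * np.exp(-theta * u)         # placeholder weight W(theta, u)
    return w.mean()                        # consistent as n_sim grows

grid = np.linspace(0.2, 3.0, 15)
curve = [mc_likelihood(t) for t in grid]
theta_hat = grid[int(np.argmax(curve))]
```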

Likelihood for random-effect models (invited article)

Lee, Youngjo; Nelder, John A.
Fonte: Universidade Autônoma de Barcelona Publicador: Universidade Autônoma de Barcelona
Tipo: Artigo de Revista Científica Formato: application/pdf
Publicado em //2005 ENG
Relevância na Pesquisa
36.61%
For inferences from random-effect models Lee and Nelder (1996) proposed to use hierarchical likelihood (h-likelihood). It allows inference from models that may include both fixed and random parameters. Because of the presence of unobserved random variables, h-likelihood is not a likelihood in the Fisherian sense. The Fisherian likelihood framework has advantages such as generality of application and statistical and computational efficiency. We introduce an extended likelihood framework and discuss why it is a proper extension, maintaining the advantages of the original likelihood framework. The new framework allows likelihood inferences to be drawn for a much wider class of models.
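The object at the centre of the discussion can be stated compactly; the generic definitional form (not necessarily the paper's notation) is:

```latex
% h-likelihood of Lee and Nelder, for data y and unobserved random
% effects v:
h(\theta, v) = \log f_{\theta}(y \mid v) + \log f_{\theta}(v).
% The marginal (Fisherian) likelihood integrates the random effects out:
L(\theta; y) = \int e^{h(\theta, v)} \, dv .
```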

Indirect likelihood inference (revised)

Creel, Michael; Kristensen, Dennis
Fonte: Universitat Autònoma de Barcelona. Unitat de Fonaments de l'Anàlisi Econòmica Publicador: Universitat Autònoma de Barcelona. Unitat de Fonaments de l'Anàlisi Econòmica
Tipo: Trabalho em Andamento Formato: application/pdf
Publicado em //2013 ENG
Relevância na Pesquisa
36.54%
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. Here we propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n, and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite sample properties.
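A schematic of the simulated MIL step on a toy model: simulate Z_n under each candidate parameter, estimate its density at the observed statistic by kernel smoothing, and maximize. The normal-mean model and all tuning constants are illustrative.

```python
# Simulated maximum indirect likelihood on a toy model (Z_n = sample mean).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
n = 50
z_obs = rng.normal(0.7, 1.0, size=n).mean()   # "observed" statistic

def indirect_loglik(theta, n_sim=400):
    # density of the simulated statistic, evaluated at the observed value
    z_sim = rng.normal(theta, 1.0, size=(n_sim, n)).mean(axis=1)
    return np.log(gaussian_kde(z_sim)(z_obs)[0])

grid = np.linspace(0.0, 1.5, 31)
ll = [indirect_loglik(t) for t in grid]
print(f"MIL estimate ~ {grid[int(np.argmax(ll))]:.2f}")
```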

Parallelization of the maximum likelihood approach to phylogenetic inference

Garnham, Janine
Fonte: Rochester Instituto de Tecnologia Publicador: Rochester Instituto de Tecnologia
Tipo: Tese de Doutorado
EN_US
Relevância na Pesquisa
36.54%
Phylogenetic inference refers to the reconstruction of evolutionary relationships among various species, usually presented in the form of a tree. DNA sequences are most often used to determine these relationships. The results of phylogenetic inference have many important applications, including protein function determination, drug discovery, disease tracking and forensics. There are several popular computational methods used for phylogenetic inference, among them distance-based methods (e.g., neighbor joining), maximum parsimony, maximum likelihood, and Bayesian methods. This thesis focuses on the maximum likelihood method, which is regarded as one of the most accurate methods, with its computational demand being the main hindrance to its widespread use. Maximum likelihood is generally considered to be a heuristic method providing a statistical evaluation of the results, where potential tree topologies are judged by how well they predict the observed sequences. While there have been several previous efforts to parallelize the maximum likelihood method, sequential implementations are more widely used in the biological research community. This is due to a lack of confidence in the results produced by the more recent, parallel programs. However...

On prediction intervals based on predictive likelihood or bootstrap methods

Hall, Peter; Peng, L; Tajvidi, Nader
Fonte: Biometrika Trust Publicador: Biometrika Trust
Tipo: Artigo de Revista Científica
Relevância na Pesquisa
36.5%
We argue that prediction intervals based on predictive likelihood do not correct for curvature with respect to the parameter value when they implicitly approximate an unknown probability density. Partly as a result of this difficulty, the order of coverage error associated with predictive intervals and predictive limits is equal to only the inverse of sample size. In this respect those methods do not improve on the simpler, 'naive' or 'estimative' approach. Moreover, in cases of practical importance the latter can be preferable, in terms of both the size and sign of coverage error. We show that bootstrap calibration of both naive and predictive-likelihood approaches increases coverage accuracy of prediction intervals by an order of magnitude, and, in the case of naive intervals, preserves that method's numerical and analytical simplicity. Therefore, we argue, the bootstrap-calibrated naive approach is a particularly competitive alternative to more conventional, but more complex, techniques based on predictive likelihood.
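A toy illustration of the bootstrap-calibration idea for the naive interval — choose the nominal level so that bootstrap-estimated coverage matches the target. This is a sketch of the general device, not the paper's algorithm; all names and constants are illustrative.

```python
# Bootstrap calibration of a naive normal-theory prediction interval.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12)
x = rng.normal(5.0, 2.0, size=30)
target = 0.90

def naive_pi(sample, level):
    z = norm.ppf(0.5 + level / 2)
    m, s = sample.mean(), sample.std(ddof=1)
    half = z * s * np.sqrt(1 + 1 / len(sample))
    return m - half, m + half

def boot_coverage(level, B=500):
    hits = 0
    for _ in range(B):
        xb = rng.choice(x, size=x.size, replace=True)  # resampled "data"
        lo, hi = naive_pi(xb, level)
        xf = rng.choice(x, size=1)[0]                  # proxy future value
        hits += (lo <= xf <= hi)
    return hits / B

# pick the nominal level whose bootstrap coverage is closest to the target
levels = np.linspace(0.80, 0.99, 20)
cal = min(levels, key=lambda l: abs(boot_coverage(l) - target))
print(f"calibrated nominal level: {cal:.3f}")
```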