Page 1 of results: 207 digital items found in 0.007 seconds

The log-exponentiated Weibull regression model for interval-censored data

HASHIMOTO, Elizabeth M.; ORTEGA, Edwin M. M.; CANCHO, Vicente G.; CORDEIRO, Gauss M.
Source: ELSEVIER SCIENCE BV Publisher: ELSEVIER SCIENCE BV
Type: Journal Article
ENG
Search Relevance
26.44%
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family of models that includes other regression models broadly used in lifetime data analysis. Assuming the use of interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes and present some ways to assess global influences. Furthermore, for different parameter settings, sample sizes and censoring percentages, various simulations are performed; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data. (C) 2009 Elsevier B.V. All rights reserved.; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
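The interval-censored likelihood at the core of such parametric models is simple to sketch. Below is a minimal illustration, not the authors' regression model: maximum-likelihood fitting of a three-parameter exponentiated Weibull distribution to interval-censored observations, where each observation contributes F(R) - F(L) to the likelihood. The parameter names and toy intervals are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def ew_cdf(t, alpha, kappa, sigma):
    """Exponentiated Weibull CDF: F(t) = [1 - exp(-(t/sigma)^kappa)]^alpha."""
    t = np.asarray(t, dtype=float)
    return (1.0 - np.exp(-(t / sigma) ** kappa)) ** alpha

def neg_loglik(log_params, left, right):
    """Each interval-censored observation contributes log{F(R_i) - F(L_i)};
    right = np.inf encodes a right-censored observation (F(inf) = 1)."""
    alpha, kappa, sigma = np.exp(log_params)          # keep parameters positive
    p_right = np.where(np.isinf(right), 1.0, ew_cdf(right, alpha, kappa, sigma))
    p = p_right - ew_cdf(left, alpha, kappa, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# toy data: the event is only known to lie in (L, R]; inf marks right censoring
left  = np.array([0.5, 1.0, 2.0, 0.0, 3.0, 1.5])
right = np.array([1.5, 2.5, 4.0, 1.0, np.inf, 3.0])

fit = minimize(neg_loglik, x0=np.zeros(3), args=(left, right), method="Nelder-Mead")
alpha_hat, kappa_hat, sigma_hat = np.exp(fit.x)
print(f"alpha = {alpha_hat:.3f}, kappa = {kappa_hat:.3f}, sigma = {sigma_hat:.3f}")
```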

Testing hypotheses in the Birnbaum-Saunders distribution under type-II censored samples

LEMONTE, Artur J.; FERRARI, Silvia L. P.
Source: ELSEVIER SCIENCE BV Publisher: ELSEVIER SCIENCE BV
Type: Journal Article
ENG
Search Relevance
36.44%
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require obtaining, estimating or inverting an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented. (C) 2011 Elsevier B.V. All rights reserved.; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
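For readers unfamiliar with the gradient statistic, the sketch below (an illustration under stated assumptions, not the authors' code) compares it with the likelihood ratio statistic for testing H0: alpha = alpha0 about the Birnbaum-Saunders shape parameter from a type-II right-censored sample. The censoring fraction, alpha0, the log-scale parametrization, and the simulated data are all illustrative choices.

```python
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import minimize, approx_fprime

rng = np.random.default_rng(1)

def bs_sample(n, alpha, beta):
    """Birnbaum-Saunders variates via the standard normal representation."""
    w = alpha * rng.standard_normal(n) / 2.0
    return beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

def loglik(theta, t_obs, n):
    """Type-II censored log-likelihood: the r smallest failure times are observed
    and (n - r) items survive past the largest observed time."""
    alpha, beta = np.exp(theta)                        # work on the log scale
    r, t_r = len(t_obs), t_obs.max()
    xi = (np.sqrt(t_obs / beta) - np.sqrt(beta / t_obs)) / alpha
    log_jac = np.log((np.sqrt(t_obs / beta) + np.sqrt(beta / t_obs)) / (2.0 * alpha * t_obs))
    xi_r = (np.sqrt(t_r / beta) - np.sqrt(beta / t_r)) / alpha
    return np.sum(norm.logpdf(xi) + log_jac) + (n - r) * norm.logsf(xi_r)

def fit(t_obs, n, fixed_alpha=None):
    """Maximize the censored log-likelihood, optionally fixing alpha (the H0 fit)."""
    if fixed_alpha is None:
        res = minimize(lambda th: -loglik(th, t_obs, n),
                       x0=np.log([0.5, np.median(t_obs)]), method="Nelder-Mead")
        return res.x, -res.fun
    res = minimize(lambda lb: -loglik(np.array([np.log(fixed_alpha), lb[0]]), t_obs, n),
                   x0=[np.log(np.median(t_obs))], method="Nelder-Mead")
    return np.array([np.log(fixed_alpha), res.x[0]]), -res.fun

n, r, alpha0 = 30, 20, 0.5
t = np.sort(bs_sample(n, alpha=0.5, beta=2.0))[:r]          # type-II censored sample

theta_hat, l1 = fit(t, n)                                   # unrestricted MLE
theta_tilde, l0 = fit(t, n, fixed_alpha=alpha0)             # restricted MLE under H0
lr_stat = 2.0 * (l1 - l0)                                   # likelihood ratio statistic
score = approx_fprime(theta_tilde, lambda th: loglik(th, t, n), 1e-6)
grad_stat = float(score @ (theta_hat - theta_tilde))        # gradient statistic
print(f"LR = {lr_stat:.3f}, gradient = {grad_stat:.3f}, "
      f"chi2(1) 5% critical value = {chi2.ppf(0.95, 1):.3f}")
```

The appeal noted in the abstract is visible here: neither statistic requires estimating or inverting an information matrix.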

A Bayesian MCMC approach to survival analysis with doubly-censored data

Yu, Binbing
Source: PubMed Publisher: PubMed
Type: Journal Article
Published on 01/08/2010 EN
Search Relevance
26.49%
Doubly-censored data refers to time to event data for which both the originating and failure times are censored. In studies involving AIDS incubation time or survival after dementia onset, for example, data are frequently doubly-censored because the date of the originating event is interval-censored and the date of the failure event usually is right-censored. The primary interest is in the distribution of elapsed times between the originating and failure events and its relationship to exposures and risk factors. The estimating equation approach [Sun, et al. 1999. Regression analysis of doubly censored failure time data with applications to AIDS studies. Biometrics 55, 909-914] and its extensions assume the same distribution of originating event times for all subjects. This paper demonstrates the importance of utilizing additional covariates to impute originating event times, i.e., more accurate estimation of originating event times may lead to less biased parameter estimates for elapsed time. The Bayesian MCMC method is shown to be a suitable approach for analyzing doubly-censored data and allows a rich class of survival models. The performance of the proposed estimation method is compared to that of other conventional methods through simulations. Two examples...

Analysis of Two-sample Censored Data Using a Semiparametric Mixture Model

Li, Gang; Lin, Chien-tai
Source: PubMed Publisher: PubMed
Type: Journal Article
Published in 07/2009 EN
Search Relevance
26.38%
In this article we study a semiparametric mixture model for the two-sample problem with right censored data. The model implies that the densities for the continuous outcomes are related by a parametric tilt but otherwise unspecified. It provides a useful alternative to the Cox (1972) proportional hazards model for the comparison of treatments based on right censored survival data. We propose an iterative algorithm for the semiparametric maximum likelihood estimates of the parametric and nonparametric components of the model. The performance of the proposed method is studied using simulation. We illustrate our method in an application to melanoma.

Imputation methods for doubly censored HIV data

Zhang, Wei; Zhang, Ying; Chaloner, Kathryn; Stapleton, Jack T.
Source: PubMed Publisher: PubMed
Type: Journal Article
Published on 01/10/2009 EN
Search Relevance
26.38%
In medical research, it is common to have doubly censored survival data: origin time and event time are both subject to censoring. In this paper, we review simple and probability-based methods that are used to impute interval censored origin time and compare the performance of these methods through extensive simulations in the one-sample problem, two-sample problem and Cox regression model problem. The use of a bootstrap procedure for inference is demonstrated.
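As a concrete point of reference for the simple imputation schemes mentioned above, here is a hedged sketch: midpoint versus single uniform-draw imputation of an interval-censored origin time, followed by a nonparametric bootstrap of the mean elapsed time. The toy intervals, variable names, and the choice of the mean as the target are assumptions for illustration, not the paper's procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

# origin known only to lie in (orig_left, orig_right]; failure time observed exactly
orig_left  = np.array([0.0, 2.0, 1.0, 3.0, 0.5])
orig_right = np.array([1.5, 4.0, 2.5, 5.0, 2.0])
fail_time  = np.array([6.0, 9.0, 7.5, 11.0, 8.0])

# midpoint imputation of the origin, then elapsed time = failure - origin
elapsed_mid = fail_time - (orig_left + orig_right) / 2.0

# stochastic imputation: draw the origin uniformly within its interval
elapsed_unif = fail_time - rng.uniform(orig_left, orig_right)
print("midpoint-imputed elapsed times:", elapsed_mid)
print("uniform-imputed elapsed times: ", np.round(elapsed_unif, 2))

# bootstrap the mean elapsed time, re-imputing the origin in every resample
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(fail_time), len(fail_time))
    origins = rng.uniform(orig_left[idx], orig_right[idx])
    boot.append(np.mean(fail_time[idx] - origins))
print("bootstrap 95% CI for the mean elapsed time:",
      np.round(np.percentile(boot, [2.5, 97.5]), 2))
```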

Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

Wu, Jong-Wuu; Chen, Sheau-Chiann; Lee, Wen-Chuan; Lai, Heng-Yi
Source: Hindawi Publishing Corporation Publisher: Hindawi Publishing Corporation
Type: Journal Article
Published on 26/06/2013 EN
Search Relevance
46.25%
We propose the weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results identify the best estimator among the WMEs and the BLUE under different combinations of censoring schemes.

Sample size requirements for training high-dimensional risk predictors

Dobbin, Kevin K.; Song, Xiao
Source: Oxford University Press Publisher: Oxford University Press
Type: Journal Article
EN
Search Relevance
26.38%
A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed.

Sample size determination for paired right-censored data based on the difference of Kaplan-Meier estimates

Su, Pei-Fang; Li, Chung-I; Shyr, Yu
Source: PubMed Publisher: PubMed
Type: Journal Article
EN
Search Relevance
26.51%
Sample size determination is essential to planning clinical trials. Jung (2008) established a sample size calculation formula for paired right-censored data based on the logrank test, which has been well-studied for comparing independent survival outcomes. An alternative to rank-based methods for independent right-censored data, advocated by Pepe and Fleming (1989), tests for differences between integrated weighted Kaplan-Meier estimates and is more sensitive to the magnitude of difference in survival times between groups. In this paper, we employ the concept of the Pepe-Fleming method to determine an adequate sample size by calculating differences between Kaplan-Meier estimators considering pair-wise correlation. We specify a positive stable frailty model for the joint distribution of paired survival times. We evaluate the performance of the proposed method by simulation studies and investigate the impacts of the accrual times, follow-up times, loss to follow-up rate, and sensitivity of power under misspecification of the model. The results show that ignoring the pair-wise correlation results in overestimating the required sample size. Furthermore, the proposed method is applied to two real-world studies, and the R code for sample size calculation is made available to users.
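To make the Pepe-Fleming idea referenced above concrete, the sketch below computes Kaplan-Meier curves for two independent right-censored samples and integrates their (unweighted) difference over a fixed window. It is only a building-block illustration: the weighting, the positive stable frailty model for paired data, and the sample-size formula itself are not reproduced, and the simulated data are assumptions.

```python
import numpy as np

def kaplan_meier(time, event):
    """Return the distinct event times and the KM survival estimate just after each."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        surv *= 1.0 - deaths / at_risk
        out_t.append(t)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)

def integrated_km_difference(t1, e1, t2, e2, tau, n_grid=4000):
    """Integrate S1(t) - S2(t) over [0, tau] using the step-function KM estimates."""
    grid = np.linspace(0.0, tau, n_grid)
    def step_eval(times, surv):
        s = np.ones_like(grid)
        for t, v in zip(times, surv):
            s[grid >= t] = v
        return s
    s1, s2 = step_eval(*kaplan_meier(t1, e1)), step_eval(*kaplan_meier(t2, e2))
    return np.sum(s1 - s2) * (grid[1] - grid[0])          # simple Riemann sum

rng = np.random.default_rng(3)
t1, t2 = rng.exponential(2.0, 50), rng.exponential(1.5, 50)   # true survival differs
c1, c2 = rng.exponential(4.0, 50), rng.exponential(4.0, 50)   # independent censoring
obs1, ev1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
obs2, ev2 = np.minimum(t2, c2), (t2 <= c2).astype(int)
print("integrated KM difference over [0, 3]:",
      round(integrated_km_difference(obs1, ev1, obs2, ev2, tau=3.0), 3))
```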

Median Tests for Censored Survival Data: A Contingency Table Approach

Tang, Shaowu; Jeong, Jong-Hyeon
Source: PubMed Publisher: PubMed
Type: Journal Article
Published in 09/2012 EN
Search Relevance
26.43%
The median failure time is often utilized to summarize survival data because it has a more straightforward interpretation for investigators in practice than the popular hazard function. However, existing methods for comparing median failure times for censored survival data either require estimation of the probability density function or involve complicated formulas to calculate the variance of the estimates. In this article, we modify a K-sample median test for censored survival data (Brookmeyer and Crowley, 1982, Journal of the American Statistical Association 77, 433–440) through a simple contingency table approach where each cell counts the number of observations in each sample that are greater than the pooled median or vice versa. Under censoring, this approach would generate noninteger entries for the cells in the contingency table. We propose to construct a weighted asymptotic test statistic that aggregates dependent χ2-statistics formed at the nearest integer points to the original noninteger entries. We show that this statistic follows approximately a χ2-distribution with k − 1 degrees of freedom. For a small sample case, we propose a test statistic based on combined p-values from Fisher’s exact tests, which follows a χ2-distribution with 2 degrees of freedom. Simulation studies are performed to show that the proposed method provides reasonable type I error probabilities and powers. The proposed method is illustrated with two real datasets from phase III breast cancer clinical trials.
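The contingency-table idea is easy to see in a simplified, fully observed setting, sketched below: classify each observation as above or at-or-below the pooled median and test homogeneity across the k samples with an ordinary chi-squared test on the resulting two-column table. The authors' weighting of dependent chi-squared statistics at nearest-integer cell counts, which is needed once censoring produces non-integer entries, is not reproduced; the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)
# three fully observed samples (k = 3); no censoring in this simplified sketch
samples = [rng.exponential(1.0, 40), rng.exponential(1.3, 40), rng.exponential(2.0, 40)]

pooled_median = np.median(np.concatenate(samples))
# one row per sample: counts above vs. at-or-below the pooled median
table = np.array([[np.sum(s > pooled_median), np.sum(s <= pooled_median)]
                  for s in samples])

stat, pval, dof, _ = chi2_contingency(table)
print(f"chi2 = {stat:.3f} on {dof} df (k - 1 = {len(samples) - 1}), p = {pval:.4f}")
```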

Sample Size Planning For Detecting Treatment Effect On Biomarker Subset

Kazimer, Scarlett
Source: Queen's University Publisher: Queen's University
Type: Doctoral Thesis
EN
Search Relevance
36.14%
A vast amount of research has been done on treatment and biomarker interactions, with the hope of identifying a subset of patients that will respond better to new treatments. In particular, Jiang et al. (2007) applied a threshold model and design to determine which patients were sensitive to new treatments using defined biomarkers. However, they assumed that the model contained an unknown cutpoint on the biomarker. Based on the sample size formulas developed by Schoenfeld (1983), this work develops a sample size formula for a biomarker threshold model with a known cutpoint c. Partial likelihood methods and the Wald statistic, along with asymptotic theory, are used to derive this formula by testing for the interaction effect of the treatment and the biomarker. Simulation in R is used to evaluate the accuracy of the sample size formula. The proposed formula can be extended to deal with censored data as well, and provides an accurate and easy tool to determine the sample size of a study for detecting a treatment effect on a biomarker-defined patient subset.
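For context, the sketch below evaluates the classical Schoenfeld (1983) relationship that this kind of sample-size work builds on: the number of events required for a two-arm comparison of a log hazard ratio. It is not the thesis's interaction formula for a biomarker-defined subset; the hazard ratio, error rates, and allocation fraction are illustrative.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Required events: D = (z_{1-a/2} + z_{power})^2 / (p (1 - p) (log hr)^2)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (allocation * (1 - allocation) * math.log(hr) ** 2)

events = schoenfeld_events(hr=0.7)
print(f"events needed for HR = 0.7 at 80% power: {math.ceil(events)}")
# dividing by the expected event probability converts events into a total sample size
```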

A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data

Hartley, Michael A.
Source: Monterey California. Naval Postgraduate School Publisher: Monterey California. Naval Postgraduate School
Type: Doctoral Thesis
Search Relevance
26.43%
Approved for public release; distribution is unlimited.; Budget limitations have reduced the number of military components available for testing, and time constraints have reduced the amount of time available for actual testing, resulting in many items still operating at the end of test cycles. These two factors produce small test populations (small sample size) with "heavily" censored data. The assumption of "normal approximation" for estimates based on these small sample sizes reduces the accuracy of confidence bounds of the probability plots and the associated quantities. This creates a problem in acquisition analysis because the confidence in the probability estimates influences the number of spare parts required to support a mission or deployment or determines the length of warranty ensuring proper operation of systems. This thesis develops a method that simulates small samples with censored data and examines the error of the Fisher-Matrix (FM) and the Likelihood Ratio Bounds (LRB) confidence methods for two test populations (sizes 10 and 20) with three, five, seven and nine observed failures for the Weibull distribution. This thesis includes a Monte Carlo simulation code written in S-Plus that can be modified by the user to meet their particular needs for any sampling and censoring scheme. To illustrate the approach...

Exact goodness-of-fit tests for censored data

Grané, Aurea
Source: Universidad Carlos III de Madrid Publisher: Universidad Carlos III de Madrid
Type: Working Paper Format: text/plain; application/octet-stream; application/pdf
Published in 05/2009 ENG
Search Relevance
46.38%
The statistic introduced in Fortiana and Grané (2003) is modified so that it can be used to test the goodness-of-fit of a censored sample, when the distribution function is fully specified. Exact and asymptotic distributions of three modified versions of this statistic are obtained and exact critical values are given for different sample sizes. Empirical power studies show the good performance of these statistics in detecting symmetrical alternatives.

Estimating Correlation with Multiply Censored Data Arising from the Adjustment of Singly Censored Data

Newton, Elizabeth; Rudel, Ruthann
Source: PubMed Publisher: PubMed
Type: Journal Article
Published on 01/01/2007 EN
Search Relevance
26.63%
Environmental data frequently are left censored due to detection limits of laboratory assay procedures. Left censored means that some of the observations are known only to fall below a censoring point (detection limit). This presents difficulties in statistical analysis of the data. In this paper, we examine methods for estimating the correlation between variables each of which is censored at multiple points. Multiple censoring frequently arises due to adjustment of singly censored laboratory results for physical sample size. We discuss maximum likelihood (ML) estimation of the correlation and introduce a new method (cp.mle2) that, instead of using the multiply censored data directly, relies on ML estimates of the covariance of the singly censored laboratory data. We compare the ML methods with Kendall's tau-b (ck.taub), which is a modification of Kendall's tau adjusted for ties, and several commonly used simple substitution methods: correlations estimated with non-detects set to the detection limit divided by two and correlations based on detects only (cs.det) with non-detects set to missing. The methods are compared based on simulations and real data. In the simulations, censoring levels are varied from 0 to 90%, ρ from -0.8 to 0.8 and ν (variance of physical sample size) is set to 0 and 0.5...
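Two of the simpler estimators compared above are easy to sketch for a single detection limit per variable: a substitution estimator that sets non-detects to half the detection limit before computing a correlation, and Kendall's tau-b computed directly on the censored values (tau-b accounts for the ties created at the detection limit). The maximum-likelihood estimators the paper focuses on are not reproduced here, and the detection limits and simulated lognormal data are assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(11)
z = rng.multivariate_normal([1.0, 1.0], [[1.0, 0.6], [0.6, 1.0]], size=200)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])        # lognormal "concentrations"
dl_x = dl_y = 1.5                              # detection limits (single DL here)

# substitution method: report non-detects at DL / 2, then a Pearson correlation
x_sub = np.where(x < dl_x, dl_x / 2.0, x)
y_sub = np.where(y < dl_y, dl_y / 2.0, y)
pearson_sub = np.corrcoef(x_sub, y_sub)[0, 1]

# Kendall's tau-b on the censored values (non-detects tied at the detection limit)
x_cens = np.where(x < dl_x, dl_x, x)
y_cens = np.where(y < dl_y, dl_y, y)
tau_b, _ = kendalltau(x_cens, y_cens)

print(f"non-detect fractions: {np.mean(x < dl_x):.2f}, {np.mean(y < dl_y):.2f}")
print(f"substitution (DL/2) Pearson r = {pearson_sub:.3f}, Kendall tau-b = {tau_b:.3f}")
```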

Empirical Cumulative Density Function from a Univariate Censored Sample

Markov, Plamen
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Search Relevance
36.31%
Let F be an unknown univariate distribution function to be estimated from a sample containing censored observations, and let tau be in dom(F). The author has derived a novel nonparametric estimator F_hat for F without making any assumptions regarding the nature of the censoring mechanism or the distribution function F. The distribution of F_hat(tau) can be easily and accurately estimated even for small sample sizes. The estimator F_hat has significantly outperformed the Kaplan-Meier estimator in a simulation study with exponential and lognormal distribution functions F and a censoring mechanism defined by i.i.d. uniform random observation points.

Data Transformations and Goodness-of-Fit Tests for Type-II Right Censored Samples

Goldmann, Christian; Klar, Bernhard; Meintanis, Simos G.
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 11/12/2013
Search Relevance
36.38%
We suggest several goodness-of-fit methods which are appropriate with Type-II right censored data. Our strategy is to transform the original observations from a censored sample into an approximately i.i.d. sample of normal variates and then perform a standard goodness-of-fit test for normality on the transformed observations. A simulation study with several well known parametric distributions under testing reveals the sampling properties of the methods. We also provide theoretical analysis of the proposed method.

Bayesian prediction of minimal repair times of a series system based on hybrid censored sample of components' lifetimes under Rayleigh distribution

MirMostafaee, S. M. T. K.; Amini, Morteza; Asgharzadeh, A.
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 24/05/2015
Search Relevance
46.25%
In this paper, we develop Bayesian predictive inferential procedures for prediction of repair times of a series system, applying a minimal repair strategy, using the information contained in an independently observed hybrid censored sample of the lifetimes of the components of the system, assuming the underlying distribution of the lifetimes to be the Rayleigh distribution. A real data example and a simulation study are presented to illustrate and compare the proposed predictors.

Weighted empirical likelihood in some two-sample semiparametric models with various types of censored data

Ren, Jian-Jian
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 12/03/2008
Search Relevance
26.63%
In this article, the weighted empirical likelihood is applied to a general setting of two-sample semiparametric models, which includes biased sampling models and case-control logistic regression models as special cases. For various types of censored data, such as right censored data, doubly censored data, interval censored data and partly interval-censored data, the weighted empirical likelihood-based semiparametric maximum likelihood estimator $(\tilde{\theta}_n,\tilde{F}_n)$ for the underlying parameter $\theta_0$ and distribution $F_0$ is derived, and the strong consistency of $(\tilde{\theta}_n,\tilde{F}_n)$ and the asymptotic normality of $\tilde{\theta}_n$ are established. Under biased sampling models, the weighted empirical log-likelihood ratio is shown to have an asymptotic scaled chi-squared distribution for the aforementioned censored data. For right censored data, doubly censored data and partly interval-censored data, it is shown that $\sqrt{n}(\tilde{F}_n-F_0)$ weakly converges to a centered Gaussian process, which leads to a consistent goodness-of-fit test for the case-control logistic regression models.; Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/009053607000000695

The Anderson-Darling test of fit for the power law distribution from left censored samples

Coronel-Brizio, H. F.; Hernandez-Montoya, A. R.
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 03/04/2010
Search Relevance
36.15%
Maximum likelihood estimation and a test of fit based on the Anderson-Darling statistic are presented for the case of the power law distribution when the parameters are estimated from a left-censored sample. Expressions for the maximum likelihood estimators and tables of asymptotic percentage points for the A^2 statistic are given. The technique is illustrated for data from the Dow Jones Industrial Average index, an example of high theoretical and practical importance in Econophysics, Finance, Physics, Biology and, in general, in other related Sciences such as Complexity Sciences.

Estimation of $P(Y < X)$

Elfattah, A. M. Abd; Marwa, O. Mohamed
Source: Cornell University Publisher: Cornell University
Type: Journal Article
Published on 07/01/2008
Search Relevance
36.15%
In this article, the estimation of the reliability of a system, $P(Y < X)$, is discussed...

Managerial Decision Making in Censored Environments: Biased Judgment of Demand, Risk, and Employee Capability

Feiler, Daniel C.
Source: Duke University Publisher: Duke University
Type: Dissertation
Published in 2012
Search Relevance
36.45%
Individuals have the tendency to believe that they have complete information when making decisions. In many contexts this propensity allows for swift, efficient, and generally effective decision making. However, individuals cannot always see a representative picture of the world in which they operate. This paper examines judgment in censored environments where a constraint, the censorship point, systematically distorts the sample observed by a decision maker. Random instances beyond the censorship point are observed at the censorship point, while instances below the censorship point are observed at their true value. Many important managerial decisions occur in censored environments, such as inventory, risk-taking, and employee evaluation decisions. This empirical work demonstrates a censorship bias - individuals tend to rely too heavily on the observed censored sample, biasing their beliefs about the underlying population. Further, the censorship bias is exacerbated for higher rates of censorship, higher variance in the population, and higher variability in the censorship points. Evidence from four studies demonstrates how the censorship bias can cause managers to underestimate demand for their goods, over-estimate risk in their environments...
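A few lines of simulation make the mechanism concrete: when latent demand above the stocking level is only ever observed at that level, the naive average of the observed sample is biased low. This is a toy illustration of the censored-environment setting described above, not the dissertation's experiments; the demand distribution and stocking level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
true_demand = rng.normal(100, 30, size=10_000)     # latent demand
stock = 110                                        # censorship point
observed = np.minimum(true_demand, stock)          # sales can never exceed stock

print(f"true mean demand:       {true_demand.mean():6.1f}")
print(f"naive mean of observed: {observed.mean():6.1f}  <- biased low")
print(f"share of censored days: {np.mean(true_demand > stock):.1%}")
```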