Page 1 of results: 15,704 digital items found in 0.019 seconds

Improved testing inference in mixed linear models

MELO, Tatiane F. N.; FERRARI, Silvia L. P.; CRIBARI-NETO, Francisco
Fonte: ELSEVIER SCIENCE BV Publicador: ELSEVIER SCIENCE BV
Tipo: Artigo de Revista Científica
ENG
Relevância na Pesquisa
36.58%
Mixed linear models are commonly used in repeated measures studies. They account for the dependence amongst observations obtained from the same experimental unit. Often, the number of observations is small, and it is thus important to use inference strategies that incorporate small sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test, and also to a test obtained from a modified profile likelihood function. Our results generalize those in [Zucker, D.M., Lieberman, O., Manor, O., 2000. Improved small sample inference in the mixed linear model: Bartlett correction and adjusted likelihood. Journal of the Royal Statistical Society B, 62, 827-838] by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for nonlinear covariance matrix structures for the random effects. We report simulation results which show that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed. (C) 2008 Elsevier B.V. All rights reserved.; Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
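
For readers unfamiliar with the technique, a Bartlett correction rescales the likelihood ratio statistic so that its mean agrees with that of its limiting chi-squared distribution, improving the finite-sample accuracy of the test. A generic sketch of the idea (not the specific correction derived in this paper) is:

```latex
% Likelihood ratio statistic LR testing q restrictions, sample size n.
% The correction term c is model-specific and of order O(n^{-1}).
\[
  LR \xrightarrow{d} \chi^2_q, \qquad
  \mathrm{E}[LR] = q\Big(1 + \tfrac{c}{q}\Big) + O(n^{-2}),
\]
\[
  LR^{*} = \frac{LR}{1 + c/q},
\]
% The corrected statistic LR^{*} follows the chi-squared_q reference
% distribution with error O(n^{-2}) instead of O(n^{-1}).
```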

Inferência de redes de regulação gênica utilizando o paradigma de crescimento de sementes; Inference of gene regulatory networks using the seed growing paradigm

Higa, Carlos Henrique Aguena
Fonte: Biblioteca Digitais de Teses e Dissertações da USP Publicador: Biblioteca Digitais de Teses e Dissertações da USP
Tipo: Tese de Doutorado Formato: application/pdf
Publicado em 17/02/2012 PT
Relevância na Pesquisa
36.58%
An important problem in Systems Biology is the inference of gene regulatory networks. Scientific and technological advances allow us to analyze the gene expression of thousands of genes simultaneously. By "gene expression" we mean the mRNA level within a cell. Given this large amount of data, mathematical, statistical, and computational methods have been developed with the goal of elucidating the gene regulation mechanisms present in living organisms. To this end, mathematical models of gene regulatory networks have been proposed, as well as algorithms to infer these networks. In this work we focus on both aspects: modeling and inference. Regarding modeling, we study existing models for the cell cycle of the yeast Saccharomyces cerevisiae. Following this study, we propose a model based on context-sensitive probabilistic Boolean networks, and then a refinement of this model using non-homogeneous Markov chains. We present results comparing our models with the models studied. Regarding inference, we propose a new algorithm based on the gene seed-growing paradigm. In this context...
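
To make the modeling framework concrete, the sketch below simulates a tiny context-sensitive probabilistic Boolean network: at each step one context (a set of Boolean update rules) is selected at random and applied synchronously to the current state. The genes, rules, and probabilities are hypothetical and purely illustrative; they are not the thesis model for the yeast cell cycle.

```python
import random

# Each context maps gene -> Boolean update function of the whole state.
# Genes and rules below are hypothetical, for illustration only.
CONTEXT_A = {
    "g1": lambda s: s["g2"] and not s["g3"],
    "g2": lambda s: s["g1"],
    "g3": lambda s: not s["g1"],
}
CONTEXT_B = {
    "g1": lambda s: s["g2"],
    "g2": lambda s: s["g1"] or s["g3"],
    "g3": lambda s: s["g2"],
}
CONTEXTS = [(0.7, CONTEXT_A), (0.3, CONTEXT_B)]  # context selection probabilities


def step(state):
    """Choose a context according to its probability, then update all genes synchronously."""
    r, acc = random.random(), 0.0
    for prob, ctx in CONTEXTS:
        acc += prob
        if r <= acc:
            return {gene: rule(state) for gene, rule in ctx.items()}
    return state


state = {"g1": True, "g2": False, "g3": True}
for _ in range(5):
    state = step(state)
    print(state)
```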

Tuning of fuzzy inference systems through unconstrained optimization techniques

Flauzino, Rogerio A.; Ulson, Jose Alfredo Covolan; Da Silva, Ivan Nunes
Fonte: Universidade Estadual Paulista Publicador: Universidade Estadual Paulista
Tipo: Conferência ou Objeto de Conferência Formato: 417-422
ENG
Relevância na Pesquisa
36.63%
This paper presents a new methodology for the adjustment of fuzzy inference systems. A novel approach based on unconstrained optimization techniques is developed to adjust the free parameters of the fuzzy inference system, such as the intrinsic parameters of its membership functions and the weights of its inference rules. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality concerning the kind of fuzzy inference system used. It is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The presented methodology is validated through time series estimation; more specifically, the Mackey-Glass chaotic time series is used for the validation.
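
As a concrete illustration of tuning a fuzzy inference system with unconstrained optimization, the sketch below fits the parameters of two Gaussian membership functions and the rule consequents of a zero-order Takagi-Sugeno system to a toy one-dimensional target using scipy.optimize.minimize. The rule base, target function, and initial guesses are assumptions for illustration; the paper's own setup (Mackey-Glass estimation) is more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: approximate y = sin(x) on [0, 2*pi] with a two-rule,
# zero-order Takagi-Sugeno system using Gaussian membership functions.
x = np.linspace(0.0, 2 * np.pi, 100)
y = np.sin(x)


def fis_output(params, x):
    """params = [c1, s1, w1, c2, s2, w2]: centers, spreads, and rule consequents."""
    c1, s1, w1, c2, s2, w2 = params
    mu1 = np.exp(-0.5 * ((x - c1) / s1) ** 2)
    mu2 = np.exp(-0.5 * ((x - c2) / s2) ** 2)
    return (mu1 * w1 + mu2 * w2) / (mu1 + mu2 + 1e-12)  # weighted-average defuzzification


def objective(params):
    return np.mean((fis_output(params, x) - y) ** 2)     # mean squared error


init = np.array([1.0, 1.0, 0.5, 5.0, 1.0, -0.5])          # rough initial guesses
result = minimize(objective, init, method="BFGS")         # unconstrained optimization
print("tuned parameters:", result.x, "MSE:", result.fun)
```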

Efficient parametric adjustment of fuzzy inference system using unconstrained optimization

Da Silva, Ivan Nunes; Flauzino, Rogério Andrade
Fonte: Universidade Estadual Paulista Publicador: Universidade Estadual Paulista
Tipo: Conferência ou Objeto de Conferência Formato: 399-406
ENG
Relevância na Pesquisa
36.63%
This paper presents a new methodology for the adjustment of fuzzy inference systems, using a technique based on the error back-propagation method. The free parameters of the fuzzy inference system, such as the intrinsic parameters of its membership functions and the weights of its inference rules, are automatically adjusted. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality concerning the kind of fuzzy inference system used. It is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The presented methodology is validated through time series estimation and a mathematical modeling problem; more specifically, the Mackey-Glass chaotic time series is used for the validation. © Springer-Verlag Berlin Heidelberg 2007.

Enabling network inference methods to handle missing data and outliers

Folch-Fortuny, Abel; Villaverde, Alejandro F.; Ferrer, Alberto; Banga, Julio R.
Fonte: BioMed Central Publicador: BioMed Central
Tipo: Artigo de Revista Científica
Publicado em //2015 ENG
Relevância na Pesquisa
36.7%
The inference of complex networks from data is a challenging problem in the biological sciences, as well as in a wide range of disciplines such as chemistry, technology, economics, or sociology. The quantity and quality of the data greatly affect the results. While many methodologies have been developed for this task, they seldom take into account issues such as missing data or outlier detection and correction, which need to be properly addressed before network inference. Results: Here we present an approach to (i) handle missing data and (ii) detect and correct outliers based on multivariate projection to latent structures. The method, called trimmed scores regression (TSR), enables network inference methods to analyse incomplete datasets by imputing the missing values coherently with the latent data structure. Furthermore, it replaces faulty values in a dataset with proper estimates. We provide an implementation of this approach and show how it can be integrated with any network inference method as a preliminary data curation step. This functionality is demonstrated with a state-of-the-art network inference method based on mutual information distance and entropy reduction, MIDER. Conclusion: The methodology presented here enables network inference methods to analyse a large number of incomplete and faulty datasets that could not be reliably analysed so far. Our comparative studies show the superiority of TSR over other missing data approaches used by practitioners. Furthermore...
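
The sketch below conveys the general idea of imputing missing values coherently with a latent (projection-based) data structure by alternating between model fitting and imputation. It is a simplified PCA-based imputation loop on assumed toy data, not the exact TSR algorithm described in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA


def pca_impute(X, n_components=2, n_iter=50):
    """Iteratively replace NaNs with values predicted by a low-rank PCA model."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(filled)
        reconstruction = pca.inverse_transform(scores)
        filled[missing] = reconstruction[missing]           # update only the missing cells
    return filled


# Example: a small matrix with a few missing entries.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X[3, 1] = X[7, 4] = np.nan
print(pca_impute(X)[[3, 7], [1, 4]])
```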

Type inference for conversation types

Lourenço, Maria Luísa Sobreira Gouveia
Fonte: Faculdade de Ciências e Tecnologia Publicador: Faculdade de Ciências e Tecnologia
Tipo: Dissertação de Mestrado
Publicado em //2009 ENG
Relevância na Pesquisa
36.7%
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering; This dissertation tackles the problem of type inference for conversation types by devising and implementing a type inference algorithm. This is an interesting issue to address if we take into account that service-oriented applications can have very rich and complex protocols of service usage, thus requiring the programmer to annotate every service invocation with a type corresponding to its role in a protocol, which would make the development of such applications quite impractical. Freeing the programmer from that task, by inferring the types that describe such protocols, is therefore desirable, not only because such annotations are cumbersome and tedious to write but also because it reduces the occurrence of errors when developing real complex systems. While there is considerable work on session types and type inference in the context of binary sessions, work regarding multiparty conversations is still lacking, even though there are some proposals related to multi-session conversations (i.e. interactions happen through shared channels that are distributed at service invocation time to all participants). Our approach is based on the Conversation Calculus...

Stochastic Modeling and Bayesian Inference with Applications in Biophysics

Du, Chao
Fonte: Harvard University Publicador: Harvard University
Tipo: Thesis or Dissertation
EN_US
Relevância na Pesquisa
36.63%
This thesis explores stochastic modeling and Bayesian inference strategies in the context of the following three problems: 1) modeling the complex interactions between and within molecules; 2) extracting information from stepwise signals that are commonly found in biophysical experiments; 3) improving the computational efficiency of a non-parametric Bayesian inference algorithm. Chapter 1 studies the data from a recent single-molecule biophysical experiment on enzyme kinetics. Using a stochastic network model, we analyze the autocorrelation of experimental fluorescence intensity and the autocorrelation of enzymatic reaction times. This chapter shows that the stochastic network model is capable of explaining the experimental data in depth and further explains why the enzyme molecules behave fundamentally differently from what the classical model predicts. Modern knowledge of molecular kinetics is often obtained from the information extracted from stepwise signals in experiments utilizing fluorescence spectroscopy. Chapter 2 proposes a new Bayesian method to estimate the change-points in stepwise signals. This approach uses the marginal likelihood as the tool of inference. The chapter illustrates the impact of the choice of prior on the estimator and provides guidelines for setting the prior. Based on the results of a simulation study...
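
The core of the change-point idea in Chapter 2 can be illustrated with a toy calculation: for a stepwise Gaussian signal with known noise variance and a conjugate normal prior on each segment mean, the segment means integrate out analytically, and candidate change points can be compared by their marginal likelihoods. The sketch below handles a single change point under these assumptions; the prior parameters and data are hypothetical, and the thesis method is considerably more general.

```python
import numpy as np


def log_marginal_segment(y, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of one constant-mean Gaussian segment.

    Model: y_i ~ N(mu, sigma2) with conjugate prior mu ~ N(0, tau2);
    mu is integrated out in closed form.
    """
    m = len(y)
    s, s2 = y.sum(), (y ** 2).sum()
    quad = s2 / sigma2 - tau2 * s ** 2 / (sigma2 * (sigma2 + m * tau2))
    logdet = m * np.log(2 * np.pi * sigma2) + np.log(1 + m * tau2 / sigma2)
    return -0.5 * (logdet + quad)


def change_point_posterior(y):
    """Posterior over a single change-point location (uniform prior over locations)."""
    logliks = np.array([
        log_marginal_segment(y[:k]) + log_marginal_segment(y[k:])
        for k in range(1, len(y))
    ])
    w = np.exp(logliks - logliks.max())
    return w / w.sum()


rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(3, 1, 60)])
post = change_point_posterior(y)
print("most probable change point:", np.argmax(post) + 1)   # expected to be near 40
```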

Probabilistic Inference and Non-Monotonic Inference

Kyburg, Henry E.
Fonte: University of Rochester. Computer Science Department. Publicador: University of Rochester. Computer Science Department.
Tipo: Relatório
ENG
Relevância na Pesquisa
36.67%
Since the appearance of the influential article by McCarthy and Hayes, few people have tried to use probabilities as a basis for non-monotonic inference. One reason, perhaps the main one, is that probabilistic inference easily yields inconsistent bodies of knowledge, as is revealed by the lottery paradox. Here we establish three things: First, that standard systems of non-monotonic reasoning (default logic, non-monotonic logic, and circumscription) fall prey to the same lottery-like difficulties as does probabilistic inference. Second, that probabilistic inference provides equally plausible treatments of the standard examples of non-monotonic reasoning. Third, that the inconsistency threatened by the lottery paradox is a petty hobgoblin, and need not in any way interfere with the use of beliefs in planning and design.
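
The lottery paradox mentioned above is easy to reproduce numerically: with a 1000-ticket fair lottery and an acceptance threshold of 0.99, each proposition "ticket i will not win" is individually acceptable, yet so is "some ticket wins", and the resulting body of accepted beliefs is jointly inconsistent. A minimal sketch (the threshold and lottery size are arbitrary choices):

```python
n_tickets = 1000
threshold = 0.99

p_ticket_loses = 1 - 1 / n_tickets            # 0.999 for each individual ticket
accept_each_ticket_loses = p_ticket_loses >= threshold

p_some_ticket_wins = 1.0                      # exactly one ticket wins by construction
accept_some_ticket_wins = p_some_ticket_wins >= threshold

# Both acceptances hold, yet "every ticket loses" contradicts "some ticket wins".
print(accept_each_ticket_loses, accept_some_ticket_wins)   # True True
```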

Probabilistic Inference and Probabilistic Reasoning

Kyburg, Henry E.
Fonte: University of Rochester. Computer Science Department. Publicador: University of Rochester. Computer Science Department.
Tipo: Relatório
ENG
Relevância na Pesquisa
36.73%
There are two profoundly different (though not exclusive) approaches to uncertain inference. According to one, uncertain inference leads from one distribution of (non-extreme) uncertainties among a set of propositions to another distribution of (non-extreme) uncertainties among those propositions. According to the other, uncertain inference is like deductive inference in that the conclusion is detached from the premises (the evidence) and accepted as "practically certain"; it differs in being non-monotonic: an augmentation of the premises can lead to the withdrawal of conclusions already accepted. We show here, first, that probabilistic reasoning and probabilistic inference are distinct; second, that probabilistic inference is what both traditional inductive logic ("ampliative inference") and non-monotonic reasoning are designed to capture; third, that acceptance is legitimate and desirable; fourth, that statistical testing provides a model of probabilistic acceptance; and fifth, that a generalization of this model makes sense in AI.

A system of interaction and structure II: the need for deep inference

Tiu, Alwen
Fonte: Universidade Nacional da Austrália Publicador: Universidade Nacional da Austrália
Tipo: Journal article; Published Version Formato: 24 pages
Relevância na Pesquisa
36.67%
This paper studies properties of the logic BV, which is an extension of multiplicative linear logic (MLL) with a self-dual non-commutative operator. BV is presented in the calculus of structures, a proof-theoretic formalism that supports deep inference, in which inference rules can be applied anywhere inside logical expressions. The use of deep inference results in a simple logical system for MLL extended with the self-dual non-commutative operator, which to date has not been known to be expressible in the sequent calculus. In this paper, deep inference is shown to be crucial for the logic BV: any restriction on the "depth" of the inference rules of BV would result in a strictly less expressive logical system.

An improved algorithm of multicast topology inference from end-to-end measurements

Tian, H.; Shen, H.
Fonte: Springer; Berlin Publicador: Springer; Berlin
Tipo: Conference paper
Publicado em //2003 EN
Relevância na Pesquisa
36.67%
Multicast topology inference from end-to-end measurements has been widely used recently. Inference algorithms based on loss distributions show good performance in inference accuracy and time complexity. However, to our knowledge, the existing results produce logical topology structures only in complete binary tree form, which in most cases differ significantly from the actual network topology. To solve this problem, we propose an algorithm that makes use of an additional measure, the hop count. Incorporating hop counts into binary tree topology inference helps reduce time complexity and improve inference accuracy. Through comparison and analysis, we show that the worst-case time complexity of our algorithm is O(l^2), which is much better than the O(l^3) required by the previous algorithm. The expected time complexity of our algorithm is estimated at , while that of the previous algorithm is O(l^3).; Hui Tian and Hong Shen; The original publication is available at www.springerlink.com

Do films make you think? - Inference processes in expository film comprehension; Regen Filme zum Denken an? - Inferenzprozesse beim expositorischen Filmverstehen

Tibus, Maike
Fonte: Universidade de Tubinga Publicador: Universidade de Tubinga
Tipo: Dissertação
EN
Relevância na Pesquisa
36.73%
This dissertation is motivated by the widespread assumption that expository films ("explaining films") are processed in a superficial manner and that they are, therefore, not particularly suitable for instructional purposes. For instance, it has been claimed that the dynamic-pictorial information, a core characteristic of film, hinders rather than supports deep cognitive processing. However, the supposed superficial elaboration of film has yet to be tested directly by means of process measures (so-called online measures). Existing research findings mostly rely on analyses of outcome measures, such as knowledge acquisition, that are obtained subsequent to film reception (so-called offline measures). However, offline measures do not allow for valid conclusions about the online film comprehension processes. This dissertation therefore closes this research gap by analyzing causal bridging inference processes both online and offline. Causal bridging inferences are crucial elaborations for an understanding of complex matters. A first experiment shows, by means of the Newtson paradigm, that bridging inferences are generated without the recipient's awareness. In film research the Newtson paradigm is used to measure the recipient's cognitive processes by having the recipient press a button. This requires a certain monitoring of one's own cognitive processes. However...

Bayesian graphical models for biological network inference

Peterson, Christine
Fonte: Universidade Rice Publicador: Universidade Rice
Relevância na Pesquisa
36.63%
In this work, we propose approaches for the inference of graphical models in the Bayesian framework. Graphical models, which use a network structure to represent conditional dependencies among random variables, provide a valuable tool for visualizing and understanding the relationships among many variables. However, since these networks are complex systems, they can be difficult to infer given a limited number of observations. Our research is focused on development of methods which allow incorporation of prior information on particular edges or on the model structure to improve the reliability of inference given small to moderate sample sizes. First, we propose an approach to graphical model inference using the Bayesian graphical lasso. Our method incorporates informative priors on the shrinkage parameters specific to each edge. We demonstrate through simulations that this method allows improved learning of the network structure when relevant prior information is available, and illustrate the approach on inference of the cellular metabolic network under neuroinflammation. This application highlights the strength of our method since the number of samples available is fairly small, but we are able to draw on rich reference information from publicly available databases describing known metabolic interactions to construct informative priors. Next...

Identification, Weak Instruments and Statistical Inference in Econometrics

DUFOUR, Jean-Marie
Fonte: Université de Montréal Publicador: Université de Montréal
Tipo: Artigo de Revista Científica Formato: 268035 bytes; application/pdf
Relevância na Pesquisa
36.75%
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions —a condition not satisfied by standard Wald-type methods based on standard errors — and we discuss recent developments in this field...
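
A standard example of inference based on a proper pivotal function in the weak-instrument setting is the Anderson-Rubin test, whose null distribution does not depend on instrument strength. The sketch below computes the statistic under simple homoskedastic, fixed-instrument assumptions with simulated data; it illustrates the pivotal-function idea rather than any specific procedure from this article.

```python
import numpy as np
from scipy import stats


def anderson_rubin(y, X, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = X @ beta + u, with instruments Z.

    Under H0 (homoskedastic normal errors, fixed instruments) the statistic
    is F(k, n - k) distributed regardless of how weak the instruments are.
    """
    n, k = Z.shape
    u0 = y - X @ beta0
    fitted = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # projection of u0 onto col(Z)
    ssr_explained = fitted @ fitted
    ssr_residual = u0 @ u0 - ssr_explained
    ar = (n - k) / k * ssr_explained / ssr_residual
    pvalue = 1 - stats.f.cdf(ar, k, n - k)
    return ar, pvalue


# Hypothetical simulated data with a deliberately weak instrument.
rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=(n, 1))
x = 0.1 * z[:, 0] + rng.normal(size=n)                   # weakly related to the instrument
y = 0.5 * x + rng.normal(size=n)
print(anderson_rubin(y, x.reshape(-1, 1), z, np.array([0.5])))
```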

Inférence topologique

Prévost, Noémie
Fonte: Université de Montréal Publicador: Université de Montréal
Tipo: Thèse ou Mémoire numérique / Electronic Thesis or Dissertation
FR
Relevância na Pesquisa
36.63%
Data obtained by finely sampling a continuous process (a random field) can be represented as images. A statistical test for detecting a difference between two images can be viewed as a set of tests in which each pixel is compared with the corresponding pixel of the other image. One then uses a method for controlling the type I error over the whole set of tests, such as the Bonferroni correction or control of the false discovery rate (FDR). Data analysis methods have been developed in medical imaging, mainly by Keith Worsley, that use the geometry of random fields to construct a global statistical test over an entire image. The idea is to use the expected Euler characteristic of the excursion set of the random field underlying the sample above a given threshold in order to determine the probability that the random field exceeds that threshold under the null hypothesis (topological inference). We review some notions about random fields, in particular isotropy (the covariance function between two points of the field depends only on the distance separating them). We discuss two methods for analyzing anisotropic fields. The first consists of deforming the field and then using intrinsic volumes and Euler characteristic densities. The second instead uses Lipschitz-Killing curvatures. We then carry out a study of the level and power of topological inference in comparison with the Bonferroni correction. Finally...
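
In the notation commonly used for this approach, the approximation underlying topological inference is the expected Euler characteristic heuristic (stated generically here; the Lipschitz-Killing curvatures mentioned above enter as the coefficients):

```latex
% Smooth random field Z over a search region S of dimension D, threshold u.
\[
  \Pr\!\Big(\sup_{s \in S} Z(s) \ge u\Big)
  \;\approx\;
  \mathrm{E}\big[\chi\{s \in S : Z(s) \ge u\}\big]
  \;=\;
  \sum_{d=0}^{D} \mathcal{L}_d(S)\, \rho_d(u),
\]
% \mathcal{L}_d(S): Lipschitz-Killing curvatures of the search region;
% \rho_d(u): Euler characteristic densities of the field.
```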

Approximate inference in graphical models

Hennig, Philipp
Fonte: University of Cambridge; Department of Physics; Cavendish Laboratory; Robinson College Publicador: University of Cambridge; Department of Physics; Cavendish Laboratory; Robinson College
Tipo: Thesis; doctoral; PhD
EN
Relevância na Pesquisa
36.75%
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents...
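
The phrase "inference algorithms in terms of local computations" can be made concrete with the sum-product algorithm on a tiny chain of binary variables, where exact marginals are obtained purely from local messages. The potentials below are hypothetical, and the example is independent of the case studies in the thesis.

```python
import numpy as np

# Sum-product (belief propagation) on a chain x1 - x2 - x3 of binary variables.
psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # pairwise potential between x1 and x2
psi23 = np.array([[1.5, 0.5], [0.5, 1.0]])   # pairwise potential between x2 and x3
phi = [np.array([1.0, 1.0])] * 3             # uniform unary potentials

# Forward messages: m_{1->2}(x2) = sum_{x1} phi1(x1) * psi12(x1, x2), etc.
m12 = phi[0] @ psi12
m23 = (phi[1] * m12) @ psi23
# Backward messages.
m32 = psi23 @ phi[2]
m21 = psi12 @ (phi[1] * m32)

# A marginal is the (normalized) product of the incoming messages and the unary potential.
p1 = phi[0] * m21
p2 = phi[1] * m12 * m32
p3 = phi[2] * m23
for name, p in [("x1", p1), ("x2", p2), ("x3", p3)]:
    print(f"P({name}):", p / p.sum())
```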

Statistical Inference Utilizing Agent Based Models

Heard, Daniel Philip
Fonte: Universidade Duke Publicador: Universidade Duke
Tipo: Dissertação
Publicado em //2014
Relevância na Pesquisa
36.67%

Agent-based models (ABMs) are computational models used to simulate the behaviors, actions, and interactions of agents within a system. The individual agents each have their own set of assigned attributes and rules, which determine their behavior within the ABM system. These rules can be deterministic or probabilistic, allowing for a great deal of flexibility. ABMs allow us to observe how the behaviors of the individual agents affect the system as a whole and whether any emergent structure develops within the system. Examining rule sets in conjunction with the corresponding emergent structure shows how small-scale changes can affect large-scale outcomes within the system. Thus, we can better understand and predict the development and evolution of systems of interest.

ABMs have become ubiquitous: they are used in business (virtual auctions to select electronic ads for display), atmospheric science (weather forecasting), and public health (to model epidemics). But there is limited understanding of the statistical properties of ABMs. Specifically, there are no formal procedures for calculating confidence intervals on predictions, nor for assessing goodness-of-fit...
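
As an illustration of the kind of model being discussed, a minimal ABM with probabilistic rules, here a toy contact-and-recovery epidemic, shows how local agent rules generate the system-level trajectories whose statistical properties the dissertation studies. All parameter values are hypothetical.

```python
import random


class Agent:
    """An agent with one attribute (health state) and probabilistic interaction rules."""

    def __init__(self):
        self.state = "S"                       # susceptible

    def interact(self, other, p_transmit=0.05):
        if self.state == "I" and other.state == "S" and random.random() < p_transmit:
            other.state = "I"

    def update(self, p_recover=0.02):
        if self.state == "I" and random.random() < p_recover:
            self.state = "R"


def simulate(n_agents=500, n_steps=200, n_contacts=5, seed=0):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    agents[0].state = "I"                      # a single initially infected agent
    history = []
    for _ in range(n_steps):
        for a in agents:
            for other in random.sample(agents, n_contacts):   # random mixing
                a.interact(other)
        for a in agents:
            a.update()
        history.append(sum(a.state == "I" for a in agents))   # emergent epidemic curve
    return history


print("peak number of infected agents:", max(simulate()))
```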

The evolution of transitive inference: Chimpanzees’ performance with social and nonsocial stimuli

Kaiser, Leah
Fonte: Universidade Duke Publicador: Universidade Duke
Publicado em 16/05/2014
Relevância na Pesquisa
36.63%
A number of theories posit various social and nonsocial factors as the central drivers of the evolution of intelligence. Cognitive skills, such as transitive inference, that have important implications in both the social and nonsocial domains can help identify drivers of cognitive evolution. Transitive inference is an inferential reasoning skill which allows individuals to deduce unknown relationships from known ones. Due to its importance in both social and nonsocial contexts, it can provide a powerful test of the driving forces behind primate cognitive evolution. We compared chimpanzees' (Pan troglodytes) performance on social and nonsocial versions of a transitive inference task in order to assess whether they are better adapted to apply transitive reasoning to social or nonsocial stimuli. Our preliminary findings provide partial support for the hypotheses concerning chimpanzees' adaptation to use transitive inference in the social and nonsocial domains. However, our statistical power is limited by a small sample size and several confounding factors regarding the age and sex of our subjects, which preclude firm conclusions. Further research (outlined in our methods) will allow us to more accurately assess the factors associated with the evolution of transitive inference skills in chimpanzees.; Honors Thesis

Bayesian Mixture Modeling Approaches for Intermediate Variables and Causal Inference

Schwartz, Scott Lee
Fonte: Universidade Duke Publicador: Universidade Duke
Tipo: Dissertação
Publicado em //2010
Relevância na Pesquisa
36.63%

This thesis examines causal inference related topics involving intermediate variables, and uses Bayesian methodologies to advance analysis capabilities in these areas. First, joint modeling of outcome variables with intermediate variables is considered in the context of birthweight and censored gestational age analyses. The proposed methodology provides improved inference capabilities for birthweight and gestational age, avoids post-treatment selection bias problems associated with conditional on gestational age analyses, and appropriately assesses the uncertainty associated with censored gestational age. Second, principal stratification methodology for settings where causal inference analysis requires appropriate adjustment of intermediate variables is extended to observational settings with binary treatments and binary intermediate variables. This is done by uncovering the structural pathways of unmeasured confounding affecting principal stratification analysis and directly incorporating them into a model based sensitivity analysis methodology. Demonstration focuses on a study of the efficacy of influenza vaccination in elderly populations. Third, flexibility, interpretability, and capability of principal stratification analyses for continuous intermediate variables are improved by replacing the current fully parametric methodologies with semiparametric Bayesian alternatives. This presentation is one of the first uses of nonparametric techniques in causal inference analysis...

Parallelization of the maximum likelihood approach to phylogenetic inference

Garnham, Janine
Fonte: Rochester Instituto de Tecnologia Publicador: Rochester Instituto de Tecnologia
Tipo: Tese de Doutorado
EN_US
Relevância na Pesquisa
36.7%
Phylogenetic inference refers to the reconstruction of evolutionary relationships among various species, usually presented in the form of a tree. DNA sequences are most often used to determine these relationships. The results of phylogenetic inference have many important applications, including protein function determination, drug discovery, disease tracking and forensics. There are several popular computational methods used for phylogenetic inference, among them distance-based (i.e. neighbor joining), maximum parsimony, maximum likelihood, and Bayesian methods. This thesis focuses on the maximum likelihood method, which is regarded as one of the most accurate methods, with its computational demand being the main hindrance to its widespread use. Maximum likelihood is generally considered to be a heuristic method providing a statistical evaluation of the results, where potential tree topologies are judged by how well they predict the observed sequences. While there have been several previous efforts to parallelize the maximum likelihood method, sequential implementations are more widely used in the biological research community. This is due to a lack of confidence in the results produced by the more recent, parallel programs. However...
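
The heart of the maximum likelihood approach is the repeated evaluation of a tree's likelihood by Felsenstein's pruning algorithm. The sketch below computes the likelihood of a single site on a fixed three-taxon rooted tree under the Jukes-Cantor substitution model; the branch lengths and data are hypothetical, and a real program would sum log-likelihoods over all alignment sites while searching over topologies and branch lengths.

```python
import numpy as np

BASES = "ACGT"


def jc_prob(t):
    """Jukes-Cantor transition probability matrix for a branch of length t
    (expected substitutions per site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)


def leaf_partial(base):
    """Conditional likelihood vector at a leaf with an observed base."""
    v = np.zeros(4)
    v[BASES.index(base)] = 1.0
    return v


def site_likelihood(obs, t_a=0.1, t_b=0.2, t_u=0.05, t_c=0.3):
    """Likelihood of one site on the rooted tree ((A,B)U, C) via Felsenstein pruning.

    Branch lengths are hypothetical placeholders.
    """
    # Partial likelihoods at internal node U, combining children A and B.
    L_u = (jc_prob(t_a) @ leaf_partial(obs["A"])) * (jc_prob(t_b) @ leaf_partial(obs["B"]))
    # Partial likelihoods at the root, combining U and leaf C.
    L_root = (jc_prob(t_u) @ L_u) * (jc_prob(t_c) @ leaf_partial(obs["C"]))
    return 0.25 * L_root.sum()                 # uniform stationary base frequencies


print(site_likelihood({"A": "A", "B": "A", "C": "G"}))
```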