Page 1 of results, 179 digital items found in 0.023 seconds

A Novel Model for Paternity Probability Calculation - Design and Implementation

Nakano, Fábio
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Doctoral thesis Format: application/pdf
Published on 09/11/2006 PT
Search relevance
35.91%
This thesis presents a novel statistical model for calculating the probability of paternity, together with its implementation as software. The proposed model uses the genotype as its basic information, in contrast with other models that use alleles. Because of this difference, the proposed model is broader, yet under certain restrictions it reproduces the results of the allele-based models. The model was implemented in software that receives descriptions of the pedigree and of the markers in a dedicated language and builds a Bayesian network for each marker; the user is free to define both the pedigree and the markers. The paternity probability is computed over the constructed networks by a Bayesian-network inference engine, and the combined paternity probability over all markers is calculated, yielding a "paternity index".
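
For orientation, the arithmetic that such a genotype-based model ultimately delivers is the standard likelihood-ratio calculation sketched below; the notation is generic and not taken from the thesis itself.

```latex
% Paternity index for marker k: likelihood ratio of the child's genotype G_C
% given the mother's genotype G_M and the alleged father's genotype G_AF.
PI_k = \frac{P\big(G_C \mid G_M, G_{AF},\ \text{AF is the father}\big)}
            {P\big(G_C \mid G_M,\ \text{a random man is the father}\big)}

% Combined index over independent markers, and posterior probability of
% paternity W under a prior probability \pi (commonly \pi = 1/2):
PI = \prod_k PI_k, \qquad
W  = \frac{PI \cdot \pi}{PI \cdot \pi + (1 - \pi)}
```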

BSMac: A MATLAB toolbox implementing a Bayesian spatial model for brain activation and connectivity

Zhang, Lijun; Agravat, Sanjay; Derado, Gordana; Chen, Shuo; McIntosh, Belinda J.; Bowman, F. DuBois
Source: PubMed Publisher: PubMed
Type: Scientific journal article
EN
Search relevance
45.87%
We present a statistical and graphical visualization MATLAB toolbox for the analysis of functional magnetic resonance imaging (fMRI) data, called the Bayesian Spatial Model for activation and connectivity (BSMac). BSMac simultaneously performs whole-brain activation analyses at the voxel and region of interest (ROI) levels as well as task-related functional connectivity (FC) analyses using a flexible Bayesian modeling framework (Bowman et al., 2008). BSMac allows for inputting data in either Analyze or Nifti file formats. The user provides information pertaining to subgroup memberships, scanning sessions, and experimental tasks (stimuli), from which the design matrix is constructed. BSMac then performs parameter estimation based on Markov Chain Monte Carlo (MCMC) methods and generates plots for activation and FC, such as interactive 2D maps of voxel and region-level task-related changes in neural activity and animated 3D graphics of the FC results. The toolbox can be downloaded from http://www.sph.emory.edu/bios/CBIS/. We illustrate the BSMac toolbox through an application to an fMRI study of working memory in patients with schizophrenia.

A Bayesian computational model for online character recognition and disability assessment during cursive eye writing

Diard, Julien; Rynik, Vincent; Lorenceau, Jean
Source: Frontiers Media S.A. Publisher: Frontiers Media S.A.
Type: Scientific journal article
Published on 11/11/2013 EN
Search relevance
36.06%
This research involves a novel apparatus, in which the user is presented with an illusion inducing visual stimulus. The user perceives illusory movement that can be followed by the eye, so that smooth pursuit eye movements can be sustained in arbitrary directions. Thus, free-flow trajectories of any shape can be traced. In other words, coupled with an eye-tracking device, this apparatus enables “eye writing,” which appears to be an original object of study. We adapt a previous model of reading and writing to this context. We describe a probabilistic model called the Bayesian Action-Perception for Eye On-Line model (BAP-EOL). It encodes probabilistic knowledge about isolated letter trajectories, their size, high-frequency components of the produced trajectory, and pupil diameter. We show how Bayesian inference, in this single model, can be used to solve several tasks, like letter recognition and novelty detection (i.e., recognizing when a presented character is not part of the learned database). We are interested in the potential use of the eye writing apparatus by motor impaired patients: the final task we solve by Bayesian inference is disability assessment (i.e., measuring and tracking the evolution of motor characteristics of produced trajectories). Preliminary experimental results are presented...
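
As a rough illustration of the inference pattern described above (MAP letter recognition plus novelty detection on the marginal likelihood), here is a minimal sketch in Python; it is not the BAP-EOL model, and the letters, features, and numbers are invented.

```python
# Minimal sketch, not the BAP-EOL model: Bayesian letter recognition with
# Gaussian class-conditional densities over two trajectory features, plus a
# simple novelty test on the marginal likelihood. All parameters are made up.
import numpy as np
from scipy.stats import multivariate_normal

letters = {  # per-letter feature mean and covariance (learned offline in practice)
    "a": (np.array([0.8, 1.2]), np.diag([0.05, 0.10])),
    "b": (np.array([1.5, 0.4]), np.diag([0.08, 0.05])),
}
prior = {k: 1.0 / len(letters) for k in letters}

def recognize(features, novelty_threshold=1e-3):
    """Return (MAP letter, posterior dict), or (None, evidence) for novel input."""
    joint = {k: prior[k] * multivariate_normal(m, c).pdf(features)
             for k, (m, c) in letters.items()}
    evidence = sum(joint.values())        # p(features) under the letter model
    if evidence < novelty_threshold:      # novelty detection: low marginal likelihood
        return None, evidence
    posterior = {k: v / evidence for k, v in joint.items()}
    return max(posterior, key=posterior.get), posterior

print(recognize(np.array([0.82, 1.15])))
```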

Food Reconstruction Using Isotopic Transferred Signals (FRUITS): A Bayesian Model for Diet Reconstruction

Fernandes, Ricardo; Millard, Andrew R.; Brabec, Marek; Nadeau, Marie-Josée; Grootes, Pieter
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific journal article
Published on 13/02/2014 EN
Search relevance
35.87%
Human and animal diet reconstruction studies that rely on tissue chemical signatures aim at providing estimates on the relative intake of potential food groups. However, several sources of uncertainty need to be considered when handling data. Bayesian mixing models provide a natural platform to handle diverse sources of uncertainty while allowing the user to contribute with prior expert information. The Bayesian mixing model FRUITS (Food Reconstruction Using Isotopic Transferred Signals) was developed for use in diet reconstruction studies. FRUITS incorporates the capability to account for dietary routing, that is, the contribution of different food fractions (e.g. macronutrients) towards a dietary proxy signal measured in the consumer. FRUITS also provides relatively straightforward means for the introduction of prior information on the relative dietary contributions of food groups or food fractions. This type of prior may originate, for instance, from physiological or metabolic studies. FRUITS performance was tested using simulated data and data from a published controlled animal feeding experiment. The feeding experiment data was selected to exemplify the application of the novel capabilities incorporated into FRUITS but also to illustrate some of the aspects that need to be considered when handling data within diet reconstruction studies. FRUITS accurately predicted dietary intakes...
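
The core of any such mixing model is a mass-balance relation of the kind below; this is a simplified, generic form (the published FRUITS model adds routing weights for food fractions, offsets with uncertainties, and priors on the contributions).

```latex
% \delta^h_{consumer}: dietary proxy signal h measured in the consumer,
% f_i: unknown dietary contribution of food group i,
% \delta^h_i: signal of food group i, \Delta^h: diet-to-tissue offset.
\delta^{h}_{\mathrm{consumer}} \approx \sum_i f_i\,\delta^{h}_i + \Delta^{h},
\qquad \sum_i f_i = 1, \quad f_i \ge 0
```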

A BAYESIAN INTEGRATION MODEL OF HIGH-THROUGHPUT PROTEOMICS AND METABOLOMICS DATA FOR IMPROVED EARLY DETECTION OF MICROBIAL INFECTIONS

WEBB-ROBERTSON, BOBBIE-JO M.; MCCUE, LEE ANN; BEAGLEY, NATHANIAL; MCDERMOTT, JASON E.; WUNSCHEL, DAVID S.; VARNUM, SUSAN M.; HU, JIAN ZHI; ISERN, NANCY G.; BUCHKO, GARRY W.; MCATEER, KATHLEEN; POUNDS, JOEL G.; SKERRETT, SHAWN J.; LIGGITT, DENNY; FREVERT,
Source: PubMed Publisher: PubMed
Type: Scientific journal article
Published in 2009 EN
Search relevance
35.83%
High-throughput (HTP) technologies offer the capability to evaluate the genome, proteome, and metabolome of an organism at a global scale. This opens up new opportunities to define complex signatures of disease that involve signals from multiple types of biomolecules. However, integrating these data types is difficult due to the heterogeneity of the data. We present a Bayesian approach to integration that uses posterior probabilities to assign class memberships to samples using individual and multiple data sources; these probabilities are based on lower-level likelihood functions derived from standard statistical learning algorithms. We demonstrate this approach on microbial infections of mice, where the bronchial alveolar lavage fluid was analyzed by three HTP technologies, two proteomic and one metabolomic. We demonstrate that integration of the three datasets improves classification accuracy to ~89% from the best individual dataset at ~83%. In addition, we present a new visualization tool called Visual Integration for Bayesian Evaluation (VIBE) that allows the user to observe classification accuracies at the class level and evaluate classification accuracies on any subset of available data types based on the posterior probability models defined for the individual and integrated data.
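
The integration step can be pictured as the small calculation below: per-source class probabilities are combined into one posterior under a conditional-independence assumption. This is a hedged sketch of the general idea, not the authors' exact likelihood construction, and the numbers are invented.

```python
# Combine per-source posterior class probabilities into a single posterior,
# assuming the data sources are conditionally independent given the class.
import numpy as np

def integrate(per_source_probs, prior):
    """per_source_probs: one array p(class | data_k) per source; prior: p(class)."""
    prior = np.asarray(prior, dtype=float)
    log_post = np.log(prior)
    for p in per_source_probs:
        # each source's posterior is converted back into a likelihood term
        log_post += np.log(np.asarray(p, dtype=float)) - np.log(prior)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# two classes (infected / control), three hypothetical data sources
print(integrate([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]], prior=[0.5, 0.5]))
```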

parallelMCMCcombine: An R Package for Bayesian Methods for Big Data and Analytics

Miroshnikov, Alexey; Conlon, Erin M.
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific journal article
Published on 26/09/2014 EN
Search relevance
35.96%
Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
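
For readers unfamiliar with this family of methods, the sketch below illustrates one combination strategy of this general type: consensus weighted averaging of subset draws with inverse-variance weights. It is written in Python for illustration only and is not the R package itself, whose four methods differ in their details.

```python
# Consensus-style combination of independent subset posterior draws:
# weight each subset's draws by the inverse of its posterior variance.
import numpy as np

def consensus_combine(subset_draws):
    """subset_draws: list of (n_draws, n_params) arrays, one per data subset."""
    weights = [1.0 / np.var(d, axis=0, ddof=1) for d in subset_draws]
    total_w = np.sum(weights, axis=0)
    # weighted average of aligned draws approximates full-data posterior draws
    return sum(w * d for w, d in zip(weights, subset_draws)) / total_w

rng = np.random.default_rng(0)
draws = [rng.normal(loc=mu, scale=1.0, size=(1000, 1)) for mu in (0.9, 1.1, 1.0)]
print(consensus_combine(draws).mean())
```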

Mixtures of g-priors for Bayesian Model Averaging with Economic Application

Ley, Eduardo; Steel, Mark F. J.
Source: World Bank Publisher: World Bank
Search relevance
36.01%
This paper examines the issue of variable selection in linear regression modeling, where there is a potentially large number of possible covariates and economic theory offers insufficient guidance on how to select the appropriate subset. In this context, Bayesian Model Averaging presents a formal Bayesian solution to dealing with model uncertainty. The main interest here is the effect of the prior on the results, such as posterior inclusion probabilities of regressors and predictive performance. The authors combine a Binomial-Beta prior on model size with a g-prior on the coefficients of each model. In addition, they assign a hyperprior to g, as the choice of g has been found to have a large impact on the results. For the prior on g, they examine the Zellner-Siow prior and a class of Beta shrinkage priors, which covers most choices in the recent literature. The authors propose a benchmark Beta prior, inspired by earlier findings with fixed g, and show it leads to consistent model selection. Inference is conducted through a Markov chain Monte Carlo sampler over model space and g. The authors examine the performance of the various priors in the context of simulated and real data. For the latter...
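
As background for the setup described above, the standard g-prior algebra gives a closed-form Bayes factor for each candidate model; the expression below is the textbook result, not the paper's specific benchmark prior, which additionally places a Beta-type hyperprior on g.

```latex
% With slope prior \beta_j \mid g \sim N\big(0,\ g\,\sigma^2 (X_j'X_j)^{-1}\big)
% and reference priors on the intercept and \sigma, the Bayes factor of model
% M_j (k_j regressors, coefficient of determination R_j^2) against the null is
BF(M_j : M_0) \;=\; (1+g)^{(n-1-k_j)/2}\,\big[\,1 + g\,(1 - R_j^2)\,\big]^{-(n-1)/2}
% Averaging over models and over g (via its hyperprior) is done by MCMC.
```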

Object-oriented Bayesian networks for driving maneuver recognition in highway scenarios

Kasper, Dietmar
Source: University of Tübingen Publisher: University of Tübingen
Type: Dissertation
DE_DE
Search relevance
36.02%
Improving road safety has long been a goal of automobile manufacturers. To this end, many driver assistance systems have been developed in recent years that make driving safer or more comfortable. The assistance systems already available still leave considerable room for improvement. Warnings that the driver cannot make sense of in certain situations are only one of the problems in the development of driver assistance systems. To achieve greater customer acceptance, assistance systems must be able to interpret the situation at hand more accurately. This requires not only improving the sensors used to perceive the vehicle's surroundings, but also employing new techniques, among them methods from artificial intelligence. By developing an understanding of the current situation, the behavior of an assistance system can be adapted to it, which increases customer acceptance and thus serves the development goal. The aim of this work is to develop a new approach for recognizing different driving maneuvers in a highway scenario for all traffic participants. The classification system developed in this work serves to support an ACC system and to increase driving comfort. The recognition performance and the practical feasibility of the classification system were evaluated in a test vehicle. For the classification of driving maneuvers, this work adopts the approach of object-oriented Bayesian networks. The Bayesian network model is structured hierarchically in logical layers...

Using well-known techniques for classifying user behavior profiles

Iglesias, José Antonio; Ledezma, Agapito; Sanchis, Araceli
Source: Foresight Academy of Technology (FATech Press) Publisher: Foresight Academy of Technology (FATech Press)
Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/article Format: text/plain; application/pdf
Published in 08/2008 ENG
Search relevance
35.98%
The security of a computer is based on the realization of confidentiality, integrity, and availability. A computer can keep track of its users to improve the security of the system. However, this does not prevent one user from impersonating another. If a computer system can model the behavior of its users, this can be very beneficial for detecting masqueraders, assisting users, or predicting their future actions. In this paper, we present three different methods for classifying the behavior of a computer user. The three proposed methods are: Bayesian Networks, Hidden Markov Models, and a method based on Term Weighting (TFIDF). These three techniques have been chosen because we want to assess pure statistical techniques (Bayesian Networks) and information-oriented techniques (TFIDF) against a technique that appears to be more adequate (at least in principle) for identifying the behavior of users (HMMs).; This work has been supported by the Spanish Ministry of Education and Science under grant TRA-2007-67374-C02-02.; "Article accepted for publication in Communications of SIWN © 2008 The Foresight Academy of Technology" http://fatech.org.uk/press/siwn/
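
To make the term-weighting alternative concrete, here is a toy sketch of TF-IDF profile matching over command logs; it only illustrates the idea and is not the paper's pipeline, and the user names and command histories are invented.

```python
# Toy TF-IDF user-profile classification: one TF-IDF vector per known user,
# new sessions assigned to the most cosine-similar profile.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profiles = {                       # hypothetical training command histories
    "alice": "ls cd grep vim git commit git push ls",
    "bob":   "python pip install python pytest ls python",
}
vectorizer = TfidfVectorizer()
profile_matrix = vectorizer.fit_transform(profiles.values())

def classify(session):
    """Return the profile closest to the new session, with its similarity."""
    sims = cosine_similarity(vectorizer.transform([session]), profile_matrix)[0]
    return list(profiles)[sims.argmax()], float(sims.max())

print(classify("git status git commit vim ls"))
```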

Using Bayesian networks and parameterized questions in independent study

Descalço, Luís; Carvalho, Paula; Cruz, João Pedro; Oliveira, Paula; Seabra, Dina
Source: IATED Publisher: IATED
Type: Conference paper or conference object
ENG
Search relevance
55.93%
The teaching paradigm is changing from a traditional model of teachers as suppliers of knowledge toward a model of teachers as advisers who carefully observe students, identify their learning needs, and help them in their independent study. In this new paradigm, computers and communication technology can be effective not only as a means of knowledge transmission, but also as tools for automatically providing feedback and diagnosis in the learning process. We present an approach integrating parameterized questions from two computer systems (Megua and PmatE), combined with a Web application (Siacua) implementing a Bayesian user model, already using many hundreds of questions from each of the two systems. Our approach allows the students a certain level of independence and provides some orientation in their independent study, by giving feedback about answers to questions and also about general progress in the study subject. This progress is shown in the form of progress bars computed by Bayesian networks whose nodes represent “concepts” and items of knowledge evidence. Teachers use Megua to create and organize their own database of (parameterized) questions, make it available to students, and follow the progress of each student or class in every topic being studied.

Using a Bayesian model for bankruptcy prediction : a comparative approach

He, Zhanpeng
Source: Brock University Publisher: Brock University
Search relevance
35.99%
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of a cut-off point and sampling procedures were found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However...
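
The role of the cut-off point can be summarized by the expected-cost expression below; this is the generic formulation usually used in this literature, not necessarily the study's exact specification.

```latex
% \hat{p}: model's predicted bankruptcy probability, c: cut-off,
% \pi_B: prior bankruptcy rate, C_I / C_{II}: Type I / Type II error costs.
EC(c) = C_{I}\,\pi_B\,P\big(\hat{p} < c \mid \text{bankrupt}\big)
      + C_{II}\,(1-\pi_B)\,P\big(\hat{p} \ge c \mid \text{non-bankrupt}\big)
% The empirical cut-off c^{*} is chosen to minimize EC(c) on the training sample.
```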

Sto(ry)chastics: a Bayesian network architecture for combined user modeling, sensor fusion, and computational storytelling for interactive spaces

Sparacino, Flavia, 1965-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral thesis Format: 220 p.; 30537720 bytes; 30537525 bytes; application/pdf; application/pdf
ENG
Search relevance
36.15%
This thesis presents a mathematical framework for real-time sensor-driven stochastic modeling of story and user-story interaction, which I call sto(ry)chastics. Almost all sensor-driven interactive entertainment, art, and architecture installations today rely on one-to-one mappings between content and participant's actions to tell a story. These mappings chain small subsets of scripted content, and do not attempt to understand the public's intention or desires during interaction, and therefore are rigid, ad hoc, prone to error, and lack depth in communication of meaning and expressive power. Sto(ry)chastics uses graphical probabilistic modeling of story fragments and participant input, gathered from sensors, to tell a story to the user, as a function of people's estimated intentions and desires during interaction. Using a Bayesian network approach for combined modeling of users, sensors, and story, sto(ry)chastics, as opposed to traditional systems based on one-to-one mappings, is flexible, reconfigurable, adaptive, context-sensitive, robust, accessible, and able to explain its choices. To illustrate sto(ry)chastics, this thesis describes the museum wearable, which orchestrates an audiovisual narration as a function of the visitor's interests and physical path in the museum. The museum wearable is a lightweight and small computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small eye-piece display attached to conventional headphones. The wearable prototype described in this document relies on a custom-designed long-range infrared location-identification sensor to gather information on where and how long the visitor stops in the museum galleries. It uses this information as input to...

Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model

Hopkins, John B.; Ferguson, Jake M.
Source: Public Library of Science Publisher: Public Library of Science
Type: Scientific journal article
Published on 03/01/2012 EN
Search relevance
35.88%
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs.

Hierarchical Bayesian Models with Factorization for Content-Based Recommendation

Zhang, Lanbo; Zhang, Yi
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 28/12/2014
Search relevance
36.22%
Most existing content-based filtering approaches learn user profiles independently without capturing the similarity among users. Bayesian hierarchical models \cite{Zhang:Efficient} learn user profiles jointly and have the advantage of being able to borrow discriminative information from other users through a Bayesian prior. However, the standard Bayesian hierarchical models assume all user profiles are generated from the same prior. Considering the diversity of user interests, this assumption could be improved by introducing more flexibility. Besides, most existing content-based filtering approaches implicitly assume that each user profile corresponds to exactly one user interest and fail to capture a user's multiple interests (information needs). In this paper, we present a flexible Bayesian hierarchical modeling approach to model both commonality and diversity among users as well as individual users' multiple interests. We propose two models each with different assumptions, and the proposed models are called Discriminative Factored Prior Models (DFPM). In our models, each user profile is modeled as a discriminative classifier with a factored model as its prior, and different factors contribute in different levels to each user profile. Compared with existing content-based filtering models...

parallelMCMCcombine: An R Package for Bayesian Methods for Big Data and Analytics

Miroshnikov, Alexey; Conlon, Erin
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Search relevance
35.96%
Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for large data sets that are only large due to large sample sizes; these methods partition big data sets into subsets, and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications...

A Bayesian Hierarchical Model for Comparative Evaluation of Teaching Quality Indicators in Higher Education

Fouskakis, Dimitris; Petrakos, George; Vavouras, Ioannis
Source: Cornell University Publisher: Cornell University
Type: Scientific journal article
Published on 07/04/2014
Search relevance
35.88%
The problem motivating the paper is the quantification of students' preferences regarding teaching/coursework quality, under certain numerical restrictions, in order to build a model for identifying, assessing and monitoring the major components of overall academic quality. After reviewing the strengths and limitations of conjoint analysis and of the random coefficient regression model used in similar problems in the past, we propose a Bayesian beta regression model with a Dirichlet prior on the model coefficients. This approach not only allows for the incorporation of informative prior knowledge when it is available but also provides user-friendly interfaces and direct probability interpretations for all quantities. Furthermore, it is a natural way to implement the usual constraints on the model weights/coefficients. This model was applied to data collected in 2009 and 2013 from undergraduate students at Panteion University, Athens, Greece, and besides the construction of an instrument for the assessment and monitoring of teaching quality, it gave some input for a preliminary discussion on the association of the differences in students' preferences between the two time periods with the current Greek economic and financial crisis.
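
In generic notation, the model family described above looks as follows; this is a sketch under the usual mean-precision parameterization of the beta distribution, not necessarily the authors' exact specification.

```latex
% y_i \in (0,1): stated preference weight, \phi: precision parameter.
y_i \mid \mu_i, \phi \sim \mathrm{Beta}\big(\mu_i\phi,\ (1-\mu_i)\phi\big),
\qquad \mu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}
% A Dirichlet prior keeps the coefficients on the simplex, so they read
% directly as relative weights of the quality components:
\boldsymbol{\beta} \sim \mathrm{Dirichlet}(a_1,\dots,a_K),
\qquad \beta_k \ge 0,\ \ \sum_k \beta_k = 1
```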

CLUSTERnGO: a user-defined modelling platform for two-stage clustering of time-series data

Fidaner, Işık Barış; Cankorur-Cetinkaya, Ayca; Dikicioglu, Duygu; Kirdar, Betul; Cemgil, Ali Taylan; Oliver, Stephen G.
Source: University of Cambridge Publisher: University of Cambridge
Type: Report Format: The compressed folder (.zip extension) includes the publication and the user manual for the software in .pdf format. The source codes (src) and the executable files (app) for Windows-, Linux- and MacOS-based operating systems are included in separate comp
Search relevance
36.02%
Simple bioinformatic tools are frequently used to analyse time-series datasets regardless of their ability to deal with transient phenomena, limiting the meaningful information that may be extracted from them. This situation requires the development and exploitation of tailor-made, easy-to-use and flexible tools designed specifically for the analysis of time-series datasets. We present a novel statistical application called CLUSTERnGO, which uses a model-based clustering algorithm that fulfils this need. This algorithm involves two components of operation. Component 1 constructs a Bayesian non-parametric model (Infinite Mixture of Piecewise Linear Sequences) and Component 2, which applies a novel clustering methodology (Two-Stage Clustering). The software can also assign biological meaning to the identified clusters using an appropriate ontology. It applies multiple hypothesis testing to report the significance of these enrichments. The algorithm has a four-phase pipeline. The application can be executed using either command-line tools or a user-friendly Graphical User Interface. The C++ and QT source codes, the GUI applications for Windows, OS X and Linux operating systems and user manual are freely available for download under the GNU GPL v3 license in the compressed folder.; This record supports publication and is available at http://bioinformatics.oxfordjournals.org/content/early/2015/09/30/bioinformatics.btv532.long; This work was supported by the Biotechnology and Biological Sciences Research Council [BRIC2.2 grant BB/K011138/1 to S.G.O.]...

Bayesian Modeling and Adaptive Monte Carlo with Geophysics Applications

Wang, Jianyu
Source: Duke University Publisher: Duke University
Type: Dissertation
Published in 2013
Search relevance
36.11%

The first part of the thesis focuses on the development of Bayesian modeling motivated by geophysics applications. In Chapter 2, we model the frequency of pyroclastic flows collected from the Soufriere Hills volcano. Multiple change points within the dataset reveal several limitations of existing methods in literature. We propose Bayesian hierarchical models (BBH) by introducing an extra level of hierarchy with hyper parameters, adding a penalty term to constrain close consecutive rates, and using a mixture prior distribution to more accurately match certain circumstances in reality. We end the chapter with a description of the prediction procedure, which is the biggest advantage of the BBH in comparison with other existing methods. In Chapter 3, we develop new statistical techniques to model and relate three complex processes and datasets: the process of extrusion of magma into the lava dome, the growth of the dome as measured by its height, and the rockfalls as an indication of the dome's instability. First, we study the dynamic Negative Binomial branching process and use it to model the rockfalls. Moreover, a generalized regression model is proposed to regress daily rockfall numbers on the extrusion rate and dome height. Furthermore...

A Bayesian approach to identification of gaseous effluents in passive LWIR imagery

Higbee, Shawn
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Dissertation
EN_US
Search relevance
36.01%
Typically a regression approach is applied in order to identify the gaseous constituents present in a hyperspectral image, and the task of species identification amounts to choosing the best regression model. Common model selection approaches (stepwise and criterion-based methods) have well-known multiple comparisons problems, and they do not allow the user to control the experiment-wise error rate or to include scene-specific knowledge in the inference process. A Bayesian model selection technique called Gibbs Variable Selection (GVS) that better handles these issues is presented and implemented via Markov chain Monte Carlo (MCMC). GVS can be used to simultaneously conduct inference on the optical path depth and the probability of inclusion in a pixel for each species in a library. This method flexibly accommodates an analyst's prior knowledge of the species present in a scene, as well as mixtures of species of any arbitrary complexity. A modified version of GVS with fast convergence properties that is tailored to unsupervised use in hyperspectral image analysis will be presented. Additionally, a series of automated diagnostic measures has been developed to monitor convergence of the MCMC with minimal operator intervention. Finally...
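
For orientation, Gibbs Variable Selection is usually written with per-species inclusion indicators as below; this is the generic form of the technique, not the thesis's exact likelihood or prior choices.

```latex
% y: observed pixel spectrum, x_j: library spectrum of species j,
% \beta_j: optical-depth coefficient, \gamma_j \in \{0,1\}: inclusion indicator.
y = \sum_{j} \gamma_j\,\beta_j\,x_j + \varepsilon, \qquad
\varepsilon \sim N(0, \sigma^2 I)
\gamma_j \sim \mathrm{Bernoulli}(p_j), \qquad
\beta_j \mid \gamma_j \sim \gamma_j\,N(0, \tau^2) + (1-\gamma_j)\,N(\tilde{\mu}_j, \tilde{s}_j^2)
% The second component is a pseudo-prior used only when species j is excluded;
% the MCMC alternates updates of (\beta_j, \gamma_j), the posterior mean of
% \gamma_j estimates the probability that species j is present in the pixel,
% and analyst knowledge enters through the prior inclusion probabilities p_j.
```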

Using parameterized calculus questions for learning and assessment

Descalço, Luís; Carvalho, Paula Reis
Source: IEEE Publisher: IEEE
Type: Conference paper or conference object
ENG
Search relevance
65.92%
We have implemented a Web application reusing questions from two computer systems: true/false questions from Project A and multiple-choice questions from Project B. Our application implements a Bayesian user model for diagnosing student knowledge in the topics covered. In this article we propose the use of this system for both learning and assessment in a calculus course, encouraging the students to work during the semester without increasing the workload for teachers.