Page 1 of results: 3,120 digital items found in 0.015 seconds

- SIAM PUBLICATIONS
- SPRINGER
- APPLIED PROBABILITY TRUST
- Springer-Verlag
- Sociedade Brasileira de Pesquisa Operacional
- University of Adelaide
- John Wiley & Sons Ltd
- Academic Press Inc
- IEEE-Inst Electrical Electronics Engineers Inc
- Cornell University
- Xarxa de Referència en Economia Aplicada (XREAP)
- Autonomous University of Barcelona
- Rochester Institute of Technology
- Institute of Electrical and Electronics Engineers (IEEE Inc)
- More publishers...

## AVERAGE CONTINUOUS CONTROL OF PIECEWISE DETERMINISTIC MARKOV PROCESSES

Source: SIAM PUBLICATIONS
Publisher: SIAM PUBLICATIONS

Type: Journal article

ENG

Search relevance: 56.16%

#piecewise deterministic Markov process#continuous-time#long run average cost#optimal control#integro-differential optimality equation#vanishing discount approach#DECISION-PROCESSES#DISCRETE-TIME#UNBOUNDED COSTS#OPTIMAL POLICIES#OPTIMALITY

This paper deals with the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is to obtain an optimality equation for the long run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the postjump location of the PDMP. Our second main result guarantees the existence of a feedback measurable selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result is to obtain some sufficient conditions for the existence of a solution for a discrete-time optimality inequality and an ordinary optimal feedback control for the long run average cost using the so-called vanishing discount approach. Two examples are presented illustrating the possible applications of the results developed in the paper.; CNPq (Brazilian National Research Council)[301067/09-0]; ANR - French National Agency of Research[ANR-09-SEGI-004]
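
As a schematic restatement in simplified, generic notation (the cost rate $f$, one-stage cost $c$, kernel $Q$, relative value function $h$, and constant $\rho$ below are placeholders rather than the paper's exact objects), the long-run average criterion and the discrete-time average-cost optimality equation for the embedded post-jump chain take roughly the following form:

```latex
% Long-run average cost of a strategy u started at x (boundary costs omitted):
\[
\mathcal{A}(u,x) \;=\; \limsup_{t\to\infty}\frac{1}{t}\,
  \mathbb{E}^{u}_{x}\!\left[\int_{0}^{t} f(X_s,u_s)\,ds\right].
\]
% Average-cost optimality equation for the embedded (post-jump) Markov chain:
\[
\rho + h(x) \;=\; \min_{a\in A(x)}\Bigl\{\,c(x,a)+\int h(y)\,Q(dy\mid x,a)\Bigr\}.
\]
```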


## The Policy Iteration Algorithm for Average Continuous Control of Piecewise Deterministic Markov Processes

Source: SPRINGER
Publisher: SPRINGER

Type: Journal article

ENG

Search relevance: 66.14%

#Piecewise-deterministic Markov Processes#Continuous-time#Long-run average cost#Optimal control#Integro-differential optimality inequation#Policy iteration algorithm#DECISION-PROCESSES#BOREL SPACES#OPTIMALITY#Mathematics, Applied

The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.; CNPq (Brazilian National Research Council)[301067/09-0]; French National Agency of Research (ANR)[ANR-09-SEGI-004]
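
For intuition only, here is a minimal policy iteration loop for a finite, unichain, discrete-time average-cost MDP, a far simpler setting than the PDMP problem treated in the paper; the states, actions, costs, and transition matrices are toy data, and the policy evaluation step plays the role of the pseudo-Poisson equation mentioned above.

```python
import numpy as np

# Toy average-cost MDP: 3 states, 2 actions.
# P[a][x, y] = transition probability, c[a][x] = one-stage cost.
P = [np.array([[0.7, 0.2, 0.1],
               [0.1, 0.8, 0.1],
               [0.2, 0.3, 0.5]]),
     np.array([[0.9, 0.05, 0.05],
               [0.4, 0.5,  0.1],
               [0.1, 0.1,  0.8]])]
c = [np.array([2.0, 1.0, 3.0]),
     np.array([2.5, 0.5, 1.5])]
n_states, n_actions = 3, 2

def evaluate(policy):
    """Policy evaluation: solve g + h(x) = c(x) + sum_y P(x,y) h(y) with h(0) = 0."""
    Pd = np.array([P[policy[x]][x] for x in range(n_states)])
    cd = np.array([c[policy[x]][x] for x in range(n_states)])
    A = np.zeros((n_states, n_states))
    A[:, 0] = 1.0                                   # unknown g (average cost)
    A[:, 1:] = np.eye(n_states)[:, 1:] - Pd[:, 1:]  # unknowns h(1), ..., h(n-1)
    sol = np.linalg.solve(A, cd)
    return sol[0], np.concatenate(([0.0], sol[1:]))  # (average cost g, relative values h)

policy = np.zeros(n_states, dtype=int)
while True:
    g, h = evaluate(policy)
    # Improvement step: act greedily with respect to the relative value function h.
    q = np.array([[c[a][x] + P[a][x] @ h for a in range(n_actions)]
                  for x in range(n_states)])
    new_policy = q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal average cost:", g, "optimal policy:", policy)
```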


## THE VANISHING DISCOUNT APPROACH FOR THE AVERAGE CONTINUOUS CONTROL OF PIECEWISE DETERMINISTIC MARKOV PROCESSES

Source: APPLIED PROBABILITY TRUST
Publisher: APPLIED PROBABILITY TRUST

Type: Journal article

ENG

Search relevance: 56.11%

#Piecewise-deterministic Markov process#continuous time#long-run average cost#optimal control#integro-differential optimality inequation#vanishing discount approach#DECISION-PROCESSES#CAPACITY EXPANSION#OPTIMALITY#STABILITY#SPACES

This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions directly related to the primitive data of the PDMP to ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated to the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.; CNPq (Brazilian National Research Council)[304866/03-2]; FAPESP (Research Council of the State of Sao Paulo)[03/06736-7]; ANR[ANR-09-SEGI-004]
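
In simplified notation (placeholders again, not the paper's exact statement), the vanishing discount approach works with the $\alpha$-discounted value functions and their relative differences; conditions like those derived in the paper are what allow passing to the limit below and obtaining an average-cost optimality inequality:

```latex
\[
V_\alpha(x) \;=\; \inf_u\,\mathbb{E}^{u}_{x}\!\left[\int_0^{\infty} e^{-\alpha s} f(X_s,u_s)\,ds\right],
\qquad
h_\alpha(x) \;=\; V_\alpha(x)-V_\alpha(x_0),
\]
% along a sequence alpha_k -> 0:
\[
\rho \;=\; \lim_{k\to\infty}\alpha_k V_{\alpha_k}(x_0),
\qquad
\rho + h(x) \;\ge\; \min_{a\in A(x)}\Bigl\{\,c(x,a)+\int h(y)\,Q(dy\mid x,a)\Bigr\}.
\]
```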


## Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

Source: SPRINGER
Publisher: SPRINGER

Type: Journal article

ENG

Search relevance: 66.2%

#Piecewise-deterministic Markov processes#Continuous-time#Infinite discounted expected cost#Optimal control#Singular perturbation#Mathematics, Applied

This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs), using a singular perturbation approach to deal with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space $\mathbb{R}^n$. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by...
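
A minimal numerical sketch of the aggregation idea, for an uncontrolled finite Markov chain with two fast classes (all generators below are toy data; the controlled PDMP setting of the paper is far richer): within each class the fast dynamics are averaged through their stationary distribution to produce a reduced generator on the aggregated regimes.

```python
import numpy as np

# Fast generator: block-diagonal, irreducible within each class.
Q_fast = np.array([[-1.0,  1.0,  0.0,  0.0],
                   [ 2.0, -2.0,  0.0,  0.0],
                   [ 0.0,  0.0, -3.0,  3.0],
                   [ 0.0,  0.0,  1.0, -1.0]])
# Slow generator: couples the two classes {0, 1} and {2, 3}.
Q_slow = np.array([[-0.5,  0.0,  0.3,  0.2],
                   [ 0.0, -0.4,  0.4,  0.0],
                   [ 0.1,  0.1, -0.2,  0.0],
                   [ 0.2,  0.0,  0.0, -0.2]])
classes = [[0, 1], [2, 3]]

def stationary(block):
    """Stationary distribution nu of an irreducible generator block: nu Q = 0, sum(nu) = 1."""
    n = block.shape[0]
    A = np.vstack([block.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Reduced generator on the classes: q_bar(k, l) = nu_k . Q_slow[class_k, class_l] . 1
nu = [stationary(Q_fast[np.ix_(cls, cls)]) for cls in classes]
Q_bar = np.array([[nu[k] @ Q_slow[np.ix_(classes[k], classes[l])] @ np.ones(len(classes[l]))
                   for l in range(len(classes))] for k in range(len(classes))])
print("reduced generator on the aggregated regimes:\n", Q_bar)
# For small eps, the chain generated by Q_fast/eps + Q_slow behaves like Q_bar on the classes.
```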


## Advances in Regression, Survival Analysis, Extreme Values, Markov Processes and Other Statistical Applications

Source: Springer-Verlag
Publisher: Springer-Verlag

Type: Book

POR

Search relevance: 66.11%

Selected papers of the 17th Congress of the Portuguese Statistical Society, covering recent advances in Statistics, particularly in Regression, Extreme values, Markov processes and statistical applications in several areas.


## A continuous-time semi-Markov Bayesian belief network model for availability measure estimation of fault tolerant systems

Source: Sociedade Brasileira de Pesquisa Operacional
Publisher: Sociedade Brasileira de Pesquisa Operacional

Type: Journal article
Format: text/html

Published 01/08/2008
EN

Search relevance: 56.16%

#semi-Markov processes#Bayesian belief networks#Laplace transforms#availability measure#fault tolerant systems

This work proposes a model for assessing availability measures of fault-tolerant systems, based on the integration of continuous-time semi-Markov processes and Bayesian belief networks. This integration results in a hybrid stochastic model that is able to represent the dynamic characteristics of a system as well as to deal with cause-effect relationships among external factors such as environmental and operational conditions. The hybrid model also allows for uncertainty propagation on the system availability. A numerical procedure is also proposed for the solution of the state probability equations of semi-Markov processes described in terms of transition rates. The numerical procedure is based on the application of Laplace transforms, which are inverted by Gauss-Legendre quadrature. The hybrid model and numerical procedure are illustrated by means of an example of application in the context of fault-tolerant systems.
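
As a much simpler, purely Markovian point of reference (not the hybrid semi-Markov/Bayesian-network model of the paper; the failure and repair rates below are arbitrary), the instantaneous and steady-state availability of a single repairable component follow in closed form from the two-state Kolmogorov equations:

```python
import numpy as np

lam, mu = 0.01, 0.5   # failure rate and repair rate (toy values, per hour)

def availability(t):
    """A(t) for a two-state (up/down) Markov model starting in the up state:
    A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) * t)."""
    s = lam + mu
    return mu / s + (lam / s) * np.exp(-s * t)

t = np.array([0.0, 1.0, 10.0, 100.0])
print("A(t):", availability(t))
print("steady-state availability:", mu / (lam + mu))
```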


## On the accumulated sojourn time in finite-state Markov processes / Lucia Falzon.

Source: University of Adelaide
Publisher: University of Adelaide

Type: Doctoral thesis
Format: 203985 bytes; application/pdf

Published 1997
EN

Search relevance: 56.06%

The subject of this thesis is the joint probability density of the accumulated sojourn time in each state of a Markov process when the initial state is known.; Thesis (Ph.D.)--University of Adelaide, Dept. of Applied Mathematics, 1998?; Bibliography: leaves 80-82.; vi, 82 leaves ; 30 cm.


## Alternative decision modelling techniques for the evaluation of health care technologies: Markov processes versus discrete event simulation

Source: John Wiley & Sons Ltd
Publisher: John Wiley & Sons Ltd

Type: Journal article

Published 2003
EN

Search relevance: 56.16%

Markov models have traditionally been used to evaluate the cost-effectiveness of competing health care technologies that require the description of patient pathways over extended time horizons. Discrete event simulation (DES) is a more flexible, but more complicated decision modelling technique, that can also be used to model extended time horizons. Through the application of a Markov process and a DES model to an economic evaluation comparing alternative adjuvant therapies for early breast cancer, this paper compares the respective processes and outputs of these alternative modelling techniques. DES displays increased flexibility in two broad areas, though the outputs from the two modelling techniques were similar. These results indicate that the use of DES may be beneficial only when the available data demonstrates particular characteristics.; Copyright © 2008 John Wiley & Sons, Ltd.
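
To make the Markov side of such a comparison concrete, a minimal cohort Markov model for a cost-effectiveness calculation is sketched below; the three health states, transition probabilities, costs, utilities, and discount rate are invented for illustration and are unrelated to the breast-cancer application in the paper.

```python
import numpy as np

# States: 0 = disease-free, 1 = recurrence, 2 = dead (absorbing); 1-year cycles.
P = np.array([[0.90, 0.07, 0.03],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
cost    = np.array([1000.0, 8000.0, 0.0])   # cost per cycle spent in each state
utility = np.array([0.90, 0.60, 0.0])       # QALY weight per cycle in each state
discount = 0.035                             # annual discount rate
horizon = 40                                 # number of cycles

dist = np.array([1.0, 0.0, 0.0])             # whole cohort starts disease-free
total_cost = total_qaly = 0.0
for cycle in range(horizon):
    df = 1.0 / (1.0 + discount) ** cycle
    total_cost += df * dist @ cost
    total_qaly += df * dist @ utility
    dist = dist @ P                          # one Markov transition per cycle

print(f"discounted cost:  {total_cost:10.2f}")
print(f"discounted QALYs: {total_qaly:10.3f}")
```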


## Time-dependence in Markovian decision processes.

Source: University of Adelaide
Publisher: University of Adelaide

Type: Doctoral thesis

Published 2008

Search relevance: 46.35%

#telecommunications#Markov decision processes#Markov process#time-dependent processes#phase-type distributions#Markov processes

The main focus of this thesis is Markovian decision processes with an emphasis on incorporating time-dependence into the system dynamics. When considering such decision processes, we provide value equations that apply to a large range of classes of Markovian decision processes, including Markov decision processes (MDPs) and semi-Markov decision processes (SMDPs), time-homogeneous or otherwise. We then formulate a simple decision process with exponential state transitions and solve this decision process using two separate techniques. The first technique solves the value equations directly, and the second utilizes an existing continuous-time MDP solution technique.
To incorporate time-dependence into the transition dynamics of the process, we examine a particular decision process with state transitions determined by the Erlang distribution. Although this process is originally classed as a generalized semi-Markov decision process, we re-define it as a time-inhomogeneous SMDP. We show that even for a simply stated process with desirable state-space properties, the complexity of the value equations becomes so substantial that useful analytic expressions for the optimal solutions for all states of the process are unattainable.
We develop a new technique...
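
The kind of "existing continuous-time MDP solution technique" referred to above can be illustrated, in a toy time-homogeneous discounted setting with made-up rates, by uniformization: the continuous-time MDP with exponential transitions is converted into an equivalent discrete-time discounted MDP, which is then solved by standard value iteration.

```python
import numpy as np

# Toy continuous-time MDP: 2 states, 2 actions, exponential transitions.
# rate[a, x, y] = transition rate from state x to state y (y != x) under action a.
rate = np.array([[[0.0, 1.0],
                  [2.0, 0.0]],
                 [[0.0, 3.0],
                  [0.5, 0.0]]])
cost = np.array([[1.0, 2.0],   # cost[x, a]: cost rate in state x under action a
                 [4.0, 1.0]])
beta = 0.1                     # continuous-time discount rate
n_states, n_actions = 2, 2

# Uniformization: pick Lam >= every total exit rate and build an equivalent
# discrete-time discounted MDP (transition matrices P, discount gamma, cost c_dt).
Lam = rate.sum(axis=2).max()
P = rate / Lam
for a in range(n_actions):
    np.fill_diagonal(P[a], 1.0 - rate[a].sum(axis=1) / Lam)
gamma = Lam / (Lam + beta)
c_dt = cost / (Lam + beta)

# Standard value iteration on the uniformized problem.
V = np.zeros(n_states)
for _ in range(10000):
    Q = np.array([[c_dt[x, a] + gamma * P[a, x] @ V for a in range(n_actions)]
                  for x in range(n_states)])
    V_new = Q.min(axis=1)
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

print("discounted values:", V, "greedy policy:", Q.argmin(axis=1))
```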


## On parameter estimation in population models II: multi-dimensional processes and transient dynamics

Source: Academic Press Inc
Publisher: Academic Press Inc

Type: Journal article

Published 2009
EN

Search relevance: 56.1%

#Ecology#epidemiology#parameter estimation#infectious period distribution#Markov processes#dynamic landscape#stochasticity#diffusion approximations

Recently, a computationally efficient method was presented for calibrating a wide class of Markov processes from discretely sampled abundance data. The method was illustrated with respect to one-dimensional processes and required the assumption of stationarity. Here we demonstrate that the approach may be directly extended to multi-dimensional processes, and two analogous computationally efficient methods for non-stationary processes are developed. These methods are illustrated with respect to disease and population models, including application to infectious count data from an outbreak of "Russian influenza" (A/USSR/1977 H1N1) in an educational institution. The methodology is also shown to provide an efficient, simple and yet rigorous approach to calibrating disease processes with gamma-distributed infectious period.; J. V. Ross, D. E. Pagendam and P. K. Pollett
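
A generic, deliberately simple version of calibrating a Markov population model from discretely sampled abundances is sketched below: a truncated immigration-death process whose likelihood is assembled from transition probabilities $P(\Delta)=e^{Q\Delta}$. The rates, truncation level, and observation sequence are fabricated, and the method of the paper is considerably more sophisticated than this brute-force matrix-exponential approach.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

N_MAX = 50          # truncation level for the population size
DT = 1.0            # sampling interval between observations

def generator(lam, mu):
    """Generator of an immigration-death process on {0, ..., N_MAX}:
    n -> n+1 at rate lam, n -> n-1 at rate mu * n."""
    Q = np.zeros((N_MAX + 1, N_MAX + 1))
    for n in range(N_MAX + 1):
        if n < N_MAX:
            Q[n, n + 1] = lam
        if n > 0:
            Q[n, n - 1] = mu * n
        Q[n, n] = -Q[n].sum()
    return Q

def neg_log_lik(theta, counts):
    lam, mu = np.exp(theta)                  # log scale keeps the rates positive
    P = expm(generator(lam, mu) * DT)        # transition matrix over one sampling interval
    probs = P[counts[:-1], counts[1:]]
    return -np.sum(np.log(np.maximum(probs, 1e-300)))

# Fabricated observation sequence of abundances at times 0, DT, 2*DT, ...
counts = np.array([5, 7, 9, 8, 11, 10, 12, 11, 9, 10])

res = minimize(neg_log_lik, x0=np.log([1.0, 0.1]), args=(counts,), method="Nelder-Mead")
lam_hat, mu_hat = np.exp(res.x)
print(f"estimated immigration rate {lam_hat:.3f}, per-capita death rate {mu_hat:.3f}")
```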


## Optimal smoothing for finite state hidden reciprocal processes

Source: IEEE-Inst Electrical Electronics Engineers Inc
Publisher: IEEE-Inst Electrical Electronics Engineers Inc

Type: Journal article

Published 2011
EN

Search relevance: 46.28%

#Finite state systems#hidden Markov models (HMMs)#Markov processes#optimal smoothing#reciprocal processes (RP)

This technical note addresses modelling and estimation of a class of finite state random processes called hidden reciprocal chains (HRC). A hidden reciprocal chain consists of a finite state reciprocal process, together with an observation process conditioned on the reciprocal process, much as in the case of a hidden Markov model (HMM). The key difference between Markov models and reciprocal models is that reciprocal models are non-causal. The technical note presents a characterization of an HRC by a finite set of hidden Markov bridges, which are HMMs with the final state fixed. It then uses this characterization to derive the optimal fixed-interval smoother for an HRC. The performance of linear and optimal smoothers derived for both the HMM and the HRC is compared (using simulations) for a class of HRCs derived from underlying Markov transitions. These experiments suggest that, not surprisingly, the optimal HMM and HRC smoothers perform significantly better than their linear counterparts, and that some performance improvement is obtained using the HRC smoothers compared to the HMM smoothers. The technical note concludes by mentioning some ongoing and future work which exploits this new Markov bridge characterization of an HRC.; Langford B White and Francesco Carravetta
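
For contrast with the reciprocal case, the standard fixed-interval smoother for an ordinary finite-state HMM (the baseline against which the HRC smoothers are compared) fits in a few lines; the transition matrix, emission matrix, and observation sequence below are toy values.

```python
import numpy as np

A = np.array([[0.9, 0.1],          # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],          # emission matrix: B[state, symbol]
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])          # initial state distribution
obs = [0, 0, 1, 1, 0, 1]           # toy observation sequence

T, N = len(obs), len(pi)
alpha = np.zeros((T, N))           # scaled forward (filtered) probabilities
beta = np.zeros((T, N))            # scaled backward variables
scale = np.zeros(T)

alpha[0] = pi * B[:, obs[0]]
scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    scale[t] = alpha[t].sum(); alpha[t] /= scale[t]

beta[T - 1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / scale[t + 1]

gamma = alpha * beta               # smoothed posteriors P(x_t | all observations)
gamma /= gamma.sum(axis=1, keepdims=True)
print(np.round(gamma, 3))
```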


## Random-step Markov processes

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published 06/10/2014

Search relevance: 46.32%

We explore two notions of stationary processes. The first is called a random-step Markov process in which the stationary process of states, $(X_i)_{i \in \mathbb{Z}}$, has a stationary coupling with an independent process on the positive integers, $(L_i)_{i \in \mathbb{Z}}$, of `random look-back distances'. That is, $L_0$ is independent of the `past states', $(X_i, L_i)_{i<0}$, and for every positive integer $n$, the probability distribution on the `present', $X_0$, conditioned on the event $\{L_0 = n\}$ and on the past is the same as the probability distribution on $X_0$ conditioned on the `$n$-past', $(X_i)_{-n\leq i <0}$ and $\{L_0 = n\}$. A random Markov process is a generalization of a Markov chain of order $n$ and has the property that the distribution on the present given the past can be uniformly approximated given the $n$-past, for $n$ sufficiently large. Processes with the latter property are called uniform martingales, closely related to the notion of a `continuous $g$-function'. We show that every stationary process on a countable alphabet that is a uniform martingale and is dominated by a finite measure is also a random Markov process and that the random variables $(L_i)_{i \in \mathbb{Z}}$ and associated coupling can be chosen so that the distribution on the present given the $n$-past and the event $\{L_0 = n\}$ is `deterministic': all probabilities are in $\{0...
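
A toy simulation may help unpack the definition: at each step an independent look-back distance $L$ is drawn, and the new state is then sampled from a kernel that depends only on the last $L$ states. The alphabet, look-back law, and kernel below are invented, and the look-back is truncated at the available past purely for illustration (the definition itself involves the full infinite past).

```python
import numpy as np

rng = np.random.default_rng(0)

def lookback():
    """Random look-back distance: geometric, P(L = n) = 2**(-n) for n >= 1."""
    return int(rng.geometric(0.5))

def next_state(window):
    """Given the L most recent states, emit 1 with a probability that depends
    only on that window (here: a smoothed majority vote)."""
    p = (sum(window) + 1) / (len(window) + 2)
    return int(rng.random() < p)

history = [0, 1]                       # arbitrary seed of past states
for _ in range(20):
    L = min(lookback(), len(history))  # truncation for illustration only
    history.append(next_state(history[-L:]))

print(history)
```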


## A Compositional Framework for Markov Processes

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.32%

We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs". One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.; Comment: 43 pages...
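
For orientation (ordinary notation, not the categorical formalism of the paper): a continuous-time Markov chain with infinitesimal stochastic generator $H$ evolves by the master equation below, and "opening" the process amounts to additionally letting probability flow in and out at designated boundary (input/output) states.

```latex
\[
\frac{d}{dt}\,p(t) \;=\; H\,p(t),
\qquad
H_{ij}\ \ge\ 0 \ \ (i\neq j),
\qquad
\sum_{i} H_{ij} \;=\; 0 \ \ \text{for every } j.
\]
```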


## Markov vs. nonMarkovian processes: A comment on the paper "Stochastic feedback, nonlinear families of Markov processes, and nonlinear Fokker-Planck equations" by T.D. Frank

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.3%

The purpose of this comment is to correct mistaken assumptions and claims made in the paper "Stochastic feedback, nonlinear families of Markov processes, and nonlinear Fokker-Planck equations" by T. D. Frank. Our comment centers on the claims of a nonlinear Markov process and a nonlinear Fokker-Planck equation. First, memory in transition densities is misidentified as a Markov process. Second, Frank assumes that one can derive a Fokker-Planck equation from a Chapman-Kolmogorov equation, but no proof was given that a Chapman-Kolmogorov equation exists for memory-dependent processes. A nonlinear Markov process is claimed on the basis of a nonlinear diffusion PDE for a 1-point probability density. We show that, regardless of which initial value problem one may solve for the 1-point density, the resulting stochastic process, defined necessarily by the transition probabilities, is either an ordinary linearly generated Markovian one, or else is a linearly generated nonMarkovian process with memory. We provide explicit examples of diffusion coefficients that reflect both the Markovian and the memory-dependent cases. So there is neither a nonlinear Markov process nor a nonlinear Fokker-Planck equation for a transition density. The confusion rampant in the literature arises in part from labeling a nonlinear diffusion equation for a 1-point probability density as nonlinear Fokker-Planck...
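
The objects at issue are the transition density and its two defining equations, stated here in standard one-dimensional textbook form for reference (this is background, not a claim specific to the comment):

```latex
% Chapman-Kolmogorov equation for the transition density, t_0 < s < t:
\[
p(x,t \mid x_0,t_0) \;=\; \int p(x,t \mid y,s)\, p(y,s \mid x_0,t_0)\, dy,
\]
% and the (linear) Fokker-Planck equation satisfied by that transition density:
\[
\frac{\partial p}{\partial t}
  \;=\; -\frac{\partial}{\partial x}\bigl[D^{(1)}(x,t)\,p\bigr]
        \;+\; \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[D^{(2)}(x,t)\,p\bigr].
\]
```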


## Optimal control of semi-Markov processes with a backward stochastic differential equations approach

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Search relevance: 46.29%

In the present work we employ, for the first time, backward stochastic differential equations (BSDEs) to study the optimal control of semi-Markov processes on finite horizon, with general state and action spaces. More precisely, we prove that the value function and the optimal control law can be represented by means of the solution of a class of BSDEs driven by a semi-Markov process or, equivalently, by the associated random measure. The peculiarity of the semi-Markov framework, with respect to the pure jump Markov case, consists in the proof of the relation between BSDE and optimal control problem. This is done, as usual, via the Hamilton-Jacobi-Bellman (HJB) equation, which however in the semi-Markov case is characterized by an additional differential term $\partial_a$. Taking into account the particular structure of semi-Markov processes we rewrite the HJB equation in a suitable integral form which involves a directional derivative operator $D$ related to $\partial_a$. Then, using a formula of Ito type tailor-made for semi-Markov processes and the operator $D$, we are able to prove that the BSDE provides the unique classical solution to the HJB equation, which is shown to be the value function of our control problem.; Comment: arXiv admin note: text overlap with arXiv:1302.0679
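
Schematically, and in simplified notation rather than the exact formulation of the paper, a BSDE driven by the compensated random measure $q$ associated with the semi-Markov process has the form

```latex
\[
Y_t \;=\; \xi \;+\; \int_t^T f\bigl(s, Y_s, Z_s\bigr)\,ds
      \;-\; \int_t^T\!\!\int_K Z_s(y)\; q(ds\,dy),
\qquad t\in[0,T],
\]
```

with terminal condition $\xi$ and unknown pair $(Y, Z)$.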


## Discrete time Non-homogeneous Semi-Markov Processes applied to Models for Disability Insurance

Source: Xarxa de Referència en Economia Aplicada (XREAP)
Publisher: Xarxa de Referència en Economia Aplicada (XREAP)

Type: Working paper
Format: application/pdf

Published 2012
ENG

Search relevance: 56.2%

In this paper, we present a stochastic model for disability insurance contracts. The model is based on a discrete time non-homogeneous semi-Markov process (DTNHSMP) to which the backward recurrence time process is introduced. This permits a more exhaustive study of disability evolution and a more efficient approach to the duration problem. The use of semi-Markov reward processes facilitates the possibility of deriving equations of the prospective and retrospective mathematical reserves. The model is applied to a sample of contracts drawn at random from a mutual insurance company.
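
For reference, the backbone of such models is the discrete-time non-homogeneous semi-Markov evolution equation, stated here in a standard textbook form (without the backward recurrence time that the paper adds):

```latex
\[
\phi_{ij}(s,t) \;=\; \delta_{ij}\bigl(1 - H_i(s,t)\bigr)
  \;+\; \sum_{k}\ \sum_{\tau=s+1}^{t} b_{ik}(s,\tau)\,\phi_{kj}(\tau,t),
\]
```

where $b_{ik}(s,\tau)$ is the probability of entering state $i$ at time $s$ and making the next jump to state $k$ at time $\tau$, $H_i(s,t)=\sum_k\sum_{\tau=s+1}^{t} b_{ik}(s,\tau)$ is the probability of having left state $i$ by time $t$, and $\phi_{ij}(s,t)$ is the probability of occupying state $j$ at time $t$ given entry into state $i$ at time $s$.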


## Stochastic cash flows modelled by homogeneous and non-homogeneous discrete time backward semi-Markov reward processes

Source: Autonomous University of Barcelona
Publisher: Autonomous University of Barcelona

Type: Journal article
Format: application/pdf

Published 2014
ENG

Search relevance: 46.31%

#Stochastic cash flows#Insurance contracts#Discrete time backward semi-Markov processes#Reward processes#Homogeneous and non-homogeneous processes

The main aim of this paper is to give a systematic treatment of the evolution of stochastic cash flows. The tools used for this purpose are discrete time semi-Markov reward processes. The paper is directed not only to semi-Markov researchers but also to a wider public, presenting a full treatment of these tools in both homogeneous and non-homogeneous environments. The main result given in the paper is the natural correspondence of stochastic cash flows with semi-Markov reward processes. Indeed, the semi-Markov environment gives the possibility of following a multi-state random system in which the randomness lies not only in the transition to the next state but also in the time of transition. Furthermore, rewards permit the introduction of a financial environment into the model. Considering all these properties, any stochastic cash flow can be naturally modelled by means of semi-Markov reward processes. The backward case offers the possibility of considering in a complete way the duration inside a state of the studied system, and this fact can be very useful in the evaluation of insurance contracts.


## Convergence of adaptive morphological filters in the context of Markov chains

Source: Rochester Institute of Technology
Publisher: Rochester Institute of Technology

Type: Thesis

EN_US

Search relevance: 56.16%

#Imaging science#TA1637 .C428 1995#Image processing--Mathematics#Digital filters--Mathematics#Markov processes

A typical parameterized r-opening is a filter defined as a union of openings by a collection of compact, convex structuring elements, each of which is governed by a parameter vector r. It reduces to a single-parameter r-opening filter by a set of structuring elements when r is a scalar sizing parameter. The parameter vector is adjusted by a set of adaptation rules according to whether the reconstruction derived from the r-opening correctly or incorrectly passes the signal and noise grains sampled from the image. Applied to the signal-union-noise model, the optimization problem is to find the parameter vector r that minimizes the mean-absolute error between the filtered and ideal image processes. The adaptive r-opening filter fits into the framework of Markov processes, the adaptive parameter being the state of the process. For a single-parameter r-opening filter, we prove that there exists a stationary distribution governing the parameter in the steady state, and convergence is characterized in terms of the steady-state distribution. Key filter properties such as parameter mean, parameter variance, and expected error in the steady state are characterized via the stationary distribution. Steady-state behavior is compared to the optimal solution for the uniform model...


## Lumpable Hidden Markov Models - Model Reduction and Reduced Complexity Filtering

Source: Institute of Electrical and Electronics Engineers (IEEE Inc)
Publisher: Institute of Electrical and Electronics Engineers (IEEE Inc)

Type: Journal article

Search relevance: 56.2%

#Algorithms#Approximation theory#Computer simulation#Markov processes#Mathematical models#Matrix algebra#Lumpable hidden Markov models#Signal filtering and prediction

This paper is concerned with filtering of hidden Markov processes (HMPs) which possess (or approximately possess) the property of lumpability. This property is a generalization of the property of lumpability of a Markov chain which has been previously addressed by others. In essence, the property of lumpability means that there is a partition of the (atomic) states of the Markov chain into aggregated sets which act in a similar manner as far as the state dynamics and observation statistics are concerned. We prove necessary and sufficient conditions on the HMP for exact lumpability to hold. For a particular class of hidden Markov models (HMMs), namely finite output alphabet models, conditions for lumpability of all HMPs representable by a specified HMM are given. The corresponding optimal filter algorithms for the aggregated states are then derived. The paper also describes an approach to efficient suboptimal filtering for HMPs which are approximately lumpable. By this we mean that the HMM generating the process may be approximated by a lumpable HMM. This approach involves directly finding a lumped HMM which approximates the original HMM well, in a matrix norm sense. An alternative approach for model reduction based on approximating a given HMM by an exactly lumpable HMM is also derived. This method is based on the alternating convex projections algorithm. Some simulation examples are presented which illustrate the performance of the suboptimal filtering algorithms.
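
For the underlying Markov-chain notion, (strong) lumpability with respect to a partition is easy to state and to check numerically: all states in a block must have the same total transition probability into each block. The sketch below checks this condition and builds the lumped chain for a toy transition matrix; the HMP conditions proved in the paper additionally involve the observation statistics.

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.1, 0.1, 0.4, 0.4],
              [0.0, 0.2, 0.5, 0.3]])
partition = [[0, 1], [2, 3]]          # candidate aggregation into two blocks

def lumped(P, partition, tol=1e-12):
    """Return the lumped transition matrix if P is (strongly) lumpable w.r.t.
    the partition, i.e. rows within a block have identical block sums."""
    k = len(partition)
    P_bar = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            block_sums = P[np.ix_(block_a, block_b)].sum(axis=1)
            if np.ptp(block_sums) > tol:
                raise ValueError(f"not lumpable: block {a} -> block {b}")
            P_bar[a, b] = block_sums[0]
    return P_bar

print(lumped(P, partition))
```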


## Risk-sensitive filtering and smoothing for continuous-time Markov processes

Source: Institute of Electrical and Electronics Engineers (IEEE Inc)
Publisher: Institute of Electrical and Electronics Engineers (IEEE Inc)

Type: Journal article

Search relevance: 56.11%

#Approximation theory#Brownian movement#Computer simulation#Integral equations#Markov processes#Mathematical models#Mathematical transformations#Partial differential equations#Poisson distribution#Probability#Theorem proving

We consider risk-sensitive filtering and smoothing for a dynamical system whose output is a vector process in ℝ². The components of the observation process are a Markov process observed through a Brownian motion and a Markov process observed through a P
