Edge-preserving smoothing by non-linear anisotropic diffusion is a mathematical model based on a partial differential equation (PDE), offered as an alternative to conventional low-pass filters. The smoothing is selective: homogeneous areas of the image are smoothed intensively as the temporal evolution of the model proceeds. The level of smoothing is related to the amount of undesired information contained in the image; that is, the model targets an optimal level of smoothing, eliminating the undesired information while selectively keeping the features of interest for cartography. The model is essential for cartographic applications: its function is to pre-process the image without losing edges and other important details, mainly airport runways and paved roads. Experiments carried out with digital images showed that the methodology extracts such features, e.g. airport runways, efficiently.
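The selective smoothing described above can be sketched with the classic Perona-Malik diffusion scheme, the standard instance of non-linear anisotropic diffusion. The abstract does not name its exact discretisation, so this is an illustrative sketch; the function name and parameter values are ours:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving edges (high-gradient pixels diffuse less)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients towards the four neighbours
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small where gradients are large
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```

The conductance terms approach zero at strong gradients, so homogeneous areas are smoothed while edges such as runway borders are left largely intact; `kappa` sets the gradient magnitude regarded as an edge.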
This thesis aims to compare the linguistic criteria used for the placement of blank spaces by children considered typical and by one child given the generic diagnosis of "reading and writing disorder". The work rests on the following theoretical bases: (i) the discourse-based neurolinguistics (ND) developed at IEL, more specifically the theoretical framework favoured by ND concerning studies of the normal/pathological relation and the brain/language relation; (ii) studies of the acquisition of writing that start from the subject/language relation and the speech/orality/writing/literacy relation. Our hypothesis is that, even if differences are found in the linguistic criteria for placing blank spaces between the children considered typical and the child who is not, such differences would probably result from processes already provided for by the language as a condition of possibility of discursive activity. For this study, 15 children attending the second year of the first cycle of elementary school, in the same classroom of a private school in the municipality of Hortolândia (SP), were selected. Among these children...
Defining myocardial contours is often the most time-consuming portion of dynamic cardiac MRI image analysis. Displacement encoding with stimulated echoes (DENSE) is a quantitative MRI technique that encodes tissue displacement into the phase of the complex MRI images. Cine DENSE provides a time series of these images, thus facilitating the non-invasive study of myocardial kinematics. Epicardial and endocardial contours need to be defined at each frame of cine DENSE images for the quantification of regional displacement and strain as a function of time. This work presents a reliable and effective two-dimensional semi-automated segmentation technique that uses the encoded motion to project a manually defined region of interest through time. Contours can then easily be extracted for each cardiac phase. This method has several advantages: (1) parameters are based on practical physiological limits; (2) contours are calculated for the first few cardiac phases, where it is difficult to visually distinguish blood from myocardium; and (3) the method is independent of the shape of the delineated tissue and can be applied to short- or long-axis views and to arbitrary regions of interest. Motion-guided contours were compared to manual contours for six conventional and six slice-followed mid-ventricular short-axis cine DENSE datasets. Using an area measure of segmentation error...
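The core idea, advecting a manually drawn contour with the encoded motion, can be sketched as follows. This is a toy sketch with nearest-neighbour sampling; the paper's actual interpolation and parameter choices are not specified here, and `propagate_contour` is our own name:

```python
import numpy as np

def propagate_contour(contour, disp_fields):
    """Propagate a manually drawn contour through time using
    DENSE-style displacement fields (one (H, W, 2) array per frame).
    Each field gives the displacement of a reference-frame pixel at
    that cardiac phase; contour points are moved accordingly."""
    contours = []
    for field in disp_fields:
        rows = np.clip(np.round(contour[:, 0]).astype(int), 0, field.shape[0] - 1)
        cols = np.clip(np.round(contour[:, 1]).astype(int), 0, field.shape[1] - 1)
        contours.append(contour + field[rows, cols])
    return contours
```

Because every phase's contour is derived from the same reference-frame region of interest, the method needs manual input only once, even for early phases where blood and myocardium are hard to tell apart visually.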
Breast cancer is a major public health problem for women in Iran and many other parts of the world. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a pivotal role in breast cancer care, including detection, diagnosis, and treatment monitoring. However, segmentation of these images, which are seriously affected by intensity inhomogeneities created by radio-frequency coils, is a challenging task. The Markov Random Field (MRF) is widely used in medical image segmentation, especially for MR images, because it can model the intensity inhomogeneities occurring in them. However, the method has two critical weaknesses: computational complexity and sensitivity of the results to the model's parameters. To overcome these problems, in this paper we present the Improved Markov Random Field (I-MRF) method for breast lesion segmentation in MR images. Unlike the conventional MRF, the proposed approach does not use the Iterated Conditional Modes (ICM) method or Simulated Annealing (SA) for class-membership estimation of each pixel (lesion and non-lesion). The prior distribution of the class membership is modeled as a ratio of two conditional probability distributions in a neighborhood defined for each pixel: the probability distribution of similar pixels and that of non-similar ones. Since our proposed approach does not use an iterative method for maximizing the posterior probability...
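The prior described above is stated as a ratio of two conditional distributions over similar and non-similar pixels in a neighborhood. A heavily simplified, hypothetical sketch of that idea, counting similar versus dissimilar neighbours with add-one smoothing rather than estimating full distributions, might look like:

```python
import numpy as np

def similarity_ratio_prior(patch, tol=10.0):
    """Hedged sketch of the I-MRF prior idea: in a neighbourhood
    around the centre pixel, compare how many pixels are 'similar'
    (within tol of the centre intensity) with how many are not.
    A large ratio favours the centre sharing its neighbours' class."""
    c = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    diff = np.abs(patch - c)
    similar = np.count_nonzero(diff <= tol) - 1  # exclude the centre itself
    dissimilar = np.count_nonzero(diff > tol)
    return (similar + 1) / (dissimilar + 1)  # add-one smoothed ratio
```

A homogeneous neighbourhood yields a large ratio (strong prior to keep the local class), while a neighbourhood straddling a lesion boundary yields a ratio near one.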
A three-dimensional balanced steady-state free precession (b-SSFP) Dixon technique with a novel group-encoded k-space segmentation scheme called GUINNESS (Group-encoded Ungated Inversion Nulling for Non-contrast Enhancement in the Steady State) was developed. GUINNESS was evaluated for breath-held non-contrast-enhanced MR angiography of the renal arteries in 18 subjects (6 healthy volunteers, 12 patients) at 3.0 T. The method provided renal angiograms with high signal-to-noise ratio and contrast, homogeneous fat and background suppression, and high spatial resolution and coverage, in short breath-holds on the order of 20 seconds. GUINNESS has potential as a short breath-hold alternative to conventional respiratory-gated methods, which are often suboptimal in pediatric subjects and in patients with significant diaphragmatic drift or sleep apnea.
Magnetic resonance (MR) imaging-based virtual cystoscopy (VCys), a non-invasive, safe and cost-effective technique, has shown promise for early diagnosis and recurrence management of bladder carcinoma. One primary goal of VCys is to identify bladder lesions with abnormal bladder wall thickness; consequently, a precise segmentation of the inner and outer borders of the wall is required. In this paper, we propose a unified expectation-maximization (EM) approach to the maximum-a-posteriori (MAP) solution of bladder wall segmentation, integrating a novel adaptive Markov random field (AMRF) model and coupled level-set (CLS) information into the prior term. The proposed approach is applied to the segmentation of T1-weighted MR images, in which the wall is enhanced while the urine and surrounding soft tissues are suppressed. By introducing scale-adaptive neighborhoods as well as adaptive weights into the conventional MRF model, the AMRF model takes the local information into account more accurately. To mitigate the influence of image artifacts adjacent to the bladder wall and to preserve the continuity of the wall surface, we apply geometrical constraints on the wall using our previously developed CLS method. This paper not only evaluates the robustness of the presented approach against the known ground truth of simulated digital phantoms...
Conventional automated segmentation techniques for magnetic resonance imaging (MRI) fail to perform in a robust and consistent manner when brain anatomy differs wildly from expectations – as is often the case in brain cancers. We propose a novel out-of-atlas technique to estimate the spatial extent of abnormal brain regions by combining multi-atlas based segmentation with semi-local non-parametric intensity analysis. In a study with 30 clinically-acquired MRI scans of patients with malignant gliomas and 29 atlases of normal anatomy from research acquisitions, we demonstrate that this technique robustly identifies cancerous regions. The resulting segmentations could be used to study cancer morphometrics or guide selection/application/refinement of tumor analysis models or regional image quantification approaches.
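The semi-local non-parametric intensity analysis can be illustrated with a small Gaussian kernel density estimate: intensities observed at corresponding atlas locations define a density of "normal" values, and target voxels falling in its tails are flagged as out of atlas. The function name, bandwidth, and threshold below are illustrative assumptions, not the paper's actual estimator:

```python
import numpy as np

def out_of_atlas_flags(target_vals, atlas_vals, bandwidth=5.0, thresh=1e-4):
    """Non-parametric intensity check (sketch): estimate the density of
    intensities seen at corresponding locations across normal atlases
    with a Gaussian KDE, and flag target voxels whose intensity is
    improbable under that density as 'out of atlas'."""
    z = (target_vals[:, None] - atlas_vals[None, :]) / bandwidth
    dens = np.exp(-0.5 * z ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    return dens < thresh
```

Because the density is built from normal anatomy only, no tumor model is needed; anything the atlases cannot explain is flagged, which matches the out-of-atlas framing above.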
Serotonin (5-HT) has been recognized for decades as an important signaling molecule in the gut, but it is still revealing its secrets. We continue to discover novel gastrointestinal (GI) functions of 5-HT, as well as actions of gut-derived 5-HT outside of the gut, and we are learning how 5-HT signaling is altered in GI disorders. Furthermore, new therapeutic targets related to 5-HT signaling are being identified that can hopefully be exploited to alleviate the symptoms of functional GI disorders. Conventional functions of 5-HT in the gut involving intrinsic reflexes include stimulation of propulsive and segmentation motility patterns, epithelial secretion, and vasodilation. Activation of extrinsic vagal and spinal afferent fibers results in slowed gastric emptying, pancreatic secretion, satiation, pain and discomfort, as well as nausea and vomiting. Within the gut, 5-HT also exerts non-conventional actions that include serving as a pro-inflammatory signaling molecule and as a trophic factor to promote the development and maintenance of neurons and interstitial cells of Cajal. Platelet 5-HT, which comes from the gut, can promote hemostasis, influence bone development, and contribute to allergic airway inflammation. 5-HT3 receptor antagonists and 5-HT4 receptor agonists have been used to treat functional disorders with diarrhea or constipation...
Segmenting the prostate from MR images is important yet challenging. Due to the non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) shows limited performance. Although the newly developed sparse dictionary learning method [1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate a deformable model with a novel learning scheme, namely Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue-discriminative power of DDD. First, minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning to a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning global dictionaries, we learn a set of local dictionaries for local regions (each with small appearance variations) along the prostate boundary...
Source/Publisher: Universidade Federal de Pelotas; Educação; Programa de Pós-Graduação em Educação; UFPel; BR
This dissertation, based on the analysis of non-conventional segmentation data produced by Brazilian and Portuguese children, shows the importance of early writing data for discussing the linguistic rhythms of Brazilian Portuguese (BP) and European Portuguese (EP). The data were collected from texts spontaneously written by Brazilian and Portuguese children attending the first grades of Elementary School (Pelotas, RS, Brazil) and Basic School (Porto, Portugal). All texts belong to a database named Banco de Textos de Aquisição da Escrita, at the Universidade Federal de Pelotas (FaE/UFPel). Studies of rhythm in phonology have been rather polemical and controversial, mainly those discussing the rhythmic classification of languages. Abaurre and Galves (1998), working within an optimality-theoretic and minimalist approach, developed a study of the rhythmic differences between BP and EP. The authors hold that the rhythm of every language results from the hierarchization of three principles: the integrity of the phonological word, the trochaic foot, and foot binarity. After the detailed description and analysis of data linking the prosodic constituents, the phonological processes and the stress...
Source/Publisher: Universidade Federal de Pelotas; Educação; Programa de Pós-Graduação em Educação; UFPel; BR
The theme of this thesis is non-conventional segmentation
processes, hyposegmentations, hypersegmentations and hybrid forms, which can
be found in texts written by students who take part in EJA, a government project
for the education of the youth and the adult. In this study, non-conventional
segmentations of words are considered structures that arise from the hypotheses
which the subjects have formulated in their writing acquisition process
(FERREIRO & TEBEROSKY, 1999). This information is valuable linguistic material
since it can give clues regarding the phonological knowledge these learners use
when they write (ABAURRE, 1991). From this perspective, this study aimed at
describing and analyzing the processes which adults go through during their
literacy process when they face the task of segmenting their writing according to
conventions, as well as comparing these results to the ones related to child writing
in order to check the adequacy of categories proposed by Cunha (2004) to the
analysis of non-conventional segmentation in texts written by EJA students. The
data used for analysis were extracted from texts written by three EJA students and
collected longitudinally in text production workshops which aimed at creative and
spontaneous texts. Results show that clitics...
The ultimate goal of many applications of augmented reality is to immerse the user into the augmented scene, which is enriched with virtual models. In order to achieve this immersion, it is necessary to create the visual impression that the graphical objects are a natural part of the user’s environment. Producing this effect with conventional computer graphics algorithms is a complex task. Various rendering artifacts in the three-dimensional graphics create a noticeable visual discrepancy between the real background image and virtual objects.
We have recently proposed a novel approach to generating an augmented video stream. With this new method, the output images are a non-photorealistic reproduction of the augmented environment. Special stylization methods are applied to both the background camera image and the virtual objects. This way the visual realism of both the graphical foreground and the real background image is reduced, so that they are less distinguishable from each other.
Here, we present a new method for the cartoon-like stylization of augmented reality images, which uses a novel post-processing filter for cartoon-like color segmentation and high-contrast silhouettes. In order to make a fast postprocessing of rendered images possible...
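A toy CPU version of cartoon-like stylization, in the spirit of the filter described (flat colour regions plus high-contrast silhouettes) but not the paper's actual GPU post-processing filter, could look like:

```python
import numpy as np

def cartoon_stylize(img, n_levels=6, edge_thresh=0.3):
    """Toy cartoon filter sketch: quantise intensities into flat
    regions (colour segmentation) and overlay dark silhouettes where
    the gradient magnitude is high. Values are assumed in [0, 1]."""
    g = img.mean(axis=-1) if img.ndim == 3 else img
    gy, gx = np.gradient(g)
    edges = np.hypot(gx, gy) > edge_thresh      # silhouette mask
    out = np.floor(img * n_levels) / n_levels   # flat colour bands
    out[edges] = 0.0                            # draw silhouettes
    return out
```

Reducing the background image and the virtual objects to the same flat-shaded, outlined look is what makes the two visually harder to tell apart in the augmented stream.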
The objective of this thesis is to find and analyse practical numerical algorithms for the minimisation and gradient-flows of the Mumford-Shah and Mumford-Shah-Euler functionals for unit vector fields.
The motivation for these questions is twofold: First, these are interesting model-problems combining non-convex functionals with a non-convex constraint, as an extension of existing works on harmonic maps to the sphere.
Second, both functionals were originally introduced in image processing: The Mumford-Shah functional for segmentation, and the Mumford-Shah-Euler functional for inpainting; and the sphere-constraint can be used to implement the chromaticity and brightness colour model in this context.
In the first part of the thesis, two schemes for the minimisation of the Mumford-Shah functional for unit-vector fields are presented and discretised using first-order finite elements.
The first scheme uses a projection approach to enforce the sphere-constraint. It works well in simulations, but we only have partial convergence results.
The second scheme uses a penalisation approach, which only approximates the sphere-constraint, but allows for a complete proof of convergence.
In the second part of the thesis, two schemes for the gradient-flow of the Mumford-Shah-Euler functional for unit-vector fields are presented and discretised...
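The projection approach of the first scheme can be illustrated by its key step: after each minimisation or gradient-flow step, the vector at every node is renormalised back onto the unit sphere. This sketches only the constraint-handling step, not the finite-element scheme itself:

```python
import numpy as np

def project_to_sphere(field, eps=1e-12):
    """Nodal projection enforcing the unit-sphere constraint |u| = 1:
    each vector in the field (last axis = vector components) is
    renormalised; eps guards against division by a zero norm."""
    norms = np.linalg.norm(field, axis=-1, keepdims=True)
    return field / np.maximum(norms, eps)
```

The penalisation approach of the second scheme would instead add a term like (|u|^2 - 1)^2 to the energy, trading exact constraint satisfaction for a cleaner convergence analysis.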
The problem of segmenting a given image into coherent regions is important in Computer Vision and many industrial applications require segmenting a known object into its components. Examples include identifying individual parts of a component for process control work in a manufacturing plant and identifying parts of a car from a photo for automatic damage detection. Unfortunately most of an object's parts of interest in such applications share the same pixel characteristics, having similar colour and texture. This makes segmenting the object into its components a non-trivial task for conventional image segmentation algorithms. In this paper, we propose a "Model Assisted Segmentation" method to tackle this problem. A 3D model of the object is registered over the given image by optimising a novel gradient based loss function. This registration obtains the full 3D pose from an image of the object. The image can have an arbitrary view of the object and is not limited to a particular set of views. The segmentation is subsequently performed using a level-set based method, using the projected contours of the registered 3D model as initialisation curves. The method is fully automatic and requires no user interaction. Also, the system does not require any prior training. We present our results on photographs of a real car.
In this thesis we investigate the problem of image restoration. The main focus of
our research is to develop novel algorithms and enhance existing techniques in order to deliver efficient and effective methodologies applicable in real-time image restoration scenarios. Our research starts with a literature review, which identifies the gaps in existing techniques and leads us to a novel classification of image restoration methods, one that integrates and discusses more recent developments in the area. With this classification, we identified three major areas which need our attention.
The first developments relate to non-blind image restoration. The two most widely used families of techniques, deterministic linear algorithms and stochastic nonlinear algorithms, are compared and contrasted. Under deterministic linear algorithms, we develop a class of more effective novel quadratic linear regularization models, which outperform the existing linear regularization models. In addition, looking from a new perspective, we evaluate and compare the performance of deterministic and stochastic restoration algorithms and examine the validity of the performance claims made so far for those algorithms. Further, we critically challenge the necessity of some complex mechanisms in the Maximum A Posteriori (MAP) technique under stochastic image deconvolution algorithms.
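A deterministic linear algorithm with a quadratic regulariser admits a closed-form Fourier-domain solution; the following Tikhonov-regularised deconvolution is the textbook instance of this family (the thesis's own quadratic models are not reproduced here):

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Closed-form deterministic linear restoration with a quadratic
    (Tikhonov) regulariser:  U = conj(H) G / (|H|^2 + lam),
    where H and G are the Fourier transforms of the PSF and of the
    blurred image, under a circular-convolution model."""
    H = np.fft.fft2(psf, s=blurred.shape)   # zero-pad PSF to image size
    G = np.fft.fft2(blurred)
    U = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(U))
```

The regularisation weight `lam` damps the frequencies where the blur kernel response is near zero, which is exactly where unregularised inverse filtering amplifies noise.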
The next developments focus on blind image restoration...
The central objective of this work is to describe and analyse the erasures in non-conventional word segmentations found during the process of language acquisition, in texts produced by two students, from pre-school through fourth grade, at the same private school in the municipality of Campinas (SP). Based on Corrêa's (2004) concept of the heterogeneity of writing, we investigate the relation that the writing subject establishes with language at the moment of constructing a text and, above all, how this subject's insertion in literate practices may contribute to or influence their work with writing. To work with the clues of the writer's transit through oral/spoken and literate/written social practices, we take as a model the Evidential Paradigm proposed by Ginzburg (1986), which treats the observation of details as clues to a larger process. The analysis of the structures of erasures in non-conventional word segmentation takes into account the prosodic domains proposed by Nespor & Vogel (1989), as has been done by Capristano (2013). From the survey of the data found in these texts, it proved necessary to compare them with the results obtained during the undergraduate research project (FAPESP 2011/06602-7)...
The objective of this study is to develop a patch-based labeling method that
cooperates with a label fusion using non-rigid registrations. We present a
novel patch-based label fusion method, whose selected patches and their weights
are calculated from a combination of similarity measures between patches using
intensity-based distances and labeling-based distances, where a previous
labeling of the target image is inferred through a label fusion method using
non-rigid registrations. These combined similarity measures result in better
selection of the patches, and their weights are more robust, which improves the
segmentation results compared to other label fusion methods, including the
conventional patch-based labeling method. To evaluate the performance and the
robustness of the proposed label fusion method, we employ two available
databases of T1-weighted (T1W) magnetic resonance imaging (MRI) of human
brains. We compare our approach with other label fusion methods in the
automatic hippocampal segmentation from T1W-MRI.
Our label fusion method yields mean Dice coefficients of 0.847 and 0.798 for
the two databases used with mean times of approximately 180 and 320 seconds,
respectively. The collaboration between the patch-based labeling method and the
label fusion using non-rigid registrations operates at several levels: (a)
the pre-selection of the patches in the atlases is improved...
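The combined similarity idea, weighting each atlas patch by both an intensity-based distance and a labeling-based distance to a prior labeling of the target, can be sketched as below. The exponential weighting form and the `alpha` and `h` parameters are illustrative assumptions; the paper's exact measures are not reproduced:

```python
import numpy as np

def fuse_label(target_patch, prior_label_patch, atlas_patches,
               atlas_label_patches, atlas_center_labels, h=1.0, alpha=0.5):
    """Patch-based label fusion sketch: each atlas patch votes for its
    centre label with a weight combining an intensity distance to the
    target patch and a label distance to a prior labeling of the
    target (e.g. one inferred via non-rigid registration)."""
    votes = {}
    for img_p, lab_p, label in zip(atlas_patches, atlas_label_patches,
                                   atlas_center_labels):
        d_int = np.sum((target_patch - img_p) ** 2)
        d_lab = np.sum((prior_label_patch - lab_p) ** 2)
        w = np.exp(-(alpha * d_int + (1 - alpha) * d_lab) / h ** 2)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Using the prior labeling in the distance is what lets registration-based fusion and patch-based fusion collaborate: atlas patches that disagree with the registration-derived labels get down-weighted even when their intensities match.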
We present LS-CRF, a new method for very efficient large-scale training of
Conditional Random Fields (CRFs). It is inspired by existing closed-form
expressions for the maximum likelihood parameters of a generative graphical
model with tree topology. LS-CRF training requires only solving a set of
independent regression problems, for which closed-form expression as well as
efficient iterative solvers are available. This makes it orders of magnitude
faster than conventional maximum likelihood learning for CRFs that require
repeated runs of probabilistic inference. At the same time, the models learned
by our method still allow for joint inference at test time. We apply LS-CRF to
the task of semantic image segmentation, showing that it is highly efficient,
even for loopy models where probabilistic inference is problematic. It allows
the training of image segmentation models from significantly larger training
sets than had been used previously. We demonstrate this on two new datasets
that form a second contribution of this paper. They consist of over 180,000
images with figure-ground segmentation annotations. Our large-scale experiments
show that the possibilities of CRF-based image segmentation are far from
exhausted, indicating, for example...
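The building block of LS-CRF-style training is a closed-form least-squares solve per factor, in place of repeated probabilistic inference; a minimal ridge-regression version of that solve (our own sketch, not the paper's exact parameterisation) is:

```python
import numpy as np

def ridge_closed_form(X, Y, lam=1e-3):
    """Closed-form ridge regression, the kind of independent
    least-squares subproblem LS-CRF training reduces to:
        W = (X^T X + lam I)^{-1} X^T Y
    for features X (n x d) and targets Y (n x k)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

Because each such subproblem is independent and has a closed form, training scales to very large image sets; only test-time prediction still needs joint inference over the learned potentials.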
It is common in the oil industry to complete horizontal wells selectively. Indeed, this selection sometimes occurs naturally, since reservoir heterogeneity may cause segmented well performance. Segments may be only partially open to flow because of a high skin factor or low-permeability bands, which can be treated as a non-uniform skin distribution. A few models have been introduced to capture these details. Existing interpretation methodologies use non-linear regression analysis and the TDS technique, but equations for the conventional straight-line technique have been absent. In this study, the conventional methodology is developed for the analysis of pressure-transient tests in horizontal wells with isolated zones, so that directional permeabilities and skin factors can be obtained. The developed expressions were tested successfully against several examples reported in the literature and compared to results from other sources.
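As an illustration of what a conventional straight-line relation looks like, the classic field-units radial-flow result computes permeability from the semilog slope m (psi per log cycle); the paper's horizontal-well expressions generalise this kind of relation but are not reproduced here:

```python
def permeability_from_semilog_slope(q_stb_d, B_rb_stb, mu_cp, m_psi_cycle, h_ft):
    """Conventional semilog analysis (field units): for radial flow,
    k [md] = 162.6 q B mu / (m h), with rate q in STB/D, formation
    volume factor B in rb/STB, viscosity mu in cp, semilog slope m
    in psi/log cycle, and net thickness h in ft."""
    return 162.6 * q_stb_d * B_rb_stb * mu_cp / (m_psi_cycle * h_ft)
```

Each flow regime of a horizontal well with isolated zones yields its own straight line on the appropriate plot, and the slopes give the directional permeabilities in the same spirit.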