Page 1 of results: 12,099 digital items found in 0.048 seconds

"Recuperação de imagens por conteúdo através de análise multiresolução por Wavelets" ; "Content based image retrieval through multiresolution wavelet analysis

Castañon, Cesar Armando Beltran
Source: Biblioteca Digital de Teses e Dissertações da USP | Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's dissertation | Format: application/pdf
Published 28/02/2003 | Language: PT
Search relevance: 35.88%
Content-based image retrieval (CBIR) systems are able to return images using other images as the search key. Given a query image, the goal of a CBIR system is to search the database for the "n" images most similar to the query according to a given criterion. This research work was directed at generating feature vectors for a CBIR system over medical image databases, in order to support this kind of query. A feature vector is a succinct numerical representation of an image, or of part of it, describing its most representative details; it is an "n"-dimensional vector containing these values. This new representation of the image can be stored in a database, thereby speeding up the image retrieval process. An alternative approach to characterizing images for a CBIR system is domain transformation. The main advantage of a transform is its effective characterization of local image properties. Recently, researchers in applied mathematics and signal processing have developed practical "wavelet" techniques for multiscale signal representation and analysis. These new tools differ from traditional Fourier techniques in the way they localize information in the time-frequency plane; basically...
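A minimal sketch of the kind of wavelet feature vector described above, using PyWavelets: the descriptor is the energy of each detail subband and retrieval is a Euclidean nearest-neighbour ranking. Both choices are illustrative assumptions, not the dissertation's exact method.

```python
# Wavelet-energy feature vectors for content-based retrieval (illustrative sketch).
import numpy as np
import pywt

def wavelet_feature_vector(image, wavelet="db2", levels=3):
    """Return a small feature vector: the energy of each detail subband."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per decomposition level
        for band in detail:
            feats.append(np.mean(band ** 2))  # subband energy
    return np.asarray(feats)

def retrieve(query, database, k=5):
    """Rank database images by Euclidean distance in feature space."""
    q = wavelet_feature_vector(query)
    dists = [np.linalg.norm(q - wavelet_feature_vector(img)) for img in database]
    return np.argsort(dists)[:k]              # indices of the k most similar images

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [rng.random((64, 64)) for _ in range(20)]
    print(retrieve(db[3], db, k=3))           # the query itself should rank first
```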

Exploração visual do espaço de características: uma abordagem para análise de imagens via projeção de dados multidimensionais; Visual feature space exploration: an approach to image analysis via multidimensional data projection

Machado, Bruno Brandoli
Source: Biblioteca Digital de Teses e Dissertações da USP | Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's dissertation | Format: application/pdf
Published 13/12/2010 | Language: PT
Search relevance: 35.9%
Image analysis systems start from the premise that the dataset under investigation is correctly represented by features. However, defining which features appropriately represent a dataset is a challenging and exhausting task. Most description techniques in the literature, especially when the data are high-dimensional, are based purely on statistical measures or on artificial-intelligence approaches, and they are usually black boxes to the user. The approach proposed in this dissertation seeks to open this black box by means of visual representations created with the Multidimensional Classical Scaling technique, allowing users to interactively grasp how representative the features computed by different descriptors are. The approach is evaluated on six image datasets containing textures, medical images, and natural scenes. The experiments show that, as a combination of features improves the quality of the visual representation, classification accuracy also improves. The quality of the representations is measured with the silhouette index, overcoming problems related to the subjectivity of conclusions based purely on visual analysis. Furthermore...
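A rough sketch of the projection-plus-quality-index loop the abstract describes: classical (Torgerson) MDS lays out the feature vectors in 2-D and the silhouette index scores the layout. The data and labels below are synthetic stand-ins.

```python
# Classical MDS projection of feature vectors, scored with the silhouette index.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

def classical_mds(X, n_components=2):
    """Classical MDS: eigendecomposition of the double-centered squared distances."""
    D2 = squareform(pdist(X)) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 1, (50, 30)), rng.normal(3, 1, (50, 30))])
labels = np.array([0] * 50 + [1] * 50)

projection = classical_mds(features)
print("silhouette of the 2-D layout:", silhouette_score(projection, labels))
```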

Feature extraction and visualization from higher-order CFD data; Extração de estruturas e visualização de soluções de DFC de alta ordem

Pagot, Christian Azambuja
Source: Universidade Federal do Rio Grande do Sul | Publisher: Universidade Federal do Rio Grande do Sul
Type: Doctoral thesis | Format: application/pdf
Language: ENG
Search relevance: 35.9%
Simulation methods based on computational fluid dynamics (CFD) have been employed in several fields of study, such as aeroacoustics, gas dynamics, and viscoelastic fluids, among others. However, the demand for greater accuracy and performance from these methods has given rise to solutions represented by increasingly complex datasets. In this context, techniques for extracting relevant structures (features) and then visualizing them play a very important role, making the analysis of simulation data easier and more intuitive. Feature-extraction methods detect and isolate elements that are significant in the context of the data analysis. In fluid analysis, these structures may be pressure isosurfaces, vortices, separation lines, and so on. Visualization, in turn, assigns visual attributes to these structures, allowing a more intuitive analysis through visual inspection. Traditionally, CFD methods represent their solutions as linear functions defined over elements of the domain. However, the evolution of these methods has given rise to solutions represented analytically by higher-order functions. Although these methods exhibit desirable properties in terms of efficiency and accuracy...

Principal Feature Analysis: A Multivariate Feature Selection Method for fMRI Data

Wang, Lijun; Lei, Yu; Zeng, Ying; Tong, Li; Yan, Bin
Source: Hindawi Publishing Corporation | Publisher: Hindawi Publishing Corporation
Type: Journal article
Language: EN
Search relevance: 45.9%
Brain decoding with functional magnetic resonance imaging (fMRI) requires analysis of complex, multivariate data. Multivoxel pattern analysis (MVPA) has been widely used in recent years. MVPA treats the activation of multiple voxels from fMRI data as a pattern and decodes brain states using pattern classification methods. Feature selection is a critical procedure of MVPA because it decides which features will be included in the classification analysis of fMRI data, thereby improving the performance of the classifier. Features can be selected by limiting the analysis to specific anatomical regions or by computing univariate (voxel-wise) or multivariate statistics. However, these methods either discard some informative features or select features with redundant information. This paper introduces the principal feature analysis as a novel multivariate feature selection method for fMRI data processing. This multivariate approach aims to remove features with redundant information, thereby selecting fewer features, while retaining the most information.
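One published formulation of principal feature analysis clusters the rows of the PCA loading matrix and keeps the feature closest to each cluster centre; whether the paper's variant matches this exactly is an assumption, and the data below are synthetic.

```python
# Principal feature analysis sketch: pick representative features via PCA loadings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_features, n_components=None):
    n_components = n_components or n_features
    pca = PCA(n_components=n_components).fit(X)
    loadings = pca.components_.T                    # one row per original feature
    km = KMeans(n_clusters=n_features, n_init=10, random_state=0).fit(loadings)
    selected = []
    for c in range(n_features):
        rows = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(loadings[rows] - km.cluster_centers_[c], axis=1)
        selected.append(rows[np.argmin(d)])         # most representative feature per cluster
    return sorted(selected)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 10] = X[:, 5] + 0.01 * rng.normal(size=200)    # a redundant copy of feature 5
print(principal_feature_analysis(X, n_features=8))
```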

Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz
Source: Public Library of Science | Publisher: Public Library of Science
Type: Journal article
Published 08/05/2014 | Language: EN
Search relevance: 45.72%
The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence...
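A minimal linear SFA sketch, for reference: whiten the signal, then keep the directions along which the finite-difference derivative has least variance. The retinal-wave model is not reproduced here; the input is a toy mixture with one slow latent signal.

```python
# Linear slow feature analysis on a toy mixed signal.
import numpy as np

def linear_sfa(x, n_slow=1):
    """x: (T, d) signal. Returns the n_slow slowest linear output signals."""
    x = x - x.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (E / np.sqrt(d))                         # whiten the signal
    dcov = np.cov(np.diff(z, axis=0), rowvar=False)  # covariance of the "derivative"
    _, slow_vecs = np.linalg.eigh(dcov)
    return z @ slow_vecs[:, :n_slow]                 # smallest derivative variance first

rng = np.random.default_rng(0)
T = 5000
t = np.linspace(0, 4 * np.pi, T)
sources = np.column_stack([np.sin(t),                # the slow latent signal
                           np.sin(40 * t),
                           rng.normal(size=T)])
x = sources @ rng.normal(size=(3, 3)).T              # random linear mixture
y = linear_sfa(x, n_slow=1)
print("|corr| with slow source:", abs(np.corrcoef(y[:, 0], sources[:, 0])[0, 1]))
```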

Modeling place field activity with hierarchical slow feature analysis

Schönfeld, Fabian; Wiskott, Laurenz
Source: Frontiers Media S.A. | Publisher: Frontiers Media S.A.
Type: Journal article
Published 22/05/2015 | Language: EN
Search relevance: 45.72%
What are the computational laws of hippocampal activity? In this paper we argue for the slowness principle as a fundamental processing paradigm behind hippocampal place cell firing. We present six different studies from the experimental literature, performed with real-life rats, that we replicated in computer simulations. Each of the chosen studies allows rodents to develop stable place fields and then examines a distinct property of the established spatial encoding: adaptation to cue relocation and removal; directional dependent firing in the linear track and open field; and morphing and scaling the environment itself. Simulations are based on a hierarchical Slow Feature Analysis (SFA) network topped by an independent component analysis (ICA) output layer. The slowness principle is shown to account for the main findings of the presented experimental studies. The SFA network generates its responses using raw visual input only, which adds to its biological plausibility but requires experiments performed in light conditions. Future iterations of the model will thus have to incorporate additional information, such as path integration and grid cell activity, in order to be able to also replicate studies that take place during darkness.

Automatic Heart Sound Analysis for Cardiovascular Disease Assessment

Kumar, Dinesh
Source: Universidade de Coimbra | Publisher: Universidade de Coimbra
Type: Doctoral thesis
Language: ENG
Search relevance: 45.78%
Cardiovascular diseases (CVDs) are the deadliest diseases worldwide, ahead of diabetes and cancer. Because CVDs are linked to ageing, the population above 65 years is particularly prone to them; hence a new trend in healthcare is emerging that focuses on preventive care, in order to reduce the number of hospital visits and to enable home care. Auscultation has been one of the oldest and cheapest techniques for examining the heart. Furthermore, recent advances in digital stethoscope technology are renewing interest in auscultation as a diagnostic tool, namely for applications in the home-care context. Computer-based auscultation opens new possibilities in health management by enabling assessment of the mechanical status of the heart using inexpensive and non-invasive methods. Computer-based heart sound analysis techniques help physicians diagnose many cardiac disorders, such as valvular dysfunction and congestive heart failure, as well as measure several cardiovascular parameters such as pulmonary arterial pressure, systolic time intervals, contractility, stroke volume, etc. In this research work, we address the problem of extracting a diagnosis through analysis of the heart sounds. Heart sound analysis consists of three main tasks: i) identification of the non-cardiac sounds that are unavoidably mixed with the heart sound during auscultation; ii) segmentation of the heart sound in order to localize the main sound components; and finally...
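A generic envelope-plus-peak-picking sketch for the segmentation step listed above (locating the loud S1/S2 components); this is a textbook-style approach and not necessarily the thesis's algorithm. The phonocardiogram below is synthetic.

```python
# Locate loud heart-sound components via band-pass filtering, an energy envelope,
# and peak picking (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_sound_peaks(pcg, fs, lowcut=25.0, highcut=150.0):
    b, a = butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    energy = filtered ** 2
    win = int(0.05 * fs)                                   # 50 ms smoothing window
    envelope = np.convolve(energy, np.ones(win) / win, mode="same")
    # S1 and S2 are at least ~200 ms apart at rest; the height threshold is heuristic.
    peaks, _ = find_peaks(envelope, distance=int(0.2 * fs),
                          height=0.3 * envelope.max())
    return peaks / fs                                      # peak times in seconds

fs = 2000
t = np.arange(0, 5, 1 / fs)
pcg = 0.02 * np.random.default_rng(0).normal(size=t.size)
for beat in np.arange(0.5, 5, 0.8):                        # one beat every 0.8 s
    for comp in (0.0, 0.3):                                # crude S1 and S2 bursts
        idx = (t > beat + comp) & (t < beat + comp + 0.06)
        pcg[idx] += np.sin(2 * np.pi * 60 * t[idx]) * np.hanning(idx.sum())
print(np.round(heart_sound_peaks(pcg, fs), 2))
```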

Spectral-Feature-Based Analysis of Reflectance and Emission Spectral Libraries and Imaging Spectrometer Data

Kruse, Fred A.
Source: Naval Postgraduate School | Publisher: Naval Postgraduate School
Type: Journal article
Search relevance: 35.88%
This research demonstrates the application of spectral-feature-based analysis to identifying and mapping Earth-surface materials using spectral libraries and imaging spectrometer data. Feature extraction utilizing a continuum-removal and local minimum detection approach was tested for analysis of both reflectance and emissivity spectral libraries by extracting and characterizing spectral features of rocks, soils, minerals, and man-made materials. Library-derived information was then used to illustrate both reflectance- and emissivity-feature-based spectral mapping using imaging spectrometer data (AVIRIS and SEBASS). An additional spectral library of emission spectra from selected nocturnal lighting types was used to develop a database of key spectral features that allowed mapping and characterization of night lights from ProSpecTIR-VS imaging spectrometer data. Results from these case histories demonstrate that the spectral-feature-based approach can be used with either reflectance or emission spectra and applied to a wide variety of imaging spectrometer data types for extraction of key surface composition information.
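A small illustration of the continuum-removal and local-minimum idea: fit the upper convex hull of a spectrum, divide it out, and report where the continuum-removed curve dips. The spectrum is synthetic and the thresholds are arbitrary.

```python
# Continuum removal (upper convex hull) plus local-minimum detection on a spectrum.
import numpy as np
from scipy.signal import argrelmin

def continuum_removed(wavelength, reflectance):
    """Divide the spectrum by its upper convex hull (the 'continuum')."""
    hull_idx = [0]
    for i in range(1, len(wavelength)):
        while len(hull_idx) >= 2:
            x1, x2 = wavelength[hull_idx[-2]], wavelength[hull_idx[-1]]
            y1, y2 = reflectance[hull_idx[-2]], reflectance[hull_idx[-1]]
            slope = (y2 - y1) / (x2 - x1)
            # keep the last hull point only if point i lies below the extended line
            if y1 + slope * (wavelength[i] - x1) >= reflectance[i]:
                break
            hull_idx.pop()
        hull_idx.append(i)
    continuum = np.interp(wavelength, wavelength[hull_idx], reflectance[hull_idx])
    return reflectance / continuum

wl = np.linspace(400, 2500, 500)                              # nm
spectrum = 0.6 + 0.0001 * (wl - 400)                          # gently sloping background
spectrum -= 0.2 * np.exp(-((wl - 2200) ** 2) / (2 * 30 ** 2))  # one absorption feature
cr = continuum_removed(wl, spectrum)
minima = argrelmin(cr, order=5)[0]
print("feature wavelengths (nm):", wl[minima][cr[minima] < 0.98])
```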

Classification and feature extraction in man and machine; Klassifikation und Merkmalsextraktion in Mensch und Maschine

Graf, Arnulf B. A.
Source: Universität Tübingen | Publisher: Universität Tübingen
Type: Dissertation; info:eu-repo/semantics/doctoralThesis
Language: DE_DE
Search relevance: 35.89%
This dissertation deals with the mechanisms humans use to generate features from visual stimuli and subsequently classify them. An experimental method is developed that combines human psychophysics with machine learning. At the centre of the work is a gender-classification experiment carried out with the head database of the Max Planck Institute. For this purpose, various low-dimensional features are extracted from the face images. Classification on these features is modelled by a separating hyperplane between the two classes. The subjects' responses are compared with, and correlated against, the distance of the features from the separating hyperplane. The work demonstrates that machine learning is a new and effective algorithmic tool for gaining insight into human cognitive processes. A first psychophysical classification experiment shows that a high error rate and low confidence of the subjects correspond to longer processing of the information in the brain. A second classification experiment on the same stimuli, but in a different order, confirms the consistency of the subjects' responses and the reproducibility of the following results. It is shown...
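A compact sketch of the pipeline the abstract describes, with synthetic stand-ins throughout: low-dimensional features from face images (PCA here), a separating hyperplane (linear SVM), and a correlation between each stimulus's decision value and a hypothetical behavioural measure.

```python
# Correlate distance-to-hyperplane with a (hypothetical) behavioural measure.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 1024))            # stand-in for vectorized face images
gender = rng.integers(0, 2, size=200)
faces[gender == 1] += 0.15                      # make the two classes separable

features = PCA(n_components=20).fit_transform(faces)
svm = LinearSVC(C=1.0, max_iter=10000).fit(features, gender)
decision = svm.decision_function(features)      # signed value, proportional to distance

# hypothetical behavioural measure: faster responses to "easy" stimuli
reaction_time = 600 - 40 * np.abs(decision) + rng.normal(0, 20, size=200)
r, p = pearsonr(np.abs(decision), reaction_time)
print(f"correlation |decision value| vs. reaction time: r={r:.2f}, p={p:.3g}")
```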

Prediction of Protein Modification Sites of Pyrrolidone Carboxylic Acid Using mRMR Feature Selection and Analysis

Zheng, Lu-Lu; Niu, Shen; Hao, Pei; Feng, KaiYan; Cai, Yu-Dong; Li, Yixue
Source: Public Library of Science | Publisher: Public Library of Science
Type: Journal article
Published 09/12/2011 | Language: EN
Search relevance: 35.9%
Pyrrolidone carboxylic acid (PCA) is formed during a common post-translational modification (PTM) of extracellular and multi-pass membrane proteins. In this study, we developed a new predictor to predict the modification sites of PCA based on maximum relevance minimum redundancy (mRMR) and incremental feature selection (IFS). We incorporated 727 features belonging to 7 kinds of protein properties to predict the modification sites, including sequence conservation, residual disorder, amino acid factor, secondary structure and solvent accessibility, gain/loss of amino acid during evolution, propensity of amino acid to be conserved at protein-protein interface and protein surface, and deviation of side chain carbon atom number. Among these 727 features, 244 were selected by mRMR and IFS as the optimized features for the prediction, with which the prediction model achieved a maximum MCC of 0.7812. Feature analysis showed that all feature types contributed to the modification process. Further site-specific feature analysis showed that the features derived from PCA's surrounding sites contributed more to the determination of PCA sites than other sites. The detailed feature analysis in this paper might provide important clues for understanding the mechanism of PCA formation and guide relevant experimental validations.
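A simplified version of the mRMR-plus-IFS loop sketched above: relevance is mutual information with the class, redundancy is approximated by absolute correlation (the paper uses mutual information for both), and a nearest-neighbour classifier stands in for the paper's predictor.

```python
# mRMR-style ranking followed by incremental feature selection scored with MCC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

# mRMR-style ranking: maximise relevance, penalise redundancy
relevance = mutual_info_classif(X, y, random_state=0)
corr = np.abs(np.corrcoef(X, rowvar=False))
ranked = [int(np.argmax(relevance))]
while len(ranked) < X.shape[1]:
    remaining = [j for j in range(X.shape[1]) if j not in ranked]
    score = [relevance[j] - corr[j, ranked].mean() for j in remaining]
    ranked.append(remaining[int(np.argmax(score))])

# incremental feature selection: grow the set, keep the size with the best MCC
best_mcc, best_k = -1.0, 0
for k in range(1, 21):
    pred = cross_val_predict(KNeighborsClassifier(), X[:, ranked[:k]], y, cv=5)
    mcc = matthews_corrcoef(y, pred)
    if mcc > best_mcc:
        best_mcc, best_k = mcc, k
print(f"best MCC {best_mcc:.3f} with the top {best_k} ranked features")
```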

Linked Component Analysis from Matrices to High Order Tensors: Applications to Biomedical Data

Zhou, Guoxu; Zhao, Qibin; Zhang, Yu; Adalı, Tülay; Xie, Shengli; Cichocki, Andrzej
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 29/08/2015
Search relevance: 45.64%
With the increasing availability of various sensor technologies, we now have access to large amounts of multi-block (also called multi-set, multi-relational, or multi-view) data that need to be jointly analyzed to explore their latent connections. Various component analysis methods have played an increasingly important role for the analysis of such coupled data. In this paper, we first provide a brief review of existing matrix-based (two-way) component analysis methods for the joint analysis of such data with a focus on biomedical applications. Then, we discuss their important extensions and generalization to multi-block multiway (tensor) data. We show how constrained multi-block tensor decomposition methods are able to extract similar or statistically dependent common features that are shared by all blocks, by incorporating the multiway nature of data. Special emphasis is given to the flexible common and individual feature analysis of multi-block data with the aim to simultaneously extract common and individual latent components with desired properties and types of diversity. Illustrative examples are given to demonstrate their effectiveness for biomedical data analysis.; Comment: 20 pages, 11 figures, Proceedings of the IEEE, 2015

Robot Navigation using Reinforcement Learning and Slow Feature Analysis

Böhmer, Wendelin
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 04/05/2012
Search relevance: 45.69%
The application of reinforcement learning algorithms to real-life problems always bears the challenge of filtering the environmental state out of raw sensor readings. While most approaches use heuristics, biology suggests that there must exist an unsupervised method to construct such filters automatically. Besides extracting environmental states, the filters have to represent them in a fashion that supports modern reinforcement learning algorithms. Many popular algorithms use a linear architecture, so one should aim for filters that have good approximation properties in combination with linear functions. This thesis proposes the unsupervised method slow feature analysis (SFA) for this task. Presented with a random sequence of sensor readings, SFA learns a set of filters. With growing model complexity and number of training examples, the filters converge to trigonometric polynomials. These are known to possess excellent approximation capabilities and should therefore support the reinforcement algorithms well. We evaluate this claim on a robot. The task is to learn a navigational control in a simple environment using the least-squares policy iteration (LSPI) algorithm. The only accessible sensor is a head-mounted video camera...
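A bare-bones LSPI loop on a toy chain MDP, to show the LSTD-Q step the thesis builds on; simple polynomial state features stand in for the SFA filters learned from camera images, and nothing here reproduces the robot setup.

```python
# Least-squares policy iteration (LSTD-Q + greedy improvement) on a toy chain.
import numpy as np

N_STATES = 10
ACTIONS = (-1, +1)
GAMMA = 0.95
F_DIM = 4
rng = np.random.default_rng(0)

def state_features(s):
    x = s / (N_STATES - 1)
    return np.array([1.0, x, x ** 2, x ** 3])        # stand-in for SFA outputs

def phi(s, a):
    """Block feature vector: state features placed in the slot of action a."""
    f = np.zeros(len(ACTIONS) * F_DIM)
    i = ACTIONS.index(a) * F_DIM
    f[i:i + F_DIM] = state_features(s)
    return f

def step(s, a):
    s2 = int(np.clip(s + a, 0, N_STATES - 1))
    return s2, float(s2 == N_STATES - 1)              # reward only at the right end

# collect a dataset of (s, a, r, s') transitions under a random policy
data, s = [], 0
for _ in range(5000):
    a = ACTIONS[rng.integers(2)]
    s2, r = step(s, a)
    data.append((s, a, r, s2))
    s = 0 if s2 == N_STATES - 1 else s2

w = np.zeros(len(ACTIONS) * F_DIM)
for _ in range(10):                                   # LSPI sweeps
    A = np.zeros((w.size, w.size))
    b = np.zeros(w.size)
    for s, a, r, s2 in data:
        a2 = max(ACTIONS, key=lambda u: phi(s2, u) @ w)   # greedy w.r.t. current w
        f = phi(s, a)
        A += np.outer(f, f - GAMMA * phi(s2, a2))     # LSTD-Q accumulation
        b += f * r
    w = np.linalg.solve(A + 1e-6 * np.eye(w.size), b)

policy = [max(ACTIONS, key=lambda u: phi(s, u) @ w) for s in range(N_STATES)]
print(policy)                                         # should mostly be +1 (move right)
```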

Slow and Steady Feature Analysis: Higher Order Temporal Coherence in Video

Jayaraman, Dinesh; Grauman, Kristen
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 15/06/2015
Search relevance: 45.9%
Learned image representations constitute the current state-of-the-art for visual recognition, yet they notoriously require large amounts of human-labeled data to learn effectively. Unlabeled video data has the potential to reduce this cost, if learning algorithms can exploit the frames' temporal coherence as a weak (but free) form of supervision. Existing methods perform "slow" feature analysis, encouraging the image representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture how the visual content changes. We propose to generalize slow feature analysis to "steady" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer that minimizes a contrastive loss on tuples of sequential frames from unlabeled video. Focusing on the case of triplets of frames, the proposed method encourages that feature changes over time should be smooth, i.e., similar to the most recent changes. Using five diverse image and video datasets, including unlabeled YouTube and KITTI videos...
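A tiny numpy illustration of the "slow versus steady" distinction: the slowness penalty is the squared first difference of a feature trajectory, the steadiness penalty the squared second difference over frame triplets. A real model would train a CNN with a contrastive form of this loss; here the trajectories are simply given.

```python
# Slowness (first difference) and steadiness (second difference) penalties.
import numpy as np

def slow_and_steady_penalties(z):
    """z: (T, d) features of T consecutive frames."""
    first = np.diff(z, axis=0)                   # slowness: f(t+1) - f(t)
    second = np.diff(z, n=2, axis=0)             # steadiness: change of the change
    return (first ** 2).sum(axis=1).mean(), (second ** 2).sum(axis=1).mean()

t = np.linspace(0, 1, 100)[:, None]
smooth = np.hstack([t, t ** 2])                  # steadily drifting features
jittery = smooth + 0.05 * np.random.default_rng(0).normal(size=smooth.shape)
for name, z in [("smooth", smooth), ("jittery", jittery)]:
    slow, steady = slow_and_steady_penalties(z)
    print(f"{name:8s} slowness={slow:.5f} steadiness={steady:.5f}")
```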

Estimating Driving Forces of Nonstationary Time Series with Slow Feature Analysis

Wiskott, Laurenz
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 12/12/2003
Search relevance: 45.69%
Slow feature analysis (SFA) is a new technique for extracting slowly varying features from a quickly varying signal. It is shown here that SFA can be applied to nonstationary time series to estimate a single underlying driving force with high accuracy up to a constant offset and a factor. Examples with a tent map and a logistic map illustrate the performance.; Comment: 8 pages, 4 figures
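A self-contained toy version in the spirit of that example: a logistic map whose parameter is modulated by a slow driving force, followed by linear SFA (the same whitening-plus-derivative recipe as in the sketch further up) on a polynomially expanded time-delay embedding. The embedding size and expansion degree are guesses, not the paper's settings.

```python
# Recover a slow driving force of a chaotic map with SFA on an expanded embedding.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
T = 6000
gamma = np.sin(2 * np.pi * np.arange(T) / T)              # slowly varying driving force
x = np.empty(T)
x[0] = 0.4
for t in range(T - 1):                                     # driven logistic map
    x[t + 1] = (3.7 + 0.1 * gamma[t]) * x[t] * (1 - x[t])

def embed(series, m):
    return np.column_stack([series[i:len(series) - m + i] for i in range(m)])

def poly_expand(Z, degree=3):
    cols = []
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(Z.shape[1]), d):
            cols.append(np.prod(Z[:, list(idx)], axis=1))
    return np.column_stack(cols)

def slowest_component(F):
    F = F - F.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(F, rowvar=False))
    keep = d > 1e-7 * d.max()                              # guard against collinear features
    z = F @ (E[:, keep] / np.sqrt(d[keep]))                # whiten
    _, V = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ V[:, 0]                                     # direction of least change

y = slowest_component(poly_expand(embed(x, m=5)))
print("|corr| with driving force:", abs(np.corrcoef(y, gamma[:len(y)])[0, 1]))
```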

Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams

Kompella, Varun Raj; Luciw, Matthew; Schmidhuber, Juergen
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 09/12/2011
Search relevance: 45.74%
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails.

Predictable Feature Analysis

Richthofer, Stefan; Wiskott, Laurenz
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 11/11/2013
Search relevance: 45.79%
Every organism in an environment, whether biological, robotic or virtual, must be able to predict certain aspects of its environment in order to survive or perform whatever task is intended. It needs a model that is capable of estimating the consequences of possible actions, so that planning, control, and decision-making become feasible. For scientific purposes, such models are usually created in a problem-specific manner using differential equations and other techniques from control and systems theory. In contrast, we aim for an unsupervised approach that builds up the desired model in a self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach is to extract sub-signals from the input that behave as predictably as possible. These "predictable features" are highly relevant for modeling, because predictability is by definition a desired property of the needed consequence-estimating model. In our approach, we measure predictability with respect to a certain prediction model. We focus here on the solution of the arising optimization problem and present a tractable algorithm based on algebraic methods, which we call Predictable Feature Analysis (PFA). We prove that the algorithm finds the globally optimal signal...

FATS: Feature Analysis for Time Series

Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Zhu, Ming; Dave, Rahul; Castro, Nicolas; Pichara, Karim
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Search relevance: 45.83%
In this paper, we present the FATS (Feature Analysis for Time Series) library. FATS is a Python library which facilitates and standardizes feature extraction for time series data. In particular, we focus on one application: feature extraction for astronomical light curve data, although the library is generalizable for other uses. We detail the methods and features implemented for light curve analysis, and present examples for its usage.
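A hand-rolled miniature of what a library like FATS standardizes: a few classic light-curve statistics computed from (time, magnitude, error) arrays. The feature choices are illustrative only; consult the FATS documentation for its actual API and full feature list.

```python
# A few classic light-curve statistics (illustrative, not the FATS API).
import numpy as np
from scipy.stats import skew

def light_curve_features(time, mag, err):
    amplitude = (np.percentile(mag, 95) - np.percentile(mag, 5)) / 2
    weighted_mean = np.average(mag, weights=1 / err ** 2)
    return {
        "Mean": mag.mean(),
        "Std": mag.std(),
        "Amplitude": amplitude,
        "Skew": skew(mag),
        "Beyond1Std": np.mean(np.abs(mag - weighted_mean) > mag.std()),
        "MaxSlope": np.max(np.abs(np.diff(mag) / np.diff(time))),
    }

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 500))
m = 15 + 0.3 * np.sin(2 * np.pi * t / 7.3) + rng.normal(0, 0.05, t.size)
e = np.full(t.size, 0.05)
print(light_curve_features(t, m, e))
```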

A Convex Sparse PCA for Feature Analysis

Chang, Xiaojun; Nie, Feiping; Yang, Yi; Huang, Heng
Source: Cornell University | Publisher: Cornell University
Type: Journal article
Published 23/11/2014
Search relevance: 45.92%
Principal component analysis (PCA) has been widely applied to dimensionality reduction and data pre-processing in engineering, biology, and social science. Classical PCA and its variants seek linear projections of the original variables that yield a low-dimensional feature representation with maximal variance. One limitation is that the results of PCA are very difficult to interpret. In addition, classical PCA is vulnerable to certain kinds of noisy data. In this paper, we propose a convex sparse principal component analysis (CSPCA) algorithm and apply it to feature analysis. We first show that PCA can be formulated as a low-rank regression optimization problem. Based on this formulation, ℓ2,1-norm minimization is incorporated into the objective function to make the regression coefficients sparse and thereby robust to outliers. In addition, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which in turn gives the output good interpretability. With the output of our CSPCA, we can effectively analyze the importance of each feature under the PCA criteria. The objective function is convex, and we propose an iterative algorithm to optimize it. We apply the CSPCA algorithm to feature selection and conduct extensive experiments on six different benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms.
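The same ingredients in a few lines, though not the paper's algorithm: treat the leading principal-component scores as regression targets and impose an ℓ2,1 (row-sparsity) penalty, here via scikit-learn's MultiTaskLasso; features with the largest coefficient-row norms are the ones the sparse model deems important.

```python
# Row-sparse (l2,1-penalised) regression of PC scores as a feature-importance proxy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import MultiTaskLasso
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)
scores = PCA(n_components=2).fit_transform(X)              # targets of the regression

model = MultiTaskLasso(alpha=0.05, max_iter=5000).fit(X, scores)
row_norms = np.linalg.norm(model.coef_, axis=0)            # one importance per feature
for i in np.argsort(row_norms)[::-1]:
    print(f"{load_iris().feature_names[i]:25s} importance {row_norms[i]:.3f}")
```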

Fault analysis using state-of-the-art classifiers

Chandrashekar, Girish
Source: Rochester Institute of Technology | Publisher: Rochester Institute of Technology
Type: Doctoral thesis
Language: EN_US
Search relevance: 35.89%
Fault analysis is the detection and diagnosis of malfunction in machine operation or process control. Early fault analysis techniques were reserved for highly critical plants, such as those in the nuclear or chemical industries, where abnormal-event prevention is given the utmost importance. The techniques developed were the result of decades of technical research and of models based on extensive characterization of equipment behavior, and applying them requires in-depth knowledge of the system and expert analysis for the application at hand. Since machine learning algorithms depend on past process data to create a system model, a generic autonomous diagnostic system can be developed for use in common industrial setups. In this thesis, we look into some of the techniques used for fault detection and diagnosis with multi-class and one-class classifiers. First, we study feature selection techniques, and classifier performance is analyzed against the number of selected features. The aim of feature selection is to reduce the impact of irrelevant variables and to reduce the computational burden on the learning algorithm. The feature selection algorithms are introduced as a literature survey; only a few are implemented to obtain the results. Fault data from a Radio Frequency (RF) generator is used to perform fault detection and diagnosis. A comparison between continuous and discrete fault data is conducted for the Support Vector Machine (SVM) and Radial Basis Function (RBF) network classifiers. In the second part, we look into one-class classification techniques and their application to fault detection. One-class techniques were primarily developed to identify one class of objects from all other possible objects. Since all fault occurrences in a system cannot be simulated or recorded...
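A small sketch of the one-class idea discussed above: train a One-Class SVM on normal operating data only and flag departures as faults. The synthetic "sensor readings" below stand in for the thesis's RF-generator data.

```python
# One-class SVM fault detection trained on normal data only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
faulty = rng.normal(loc=[58.0, 0.9], scale=[2.0, 0.05], size=(20, 2))

scaler = StandardScaler().fit(normal)
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(
    scaler.transform(normal))

pred_normal = detector.predict(scaler.transform(normal))   # +1 = inlier, -1 = outlier
pred_faulty = detector.predict(scaler.transform(faulty))
print("false alarm rate on normal data:", np.mean(pred_normal == -1))
print("fraction of faults detected:", np.mean(pred_faulty == -1))
```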

Exploring Hidden Coherent Feature Groups and Temporal Semantics for Multimedia Big Data Analysis

Yang, Yimin
Source: FIU Digital Commons | Publisher: FIU Digital Commons
Type: text | Format: application/pdf
Search relevance: 45.92%
Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there has been an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. To meet such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (i.e....