Page 1 of results: 41224 digital items found in 0.028 seconds

Tabela Brasileira de Composição de Alimentos (TBCA-USP): atualização e inclusão de dados de vitaminas; Brazilian Food Composition Database (TBCA-USP): database enhancement and update on vitamins

Grande, Fernanda
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's thesis Format: application/pdf
Published on 23/04/2013 PT
Search relevance: 36.46%
The Brazilian Food Composition Database (TBCA-USP) aims to disseminate high-quality data on the chemical composition of foods, and its database is continuously updated with the inclusion of new foods and nutrients. Several studies have been carried out to quantify vitamins in Brazilian foods, but this information is scattered across many different publications. The objective of this work was to build a database of the vitamin A, C and E composition of Brazilian foods, for future release through TBCA-USP. In compiling the vitamin database, only data produced with validated methodologies for each compound were included: the vitamin A database comprises data on retinol and seven carotenoids (α-carotene, β-carotene, β-cryptoxanthin, lycopene, lutein, violaxanthin and zeaxanthin); the vitamin C database comprises data on ascorbic and dehydroascorbic acid; and the vitamin E database comprises data on all available tocopherols and tocotrienols. Total vitamin A content was calculated both as Retinol Equivalents (RE) and as Retinol Activity Equivalents (RAE); total vitamin C was calculated as the sum of ascorbic and dehydroascorbic acid; and...
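For reference, a minimal Python sketch of the totals described above, assuming the commonly used conversion factors (which the abstract does not state): RE counts β-carotene at 1/6 and other provitamin A carotenoids at 1/12 of retinol, RAE at 1/12 and 1/24; carotenoids without provitamin A activity (lycopene, lutein, violaxanthin, zeaxanthin) do not contribute.

```python
# Illustrative sketch only: the conversion factors are assumed (standard values),
# not taken from the abstract. Only provitamin A carotenoids enter the formulas.

def vitamin_a_re(retinol_ug, beta_carotene_ug, other_provit_a_ug):
    """Total vitamin A in Retinol Equivalents (µg RE)."""
    return retinol_ug + beta_carotene_ug / 6.0 + other_provit_a_ug / 12.0

def vitamin_a_rae(retinol_ug, beta_carotene_ug, other_provit_a_ug):
    """Total vitamin A in Retinol Activity Equivalents (µg RAE)."""
    return retinol_ug + beta_carotene_ug / 12.0 + other_provit_a_ug / 24.0

def vitamin_c_total(ascorbic_mg, dehydroascorbic_mg):
    """Total vitamin C as the sum of ascorbic and dehydroascorbic acid."""
    return ascorbic_mg + dehydroascorbic_mg

if __name__ == "__main__":
    # Hypothetical values per 100 g of an example food
    print(vitamin_a_re(50.0, 600.0, 120.0))   # 160.0 µg RE
    print(vitamin_a_rae(50.0, 600.0, 120.0))  # 105.0 µg RAE
    print(vitamin_c_total(30.0, 5.0))         # 35.0 mg
```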

Um novo processo para refatoração de bancos de dados.; A new process to database refactoring.

Domingues, Márcia Beatriz Pereira
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Doctoral thesis Format: application/pdf
Published on 15/05/2014 PT
Search relevance: 36.46%
Database design and maintenance is a major challenge, given the frequent requirement changes requested by users. To keep up with these changes, the database schema must undergo structural alterations that often harm performance and query design, such as unnecessary relationships, primary or foreign keys tightly coupled to the domain, obsolete attributes, and inappropriate attribute types. The literature on agile methods for software development proposes the use of refactorings to evolve the database schema when requirements change. A refactoring is a simple change that improves the design but does not alter the semantics of the data model or add new functionality. This thesis presents a new process for applying refactorings to the database schema. The process is defined as a set of tasks intended to execute refactorings in a controlled and safe way, making it possible to know the impact of each executed refactoring on database performance. The BPMN notation was used to represent and execute the process tasks. A relational database was used as a case study...
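A minimal sketch, not the process proposed in the thesis, of one classic schema refactoring (replacing a domain-coupled primary key with a surrogate key) executed so that the data's meaning is preserved and the performance impact of a representative query can be timed:

```python
# Illustrative database refactoring: introduce a surrogate key in place of a
# domain-coupled primary key, keeping the data semantics unchanged, then time
# a query that the change affects. Table and data are invented for the sketch.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (cust_tel TEXT PRIMARY KEY, name TEXT);
    INSERT INTO customer VALUES ('555-0100', 'Alice'), ('555-0101', 'Bob');
""")

# Refactoring: replace the telephone-number primary key with a surrogate key.
conn.executescript("""
    CREATE TABLE customer_new (
        cust_id INTEGER PRIMARY KEY AUTOINCREMENT,
        cust_tel TEXT UNIQUE,
        name TEXT
    );
    INSERT INTO customer_new (cust_tel, name) SELECT cust_tel, name FROM customer;
""")

# Measure the impact of the refactoring on a representative query.
start = time.perf_counter()
rows = conn.execute("SELECT name FROM customer_new WHERE cust_tel = '555-0101'").fetchall()
print(rows, f"{(time.perf_counter() - start) * 1e6:.1f} µs")
```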

Metodologia e aplicativo de banco de dados para o desenvolvimento virtual de produtos; Methodology and database application for virtual product development

Carniel, Denize Regina
Source: Universidade Federal do Rio Grande do Sul Publisher: Universidade Federal do Rio Grande do Sul
Type: Master's thesis Format: application/pdf
POR
Search relevance: 36.48%
This work proposes a methodology for virtual product development through the creation of a database application of individual components, using virtual reality technology (VRML) to visualize the final product. The following topics were investigated: the influence of the diffusion of information and communication technologies (ICTs) in the industrial sector; the product development process and its stages; virtual design and the computational resources currently used both in product conception and in prototyping; virtual reality technology and its application in several areas, especially industry; and the use of metadata standards for database modeling. The intervention process takes place in the following steps: analysis and selection of products for assembly; design of the products' individual components; structuring of the metadata; and development of the database and of the assembly methodology. The assembly methodology consists of two phases: in the first, the insertion points of the components (where the connection occurs) are defined and their coordinates are obtained in CAD software from the three-dimensional model...
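A minimal sketch, with hypothetical names, of how the component metadata described above (individual components, their insertion points and CAD coordinates) might be structured before being stored in the database:

```python
# Hypothetical data structures for the sketch; field names are not from the thesis.
from dataclasses import dataclass, field

@dataclass
class InsertionPoint:
    name: str            # e.g. "leg_socket_front_left"
    x: float             # coordinates taken from the 3D CAD model
    y: float
    z: float

@dataclass
class Component:
    part_id: str
    description: str
    vrml_file: str                                       # geometry used for VRML visualization
    insertion_points: list[InsertionPoint] = field(default_factory=list)

table_top = Component("P-001", "Table top", "models/table_top.wrl",
                      [InsertionPoint("leg_socket_front_left", 50.0, 25.0, 0.0)])
print(table_top.part_id, len(table_top.insertion_points))
```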

Estudo e construção de um sistema gerenciador de banco de dados dedutivo; Study and construction of a deductive database management system

Nardon, Fabiane Bizinella
Source: Universidade Federal do Rio Grande do Sul Publisher: Universidade Federal do Rio Grande do Sul
Type: Master's thesis Format: application/pdf
POR
Search relevance: 36.46%
This work presents the study and construction of a Deductive Database Management System. A Deductive Database (DDB) is a database that, in addition to its traditional part, that is, the information contained in the base relations, which is inserted explicitly, has a set of deductive rules that allow new information to be derived from the base relations. In this work, the shortcomings of the Datalog query language were identified and, in order to obtain a language that better meets some of the needs of real-world applications, extensions to Datalog were proposed, giving rise to the DEDALO language. Updates on deductive databases were also studied, and two problems were identified: the first concerns the need to propagate modifications of the base relations to materialized derived relations; the second concerns updates on derived relations, which must be translated into updates on the base relations so that the intended update becomes visible in the derived relation. For the first problem, propagation methods were studied, analyzed and implemented. For the second...
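A minimal sketch of the deductive-database idea discussed above, not of DEDALO itself: a base relation plus a recursive Datalog-style rule evaluated to a fixpoint, and a naive re-materialization when the base relation is updated:

```python
# Illustrative sketch: base relation edge/2 and a derived relation reachable/2.
def derive_reachable(edges):
    """reachable(X,Y) :- edge(X,Y).
       reachable(X,Y) :- edge(X,Z), reachable(Z,Y)."""
    reachable = set(edges)
    changed = True
    while changed:                      # naive fixpoint evaluation
        changed = False
        for (x, z) in edges:
            for (z2, y) in list(reachable):
                if z == z2 and (x, y) not in reachable:
                    reachable.add((x, y))
                    changed = True
    return reachable

edge = {("a", "b"), ("b", "c")}
materialized = derive_reachable(edge)
print(sorted(materialized))             # [('a', 'b'), ('a', 'c'), ('b', 'c')]

# Propagating an update on the base relation, here simply by naive
# re-materialization of the derived relation.
edge.add(("c", "d"))
materialized = derive_reachable(edge)
print(("a", "d") in materialized)       # True
```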

Projeto evolutivo de bases de dados : uma abordagem iterativa e incremental usando modularização de bases de dados; Evolutionary database design : an iterative and incremental approach using database modularization

Gustavo Bartz Guedes
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's thesis Format: application/pdf
Published on 11/02/2014 PT
Search relevance: 36.48%
Software systems evolve over time due to new requirements or changes to existing ones. Changes are even more frequent in iterative and incremental software development methods, such as agile methods, which assume the continuous delivery of working software modules. Agile methods, such as Scrum and Extreme Programming, are based on project management aspects and on coding techniques. However, requirement changes will likely be reflected in the database schema, which must be altered to support them. When the system is in production, changes to the database schema are costly, because the semantics of the data with respect to the application must be preserved. This master's dissertation therefore presents the evolutionary database modularization process, an approach for designing the database iteratively and incrementally. Modularization is performed at the conceptual design stage and increases the abstraction capacity of the generated data schema, facilitating future evolutions. Finally, a tool named Evolutio DB Designer was developed to automate the evolutionary database modularization process. The tool allows the database schema to be modularized and the relational schema to be generated automatically from the database modules.
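A minimal, hypothetical sketch of the general idea of generating a relational schema from database modules; the actual rules applied by Evolutio DB Designer are not described in the abstract:

```python
# Hypothetical module definitions; names and structure are invented for the sketch.
MODULES = {
    "customer": {"entities": {"Customer": ["id INTEGER PRIMARY KEY", "name TEXT"]}},
    "billing":  {"entities": {"Invoice":  ["id INTEGER PRIMARY KEY",
                                           "customer_id INTEGER REFERENCES Customer(id)",
                                           "total REAL"]}},
}

def module_to_ddl(module):
    """Turn one module of the conceptual design into CREATE TABLE statements."""
    ddl = []
    for entity, columns in module["entities"].items():
        ddl.append(f"CREATE TABLE {entity} (\n    " + ",\n    ".join(columns) + "\n);")
    return ddl

for name, module in MODULES.items():    # each iteration/increment adds modules
    print(f"-- module: {name}")
    print("\n".join(module_to_ddl(module)))
```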

Database marketing como ferramenta de suporte às políticas de CRM na área da distribuição alimentar; Database marketing as a support tool for CRM policies in the food retail sector

Girão, Pedro Miguel Dinis
Source: Universidade de Évora Publisher: Universidade de Évora
Type: Master's thesis
POR
Search relevance: 36.52%
These days, it is of crucial importance for organizations to focus their organizational knowledge, and hence their core competencies, on the critical success factors of the business in which they operate. The Database Marketing process has been increasingly used to support decision making in many organizations in our society; indeed, it can be applied in a wide variety of contexts. The purpose of this work is to describe the importance of Database Marketing in building organizational knowledge in the food retail sector, particularly in supporting relationship marketing policies. Within the scope of this work, the application of a Database Marketing process in the food retail sector is presented. The knowledge extraction work, based on descriptive statistics implemented with Structured Query Language techniques, showed that these techniques make it possible to extract information and knowledge capable of feeding both strategic and operational processes in the food retail sector. In this way, organizations can implement Database Marketing processes with few resources, without having to resort to large investments...
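A minimal sketch, over a hypothetical sales table, of the kind of descriptive statistics in SQL that the abstract refers to, producing per-customer aggregates that could feed CRM decisions:

```python
# Illustrative only: table, columns and data are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (customer_id INTEGER, sale_date TEXT, amount REAL);
    INSERT INTO sales VALUES
        (1, '2024-01-05', 35.20), (1, '2024-02-11', 12.90),
        (2, '2024-01-20', 80.00), (2, '2024-03-02', 55.10), (2, '2024-03-09', 9.99);
""")

# Purchase frequency, monetary value, average basket and recency per customer.
query = """
    SELECT customer_id,
           COUNT(*)             AS n_purchases,
           ROUND(SUM(amount),2) AS total_spent,
           ROUND(AVG(amount),2) AS avg_basket,
           MAX(sale_date)       AS last_purchase
    FROM sales
    GROUP BY customer_id
    ORDER BY total_spent DESC
"""
for row in conn.execute(query):
    print(row)
```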

ASGARD: an open-access database of annotated transcriptomes for emerging model arthropod species

Zeng, Victor; Extavour, Cassandra G.
Source: Oxford University Press Publisher: Oxford University Press
Type: Journal article
Published on 22/11/2012 EN
Search relevance: 36.46%
The increased throughput and decreased cost of next-generation sequencing (NGS) have shifted the bottleneck in genomic research from sequencing to annotation, analysis and accessibility. This is particularly challenging for research communities working on organisms that lack the basic infrastructure of a sequenced genome, or an efficient way to utilize whatever sequence data may be available. Here we present a new database, the Assembled Searchable Giant Arthropod Read Database (ASGARD). This database is a repository and search engine for transcriptomic data from arthropods that are of high interest to multiple research communities but currently lack sequenced genomes. We demonstrate the functionality and utility of ASGARD using de novo assembled transcriptomes from the milkweed bug Oncopeltus fasciatus, the cricket Gryllus bimaculatus and the amphipod crustacean Parhyale hawaiensis. We have annotated these transcriptomes to assign putative orthology, coding region determination, protein domain identification and Gene Ontology (GO) term annotation to all possible assembly products. ASGARD allows users to search all assemblies by orthology annotation, GO term annotation or Basic Local Alignment Search Tool. User-friendly features of ASGARD include search term auto-completion suggestions based on database content...

Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal
Source: Oxford University Press Publisher: Oxford University Press
Type: Journal article
Published on 01/10/2013 EN
Search relevance: 36.48%
We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client...
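A minimal, hypothetical illustration of what computing intersections between regulatory features and mapping them to genes means in SQL terms; the Nencki Genomics Database exposes this through its own pre-computed tables and procedures, whose names are not given here:

```python
# Illustrative schema and data only; not the Nencki Genomics Database schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE regulatory_feature (feature_id TEXT, chrom TEXT, feat_start INTEGER, feat_end INTEGER);
    CREATE TABLE gene (gene_id TEXT, chrom TEXT, gene_start INTEGER, gene_end INTEGER);
    INSERT INTO regulatory_feature VALUES ('RF1', 'chr1', 1000, 1500), ('RF2', 'chr1', 9000, 9200);
    INSERT INTO gene VALUES ('G1', 'chr1', 1200, 5000), ('G2', 'chr2', 100, 900);
""")

# Two intervals on the same chromosome overlap when each starts before the other ends.
overlap_sql = """
    SELECT r.feature_id, g.gene_id
    FROM regulatory_feature r
    JOIN gene g
      ON g.chrom = r.chrom
     AND r.feat_start <= g.gene_end
     AND g.gene_start <= r.feat_end
"""
print(conn.execute(overlap_sql).fetchall())   # [('RF1', 'G1')]
```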

Design and implementation of a prototype PC based graphical and interactive MILSATCOM requirements database system

Major, William M.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral thesis Format: 126 p.
EN_US
Search relevance: 36.48%
Approved for public release; distribution is unlimited.; This thesis develops a prototype PC-based Military Satellite Communications (MILSATCOM) Requirements Database (MRDB) application for U.S. Space Command, using Microsoft's Access relational database management system (DBMS) for Windows. It demonstrates the advantages of using the proposed database system over the existing one and shows how U.S. Space Command can save both time and money by using a PC-based interactive, graphical, and user-friendly database system. A rapid prototyping approach, in concert with a six-phase database design process, was used to develop the prototype. The first two chapters of the thesis provide a background of the application and describe database management systems in general and Microsoft Access in particular. The applications of Access - tables, queries, forms, reports, macros, and modules - to the design of the MRDB are then discussed in the succeeding five chapters. The Conclusions describe the advantages and benefits of using the prototype MRDB database system and make recommendations for future improvements.; Captain, United States Air Force

The design and implementation of an object-oriented interface for the multimodel/multilingual database system

Moore, John William; Karlidere, Turgay
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral thesis Format: 75 p.
EN_US
Search relevance: 36.51%
Approved for public release; distribution is unlimited.; Database design in today's information-intensive environment challenges the database-system user to adhere to strict and somewhat archaic means, i.e., traditional data models and their data languages, of expressing their database applications. In light of these requirements, the user must purchase a new database system that supports the latest data model and its data language. We design and implement a comprehensive data-model-and-data-language interface which is a simple yet effective alternative to the costly and cumbersome standard method of purchasing or developing a new database system. Our solution is two-fold. First, we use the concept of a data-model-and-data-language interface to an existing database system. This not only eliminates the costs associated with building a separate, stand-alone database system to support each new data model and its language, but also allows for resource consolidation and the elimination of data duplication. Second, using the data-model-and-data-language interface concept, we design and implement an object-oriented data-model-and-data-language interface for the multimodel/multilingual database system.; Lieutenant, United States Navy; Lieutenant Junior Grade...

Techniques for multiple database integration

Whitaker, Barron D.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
ENG
Search relevance: 36.49%
Approved for public release; distribution is unlimited; There are several graphical client/server application development tools which can be used to easily develop powerful relational database applications. However, they do not provide a direct means of performing queries which require relational joins across multiple database boundaries. This thesis studies ways to access multiple databases. Specifically, it examines how a 'cross-database join' can be performed. A case study of techniques used to perform joins between an academic department's financial management system and course management system databases was done using PowerBuilder 5.0. Although we were able to perform joins across database boundaries, we found that PowerBuilder is not conducive to cross-database join access because no relational database engine is available to execute cross-database queries; http://archive.org/details/techniquesformul00whit; Major, United States Marine Corps
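A minimal sketch of a cross-database join using SQLite's ATTACH DATABASE, shown only to illustrate the concept studied in the thesis; it is not the PowerBuilder-based approach of the case study:

```python
# Two separate database files stand in for the financial and course systems.
import sqlite3, tempfile, os

tmp = tempfile.mkdtemp()
fin_path = os.path.join(tmp, "financial.db")
course_path = os.path.join(tmp, "courses.db")

with sqlite3.connect(fin_path) as fin:
    fin.execute("CREATE TABLE budget (dept TEXT, amount REAL)")
    fin.execute("INSERT INTO budget VALUES ('CS', 120000.0)")
with sqlite3.connect(course_path) as course:
    course.execute("CREATE TABLE course (dept TEXT, title TEXT)")
    course.execute("INSERT INTO course VALUES ('CS', 'Databases')")

# Open one database, attach the other, and join across the boundary.
conn = sqlite3.connect(fin_path)
conn.execute("ATTACH DATABASE ? AS coursedb", (course_path,))
rows = conn.execute("""
    SELECT c.dept, c.title, b.amount
    FROM coursedb.course AS c
    JOIN budget AS b ON b.dept = c.dept
""").fetchall()
print(rows)   # [('CS', 'Databases', 120000.0)]
```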

Design and implementation of a database for an integrated system for daily management in an industrial and commercial organization

Trigui, Noureddine
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral thesis Format: xviii, 129 p. : ill. (some col.)
Search relevance: 36.48%
Approved for public release; distribution is unlimited; The purpose of this research is to define a centralized database containing all the information needed for daily management in an industrial and commercial organization that is publicly owned and has legal personality and financial autonomy. The system is composed of three subsystems: "Human resource management", "Provisioning", and "Financial, budgetary and accounting management". The three subsystems are to be installed at a central site and at regional sites, each site having its own database. The central database is supplied with data coming from the other sites at the end of the day, or as needed, via modems; a tool for remote database queries therefore has to be developed to accomplish this work. The platform on which the application must run is IBM INFORMIX on top of the Windows operating system, and the database will be relational. The framework used for design and modeling consists of Object-Oriented Analysis (OOA), which enables the development of high-quality software by defining the problem structure, and the Delphi language, which provides a robust development environment. The solution will be deployed as follows: a client/server architecture with the object-oriented development tool Delphi; the database installed on the central and regional servers; the application installed on the end users' workstations; and data access through open ODBC. This software will present an integrated solution that will provide centralized and accurate data...
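A minimal, hypothetical sketch of the end-of-day consolidation described above, with illustrative table and column names (the thesis targets INFORMIX and Delphi, not Python):

```python
# Each regional database pushes its daily rows to the central database.
import sqlite3

def end_of_day_upload(regional_path, central_conn, site_code, day):
    """Copy one site's transactions for `day` into the central database."""
    with sqlite3.connect(regional_path) as regional:
        rows = regional.execute(
            "SELECT txn_id, txn_date, amount FROM transactions WHERE txn_date = ?",
            (day,)).fetchall()
    central_conn.executemany(
        "INSERT INTO central_transactions (site, txn_id, txn_date, amount) VALUES (?,?,?,?)",
        [(site_code, *row) for row in rows])
    central_conn.commit()

central = sqlite3.connect(":memory:")
central.execute("""CREATE TABLE central_transactions
                   (site TEXT, txn_id INTEGER, txn_date TEXT, amount REAL)""")

# Set up one toy regional database file and run the upload for it.
regional_path = "region_north.db"
with sqlite3.connect(regional_path) as r:
    r.execute("CREATE TABLE IF NOT EXISTS transactions (txn_id INTEGER, txn_date TEXT, amount REAL)")
    r.execute("DELETE FROM transactions")
    r.execute("INSERT INTO transactions VALUES (1, '2024-03-01', 250.0)")

end_of_day_upload(regional_path, central, "NORTH", "2024-03-01")
print(central.execute("SELECT * FROM central_transactions").fetchall())
```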

Development of a prototype database to support business process reengineering in the Department of Defense; Database to support DoD business process redesign

Kotheimer, William C.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral thesis Format: 164 p.; 28 cm.
EN_US
Search relevance: 36.48%
Approved for public release; distribution is unlimited.; This thesis describes the development of a database to support business process redesign in the Department of Defense (DOD). Business process redesign is rapidly becoming an important part of DOD's Corporate Information Management (CIM) initiatives. DOD is changing the way it does business in order to meet its commitments with fewer resources. In describing the development of a database to support business process redesign, this thesis reveals insights into the methods and practices that are changing the way business is practiced. The challenge encountered in this project is that the process of business process redesign in DOD is being developed concurrently with the database; in effect, the database is built to support a process that is itself not fully understood. It was found that sufficient information on business process redesign existed and could be quantified in such a manner as to be made available in database format. The development of a prototype database progressed to a stage where it could be implemented. The next step is to build a fully functional model of the database in order to evaluate its role in supporting business process redesign.; Lieutenant, United States Navy

Automatic Identification of Protein Characterization Articles in support of Database Curation

Denroche, Robert
Source: Queen's University Publisher: Queen's University
Type: Doctoral thesis Format: 1143268 bytes; application/pdf
EN; EN
Search relevance: 36.48%
Experimentally determining the biological function of a protein is a process known as protein characterization. Establishing the role a specific protein plays is a vital step toward fully understanding the biochemical processes that drive life in all its forms. In order for researchers to efficiently locate and benefit from the results of protein characterization experiments, the relevant information is compiled into public databases. To populate such databases, curators, who are experts in the biomedical domain, must search the literature to obtain the relevant information, as the experiment results are typically published in scientific journals. The database curators identify relevant journal articles, read them, and then extract the required information into the database. In recent years the rate of biomedical research has greatly increased, and database curators are unable to keep pace with the number of articles being published. Consequently, maintaining an up-to-date database of characterized proteins, let alone populating a new database, has become a daunting task. In this thesis, we report our work to reduce the effort required from database curators in order to create and maintain a database of characterized proteins. We describe a system we have designed for automatically identifying relevant articles that discuss the results of protein characterization experiments. Classifiers are trained and tested using a large dataset of abstracts...
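A minimal sketch of the general technique described above, training a classifier on abstracts to flag articles relevant to protein characterization; it is not the authors' system, and the tiny labelled set here is invented:

```python
# Text classification for curation triage: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "We characterized the enzymatic activity of the purified protein in vitro.",
    "Site-directed mutagenesis reveals the catalytic residues of the kinase.",
    "A new museum exhibit explores the history of computing hardware.",
    "Survey results on user satisfaction with library services are reported.",
]
labels = [1, 1, 0, 0]   # 1 = relevant to protein characterization, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

new_abstract = ["We report the substrate specificity of the recombinant protease."]
print(model.predict(new_abstract))          # predicted relevance label
print(model.predict_proba(new_abstract))    # class probabilities for curator triage
```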

An Object-Based Architecture for Biomedical Expert Database Systems

Barsalou, Thierry
Source: PubMed Publisher: PubMed
Type: Journal article
Published on 09/11/1988 EN
Search relevance: 36.48%
Objects play a major role in both database and artificial intelligence research. In this paper, we present a novel architecture for expert database systems that introduces an object-based interface between relational databases and expert systems. We exploit a semantic model of the database structure to map relations automatically into object templates, where each template can be a complex combination of join and projection operations. Moreover, we arrange the templates into object networks that represent different views of the same database. Separate processes instantiate those templates using data from the base relations, cache the resulting instances in main memory, navigate through a given network's objects, and update the database according to changes made at the object layer. In the context of an immunologic-research application, we demonstrate the capabilities of a prototype implementation of the architecture. The resulting model provides enhanced tools for database structuring and manipulation. In addition, this architecture supports efficient bidirectional communication between database and expert systems through the shared object layer.
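A minimal sketch of the template idea described above: an object template defined as a join plus a projection over base relations, instantiated into cached in-memory objects that a rule-based layer could then navigate; schema and names are illustrative:

```python
# Illustrative base relations; the immunologic-research schema is not given.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (patient_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE assay   (assay_id INTEGER, patient_id INTEGER, antibody TEXT, titer REAL);
    INSERT INTO patient VALUES (1, 'P. Doe');
    INSERT INTO assay   VALUES (10, 1, 'anti-HLA', 1.8), (11, 1, 'anti-DNA', 0.2);
""")

class ObjectTemplate:
    """A template = a join of base relations + a projection of selected columns."""
    def __init__(self, join_sql, columns):
        self.join_sql, self.columns = join_sql, columns

    def instantiate(self, conn):
        """Run the template's query and cache the results as plain objects (dicts)."""
        cur = conn.execute(f"SELECT {', '.join(self.columns)} FROM {self.join_sql}")
        return [dict(zip(self.columns, row)) for row in cur.fetchall()]

patient_assays = ObjectTemplate(
    "patient JOIN assay ON assay.patient_id = patient.patient_id",
    ["name", "antibody", "titer"])
cache = patient_assays.instantiate(conn)    # objects an expert system could navigate
print(cache)
```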

BUILDING A DATABASE FOR NANOMATERIAL EXPOSURE

He, Linchen
Source: Duke University Publisher: Duke University
Type: Master's project
Published on 23/04/2015 EN_US
Search relevance: 36.57%
Nanomaterials are materials with more advanced properties than conventional materials, and both scientists and engineers have a strong motivation to apply them in many areas. However, before they are widely applied, it is necessary to understand their toxicity to organisms. To date, a large number of studies have explored the toxicity of nanomaterials and have greatly helped people understand how nanomaterials affect organisms. However, progress in this field is slowing because it is becoming more difficult for researchers to search effectively for the information they need. Building a user-friendly database for nanomaterials and bioactivity is the main objective of this project, and it is also an effective way to address this problem by strengthening information dissemination in this field. Based on the basic database structure developed by researchers at the Center for the Environmental Implications of Nanotechnology (CEINT), exposure data for carbon nanotubes (CNTs) will be collected and imported into the database, and in the meantime the database structure will be further optimized to fit the newly imported datasets. The project follows five steps: 1. Finding related studies and sources. 2. Extracting data from the sources. 3. Preparing source files for the database. 4. Importing the data into the MySQL database. 5. Querying data from the database. The database consists of six sections: 1. Materials: recording the properties of the nanomaterials tested in each study. 2. Environmental System: describing the environmental system in which the study was conducted. 3. Biological System: recording information about the organisms chosen for the exposure experiments. 4. Functional Assay: recording assays that provide parameters which can be used to describe the fate or effects of nanomaterial exposure. 5. Study: serving as the main section that connects the previous parts and makes the whole database functional. 6. Study_PI_Publication: recording information about the principal investigator and the publication...
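A minimal sketch of the six-section structure listed above expressed as relational tables; column choices beyond the section names are hypothetical, since the project's actual MySQL schema is not given in the abstract:

```python
# Simplified, illustrative schema for the six sections (SQLite used for the sketch).
import sqlite3

SCHEMA = """
CREATE TABLE materials   (material_id INTEGER PRIMARY KEY, name TEXT, diameter_nm REAL);
CREATE TABLE env_system  (env_id INTEGER PRIMARY KEY, medium TEXT, ph REAL);
CREATE TABLE bio_system  (bio_id INTEGER PRIMARY KEY, organism TEXT);
CREATE TABLE functional_assay (assay_id INTEGER PRIMARY KEY, parameter TEXT, value REAL);
CREATE TABLE study (
    study_id INTEGER PRIMARY KEY,
    material_id INTEGER REFERENCES materials(material_id),
    env_id INTEGER REFERENCES env_system(env_id),
    bio_id INTEGER REFERENCES bio_system(bio_id),
    assay_id INTEGER REFERENCES functional_assay(assay_id)
);
CREATE TABLE study_pi_publication (
    study_id INTEGER REFERENCES study(study_id),
    principal_investigator TEXT,
    publication_doi TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```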

Integrating database and data stream systems

Mashruwala, Rutul
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Master's project Format: 1052380 bytes; 56739 bytes; application/pdf; application/pdf
EN_US
Search relevance: 36.53%
Traditionally, database systems are viewed as passive data storage: finite data sets are stored in traditional database systems and retrieved when needed. But applications such as sensor networks, network monitoring, retail transactions and others produce unbounded data sets. A new class of system, the Data Stream Management System (DSMS), is under research and development to deal with such data. In a DSMS, a data stream is a continuous source of sequential data. In object-oriented languages such as C/C++ and Java, the concept of a stream already exists: the stream is viewed as a channel into which data is inserted at one end and retrieved from the other. To the database world, the stream is a relatively new concept. In a DSMS, data is processed on-line, and by its very nature the data fed to an application through a data stream can be lost, as it is never stored; this makes the data stream non-persistent. Database systems, in contrast, are persistent, which is the basis of my hypothesis: that a Data Stream Management System and a Database System can be combined under the same concepts and a data stream can be made persistent. In this project, I have used an embedded database as middleware to cache the data that is fed to an application through a data stream. The embedded database is linked directly to the application that requires access to the stored data and is faster than a conventional database management system. Storing the streaming data in an embedded database makes the data stream persistent. In the system developed...
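A minimal sketch of the idea described above: an application consumes a data stream while an embedded database caches the tuples as they pass, making the otherwise transient stream persistent and queryable:

```python
# SQLite stands in for the embedded database used as middleware in the project.
import sqlite3, random, time

def sensor_stream(n):
    """Stand-in for a continuous data stream (e.g. sensor readings)."""
    for i in range(n):
        yield (i, time.time(), round(random.uniform(18.0, 25.0), 2))

cache = sqlite3.connect("stream_cache.db")   # embedded DB linked to the application
cache.execute("CREATE TABLE IF NOT EXISTS readings (seq INTEGER, ts REAL, temp_c REAL)")

for record in sensor_stream(100):
    cache.execute("INSERT INTO readings VALUES (?,?,?)", record)  # persist on arrival
cache.commit()

# Because the stream was persisted, past data can still be queried later.
print(cache.execute("SELECT COUNT(*), ROUND(AVG(temp_c),2) FROM readings").fetchone())
```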

Performance issues in mid-sized relational database machines

Sullivan, Larry
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral thesis
EN_US
Search relevance: 36.51%
Relational database systems have provided end users and application programmers with an improved working environment over older hierarchical and network database systems. End users now use interactive query languages to inspect and manage their data, and application programs are easier to write and maintain due to the separation of physical data storage information from the application program itself. These and other benefits do not come without a price, however. System resource consumption has long been the perceived problem with relational systems. The additional resource demands usually force computing sites to upgrade existing systems or add additional facilities. One method of protecting the current investment in systems is to use specialized hardware designed specifically for relational database processing. 'Database machines' provide that alternative. Since the commercial introduction of database machines in the early 1980s, both software and hardware vendors of relational database systems have claimed superior performance over competing products. Without a standard performance measurement technique, the database user community has been flooded with benchmarks and claims from vendors which are immediately discarded by some competitors as being biased towards a particular system design. This thesis discusses the issues of relational database performance measurement with an emphasis on database machines...

An Extended Video Database Model for Supporting Finer-Grained Multi-Policy and Multi-Level Access Controls

Thy Tran, Nguyen Anh; Khanh Dang, Tran
Source: Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo Publisher: Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo
Type: Journal article Format: text/html
Published on 01/12/2008 EN
Search relevance: 36.48%
The growing amount of multimedia data available to the average user has reached a critical phase, where methods for indexing, searching, and efficient retrieval are needed to manage the information overload. Many research works related to this field have been conducted within the last few decades and, consequently, several video database models have been proposed. Most modern video database models make use of hierarchical structures to organize huge amounts of video and to support efficient video retrieval. Even now, among open research issues, video database access control is still an interesting research area with many proposed models. In this paper, we present a hybrid video database model which is a combination of the hierarchical video database model and annotations. In particular, we extend the original hierarchical indexing mechanism to add frames and salient objects at the lowest granularity level in the video tree, with the aim of supporting multi-level access control. Also, we give users more ways to query for videos based on the video contents using annotations. In addition, we adapt the original database access control model to fit the characteristics of video data. Our modified model supports both multiple access control policies...
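A minimal, hypothetical sketch of the two ideas above: a hierarchical video tree extended down to frames and salient objects, with an access check at each granularity level. It does not reproduce the authors' model:

```python
# Levels, clearance values and the example tree are invented for the sketch.
from dataclasses import dataclass, field

@dataclass
class VideoNode:
    name: str
    level: str                      # e.g. "video", "scene", "frame", "object"
    min_clearance: int = 0          # clearance required to view this node
    children: list["VideoNode"] = field(default_factory=list)

def visible_nodes(node, clearance):
    """Return the parts of the video tree a user with `clearance` may access."""
    if clearance < node.min_clearance:
        return []                   # pruning a node also hides its finer-grained content
    found = [f"{node.level}:{node.name}"]
    for child in node.children:
        found.extend(visible_nodes(child, clearance))
    return found

video = VideoNode("surgery_recording", "video", 1, [
    VideoNode("scene_1", "scene", 1, [
        VideoNode("frame_0042", "frame", 2,
                  [VideoNode("patient_face", "object", 3)]),
    ]),
])

print(visible_nodes(video, clearance=2))   # sees frames but not the salient object
```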

Update of coal pillar database for South African coal mining

van der Merwe, J.N.; Mathey, M.
Source: Journal of the Southern African Institute of Mining and Metallurgy Publisher: Journal of the Southern African Institute of Mining and Metallurgy
Type: Journal article Format: text/html
Published on 01/11/2013 EN
Search relevance: 36.54%
Following the Coalbrook disaster in 1960, research into coal pillar strength resulted in the adoption of the concept of a safety factor for the design of stable pillars in South African coal mining. At the time when the original statistical analysis was performed by Salamon and Munro in the early 1960s, 27 cases of failed pillar workings were considered suitable for inclusion in the database of failed pillars. Pillar failure did not stop after the introduction of the safety factor formula by Salamon and Munro (1967). In the ensuing years, pillars that were created before the application of the formula deteriorated and later failed, as did ones that were created after the introduction of the formula. This means that over time, the database of failed pillar cases increased in size, allowing ever more reliable analyses to be performed. The number of failed cases in the database had grown from the original 27 in the 1960s to 86 by 2011. All the failed cases are contained in the updated database. The database of stable pillars, which is also used in the derivation of strength formulae, has now been extended from 125 to 337 cases. The new database of intact pillar cases is more complete as it bridges the time gap between the Salamon and Munro (1967) and the Van der Merwe (2006) databases. The original requirements for inclusion into the database were satisfied in the compilation of this latest collection. The characteristics of the original database of intact pillars did not change in a meaningful way. The mining depth and pillar dimensions of the new database are largely as they were in the original database. Time-related trends with regard to pillar dimensions and depth of mining could not be found...
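A minimal sketch of the safety-factor concept referred to above, using the widely quoted Salamon and Munro (1967) strength formula and tributary-area loading; the constants are assumed here, not taken from this paper:

```python
# Safety factor = pillar strength / pillar load (assumed standard formulation).
def pillar_strength_mpa(w, h):
    """Salamon-Munro pillar strength (MPa): w = pillar width (m), h = mining height (m)."""
    return 7.176 * w**0.46 / h**0.66

def pillar_load_mpa(depth, w, bord):
    """Tributary-area pillar load (MPa), assuming ~0.025 MPa per metre of overburden."""
    return 0.025 * depth * ((w + bord) ** 2) / (w ** 2)

def safety_factor(depth, w, h, bord):
    return pillar_strength_mpa(w, h) / pillar_load_mpa(depth, w, bord)

# Hypothetical bord-and-pillar layout: 12 m pillars, 6 m bords, 3 m high, 100 m deep.
print(round(safety_factor(depth=100.0, w=12.0, h=3.0, bord=6.0), 2))
```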