Page 1 of results: 15 digital items found in 0.007 seconds

Implementation of a meso-scale fluid dynamics simulator with the DPD method

Agnus Azevedo Horta
Source: Biblioteca Digital da Unicamp · Publisher: Biblioteca Digital da Unicamp
Type: Master's dissertation · Format: application/pdf
Published 29/04/2014 · Language: PT
Search relevance: 16.22%
The main objective of this work is the implementation of the simulation engine of a particle-based simulation framework. The engine uses the DPD (Dissipative Particle Dynamics) method and is built on the object-oriented programming (OOP) paradigm and on optimized data structures; it was written in C++. The system was designed to facilitate and promote code reuse and maintainability. Flexibility and generality were also sought through the use of Python to generate the input files describing the spatial distribution of the particles, with the XML (eXtensible Markup Language) markup language used to structure the files produced by the simulation. Finally, the simulation engine is evaluated on the problem of fluid flow between parallel plates, and the results are compared with those obtained with the Hoomd-Blue simulator.
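For readers unfamiliar with the method named in the abstract: in DPD, each particle pair within a cutoff interacts through a soft conservative force plus dissipative and random forces linked by the fluctuation-dissipation relation. The sketch below is a minimal Python version of that pairwise force; the dissertation's engine is C++, and all parameter values here are illustrative defaults, not taken from the work.

```python
import math
import random

def dpd_force(ri, rj, vi, vj, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.04):
    """Pairwise DPD force on particle i from j: conservative + dissipative + random.
    Positions/velocities are 3-tuples; parameter values are illustrative defaults."""
    dx = [ri[k] - rj[k] for k in range(3)]
    r = math.sqrt(sum(c * c for c in dx))
    if r >= rc or r == 0.0:
        return [0.0, 0.0, 0.0]
    e = [c / r for c in dx]                   # unit vector from j to i
    w = 1.0 - r / rc                          # weight function w^R(r)
    dv = [vi[k] - vj[k] for k in range(3)]
    ev = sum(e[k] * dv[k] for k in range(3))  # projection e . v_ij
    sigma = math.sqrt(2.0 * gamma * kT)       # fluctuation-dissipation relation
    theta = random.gauss(0.0, 1.0)
    fc = a * w                                # conservative: soft repulsion
    fd = -gamma * w * w * ev                  # dissipative: w^D = (w^R)^2
    fr = sigma * w * theta / math.sqrt(dt)    # random kicks
    return [(fc + fd + fr) * e[k] for k in range(3)]

# With kT=0 the random term vanishes and zero relative velocity kills the drag,
# leaving only the soft conservative repulsion a*(1 - r/rc) along e.
f = dpd_force((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), kT=0.0)
assert abs(f[0] - 12.5) < 1e-9
```

The soft (finite at r=0) conservative force is what makes DPD suitable for meso-scale fluids: it permits the large time steps that hard-core potentials forbid.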

Multifocal : a strategic bidirectional transformation language for XML schemas

Pacheco, Hugo; Cunha, Alcino
Source: Springer · Publisher: Springer
Type: Journal article
Published 2012 · Language: ENG
Search relevance: 36.49%
Lenses are one of the most popular approaches to define bidirectional transformations between data models. However, writing a lens transformation typically implies describing the concrete steps that convert values in a source schema to values in a target schema. In contrast, many XML-based languages allow writing structure-shy programs that manipulate only specific parts of XML documents without having to specify the behavior for the remaining structure. In this paper, we propose a structure-shy bidirectional two-level transformation language for XML Schemas that describes generic type-level transformations over schema representations coupled with value-level bidirectional lenses for document migration. When applying these two-level programs to particular schemas, we employ an existing algebraic rewrite system to optimize the automatically-generated lens transformations, and compile them into Haskell bidirectional executables. We discuss particular examples involving the generic evolution of recursive XML Schemas, and compare their performance gains over non-optimized definitions.; Fundação para a Ciência e a Tecnologia
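For readers unfamiliar with lenses: a lens is a pair of functions, a forward `get` and a backward `put`, obeying round-tripping laws. The sketch below is plain Python, not Multifocal's actual Haskell combinators; it shows a field lens and sequential composition, the building blocks such languages compose at a larger scale.

```python
from collections import namedtuple

# get: source -> view; put: (source, new view) -> updated source
Lens = namedtuple("Lens", ["get", "put"])

def field(name):
    """Lens focusing on one key of a dict-shaped source (illustrative only)."""
    return Lens(
        get=lambda s: s[name],
        put=lambda s, v: {**s, name: v},
    )

def compose(outer, inner):
    """Sequential lens composition: view through outer, then inner."""
    return Lens(
        get=lambda s: inner.get(outer.get(s)),
        put=lambda s, v: outer.put(s, inner.put(outer.get(s), v)),
    )

addr_city = compose(field("address"), field("city"))
src = {"name": "Ann", "address": {"city": "Braga", "zip": "4700"}}
assert addr_city.get(src) == "Braga"
upd = addr_city.put(src, "Porto")
assert upd["address"] == {"city": "Porto", "zip": "4700"}
assert addr_city.put(src, addr_city.get(src)) == src  # GetPut law holds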

Behaviours for simulated humanoid robots

Mendonça, José Lucas Lemos
Source: Universidade de Aveiro · Publisher: Universidade de Aveiro
Type: Master's dissertation
Language: ENG
Search relevance: 16.22%
This thesis is part of the FC Portugal 3D team, which competes in the RoboCup 3D humanoid simulation league. Its objectives are to improve the behaviours already created and to develop tools that support the development and debugging of the robotic agent. With this in mind, the optimization process was made more efficient and adapted to include the new heterogeneous models. Running this process with the state-of-the-art CMA-ES algorithm, the duration of the get-up behaviour was cut in half. The agent was then run in sync mode, which lets simulations run as fast as the computer can process them rather than at the competition speed of 20 ms cycles. For the agent's posture, gyroscope information is now used and the Euler angles are computed, giving a better estimate of the robot's orientation. The agent architecture was also updated, and new behaviours were created and optimized to support the new heterogeneous models; compared with the standard model, some behaviours execute faster because of the physical differences. In slot behaviours, it is now possible to define preconditions at each step, so the agent can abort a behaviour when a condition is not met; this reduces the time wasted executing an entire behaviour in situations where success is improbable. In terms of tools...

Semantic markup of web pages supported by grammatical dependency parsers

Reis, Rúben Alberto Mendes Simões dos
Source: Universidade de Lisboa · Publisher: Universidade de Lisboa
Type: Master's dissertation
Published 2010 · Language: POR
Search relevance: 26.34%
Master's thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2010. With the exponential growth of information on the Web, access to information increasingly needs to be not only fast but efficient. Keyword search is the method used by the best-known web search engines; however, searching the Web can be improved by using a semantic representation of the information being sought. This work presents a tool for the semantic annotation of web pages written in Portuguese, supported by grammatical dependency parsers. The tool, named Marcador Semântico, assigns a semantic representation to the sentences of a text and records that representation in the RDF/XML markup language. This work also documents a web tool added to the online tool repository of the NLX group of the Faculdade de Ciências, Universidade de Lisboa. This tool, called LX Dep Parser, is a grammatical dependency parser that returns to the user a representation of the grammatical dependencies between the words of a sentence.

InParanoid 6: eukaryotic ortholog clusters with inparalogs

Berglund, Ann-Charlotte; Sjölund, Erik; Östlund, Gabriel; Sonnhammer, Erik L. L.
Source: Oxford University Press · Publisher: Oxford University Press
Type: Journal article
Language: EN
Search relevance: 16%
The InParanoid eukaryotic ortholog database (http://InParanoid.sbc.su.se/) has been updated to version 6 and is now based on 35 species. We collected all available ‘complete’ eukaryotic proteomes and Escherichia coli, and calculated ortholog groups for all 595 species pairs using the InParanoid program. This resulted in 2 642 187 pairwise ortholog groups in total. The orthology-based species relations are presented in an orthophylogram. InParanoid clusters contain one or more orthologs from each of the two species. Multiple orthologs in the same species, i.e. inparalogs, result from gene duplications after the species divergence. A new InParanoid website has been developed which is optimized for speed both for users and for updating the system. The XML output format has been improved for efficient processing of the InParanoid ortholog clusters.

jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats

Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio
Source: Blackwell Publishing Ltd · Publisher: Blackwell Publishing Ltd
Type: Journal article
Language: EN
Search relevance: 26.22%
We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/.
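The design idea described, one shared interface over many spectra formats so that mzIdentML consumers never touch format-specific code, can be sketched as follows. The names and methods are hypothetical Python stand-ins for illustration, not jmzReader's actual Java API.

```python
from abc import ABC, abstractmethod

class SpectrumParser(ABC):
    """Illustrative common interface in the spirit of the library's design:
    every format-specific parser resolves spectra through the same lookups."""
    @abstractmethod
    def get_spectrum_by_index(self, index): ...
    @abstractmethod
    def get_spectrum_by_id(self, spec_id): ...

class MgfParser(SpectrumParser):
    """Toy in-memory 'MGF' parser holding (id, peak list) pairs."""
    def __init__(self, spectra):
        self._spectra = list(spectra)
        self._by_id = dict(self._spectra)
    def get_spectrum_by_index(self, index):
        return self._spectra[index][1]
    def get_spectrum_by_id(self, spec_id):
        return self._by_id[spec_id]

def resolve_reference(parser, ref):
    """Resolve an mzIdentML-style spectrum reference ('index=N' or 'id=X')
    without knowing which concrete format backs the parser."""
    kind, _, key = ref.partition("=")
    if kind == "index":
        return parser.get_spectrum_by_index(int(key))
    return parser.get_spectrum_by_id(key)

p = MgfParser([("s1", [(100.1, 5.0)]), ("s2", [(200.2, 7.0)])])
assert resolve_reference(p, "index=1") == [(200.2, 7.0)]
assert resolve_reference(p, "id=s1") == [(100.1, 5.0)]
```

A second parser class for another format would plug into `resolve_reference` unchanged, which is exactly the benefit the abstract claims for mzIdentML software.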

Integrating and visualizing primary data from prospective and legacy taxonomic literature

Miller, Jeremy A.; Agosti, Donat; Penev, Lyubomir; Sautter, Guido; Georgiev, Teodor; Catapano, Terry; Patterson, David; King, David; Pereira, Serrano; Vos, Rutger Aldo; Sierra, Soraya
Source: Pensoft Publishers · Publisher: Pensoft Publishers
Type: Journal article
Published 12/05/2015 · Language: EN
Search relevance: 16.46%
Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal...

Constructing a Relational Query Optimizer for Non-Relational Languages

Rittinger, Jan
Source: University of Tübingen · Publisher: University of Tübingen
Type: Dissertation
Language: EN
Search relevance: 16.52%
Flat, unordered table data and a declarative query language established today’s success of relational database systems. Provided with the freedom to choose the evaluation order and underlying algorithms, their complex query optimizers are geared to come up with the best execution plan for a given query. With over 30 years of development and research, relational database management systems belong to the most mature and efficient query processors (especially for substantial amounts of data). In contrast, ordered lists of possibly nested data structures are used throughout in programming languages. Most developers handle these constructs on a daily basis and need to change their programming style, when confronted with a relational database system. To leverage the potential of the relational query processing facility in the context of non-relational languages—without the need for a context switch—we provide a query language that copes with order, nesting, and more complex data structures (such as tuples, named records, and XML data). Queries expressed in this language are compiled into relational queries over flat, unordered tables. We take great care in the faithful mapping of the “alien” language constructs. This work describes the Pathfinder compiler achieving the transformation based on the loop lifting compilation strategy. The compiler transforms the input queries into logical algebra plans. The operators of this unordered algebra consist mainly of standard table algebra operators. Additional numbering and XML operators generate surrogate values to ensure an accurate representation of order...
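The core trick described above, encoding ordered nested values as flat unordered tables with explicit numbering columns, can be sketched in a few lines. This is a toy illustration of the idea, not Pathfinder's actual loop-lifting algebra.

```python
def loop_lift(nested):
    """Encode an ordered list of lists as one flat table whose rows carry
    explicit numbering columns (iter, pos) instead of relying on row order."""
    return [{"iter": it, "pos": pos, "item": value}
            for it, inner in enumerate(nested)
            for pos, value in enumerate(inner)]

def restore(rows, width):
    """Recover the nested value using only the numbering columns; the
    table's own row order is irrelevant, as in an unordered relation."""
    out = [[] for _ in range(width)]
    for r in sorted(rows, key=lambda r: (r["iter"], r["pos"])):
        out[r["iter"]].append(r["item"])
    return out

table = loop_lift([["a", "b"], ["c"]])
# Shuffling the rows changes nothing: order lives in the columns, not the table.
assert restore(list(reversed(table)), 2) == [["a", "b"], ["c"]]
```

Surrogate columns like `iter` and `pos` are what let an order-blind relational engine faithfully answer queries over ordered, nested source data.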

Improving the productivity of information and knowledge workers

Pajares Nevado, Mª Isabel
Source: Universidad Carlos III de Madrid · Publisher: Universidad Carlos III de Madrid
Type: Doctoral thesis · Format: application/pdf
Language: SPA
Search relevance: 16%
Proper information management is vital for any institution or organization to exist and progress. In the global village of today's society, information is power. However, the bad practices of some departments, units, or individuals who hoard relevant information, hiding it and keeping it away from the organization's global criteria and strategies, weaken the corporate strategy. In the current environment this burden can significantly reduce production and, therefore, competitiveness in the market. This fragmentation, driven by self-interest rather than the benefit of the whole company, is addressed here from a single angle: improving the production of information and knowledge workers. To that end, it is essential to ensure that information flows freely throughout the organization. This thesis describes, across its chapters, the environment conducive to deploying the capabilities of knowledge workers and the consequent increase in their productivity, through the use of several technologies: workflow, SharePoint, the XML format, information storage management systems, and unified communications. One of the conclusions that can be drawn is that a company that manages its information effectively anticipates market events...

Efficient XML Interchange: Compact, Efficient, and Standards-Based XML

Snyder, Sheldon; McGregor, Don; Brutzman, Don
Source: Naval Postgraduate School · Publisher: Naval Postgraduate School
Type: Work in progress
Language: EN_US
Search relevance: 46.61%
Documents include Paper and Presentation.; Simulation Interoperability Standards Organization (SISO) SIW Conference Paper; XML has become a popular representation format for data, both in modeling and simulation and elsewhere. However, XML's design choice of a text-based format also makes XML data files much larger than binary files, making XML languages difficult to use in bandwidth-constrained military applications. This limitation has resulted in several ad hoc attempts to make XML more compact, each of which tends to be incompatible with the others. Efficient XML Interchange (EXI) is a World Wide Web Consortium (W3C) Working Draft for the compact and efficient representation of the XML infoset. EXI is designed to be generally applicable to all XML documents, and lays the foundation for a unified format for compact XML document representation. This paper presents compactness results for several popular modeling and simulation XML file formats, including Distributed Interactive Simulation (DIS), Scalable Vector Graphics (SVG) and Extensible 3D Graphics (X3D). Recent commercial and open source EXI implementations are also described.; Naval Postgraduate School, Monterey, CA.
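The redundancy that makes text XML a compression target can be demonstrated with generic zlib compression. This is not EXI, whose schema-informed encoding does substantially better than generic compressors; the element names below are invented for the demo.

```python
import zlib
from xml.etree import ElementTree as ET

# Build a small DIS-flavoured document; tag and attribute names are made up.
root = ET.Element("EntityStatePdus")
for i in range(200):
    entity = ET.SubElement(root, "entity", id=str(i))
    ET.SubElement(entity, "x").text = str(float(i))
    ET.SubElement(entity, "y").text = str(float(i) * 2)

text = ET.tostring(root)          # verbose text serialization
packed = zlib.compress(text, 9)   # generic compression as a stand-in for the savings
# The repeated tag structure is highly redundant, so the text form shrinks a lot.
assert len(packed) < len(text) / 2
```

The point of EXI over a generic compressor is that it exploits the XML infoset and (optionally) the schema, producing a standard, interoperable compact form rather than an ad hoc one.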

TiD -- Documentation of TOBI Interface D

Breitwieser, Christian
Source: Cornell University · Publisher: Cornell University
Type: Journal article
Published 05/07/2015
Search relevance: 16%
This document contains the documentation of TOBI (Tools for BCI) Interface D (TiD). TiD tries to establish a standardized interface for event transmission in neuroscience experiments. It is designed in a client-server architecture: clients connect to a single server and events are delivered in a bus-like manner. TiD events are based on XML messages, so TiD messages can also be custom-extended. A cross-platform C++ library is available and provides all TiD functionality. To avoid jitter effects that hamper event processing, TiD was optimized for performance. The documentation provides further performance tweaks to decrease TiD message processing latency and to reduce event timing jitter.
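An XML-based event message of the general kind described can be sketched as below. The element and attribute names here are invented for illustration and do not match the actual TiD wire format; the point is only that an XML envelope is both easy to parse and easy to extend with custom fields.

```python
from xml.etree import ElementTree as ET

def make_event(family, name, timestamp_us):
    """Serialize an event as a one-line XML message (hypothetical schema)."""
    el = ET.Element("event", family=family, name=name,
                    timestamp=str(timestamp_us))
    return ET.tostring(el, encoding="unicode")

def parse_event(msg):
    """Recover (family, name, timestamp) from the XML message."""
    el = ET.fromstring(msg)
    return el.get("family"), el.get("name"), int(el.get("timestamp"))

msg = make_event("stimulus", "flash", 1720000123456)
assert parse_event(msg) == ("stimulus", "flash", 1720000123456)
```

Because unknown attributes and child elements are simply ignored by a parser that does not expect them, clients can extend such messages without breaking older peers, which is the extensibility property the documentation highlights.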

Efficient management of time series in P2P: application to technical analysis and the study of moving objects

Gardarin, Georges; Nguyen, Benjamin; Yeh, Laurent; Zeitouni, Karine; Butnaru, Bogdan; Sandu-Popa, Iulian
Source: Cornell University · Publisher: Cornell University
Type: Journal article
Published 03/06/2010
Search relevance: 16%
In this paper, we propose a simple generic model to manage time series. A time series is composed of a calendar with a typed value for each calendar entry. Although the model could support any kind of XML-typed value, in this paper we focus on real numbers, the usual application. We define basic vector-space operations (plus, minus, scale), as well as relational-like and application-oriented operators to manage time series. We show the interest of this generic model in two applications: (i) a stock investment helper; (ii) an ecological transport management system. Stock investment requires window-based operations, while trip management requires complex queries. The model has been implemented and tested in PHP, Java, and XQuery. We show benchmark results illustrating that computing 5000 series of over 100,000 entries in length, a common requirement for both applications, is difficult on classical centralized PCs. In order to serve a community of users sharing time series, we propose a P2P implementation of time series, dividing them into segments and providing optimized algorithms for operator expression computation.
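The model described, a calendar plus aligned values with vector-space and window operators, can be sketched as follows. This is a guess at one possible in-memory implementation, not the authors' PHP/Java/XQuery code.

```python
class TimeSeries:
    """Calendar-indexed series with the model's vector-space operations
    (plus, minus, scale) and a window operator for technical analysis."""
    def __init__(self, calendar, values):
        assert len(calendar) == len(values)
        self.calendar = list(calendar)
        self.values = list(values)

    def _zip(self, other, op):
        # Vector-space operations require the two calendars to coincide.
        assert self.calendar == other.calendar, "calendars must be aligned"
        return TimeSeries(self.calendar,
                          [op(a, b) for a, b in zip(self.values, other.values)])

    def plus(self, other):  return self._zip(other, lambda a, b: a + b)
    def minus(self, other): return self._zip(other, lambda a, b: a - b)
    def scale(self, k):     return TimeSeries(self.calendar, [k * v for v in self.values])

    def window(self, n, agg):
        """Sliding window aggregate, e.g. a moving average for the stock helper."""
        out = [agg(self.values[i - n + 1:i + 1])
               for i in range(n - 1, len(self.values))]
        return TimeSeries(self.calendar[n - 1:], out)

days = ["d1", "d2", "d3", "d4"]
price = TimeSeries(days, [10.0, 12.0, 11.0, 13.0])
ma2 = price.window(2, lambda w: sum(w) / len(w))
assert ma2.values == [11.0, 11.5, 12.0]
assert price.scale(2).values == [20.0, 24.0, 22.0, 26.0]
```

A P2P deployment, as proposed in the paper, would split `values` into segments held by different peers and push the per-segment parts of each operator to where the data lives.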

An Optimized Data Structure for High Throughput 3D Proteomics Data: mzRTree

Nasso, Sara; Silvestri, Francesco; Tisiot, Francesco; Di Camillo, Barbara; Pietracaprina, Andrea; Toffolo, Gianna Maria
Source: Cornell University · Publisher: Cornell University
Type: Journal article
Search relevance: 26.38%
As an emerging field, MS-based proteomics still requires software tools for efficiently storing and accessing experimental data. In this work, we focus on the management of LC-MS data, which are typically made available in standard XML-based portable formats. The structures that are currently employed to manage these data can be highly inefficient, especially when dealing with high-throughput profile data. LC-MS datasets are usually accessed through 2D range queries. Optimizing this type of operation could dramatically reduce the complexity of data analysis. We propose a novel data structure for LC-MS datasets, called mzRTree, which embodies a scalable index based on the R-tree data structure. mzRTree can be efficiently created from the XML-based data formats and it is suitable for handling very large datasets. We experimentally show that, on all range queries, mzRTree outperforms other known structures used for LC-MS data, even on those queries these structures are optimized for. Besides, mzRTree is also more space efficient. As a result, mzRTree reduces data analysis computational costs for very large profile datasets.; Comment: Paper details: 10 pages, 7 figures, 2 tables. To be published in Journal of Proteomics. Source code available at http://www.dei.unipd.it/mzrtree
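The 2D range query access pattern that mzRTree indexes looks like the sketch below; this linear scan over (retention time, m/z, intensity) triples is the naive baseline such an index is built to beat, since an R-tree answers the same query while touching only the nodes whose bounding boxes intersect the query window.

```python
def range_query(points, rt_min, rt_max, mz_min, mz_max):
    """2D range query over LC-MS-like (retention time, m/z, intensity)
    triples. A linear scan: O(n) per query, regardless of selectivity."""
    return [(rt, mz, inten) for rt, mz, inten in points
            if rt_min <= rt <= rt_max and mz_min <= mz <= mz_max]

points = [
    (1.0, 400.2, 9.0),
    (2.5, 401.0, 3.0),
    (2.6, 500.5, 7.0),
    (4.0, 400.9, 1.0),
]
hits = range_query(points, 2.0, 3.0, 400.0, 450.0)
assert hits == [(2.5, 401.0, 3.0)]
```

For profile datasets with hundreds of millions of points, replacing this scan with a spatial index is what turns each analysis step from a full pass over the file into a handful of localized reads.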

Concept Relation Discovery and Innovation Enabling Technology (CORDIET)

Poelmans, Jonas; Elzinga, Paul; Neznanov, Alexey; Viaene, Stijn; Kuznetsov, Sergei O.; Ignatov, Dmitry; Dedene, Guido
Source: Cornell University · Publisher: Cornell University
Type: Journal article
Published 13/02/2012
Search relevance: 16%
Concept Relation Discovery and Innovation Enabling Technology (CORDIET) is a toolbox for gaining new knowledge from unstructured text data. At the core of CORDIET is the C-K theory, which captures the essential elements of innovation. The tool uses Formal Concept Analysis (FCA), Emergent Self Organizing Maps (ESOM) and Hidden Markov Models (HMM) as the main artifacts in the analysis process. The user can define temporal, text mining and compound attributes. The text mining attributes are used to analyze the unstructured text in documents; the temporal attributes use these documents' timestamps for analysis. The compound attributes are XML rules based on text mining and temporal attributes. The user can cluster objects with object-cluster rules and can chop the data into pieces with segmentation rules. The artifacts are optimized for efficient data analysis; object labels in the FCA lattice and ESOM map contain a URL on which the user can click to open the selected document.
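The FCA core that such tools build on derives formal concepts, maximal (extent, intent) pairs, from a binary object-attribute context. The brute-force sketch below is fine only for tiny contexts and is not CORDIET's actual algorithm; it just makes the closure idea concrete.

```python
from itertools import combinations

def concepts(context):
    """Enumerate the formal concepts of a binary context given as
    {object: set of attributes}. Brute force over all object subsets."""
    objects = sorted(context)
    attrs = sorted({a for row in context.values() for a in row})

    def common_attrs(objs):
        # Attributes shared by every object in objs (all attrs if objs is empty).
        return ({a for a in attrs if all(a in context[o] for o in objs)}
                if objs else set(attrs))

    def common_objs(ats):
        # Objects possessing every attribute in ats.
        return {o for o in objects if ats <= context[o]}

    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = frozenset(common_attrs(set(objs)))
            extent = frozenset(common_objs(intent))  # closure of the subset
            found.add((extent, intent))
    return found

# Toy document/attribute context in the spirit of the tool's text mining attributes.
ctx = {"doc1": {"fraud", "urgent"}, "doc2": {"fraud"}, "doc3": {"urgent"}}
cs = concepts(ctx)
assert (frozenset({"doc1", "doc2"}), frozenset({"fraud"})) in cs
```

Each concept groups exactly the documents sharing an attribute set, which is what lets the lattice serve as a browsable clustering of the corpus.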

Empirical study of sensor observation services server instances

Tamayo, Alain; Viciano, Pablo; Granell, Carlos; Huerta, Joaquín
Source: Cornell University · Publisher: Cornell University
Type: Journal article
Published 21/09/2011
Search relevance: 16%
The number of Sensor Observation Service (SOS) instances available online has been increasing in the last few years. The SOS specification standardises interfaces and data formats for exchanging sensor-related information between information providers and consumers. SOS, in conjunction with other specifications in the Sensor Web Enablement initiative, attempts to realise the Sensor Web vision: a worldwide system where sensor networks of any kind are interconnected. In this paper we present an empirical study of actual instances of servers implementing SOS. The study focuses mostly on which parts of the specification are more frequently included in real implementations, and on how exchanged messages follow the structure defined by XML Schema files. Our findings can be of practical use when implementing servers and clients based on the SOS specification, as they can be optimized for common scenarios.; Comment: 25 pages, 11 tables, 6 figures