Computing systems have gained great importance and are today considered critical factors in the business world. The wide diffusion and vital importance of such systems give their management a prominent role: management efficiency is necessary not only to guarantee productivity but also the very survival of many companies, given the nature of electronic commerce, where the competition is literally one click away. In addition, the remarkable evolution of computing systems over the last decades, from centralized systems to complex, geographically distributed decentralized systems, has demanded an equal evolution of their management systems. Alternatives to the conventional management model, based on the SNMP protocol, have been proposed; most of them build on XML, a technology that has steadily gained ground and popularity in the representation and standardization of information. Recently, a new XML-based technology emerged as a great promise, mainly due to its strong interoperability: Web Services. The purpose of this work is to analyze the feasibility and potential of this technology for network management and to compare its application with the models currently in use.
This paper presents a solution that uses ontologies to achieve interoperability among heterogeneous information systems (composed of relational databases and annotated documents, among others). Oveia is an extractor of ontologies represented in the Topic Maps format. Its architecture comprises two specifications and their respective processors: the first, written in the XSDS language (XML Specification for
DataSources/DataSets), specifies the data to be extracted from the information sources; the second, written in the XS4TM language (XML Specification for Topic Maps), is responsible
for declaring the ontologies to be generated. Based on these specifications, the extractor gathers
information from the sources and produces a topic map. The generated topic map can be stored in XTM (XML Topic Maps) format or in a relational database. This dual capability (handling several kinds of information sources and storing the result on different supports) is a clear advantage over other ontology-extraction tools, namely over its predecessor, TM-Builder, which could only deal
with XML documents and over which Oveia represents a justified evolution.
Librelotto, Giovani Rubert; Ramalho, José Carlos; Henriques, Pedro Rangel
Source: Universidade do Minho | Publisher: Universidade do Minho
Type: Conference paper or conference object
Published in 2003 (English)
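As a concrete illustration of the XTM output format mentioned above, the following sketch builds a minimal XTM 1.0 topic map in Python. The records, identifiers, and the restriction to `baseName` elements are illustrative assumptions; the real extractor is driven by XSDS/XS4TM specifications, which are not modelled here.

```python
import xml.etree.ElementTree as ET

XTM_NS = "http://www.topicmaps.org/xtm/1.0/"

def records_to_xtm(records):
    """Build a minimal XTM 1.0 topic map from (id, name) records.

    A toy stand-in for what an extractor like Oveia produces: one
    <topic> per record, each carrying a single <baseName>.
    """
    ET.register_namespace("", XTM_NS)
    topic_map = ET.Element(f"{{{XTM_NS}}}topicMap")
    for topic_id, name in records:
        topic = ET.SubElement(topic_map, f"{{{XTM_NS}}}topic", {"id": topic_id})
        base_name = ET.SubElement(topic, f"{{{XTM_NS}}}baseName")
        ET.SubElement(base_name, f"{{{XTM_NS}}}baseNameString").text = name
    return ET.tostring(topic_map, encoding="unicode")

xtm = records_to_xtm([("t1", "Ontology"), ("t2", "Topic Maps")])
```

A real topic map would also carry associations and occurrences linking topics back to the information sources; the sketch stops at named topics.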
Every day thousands of new information resources are linked to the web. The web
is thus growing very fast, which makes search tasks more difficult. To address
the problem several initiatives were undertaken and a new area of research and
development emerged: the Semantic Web.
When we refer to the Semantic Web we are thinking about a network of concepts.
Each concept has a group of related resources and can be related to other
concepts, so we can use this concept network to navigate among web resources or
simply among information resources. One of the undertaken initiatives became an
ISO standard: Topic Maps, ISO 13250.
The aim of this paper is to introduce a Topic Map (TM) Builder, that is, a
processor that extracts topics and relations from instances of a family of XML
documents.
A Topic Map Builder is strongly dependent on the resource structure, so for a
heterogeneous set of information resources we would have to implement several
Topic Map Builders. This is not very practical, and to overcome this problem we
have created an XML abstraction layer for Topic Maps.
To describe that process, i.e. the extraction of knowledge from XML documents
to produce a TM, we present a language to specify topic maps for a class of XML
documents...
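The extraction step described above (topics and relations pulled from a family of XML documents) could be sketched roughly as follows. The `SPEC` mapping is a made-up stand-in for the TM-Builder's actual specification language, which is far richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical extraction spec: which XML elements yield topics, and what
# topic type they map to. The name is taken from a "name" attribute or
# from the element's text content.
SPEC = {"author": "person", "book": "work"}

def extract_topics(xml_doc, spec=SPEC):
    """Return (topic_type, topic_name) pairs from one XML instance."""
    root = ET.fromstring(xml_doc)
    topics = []
    for elem in root.iter():
        if elem.tag in spec:
            name = (elem.get("name") or (elem.text or "")).strip()
            if name:
                topics.append((spec[elem.tag], name))
    return topics

doc = "<library><book name='XML Handbook'><author>J. Doe</author></book></library>"
topics = extract_topics(doc)
```

A builder of this shape is exactly what becomes document-specific: change the DTD family and `SPEC` must change with it, which is the problem the abstraction layer above addresses.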
Source: Universidade do Minho | Publisher: Universidade do Minho
Type: Conference paper or conference object
Published on 16/04/1999 (English)
In this paper we present a Perl module, called XML::DT, that can
be used to translate and transform XML documents.
A programmer always looks for the simplest tool for a given task,
since development and maintenance will be easier. That is the main
idea behind the module we are presenting: a simple tool that will
enable users to speed up their work.
XML::DT includes some down-translation features that are common to
other SGML/XML processors available on the market, like Omnimark or
Balise, and some other features to deal with input and output of
Unicode character sets.
The idea was to adopt concepts and a syntax familiar to SGML/XML
programmers, but shaped to the usual Perl style.
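XML::DT itself is a Perl module; a rough Python analogue of its handler-per-element, bottom-up down-translation style is sketched below. The handler signature and the toy dialect are assumptions for illustration, not XML::DT's real API.

```python
import xml.etree.ElementTree as ET

def down_translate(xml_doc, handlers):
    """Bottom-up translation in the spirit of XML::DT: children are
    translated first, then each element is rewritten by the handler
    registered for its tag; unhandled elements pass their translated
    content through unchanged."""
    def visit(elem):
        content = (elem.text or "") + "".join(
            visit(child) + (child.tail or "") for child in elem
        )
        handler = handlers.get(elem.tag, lambda e, c: c)
        return handler(elem, content)
    return visit(ET.fromstring(xml_doc))

# Hypothetical mapping of a tiny XML dialect to plain text.
handlers = {
    "title": lambda e, c: c.upper() + "\n",
    "para": lambda e, c: c + "\n",
}
text = down_translate("<doc><title>Intro</title><para>Hello.</para></doc>", handlers)
```

The appeal of the style is that the whole transformation is the `handlers` table: one small function per element type, no explicit tree walking in user code.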
CoDeSys ("Controller Development Systems") is a development environment for programming automation controllers. It is an open source solution completely in line with the international industrial standard IEC 61131-3. All five programming languages for application programming defined in IEC 61131-3 are available in the development environment. These features give professionals greater flexibility in programming and allow control engineers to program many different applications in the languages in which they feel most comfortable.
Over 200 manufacturers of devices from different industrial sectors offer intelligent automation devices with a CoDeSys programming interface. In 2006, version 3 was released with new updates and tools.
One of the great innovations of the new version of CoDeSys is object-oriented programming. Object-oriented programming (OOP) offers great advantages to the user, for example when reusing existing parts of the application or when several developers work on one application. For such reuse, source code with several well-known parts can be prepared and then generated automatically where necessary in a project, improving time, cost, and quality management.
Until now, in version 2, it was necessary to have a hardware interface called "Eni-Server" to access the generated XML code. Another novelty of the new version is a tool called Export PLCopenXML, which makes it possible to export the open XML code without specific hardware. This type of code has its own requisites in order to comply with the standard described above. With the XML code, and with knowledge of how it works, component-oriented development of machines with modular programming becomes straightforward. Eplan Engineering Center (EEC) is a software tool developed by Mind8 GmbH & Co. KG that allows configuring and generating automation projects.
To do so, it uses PLC code modules. The EEC already has a library to generate code for CoDeSys version 2. For version 3, and given the constant innovation of drivers by manufacturers...
Objective: To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies.

Materials/Methods: Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMs. The other software system is primarily used for radiologic education on the World Wide Web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the Internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An Extensible Markup Language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems.

Results: Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently...
There is a large body of work on pattern mining focused on simple data structures such as itemsets or sequences of itemsets. Recent applications, however, use more complex data such as chemical compounds, protein structures, social networks, XML, and Web logs, which require more sophisticated data structures (trees or graphs) to be specified. Here, patterns of interest involve not only frequent object values (labels) that appear in trees (or graphs), but also specific frequent topologies found in those structures. Frequent tree-pattern mining has been widely studied, motivated by growing interest and applicability in different areas (Web mining, bioinformatics, etc.). Conventional tree-mining systems, however, only allowed the user to define the minimum support as a mechanism for filtering the patterns to be mined; after the mining process, hard work is still needed to filter out the patterns of interest to users. In this dissertation we propose CobMiner (Constraint-based Miner), a tree-pattern mining algorithm that incorporates Tree Automata into the mining process...
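Frequent tree-pattern mining with a minimum-support filter, the baseline mechanism the dissertation extends, can be illustrated with a drastically simplified sketch that only counts labeled parent-child edges across a forest; CobMiner's tree-automata constraints and full subtree patterns are not modelled here.

```python
from collections import Counter

def frequent_edges(forest, min_support):
    """Count (parent_label, child_label) edges across a forest of
    label-trees (nested tuples: (label, [children])) and keep those
    meeting the minimum support threshold."""
    counts = Counter()

    def walk(node):
        label, children = node
        for child in children:
            counts[(label, child[0])] += 1
            walk(child)

    for tree in forest:
        walk(tree)
    return {edge for edge, n in counts.items() if n >= min_support}

forest = [
    ("html", [("body", [("p", [])])]),
    ("html", [("body", [("p", []), ("p", [])])]),
]
patterns = frequent_edges(forest, min_support=2)
```

The point the dissertation makes is visible even here: support alone is a blunt filter, so pushing user constraints (tree automata) into `walk` would prune uninteresting candidates during mining rather than after it.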
Approved for public release; distribution is unlimited. The Extensible Markup Language (XML) is used for data storage and message exchange, Extensible 3D (X3D) Graphics for visualization, and XML Schema-based Binary Compression (XSBC) for data compression. The AUV Workbench provides an intuitive cross-platform-capable tool with extensibility to provide for future enhancements such as agent-based control, asynchronous reporting and communication, loss-free message compression and built-in support for mission data archiving. This thesis also investigates the Jabber instant messaging protocol, showing its suitability for text and file messaging in a tactical environment. Exemplars show that the XML backbone of this open-source technology can be leveraged to enable both human and agent messaging with improvements over current systems. Integrated Jabber instant messaging support makes the NPS AUV Workbench the first custom application supporting XML Tactical Chat (XTC). Results demonstrate that the AUV Workbench provides a capable testbed for diverse AUV technologies, assisting in the development of traditional single-vehicle operations and agent-based multiple-vehicle methodologies. The flexible design of the Workbench further encourages integration of new extensions to serve operational needs. Exemplars demonstrate how in-mission and post-mission event monitoring by human operators can be achieved via simple web page...
Approved for public release; distribution is unlimited; This thesis presents an analysis of a capability to employ CAPCO (Controlled Access Program Coordination Office) compliant metadata security tags as the basis for making security decisions. My research covers all the security aspects of the related technologies, such as XML, Web Services, Java APIs for XML, and the .NET architecture, to help determine how security-conscious enterprises such as the Intelligence Community can implement this approach in the real, insecure world, with commercial off-the-shelf products, to meet their needs. There were many concerns about using the XML metadata label tags as the basis for making security decisions, due to an untrusted environment. By using appropriate trusted parts, when really necessary, and new technologies, we can find secure solutions for creating, storing and disseminating XML documents. Besides the theoretical research, this thesis also presents a prototype development of a Web Service that can handle most of the tasks (save, save locally, review, etc.) which are required to securely manage XML documents. In order to implement the above Web Service, open-source products, such as Java and the Apache Tomcat Web Server, are used. These are not only available free...
Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the defining feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML...
This paper presents a framework for converting a class diagram into an XML
structure and shows how to use Web files for the design of data warehouses
based on UML classification. Extensible Markup Language (XML) has become a
standard for representing data over the Internet. We use an XSD schema to
define the structure of XML documents and to validate them.
A prototype has been developed which successfully migrates UML classes into
XML documents based on the formulated mathematical model. The experimental
results were very encouraging, demonstrating that the proposed approach is
feasible, efficient, and correct.
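One simple class-to-XML mapping rule of the kind such a framework could apply might look like this sketch; the paper's exact mapping rules and XSD generation are not reproduced, and the class and attribute names are invented.

```python
import xml.etree.ElementTree as ET

def class_to_xml(class_name, attributes):
    """Map a UML-style class to an XML element whose children are the
    class attributes, each typed via a 'type' attribute -- one of many
    possible mapping conventions."""
    elem = ET.Element(class_name)
    for attr_name, attr_type in attributes:
        ET.SubElement(elem, attr_name, {"type": attr_type})
    return ET.tostring(elem, encoding="unicode")

xml_out = class_to_xml("Customer", [("name", "string"), ("age", "integer")])
```

A companion XSD would then declare `Customer` as a complex type with `name` and `age` child elements, so that generated documents can be validated against the schema.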
With the rise of XML as a standard for representing business data, XML data
warehouses appear as suitable solutions for Web-based decision-support
applications. In this context, it is necessary to allow OLAP analyses over XML
data cubes (XOLAP). Thus, XQuery extensions are needed. To help define a formal
framework and allow much-needed performance optimizations on analytical queries
expressed in XQuery, having an algebra at one's disposal is desirable. However,
XOLAP approaches and algebras from the literature still largely rely on the
relational model and/or only feature a small number of OLAP operators. By
contrast, we propose in this paper to express a broad set of OLAP operators
with the TAX XML algebra.; Comment: in 3rd International Workshop on Database Technologies for Handling
XML Information on the Web (DataX-EDBT 08), Nantes : France (2008)
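A roll-up, one of the OLAP operators discussed above, can be illustrated on a toy XML fact cube. The cube layout, dimension names, and measure are invented, and the paper expresses such operators in the TAX algebra rather than in Python:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

CUBE = """<cube>
  <cell product="tv" city="Lyon" year="2007" sales="10"/>
  <cell product="tv" city="Lyon" year="2008" sales="15"/>
  <cell product="tv" city="Paris" year="2008" sales="20"/>
</cube>"""

def rollup(cube_xml, keep_dims, measure="sales"):
    """Roll up an XML fact cube along every dimension NOT in keep_dims,
    summing the measure over the collapsed dimensions."""
    totals = defaultdict(int)
    for cell in ET.fromstring(cube_xml).iter("cell"):
        key = tuple(cell.get(d) for d in keep_dims)
        totals[key] += int(cell.get(measure))
    return dict(totals)

by_city = rollup(CUBE, keep_dims=("city",))
```

In an XOLAP setting the same operation would be an algebraic expression over tree patterns, which is what makes optimization of analytical XQuery programs tractable.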
With the multiplication of XML data sources, many XML data warehouse models
have been proposed to handle data heterogeneity and complexity in a way
relational data warehouses fail to achieve. However, XML-native database
systems currently suffer from limited performance, both in terms of manageable
data volume and response time. Fragmentation helps address both these issues.
Derived horizontal fragmentation is typically used in relational data
warehouses and can definitely be adapted to the XML context. However, the
number of fragments produced by classical algorithms is difficult to control.
In this paper, we propose the use of a k-means-based fragmentation approach
that allows one to control the number of fragments through its $k$ parameter.
We experimentally compare its efficiency to classical derived horizontal
fragmentation algorithms adapted to XML data warehouses and show its...
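The idea of controlling the number of fragments through $k$ can be illustrated with a plain k-means sketch over invented predicate-usage vectors; the paper's actual input vectors come from the query workload, and its clustering details may differ.

```python
def kmeans(points, k, iters=10):
    """Plain k-means over fixed-length numeric vectors. Each resulting
    cluster corresponds to one fragment, so k directly bounds the
    number of fragments produced."""
    centers = points[:k]  # naive seeding: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Each vector: how often two workload predicates select a given fact.
usage = [(9.0, 1.0), (1.0, 9.0), (8.0, 0.0), (0.0, 8.0)]
fragments = kmeans(usage, k=2)
```

Facts selected by similar predicate combinations land in the same cluster, hence the same fragment, while $k$ caps the fragment count that classical derived fragmentation leaves uncontrolled.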
Most state-of-the art approaches for securing XML documents allow users to
access data only through authorized views defined by annotating an XML grammar
(e.g. DTD) with a collection of XPath expressions. To prevent improper
disclosure of confidential information, user queries posed on these views need
to be rewritten into equivalent queries on the underlying documents. This
rewriting enables us to avoid the overhead of view materialization and
maintenance. A major concern here is that query rewriting for recursive XML
views is still an open problem. To overcome this problem, some works have been
proposed to translate XPath queries into non-standard ones, called Regular
XPath queries. However, query rewriting under Regular XPath can be of
exponential size as it relies on an automaton model. Most importantly, Regular
XPath remains a theoretical achievement. Indeed, it is not commonly used in
practice as translation and evaluation tools are not available. In this paper,
we show that query rewriting is always possible for recursive XML views using
only the expressive power of the standard XPath. We investigate the extension
of the downward class of XPath, composed only of child and descendant axes,
with some axes and operators and we propose a general approach to rewrite
queries under recursive XML views. Unlike Regular XPath-based works...
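For simple downward paths, rewriting by compensation amounts to prefixing the user's query with the path that defines the view, as the sketch below shows on invented paths. The hard cases the paper addresses are recursive views, where this naive concatenation no longer suffices.

```python
def rewrite_with_compensation(view_path, query_path):
    """Rewrite a query posed against a virtual XPath view into a query
    on the underlying document: the view-defining path is composed with
    the user query (the 'compensation'). Only valid for simple
    non-recursive downward views."""
    assert query_path.startswith("/"), "query must be rooted at the view"
    return view_path + query_path

# Hypothetical security view exposing only patient subtrees.
doc_query = rewrite_with_compensation("/hospital/patient", "/record/diagnosis")
```

Evaluating the rewritten query directly on the document avoids materializing and maintaining the authorized view, which is the motivation stated above.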
XML is of great importance in information storage and retrieval because of
its recent emergence as a standard for data representation and interchange on
the Internet. However XML provides little semantic content and as a result
several papers have addressed the topic of how to improve the semantic
expressiveness of XML. Among the most important of these approaches has been
that of defining integrity constraints in XML. In a companion paper we defined
strong functional dependencies in XML (XFDs). We also presented a set of axioms
for reasoning about the implication of XFDs and showed that the axiom system is
sound for arbitrary XFDs. In this paper we prove that the axioms are also
complete for unary XFDs (XFDs with a single path on the l.h.s.). The second
contribution of the paper is to prove that the implication problem for unary
XFDs is decidable and to provide a linear time algorithm for it.
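For intuition, the relational analogue of deciding implication is the classical closure computation sketched below; the paths and dependencies are illustrative, and the paper's linear-time algorithm for unary XFDs works on XML paths rather than this generic fixpoint.

```python
def closure(paths, fds):
    """Closure of a set of l.h.s. paths under functional dependencies,
    given as (frozenset_of_lhs_paths, rhs_path) pairs: repeatedly add
    every r.h.s. whose l.h.s. is already covered."""
    result = set(paths)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and rhs not in result:
                result.add(rhs)
                changed = True
    return result

# Hypothetical unary XFDs over XML paths.
fds = [
    (frozenset({"/order/@id"}), "/order/customer"),
    (frozenset({"/order/customer"}), "/order/customer/country"),
]
implied = closure({"/order/@id"}, fds)
```

An XFD follows from a set of XFDs exactly when its r.h.s. lands in the closure of its l.h.s.; for unary XFDs (single l.h.s. path) this test admits the linear-time procedure the paper provides.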
XML is now becoming an industry standard for data description and exchange.
Despite this, there are still some questions about how, or whether, this technology can
be useful in High Energy Physics software development and data analysis. This
paper aims to answer these questions by demonstrating how XML is used in the
IceCube software development system, data handling and analysis. It does this
by first surveying the concepts and tools that make up the XML technology. It
then goes on to discuss concrete examples of how these concepts and tools are
used to speed up software development in IceCube and what are the benefits of
using XML in IceCube's data handling and analysis chain. The overall aim of
this paper is to show that XML does have many benefits to bring to High Energy
Physics software development and data analysis.; Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, Ca, USA, March 2003, 8 pages, 8 Figures, LaTeX. PSN
As XML becomes ubiquitous and XML storage and processing becomes more
efficient, the range of use cases for these technologies widens daily. One
promising area is the integration of XML and data warehouses, where an
XML-native database stores multidimensional data and processes OLAP queries
written in the XQuery language. This paper explores issues
arising in the implementation of such a data warehouse. We first compare
approaches for multidimensional data modelling in XML, then describe how
typical OLAP queries on these models can be expressed in XQuery. We then show
how, regardless of the model, the grouping features of XQuery 1.1 improve
performance and readability of these queries. Finally, we evaluate the
performance of query evaluation in each modelling choice using the eXist
database, which we extended with a grouping clause implementation.
We study the complexity of query answering using views in a probabilistic XML
setting, identifying large classes of XPath queries -- with child and
descendant navigation and predicates -- for which there are efficient (PTime)
algorithms. We consider this problem under the two possible semantics for XML
query results: with persistent node identifiers and in their absence.
Accordingly, we consider rewritings that can exploit a single view, by means of
compensation, and rewritings that can use multiple views, by means of
intersection. Since in a probabilistic setting queries return answers with
probabilities, the problem of rewriting goes beyond the classic one of
retrieving XML answers from views. For both semantics of XML queries, we show
that, even when XML answers can be retrieved from views, their probabilities
may not be computable. For rewritings that use only compensation, we describe a
PTime decision procedure, based on easily verifiable criteria that distinguish
between the feasible cases -- when probabilistic XML results are computable --
and the unfeasible ones. For rewritings that can use multiple views, with
compensation and intersection, we identify the most permissive conditions that
make probabilistic rewriting feasible...
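Under a simple independent-edge probabilistic XML model, the probability that an answer node exists is the product of the probabilities on its root-to-node edges, as sketched below on invented data. The paper's contribution concerns when such probabilities remain computable from views alone; this direct computation on the document sidesteps that question.

```python
def answer_probability(edge_probs, path):
    """Probability that a node exists, assuming each edge of the
    probabilistic XML document is present independently with the given
    probability: multiply along the root-to-node path."""
    p = 1.0
    for edge in path:
        p *= edge_probs[edge]
    return p

# Hypothetical document edges with existence probabilities.
edge_probs = {("doc", "patient"): 0.9, ("patient", "diagnosis"): 0.5}
p = answer_probability(edge_probs, [("doc", "patient"), ("patient", "diagnosis")])
```

When answers are retrieved through a view, the view may hide part of this path, which is precisely why the probability can be non-computable even when the answer nodes themselves are.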
Querying over XML elements using keyword search is steadily gaining
popularity. The traditional similarity measure is widely employed in order to
effectively retrieve various XML documents. A number of authors have already
proposed different similarity-measure methods that take advantage of the
structure and content of XML documents. They do not, however, consider the
similarity between latent semantic information of element texts and that of
keywords in a query. Although many algorithms on XML element search are
available, some of them have high computational complexity due to searching
a huge number of elements. In this paper, we propose a new algorithm that makes
use of the semantic similarity between elements instead of between entire XML
documents, considering not only the structure and content of an XML document,
but also semantic information of namespaces in elements. We compare our
algorithm with three other algorithms by testing on real datasets. The
experiments have demonstrated that our proposed method is able to improve the
query accuracy, as well as to reduce the running time.; Comment: 9 pages
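The similarity measure at the core of such element retrieval can be illustrated with a bag-of-words cosine sketch between two element texts; the proposed algorithm additionally weighs document structure and namespace semantics, which this omits.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two element texts: the
    dot product of term-frequency vectors divided by their norms."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sim = cosine_similarity("xml keyword search", "keyword search engine")
```

Scoring individual elements rather than whole documents keeps the candidate texts short, which is one reason element-level similarity can cut both running time and noise in the ranking.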